\section{Introduction}
We propose a generalized version of spatial scan statistics called the \emph{kernel spatial scan statistic}.
In contrast to the many variants~\cite{AMPVW06, Kul97, huang2007spatial, jung2007spatial, neill2006bayesian, jung2010spatial, kulldorff2009scan, FNN17,HMWN18, SSMN16} of this classic method for geographic information sciences, the kernel version allows for modeling a gradual dampening of an anomalous event as a data point lies further from the epicenter. As we will see, this modeling change allows for more statistical power \emph{and} faster algorithms (independent of data set size), in addition to more realistic modeling.
\begin{figure}
\vspace{-1mm}
\includegraphics[width=\linewidth]{philly.pdf}
\vspace{-3mm}
\caption{\label{fig:Bernoulli-model}
An anomaly affecting $8\%$ of data (and rate parameters $p=.9$, $q=.5$) under the Bernoulli Kernel Spatial Scan Statistic model on geo-locations of crimes in Philadelphia.}
\vspace{-2mm}
\end{figure}
To review, spatial scan statistics consider a baseline or population data set $B$ where each point $x \in B$ has an annotated value $m(x)$. In the simplest case, this value is binary ($1$ = person has cancer; $0$ = person does not have cancer), and it is useful to define the measured set $M \subset B$ as $M = \{x \in B \mid m(x) = 1\}$.
Then the typical goal is to identify a region where there are significantly more measured points than one would expect from the baseline data $B$.
To prevent overfitting (e.g., gerrymandering), the typical formulation fixes a set of potential anomalous regions $\ensuremath{\Eu{R}}$ induced by a family of geometric shapes: disks~\cite{Kul97}, axis-aligned rectangles~\cite{NM04}, ellipses~\cite{Kul7.0}, halfspaces~\cite{MP18a}, and others~\cite{Kul7.0}.
Then given a statistical discrepancy function $\Phi(R) = \phi(R(M),R(B))$, where typically $R(M) = \frac{|R \cap M|}{|M|}$ and $R(B) = \frac{|R \cap B|}{|B|}$, the spatial scan statistic SSS is
\[
\Phi^* = \max_{R \in \ensuremath{\Eu{R}}} \Phi(R).
\]
And the hypothesized anomalous region is $R^* = \ensuremath{\mathrm{argmax}}_{R \in \ensuremath{\Eu{R}}} \Phi(R)$ so $\Phi^* = \Phi(R^*)$.
Conveniently, by choosing a fixed set of shapes, and having a fixed baseline set $B$, this combinatorially limits the set of all possible regions that can be considered (when computing the $\max$ operator); for instance, there are only $O(|B|^3)$ distinct disks, each of which contains a different subset of points. This allows for tractable~\cite{AMPVW06} (and in some cases very scalable~\cite{MP18b}) combinatorial and statistical algorithms which can (approximately) search over the class of \emph{all} shapes from that family.
Alternatively, the most popular software, SatScan~\cite{Kul7.0} uses a fixed center set of possible epicenters of events, for simpler and more scalable algorithms.
However, the discreteness of these models has a strange modeling side effect. Consider the shape model of disks $\ensuremath{\Eu{D}}$, where each disk $D \in \ensuremath{\Eu{D}}$ is defined $D = \{x \in \ensuremath{\mathbb{R}}^d \mid \|x-c\| \leq r\}$ by a center $c \in \ensuremath{\mathbb{R}}^d$ and a radius $r > 0$. Then solving a spatial scan statistic over this family $\ensuremath{\Eu{D}}$ would yield an anomaly defined by a disk $D$; that is, all points $x \in B \cap D$ are counted entirely inside the anomalous region, and all points $x' \in B \setminus D$ are considered entirely outside the anomalous region.
If this region is modeling a regional event; say the area around a potentially hazardous chemical leak suspected of causing cancer in nearby residents, then the hope is that the center $c$ identifies the location of the leak, and $r$ determines the radius of impact.
However, this implies that data points $x \in B$ very close to the epicenter $c$ are as likely to be affected as those a distance of almost but not quite $r$ away. And those data points $x' \in B$ that are slightly further than $r$ away from $c$ are not affected at all.
In reality, the data points closest to the epicenter should be more likely to be affected than those further away, even if they are within some radius $r$, and data points just beyond some radius should still have some, but a lessened effect as well.
\Paragraph{Introducing the Kernel Spatial Scan Statistic}
The main modeling change of the kernel spatial scan statistic (KSSS) is to prescribe these diminishing effects of spatial anomalies as data points become further from the epicenter of a proposed event. From a modeling perspective, given the way we described the problem above, the generalization is quite natural: we simply replace the shape class $\ensuremath{\Eu{R}}$ (e.g., the family of all disks $\ensuremath{\Eu{D}}$) with a class of non-binary continuous functions $\ensuremath{\Eu{K}}$. The most natural choice (which we focus on) are kernels, and in particular Gaussian kernels. We define each $K \in \ensuremath{\Eu{K}}$ by a center $c$ and a bandwidth $r$ as $K(x) = \exp(-\|x-c\|^2/r^2).$ This provides a real value $K(x) \in [0,1]$, in fact a probability, for each $x \in B$.
We interpret this as: given an anomaly model $K$, then for each $x \in B$ the value $K(x)$ is the probability that the rate of the measured event (chance that a person gets cancer) is increased.
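As a small, concrete illustration of this definition (not part of the formal model), the kernel values $K(x)$ over a baseline set can be evaluated directly; the function name and array layout below are our own.
\begin{verbatim}
import numpy as np

def gaussian_kernel(B, c, r):
    # K(x) = exp(-||x - c||^2 / r^2) for each row x of B
    # B: (n, d) array of baseline points; c: (d,) center; r: bandwidth > 0
    sq_dist = np.sum((B - c) ** 2, axis=1)
    return np.exp(-sq_dist / r ** 2)
\end{verbatim}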
\Paragraph{Related Work on SSS and Kernels} There have been many papers on computing various range spaces for SSS \cite{NM04,Tango2005,ACTW18, ICDMS2015}, where a geometric region defines the set of points included in the region, for regions such as disks, ellipses, rings, and rectangles. Other work has combined SSS and kernels as a way to penalize far away points, but still used binary regions, and only over a set of predefined starting points~\cite{DA04, DCTB2007, Patil2004}.
Another method~\cite{FNN17} uses a Kernel SVM boundary to define a region; this provides a regularized, but otherwise very flexible class of regions -- but they are still binary.
A third method~\cite{JRSA89} proposes an inhomogeneous Poisson process model for the spatial relationship between a measured cancer rate and exposure to a single specified region (e.g., from an industrial pollution source).
This models the measured rate similarly to our work, but does not search over a family of regions, and does not model a background rate.
\Paragraph{Our contributions, and their challenges}
We formally derive and discuss in more depth the KSSS in Section \ref{sec:model}, and contrast this with related work on SSS.
While the above intuition is (we believe) quite natural, and seems rather direct, a complication arises: the contributions of the data points towards the statistical discrepancy function (derived as a log-likelihood ratio) are no longer independent. This implies that $K(B)$ and $K(M)$ can no longer in general be scalar values (as they were with $R(B)$ and $R(M)$); instead we need to pass in sets. Moreover, this means that unlike with traditional binary ranges, the value of $\Phi$ no longer in general has a closed form; in particular, the optimal rate parameters in the alternative hypothesis do not have a closed form. We circumvent this by describing a simple convex cost function for the rate parameters. It turns out these can then be effectively solved for with a few steps of gradient descent for each potential choice of $K$ within the main scanning algorithm.
Our paper then focuses on the most intuitive Bernoulli model for how measured values are generated, but the procedures we derive apply similarly to the Poisson and Gaussian models we also derive. For instance, it turns out that the Gaussian model kernel statistical discrepancy function has a closed form.
The second major challenge is that there is no longer a combinatorial limit on the number of distinct ranges to consider. There is an infinite set of potential centers $c \in \ensuremath{\mathbb{R}}^d$ to consider, even with a fixed bandwidth, and each could correspond to a different $\Phi(K)$ value. However, there is a Lipschitz property on $\Phi(K)$ as a function of the choice of center $c$; that is, if we change $c$ to $c'$ by a small amount, then we can upper bound the change from $\Phi(K_c)$ to $\Phi(K_{c'})$ by a linear function of $\|c-c'\|$.
This implies that only a finite resolution of center points needs to be considered: we can lay down a fixed-resolution grid, and only consider those grid points. Notably: \emph{this property does not hold for the combinatorial SSS version}, as a direct effect of the problematic boundary issue of the binary ranges.
We combine the insight of this Lipschitz property, and the gradient descent to evaluate $\Phi(K_c)$ for a set of center points $c$, in a new algorithm \textsc{KernelGrid}. We next develop two improvements to this basic algorithm which make the grid adaptive in resolution and round the effect of points far from the epicenter; embodied in our algorithm \textsc{KernelFast}, these considerably increase the efficiency of computing the statistic (by $30$x) without significantly decreasing its accuracy (with provable guarantees).
Moreover, we create a coreset $B_\varepsilon$ of the full data set $B$, whose size is independent of the original size $|B|$ and which provably bounds the worst case error $\varepsilon$.
We empirically demonstrate the efficiency, scalability, and accuracy of these new KSSS algorithms. In particular, we show the KSSS has superior statistical power compared to traditional SSS algorithms, and exceeds the efficiency of even the heavily optimized version of those combinatorial Disk SSS algorithms.
\section{Derivation of the Kernel Spatial Scan Statistic}
\label{sec:model}
In this section we will provide a general definition for a spatial scan statistic, and then extend this to the kernel version.
It turns out, there are two reasonable variations of such statistics, which we call the continuous and binary settings.
In each case, we will then define the associated kernelized statistical discrepancy function $\Phi$ under the Bernoulli $\Phi^{\text{Be}}$, Poisson $\Phi^P$, and Gaussian $\Phi^G$ models. These settings are the same in the Bernoulli model, but different in the other two models.
\Paragraph{General derivation}
A spatial scan statistic considers a spatial data set $B \subset \ensuremath{\mathbb{R}}^d$, each data point $x \in B$ endowed with a measured value $m(x)$, and a family of measurement regions $\ensuremath{\Eu{R}}$. Each region $R \in \ensuremath{\Eu{R}}$ specifies the way a data point $x \in B$ is associated with the anomaly (e.g., affected or not affected).
Then given a statistical discrepancy function $\Phi$ which measures the anomalousness of a region $R$, the statistic is $\max_{R \in \ensuremath{\Eu{R}}} \Phi(R)$.
To complete the definition, we need to specify $\ensuremath{\Eu{R}}$ and $\Phi$, which, given the way we define $\ensuremath{\Eu{R}}$, turn out to be more intertwined than previously realized.
To define $\Phi$ we assume a statistical model for how the values $m(x)$ are realized, where data points $x$ affected by the anomaly have $m(x)$ generated at rate $p$ and those unaffected generated at rate $q$.
Then we can define a null hypothesis that a potential anomalous region $R$ has no effect on the rate parameters so $p=q$; and an alternative hypothesis that the region does have an effect and (w.l.o.g.) $p > q$.
For both the null and alternative hypothesis, and a region $R \in \ensuremath{\Eu{R}}$, we can then define a likelihood, denoted $L_0(q)$ and $L(p,q,R)$, respectively. The spatial scan statistic is then the log-likelihood ratio (LRT)
\[
\Phi(R) = \log\left( \frac{\max_{p,q} L(p,q,R)}{\max_q L_0(q)} \right) = \left(\max_{p,q} \log L(p,q,R)\right) - \left(\max_q \log L_0(q)\right).
\]
Now the main distinction with the kernel spatial scan statistic is that $\ensuremath{\Eu{R}}$ is specified with a family of kernels $\ensuremath{\Eu{K}}$ so that each $K \in \ensuremath{\Eu{K}}$ specifies a \emph{probability} $K(x)$ that $x \in B$ is affected by the anomaly. This is consistent with the traditional spatial scan statistic (e.g., with $\ensuremath{\Eu{R}}$ as disks $\ensuremath{\Eu{D}}$), where this probability was always $0$ or $1$. Now this probability can be any continuous value. Then we can express the mean rate/intensity $g(x)$ for each of the underlying distributions from which $m(x)$ is generated, as a function of $K(x)$, $p$, and $q$.
Two natural and distinct models arise for kernel spatial scan statistics.
\Paragraph{Continuous Setting}
In the \emph{continuous setting}, which will be our default model, we directly model the mean rate $g(x)$ as a convex combination between $p$ and $q$ as,
\[ g(x) = K(x)p + (1 - K(x))q. \]
Thus each $x$ (with nonzero $K(x)$ value) has a slightly different rate.
Consider a potentially hazardous chemical leak suspected of causing cancer, this model setting implies that the residents who live closer to the center ($K(x)$ is larger) of potential leak would be affected more (have elevated rate) compared to the residents who live farther away.
The kernel function $K(x)$ models a decay effect from a center, and smooths out the effect of distance.
\Paragraph{Binary Setting}
In the second setting, the \emph{binary setting}, the mean rate $g(x)$ is defined
\[
\breve g(x)= \begin{cases}
p & \text{ w.p. } K(x)
\\
q & \text{ w.p. }(1 - K(x)).
\end{cases}
\]
To clarify notation, each part of the model associated with this setting (e.g., $\breve g$) will carry a $\breve{}$\, to distinguish it from the continuous setting.
In this case, as with the traditional SSS, the rate parameter for each $x$ is either $p$ or $q$, and cannot take any other value. However, this rate assignment is not deterministic; the rate $p$ is assigned with probability $K(x)$, so points closer to the epicenter (larger $K(x)$) have a higher probability of being assigned the rate $p$. The rate $\breve g(x)$ for each $x$ is a mixture model with known mixture weight determined by $K(x)$.
\Paragraph{The null models}
We show below that the choice of model, binary setting vs.\ continuous setting, does not change the null hypothesis (i.e., $\ell_0 = \breve \ell_0$).
\subsection{Bernoulli}
Under the Bernoulli model, the measured value $m(x) \in \{0,1\}$, and these values are generated independently.
Consider a $1$ value indicating that someone is diagnosed with cancer. Then an anomalous region may be associated with a leaky chemical plant, where the residents near the plant have an elevated rate of cancer $p$, whereas the background population may have a lower rate $q$ of cancer. That is, the cancer occurs through natural mutation at a rate $q$, but if exposed to certain chemicals, there is another mechanism to get cancer that occurs at rate $p-q$ (for a total rate of $q + (p-q) = p$).
Under the binary setting, \emph{any} exposure to the chemical triggers this secondary mechanism, so the chance of exposure is modeled as proportional to $K(x)$, and the rate at $x$, $\breve g(x)$, is well modeled by the binary setting.
Alternatively, the rate of the secondary mechanism may increase as the amount of exposure to the chemical increases (those living closer are exposed to more chemicals), with rate $g(x)$ modeled in the continuous setting. These are both potentially the correct biological model, so we analyze both of them.
For this model we can define two subsets of $B$ as $M = \{x \in B \mid m(x) =1\}$ and $B \setminus M = \{x \in B \mid m(x) = 0\}$.
In either setting the null likelihood is defined
\[L_0^\textsf{Be}(q) = \breve L_0^\textsf{Be}(q) = \prod_{x \in M} q \prod_{x \in B \setminus M} (1 - q), \]
then
\[
\ell_0^\textsf{Be}(q) = \breve \ell_0^\textsf{Be}(q) = \sum_{x \in M} \log(q) + \sum_{x \in B \setminus M} \log(1-q),
\]
which is maximized over $q$ at $q = |M|/|B|$ as,
\[\ell_0^{\textsf{Be}^*} = \breve \ell_0^{\textsf{Be}^*} = |M| \log\frac{|M|}{|B|} + (|B| - |M|)\log(1 - \frac{|M|}{|B|}). \]
\Paragraph{The continuous setting}
We first derive the continuous setting $\Phi^\textsf{Be}$, starting with the likelihood under the alternative hypothesis.
This is a product over the measured and baseline points of their respective rates.
\[
L^\textsf{Be}(p,q,K)
=
\prod_{x \in M} g(x) \cdot \prod_{x \in B \setminus M} (1-g(x))
\]
and so
\begin{align*}
&\ell^\textsf{Be}(p,q,K)
\\&=
\log(L^\textsf{Be}(p,q,K))
=
\sum_{x \in M} \log g(x) + \sum_{x \in B \setminus M} \log (1-g(x))
\\ &=
\sum_{x \in M} \log(p K(x) + q (1-K(x))) + \sum_{x \in B \setminus M} \log(1-p K(x) - q(1-K(x))).
\end{align*}
Unfortunately, we know of no closed form for the maximum of $\ell^\textsf{Be}(p,q,K)$ over the choice of $p,q$ and therefore this form cannot be simplified further than
\[
{\Phi^\textsf{Be}}^*(K) = \max_{p,q} \ell^\textsf{Be}(p,q,K) - {\ell_0^\textsf{Be}}^*
\]
\Paragraph{The binary setting}
We continue deriving the binary setting $\breve{\Phi}^\textsf{Be}$, starting with
\[
\breve L^\textsf{Be}(p,q,K)
=
\prod_{x \in B} \left( p^{m(x)}(1-p)^{1-m(x)}K(x) + q^{m(x)}(1-q)^{1-m(x)}(1-K(x))\right),
\]
and so
\begin{align*}
\breve \ell^\textsf{Be}(p,q,K)
&=
\log(\breve L^\textsf{Be}(p,q,K))
\\ &=
\sum_{x \in B} \log \left( p^{m(x)}(1-p)^{1-m(x)}K(x) + q^{m(x)}(1-q)^{1-m(x)}(1-K(x)) \right).
\end{align*}
As in the continuous setting, there is no closed form for the maximum of $\breve \ell^\textsf{Be}(p,q,K)$ over choices of $p$ and $q$, so we write the form below.
\[
\breve{\Phi}^{\textsf{Be}^*}(K) = \max_{p,q} \breve \ell^\textsf{Be}(p,q,K) - {\ell_0^\textsf{Be}}^*
\]
\Paragraph{Equivalence}
It turns out, under the Bernoulli model, these two settings have equivalent statistics to optimize.
\begin{lemma}
The functions $\breve \ell^\textsf{Be}(p,q,K)$ and $\ell^\textsf{Be}(p,q,K)$ are exactly the same, hence $\breve \Phi^{\textsf{Be}^*} = {\Phi^\textsf{Be}}^*$, which implies that the Bernoulli models under the two settings are equivalent to each other.
\end{lemma}
\begin{proof}
We simply expand the binary setting as follows.
\begin{align*}
& \breve \ell^\textsf{Be}(p,q,K)
\\ &=
\sum_{x \in M} \log( p^{m(x)} (1 - p)^{1 - m(x)} K(x) + q^{m(x)} (1 - q)^{1 - m(x)} (1 - K(x)))
\\ &
+ \sum_{x \in B \setminus M} \log( p^{m(x)} (1 - p)^{1 - m(x)} K(x) + q^{m(x)} (1 - q)^{1 - m(x)} (1 - K(x)))
\\ & =
\sum_{x \in M} \log( p K(x) + q(1 - K(x))) + \sum_{x \in B\setminus M} \log(1 - (p-q)K(x) - q)
\\ & =
\sum_{x \in M} \log(g(x)) + \sum_{x \in B\setminus M} \log(1 - g(x))
\\ & =
\ell^\textsf{Be}(p,q,K)
\end{align*}
Since the $\breve \ell^\textsf{Be}(p,q,K) = \ell^\textsf{Be}(p,q,K)$ and $\breve \ell_0^{\textsf{Be}^*} = {\ell_0^\textsf{Be}}^*$, then $\breve \Phi^{\textsf{Be}^*}$ = ${\Phi^\textsf{Be}}^*$.
\end{proof}
\subsection{Gaussian}
The Gaussian model can be used to analyze spatial datasets with continuous values $m(x)$ (e.g., temperature, rainfall, or income), which we assume varies with a normal distribution with a fixed known standard deviation $\sigma$. Under this model, both the continuous and binary settings are again both well motivated.
Consider an insect infestation, as it affects agriculture. Here we assume fields of crops are measured at discrete locations $B$, and each has an observed yield rate $m(x)$, which under the null model varies normally around a value $q$. In the continuous setting, the yield rate at $x \in B$ is affected proportionally to $K(x)$, depending on how close it is to the epicenter. This may for instance model that fewer insects reach fields further from the epicenter, and the yield rate is affected relative to the number of insects that reach the field $x$.
In the binary setting, it may be that if insects reach a field, then they dramatically change the yield rate (e.g., they eat and propagate until almost all of the crops are eaten). In the latter scenario, the correct model is the binary one, with a mixture model of two rates governed by $K(x)$, the closeness to the epicenter.
In either setting the null likelihood is defined as,
\[L_0^\textsf{G}(q) = \breve L_0^\textsf{G}(q) = \prod_{x \in B} \exp(-\frac{(m(x) -q)^2}{2 \sigma^2}), \]
then
\[\ell_0^\textsf{G}(q) = \breve \ell_0^\textsf{G}(q) = \sum_{x \in B} -\frac{(m(x) -q)^2}{2 \sigma^2}, \]
which is maximized over $q$ at $q = \frac{\sum_{x \in B} m(x)}{|B|} = \hat m$ as,
\[\ell_0^{\textsf{G}^*} = \breve \ell_0^{\textsf{G}^*} = \frac{-1}{2 \sigma^2} \sum_{x \in B}(\hat m - m(x))^2. \]
\Paragraph{The continuous setting}
We first derive the continuous setting $\Phi^\textsf{G}$; terms free of $p$ and $q$ are treated as constants and ignored. We start with,
\[
L^\textsf{G}(p,q,K)
=
\prod_{x \in B} \exp(-\frac{(m(x) - g(x))^2}{2\sigma^2})
\]
and so
\begin{align*}
\ell^\textsf{G}(p,q,K)
&=
\log(L^\textsf{G}(p,q,K)) = \sum_{x \in B} -\frac{(m(x) - g(x))^2}{2\sigma^2} \\ &=
\sum_{x \in B} -\frac{(m(x) - pK(x) - q(1 - K(x)))^2}{2\sigma^2}.
\end{align*}
Fortunately, there is a closed form for the maximum of $\ell^\textsf{G}(p,q,K)$ over the choice of $p,q$, obtained by setting $\frac{\dir \ell^\textsf{G}(p,q,K)}{\dir p} = 0$ and $\frac{\dir \ell^\textsf{G}(p,q,K)}{\dir q} = 0$. Hence we obtain the closed-form solution of the Gaussian kernel statistical discrepancy, shown in the theorem below.
\begin{theorem}
\label{thm:GKSSS}
The Gaussian kernel statistical discrepancy function is
\[
\Phi^{\textsf{G}^*}(K) = \frac{1}{2\sigma^2}\left[\sum_{x\in B}(\bar m - m(x))^2 - \sum_{x \in B} (m(x) - \hat p K(x) - \hat q (1-K(x)))^2\right]
\]
where
$\bar m = \frac{1}{|B|} \sum_{x \in B} m(x) $,
$\hat p = \frac{K_{\pm}K_{-m} - K_m K_{-2}}{K_{\pm}^2 - K_2 K_{-2}}$,
$\hat q = \frac{K_m K_{\pm} - K_2 K_{-m}}{K_{\pm}^2 - K_2 K_{-2}}$, using the following terms
$K_m = \sum_{x \in B} K(x) m(x)$, \;
$K_2 = \sum_{x \in B} K(x)^2$, \;\;
$K_{\pm} = \sum_{x \in B} K(x) (1 - K(x))$,
$K_{-m} = \sum_{x \in B} m(x) (1 - K(x))$, and
$K_{-2} = \sum_{x \in B} (1 - K(x))^{2}$.
\end{theorem}
\begin{proof}
For the alternative hypothesis, the log-likelihood is
\[
\ell^\textsf{G}(p,q,K) = \frac{-1}{2\sigma^2} \sum_{x \in B} (m(x) - g(x))^2
\]
The optimal values of $p,q$ minimize
\[
-\ell^\textsf{G}(p,q,K)
=
\frac{1}{2\sigma^2} \sum_{x \in B} (m(x) - pK(x) - q(1-K(x)))^2.
\]
By setting the partial derivatives with respect to $p$ and $q$ of $-\ell^\textsf{G}(p,q,K)$ equal to $0$, we have,
\[
\frac{\dir \ell^\textsf{G}(p,q,K)}{\dir p}
= \sum_{x \in B} K(x)[m(x) - K(x)p - (1 - K(x))q] = 0,
\]
and
\[
\frac{\dir \ell^\textsf{G}(p,q,K)}{\dir q}
= \sum_{x \in B} (1 - K(x))[m(x) - K(x)p - (1 - K(x))q] = 0,
\]
and these two can be further simplified to,
\begin{align*}
\frac{\dir \ell^\textsf{G}(p,q,K)}{\dir p}
&=
\sum_{x \in B} K(x)m(x) - p\sum_{x \in B}K(x)^2 - q\sum_{x \in B}K(x)(1 - K(x))
\\&=
0,
\end{align*}
and
\begin{align*}
&\frac{\dir \ell^\textsf{G}(p,q,K)}{\dir q}
\\&= \sum_{x \in B} (1 - K(x))m(x) - p\sum_{x \in B}(1 - K(x))K(x) - q\sum_{x \in B}(1 - K(x))^2 \\&= 0.
\end{align*}
We replace these terms by notations defined in the theorem,
\[
K_m - pK_{2} - qK_{\pm} = 0,
\]
and
\[
K_{-m} - pK_{\pm} - qK_{-2} = 0.
\]
Then we can solve the optimum value of $p$ and $q$ as,
$p = \frac{K_{\pm}K_{-m} - K_m K_{-2}}{K_{\pm}^2 - K_2 K_{-2}} = \hat p$
and
$q = \frac{K_m K_{\pm} - K_2 K_{-m}}{K_{\pm}^2 - K_2 K_{-2}} = \hat q$.
Hence we have the closed form
\[
\ell^{\textsf{G}^*} = \max_{p,q} \ell^\textsf{G}(p,q,K) = \frac{-1}{2\sigma^2}\sum_{x \in B} (m(x) - \hat p K(x) - \hat q (1-K(x)))^2.
\]
\end{proof}
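The closed form in Theorem~\ref{thm:GKSSS} is straightforward to evaluate once $K$ has been computed on $B$. The following is a minimal sketch under our own naming conventions; it returns the discrepancy up to the positive factor $1/(2\sigma^2)$.
\begin{verbatim}
import numpy as np

def gaussian_kernel_discrepancy(K, m):
    # K: (n,) kernel values K(x); m: (n,) measured values m(x)
    K_m  = np.sum(K * m)
    K_2  = np.sum(K ** 2)
    K_pm = np.sum(K * (1 - K))
    K_nm = np.sum(m * (1 - K))
    K_n2 = np.sum((1 - K) ** 2)
    det  = K_pm ** 2 - K_2 * K_n2
    p_hat = (K_pm * K_nm - K_m * K_n2) / det
    q_hat = (K_m * K_pm - K_2 * K_nm) / det
    m_bar = np.mean(m)
    alt_residual  = np.sum((m - p_hat * K - q_hat * (1 - K)) ** 2)
    null_residual = np.sum((m_bar - m) ** 2)
    return null_residual - alt_residual
\end{verbatim}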
\Paragraph{The binary Setting}
Now we derive the binary setting $\breve{\Phi}^\textsf{G}$, starting with
\[
\breve L^\textsf{G}(p,q,K) = \prod_{x \in B} \left[\exp\left(-\frac{(m(x) - p)^2}{2 \sigma^2}\right)K(x) + \exp\left(-\frac{(m(x) - q)^2}{2 \sigma^2}\right)(1 - K(x))\right]
\]
and so,
\begin{align*}
&\breve \ell^\textsf{G}(p,q,K) \\&= \sum_{x \in B} \log(\exp(-\frac{(m(x) - p)^2}{2 \sigma^2})K(x) + \exp(-\frac{(m(x) - q)^2}{2 \sigma^2})(1 - K(x))).
\end{align*}
Unlike the continuous setting, we know of no closed form for the maximum of $\breve \ell^\textsf{G}(p,q,K)$ over $p$ and $q$. Hence,
\[
\breve{\Phi}^{\textsf{G}^*}(K) = \max_{p,q} \breve \ell^\textsf{G}(p,q,K) - {\ell_0^\textsf{G}}^*.
\]
\Paragraph{Equivalence}
Unlike the Bernoulli model, the Gaussian models under the binary setting and continuous setting are not equivalent to each other. Under the continuous setting,
\[ m(x) \sim \mathcal{N}(g(x),\sigma), \]
however, under the binary setting, each data point follows a two-component Gaussian mixture model whose mixture weights are given by $K(x)$ and $1 - K(x)$, and so it is no longer a Gaussian distribution.
\subsection{Poisson}
In the Poisson model the measured value $m(x)$ is a discrete non-negative count, $m(x) \in \mathbb{Z}_+$. This can for instance model the number of check-ins or comments $m(x)$ posted at each geo-located business $x \in B$ (this can be a proxy for instance for the number of customers). An event, e.g., a festival, protest, or other large impromptu gathering could be modeled spatially by a kernel $K$, and it affects the rates at each $x$ in the two different settings.
In the continuous setting, the closer a restaurant is to the center of the event (modeled by $K(x)$), the more its usual number of check-ins (modeled by $q$) will trend towards $p$.
On the other hand, in the binary setting, only certain businesses are affected (e.g., a coffee shop, but not a fancy dinner location), but if a business is affected, its rate is elevated all the way from $q$ to $p$. Perhaps advertising at a festival encouraged people to patronize certain restaurants, or a protest encouraged them to give bad reviews to certain nearby restaurants -- but not others. Hence, these two settings capture two different ways an event could affect Poisson check-in or comment rates.
In either setting, the null likelihood is defined as,
\[L_0^\textsf{P}(q) = \breve L_0^\textsf{P}(q) = \prod_{x \in B} \frac{e^{-q}q^{m(x)}}{m(x)!}, \]
then
\begin{align*}
\ell_0^\textsf{P}(q) = \breve \ell_0^\textsf{P}(q) &= \sum_{x \in B} \log(\frac{e^{-q}q^{m(x)}}{m(x)!})
\\& = \sum_{x \in B} -q + m(x)\log(q) - \log(m(x)!),
\end{align*}
which is maximized over $q$ at $q = \frac{\sum_{x \in B} m(x)}{|B|} = \hat m$ as,
\begin{align*}
\ell_0^{\textsf{P}^*}
=
\breve \ell_0^{\textsf{P}^*}
& =
\sum_{x \in B} -\hat m + m(x) \log(\hat m) - \log(m(x)!)
\\ &=
- |B| \hat m - \sum_{x \in B} \left(\log (m(x)!) - m(x) \log (\hat m)\right).
\end{align*}
\Paragraph{The continuous setting}
We first derive the continuous setting $\Phi^\textsf{P}$, starting with,
\begin{align*}
L^\textsf{P}(p,q,K)
&= \prod_{x \in B} \frac{e^{-g(x)} g(x)^{m(x)}}{m(x)!}
\\&= \prod_{x \in B} \frac{e^{-pK(x) -q(1- K(x))} (pK(x) + q(1-K(x)))^{m(x)}}{m(x)!},
\end{align*}
so
\begin{align*}
&\ell^\textsf{P}(p,q,K)
\\&= \sum_{x \in B} -g(x) + m(x)\log(g(x)) - \log(m(x)!)
\\&= \sum_{x \in B} -(pK(x) + q(1-K(x))) + m(x)\log(pK(x) + q(1-K(x))) - \log(m(x)!).
\end{align*}
There is no closed form for the maximum of $\ell^\textsf{P}(p,q,K)$ over the choice of $p$ and $q$ and hence
\[
\Phi^{\textsf{P}^*}(K) = \max_{p,q}\ell^\textsf{P}(p,q,K) - \ell_0^{\textsf{P}^*}.
\]
\Paragraph{The binary setting}
We continue deriving the binary setting $\breve \Phi^{\textsf{P}^*}$, starting with
\[
\breve L^\textsf{P}(p,q,K) = \prod_{x \in B} \left[\frac{e^{-q} q^{m(x)}}{m(x)!} (1 - K(x)) + \frac{e^{-p} p^{m(x)}}{m(x)!} K(x)\right],
\]
so
\[
\breve \ell^\textsf{P}(p,q,K) = \sum_{x \in B} \log\left(\frac{e^{-q} q^{m(x)}}{m(x)!} (1 - K(x)) + \frac{e^{-p} p^{m(x)}}{m(x)!} K(x) \right).
\]
As in the continuous setting, there is no closed form for the maximum of $\breve \ell^\textsf{P}(p,q,K)$ over $p$ and $q$, hence,
\[ \breve \Phi^{\textsf{P}^*}(K) = \max_{p,q} \breve \ell^\textsf{P}(p,q,K) - \breve \ell_0^{\textsf{P}^*}. \]
\Paragraph{Equivalence}
In the Poisson model the binary setting and the continuous setting are not equivalent to each other. Under the continuous setting,
\[
m(x) \sim \mathsf{Poi}(g(x)),
\]
however, under the binary setting, $m(x)$ follows a two-component Poisson mixture model with mixture weights $K(x)$ and $1 - K(x)$, which is no longer a Poisson distribution.
\section{Computing the Approximate KSSS}
\label{sec:algorithm}
The traditional SSS can combinatorially search over all disks~\cite{SSSS,MP18a,Kul97} to solve for or approximate $\max_{D \in \ensuremath{\Eu{D}}}\Phi(D)$, evaluating $\Phi(D)$ exactly. Our new KSSS algorithms will instead search over a grid of possible centers $c$, and approximate $\Phi(K_c)$ with gradient descent, yet will achieve the same sort of strong error guarantees as the combinatorial versions. Improvements will allow for adaptive gridding, pruning far points, and sampling.
\subsection{Approximating $\Phi$ with GD}
\label{sec:approx-log-lambda}
We cannot directly calculate $\Phi(K) = \max_{p,q} \Phi_{p,q}(K)$, since it does not have a closed form. Instead we run gradient descent $\mathsf{GradDesc}$ over $-\Phi_{p,q}(K_c)$ on $p,q$ for a fixed $c$. Since we have shown $-\Phi_{p,q}(K)$ is convex over $p,q$ this will converge, and since Lemma \ref{lem:lipshitz-pq} bounds its Lipschitz constant at $2 |B|$ it will converge quickly.
In particular, from starting points $p_0,q_0$, after $s$ steps we can bound
\[
|\Phi_{p^*, q^*}(K) - \Phi_{p_s, q_s}(K)| \le \frac{|B| \|(p^*,q^*) - (p_0,q_0)\| }{s},
\]
for the found rate parameters $p_s,q_s$. Since $0 \leq p,q \leq 1$, after $s_\varepsilon = |B|/\varepsilon$ steps we are guaranteed to have
$|\Phi_{p^*, q^*}(K) - \Phi_{p_{s_\varepsilon}, q_{s_\varepsilon}}(K)| \leq \varepsilon$.
We always initiate this procedure on $\Phi_{p,q}(K_c)$ with the $\hat p, \hat q$ found on a nearby $K_{c'}$, and as a result we found that running for $s = 3$ or $4$ steps is sufficient.
Each step of gradient descent takes $O(|B|)$ time to compute the gradient.
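A minimal sketch of this evaluation for the Bernoulli setting is shown below; the gradient is that of $\ell^\textsf{Be}(p,q,K)$ derived in Section~\ref{sec:model}, while the step size, projection onto $(0,1)$, and default starting point are our own simplifications rather than the exact choices in our implementation.
\begin{verbatim}
import numpy as np

def bernoulli_loglik(p, q, K, m):
    # ell^Be(p, q, K) in the continuous setting
    g = p * K + q * (1 - K)
    return np.sum(m * np.log(g) + (1 - m) * np.log(1 - g))

def grad_desc(K, m, p0=0.6, q0=0.4, steps=4, lr=0.1):
    # a few projected gradient (ascent) steps maximizing ell^Be over (p, q)
    p, q = p0, q0
    for _ in range(steps):
        g = p * K + q * (1 - K)
        dg = m / g - (1 - m) / (1 - g)       # d ell / d g(x)
        dp = np.sum(dg * K)                  # chain rule for p
        dq = np.sum(dg * (1 - K))            # chain rule for q
        p = np.clip(p + lr * dp / len(K), 1e-6, 1 - 1e-6)
        q = np.clip(q + lr * dq / len(K), 1e-6, 1 - 1e-6)
    return p, q, bernoulli_loglik(p, q, K, m)
\end{verbatim}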
\subsection{Gridding and Pruning}
\label{sec:algorithm-description}
Computing $\Phi(K)$ on every center in $\mathbb{R}^2$ is impossible, but Lemma \ref{lem:spatial-lipshitz} shows that if $\hat c$ is close to the true maximizer $c^*$, then $\Phi(K_{\hat c})$ approximates the true maximum.
From Lemma \ref{lem:spatial-lipshitz} we have $|\Phi(K_{\hat{c}}) - \Phi(K_{c^*})| \le \frac{1}{r} \sqrt{\frac{8}{e}} \|\hat{c} - c^*\|$. To get a bound of $|\Phi(K_{\hat{c}}) - \Phi(K_{c^*})| \le \varepsilon$ we need that $\|\hat{c} - c^*\|\frac{1}{r}\sqrt{\frac{8}{e}} \le \varepsilon$ or, rearranging, that $\|\hat{c} - c^*\| \le \varepsilon r \sqrt{\frac{e}{8}}$. Placing center points on a regular grid with side length $\tau_\varepsilon = \varepsilon r \sqrt{\frac{e}{4}}$ will ensure
that a center point is close enough to the true maximizer.
Assume that $B \subset \Omega_\Lambda \subset \ensuremath{\mathbb{R}}^2$, where w.l.o.g. we assume $\Omega_\Lambda = [0,L] \times [0,L]$, and $\Lambda = L/r$ is a unitless resolution parameter which represents the ratio of the domain size to the scale of the anomalies.
Next define a regular orthogonal grid $G_\varepsilon$ over $\Omega_\Lambda$ at resolution $\tau_\varepsilon$. This contains $|G_\varepsilon \cap \Omega_\Lambda| = \frac{\Lambda^2}{\varepsilon^2}\frac{4}{e}$ points.
We compute the scan statistic $\Phi(K_c)$ choosing as $c$ each point in $G_\varepsilon \cap \Omega_\Lambda$.
This algorithm, denoted \textsc{KernelGrid} and shown in Algorithm \ref{alg:D2}, can be seen to run in $O(|G_\varepsilon \cap \Omega_\Lambda| \cdot |B| s_\varepsilon ) = O(\frac{\Lambda^2}{\varepsilon^2} |B| s_\varepsilon)$ time, using $s_\varepsilon$ iterations of gradient descent (in practice $s_\varepsilon = 4$).
\begin{lemma}
\label{lem:kernel-grid}
$\textsc{\em KernelGrid}(B,\varepsilon,\Omega_\Lambda)$ returns $\Phi(K_{\hat c})$ for a center $\hat c$
so
$|\max_{K_c \in \ensuremath{\Eu{K}}_r} \Phi(K_c) - \Phi(K_{\hat c})| \le \varepsilon$
in time $O(\frac{\Lambda^2}{\varepsilon^2} |B| s_\varepsilon)$, which in the worst case is $O(\frac{\Lambda^2}{\varepsilon^3} |B|^2)$.
\end{lemma}
\begin{algorithm}
\caption{\textsc{KernelGrid}$(B, \varepsilon, \Omega_\Lambda)$}
\label{alg:D2}
\begin{algorithmic}
\STATE Initialize $\Phi = 0$; define $G_{\varepsilon,\Lambda} = G_\varepsilon \cap \Omega_\Lambda$
\FOR{$c \in G_{\varepsilon,\Lambda}$}
\STATE $\Phi_c = \mathsf{GradDesc}(\Phi_{p, q}(K_c))$ over $p,q$ on $B$
\STATE \textbf{if } ($\Phi_c > \Phi$) \textbf{ then } $\Phi = \Phi_c$
\ENDFOR
\STATE return $\Phi$
\end{algorithmic}
\end{algorithm}
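Algorithm~\ref{alg:D2} can be expressed compactly in code. The sketch below reuses the hypothetical \texttt{gaussian\_kernel} and \texttt{grad\_desc} helpers sketched earlier and adds a simple Bernoulli null log-likelihood; it is meant to convey the structure of the loop, not our optimized implementation.
\begin{verbatim}
import numpy as np

def null_loglik(m):
    # Bernoulli null log-likelihood ell_0^Be*, maximized at q = |M| / |B|
    q = np.mean(m)
    return np.sum(m) * np.log(q) + np.sum(1 - m) * np.log(1 - q)

def kernel_grid(B, m, r, eps, L):
    # scan a regular grid of centers over [0, L]^2 at resolution tau_eps
    tau = eps * r * np.sqrt(np.e / 4.0)      # side length from the Lipschitz bound
    coords = np.arange(0.0, L + tau, tau)
    best = 0.0
    for cx in coords:
        for cy in coords:
            K = gaussian_kernel(B, np.array([cx, cy]), r)
            p, q, ll = grad_desc(K, m)
            best = max(best, ll - null_loglik(m))   # Phi = ell - ell_0
    return best
\end{verbatim}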
\Paragraph{Adaptive Gridding}
We next adjust the grid resolution based on the density of $B$.
We partition the $\Omega_\Lambda$ domain with a coarse grid $H_\varepsilon$ with side length $2r_{max}$ (from Lemma \ref{lem:truncate}).
For a cell $\gamma \in H_\varepsilon$ in this grid, let $S_\gamma$ denote the $6 r_{\textrm{max}} \times 6 r_{\textrm{max}}$ region which expands the grid cell $\gamma$ by the length of one grid cell in either direction. For any center $c \in \gamma$, all points $x \in B$ which are within a distance of $2r_{\textrm{max}}$ from $c$ must be within $S_\gamma$. Hence, by Lemma \ref{lem:truncate} we can evaluate $\Phi(K'_c)$ for any $c \in \gamma$ by only inspecting $S_\gamma \cap B$.
Moreover, by the local density argument in Lemma \ref{lem:spatial-lipshitz-adaptive}, we can describe a new grid $G'_{\varepsilon,\gamma}$ inside of each $\gamma \in H_\varepsilon$ with center separation $\beta$ only depending on the local number of points $|S_\gamma \cap B|$. In particular we have for $c,c' \in \gamma$ with $\|c-c'\|=\beta$
\[
|\Phi(K'_c) - \Phi(K'_{c'})|
\le
\beta \frac{|S_\gamma \cap B|}{|B|}\frac{2 r_\text{max}}{r^2} + \varepsilon.
\]
To guarantee that all $c \in \gamma$ have another center $c' \in G'_{\varepsilon,\gamma}$ so that
$|\Phi(K_c) - \Phi(K'_{c'})| \leq 2\varepsilon$
we set the side length in $G'_{\varepsilon,\gamma}$ as
\[
\beta_\gamma = \varepsilon \frac{|B|}{|B \cap S_\gamma|} \frac{r^2}{2r_\textrm{max}},
\]
so the number of grid points in $G'_{\varepsilon,\gamma}$ is
\[
|G'_{\varepsilon,\gamma}|
=
\frac{4 r_\textrm{max}^2}{\beta_\gamma^2}
=
\frac{r^4_\text{max}}{r^4}
\frac{16}{\varepsilon^2} \frac{|B \cap S_\gamma|^2}{|B|^2}.
\]
Now define $G'_\varepsilon = \bigcup_{\gamma \in H_\varepsilon} G'_{\varepsilon,\gamma}$ as the union of these adaptively defined subgrids over each coarse grid cell. Its total size is
\begin{align*}
|G'_\varepsilon|
&=
\sum_{\gamma \in H_\varepsilon} |G'_{\varepsilon,\gamma}|
=
\sum_{\gamma \in H_\varepsilon} \frac{r^4_\text{max}}{r^4}
\frac{16}{\varepsilon^2} \frac{|B \cap S_\gamma|^2}{|B|^2}
\\ &=
\frac{r^4_\text{max}}{r^4}
\frac{16}{\varepsilon^2}\cdot \frac{1}{|B|^2}
\sum_{\gamma \in H_\varepsilon} |B \cap S_\gamma|^2
\\ &=
\log^2(|B|/\varepsilon)
\frac{16}{\varepsilon^2}\cdot \frac{1}{|B|^2}
\sum_{\gamma \in H_\varepsilon} |B \cap S_\gamma|^2
\\ &\leq
\log^2(|B|/\varepsilon)
\frac{1296}{\varepsilon^2},
\end{align*}
where the last inequality follows from Cauchy-Schwarz, and that each point $x \in B$ appears in $9$ cells $S_\gamma$. This replaces dependence on the domain size $\Lambda^2$ in the size of the grid $G_\varepsilon$, with a mere logarithmic $\log^2(|B|/\varepsilon)$ dependence on $|B|$ in $G'_\varepsilon$. We did not minimize constants, and in practice we use significantly smaller constants.
Moreover, since this holds for each $c \in \gamma$, and the same gridding mechanism is applied for each $\gamma \in H_\varepsilon$, this holds for all $c \in \Omega_\Lambda$.
We call the algorithm that extends Algorithm \ref{alg:D2} to use this grid $G'_\varepsilon$ in place of $G_\varepsilon$ \textsc{KernelAdaptive}.
\begin{lemma}
\label{lem:kernel-adaptive}
$\textsc{\em KernelAdaptive}(B,\varepsilon,\Omega_\Lambda)$ returns $\Phi(K_{\hat c})$ for a center $\hat c$
so
$|\max_{K_c \in \ensuremath{\Eu{K}}_r} \Phi(K_c) - \Phi(K_{\hat c})| \le \varepsilon$
in time $O(\frac{1}{\varepsilon^2} \log^2 \frac{|B|}{\varepsilon} |B| s_\varepsilon)$, which in the worst case is $O(\frac{1}{\varepsilon^3} |B|^2 \log^2 \frac{|B|}{\varepsilon})$.
\end{lemma}
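For reference, the adaptive cell resolution $\beta_\gamma$ is cheap to compute per coarse cell; a minimal sketch (with our own names, and a guard against empty cells) follows.
\begin{verbatim}
def adaptive_side_length(n_local, n_total, r, r_max, eps):
    # beta_gamma = eps * (|B| / |B cap S_gamma|) * r^2 / (2 r_max)
    # n_local = |B cap S_gamma|, n_total = |B|
    return eps * (n_total / max(n_local, 1)) * r ** 2 / (2.0 * r_max)
\end{verbatim}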
\Paragraph{Pruning}
For both gridding methods the runtime is roughly the number of centers times the time to compute the gradient $O(|B|)$. But via Lemma \ref{lem:truncate} we can ignore the contribution of far away points, and thus only need those in the gradient computation.
However, this provides no worst-case asymptotic improvements in runtime for \textsc{KernelGrid} or \textsc{KernelAdaptive}, since all of $B$ may reside in an $r_\text{max} \times r_\text{max}$ cell.
But in the practical setting we consider, this does provide a significant speedup as the data is usually spread over a large domain that is many times the size of $r_\text{max}$.
We will define two new methods \textsc{KernelPrune} and \textsc{KernelFast} (shown in Algorithm \ref{alg:Kfast}), where the former extends the \textsc{KernelGrid} method and the latter extends \textsc{KernelAdaptive}, both with pruning.
In particular, we note that the bounds in Lemma \ref{lem:kernel-adaptive} hold for \textsc{KernelFast}.
\begin{algorithm}
\caption{\textsc{KernelFast}$(B, \varepsilon, \Omega_\Lambda)$}
\label{alg:Kfast}
\begin{algorithmic}
\STATE Initialize $\Phi = 0$; define grid $H_{\varepsilon}$ over $\Omega_\Lambda$
\FOR{$\gamma \in H_\varepsilon$}
\STATE Define $G'_{\varepsilon, \gamma}$ adaptively on $\gamma$ and $S_\gamma \cap B$
\FOR{$c \in G'_{\varepsilon,\gamma}$}
\STATE $\Phi_c = \mathsf{GradDesc}(\Phi_{p, q}(K_c))$ over $p,q$ on \emph{pruned set} $B \cap S_\gamma$
\STATE \textbf{if } ($\Phi_c > \Phi$) \textbf{ then } $\Phi = \Phi_c$
\ENDFOR
\ENDFOR
\STATE return $\Phi$
\end{algorithmic}
\end{algorithm}
\vspace{-.1in}
\subsection{Sampling}
\label{sec:alg-sample}
We can dramatically improve runtimes on large data sets by sampling a coreset $B_\varepsilon$ iid from $B$, according to Lemma \ref{lem:sample-bound}. With probability $1-\delta$ we need $|B_\varepsilon| = O(\frac{1}{\varepsilon^2} \log^2 \frac{1}{\varepsilon} \log \frac{\kappa}{\delta})$ samples, where $\kappa$ is the number of center evaluations, and can be set to the grid size.
For \textsc{KernelGrid} $\kappa = |G_\varepsilon| = O(\frac{\Lambda^2}{\varepsilon^2})$ and for \textsc{KernelAdaptive} $\kappa = |G'_\varepsilon| = O(\frac{1}{\varepsilon^2} \log^2\frac{|B_\varepsilon|}{\varepsilon}) = O(\frac{1}{\varepsilon^2} \log^2\frac{1}{\varepsilon})$.
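A minimal sketch of drawing such a coreset is shown below; the leading constant in the sample size is our own choice and the sampling is iid with replacement.
\begin{verbatim}
import numpy as np

def sample_coreset(B, m, eps, delta, kappa, rng=None):
    # |B_eps| = O(eps^-2 * log^2(1/eps) * log(kappa/delta)); constant is ours
    rng = np.random.default_rng() if rng is None else rng
    n = int(np.ceil((1.0 / eps ** 2) * np.log(1.0 / eps) ** 2
                    * np.log(kappa / delta)))
    idx = rng.integers(0, len(B), size=min(n, len(B)))
    return B[idx], m[idx]
\end{verbatim}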
We restate the runtime bounds with sampling to show they are independent of $|B|$.
\begin{lemma}
\label{lem:kernel-gridding-sample}
$\textsc{\em KernelGrid}(B_\varepsilon,\varepsilon,\Omega_\Lambda)$ $\&$
$\textsc{\em KernelPrune}(B_\varepsilon,\varepsilon,\Omega_\Lambda)$
with sample size $|B_\varepsilon| = O(\frac{1}{\varepsilon^2} \log \frac{\Lambda}{\varepsilon \delta})$
returns $\Phi(K_{\hat c})$ for a center $\hat c$
so
$|\max_{K_c \in \ensuremath{\Eu{K}}_r} \Phi(K_c) - \Phi(K_{\hat c})| \le \varepsilon$
in time
$O(\frac{s_\varepsilon}{\varepsilon^4} \log \frac{\Lambda}{\varepsilon \delta})$,
with probability $1 - \delta$.
In the worst case the runtime is
$O(\frac{\Lambda^2}{\varepsilon^7} \log^2 \frac{\Lambda}{\varepsilon \delta})$.
\end{lemma}
\begin{lemma}
\label{lem:adaptive-gridding-sample}
$\textsc{\em KernelAdaptive}(B_\varepsilon,\varepsilon,\Omega_\Lambda)$ $\&$
$\textsc{\em KernelFast}(B_\varepsilon,\varepsilon,\Omega_\Lambda)$
with sample size $|B_\varepsilon| = O(\frac{1}{\varepsilon^2} \log \frac{1}{\varepsilon \delta})$
returns $\Phi(K_{\hat c})$ for a center $\hat c$
so
$|\max_{K_c \in \ensuremath{\Eu{K}}_r} \Phi(K_c) - \Phi(K_{\hat c})| \le \varepsilon$
in time
$O(\frac{s_\varepsilon}{\varepsilon^4} \log^3 \frac{1}{\varepsilon \delta})$,
with probability $1 - \delta$.
In the worst case the runtime is
$O(\frac{1}{\varepsilon^7} \log^4 \frac{1}{\varepsilon \delta})$.
\end{lemma}
\vspace{-.1in}
\subsection{Multiple Bandwidths}
\label{sec:alg-bandwidth}
We next show how to choose a sequence of bandwidths such that one of them is close to the $r$ used in any $K \in \ensuremath{\Eu{K}}$ (assuming some reasonable but large range on the values $r$), and then take the maximum over all of these experiments.
If the sequence of bandwidths $r_{0}, \ldots, r_{s}$ is such that $r_{i} - r_{i - 1} \le \frac{e\, r_{i-1}\varepsilon}{4}$ then $|\Phi(K_{r_{i}}) - \Phi(K_{r_{i-1}})| \le \varepsilon$.
\begin{lemma}
A geometrically increasing sequence of bandwidths $R_\text{min} = r_{0}, \ldots, R_\text{max} = r_s$ with $s = O(\frac{1}{\varepsilon}\log \frac{R_\text{max}}{R_\text{min}})$ is sufficient so $|\max_i \Phi(K_{r_i}) - \Phi(K_r)| \le \varepsilon$ for any bandwidth $r \in [R_\text{min}, R_\text{max}]$.
\end{lemma}
\begin{proof}
To guarantee an $\varepsilon$ error on a bandwidth $r$ there must be a nearby bandwidth $r_i$: if $|\Phi(K_{r_{i}}) - \Phi(K_{r_{i + 1}})| \le \varepsilon$ then also $|\Phi(K_{r_{i}}) - \Phi(K_{r})| \le \varepsilon$ for any $r \in [r_i, r_{i + 1}]$.
From Lemma \ref{lem:bandwidth} we can use the Lipshitz bound at $r_i$ to note that
$|\Phi(K_{r_{i}}) - \Phi(K_{r_{i + 1}})| \le \frac{4}{e r_i}(r_{i + 1} - r_i)$. Setting this less than $\varepsilon$ we can rearrange to get that $r_{i + 1} \le (\frac{e}{4}\varepsilon + 1) r_i$.
That is
$r_{0} (\frac{\varepsilon e}{4} + 1)^{s} \ge r_{s}$,
which can be rearranged to get
$s = \frac{\log(\frac{R_\text{max}}{R_\text{min}})}{\log(\frac{\varepsilon e}{4} + 1)}$.
Since $\log(x + 1) \ge \frac{x}{2}$ when $x$ is in $(0,1)$, we have
$s \le \frac{8}{\varepsilon e}\log \frac{R_\text{max}}{R_\text{min}}$.
\end{proof}
Running our KSSS over a large sequence of bandwidths is simple and merely increases the runtime by a $O(\frac{1}{\varepsilon}\log \frac{r_s}{r_0})$ factor. Our experiments in Section \ref{sec:bandwidth} suggest that choosing $4$ to $6$ bandwidths should be sufficient (e.g., for scales $R_\text{max}/R_\text{min} = 1{,}000$).
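A minimal sketch of generating this geometric bandwidth sequence (names are ours):
\begin{verbatim}
import numpy as np

def bandwidth_sequence(r_min, r_max, eps):
    # geometrically increasing bandwidths with ratio (1 + e * eps / 4)
    ratio = 1.0 + np.e * eps / 4.0
    rs = [r_min]
    while rs[-1] < r_max:
        rs.append(rs[-1] * ratio)
    return rs
\end{verbatim}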
\begin{figure*}[h!]
\includegraphics[width=.3\linewidth]{kernel_l2_bars.pdf}
\includegraphics[width=.3\linewidth]{kernel_discrepancy_bars.pdf}
\includegraphics[width=.3\linewidth]{kernel_jc_bars.pdf}
\vspace{-5mm}
\caption{\label{fig:measure1-kernel}
Statistical power of the new KSSS algorithms as a function of sample size, measured using $\|c-\hat c\|$, $\Phi(K_{\hat c})$, and $\mathsf{JS}(K_c, K_{\hat c})$. }
\end{figure*}
\section{Experiments}
\label{sec:experiments}
We compare our new KSSS algorithms to the state-of-the-art methods in terms of empirical efficiency, statistical power, and sample complexity, on large spatial data sets with planted anomalies.
\Paragraph{Data sets}
We run experiments on two large spatial data sets recording incidents of crime; these are used to represent the baseline data $B$.
The first contains geo-locations of all crimes in Philadelphia from 2006-2015, and has a total size of $|B| = 687{,}636$; a subsample is shown in Figure \ref{fig:Bernoulli-model}.
The second is the well-known Chicago Crime Dataset from 2001-2017, and has a total size of $|B| = 6{,}886{,}676$, which is roughly $10\times$ the size of the Philadelphia set.
In modeling crime hot spots, these may often be associated with an individual or group of individuals who live at a fixed location. Then the crimes they may commit would often be centered at that location, and be more likely to happen nearby, and less likely further away. A natural way to model this decaying crime likelihood would be with a Gaussian kernel --- as opposed to a uniform probability within a fixed radius, and no increased probability outside that zone. Hence, our KSSS is a good model to potentially detect such spatial anomalies.
\Paragraph{Planting anomalous regions}
To conduct controlled experiments, we use the spatial data sets $B$ above, but choose the $m$ values in a synthetic way. In particular, we \emph{plant} anomalous regions $K_c \in \ensuremath{\Eu{K}}_r$, and then each data point $x \in B$ is assigned to a group $P$ (with probability $K(x)$) or $Q$ (otherwise). Those $x \in P$ will be assigned $m(x)$ through a Bernoulli process at rate $p$, that is $m(x) =1$ with probability $p$ and $0$ otherwise; those $x \in Q$ are assigned $m(x)$ at
rate $q$. Given a region $K_c$, this could model a pattern of crimes (those with $m(x) = 1$ may be all vehicle theft or have a suspect matching a description), where $c$ may represent the epicenter of the targeted pattern of crime. We use $p=0.8$ and $q=0.5$.
We repeat this experiment with $20$ planted regions and plot the median on the Philadelphia data set to compare our new algorithms and to compare sample complexity properties against existing algorithms. We use $3$ planted regions on the Chicago data set to compare scalability (these take considerably longer to run). We attempt to fix the size of $P$ so that $|P| = f |B|$ by adjusting the fixed and known bandwidth parameter $r$ for each planted region. We set $f=0.03$ for Philadelphia and $f=0.01$ for Chicago, so the planted region contains a fairly small fraction, about $3\%$ or $1\%$, of the data.
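A minimal sketch of this planting procedure, reusing the hypothetical \texttt{gaussian\_kernel} helper from earlier; the random-number handling is our own.
\begin{verbatim}
import numpy as np

def plant_region(B, c, r, p=0.8, q=0.5, rng=None):
    # assign each x in B to group P with probability K(x), then draw m(x)
    rng = np.random.default_rng() if rng is None else rng
    K = gaussian_kernel(B, c, r)
    in_P = rng.random(len(B)) < K             # group membership
    rates = np.where(in_P, p, q)              # Bernoulli rate per point
    m = (rng.random(len(B)) < rates).astype(int)
    return m
\end{verbatim}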
\Paragraph{Evaluating the models}
A statistical power test plants an anomalous region (for instance as described above), and then determines how often an algorithm can recover that region; it measures recall.
However, all considered algorithms typically do not recover the exact same region as the one planted, so we measure how close to the planted region $K_c$ the recovered one $K_{\hat c}$ is. To do so we measure:
\begin{itemize}
\item distance between the planted and recovered centers $\|c-\hat c\|$; smaller is better.
\item $\Phi(K_{\hat c})$; the larger the better (it may be larger than $\Phi(K_c)$).
\item the extended Jaccard similarity $\mathsf{JS}(K_c, K_{\hat c})$ defined
\[
\mathsf{JS}(K,\hat K) = \frac{\langle K(x), \hat K(x) \rangle_B}{\langle K(x), K(x) \rangle_B + \langle \hat K(x), \hat K(x) \rangle_B - \langle K(x), \hat K(x) \rangle_B}
\]
where $\langle K, \hat K \rangle_B = \sum_{x \in B} K(x) \hat K(x)$; larger is better. A small computational sketch of this similarity follows the list.
\end{itemize}
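A minimal sketch of the extended Jaccard similarity above, assuming both kernels have been evaluated on the same baseline $B$ (names are ours):
\begin{verbatim}
import numpy as np

def extended_jaccard(K, K_hat):
    # JS(K, K_hat) with <K, K'>_B = sum_x K(x) K'(x)
    kk  = np.dot(K, K_hat)
    k2  = np.dot(K, K)
    kh2 = np.dot(K_hat, K_hat)
    return kk / (k2 + kh2 - kk)
\end{verbatim}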
We plot medians over $20$ trials; the targeted and hence measured values have variance because a planted region may not be the optimal region, since the $m(x)$ values are generated under a random process.
When we cannot control the $x$-value (when using time) we plot a kernel smoothing over different parameters on $3$ trials.
\begin{figure}
\vspace{-2mm}
\includegraphics[width=.8\linewidth]{kernel_time_disc_bars.pdf}
\vspace{-5mm}
\caption{\label{fig:time-kernel}
Runtime of the new KSSS algorithms as a function of sample size.}
\end{figure}
\begin{figure*}
\vspace{-2mm}
\includegraphics[width=.3\linewidth]{size_l2.pdf}
\includegraphics[width=.3\linewidth]{size_discrepancy.pdf}
\includegraphics[width=.3\linewidth]{size_jc.pdf}
\vspace{-5mm}
\caption{\label{fig:comparison}
New KSSS vs. Disk SSS algorithms: statistical power as a function of sample size, using $\|c-\hat c\|$, $\Phi(K_{\hat c})$, and $\mathsf{JS}(K_c, K_{\hat c})$. }
\end{figure*}
\begin{figure*}
\includegraphics[width=.3\linewidth]{time_distance.pdf}
\includegraphics[width=.3\linewidth]{time_disc.pdf}
\includegraphics[width=.3\linewidth]{size_time_jc.pdf}
\vspace{-5mm}
\caption{\label{fig:power-time}
Accuracy measures as a function of runtime using $\|c-\hat c\|$, $\Phi(K_{\hat c})$, and $\mathsf{JS}(K_c, K_{\hat c})$.}
\end{figure*}
\subsection{Comparing New KSSS Algorithms}
We first compare the new KSSS algorithms against each other, as we increase the sample size $|B_\varepsilon|$ and the corresponding gridding and pruning parameters to match the expected error $\varepsilon$ from sample size $|B_\varepsilon|$ as dictated in Section \ref{sec:algorithm-description}.
We observe in Figure \ref{fig:measure1-kernel} that all of the new KSSS algorithms achieve high power at about the same rate. In particular, when the sample size reaches about $|B_\varepsilon| = 1{,}000$, they have all plateaued near their best values, with large power: the center distance is close to $0$, $\Phi(K_{\hat c})$ near maximum, and $\mathsf{JS}(K_c, K_{\hat c})$ almost $0.9$.
At medium sample sizes $|B_\varepsilon| = 500$, \textsc{KernelAdaptive} and \textsc{KernelFast} have worse accuracy, yet reach maximum power around the same sample size -- so for very small sample size, we recommend \textsc{KernelPrune}.
In Figure \ref{fig:time-kernel} we see that the improvements from \textsc{KernelGrid} up to \textsc{KernelFast} are tremendous; a speed-up of roughly $20$x to $30$x improvement. By considering \textsc{KernelPrune} and \textsc{KernelAdaptive} we see most of the improvement comes from the adaptive gridding, but the pruning is also important, itself adding $2$x to $3$x speed up.
\vspace{-1mm}
\subsection{Power vs. Sample Size}
We next compare our KSSS algorithms against existing, standard Disk SSS algorithms. As a comparison, we first consider a fast reimplementation of SatScan~\cite{Kul97,Kul7.0} in C++. To make the comparison even, we consider the exact same center set (defined on grid $G_\varepsilon$) for potential epicenters, and consider all possible radii of disks.
Second, we compare against a heavily-optimized \textsc{DiskScan} algorithm~\cite{MP18b} for disks, which chooses a very small ``net'' of points to combinatorially reduce the set of disks scanned, but still guarantees $\varepsilon$-accuracy (in some sense similar to our adaptive approaches). For these algorithms we maximize Kulldorff's Bernoulli likelihood function~\cite{Kul97}, whose $\log$ has a closed form over binary ranges $D \in \ensuremath{\Eu{D}}$.
Figure \ref{fig:comparison} shows the power versus sample size (representing how many data points are available), using the same metrics as before. The KSSS algorithms consistently perform significantly better -- to see this, consider a fixed $y$ value in each plot.
For instance the KSSS algorithms reach $\|c-\hat c\| < 0.05$, $\Phi(K_{\hat c}) > 0.003$ and $\mathsf{JS}(K_c,K_{\hat c}) > 0.8$ after about 1000 data samples, whereas it takes the Disk SSS algorithms about 2500 data samples.
\vspace{-1mm}
\subsection{Power vs. Time}
We next measure the power as a function of the runtime of the algorithms, again the new KSSS algorithms versus the traditional Disk SSS algorithms. We increase the sample size $|B_\varepsilon|$ as before, now from the Chicago dataset, and adjust other error parameters in accordance to match the theoretical error.
Figure \ref{fig:power-time} shows \textsc{KernelFast} significantly outperforms \textsc{SatScan} and \textsc{DiskScan} in these measures in orders of magnitude less time.
It reaches a small distance to the planted center faster (10 seconds vs.\ 1000 or more seconds). In 5 seconds it achieves $\Phi^*$ of 0.0006, and 0.00075 in 100 seconds; whereas in 1000 seconds the Disk SSS only reaches 0.0004.
Similarly for Jaccard similarity, \textsc{KernelFast} reaches 0.8 in 5 seconds, and 0.95 in 100 seconds; whereas in 1000 seconds the Disk SSS algorithms only reach 0.5.
\begin{figure}
\includegraphics[width=.49\linewidth]{bandwidth_jc.pdf}
\includegraphics[width=.49\linewidth]{bandwidth_discrepancy.pdf}
\vspace{-5mm}
\caption{\label{fig:bandwidth-sens}
Accuracy on bandwidth $r$ of planted region.}
\end{figure}
\subsection{Sensitivity to Bandwidth}
\label{sec:bandwidth}
So far we have chosen $r$ to be the bandwidth of the planted anomaly $K_c \in \ensuremath{\Eu{K}}_r$ (this is natural if we know the nature of the event). But if the true anomaly bandwidth is not known, or only known within some range, then our method should be insensitive to this parameter.
On the Philadelphia dataset we consider $30$ geometrically increasing bandwidths, scaled so that for the original bandwidth $r$ we consider $r s$ where $s \in [10^{-2}, 10]$.
In Figure \ref{fig:bandwidth-sens} we show the accuracy using the Jaccard similarity and the $\Phi$-value found, over $20$ trials. Our KSSS algorithms are effective at fitting events with $s \in [0.5, 2]$, indicating quite a bit of leeway in which $r$ to use. That is, \emph{the sample complexity would not change}, but the time complexity may increase by a factor of only $2$x - $5$x if we also search over a range of $r$.
\section{Conclusion}
In this work, we generalized the spatial scan statistic so that ranges can be more flexible in their boundary conditions. In particular, this allows the anomalous regions to be defined by a kernel, so the anomaly is most intense at an epicenter, and its effect decays gradually moving away from that center. However, given this new definition, it is no longer possible to define and reason about a finite number of combinatorially defined anomalous ranges. Moreover, the derived log-likelihood ratio tests do not have closed-form solutions, and as a result we develop new algorithmic techniques to deal with these two issues. These new algorithms are guaranteed to approximately detect the kernel range which maximizes the new discrepancy function up to any error precision, and the runtime depends only on the error parameter.
We also conducted controlled experiments on planted anomalies which conclusively demonstrated that our new algorithms can detect regions with few samples and in less time than the traditional disk-based combinatorial algorithms made popular by the SatScan software. That is, we show that the newly proposed Kernel Spatial Scan Statistics theoretically and experimentally outperform the existing Spatial Scan Statistic methods.
\vspace{-2mm}
\bibliographystyle{abbrv}
\section{Introduction} \label{sec:Introduction}
\indent The Warm Absorber (WA) is an optically thin ionized medium, first proposed by Halpern (\cite{Halpern}) in order to explain the shape of the X-ray spectrum of the \object{QSO MR2251-178} observed with the Einstein Observatory.
The main signatures of this medium are the two high-ionization oxygen absorption edges, \ion{O}{vii} and \ion{O}{viii} at 0.74 keV and 0.87 keV respectively, seen in about fifty percent of Seyfert 1 galaxies (Nandra $\&$ Pounds \cite{Nandra}; Reynolds 1997, hereafter referred to as \cite{Reynolds}).\\
\indent Mihara et al. (\cite{Mihara}) found with ASCA observations of \object{NGC 4051} that the absorption edges of \ion{O}{vii} and \ion{O}{viii} may be blueshifted by about 3$\%$. This could be due to an outflow velocity of $\sim$ 10\,000 km.s$^{-1}$.\\
\indent According to Netzer (\cite{Netzer93}), an emission line spectrum from the WA should also be observed. Indeed, an \ion{O}{vii} line (0.57 keV) was detected in \object{NGC 3783} (George et al. \cite{George}). Other Seyferts may also show oxygen emission lines: \object{MCG-6-30-15} (\ion{O}{vii}-\ion{O}{viii}: Otani et al. 1996 but the authors mentioned that those features could have an instrumental origin), and \object{1E 1615+061} (\ion{O}{vii}-\ion{O}{viii}: Piro et al. \cite{Piro}). The WA may also contribute to the emission of the UV lines (Shields et al. \cite{Shields}, Netzer \cite{Netzer96}), like \ion{Ne}{viii} 774\AA$~$ (Hamann et al. {\cite{Hamann}) and \ion{O}{vi} 1034\AA$~$ which are also produced in highly ionized regions.\\
\indent The WA is generally thought to be a photoionized medium which lies on the line of sight of the ionizing X-ray source. But the possibility of collisional ionization is not ruled out, and it is therefore also investigated in this article.\\
\indent In Seyfert 1 spectra, coronal lines are also observed. They are fine structure transitions in the ground level of highly ionized ions which have threshold energies above 100 eV.
According to Penston et al. (\cite{Penston}), [\ion{Fe}{x}] 6375\AA$~$ is found in half of Seyfert objects (with no preference between type-1 and type-2 Seyferts). From their Table 4, [\ion{Fe}{xi}] 7892\AA$~$ is detected in 6 of their 19 Seyfert 1s. In the sample used by Erkens et al. (1997) (hereafter referred to as \cite{Erkens}), the mean widths of the forbidden high ionization lines (FHILs) are intermediate between those of the Broad Line Region (BLR) and the Narrow Line Region (NLR), but in some cases the wings of the FHILs could be comparable to or broader than the BLR profiles. The FHIL region seems therefore to be located near the BLR, or between the BLR and the NLR. According to \cite{Erkens}, it is located outside the BLR if the Unified Scheme is correct, since broad FHILs are also found in Seyfert 2 spectra.
\cite{Erkens} confirmed that FHILs like [\ion{Fe}{x}] 6375\AA, [\ion{Fe}{xi}] 7892\AA$~$ and [\ion{Fe}{xiv}] 5303\AA$~$ are on average broader and more blueshifted than lower ionization lines like [\ion{Ne}{v}] 3426\AA$~$ and [\ion{Fe}{vii}] 6087\AA. The line widths and blueshifts are correlated with the ionization potential; in other words, the more ionized species have the larger outflowing velocities.\\
\indent Therefore, the WA and the high-ionization coronal line region seem to have common characteristics: a high ionization state, a location between the NLR and the BLR, and an outflowing gas.\\
This leads us to discuss the possibility that the coronal lines and the WA features could be produced in the same medium. We shall study the constraints that the coronal line intensities impose on the WA. Preliminary results have been already presented by Porquet $\&$ Dumont (\cite{Porquet}).\\
\indent In Sect.~\ref{sec:Observation}, we report the observational data relative to the coronal lines, to the optical depths of \ion{O}{vii} and \ion{O}{viii}, and to the UV resonance emission lines. In Sect.~\ref{sec:Calculation}, we discuss previous models and describe our computations. The results of a pure photoionized model and of an hybrid model (photoionized medium out of thermal equilibrium) for two shapes of incident continua are given in Sect.~\ref{sec:results} and compared to the mean observed Seyfert 1 features. The particular case of \object{MCG-6-30-15} is treated in Sect.~\ref{sec:mcg6-30-15}. In Sect.~\ref{sec:Conclusion}, we discuss some implications of the results.\\
\indent Throughout this article, we assume H$_{o}$=50 km\,s$^{-1}$\,
Mpc$^{-1}$ and q$_{o}$=0.
\section{Observational data} \label{sec:Observation}
In Figures \ref{f1} to \ref{f5}, error bars are reported when available.
\subsection{Optical depths of Oxygen} \label{sec:tau}
\begin{figure}
\resizebox{7.75cm}{!}{\includegraphics{porquet_1.eps}}
\caption{$\tau_{\ion{O}{viii}}$ versus $\tau_{\ion{O}{vii}}$ for 20 Seyfert 1s taken from Reynolds (\cite{Reynolds97}). {\it Filled circle}: real value, {\it triangle left}: upper limit for $\tau_{\ion{O}{vii}}$, {\it triangle down}: upper limit for $\tau_{\ion{O}{viii}}$ and {\it diamond}: upper limit for both values.}
\label{f1}
\end{figure}
\begin{figure}
\resizebox{7.75cm}{!}{\includegraphics{porquet_2.eps}}
\caption{Observed dereddened luminosity of [\ion{Fe}{x}] (erg.s$^{-1}$) versus the soft X-ray luminosity (erg.s$^{-1}$). {\it Filled triangle down}: upper limit for [\ion{Fe}{x}] and {\it filled circle}: real value. [\ion{Fe}{x}] data are from Reynolds et al. (\cite{Reynolds97}) (\object{MCG-6-30-15}), Penston et al. (\cite{Penston}) (\object{NGC 7469}, \object{NGC 4051}, \object{NGC 5548}, \object{NGC 3516}, \object{Mrk 335}, \object{Mrk 509}, \object{IC 4329A}, \object{Fairall 9}, \object{Mrk 79}, \object{ESO G141-55}, \object{NGC 6814}, \object{IZw1}, \object{Mrk 618}, \object{Mrk 926}), \cite{Erkens} (\object{Mrk 9}, \object{Mrk 704}, \object{Mrk 1239}, \object{Akn 120}, \object{Akn 564}), Morris $\&$ Ward (\cite{Morris}) (\object{NGC 4593}, \object{Mrk 1347}, \object{Fairall 51}, \object{UGC 10683B}, but line has not been corrected for blending with [\ion{O}{i}] 6364\AA).
The soft X-ray luminosities are from Rush et al. (\cite{Rush}) (\object{MCG-6-30-15}, \object{NGC 7469}, \object{NGC 4051}, \object{NGC 5548}, \object{NGC 3516}, \object{Mrk 335}, \object{Mrk 509}, \object{NGC 4593}, \object{IC 4329A}, \object{Mrk 704}, \object{Mrk 79}, \object{Mrk 1239}, \object{ESO G141-55}, \object{IZw1}, \object{Mrk 618}), Schartel et al. (\cite{Schartel}) (\object{Fairall 9}, \object{Mrk 926}), Boller et al. (\cite{Boller}) (\object{Mrk 9}, \object{Akn 120}, \object{Akn 564}, \object{NGC 6814}, \object{Mrk 1347}, \object{Fairall 51}, \object{UGC 10683B}).}
\label{f2}
\end{figure}
\indent All the optical depth values at the \ion{O}{vii} edge ($\tau_{\ion{O}{vii}}$) and \ion{O}{viii} edge ($\tau_{\ion{O}{viii}}$) for Seyfert 1s are taken from Reynolds (\cite{Reynolds97}, hereafter R97), in order to have a homogeneous measurement sample. They have been derived using the same type of fit for all spectra. Figure \ref{f1} displays $\tau_{\ion{O}{vii}}$ versus $\tau_{\ion{O}{viii}}$. There is a good correlation between the two parameters and $\tau_{\ion{O}{vii}}$ seems to be almost always greater than $\tau_{\ion{O}{viii}}$. In some cases, variability of the optical depths is observed (as for example \object{MCG-6-30-15}: Reynolds et al. \cite{Reynolds95} and \object{NGC 4051}: Guainazzi et al. \cite{Guainazzi}) and their position in this graph will change with respect to the R97 values.
\indent In order to take into account the upper limits, we have calculated the mean value for the 20 objects in R97, using the Kaplan-Meier estimator with ASURV Rev 1.2 (Lavalley et al. \cite{Lavalley}), which implements the method presented in Feigelson $\&$ Nelson (\cite{Feigelson}). We obtain $\tau_{\ion{O}{vii}}$=0.33$\pm$0.07 and $\tau_{\ion{O}{viii}}$=0.20$\pm$0.07.
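\indent For reference, the Kaplan-Meier mean of a sample containing upper limits can be sketched as follows (a minimal Python/numpy illustration of the product-limit estimator, ignoring ties and error estimation; the input values are made up, and the full ASURV treatment should of course be preferred for actual measurements):
\begin{verbatim}
import numpy as np

def kaplan_meier_mean(values, detected):
    # Mean of a sample with upper limits (left-censored data): flip the data
    # around a constant M so that upper limits become right-censored points,
    # apply the product-limit (Kaplan-Meier) estimator, and flip back.
    values = np.asarray(values, dtype=float)
    detected = np.asarray(detected, dtype=bool)
    M = values.max() + 1.0
    t = M - values
    order = np.argsort(t)
    t, d = t[order], detected[order]
    n = len(t)
    at_risk = n - np.arange(n)                  # points still "at risk"
    surv = np.cumprod(np.where(d, (at_risk - 1.0) / at_risk, 1.0))
    times = np.concatenate(([0.0], t))
    s_prev = np.concatenate(([1.0], surv[:-1]))
    mean_flipped = np.sum(s_prev * np.diff(times))   # area under the curve
    return M - mean_flipped

# illustrative (made-up) optical depths; False marks an upper limit
print(kaplan_meier_mean([0.4, 0.1, 0.6, 0.2, 0.3],
                        [True, False, True, True, False]))
\end{verbatim}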
\subsection{Coronal lines}\label{sec:rc}
\begin{figure}
\resizebox{7.75cm}{!}{\includegraphics{porquet_3.eps}}
\caption{Observed dereddened [\ion{Fe}{x}] luminosity (erg.s$^{-1}$) versus $\tau_{\ion{O}{vii}}$ and $\tau_{\ion{O}{viii}}$. {\it Filled symbols}: $\tau_{\ion{O}{vii}}$ and {\it open symbols}: $\tau_{\ion{O}{viii}}$; {\it Triangle down}: upper limit for the Y-axis value (here the [\ion{Fe}{x}] luminosity), {\it triangle left}: upper limit for the X-axis value and {\it diamond}: upper limits for both X and Y axis values.}
\label{f3}
\end{figure}
\begin{figure}
\resizebox{7.65cm}{!}{\includegraphics{porquet_4.eps}}
\caption{Equivalent widths (in \AA) of [\ion{Fe}{x}] versus the optical depths of \ion{O}{vii} and \ion{O}{viii}. Same legend as Fig.~\ref{f3}.}
\label{f4}
\end{figure}
\begin{figure}
\resizebox{7.60cm}{!}{\includegraphics{porquet_5.eps}}
\caption{Ratio of the observed dereddened luminosities of [\ion{Fe}{xi}] over [\ion{Fe}{x}] versus $\tau_{\ion{O}{vii}}$ and $\tau_{\ion{O}{viii}}$. Same legend as Fig.~\ref{f3} and {\it triangle up}: lower limit for the Y-axis value. [\ion{Fe}{xi}] data are from Reynolds et al. (\cite{Reynolds97}) (\object{MCG-6-30-15}), Penston et al. (\cite{Penston}) (\object{NGC 7469}, \object{Mrk 79}, \object{NGC 4051}, \object{NGC 5548}, \object{NGC 3516}, \object{Mrk 335}), and \cite{Erkens} (\object{Mrk 9}, \object{Mrk 704}, \object{Mrk 705}, \object{Mrk 1239}, \object{Akn 120}, \object{Mrk 699}, \object{Akn 564}).}
\label{f5}
\end{figure}
\indent Figure~\ref{f2} displays dereddened [\ion{Fe}{x}] 6375\AA\, luminosity versus soft X-ray luminosity integrated over 0.1-2.4 keV. The values have been compiled in the literature (see Ref. in Fig.\ref{f2} caption).
There is a clear correlation between the two quantities, as also found by \cite{Erkens}. This could be an indication that the high-ionization coronal lines, like the features of the WA, are formed either in a photoionized medium or in a medium which is partly photoionized and partly ionized by collisions. Figure~\ref{f3} displays the observed dereddened luminosity of [\ion{Fe}{x}] versus $\tau_{\ion{O}{vii}}$ and $\tau_{\ion{O}{viii}}$. Figure~\ref{f4} displays the [\ion{Fe}{x}] equivalent width (EW) versus the oxygen optical depths. No correlations between these quantities are apparent. We have calculated (same method as defined above) a mean [\ion{Fe}{x}] equivalent width of 1.44$\pm$0.30$~$\AA. Throughout this article we take EW([\ion{Fe}{x}])=1.5$~$\AA.\\
\indent For [\ion{Fe}{xi}] 7892\AA, we obtained a mean EW of about 4$~$\AA$~$ with only 3 EWs available for \object{MCG-6-30-15}, \object{NGC 3783} and \object{Mrk 1347} (Morris $\&$ Ward \cite{Morris}).
Figure~\ref{f5} displays the luminosity ratio [\ion{Fe}{xi}]/[\ion{Fe}{x}] versus optical depths of \ion{O}{vii} and \ion{O}{viii}. For the reported objects, the [\ion{Fe}{x}] luminosity is greater than the [\ion{Fe}{xi}] luminosity, except for one object.\\
\indent For [\ion{Fe}{xiv}] 5303\AA, \cite{Erkens}, who have selected objects in which the presence of FHILs has already been reported in the literature, found that only 4 of their 15 objects ($\sim$27$\%$) required the presence of a significant [\ion{Fe}{xiv}] contribution. So this line is generally not detected in Seyfert 1s. The resolution and S/N of their spectra were not sufficient to separate the blend of [\ion{Fe}{xiv}] 5303\AA\,+\,[\ion{Ca}{v}] 5309\AA\ and no [\ion{Fe}{xiv}] EW values are quoted. Since only one spectrum has been plotted, we are unable to determine mean values for this coronal line. Nevertheless, when [\ion{Fe}{xiv}] is present its flux is significant ($>$25$\%$ of the [\ion{Ca}{v}] flux); we will therefore use EW([\ion{Fe}{xiv}])=3$~$\AA$~$ as a conservative upper limit. We will also use a value of 2 \AA$~$ to illustrate how sensitive the derived physical parameters are to this EW value.\\
\indent Until now, very little information concerning infrared coronal lines has been published for Seyfert 1s: for \object{NGC 7469} ([\ion{Si}{x}]1.43$\mu$m and [\ion{S}{xi}]1.93$\mu$m: Thompson \cite{Thompson96}, [\ion{Fe}{xii}]2.20$\mu$m: Genzel et al. \cite{Genzel}) and for \object{NGC 3516} ([\ion{Ca}{viii}]2.32$\mu$m: Giannuzzo et al. \cite{Giannuzzo}). Mean EWs for these coronal lines cannot be determined. We will assume that they are smaller than 10$~$\AA, which is compatible with the data of \object{NGC 3516}.
\subsection{UV high-ionization resonance lines: \ion{O}{vi} 1034\AA$~$ and \ion{Ne}{viii} 774\AA}
Resonance lines correspond to allowed transitions to the ground level. Zheng et al. (\cite{Zheng97}), who constructed a composite Radio-Quiet quasar spectrum from 101 quasars with z$>$0.33, give EW(\ion{Ne}{viii})=4\,\AA. According to Figure 2 of Zheng et al. (\cite{Zheng95}), the EW(\ion{O}{vi}) for a sample of 24 Radio-Quiet Quasars is about 7\,\AA$~$ (the four atypical EWs $>>$20\,\AA$~$ being excluded).
\section{Calculations} \label{sec:Calculation}
\subsection{Previous models}
\indent The WA has been investigated by Netzer (\cite{Netzer93}, \cite{Netzer96}). In his photoionization code "ION", he considered not only the absorption properties but also the emission and reflection spectra, which were not included in previous computations. He showed that intense X-ray lines as well as a non negligible reflection continuum might be produced. So the spectral shape could be changed significantly with respect to the pure transmitted spectrum, especially around the absorption edges which are reduced. He computed the UV and soft X-ray line intensities for a large range of parameters and found that the strongest X-ray lines should have EWs of about 5--50\AA. Indeed, George et al. (\cite{George98}) showed that the introduction of the X-ray emission lines significantly improves the fit of X-ray spectra. Another computation of the emission and reflection spectra has been performed by Krolik $\&$ Kriss (\cite{Krolik95}), in the absence of thermal equilibrium.\\
\indent The thermal stability of the WA has been discussed by Reynolds $\&$ Fabian (\cite{Reynolds Fabian}), using the photoionization code ``CLOUDY'' (Ferland \cite{Ferland91}). They showed the importance of the shape of the ionizing continuum. Krolik $\&$ Kriss (\cite{Krolik95}) have also studied the thermal stability of the WA.\\
\indent Since coronal lines are associated with highly ionized ions, the creation of such ions requires a very energetic process. Two main models could explain the emission of the coronal lines: a hot gas with collisional ionization, with T$>$10$^{6}$K (Oke $\&$ Sargent \cite{Oke}; Nussbaumer $\&$ Osterbrock \cite{Nussbaumer}) and a gas photoionized by a hard UV-X-ray continuum, with T$\sim$ a few 10$^{4}$K (Osterbrock \cite{Osterbrock}; Grandi \cite{Grandi}; Korista $\&$ Ferland \cite{Korista}; Oliva et al. \cite{Oliva}; Moorwood $\&$ Oliva \cite{Moorwood}). A third model involving photoionization plus shocks inside the NLR has also been proposed (Viegas-Aldrovandi $\&$ Contini \cite{Viegas-Aldrovandi}).\\
\indent With a sample of 15 Seyferts (including 11 Seyfert 1s), \cite{Erkens} found that the observed line strengths appear compatible with the predictions of a simple photoionized model calculated by Korista $\&$ Ferland (\cite{Korista}) and Spinoglio $\&$ Malkan (\cite{Spinoglio}). The former gave flux ratios of iron coronal lines for very low densities (n$_{H}\leq$10 cm$^{-3}$). For \object{PG1211+143}, Appenzeller $\&$ Wagner (\cite{Appenzeller}) observed a flux ratio of [\ion{Fe}{vii}] 6087\AA/[\ion{Fe}{x}] 6375\AA=0.8$\pm$0.2 compatible with their calculations. The latter made computations for the infrared fine-structure lines for higher densities (10$^{2}\leq$n$_{H}\leq$10$^{6}$ cm$^{-3}$), and for low ionization parameters (see~$\S$\ref{sec:our model} for a definition). Their predictions for [\ion{Mg}{viii}], [\ion{Si}{vii}], [\ion{Si}{ix}] and [\ion{Si}{x}] are given relative to the optical forbidden line [\ion{O}{iii}] 5007\AA, assuming that both lines come from the same medium, whereas they are probably produced by two different media. Both articles (Korista $\&$ Ferland and Spinoglio $\&$ Malkan) used an old version of ``CLOUDY'' with inaccurate electronic collision strengths for iron ions (Mason \cite{Mason}): resonance effects near the threshold energy, which have a significant influence on the coronal line fluxes, are not included. These resonance effects are discussed in Dumont $\&$ Porquet (1998, hereafter referred to as \cite{Dumont}). For example, the electronic collision rates published by Storey et al. (\cite{Storey}) and Pelan $\&$ Berrington (\cite{Pelan}) (as used in our code for [\ion{Fe}{xiv}] and [\ion{Fe}{x}] respectively) are much larger than the previous data, especially at low temperatures.
\subsection{Our model} \label{sec:our model}
\indent Our calculations are based on a new code IRIS, which computes detailed multi-wavelength spectra of photoionized and/or collisionally ionized gas, using as input the thermal and ionization structure computed by the photoionization code PEGAS (for more information about these two codes see \cite{Dumont}). It includes a large number of levels and splits the multiplets, thus providing an accurate spectrum from the soft X-rays to the infrared.\\
\indent All important atomic processes were taken into account: collisional electronic excitation and ionization, three-body recombination, photoionization, radiative and dielectronic recombination, excitation-autoionization and proton impact excitation. Charge transfer has not been implemented for the ground transitions of the coronal ions, since it is negligible in our calculations.\\
\indent For [\ion{Fe}{x}], the effective electronic collision strengths are taken from Pelan $\&$ Berrington (\cite{Pelan}) and the proton collision rates from Bely $\&$ Faucher (\cite{Bely}). For [\ion{Fe}{xi}] there are no published data available for ground transition effective collision strengths, taking into account the resonance effects, so we have computed them with an IDL subroutine of the CHIANTI database (Dere et al. \cite{Dere}). The proton collision rates for [\ion{Fe}{xi}] are from Landman (\cite{Landman}). For [\ion{Fe}{xiv}], the effective collision strengths are from Storey et al. (\cite{Storey}) and the proton collision rates for ground level transitions are from Heil et al. (\cite{Heil}).\\
\indent The element abundances are from Allen (\cite{Allen}).\\
\indent Our model assumes optically thin clouds in plane-parallel geometry with constant hydrogen density, in ionization equilibrium and surrounding a central source of radiation. The grid of parameters investigated here is:
\begin{enumerate}
\item Hydrogen density: $10^{8}\leq n_{H}\leq10^{12}$ cm$^{-3}$
\item Hydrogen column density: $10^{20}\leq N_{H}\leq 5\,10^{23}$ cm$^{-2}$
\item Ionization parameter: $2\leq \xi \leq4000$ erg\,cm\,s$^{-1}$
\end{enumerate}
with $\xi=\frac{L}{n_{H}~R^{2}}$
\noindent where $L$ is the bolometric luminosity (erg.s$^{-1}$) integrated from 0.1 eV to 100 keV and $R$ is the distance (cm) from the ionizing radiation source to the illuminated face of the cloud.\\
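\indent For illustration, assuming a bolometric luminosity of $L=10^{44}$ erg.s$^{-1}$ (a value chosen here only as an example), a cloud with $n_{H}=10^{10}$ cm$^{-3}$ seen at $\xi=100$ erg\,cm\,s$^{-1}$ would lie at $R=\sqrt{L/(n_{H}\,\xi)}=\sqrt{10^{44}/(10^{10}\times 100)}=10^{16}$ cm from the ionizing source.\\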
\indent The WA is seen in about 50$\%$ of Seyfert 1s, so we suppose that it is present in all these galaxies with a covering factor of 0.5.\\
\indent Finally, we assume that no dust can survive so close to the central ionizing source, i.e. near the BLR.\\
\indent We also compute a hybrid model, which consists of a photoionized gas out of thermal equilibrium (contrary to the pure photoionized model). In this case the temperature is set constant at 10$^{6}$K.
\begin{figure}
\resizebox{7.75cm}{!}{\includegraphics{porquet_6.eps}}
\caption{Shape of the two incident continua for Radio-Quiet quasars used in this paper, as well as a power law continuum (F$_{\nu}\propto\nu^{-1}$) used for comparison. All continua are normalized at the same $\xi$. {\it Solid line}: Laor et al. (\cite{Laor}) (``Laor continuum''), {\it dashed line}: Mathews $\&$ Ferland (\cite{Mathews}) but with a break at 10$\mu$m (``AGN continuum'') and {\it dot-dashed line}: simple power law.}
\label{f6}
\end{figure}
\indent We use two typical incident continua for Radio-Quiet AGN which are displayed on Figure~\ref{f6}. The first one is published by Laor et al. (\cite{Laor}) (``Laor continuum''), and the second one is used in ``CLOUDY'' (version 9004, Ferland et al. \cite{Ferland98}) and is similar to that published by Mathews $\&$ Ferland (\cite{Mathews}) but with a sub-millimeter break at 10 $\mu$m (``AGN continuum''). The ``Laor continuum'' in the X-ray range is based on ROSAT PSPC observations from the Bright Quasar Survey, with high S/N spectra. The selection of the objects in the sample is independent of their X-ray properties to avoid any bias. They found that the soft X-ray flux at $\sim$0.2--1 keV in Mathews $\&$ Ferland (\cite{Mathews}) is significantly underestimated. \\
\indent The thermal stability of clouds irradiated by both continua will be discussed in \cite{Dumont}.\\
\indent Emission due to radiative recombination is shaped as an exponential decay ($\propto~$exp($\frac{-(h\nu-\chi)}{kT}$)) with a width of kT; it either appears as a hump or fills the absorption hollow near the ionization threshold, and could be observed as a blueshift of the edge. In the case of photoionized models, kT is small and the \ion{O}{vii} or \ion{O}{viii} edges appear generally smoothed, while in the hybrid models (photoionized gas out of thermal equilibrium with a constant temperature T=10$^{6}$K), the hollow can be partly filled. Therefore, our comparisons with observations take into account this emission and the current spectral resolution of the X-ray spectra (ASCA).\\
\indent The EWs are calculated relative to the attenuated (transmitted plus emitted) central continuum. The reflected continuum is small and has very little influence on the value of the EWs.
\section{Results for the mean observed Seyfert 1 features} \label{sec:results}
\begin{figure*}
\resizebox{8.20cm}{!}{\includegraphics{porquet_7a.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_7b.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_7c.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_7d.eps}}
\caption{Isovalue curves in the plane ($\xi$,N$_{H}$) for the pure photoionized model with the incident ``Laor continuum'' for various hydrogen densities. (a) n$_{H}=10^{8}$ cm$^{-3}$, (b) n$_{H}=10^{9}$ cm$^{-3}$, (c) n$_{H}=10^{10}$ cm$^{-3}$ and (d) n$_{H}=10^{12}$ cm$^{-3}$. {\it Thick lower and upper solid lines}: $\tau_{\ion{O}{vii}}$=0.10 and 0.33 respectively, {\it thick long dashed line}: $\tau_{\ion{O}{viii}}$=0.20. {\it Thin long dashed line}: EW([\ion{Fe}{x}])=1.5$~$\AA, {\it Thin lower and upper dotted lines} EW([\ion{Fe}{xiv}])=2 and 3$~$\AA$~$ respectively, {\it thin dotted-dashed line}: EW(\ion{Ne}{viii})=4$~$\AA$~$ and {\it thin solid line}: EW(\ion{O}{vi})=7\AA. {\bf For the mean observed Seyfert 1 features, the regions above each thin curve are forbidden since producing too large EWs}.}
\label{f7}
\end{figure*}
\begin{center}
\begin{table*}
\caption{Results for the pure photoionized model with the incident ``Laor continuum'' for the mean observed Seyfert 1 features (cf Fig.~\ref{f7}).}
\label{t1}
\begin{tabular}{|c|l|l|}
\hline
& & \\
$n_{H}$ & \multicolumn{1}{c|}{\ion{O}{vii} zone} & \multicolumn{1}{c|}{\ion{O}{viii} zone} \\
(cm$^{-3}$) & \multicolumn{1}{c|}{$\tau_{\ion{O}{vii}}$=0.33 ($\xi<$250)} &\multicolumn{1}{c|}{$\tau_{\ion{O}{viii}}$=0.20 ($\xi>$250)} \\
& & \\
\hline
& & \\
10$^{8}$ & EW([\ion{Fe}{x}])$>>$1.5\AA$~$ or EW([\ion{Fe}{xiv}])$>>$3\AA & $\bullet$ {\bf if} ($\xi\leq$600) {\bf then} EW([\ion{Fe}{xiv}])$>>$3\AA \\
& $\Longrightarrow$ this density is ruled out & $\bullet$ {\bf else} no constraint \\
& & \\
\hline
& & \\
10$^{9}$ & idem as for n$_{H}$=10$^{8}$cm$^{-3}$ & idem as for n$_{H}$=10$^{8}$cm$^{-3}$ \\
& & \\
\hline
& & \\
10$^{10}$ & $\bullet$ {\bf if} ($\xi\leq$40) {\bf then} EW(\ion{O}{vi})$>>$7\AA & no constraint \\
& $\bullet$ {\bf else} no constraint & \\
& & \\
\hline
& & \\
10$^{12}$ & $\bullet$ {\bf if} ($\xi\leq$20) {\bf then} EW(\ion{O}{vi})$>>$7\AA & no constraint \\
& $\bullet$ {\bf else} no constraint & \\
& & \\
\hline
\end{tabular}
\end{table*}
\end{center}
\begin{figure*}
\resizebox{8.20cm}{!}{\includegraphics{porquet_8a.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_8b.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_8c.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_8d.eps}}
\caption{Same as Fig.~\ref{f7} for the pure photoionized model with the ``AGN continuum''.}
\label{f8}
\end{figure*}
\begin{figure*}
\resizebox{8.20cm}{!}{\includegraphics{porquet_9a.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_9b.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_9c.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_9d.eps}}
\caption{Same as Fig.~\ref{f7} for the hybrid model with the ``Laor continuum''.}
\label{f9}
\end{figure*}
\begin{figure*}
\resizebox{8.20cm}{!}{\includegraphics{porquet_10a.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_10b.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_10c.eps}}\hspace{0.25cm}
\resizebox{8.20cm}{!}{\includegraphics{porquet_10d.eps}}
\caption{Same as Fig.~\ref{f7} for the hybrid model with the ``AGN continuum''.}
\label{f10}
\end{figure*}
\indent Figures \ref{f7}, \ref{f8}, \ref{f9} and \ref{f10} display, in the plane ($\xi$,N$_{H}$), isovalue curves corresponding to the {\bf mean} observed values of EWs and optical depths, in the case of the pure photoionized and hybrid models and for the ``Laor'' and ``AGN'' incident continua (n$_{H}$=10$^{8}$--10$^{9}$--10$^{10}$--10$^{12}$cm$^{-3}$). The thick long dashed, and the upper and lower solid curves represent $\tau_{\ion{O}{viii}}$=0.20 and $\tau_{\ion{O}{vii}}$=0.33 and 0.10, respectively. $\tau_{\ion{O}{vii}}$=0.33 and $\tau_{\ion{O}{viii}}$=0.20 are the mean values for Seyfert 1s (see $\S$\ref{sec:tau}). $\tau_{\ion{O}{vii}}$=0.10 is roughly the lower limit to detect the presence of the WA. EWs of coronal lines ([\ion{Fe}{x}] and [\ion{Fe}{xiv}]) and resonance lines (\ion{Ne}{viii} and \ion{O}{vi}) are also reported. They are mean values of EWs, except for [\ion{Fe}{xiv}] as explained in Sect.~\ref{sec:rc}.
Grey thin long dashed, lower and upper dotted, dot-dashed and solid curves display isovalues of EW([\ion{Fe}{x}])=1.5$~$\AA, EW([\ion{Fe}{xiv}])=2 and 3$~$\AA, EW(\ion{Ne}{viii})=4$~$\AA\, and EW(\ion{O}{vi})=7$~$\AA$~$ respectively.\\
Isovalue curves for the IR coronal lines (upper limits taken at 10 \AA) and the [\ion{Fe}{xi}] coronal line (mean value of 4 \AA, based on three objects) are not displayed, since they do not constrain the models more than the iron coronal lines [\ion{Fe}{x}] and [\ion{Fe}{xiv}] and the resonance lines of \ion{Ne}{viii} and \ion{O}{vi}.\\
\indent For a given incident continuum shape and a given model (pure photoionized or hybrid), only a restricted range of $\xi$ and N$_{H}$ values is allowed to reproduce both $\tau_{\ion{O}{vii}}$=0.33 and $\tau_{\ion{O}{viii}}$=0.20. To avoid such a fine-tuning ($\xi\sim$250 and N$_{H}\sim$10$^{22}$ cm$^{-2}$ for the pure photoionized model with the ``Laor continuum''), a two-zone warm absorber is suggested, with the \ion{O}{vii} edge being formed in a region at lower $\xi$ and N$_{H}$ than the \ion{O}{viii} edge. Each zone responsible for a given edge has a negligible contribution to the other edge, which is mainly formed in the other zone.\\
\indent The regions of parameters ($\xi$,N$_{H}$) above the thin isovalue curves produce EWs for coronal and resonance lines greater than the mean observed values (except for [\ion{Fe}{xiv}] as explained above). Therefore, {\bf the region above each thin curve is forbidden}.
\subsection{Pure photoionized models}
\indent Figure~\ref{f7} shows the isovalue curves for the incident ``Laor continuum''. As an example, results for each density are displayed in Table~\ref{t1}.
For $\tau_{\ion{O}{vii}}$=0.33, a high density (n$_{H}\geq$10$^{10}$cm$^{-3}$) is required, in order not to produce larger EWs of [\ion{Fe}{x}] or [\ion{Fe}{xiv}] than the mean observed ones. A very small region where $\tau_{\ion{O}{viii}}$=0.20 could correspond to n$_{H}$=10$^{8}$cm$^{-3}$ if N$_{H}\geq$10$^{22}$cm$^{-2}$ and $\xi\geq$600.
\indent Figure~\ref{f8} shows the isovalue curves for the case of the incident ``AGN continuum''. A one-zone model requires $\xi\sim$500 and N$_{H}\sim$7$\,$10$^{21}$ cm$^{-2}$. For $\tau_{\ion{O}{vii}}$=0.33, a high density is required (n$_{H}>$10$^{9}$cm$^{-3}$). For $\tau_{\ion{O}{viii}}$=0.20, n$_{H}$ values as low as 10$^{9}$ cm$^{-3}$ are allowed.
\subsection{Results for the hybrid models}
\indent The pure photoionized model might be too simple to represent the WA, e.g. it could be out of thermal equilibrium or partly collisionally ionized. So we have computed a grid for a hybrid model consisting of a photoionized gas out of thermal equilibrium, the temperature being kept constant at $T_{e}=10^{6}~$K. This temperature corresponds approximately to the maximum of the ionic abundance of the \ion{Fe}{x} and \ion{Fe}{xi} ions.\\
\indent Figure~\ref{f9} shows the isovalue curves for the incident ``Laor continuum''. Both edges could be produced in the same region for $\xi \sim$200 and N$_{H}\sim$5$\,$10$^{22}$ cm$^{-2}$.
For $\tau_{\ion{O}{vii}}$=0.33, in order to have a non negligible allowed region, n$_{H}>$10$^{10}$cm$^{-3}$ is required with 50$<\xi<$200 and 10$^{22}<$N$_{H}<$5$\,$10$^{22}$ cm$^{-2}$. $\tau_{\ion{O}{viii}}$=0.20 could be accounted for by densities as low as n$_{H}\sim$10$^{8}$cm$^{-3}$ if N$_{H}\geq$4\,10$^{22}$cm$^{-2}$ and $\xi\geq$500.\\
\indent Figure~\ref{f10} displays the corresponding isovalue curves for the incident ``AGN continuum''. The one-zone model requires $\xi\sim$500 and N$_{H}\sim$3.5$\,$10$^{22}$ cm$^{-2}$. For $\tau_{\ion{O}{vii}}$=0.33, a high density is required (n$_{H}\geq$10$^{10}$ cm$^{-3}$) as in the previous cases. For $\tau_{\ion{O}{viii}}$=0.20, high ionization parameters ($\xi>$100) are required for low density values (n$_{H}\sim$10$^{8}$ cm$^{-3}$).
\subsection{Conclusions for the pure photoionized and for the hybrid models}
\indent The confrontation of the regions of parameters ($\xi$,N$_{H}$) allowed by the EWs with those producing $\tau_{\ion{O}{vii}}$ and $\tau_{\ion{O}{viii}}$ separately strongly constrains the hydrogen density of the WA. For n$_{H}\leq$10$^{10}$ cm$^{-3}$ the physical parameters are mainly constrained by the coronal lines, due to their weak observed EWs and their high critical densities. On the contrary, at higher densities the constraints are given by the resonance lines.\\
\indent The isovalue curves between the models with the ``Laor'' and ``AGN'' continua are shifted by a factor of about 2 in $\xi$ due to continuum shape differences. In the same way, similar values of the optical depths for the hybrid case are obtained for a $\xi$ value five times smaller than for the pure photoionized case. Notice that for the hybrid model, the \ion{Ne}{viii} line is enhanced.\\
\indent A one-zone model responsible for all the features considered here is ruled out by all the models investigated.\\
\indent For both pure photoionized and hybrid models a high density n$_{H}\geq$10$^{10}$cm$^{-3}$ for the WA is required for
$\tau_{\ion{O}{vii}}$=0.33, in order to explain the mean observed coronal lines and resonance lines of Seyfert 1 galaxies. On the contrary, $\tau_{\ion{O}{viii}}$=0.20 could be obtained with n$_{H}$ as low as 10$^{8}$cm$^{-3}$ but for a smaller range of parameters corresponding to high $\xi$ and N$_{H}$ values.
\section{Example of a particular case: MCG-6-30-15} \label{sec:mcg6-30-15}
\begin{figure}
\resizebox{8.20cm}{!}{\includegraphics{porquet_11a.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_11b.eps}}
\caption{Isovalue curves for MCG-6-30-15 for the pure photoionized model with the ``Laor continuum'' for two different hydrogen densities: (a) n$_{H}$=10$^{9}$ cm$^{-3}$ and (b) n$_{H}$=10$^{10}$ cm$^{-3}$. {\it Thick lower and upper solid lines}: $\tau_{\ion{O}{vii}}$=0.53 and 0.63 respectively, {\it thick lower and upper dashed lines}: $\tau_{\ion{O}{viii}}$=0.19 and 0.44, {\it thin lower and upper long dashed lines}: EW([\ion{Fe}{x}])=3 and 4$~$\AA, {\it thin lower and upper dot-dashed lines}: EW([\ion{Fe}{xi}])=3 and 4$~$\AA, {\it thin lower and upper solid lines} EW([\ion{Fe}{xiv}])=3 and 5$~$\AA$~$ respectively.}
\label{f11}
\end{figure}
\begin{figure}
\resizebox{8.20cm}{!}{\includegraphics{porquet_12a.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_12b.eps}}
\caption{Same as Fig.~\ref{f11} for the pure photoionized models with the ``AGN continuum''.}
\label{f12}
\end{figure}
\begin{figure}
\resizebox{8.20cm}{!}{\includegraphics{porquet_13a.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_13b.eps}}
\caption{Same as Fig.~\ref{f11} for the hybrid models with the ``Laor continuum''.}
\label{f13}
\end{figure}
\begin{figure}
\resizebox{8.20cm}{!}{\includegraphics{porquet_14a.eps}}
\resizebox{8.20cm}{!}{\includegraphics{porquet_14b.eps}}
\caption{Same as Fig.~\ref{f11} for the hybrid models with the ``AGN continuum''.}
\label{f14}
\end{figure}
\indent It is interesting to apply our computations to the Seyfert 1 galaxy \object{MCG-6-30-15}, for which several emission lines have been measured, as well as the optical depths of the \ion{O}{vii} and \ion{O}{viii} edges.\\
\indent With ASCA, Reynolds et al. (\cite{Reynolds95}) observed in this object that the optical depth at the \ion{O}{viii} edge ($\tau_{\ion{O}{viii}}$) responds to continuum changes with a characteristic time-scale of about $10^{4}$s, whereas the optical depth of \ion{O}{vii} ($\tau_{\ion{O}{vii}}$) seems to be almost constant during all the observations. Otani et al. (\cite{Otani}) made the hypothesis that the WA could consist of two different components: the inner absorber which has a high ionization, responsible in large part for the \ion{O}{viii} edge and located in or just outside the BLR (R$<$10$^{17}$ cm), and the outer absorber mainly responsible for the \ion{O}{vii} edge, less ionized, and located near the molecular torus (R$>$1 pc). With BeppoSax data, Orr et al. (\cite{Orr}) confirmed that $\tau_{\ion{O}{vii}}$ did not change significantly during the exposure time ($\sim$400 ks), contrary to $\tau_{\ion{O}{viii}}$. But they did not find any simple correlation between $\tau_{\ion{O}{viii}}$ and the continuum luminosity, although the variations of the continuum emission and those of the WA (edges) have a similar time-scale ($<\,2\times10^{4}$s). The ASCA mean spectrum obtained in July 1994 (Otani et al. \cite{Otani}) and the BeppoSax spectrum (Orr et al. \cite{Orr}) give optical depth values consistent with those derived from the 1993 ASCA data (Reynolds et al. \cite{Reynolds95}). In the following we use the two sets of values derived by Reynolds et al.: $\tau_{\ion{O}{vii}}$=0.53$\pm$0.04, 0.63$\pm$0.05 and $\tau_{\ion{O}{viii}}$=0.19$\pm$0.03, 0.44$\pm$0.04, respectively for the July and August 1993 datasets.\\
\indent Data for the EWs of iron coronal lines are taken from Reynolds et al. (\cite{Reynolds97}).
They gave EW([\ion{Fe}{x}])=3.0$\pm$0.4$~$\AA, EW([\ion{Fe}{xi}])=3.0$\pm$0.7$~$\AA$~$ and EW([\ion{Fe}{xiv}])=5.2$\pm$0.4$~$\AA. But the EW measurements of [\ion{Fe}{x}] and [\ion{Fe}{xi}] have not been galaxy-subtracted; they are therefore underestimated.
Since no information is given to estimate the contribution of the stellar component near these lines, we take EWs for [\ion{Fe}{x}] and [\ion{Fe}{xi}] of 4$~$\AA, estimating the effect of a dilution of about 1/3. This value is close to that found by Serote-Roos et al. (\cite{Serote}) for \object{NGC 3516} which displays an EW of the Calcium triplet about equal to that of \object{MCG-6-30-15} (Morris $\&$ Ward \cite{Morris}). The [\ion{Fe}{xiv}] 5303\AA$~$ line is blended with [\ion{Ca}{v}] 5309\AA, so its EW is overestimated. That is why we also use another EW value for [\ion{Fe}{xiv}] of 3$~$\AA$~$ which should be closer to the real value.\\
\indent In Figs.~\ref{f11}, \ref{f12}, \ref{f13} and \ref{f14} both values obtained by Reynolds et al. (\cite{Reynolds95}) for each optical depth are displayed ($\tau_{\ion{O}{vii}}$=0.53 and 0.63: thick lower and upper solid lines respectively and $\tau_{\ion{O}{viii}}$=0.19 and 0.44: thick lower and upper long dashed lines respectively). Isovalue curves for the optical iron coronal lines are also reported (thin lower and upper long dashed lines: EW([\ion{Fe}{x}])=3 and 4$~$\AA, thin lower and upper dot-dashed lines: EW([\ion{Fe}{xi}])=3 and 4$~$\AA, thin lower and upper solid lines: EW([\ion{Fe}{xiv}])=3 and 5$~$\AA$~$ respectively).
\indent In order to reproduce both $\tau_{\ion{O}{vii}}$ and $\tau_{\ion{O}{viii}}$ values of July 1993 (one-zone model), an ionization parameter of the order of 200--400 is needed, depending on the shape of the ionizing spectrum and on the type of model (pure photoionized or hybrid). A range of $\xi$ between 300 and 900 is required for the August 1993 data. These values are significantly greater than the values found by Reynolds et al. (\cite{Reynolds95}), reflecting the different shape of the ionizing continuum (power law) they assumed. The ratio of the ionization parameter values derived for the two epochs removes the dependence on the shape of the ionizing continuum. This ratio is consistent with the one obtained by Reynolds et al. ($\sim$1.3) except for the pure photoionized model with the ``AGN continuum'' ($\sim$2.6).\\
\indent However, the short variability time-scale ($\sim$10\,000 s) of the \ion{O}{viii} edge favors a two-zone model (Reynolds et al. \cite{Reynolds95}, Otani et al. \cite{Otani}).\\
\indent For the pure photoionization model with the ``AGN continuum'' a n$_{H}$ value of about 10$^{9}$ cm$^{-3}$ could account for the [\ion{Fe}{xiv}] (EW$\sim$3\AA) for a narrow range of $\xi$: 150--300 (cf Fig.~\ref{f12}).\\
\indent For the range of densities considered here, the inner zone responsible for the \ion{O}{viii} edge would contribute weakly to the coronal lines of \ion{Fe}{x} and \ion{Fe}{xi}, and it is not constrained by the [\ion{Fe}{xiv}] line at very high values of $\xi$.\\
\indent The recombination time-scale derived for the high density value associated with the \ion{O}{vii} edge is much smaller than the variability time-scale of the associated region. Thus, the photoionization equilibrium can be applied.\\
\indent We also point out that our computations are restricted to dust-free models, whereas dust inside the outer warm absorber has been considered as a viable solution by Reynolds et al. (\cite{Reynolds97}).
\section{Conclusions} \label{sec:Conclusion}
\indent Using a photoionization code, including the most recent atomic data available for the coronal lines, we have found that the coronal lines could be formed in the Warm Absorber of the Seyfert 1 galaxies, and they strongly constrain its physical parameters, especially the hydrogen density. In order to take into account the available observational constraints, a high density is required for the mean observed Seyfert 1 features, as well as for the case of \object{MCG-6-30-15}, for both considered models (photoionized medium in or out of thermal equilibrium).
A model with two different regions is favored, an inner zone mainly producing the \ion{O}{viii} edge, associated with a high ionization parameter ($\xi\sim$a few hundred), and an outer zone where the \ion{O}{vii} edge is formed, corresponding to a smaller $\xi$ ($\sim$a few tens).
A density n$_{H}\sim$10$^{10}$cm$^{-3}$ and $\xi\sim$10--100 for a typical Seyfert 1 \ion{O}{vii} edge implies a radius similar to that of the BLR (R$\sim$a few 10$^{16}$ cm). For higher $\xi$ producing the \ion{O}{viii} edge, a region of low density ($\sim$10$^{8}$ cm$^{-3}$) is not obviously ruled out, which would be at a similar distance as the BLR, while a more likely high density region ($\sim$10$^{10}$ cm$^{-3}$) would be located even inside the BLR.\\
\indent Since the gas pressure is proportional to the ratio of the temperature to the ionization parameter (P$_{gas}\propto~$T/$\xi$), we deduce that the pressure is the same in the BLR and in the WA (using T$_{BLR}\sim$10$^{4}$K and $\xi_{BLR}\sim$10, T$_{WA}\sim$10$^{5}$K and $\xi_{WA}\sim$100). So the WA could coexist with the BLR and be a second gaseous phase of this medium.\\
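\indent Indeed, with these values both media have the same ratio T/$\xi\sim$10$^{3}$ in the units used here (10$^{4}$/10 for the BLR and 10$^{5}$/100 for the WA), so their gas pressures are comparable.\\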
\indent In this paper, several assumptions such as a solar abundance, a constant density, a covering factor of 0.5 and a dust-free medium have been made. This analysis is restricted to the available coronal and high-ionization resonance line measurements. In the near future, additional constraints will be provided by measurements of coronal lines in the IR and of X-ray resonance lines, which will be detectable thanks to the next generation of X-ray telescopes (AXAF, XMM, ASTRO-E). Our knowledge of the WA should also be improved by X-ray temporal variability studies. These investigations are crucial since they provide important diagnostics for the physical conditions which prevail in the ionized plasma. For instance, if the recombination time-scale is larger than the variability time-scale of the source, photoionization equilibrium cannot be applied (Reynolds $\&$ Fabian \cite{Reynolds Fabian}).\\
\indent Similarly, the hypothesis that the coronal lines are formed in the WA should be confirmed by the detection of rapid variations of these lines, which have not yet been observed.
\begin{acknowledgements}
We acknowledge Monique Joly for fruitful discussions and Claude Zeippen for helpful conversations about atomic processes.
\end{acknowledgements}
Planning algorithms based on lookahead search have been very successful for decision-making problems with known dynamics, such as board games \cite{silver2017masteringshogi,silver2016mastering,silver2017mastering} and simulated robot control \cite{tassa2012synthesis}.
However, to apply planning algorithms to more general tasks with unknown dynamics, the agent needs to learn the dynamics model from the interactions with the environment. Although learning the dynamics model has been a long-standing challenging problem, planning with a learned model has several benefits, including data efficiency, better performance, and adaptation to different tasks \cite{hafner2018learning}.
Recently, a model-based reinforcement learning algorithm, \textit{MuZero} \cite{schrittwieser2019mastering}, was proposed to extend the planning ability to more general environments by learning the dynamics model from experience. Building upon \textit{AlphaZero}'s \cite{silver2017mastering} powerful search and search-based policy iteration algorithms, \textit{MuZero} achieves state-of-the-art performance in the visually rich Atari 2600 domain and in board games that require precision planning.
However, while \textit{MuZero} is able to solve problems efficiently with high-dimensional observation spaces, it can only handle environments with discrete action spaces. Many real-world applications, especially physical control problems, require agents to sequentially choose actions from continuous action spaces. While discretizing the action space is a possible way to adapt \textit{MuZero} to continuous control problems, the number of actions increases exponentially with the number of degrees of freedom. Besides, action space discretization cannot preserve the information about the structure of the action domain, which may be essential for solving many problems \cite{lillicrap2015continuous}.
In this paper, we provide a possible way and the necessary theoretical results to extend the \textit{MuZero} algorithm to continuous action space environments. More specifically, to enable the tree search process to handle the continuous action space, we use the \textit{progressive widening} strategy \cite{chaslot2008progressive}, which gradually adds actions from the action space to the search tree.
For the parameterization of the policy network output, which aims to narrow down the search to high-probability moves, we use a Gaussian distribution to represent the policy and learn the statistics of the probability distribution from the experience data \cite{sutton2018reinforcement}.
For the policy training, a loss function is derived to match the policy predicted by the policy network with the search policy obtained from the Monte Carlo Tree Search (MCTS) simulation process. Through the above extensions, we show that the proposed algorithm, continuous \textit{MuZero}, outperforms the soft actor-critic (SAC) method \cite{haarnoja2018soft} in relatively low-dimensional MuJoCo environments.
This paper is organized as follows. Section 2 presents related work on model-based reinforcement learning and tree search algorithm in continuous action space. In Section 3 we describe the MCTS with the progressive widening strategy in continuous action space. Section 4 covers the network loss function and the algorithm training process. Section 5 presents the numerical experiment results, and Section 6 concludes this paper.
\section{Related Work}
In this section, we briefly review the model-based reinforcement learning algorithms and the Monte Carlo Tree Search (MCTS) in continuous action space.
\subsection{Model-based Reinforcement Learning}
Reinforcement learning algorithms are often divided into model-free and model-based algorithms
\cite{sutton2018reinforcement}. Model-based reinforcement learning algorithms learn a model of the environments, with which they can use for planning and predicting the future steps. This gives the agent an advantage in solving problems requiring a sophisticated lookahead. A classic approach is to directly model the dynamics of the observations \cite{sutton1991dyna,ha2018recurrent,kaiser2019model,hafner2018learning}.
A model-based reinforcement learning algorithm like Dyna-Q \cite{sutton1990integrated,yao2009multi} combines model-free and model-based learning, using its model to generate samples that augment those obtained through interaction with the environment. Dyna-Q has been adapted to continuous control problems in \cite{gu2016continuous}. This approach requires learning to reconstruct the observations without distinguishing between useful information and irrelevant details.
The \textit{MuZero} algorithm \cite{schrittwieser2019mastering} avoids this by encoding observations into a hidden state without imposing the constraints of capturing all information necessary to reconstruct the original observation, which helps reduce computations by focusing on the information useful for planning. From the encoded hidden state (which has no semantics of environment state attached to it), \textit{MuZero} also has the particularity of predicting the quantities necessary for a sophisticated lookahead: the policy and value function based on the current hidden state, the reward and next hidden state based on the current hidden state and the selected action. \textit{MuZero} uses its model to plan with an MCTS search which outputs an improved policy target. It is quite similar to value prediction networks \cite{oh2017value} but uses a predicted policy in addition to the value to prune the search space. However, most of this work is on discrete models but many real-world reinforcement learning domains have continuous action spaces. The most successful methods in domains with a continuous action space remain model-free algorithms \cite{wang2019benchmarking,haarnoja2018soft,schulman2017proximal}.
\subsection{MCTS with Continuous Action Space}
Applying tree search algorithms to the continuous action case causes a tension between exploring a larger set of candidate actions to cover more of the action space, and exploiting the current candidate actions to evaluate them more accurately through deeper search and more execution outcomes. Several recent research works have sought to adapt tree search to continuous action spaces.
\cite{tesauro1997line} proposed truncated Monte Carlo that prunes away both candidate actions that are unlikely to be the best action, and the candidates with values close to the current best estimate (i.e., choosing either one would not make a significant difference). Similarly, AlphaGo \cite{silver2016mastering} uses a trained policy network to narrow down the search to high-value actions.
The classical approach of progressive widening (or unpruning) \cite{coulom2007computing,chaslot2008progressive,couetoux2011continuous,yee2016monte} can handle continuous action space by considering a slowly growing discrete set of sampled actions, which has been theoretically analyzed in \cite{wang2009algorithms}.
\cite{mansley2011sample} replaces the Upper Confidence Bound (UCB) method \cite{kocsis2006bandit} in the MCTS algorithm with Hierarchical Optimistic Optimization (HOO) \cite{bubeck2011x}, an algorithm with theoretical guarantees in continuous action spaces. However, the HOO method has quadratic running time, which makes it intractable in games that require extensive planning. In this paper, we apply the progressive widening strategy for its computational efficiency (it does not increase the computation time of the tree search process).
\section{Monte Carlo Tree Search in Continuous Action Space}
We now describe the details of the MCTS algorithm in continuous action space, with a variant of the UCB \cite{kocsis2006bandit} algorithm named PUCB (Predictor + UCB) \cite{rosin2011multi}. UCB has some promising properties: it is very efficient and guaranteed to be within a constant factor of the best possible bound on the growth of regret (defined as the expected loss due to selecting sub-optimal actions), and it can balance exploration and exploitation very well \cite{kocsis2006bandit}. Building upon UCB, PUCB incorporates the prior information for each action to help the agent select the most promising action, which can bring benefits especially for large or continuous action spaces.
For the MCTS algorithm with a discrete action space, the PUCB score \cite{rosin2011multi} for all actions can be evaluated and the action with the max PUCB score will be selected. However, when the action space becomes large or continuous, it is impossible to enumerate the PUCB score for all possible actions.
Under such scenarios, \textit{progressive widening} strategy deals with the large/continuous action space through artificially limiting the number of actions in the search tree based on the number of visits to the node and slowly growing the discrete set of sampled actions during the simulation process. After the quality of the best available action is estimated well, additional actions are taken into consideration.
More specifically, at the beginning of each action selection step, the algorithm continues by either improving the estimated value of current child actions in the search tree by selecting an action with max PUCB score, or exploring untried actions by adding an action under the current node. This decision is based on keeping the number of child actions for a node bounded by a sublinear function $p({s})$ of the number of visits to the current node denoted as $n({s})$:
\begin{equation}
\label{pw}
p({s}) = C_{pw} \cdot n({s})^\alpha
\end{equation}
In Eq.~\ref{pw}, $C_{pw}$ and $\alpha \in (0, 1)$ are two parameters that balance whether the MCTS algorithm should cover more actions or improve the estimate of a few actions. At each selection step, if the number of child actions of a node ${s}$ is smaller than $p({s})$, a new action will be added as a child action to the current node. Otherwise, the agent will select an action from the current child actions according to their PUCB score.
The PUCB score ensures that the tree grows deeper more quickly in its promising parts. The progressive widening strategy additionally lets the tree grow wider, so that more actions are explored in some parts of the search tree.
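\indent As an illustration, the widening decision of Eq.~\ref{pw} reduces to a simple threshold check; a minimal Python sketch is given below (the constants are placeholder values, not tuned ones):
\begin{verbatim}
def should_widen(num_children, visit_count, c_pw=1.0, alpha=0.5):
    # Progressive-widening rule: add a new action while the number of
    # child actions is below p(s) = c_pw * n(s) ** alpha.
    # The max(1, ...) simply guarantees at least one child per node.
    return num_children < max(1.0, c_pw * visit_count ** alpha)
\end{verbatim}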
To represent the action probability density function in continuous action space, in this paper we let the policy network learn a Gaussian distribution as the policy distribution. The policy network will output ${\mu}$ and ${\sigma}$ as the mean and standard deviation of the normal distribution. With the mean and standard deviation, the action is sampled from the distribution
\begin{equation}
\label{pdf}
\pi({a} | {s}, \theta) = \frac{1}{\sigma(s, {\theta}) \sqrt{2 \pi}} \exp \left(-\frac{({a} - \mu(s, \theta))^{2}}{2 \sigma(s, \theta)^{2}}\right)
\end{equation}
In the tree search process of the Continuous \textit{MuZero} algorithm, every node of the search tree is associated with a hidden state $s$, obtained either through the pre-processing of the representation network or through the dynamics network prediction. For each action $a$ currently in the search tree (note that the number of child actions for each node changes during the simulation process), there is an edge $(s, a)$ that stores a set of statistics $\{N(s, a), Q(s, a), \mu(s, a), \sigma(s, a), R(s, a), S(s, a)\}$, respectively representing the visit count $N$, mean value $Q$, policy mean $\mu$, policy standard deviation $\sigma$, reward $R$, and state transition $S$. Similar to the \textit{MuZero} algorithm, the search is divided into three stages, repeated for a number of simulations.
\textbf{Selection:} Each simulation starts from the current root node $s^0$, keeps repeating the selection until reaching an unexpanded node that has no child actions. For each hypothetical time step $k = 1, \cdots$, a decision is made by comparing the number of child actions $|\mathcal{A}_{s^{k-1}}|$ for the node $s^{k-1}$ and the value $p(s^{k-1})$ in Eq.~\ref{pw}.
If $|\mathcal{A}_{s^{k-1}}| \geq p(s^{k-1})$, an action $a^k$ will be selected according to the stored statistics of node $s^{k-1}$, by maximizing over the PUCB score \cite{rosin2011multi,silver2018general}
\begin{equation}
a^{k}=\arg \max _{a}\left[Q(s, a) + \bar{\pi}(a|s, \theta) \cdot \frac{\sqrt{\sum_{b} N(s, b)}}{1+N(s, a)}\left(c_{1}+\log \left(\frac{\sum_{b} N(s, b)+c_{2}+1}{c_{2}}\right)\right)\right]
\end{equation}
Here we note that the difference from the \textit{MuZero} algorithm is that the prior value $\bar{\pi}(a|s, \theta)$ is normalized from the policy probability density value $\pi(a|s, \theta)$ in Eq.~\ref{pdf}, since the density value can be unbounded and the PUCB algorithm requires that the prior values are all positive and sum to 1.
\begin{equation}
\label{prior_normalization}
\bar{\pi}(a|s, \theta) = \frac{{\pi}(a|s, \theta)}{\sum_b {\pi}(b|s, \theta)}
\end{equation}
The constants $c_1$ and $c_2$ are used to control the influence of the prior $\bar{\pi}(a|s, \theta)$ relative to the value $Q(s, a)$, and follow the same parameter settings as the \textit{MuZero} algorithm.
If $|\mathcal{A}_{s^{k-1}}| < p(s^{k-1})$, the agent will select a new action from the action space, add it to the search tree, and expand this new edge. \cite{moerland2018a0c} proposed to sample a new action according to the mean and standard deviation values output by the policy network and stored at the parent node $s^{k-1}$, which can effectively prune away child actions with low prior value. In this paper, we adopt this simple strategy to focus our work on the first step of extending the \textit{MuZero} algorithm to continuous action spaces. We expect that the algorithm's performance can be further improved with a better action sampling strategy such as \cite{yee2016monte,lee2018deep}.
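\indent For concreteness, the selection decision described above can be sketched as follows (a minimal Python illustration; the node is represented as a plain dictionary, the constants $C_{pw}$, $\alpha$, $c_1$ and $c_2$ are placeholder values, and error handling is omitted):
\begin{verbatim}
import math
import random

def gaussian_pdf(a, mu, sigma):
    # Gaussian policy density evaluated at the sampled action a.
    return (math.exp(-(a - mu) ** 2 / (2.0 * sigma ** 2))
            / (sigma * math.sqrt(2.0 * math.pi)))

def select_or_widen(node, c_pw=1.0, alpha=0.5, c1=1.25, c2=19652.0):
    # One in-tree decision at `node`: either sample a new action
    # (progressive widening) or return the child action with the best
    # PUCB score.  `node` is a dict with keys "mu", "sigma",
    # "visit_count" and "edges"; each edge stores {"N", "Q", "prior"}.
    edges = node["edges"]
    if len(edges) < max(1.0, c_pw * node["visit_count"] ** alpha):
        a = random.gauss(node["mu"], node["sigma"])   # new candidate action
        edges[a] = {"N": 0, "Q": 0.0,
                    "prior": gaussian_pdf(a, node["mu"], node["sigma"])}
        return a, True                                # True: edge unexpanded
    total_prior = sum(e["prior"] for e in edges.values())
    total_visits = sum(e["N"] for e in edges.values())
    def pucb(e):
        prior = e["prior"] / total_prior              # normalized prior
        bonus = prior * math.sqrt(total_visits) / (1 + e["N"]) * (
            c1 + math.log((total_visits + c2 + 1) / c2))
        return e["Q"] + bonus
    best_action = max(edges, key=lambda a: pucb(edges[a]))
    return best_action, False
\end{verbatim}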
\textbf{Expansion:} When the agent reaches an unexpanded node either through the PUCB score maximization, or through the progressive widening strategy, the selection stage finishes and the new node will get expanded (we denote this final hypothetical time step during the selection stage as $l$). In the node expansion process, based on the current state-action information $(s^{l-1}, a^l)$ the reward and next state are first computed by the dynamics function, $r^{l}, s^{l} = g_{\theta}(s^{l-1}, a^{l})$. With the state information $s^l$ for the next step, the policy and value are then computed by the prediction function, ${\mu}^{l}, {\sigma}^l, v^{l}=f_{\theta}(s^{l})$.
After the function prediction, the new node corresponding to state $s^{l}$ is added to the search tree under the edge $(s^{l-1}, a^l)$.
In the \textit{MuZero} algorithm, with a finite number of actions, each edge $(s^{l}, a)$ from the newly expanded node is initialized according to the predicted policy distribution.
Since the action space is now continuous, only one edge with the action value randomly sampled from $\mathcal{N}(\mu^l, (\sigma^l)^2)$ is added to the newly expanded node, and the statistics for this edge are initialized to $\{N({s}, {a}) = 0, Q({s}, {a}) = 0, \mu({s}, {a}) = {\mu}, \sigma({s}, {a}) = {\sigma}\}$, where $\mu$ and $\sigma$ are used to determine the probability density prior for the sampled $a$.
Similar to the \textit{MuZero} and \textit{AlphaZero} algorithms, the search algorithm with the progressive widening strategy makes at most one call to the dynamics function and to the prediction function per simulation, maintaining the same order of computation cost.
\textbf{Backup:} In the general MCTS algorithm, there is a simulation step that performs one random playout from the newly expanded node to a terminal state of the game. However, in the \textit{MuZero} algorithm, each step from an unvisited node will require one call to the dynamics function and the prediction function, which makes it intractable for games that need a long trajectory to finish. Thus similar to the \textit{MuZero} algorithm, immediately after the Expansion step, the statistics (the mean value $Q$ and the visit count $N$) for each edge in the simulation path will be updated based on the cumulative discounted reward, bootstrapping from the value function of the newly expanded node.
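\indent A minimal sketch of this backup step is given below (Python; the discount factor is an illustrative placeholder, and each traversed edge is represented by a dictionary holding its statistics):
\begin{verbatim}
def backup(search_path, leaf_value, discount=0.997):
    # Update the statistics of every edge traversed during the simulation,
    # bootstrapping from the value predicted at the newly expanded node.
    # `search_path` lists the edge dictionaries {"N", "Q", "R"} from the
    # root down to the expanded node.
    value = leaf_value
    for edge in reversed(search_path):
        value = edge["R"] + discount * value          # discounted return
        edge["Q"] = (edge["N"] * edge["Q"] + value) / (edge["N"] + 1)
        edge["N"] += 1
\end{verbatim}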
\section{Neural network training in continuous action space}
In the \textit{MuZero} algorithm, the parameters of the representation, dynamics, and prediction networks are trained jointly, through backpropagation-through-time, to predict the policy, the value function, and the reward. At each training step, a trajectory with $K$ consecutive steps is sampled from the replay buffer, from which the network targets for the policy, value, and reward are calculated. The loss functions for the policy, value, and reward are designed respectively to minimize the difference between the network predictions and targets. In the following, we describe how we design the loss function for the policy network in continuous action space, and briefly review the loss function for the value/reward network.
\subsection{Policy Network}
Similar to other policy-based algorithms \cite{schulman2015trust,duan2016benchmarking}, the policy network outputs the mean $\mu(s, \theta)$ and the standard deviation $\sigma(s, \theta)$ of a Gaussian distribution. For the policy mean, we use a fully-connected MLP with LeakyReLU nonlinearities for the hidden layers and a tanh activation for the output layer. A separate fully-connected MLP specifies the log standard deviation, which also depends on the state input.
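\indent A possible PyTorch implementation of this policy head is sketched below (the layer width is a placeholder, not the value used in our experiments):
\begin{verbatim}
import torch
import torch.nn as nn

class GaussianPolicyHead(nn.Module):
    # Policy head of the prediction network: a tanh-bounded mean and a
    # state-dependent log standard deviation, each from its own MLP.
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.mu_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh())
        self.log_sigma_net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, action_dim))

    def forward(self, state):
        mu = self.mu_net(state)
        sigma = self.log_sigma_net(state).exp()   # sigma > 0 by construction
        return mu, sigma
\end{verbatim}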
For the training target of the policy network, we want to transform the MCTS result of the root node to a continuous target probability density $\hat{\pi}$.
In general, to estimate the density of a probability distribution with continuous support, independent and identically distributed (i.i.d.) samples are drawn from the underlying distribution \cite{perez2008kullback}.
However, in the MCTS algorithm, only a finite number of actions with different visit counts will be returned (for MCTS algorithm with $N$ simulations, the number of actions will be $\left \lceil{C_{pw} \cdot N^{\alpha}}\right \rceil$). Thus here we assume the density value of the target distribution $\hat{\pi}(a | {s})$ at a root action ${a}_i$ is proportional to its visit counts
\begin{equation}
\label{density_norm}
\hat{\pi}(a_{i} | {s})=\frac{n({s}, {a}_i)^{\tau}}{Z({s}, \tau)}
\end{equation}
where $\tau \in \mathbb{R}^{+}$ specifies the temperature parameter, and $Z({s}, \tau)$ is a normalization term that only depends on the current state and the temperature parameter.
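\indent In practice, the normalized target values at the sampled root actions can be obtained directly from the visit counts, as in the following short numpy sketch (the temperature is a placeholder value):
\begin{verbatim}
import numpy as np

def visit_counts_to_target(counts, temperature=1.0):
    # Target density values at the root actions: proportional to the visit
    # counts raised to the temperature, normalized over the sampled actions
    # (the normalization plays the role of Z(s, tau)).
    weights = np.asarray(counts, dtype=float) ** temperature
    return weights / weights.sum()
\end{verbatim}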
For the loss function, we use the Kullback-Leibler divergence between the network output $\pi_{\theta}(a | s)$ and the empirical density $\hat{\pi}({a} | {s})$ from the MCTS result.
\begin{equation}
\label{kl_loss}
l^p (\theta) = \mathrm{D}_{KL}(\pi_{\theta}({a}| {s}) \| \hat{\pi}({a} | {s}))
\end{equation}
In general, since the KL divergence between two distributions is asymmetric, we have the choice of minimizing either $\mathrm{D}_{KL}(\pi_{\theta} \| \hat{\pi})$ or $\mathrm{D}_{KL}(\hat{\pi} \| \pi_{\theta})$. As illustrated in Fig.~\ref{kl} from \cite{Goodfellow-et-al-2016}, the choice of the direction is problem dependent. Minimizing $\mathrm{D}_{KL}(\hat{\pi} \| \pi_{\theta})$ requires $\pi_\theta$ to place high probability anywhere that $\hat{\pi}$ places high probability, while minimizing $\mathrm{D}_{KL}(\pi_{\theta} \| \hat{\pi})$ requires $\pi_\theta$ to rarely place high probability anywhere that $\hat{\pi}$ places low probability. Since in our problem we desire the trained policy network to prune away the undesired actions with low return, we choose $\mathrm{D}_{KL}(\pi_{\theta} \| \hat{\pi})$ as the policy loss function.
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{figs/kl.png}
\caption{Illustration of the KL divergence between the prediction and target.}
\label{kl}
\end{figure}
Also note that the empirical density $\hat{\pi}({a} | {s})$ from the search result does not define a proper density, as we never specify the density value in between the finite support points. However, through the following theorem, we show that even if we only consider the loss at the support points, as shown in Eq.~\ref{hat_policy_loss}, the expectation of the loss function equals the true KL divergence provided the actions are sampled according to the policy network prediction $a \sim \pi_{\theta}(a|s)$.
\begin{equation}
\label{hat_policy_loss}
\hat{l}_N^p(\theta) = \frac{1}{N} \sum_{i=1}^N
\left( \log \pi_{\theta}(a_i | {s}) - \log \hat{\pi}(a_i | {s}) \right)
\end{equation}
\newtheorem{th1}{Theorem}
\begin{th1}
If the actions are sampled according to $\pi_\theta(a | {s})$, then for the empirical estimator $\hat{l}_N^p(\theta)$ of the policy loss function, we have
\begin{equation}
\label{th1}
\E_{a \sim \pi_\theta(a | {s})} \left[\hat{l}_N^p(\theta) \right] = \mathrm{D}_{KL}(\pi_{\theta}(a| {s}) \| \hat{\pi}(a | {s}))
\end{equation}
Furthermore, the variance of the empirical estimator $\hat{l}_N^p(\theta)$ converges to $0$ at rate $O(1/N)$.
\end{th1}
We provide the proof in the Appendix. Theorem 1 states that if $a \sim \pi_\theta(a | {s})$, then the empirical estimator $\hat{l}_N^p(\theta)$ for the policy loss function in Eq.~\ref{kl_loss} is an unbiased estimator.
By substituting Eq.~\ref{density_norm} into the estimator in Eq.~\ref{hat_policy_loss}, we can further simplify it:
\begin{equation}
\begin{split}
\hat{l}_N^p(\theta)
&= \frac{1}{N} \sum_{i=1}^N \left( \log \pi_{\theta}(a_i | {s}) - \log \hat{\pi}(a_i | {s}) \right) \\
&= \frac{1}{N} \sum_{i=1}^N \left( \log \pi_{\theta}(a_i | {s}) - \log \frac{n({s}, {a}_{i})^{\tau}}{Z({s}, \tau)} \right) \\
&= \frac{1}{N} \sum_{i=1}^N \left( \log \pi_{\theta}(a_i | {s}) - \tau \log n({s}, {a}_{i}) \right) + \log Z({s}, \tau)
\end{split}
\end{equation}
where the term $\log Z({s},\tau)$ can be dropped since it does not depend on neural network weights $\theta$ and action $a$, which means it is a constant given a specific data sample. Thus the policy loss function becomes
\begin{equation}
\label{final_policy_loss}
\tilde{l}_N^p(\theta) = \frac{1}{N} \sum_{i=1}^N \left( \log \pi_{\theta}(a_i | {s}) - \tau \log n({s}, {a}_{i}) \right)
\end{equation}
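A sketch of how this estimator can be evaluated from the MCTS root statistics is given below; the tensor shapes and variable names are illustrative assumptions.
\begin{verbatim}
import torch

def policy_loss(dist, actions, visit_counts, tau=1.0):
    # dist:          torch.distributions.Normal returned by the policy head
    # actions:       root actions returned by MCTS, shape (N, action_dim)
    # visit_counts:  their visit counts, shape (N,)
    log_pi = dist.log_prob(actions).sum(-1)         # log pi_theta(a_i | s)
    target = tau * torch.log(visit_counts.float())  # tau * log n(s, a_i)
    return (log_pi - target).mean()
\end{verbatim}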
We also give the derivation of the expected gradient for the empirical loss $\tilde{l}_N^p(\theta)$, with the details provided in the Appendix:
\begin{equation}
\E_{a \sim \pi_{\theta}(a | {s})} \nabla_{\theta} \tilde{l}_N^p(\theta) = \E_{a \sim \pi_{\theta}(a | {s})} \left[\nabla_{\theta} \log \pi_\theta(a| {s}) \cdot (\log \pi_\theta(a| {s}) - \tau \log n(a, {s})) \right]
\end{equation}
Also note here that for the estimator $\hat{l}_N^p(\theta)$ to be unbiased, the actions need to be sampled according to the distribution $\pi_\theta$, which depends on $\mu_\theta$ and $\sigma_\theta$ predicted by the policy network. However, for the experience data sampled from the replay buffer, the actions were sampled according to the old policy network prediction during the self-play phase. In this paper, we replace the expectation over ${a} \sim \pi_{\theta}({a} | {s})$ with the empirical support points from the old policy distribution ${a} \sim \pi_{\theta_{old}}$, where $\theta_{old}$ denotes the old network weights during the self-play phase. Although the empirical estimate becomes biased with this replacement, it does not affect the performance of the proposed algorithm in our numerical experiments. Future work could use importance-weighted sampling to obtain an unbiased estimate of the policy loss. The final policy loss function and its gradient become
\begin{equation}
\hat{l}^p(\theta) = \E_{{a} \sim \pi_{\theta_{old}}} \left( \log \pi_{\theta}(a | {s}) - \tau \log n({s}, {a}) \right)
\end{equation}
\begin{equation}
\nabla_{\theta} \hat{l}^p(\theta) =\E_{{a} \sim \pi_{\theta_{old}}}
[\nabla_{\theta} \log \pi_\theta({a}| {s}) \cdot (\log \pi_\theta({a}| {s}) - \tau \log n({a}, {s}))]
\end{equation}
\subsection{Value/Reward Network}
For the value/reward network, we follow the same setting as the \textit{MuZero} algorithm, and we briefly review the details in this subsection.
Following \cite{pohlen2018observe}, the value/reward targets are scaled using an invertible transform
\begin{equation}
\label{transform}
h(x) = \operatorname{sign}(x)(\sqrt{|x|+1}-1)+\epsilon x
\end{equation}
where $\epsilon=0.001$ in our experiments. We then apply a transformation $\phi$ to the scalar reward and value targets to obtain equivalent categorical representations. For the reward/value target, we use a discrete support set of size 21 with one support for every integer between -10 and 10. Under this transformation, each scalar is represented as the linear combination of its two adjacent supports, such that the original value can be recovered by $x = x_{\text{low}} \cdot p_{\text{low}} + x_{\text{high}} \cdot p_{\text{high}}$.
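The scaling transform and the projection onto the integer supports can be sketched as follows (plain Python, scalar version; clipping values outside $[-10,10]$ to the support range is our assumption).
\begin{verbatim}
import math

def h(x, eps=1e-3):
    # invertible scaling transform of Eq. (transform)
    return math.copysign(math.sqrt(abs(x) + 1) - 1, x) + eps * x

def to_categorical(x, support_min=-10, support_max=10):
    # represent a (scaled) scalar on the 21-point integer support set;
    # the expectation over the supports recovers x
    x = min(max(x, support_min), support_max)
    low = math.floor(x)
    p_high = x - low
    probs = [0.0] * (support_max - support_min + 1)
    probs[low - support_min] += 1.0 - p_high
    if p_high > 0:
        probs[low - support_min + 1] += p_high
    return probs
\end{verbatim}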
During inference the actual value and rewards are obtained by first computing their expected value under their respective softmax distribution and subsequently inverting the scaling transformation in Eq.~\ref{transform}. Scaling and transformation of the value and reward happen transparently on the network side and are not visible to the rest of the algorithm.
With the above formulation, the loss for the value/reward has the following form
\begin{equation}
l^{v}(z, \mathbf{q}) = \boldsymbol{\phi}(z)^{T} \log \mathbf{q}
\end{equation}
\begin{equation}
l^r(u, \mathbf{r}) = \boldsymbol{\phi}(u)^{T} \log \mathbf{r}
\end{equation}
where $z, u$ are the value/reward targets, $\mathbf{q}, \mathbf{r}$ are the value/reward network output, and $\boldsymbol{\phi}$ denotes the transformation from the scalar values to the categorical representations.
\subsection{Loss Function}
For reinforcement learning algorithms in continuous action space, there is a risk that the policy network may converge prematurely, hence losing any exploration \cite{haarnoja2018soft}. To learn a policy that acts as randomly as possible while still being able to succeed at the task, in the numerical experiment we also add an entropy loss to the policy:
\begin{equation}
l^h(\theta) = -H(\pi_{\theta}(a | s))
\end{equation}
With the above formulation, the loss function for the proposed continuous \textit{MuZero} algorithm is a weighted sum of the loss functions described above, plus an $L^2$ regularization term on the network weights:
\begin{equation}
l(\theta) =
l^{r} +
l^{v} +
\tilde{l}_N^p +
\lambda \cdot l^{h} +
c \cdot \|\theta\|^{2}
\end{equation}
where $\lambda$ controls the contribution of the entropy loss and $c$ is the coefficient of the $L^2$ regularization term.
\subsection{Network Training}
The original version of the \textit{MuZero} algorithm uses a residual network which is suitable for image processing in Atari environments and board games. In our experiments with MuJoCo environments, we replace the networks (policy network, value network, reward network, and dynamics network) with fully connected networks. The hyperparameter and network architecture details are provided in the Appendix.
During the data generation process, we use 3 actors deployed on CPUs that keep generating experience data with the proposed algorithm, periodically pulling the most recent network weights. The same exploration scheme as in the \textit{MuZero} algorithm is used, where the visit count distribution is parametrized using a temperature parameter $T$ that balances exploitation and exploration.
At each training step, an episode is first sampled from the replay buffer, and then $K$ consecutive transitions are sampled from the episode. During the sampling process, the samples are drawn according to prioritized replay \cite{schaul2015prioritized,brittain2019prioritized}. The priority for transition $i$ is $P(i)=\frac{p_{i}^{\alpha}}{\sum_{k} p_{k}^{\alpha}}$, where $p_i$ is determined through the difference between the search value and the observed n-step return. The priority for an episode equals the mean priorities of all the transitions in this episode. To correct for sampling bias introduced by the prioritized sampling, we scale the loss using the importance sampling ratio $w_{i}=\left(\frac{1}{N} \cdot \frac{1}{P(i)}\right)^{\beta}$.
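A sketch of the sampling probabilities and the importance weights is given below; normalizing the weights by their maximum is a common convention \cite{schaul2015prioritized} and an assumption here rather than a detail stated above.
\begin{verbatim}
import numpy as np

def replay_probabilities(priorities, alpha):
    # P(i) = p_i^alpha / sum_k p_k^alpha
    p = np.asarray(priorities, dtype=float) ** alpha
    return p / p.sum()

def importance_weights(probs, beta):
    # w_i = (1 / (N * P(i)))^beta, used to rescale the loss
    n = len(probs)
    w = (1.0 / (n * np.asarray(probs))) ** beta
    return w / w.max()
\end{verbatim}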
To maintain a similar magnitude of the gradient across different unroll steps, we scale the gradient following the \textit{MuZero} algorithm.
To improve the learning process and bound the activation function output, we scale the hidden state to the same range as
the action input ($[-1, 1]$):
\begin{equation}
s_{\text {scaled}}= \frac{2s- [\min (s) + \max(s)] }{\max (s)-\min (s)}
\end{equation}
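A direct sketch of this rescaling (assuming the hidden state is a vector with $\max(s) > \min(s)$):
\begin{verbatim}
import numpy as np

def scale_hidden_state(s):
    # min-max rescale the hidden state vector to [-1, 1]
    lo, hi = np.min(s), np.max(s)
    return (2.0 * s - (lo + hi)) / (hi - lo)
\end{verbatim}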
\section{Experimental Results}
In this section, we show the preliminary experimental results on two relatively low-dimensional MuJoCo environments, compared with Soft Actor-Critic (SAC), a state-of-the-art model-free deep reinforcement learning algorithm. For the comparison with the SAC algorithm, the stable baselines \cite{stable-baselines} implementation was used, with the same parameter setting following the SAC paper \cite{haarnoja2018soft}.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/ip-sac-comp-trunc.png}
\caption{Converged after 4k training steps.}
\label{fig:sub1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/idp-sac-comp-trunc.png}
\caption{Converged after 9k training steps.}
\label{fig:sub2}
\end{subfigure}
\caption{Episode rewards achieved during the training process averaged over 5 random seeds. Our proposed algorithm (blue line) achieves better score at earlier training steps than SAC (orange line). Note the log scale on the x-axis.}
\label{IDP}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/sim_comp.png}
\label{fig:sub3}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figs/sim_comp_idp.png}
\label{fig:sub4}
\end{subfigure}
\caption{Performance of the proposed algorithm with different number of simulations, with more simulations resulting in better performance.}
\label{simulations-results}
\end{figure}
We conducted experiments on the InvertedPendulum-v2 and InvertedDoublePendulum-v2 tasks over 5 random seeds, and the results are shown in Fig.~\ref{IDP}. From this plot we can see that the proposed continuous \textit{MuZero} algorithm consistently outperforms the SAC algorithm. Our proposed algorithm converges to the optimal score after training for 4k steps on Inverted Pendulum and 9k steps on Inverted Double Pendulum, achieving better data efficiency.
In Fig.~\ref{simulations-results}, we also varied the number of simulations in the experiments to illustrate its effect on the continuous \textit{MuZero} algorithm. The algorithm trained and played with more simulations is able to converge faster, which corresponds to our intuition, since
the simulation number determines the size of the search tree, where a higher number allows the action space to be explored more (resulting in a wider tree) and estimated with greater precision (resulting in a deeper tree), at the cost of more intensive calculations.
\section{Conclusion}
This paper provides a possible way, together with the necessary theoretical results, to extend the \textit{MuZero} algorithm to continuous action space environments. We propose a loss function for the policy in the continuous action case that helps the policy network match the search results of the MCTS algorithm. Progressive widening is used to gradually extend the action space, which is an effective strategy to deal with large or continuous action spaces. Preliminary results on low-dimensional MuJoCo environments show that our approach performs much better than the soft actor-critic (SAC) algorithm. Future work will further explore the empirical performance of the continuous \textit{MuZero} algorithm on MuJoCo environments with higher dimensions, since the adaptation of the \textit{MuZero} algorithm proposed in this paper can easily be extended to higher-dimensional action spaces. Improving the selection process in the MCTS progressive widening step could also be a future direction to help speed up the algorithm's convergence.
\section*{Broader Impact}
In this paper, we introduce the continuous \textit{MuZero} algorithm, which achieves state-of-the-art performance on some low-dimensional continuous control tasks.
Although the experiments are on MuJoCo tasks, we broaden our focus to consider the longer-term impacts of developing decision-making agents with planning capabilities. Such capabilities could be applied to a range of domains, such as robotics, games, business management, finance, and transportation.
Improvements to decision-making strategy likely have complex effects on welfare, depending on how these capabilities are distributed and the character of the strategic setting. For example, depending on who can use this scientific advance, such as criminals or well-motivated citizens, this technology may be socially harmful or beneficial.
\section{Why we do Believe in the SM: Precision Tests}
In recent years new powerful tests of the Standard Model (SM) have been performed mainly at LEP but also
at SLC and at the Tevatron. The running of LEP1 was terminated in 1995 and close-to-final results of the data
analysis are now available \cite{tim},\cite{ew}. The experiments at the Z resonance have enormously improved
the accuracy of the data in the electroweak neutral current sector \cite{sta}. The top quark has been at last
found and the errors on $m_Z$ and $\sin^2\theta_{eff}$ went down by two and one orders of magnitude
respectively since the start of LEP in 1989. The LEP2 programme is in progress. The validity
of the SM has been confirmed to a level that we can say was unexpected at the beginning. In the present data
there is no significant evidence for departures from the SM, no convincing hint of new physics (also
including the results so far available from LEP2) \cite{tre}. The impressive success of the SM poses strong
limitations on the possible forms of new physics. Favoured are models of the Higgs sector and of new physics that
preserve the SM structure and only very delicately improve it, as is the case for fundamental Higgs(es) and
Supersymmetry. Disfavoured are models with a nearby strong non perturbative regime that almost inevitably
would affect the radiative corrections, as for composite Higgs(es) or technicolor and its variants.
The main results of the precision tests of the standard electroweak theory can be summarised as follows. It has
been checked that the couplings of quark and leptons to the weak gauge bosons $W^{\pm}$ and $Z$ are indeed
precisely as prescribed by the gauge symmetry. The accuracy of a few 0.1\% for these tests implies that, not
only the tree level, but also the structure of quantum corrections has been verified. To a lesser accuracy the
triple gauge vertices
$\gamma W^+ W^-$ and
$Z W^+ W^-$ have also been found in agreement with the specific prediction of the $SU(2)\bigotimes U(1)$ gauge
theory, at the tree level. This means that we have verified that the gauge symmetry is indeed unbroken in the
vertices of the theory: the currents are indeed conserved. Yet we have immediate evidence that the symmetry is
otherwise badly broken in the masses. In fact the $SU(2)\bigotimes U(1)$ gauge symmetry forbids masses for all
the particles that have been observed so far: quarks, leptons and gauge bosons. Of all these particles
only the photon is massless (and the gluons protected by the $SU(3)$ colour gauge symmetry), all others are massive
(probably also the neutrinos). Thus the currents are conserved but the particle states are not symmetric. This is
the definition of spontaneous symmetry breaking. The simplest implementation of spontaneous symmetry breaking in a
gauge theory is via the Higgs mechanism. In the Minimal Standard Model (MSM) one single scalar Higgs isospin
doublet is introduced and its vacuum expectation value v breaks the symmetry. All masses are proportional to v,
although for quarks and leptons the Yukawa couplings that multiply v in the expression for the masses
are spread over a wide range. The Higgs sector is still largely untested. The Higgs particle has not been
found: being coupled in proportion to masses one has first to produce heavy particles and then try to detect the
Higgs (itself heavy) in their couplings. The present limit is $m_H\mathrel{\rlap {\raise.5ex\hbox{$>$}} {\lower.5ex\hbox{$\sim$}}} m_Z$ from LEP. What has been tested is
the relation $m^2_W=m^2_Z \cos^2{\theta_W}$, modified by computable radiative corrections. This relation means that
the effective Higgs (be it fundamental or composite) is indeed a weak isospin doublet. Quantum corrections to the
electroweak precision tests depend on the masses and the couplings in the theory. For example they depend on the top
mass
$m_t$, the Higgs mass
$m_H$, the strong coupling $\alpha_s(m_Z)$, the QED coupling $\alpha(m_Z)$ (these are running couplings at the Z mass)
and other parameters which are better known. In particular quantum corrections depend quadratically on $m_t$ and only
logarithmically on $m_H$. From the observed radiative corrections one obtains a value of $m_t$ in fair agreement
with the observed value from the TeVatron. For the Higgs mass one finds $\log_{10}{m_H(GeV)}=1.92^{+0.32}_{-0.41}$
(or $m_H=84^{+91}_{-51}$). This result on the Higgs mass is particularly remarkable. Not only the value of
$\log_{10}{m_H(GeV)}$ is right on top of the small window between $\sim 2$ and $\sim 3$ which is allowed by the
direct limit, on the one side, and the theoretical upper limit on the Higgs mass in the MSM,
$m_H\mathrel{\rlap{\raise.5ex\hbox{$<$}} {\lower.5ex\hbox{$\sim$}}} 800~GeV$, on the other side. If one had found a central value like $\mathrel{\rlap {\raise.5ex\hbox{$>$}} {\lower.5ex\hbox{$\sim$}}} 4$ the model would have
been discarded. Thus the whole picture of a perturbative theory with a fundamental Higgs is well supported by the
data. But also there is clear indication for a particularly light Higgs. This is quite encouraging for the ongoing
search for the Higgs particle. More in general, if the Higgs couplings are removed from the lagrangian the
resulting theory is non renormalisable. A cutoff $\Lambda$ must be introduced. In the quantum corrections
$\log{m_H}$ is then replaced by $\log{\Lambda}$. The precise determination of the associated finite terms would be
lost (that is, the value of the mass in the denominator in the argument of the logarithm). But the generic
conclusion would remain, that, whatever the mechanism of symmetry breaking, the experimental solution of the
corresponding problem, is not far away in energy.
\section{Why we do not Believe in the SM}
\subsection{Conceptual Problems}
Given the striking success of the SM why are we not satisfied with that theory? Why not just find the Higgs
particle, for completeness, and declare that particle physics is closed? The main reason is that there are
strong conceptual indications for physics beyond the SM.
It is considered highly implausible that the origin of the electro-weak symmetry breaking can be explained by
the standard Higgs mechanism, without accompanying new phenomena. New physics should be manifest at energies in
the TeV domain. This conclusion follows from an extrapolation of the SM at very high energies. The computed
behaviour of the $SU(3)\otimes SU(2)\otimes U(1)$ couplings with energy clearly points towards the
unification of the electro-weak and strong forces (Grand Unified Theories:
GUTs) at scales of energy
$M_{GUT}\sim 10^{14}-10^{16}~ GeV$ which are close to the scale of quantum gravity, $M_{Pl}\sim 10^{19}~ GeV$
\cite{qqi}. One can also imagine a unified theory of all interactions also including gravity (at
present superstrings
\cite{ler} provide the best attempt at such a theory). Thus GUTs and the realm of quantum gravity set a
very distant energy horizon that modern particle theory cannot anymore ignore. Can the SM without new physics be
valid up to such large energies? This appears unlikely because the structure of the SM could not naturally
explain the relative smallness of the weak scale of mass, set by the Higgs mechanism at $\mu\sim
1/\sqrt{G_F}\sim 250~ GeV$ with $G_F$ being the Fermi coupling constant. This so-called hierarchy problem
\cite{ssi} is related to the presence of fundamental scalar fields in the theory with quadratic mass divergences
and no protective extra symmetry at $\mu=0$. For fermions, first, the divergences are logarithmic and, second, at
$\mu=0$ an additional symmetry, i.e. chiral symmetry, is restored. Here, when talking of divergences we are not
worried of actual infinities. The theory is renormalisable and finite once the dependence on the cut off is
absorbed in a redefinition of masses and couplings. Rather the hierarchy problem is one of naturalness. If we
consider the cut off as a manifestation of new physics that will modify the theory at large energy scales, then it
is relevant to look at the dependence of physical quantities on the cut off and to demand that no unexplained
enormously accurate cancellations arise.
According to the above argument the observed value of $\mu\sim 250~ GeV$ is indicative of the existence of new
physics nearby. There are two main possibilities. Either there exist fundamental scalar Higgses but the theory
is stabilised by supersymmetry, the boson-fermion symmetry that would downgrade the degree of divergence from
quadratic to logarithmic. For approximate supersymmetry the cut off is replaced by the splitting between the
normal particles and their supersymmetric partners. Then naturalness demands that this splitting (times the
size of the weak gauge coupling) is of the order of the weak scale of mass, i.e. the separation within
supermultiplets should be of the order of no more than a few TeV. In this case the masses of most supersymmetric
partners of the known particles, a very large menagerie of states, would fall, at least in part, in the discovery
reach of the LHC. There are consistent, fully formulated field theories constructed on the basis of this idea, the
simplest one being the Minimal Supersymmetric Standard Model (MSSM) \cite{43}. As already mentioned, all normal observed states are those whose masses are
forbidden in the limit of exact
$SU(2)\otimes U(1)$. Instead for all SUSY partners the masses are allowed in that limit. Thus when
supersymmetry is broken in the TeV range but $SU(2)\otimes U(1)$ is intact only s-partners take mass while all
normal particles remain massless. Only at the lower weak scale the masses of ordinary particles are generated.
Thus a simple criterium exists to understand the difference between particles and s-particles.
The other main avenue is compositeness of some sort. The Higgs boson is not elementary but either a bound
state of fermions or a condensate, due to a new strong force, much stronger than the usual strong interactions,
responsible for the attraction. A plethora of new "hadrons", bound by the new strong force would exist in the
LHC range. A serious problem for this idea is that nobody so far has been able to build up a realistic model
along these lines, but that could eventually be explained by a lack of ingenuity on the theorists side. The
most appealing examples are technicolor theories \cite{30},\cite{chi}. These models were inspired by the
breaking of chiral symmetry in massless QCD induced by quark condensates. In the case of the electroweak
breaking new heavy techniquarks must be introduced and the scale analogous to $\Lambda_{QCD}$ must be about
three orders of magnitude larger. The presence of such a large force relatively nearby has a strong tendency to
clash with the results of the electroweak precision tests \cite{32}.
The hierarchy problem is certainly not the only conceptual problem of the SM. There are many more: the
proliferation of parameters, the mysterious pattern of fermion masses and so on. But while most of these
problems can be postponed to the final theory that will take over at very large energies, of order $M_{GUT}$ or
$M_{Pl}$, the hierarchy problem arises from the unstability of the low energy theory and requires a solution at
relatively low energies.
A supersymmetric extension of the SM provides a way out which is well defined,
computable and that preserves all virtues of the SM. The necessary SUSY breaking can be introduced through soft
terms that do not spoil the good convergence properties of the theory. Precisely those terms arise from
supergravity when it is spontaneously broken in a hidden sector \cite{yyi}. But alternative mechanisms of SUSY
breaking are also being considered
\cite{gauge}. In the
most familiar approach SUSY is broken in a hidden sector and the scale of SUSY breaking is very
large of order
$\Lambda\sim\sqrt{G^{-1/2}_F M_P}$ where
$M_P$ is the Planck mass. But since the hidden sector only communicates with the visible sector
through gravitational interactions the splitting of the SUSY multiplets is much smaller, in the TeV
energy domain, and the Goldstino is practically decoupled. In an alternative scenario the (not so
much) hidden sector is connected to the visible one by ordinary gauge interactions. As these are much
stronger than the gravitational interactions, $\Lambda$ can be much smaller, as low as 10-100
TeV. It follows that the Goldstino is very light in these models (with mass of order or below 1 eV
typically) and is the lightest, stable SUSY particle, but its couplings are observably large. The radiative
decay of the lightest neutralino into the Goldstino leads to detectable photons. The signature of photons comes
out naturally in this SUSY breaking pattern: with respect to the MSSM, in the gauge mediated model there are typically
more photons and less missing energy. Gravitational and gauge mediation are extreme alternatives: a spectrum
of intermediate cases is conceivable. The main appeal of gauge mediated models is a better protection against
flavour changing neutral currents. In the gravitational version even if we accept that gravity leads to
degenerate scalar masses at a scale near $M_{Pl}$ the running of the masses down to the weak scale can
generate mixing induced by the large masses of the third generation fermions \cite{ane}.
\subsection{Hints from Experiment}
\subsubsection{Unification of Couplings}
At present the most direct
phenomenological evidence in favour of supersymmetry is obtained from the unification of couplings in
GUTs.
Precise LEP data on $\alpha_s(m_Z)$ and $\sin^2{\theta_W}$ confirm what was already known with less accuracy:
standard one-scale GUTs fail in predicting $\sin^2{\theta_W}$ given
$\alpha_s(m_Z)$ (and $\alpha(m_Z)$) while SUSY GUTs \cite{zzi} are in agreement with the present, very precise,
experimental results. According to the recent analysis of ref\cite{aaii}, if one starts from the known values of
$\sin^2{\theta_W}$ and $\alpha(m_Z)$, one finds for $\alpha_s(m_Z)$ the results:
\bea
\alpha_s(m_Z) &=& 0.073\pm 0.002 ~(\rm{Standard~ GUTs})\nonumber \\
\alpha_s(m_Z) &=& 0.129\pm0.010~(\rm{SUSY~ GUTs})
\label{24}
\end{eqnarray}
to be compared with the world average experimental value $\alpha_s(m_Z)$ =0.119(4).
\subsubsection{Dark Matter}
There is solid astrophysical and cosmological evidence \cite{kol}, \cite{spi} that most of the matter in the universe
does not emit electromagnetic radiation, hence is "dark". Some of the dark matter must be baryonic but most of it must
be non baryonic. Non baryonic dark matter can be cold or hot. Cold means non relativistic at freeze out, while hot is
relativistic. There is general consensus that most of the non baryonic dark matter must be cold dark matter. A couple
of years ago the most likely composition was quoted to be around 80\% cold and
20\% hot. At present it appears to me
that the need of a sizeable hot dark matter component is more uncertain. In fact, recent experiments have indicated the
presence of a previously disfavoured cosmological constant component in
$\Omega=\Omega_m+\Omega_{\Lambda}$ \cite{kol}. Here
$\Omega$ is the total matter-energy density in units of the critical density, $\Omega_m$ is the matter component
(dominated by cold dark matter) and $\Omega_{\Lambda}$ is the cosmological component. Inflationary theories almost
inevitably predict
$\Omega=1$ which is consistent with present data. At present, still within large uncertainties, the approximate
composition is indicated to be
$\Omega_m\sim 0.4$ and
$\Omega_{\Lambda}\sim0.6$ (baryonic dark matter gives $\Omega_b\sim0.05$).
The implications for particle physics is that certainly there must exist a source of cold dark matter. By far the
most appealing candidate is the neutralino, the lowest supersymmetric particle, in general a superposition of
photino, Z-ino and higgsinos. This is stable in supersymmetric models with R parity conservation, which are the
most standard variety for this class of models (including the MSSM). A
neutralino with mass of order 100 GeV would fit perfectly as a cold dark matter candidate. Another common
candidate for cold dark matter is the axion, the elusive particle associated to a possible solution of the strong
CP problem along the line of a spontaneously broken Peccei-Quinn symmetry. To my knowledge and taste this option is
less plausible than the neutralino. One favours supersymmetry for very diverse conceptual and
phenomenological reasons, as described in the previous sections, so that neutralinos are sort of standard by now.
For hot dark matter, the self imposing candidates are neutrinos. If we demand a density fraction
$\Omega_{\nu}\sim0.1$ from neutrinos, then it turns out that the sum of stable neutrino masses should be around 5
eV.
\subsubsection{Baryogenesis}
Baryogenesis is interesting because it could occur at the weak
scale \cite{rub} but not in the SM. For baryogenesis one needs the three famous Sakharov conditions \cite{sak}: B
violation, CP violation and no thermal equilibrium. In principle these conditions could be verified in the SM. B is
violated by instantons when kT is of the order of the weak scale (but B-L is conserved). CP is violated by the CKM
phase and out of equilibrium conditions could be verified during the electroweak phase transition. So the
conditions for baryogenesis appear superficially to be present for it to occur at the weak scale in the SM.
However, a more quantitative analysis \cite{rev}, \cite{cw1} shows that baryogenesis is not possible
in the SM because there is not enough CP violation and the phase transition is not sufficiently strong first order,
unless
$m_H<80~GeV$, which is by now excluded by LEP. Certainly baryogenesis could also occur below the
GUTs scale, after
inflation. But only that part with
$|B-L|>0$ would survive and not be erased at the weak scale by instanton effects. Thus baryogenesis at $kT\sim
10^{12}-10^{15}~GeV$ needs B-L violation at some stage like for $m_\nu$. The two effects could be related if
baryogenesis arises from leptogenesis \cite{lg} then converted into baryogenesis by instantons. While baryogenesis
at a large energy scale is thus not excluded it is interesting that recent studies have shown that baryogenesis at
the weak scale could be possible in the MSSM \cite{cw1}. In fact, in this model there are additional sources of CP
violations and the bound on $m_H$ is modified by a sufficient amount by the presence of scalars with large
couplings to the Higgs sector, typically the s-top. What is required is that
$m_h\sim 80-110~GeV$ (in the
LEP2 range!), a s-top not heavier than the top quark and, preferentially, a small $\tan{\beta}$.
\subsubsection{Neutrino Masses}
Recent data from Superkamiokande \cite{SK}(and also MACRO \cite{MA}) have provided a more
solid experimental basis for neutrino oscillations as an explanation of the atmospheric neutrino
anomaly. In addition the solar neutrino deficit is also probably an indication of a different
sort of neutrino oscillations. Results from the laboratory experiment by the LSND
collaboration \cite{LNSD} can also be considered as a possible indication of yet another type
of neutrino oscillation. But the preliminary data from Karmen \cite{KA} have failed to
reproduce this evidence. The case of LSND oscillations is far from closed but one can
tentatively assume, pending the results of continuing experiments, that the signal will not
persist. Then solar and atmospheric neutrino oscillations can possibly be explained in terms
of the three known flavours of neutrinos without invoking extra sterile species. Neutrino
oscillations for atmospheric neutrinos require
$\nu_{\mu}\rightarrow\nu_{\tau}$ with $\Delta m^2_{atm}\sim 2~10^{-3}~eV^2$ and a nearly
maximal mixing angle
$\sin^2{2\theta_{atm}}\geq 0.8$. In most of the Superkamiokande allowed region the bound by Chooz
\cite{Chooz} essentially excludes $\nu_e\rightarrow\nu_{\mu}$ oscillations for atmospheric neutrino
oscillations. Furthermore the last results from Superkamiokande allow a solution of the
solar neutrino deficit in terms of
$\nu_e$ disappearance vacuum oscillations (as opposed to MSW \cite{MSW} oscillations within the sun)
with $\Delta m^2_{sol}\sim ~10^{-10}~eV^2$ and again nearly maximal mixing angles. Among the
large and small angle MSW solutions the small angle one is perhaps more likely at the moment
(with \cite{Bahcall} $\Delta m^2_{sol}\sim 0.5~10^{-5}~eV^2$ and $\sin^2{2\theta_{sol}}\sim
5.5~10^{-3}$) than the large angle MSW solution. Of course experimental uncertainties are
still large and the numbers given here are merely indicative. But by now it is very unlikely that all this
evidence for neutrino oscillations will disappear or be explained away by astrophysics or other solutions. The
consequence is that we have a substantial evidence that neutrinos are massive.
In a strict minimal standard model point of view neutrino masses could vanish if no right handed neutrinos
existed (no Dirac mass) and lepton number was conserved (no Majorana mass). In
GUTs both these
assumptions are violated. The right handed neutrino is required in all unifying groups larger than SU(5). In SO(10)
the 16 fermion fields in each family, including the right handed neutrino, exactly fit into the 16 dimensional
representation of this group. This is really telling us that there is something in SO(10)! The SU(5)
alternative in terms of $\bar 5+10$, without a right handed neutrino, is certainly less elegant. The breaking of
$|B-L|$, B and L is also a generic feature of GUTs. In fact, the see-saw mechanism \cite{ssm} explains
the smallness of neutrino masses in terms of the large mass scale where $|B-L|$ and L are violated. Thus, neutrino
masses, as would be proton decay, are important as a probe into the physics at the
GUTs scale.
Oscillations only determine squared mass differences and not masses. The case of three nearly degenerate neutrinos
is the only one that could in principle accommodate neutrinos as hot dark matter together with solar and atmospheric
neutrino oscillations. According to our previous discussion, the common mass should be around 1-3 eV. The solar
frequency could be given by a small 1-2 splitting, while the atmospheric frequency could be given by a still small
but much larger 1,2-3 splitting. A strong constraint arises in the degenerate case from neutrinoless double beta
decay which requires that the ee entry of
$m_{\nu}$ must obey
$|(m_{\nu})_{11}|\leq 0.46~{\rm eV}$. As observed in ref. \cite{GG}, this bound can only be
satisfied if
double maximal mixing is realized, i.e. if also solar neutrino oscillations occur with nearly maximal mixing.
We have mentioned that it is not at all clear at the moment that a hot dark matter component is really
needed \cite{kol}. However the only reason to consider the fully degenerate solution is
that it is compatible
with hot dark matter.
Note that for degenerate masses with $m\sim 1-3~{\rm eV}$ we need a relative splitting $\Delta m/m\sim
\Delta m^2_{atm}/2m^2\sim 10^{-3}-10^{-4}$ and an even smaller one for solar neutrinos. We were unable
to imagine a natural mechanism compatible with unification and the see-saw mechanism to arrange such a
precise near symmetry.
If neutrino masses are smaller than for cosmological relevance, we can have the hierarchies $|m_3| >> |m_{2,1}|$
or $|m_1|\sim |m_2| >> |m_3|$. Note that we
are assuming only two frequencies, given by $\Delta_{sun}\propto m^2_2-m^2_1$ and
$\Delta_{atm}\propto m^2_3-m^2_{1,2}$. We prefer the first case, because for quarks and leptons one
mass eigenvalue, the third generation one, is largely dominant. Thus the dominance of $m_3$ for neutrinos
corresponds to what we observe for the other fermions. In this case, $m_3$ is determined by the atmospheric
neutrino oscillation frequency to be around $m_3\sim0.05~eV$. By the see-saw mechanism $m_3$ is related to some
large mass M, by $m_3\sim m^2/M$. If we identify m with either the Higgs vacuum expectation value or the top mass
(which are of the same order), as suggested for third generation neutrinos by
GUTs in simple SO(10)
models, then M turns out to be around $M\sim 10^{15}~GeV$, which is consistent with the connection with
GUTs. If
solar neutrino oscillations are determined by vacuum oscillations, then $m_2\sim 10^{-5}~eV$ and we have that the
ratio $m_2/m_3$ is well consistent with $(m_c/m_t)^2$.
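As a rough numerical illustration of the see-saw estimate above (taking, purely for definiteness, $m\simeq m_t\simeq 175~GeV$ and $m_3\simeq 0.05~eV$):
$$
M\sim \frac{m^2}{m_3}\simeq \frac{(175~GeV)^2}{5\times 10^{-11}~GeV}\simeq 6\times 10^{14}~GeV,
$$
indeed of the order of the GUTs scale.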
A lot of attention \cite{gaff} is being devoted to the
problem of a natural explanation of the observed nearly maximal mixing angle for atmospheric
neutrino oscillations and possibly also for solar neutrino oscillations, if explained by vacuum
oscillations. Large mixing angles are somewhat unexpected because
the observed quark mixings are small and the quark, charged lepton and neutrino mass matrices are to
some extent related in GUTs. There must be some special interplay between the neutrino Dirac
and Majorana matrices in the see-saw mechanism in order to generate maximal
mixing. It is hoped that looking for a natural explanation of large neutrino mixings can lead us to deciphering
some interesting message on the physics at the GUT scale.
\subsubsection{Ultra High Energy Cosmic Rays}
The observation by the Fly's Eye and AGASA collaborations of proton-like cosmic rays with energies of order $\mathrel{\rlap {\raise.5ex\hbox{$>$}} {\lower.5ex\hbox{$\sim$}}}
10^{11}~GeV$ well above the GZK cutoff of a few $10^{10}~GeV$ poses serious problems in terms of a possible
astrophysical explanation. The GZK cutoff arises from absorption of protons by the cosmic microwave background if the
proton energy is sufficient to induce N* resonant photoproduction of pions. For these energetic protons the mean free
path in space is limited to the vicinity of our galaxy, say to a distance of order of 50Mpc. On the other hand the
angular distribution of high energy proton events indicates their extragalactic origin. So the problem is either to find
sufficiently energetic nearby astrophysical sources or to explain the observed events in terms of some new effect in
particle physics. I understand that an astrophysical solution is still not excluded
($\gamma$-ray bursts?). As far as a possible particle
physics explanation is concerned one class of solutions is based on assuming the UHECR are not protons, but some
exotic hadron-like particle. The least implausible example of such a particle is a hadron with light gluino
constituents \cite{far}. This hadron of larger mass than the proton would probably have a smaller cross section for pion
photoproduction and evade the GZK bound. Other solutions like a neutrino with enormously enhanced cross sections at
small x are untenable \cite{hal}. One different possibility is if primary cosmic neutrinos annihilate with cosmic
background neutrinos to produce a Z which then decays into protons \cite{wei}. A relic neutrino of mass of order few eV
would be needed. But the problems are that one requires a very large flux of neutrinos of very high energy
\cite{wax}(which again poses a difficult astrophysical problem) and the fact that in Z decay there are many more pions
(hence photons from
$\pi^0$ decay) than protons. Another class of proposed explanations, which I find more appealing, is to invoke the
decay of some superheavy particle of mass $M\mathrel{\rlap {\raise.5ex\hbox{$>$}} {\lower.5ex\hbox{$\sim$}}} 10^{12}~GeV$. It could also be a topological defect \cite{ber}. But
a particle candidate would be a cosmion \cite{sar}, an almost completely stable particle (lifetime longer than the
universe life) with only gravitation interactions, possibly from a hidden sector, a remnant of the quantum gravity
world, with relatively small mass in comparison to $M_{GUT}$ in order for its density not to be diluted by inflation.
This particle would contribute to the dark matter, especially clustered near galaxies like ours. Its rare decays would
generate the observed protons. Again why so many protons and not even more pions? Advocates of this solution argue
that we only have experience with the final state of objects of mass of order 100 GeV, not
$10^{12}~GeV$ or more.
\section{Conclusion}
Today in particle physics we follow a double approach: from above and from below. From above there are, on the theory
side, quantum gravity (that is superstrings), GUTs and cosmological scenarios. On the experimental side there
are underground experiments (e.g. searches for neutrino oscillations and proton decay), cosmic ray
observations, satellite experiments (like COBE, IRAS etc) and so on. From below, the main objectives of theory and
experiment are the search of the Higgs and of signals of particles beyond the Standard Model (typically supersymmetric
particles). Another important direction of research is aimed at the exploration of the flavour problem: study of CP
violation and rare decays. The general expectation is that new physics is close by and that it would be found very
soon, were it not for the complexity of the necessary experimental technology, which makes the time scales involved painfully
long.
I am very grateful to Professor Oscar Saavedra for his kind invitation and hospitality.
\section{Introduction} \label{sct intro}
The aim of this article is to study fine regularity properties for solutions of the \textit{porous medium equation}-(pme)
\begin{equation}\label{m-eq}\tag{$m$-pme}
u_t= \Delta u^m, \quad m > 1.
\end{equation}
The mathematical analysis involving \textit{pme} has attracted attention for the last six decades, motivated by its relation with natural phenomena models which describe processes involving fluid flow, heat transfer and, in general terms, nonlinear diffusion processes, cf. \cite{vasquez}.
Unlike the uniform parabolicity exhibited for the classical linear \textit{heat equation} $u_t=\Delta u$, case $m=1$, for parameters $m > 1$ the finite speed of propagation property holds for \eqref{m-eq} and so, solutions may present parabolic degeneracy along the set
$$
\mathcal{F}(u)=\partial\{(t,x) \; : u(t,x) \neq 0\},
$$
which imposes lack of smoothness for solutions. In general, the best regularity result known guarantees local $C^{0,\alpha}$ regularity for solutions in time and space for some universal $0<\alpha \ll 1$, see \cite{D,DF1}. However, no more information is known for the exponent $\alpha$. In this connection, specifically for dimension $d=1$, it was established that the pressure term $\varrho \approx u^{m-1}$ is locally Lipschitz continuous in space, see \cite{A1,AC1}. From this fact, solutions are $C^{\,0,\beta}$ for the sharp exponent
$$
\beta=\min\{1,1/(m-1)\}.
$$
Nevertheless, it does not occur in higher dimensions due to a counter-example provided in \cite{AG}. On the other hand, under natural extra conditions, solutions may present surprising gains of regularity. For instance, it was shown in \cite{KKV} that after a time interval, locally flat solutions of \eqref{m-eq} and their respective pressure terms are $C^\infty$. This exemplifies how the question of obtaining sharp regularity estimates for solutions of \eqref{m-eq} has been an interesting and delicate subject. We also mention \cite{AMU} for sharp regularity estimates, obtained in terms of the integrability of the source term, and \cite{CVW} for Lipschitz regularity estimates for large times.
In the 1950s, a fundamental solution for \eqref{m-eq} was found by Barenblatt \cite{barenblatt}, Zel'dovich and Kompaneets \cite{ZK} and later by Pattle \cite{pattle}. They obtained the following explicit formula
$$
\mathcal{B}_m(t,|x|,C) =t^{-\alpha} \left(C - \frac{b(m-1)}{2m}\frac{|x|^2}{t^{2b}}\right)_+^{\frac{1}{m-1}}
$$
for a free parameter $C>0$, $\alpha=\frac{d}{d(m-1)+2}$ and $b=\frac{\alpha}{d}$, where $d$ denotes the dimension of the euclidean space. This fundamental solution, also called \textit{Barenblatt solution}, presents a Dirac mass as the initial data due to $\mathcal{B}_m(t,|x|,C) \to M \delta_0(x)$ as $t \to 0$, where $M(C,m,d)=\int\mathcal{B}_m \,dx$ is the total mass. Based on the structure of $\mathcal{B}_m$, we observe that at the inner edge
$$
\mathcal{F}(\mathcal{B}_m):=\partial\left\{(t,x) \, : \,\mathcal{B}_m(t,|x|,C)>0\right\}=\left\{(t,x) \, : \, |x|=\sqrt{\frac{2Cm}{b(m-1)}}\, \cdot t^{b} \right\},
$$
the gradient blows up for $m > 2$, is finite when $m = 2$, and vanishes (but with a nonzero derivative) in the case $1<m<2$. More precisely, we note that
$$
\lim\limits_{m\to 1}\mathcal{B}_m(t,|x|,C) = M\mathcal{E}(t,x)
$$
where $\mathcal{E}(t,x) \in C^\infty$ is the fundamental solution of the heat equation. Therefore, even for such a rough initial datum, we observe that for any $t>0$ the family $\{\mathcal{B}_m\}_{m>1}$ plays the following role: the smoothness of $\mathcal{B}_m$ at $\mathcal{F}(\mathcal{B}_m)$ increases asymptotically to, let us say, $C^\infty$ as $m\searrow 1$.
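For the reader's convenience, the Barenblatt profile and the radius of its free boundary can be evaluated directly from the formulas above; the short numerical sketch below is purely illustrative and follows the normalization of the constants displayed above.
\begin{verbatim}
import numpy as np

def barenblatt(t, r, C, m, d):
    # evaluate B_m(t, |x|, C) at radius r >= 0 and time t > 0
    alpha = d / (d * (m - 1.0) + 2.0)
    b = alpha / d
    core = C - (b * (m - 1.0) / (2.0 * m)) * r**2 / t**(2.0 * b)
    return t**(-alpha) * np.maximum(core, 0.0) ** (1.0 / (m - 1.0))

def free_boundary_radius(t, C, m, d):
    # radius of F(B_m) at time t
    alpha = d / (d * (m - 1.0) + 2.0)
    b = alpha / d
    return np.sqrt(2.0 * C * m / (b * (m - 1.0))) * t**b
\end{verbatim}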
\begin{figure}[h!]
\centering
\psscalebox{0.75 0.75}
{
\begin{pspicture}(0,-4.577778)(15.244445,4.577778)
\definecolor{colour0}{rgb}{0.77254903,0.75686276,0.75686276}
\definecolor{colour1}{rgb}{0.56078434,0.5529412,0.5529412}
\definecolor{colour2}{rgb}{0.56078434,0.53333336,0.53333336}
\psline[linecolor=black, linewidth=0.04, arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.0]{->}(0.12222222,-2.5555556)(15.1,-2.5666666)
\rput[bl](0.4888889,3.9355555){$t>0$}
\rput[bl](14.666667,-2.9444444){$x$}
\rput[bl](7.411111,-0.5888889){$\mathcal{E}$}
\psbezier[linecolor=colour0, linewidth=0.04](5.111111,-2.5666666)(5.5111113,0.23333333)(6.711111,3.8333333)(7.5111113,3.833333333333332)(8.311111,3.8333333)(9.511111,0.23333333)(9.911111,-2.5666666)
\rput[bl](7.311111,3.1222222){$\mathcal{B}_2$}
\psbezier[linecolor=colour1, linewidth=0.04](3.488889,-2.5555556)(5.9666667,-0.54888886)(6.611696,2.2777777)(7.4888887,2.311111111111107)(8.366082,2.3444445)(9.733334,-1.48)(11.488889,-2.5555556)
\rput[bl](7.233333,1.5){$\mathcal{B}_m$}
\psbezier[linecolor=black, linewidth=0.04](0.23333333,-2.3333333)(5.311111,-2.211111)(6.532406,-0.06666667)(7.577778,-0.06666666666666572)(8.62315,-0.06666667)(10.244445,-2.4333334)(14.866667,-2.3333333)
\psdots[linecolor=black, dotsize=0.1](9.911111,-2.5666666)
\psdots[linecolor=black, dotsize=0.1](11.488889,-2.5777779)
\psdots[linecolor=black, dotsize=0.1](5.1,-2.5666666)
\psdots[linecolor=black, dotsize=0.1](3.5,-2.5555556)
\psframe[linecolor=black, linewidth=0.02, dimen=outer](15.244445,4.577778)(0.0,-4.577778)
\rput[bl](0.5,-4.308889){$m\searrow 1$}
\rput[bl](9.444445,-3.2222223){$\mathcal{F}(\mathcal{B}_2)$}
\rput[bl](10.966666,-3.2222223){$\mathcal{F}(\mathcal{B}_m)$}
\rput[bl](10.433333,-4.3){$\{\mathcal{B}_m>0\} \to \{\mathcal{E}>0\}= \mathbb{R}^d$}
\psline[linecolor=colour2, linewidth=0.02, tbarsize=0.07055555cm 5.0]{|*-|*}(3.5,-3.0333333)(7.577778,-3.0333333)
\rput[bl](4.6,-3.8755555){$\scriptsize O(\frac{1}{\sqrt{m-1}})$}
\end{pspicture}
}
\caption{This picture represents the improvement of regularity for the Barenblatt solution as $m \searrow 1$: around the free boundary $\mathcal{F}(\mathcal{B}_m)$, it describes a surface leading towards a smooth surface.}
\end{figure}
Motivated by such analysis, we turn our attention to investigating high regularity, in space and time, along interior points of the free boundary $\mathcal{F}(u):=\partial\{u>0\}$, for nonnegative bounded weak solutions of \eqref{m-eq} as $m$ is universally close to $1$. More precisely, for any fixed parameter $\mu \in (0,+\infty)$ we provide a universal closeness regime such that solutions of \eqref{m-eq} are pointwise of class $C^{\mu}$ at $\mathcal{F}(u)$.
Although such features seem appropriate for this scenario, further difficulties arise. For example, we observe that \eqref{m-eq} reveals a variation of the diffusion velocity, which depends directly on $m$. Thus, to inherit high regularity from the limit case $m=1$, a suitable strategy is needed, since the intrinsic cylinders change as $m$ varies, see \eqref{cyl}. Motivated by this, instead of the solution $u$, we could try to provide growth estimates through the pressure
\begin{equation}\nonumber
\varrho= c\, u^{m-1}, \quad \quad c > 0,
\end{equation}
which turns our analysis to the pressure equation
$$
\varrho_t= {\tfrac{m}{c}} \varrho \Delta \varrho + {\tfrac{m}{c(m-1)}} |\nabla \varrho|^2.
$$
Unlike in equation \eqref{m-eq}, the diffusion term of the equation above does not depend on $m$, and so an analysis of such a pde would sound reasonable. However, as $m\searrow 1$ the pressure constant has to behave like $c \approx 1/(m-1)$, and so no further information is available from the second order term of the limit equation.
As an important consequence of such analysis, we also study nonnegative bounded weak solutions for the inhomogeneous porous medium equation with bounded source term,
\begin{equation}\label{mf-eq}\tag{$f$,$m$-pme}
u_t-\Delta u^m = f \in L^\infty,
\end{equation}
providing for $m-1 \ll 1$, optimal growth estimates
$$
u(t,x) \sim |x-x_0|^{\frac{2}{m}} + |t-t_0|
$$
at touching ground points $(t_0,x_0) \in \mathcal{F}(u)$. Related to this scenario, we mention \cite{PS} for improved H\"older regularity estimates at the free boundary, under integrability conditions on the source term.
Furthermore, we also refine the methods employed here to provide sharp local $C^{0,1}$ regularity in space and $C^{0,\frac 12}$ regularity in time for equations of the type \eqref{mf-eq}. We have postponed the precise statements to Section \ref{main}.
The main strategy of this paper is based on Caffarelli's compactness approach \cite{C89} and a refined improvement of flatness strategy provided by Teixeira in \cite{T2}. For this scenario, we point out that, to produce a suitable flatness property, the compactness argument prescribes how close to $1$ the parameter $m$ has to be. However, such closeness depends on the parabolic metric, which (unlike its elliptic counterpart) varies with $m$, causing a self-dependence on this parameter. In order to avoid this issue, a subtle use of dyadic parabolic cylinders is provided, see Proposition \ref{stepone}.
The paper is organized as follows. In Section \ref{pre} we gather relevant notations and present the main Theorems, as well as known results we shall use in this article. In Section \ref{grow} we treat the homogeneous case and the proof of Theorem \ref{fbreg} is delivered. The inhomogeneous case is discussed in Section \ref{inhomo}, where the proofs of Theorem \ref{fbregnon} and Theorem \ref{locreg} are carried out.
\section{Preliminaries and Main results} \label{pre}
\subsection*{Notations and intrinsic parabolic cylinders}
The lack of homogeneity caused by the nature of degeneracy or singularity of certain parabolic equations requires a refined choice for suitable cylinders, cf. \cite{TU,U,vasquez}. In view of this, for a fixed open bounded set $U \subset \mathbb{R}^d$ and parameter $\theta>0$, we introduce the intrinsic $\theta$-parabolic cylinder
\begin{equation}\label{cyl}
G_\rho^\theta :=I_\rho^\theta \times B_\rho \subset \mathbb{R} \times U,
\end{equation}
where $I_\rho^{\,\theta}:=(-\rho^\theta,0 \,]$ is an 1-dimensional interval and $B_\rho$ is the $d$-dimensional ball with radius $\rho$ centered at the origin. More generally, we set $I^{\,\theta}_\rho(t_0):=(-\rho^\theta+t_0,t_0\,]$ and $B_\rho(x_0):=\{x_0\}+B_\rho$. We also denote $G_1:=G_1^\theta$ for any $\theta$.
It is easy to check that, for any number $0<\mu<\infty$, equations of the type \eqref{m-eq} have an invariant scaling under cylinders \eqref{cyl} with the space/time interpolation given precisely by
\begin{equation}\label{interp}
\theta=\theta(\mu,m):=\mu(1-m)+2.
\end{equation}
We highlight the following monotonicity
$$
G_\rho^{\theta} \subset G_\rho^{\theta'} \quad \mbox{for any } 0<\theta' \leq \theta \mbox{ and } 0<\rho<1.
$$
Next, we introduce the notion of solutions of \eqref{mf-eq} we shall work with.
\begin{definition}\label{def}
We say a nonnegative locally bounded function
$$
u \in C_{loc}(0,T;L_{loc}^2(U)), \quad u^{\frac{m+1}{2}} \in L_{loc}^2(0,T,W_{loc}^{1,2}(U))
$$
is a local weak solution of \eqref{m-eq} if for every compact set $K \subset U$ and every sub interval $[t_1,t_2] \subset (0,T]$, there holds
$$
\left.\int_K u\varphi \,dx\right\rvert^{t_2}_{t_1}+\int^{t_2}_{t_1}\int_K\left(-u\varphi_t+mu^{m-1}\nabla u\cdot\nabla\varphi\right) dx\,dt = \int_{t_1}^{t_2}\int_K f\varphi \,dx\,dt,
$$
for all nonnegative test functions
$$
\varphi \in W^{1,2}_{loc}(0,T;L^2(K)) \cap L^2_{loc}(0,T;W^{1,2}_0(K)).
$$
\end{definition}
Given a nonnegative function $u$ we denote the parabolic positive set of $u$ and the parabolic free boundary, respectively by
$$
\mathcal{P}(u):=\{(t,x) \in G_1 \, : \, u(t,x)>0\} \quad \mbox{and} \quad \mathcal{F}(u):=\partial \mathcal{P}(u) \cap G_1.
$$
According to the above definition, we interpret the gradient term as
$$
u^{m-1}\nabla u := \frac{2}{m+1}u^{\frac{m-1}{2}}\nabla u^{\frac{m+1}{2}} \quad \mbox{in} \quad \mathcal{P}(u).
$$
\medskip
\subsection{Main results}\label{main}
Here we present the main results we shall prove in this paper. The first one provides high regularity estimates for the homogeneous equation \eqref{m-eq} at free boundary points $(t,x) \in \mathcal{F}(u)$.
\begin{theorem}[High growth estimates at free boundary points]\label{fbreg}
Fixed $0<\mu < \infty$, there exist parameters $0<m_\mu \leq 1+1/\mu$, $0<\rho_0 < 1/4$ and $C>0$ depending only on $d, \|u\|_{\infty,G_1}$ and $\mu$ such that, for
$$
1 < m \leq m_\mu,
$$
nonnegative locally bounded weak solutions $u$ of \eqref{m-eq} in $G_{1}$ satisfy
$$
\sup\limits_{I^\theta_\rho(t_0)\times B_\rho^{}(x_0)} u(t,x) \leq C \rho^{\,\mu}
$$
for each $(t_0,x_0) \in \mathcal{F}(u) \cap G_{\frac{1}{2}}^{\theta}$ and $0<\rho\leq \rho_0$.
\end{theorem}
For the second result, we establish the optimal rate growth for solutions of the inhomogeneous case \eqref{mf-eq} at free boundary points $(t,x) \in \mathcal{F}(u)$. Such optimality is observed by considering the stationary solution $u(t,x)=u(x)=C|x|^{2/m}$, for $C>0$ depending on $d$ and $m$.
\begin{theorem}[Optimal growth estimates at free boundary points]\label{fbregnon}
There exist parameters $\tilde m>1$, $0<\overline\rho\,<1/4$ and $C>0$ depending only on $d$, $\|f\|_{\infty,G_1}$ and $\|u\|_{\infty,G_1}$ such that for each
$$
1 < m \leq \tilde m,
$$
nonnegative locally bounded weak solutions $u$ of \eqref{mf-eq} in $G_{1}$ satisfy
\begin{equation}\label{fbestnon}
\sup\limits_{I^{\theta}_\rho(t_0)\times B_\rho^{}(x_0)} u(t,x) \leq C \rho^{\,\frac{2}{m}}
\end{equation}
for each $(t_0,x_0) \in \mathcal{F}(u) \cap G_{\frac{1}{2}}^{\theta}$ and $0<\rho\leq \overline\rho$ with $\theta=\frac{2}{m}$.
\end{theorem}
Next, we establish local sharp regularity estimates in space and time, for nonnegative bounded solutions of \eqref{mf-eq}.
\begin{theorem}[Local sharp regularity]\label{locreg}
There exist parameters $\tilde m>1$, $0<\overline\rho\,<1/4$ and $C>0$ depending only on $d$, $\|f\|_{\infty,G_1}$ and $\|u\|_{\infty,G_1}$ such that for each
$$
1 < m \leq \tilde m,
$$
nonnegative locally bounded weak solutions $u$ of \eqref{mf-eq} in $G_{1}$ satisfy
\begin{equation}\label{locest}
\sup\limits_{(t,x) \in I^2_\rho \times B_\rho^{}} |u(t,x)-u(s,y)| \leq C \rho^{}
\end{equation}
for each $0<\rho \leq \overline \rho$ and $(s,y) \in I^2_{1/4} \times B_{1/4}$. In particular, nonnegative locally bounded weak solutions of \eqref{m-eq} are locally of class $C^{\,0,1}$ in space and $C^{\,0,\frac{1}{2}}$ in time.
\end{theorem}
\medskip
\subsection*{Auxiliary results}
Here we collect some important results required throughout this paper. First, we recall the strong maximum principle for the heat equation; see for instance \cite[Theorem 11]{evans}.
\begin{theorem}\label{Evans} Assume $h(t,x)$ is a weak solution of the heat equation $h_t=\Delta h$ defined in $G_1$. If there exists a point $(t_0,x_0) \in G_1$ such that
$$
h(t_0,x_0)=\max\limits_{\overline{G_1}}h(t,x)
$$
then $h$ is constant everywhere in $(-1,t_0\,] \times B_1$.
\end{theorem}
Next, we mention stable local regularity estimates for solutions of \eqref{m-eq} within their domain of definition. As obtained in \cite{DGV} (see Theorem 11.2 and Remark 1.1 therein), the constants $C$ and $\alpha$ in Theorem \ref{compactness} below are stable as $m\searrow 1$.
\begin{theorem}\label{compactness}
Let $1 \leq m \leq 2$ and $u$ be a bounded weak solution of \eqref{m-eq} in $G_1$. Then there exist constants $C>0$ and $0<\alpha<1$ depending only on $d$ and $\|u\|_{\infty, G_1}$ such that
$$
|u(t,x)-u(s,y)| \leq C(|x-y|^\alpha + |t-s|^{\frac{\alpha}{2}})
$$
for any pair of points $(t,x),(s,y) \in (-\frac{1}{2},0] \times B_{\frac{1}{2}}$.
\end{theorem}
\medskip
\subsection*{Normalization regime}
For the results to be established in this paper it is enough, with no loss of generality, to consider normalized weak solutions $v$ of \eqref{m-eq}, i.e., satisfying $\|v\|_{\infty,G_1} \leq 1$. Indeed, once the results are established for normalized solutions, for any bounded weak solution $u(t,x)$ we may rescale it as follows
\begin{equation}\label{norm}
v(t,x) := \frac{u(N^{b}t,N^ax)}{N} \quad \mbox{in} \quad G_1,
\end{equation}
for $N=\max\{1,\|u\|_{\infty,G_1}\}$ with the following requirements
$$
b=2a-(m-1) \quad \mbox{and} \quad a<0.
$$
Since $v$ still solves \eqref{m-eq} in $G_1$ satisfying $0\leq v \leq 1$, we conclude that Theorem \ref{fbreg} and Theorem \ref{locreg} can also be obtained for the non-normalized $u(t,x)$ with parameters under the additional dependence on $\|u\|_{\infty,G_1}$.
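For the reader's convenience, we record the elementary computation behind \eqref{norm}: since
$$
v_t(t,x)=N^{\,b-1}\,u_t(N^{b}t,N^{a}x) \quad \mbox{and} \quad \Delta \left(v^m\right)(t,x)=N^{\,2a-m}\,\Delta\left(u^m\right)(N^{b}t,N^{a}x),
$$
the function $v$ solves \eqref{m-eq} precisely when $b-1=2a-m$, that is, $b=2a-(m-1)$. Moreover, since $a<0$, $m>1$ and $N\geq1$, we have $N^{a}\leq 1$ and $N^{b}\leq1$, so $v$ is well defined in $G_1$ and satisfies $0\leq v\leq 1$. In the inhomogeneous case the same rescaling produces the source term $\tilde f(t,x)=N^{\,b-1}f(N^{b}t,N^{a}x)$.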
By a similar analysis, for a given universal $\varepsilon_0>0$, we can always enter the smallness regime $\|f\|_{\infty,G_1} \leq \varepsilon_0$ for a universally rescaled solution of \eqref{mf-eq} as in \eqref{norm}.
\medskip
\section{Proof of Theorem \ref{fbreg}}\label{grow}
The next result provides a universal flatness estimate that allows us to construct a refined decay of solutions in dyadic parabolic cylinders centered at a free boundary point.
\begin{lemma}\label{complem} Given $\kappa>0$ there exists $m_\kappa>1$, depending only on $\kappa$ and universal parameters, such that if $(0,0) \in \mathcal{F}(v)$, $0 \leq v \leq 1$ and $v$ satisfies \eqref{m-eq} in $G_1$ for
$$
1 \leq m \leq m_\kappa,
$$
then
$$
\sup \limits_{I_{1/2}^2 \times B_{1/2}^{}} v(t,x) \leq \kappa.
$$
\end{lemma}
\begin{proof}
For the sake of contradiction, we assume the existence of $\kappa_0>0$ and sequences $(v_\iota)_{\iota \in \mathbb{N}}$ and $(m_\iota)_{\iota \in \mathbb{N}}$ where
$$
v_\iota \in C_{loc}(-1,0;L^2_{loc}(B_1)), \quad (v_\iota)^{\frac{m_\iota+1}{2}} \in L^2(-1,0;W^{1,2}_{loc}(B_1))
$$
with $(0,0) \in \mathcal{F}(v_\iota)$ and $0 \leq v_\iota \leq 1$, such that $v_\iota$ is a nonnegative bounded weak solution of
\begin{equation}\label{j-eq}\tag{$m_\iota$-pme}
(v_\iota)_t = \Delta(v_\iota^{m_\iota}) \quad \mbox{in} \quad G_1,
\end{equation}
with $m_\iota \to 1$ as $\iota\to \infty$; however
\begin{equation}\label{comp1}
\sup \limits_{I_{1/2}^2 \times B_{1/2}^{}} v_\iota(t,x) > \kappa_0.
\end{equation}
By stable H\"older continuity, Theorem \ref{compactness}, the family $(v_\iota)_{\iota \in \mathbb{N}}$ is equicontinous and bounded, therefore
$$
v_\iota \to \tilde v \quad \mbox{uniformly in }(-1/2,0] \times B_{1/2},
$$
where the limiting function $\tilde v \geq 0$ solves
$$
\tilde v_t=\Delta \tilde v \quad \mbox{in } I_{1/2}^2 \times B_{1/2}^{},
$$
attaining its minimum value $\tilde v(0,0)=0$ at the origin (recall that $(0,0) \in \mathcal{F}(v_\iota)$ for every $\iota$). Therefore, by the strong maximum principle, Theorem \ref{Evans} applied to $-\tilde v$, we must have $\tilde v \equiv 0$ in $G_{1/2}^2$. This contradicts \eqref{comp1}, and the proof of Lemma \ref{complem} is complete.
\end{proof}
\begin{proposition}[Improvement of flatness]\label{stepone} Given $0<\mu< \infty$, there exists a parameter $1 < m_\mu \leq 1+1/\mu$, depending only on $\mu$ and universal parameters, such that if $(0,0) \in \mathcal{F}(v)$, $0 \leq v \leq 1$ and $v$ is a bounded weak solution of \eqref{m-eq} in $G_1$ for
$$
1 < m \leq m_\mu,
$$
then
$$
\sup \limits_{I_{\rho}^\theta \times B_{\rho}^{}} v(t,x) \leq \rho^{\,\mu}, \quad \mbox{for} \quad \rho:=\left( \frac12 \right)^{\frac2\theta}.
$$
\end{proposition}
\begin{proof}
First let us fix $0<\mu<\infty$. Since we assumed $1 < m \leq 1+1/\mu$, it is easy to observe that for the parameter $\theta$ given in \eqref{interp}, there holds $1 \leq \theta < 2$. This implies that
\begin{equation}\label{improv1}
\left(\dfrac{1}{2}\right)^{2} \leq \left(\dfrac{1}{2}\right)^{\frac{2}{\theta}} < \;\, \dfrac{1}{2}.
\end{equation}
Now, considering in Lemma \ref{complem}
$$
\kappa=\left(\frac{1}{2}\right)^{2\mu},
$$
we guarantee the existence of a parameter $m_\mu$ depending only on $\mu$, such that $1<m_\mu \leq 1+1/\mu$, where for each $1 \leq m \leq m_\mu$ weak solutions of \eqref{m-eq} satisfy
$$
\sup \limits_{I_{1/2}^2 \times B_{1/2}^{}} v(t,x) \leq \left(\frac{1}{2}\right)^{2\mu}.
$$
In view of this, by choosing $\rho:=\left(\dfrac{1}{2}\right)^{\frac{2}{\theta}}$ we have $I_{1/2}^2=I_{\rho}^\theta$. Then, from \eqref{improv1}
$$
\sup \limits_{I_{\rho}^\theta \times B_{\rho}^{}} v(t,x) \leq \left(\frac{1}{2}\right)^{2\mu} \leq \left(\frac{1}{2}\right)^{\frac{2\mu}{\theta}}=\rho^{\,\mu},
$$
as desired.
\end{proof}
\begin{proof}[Proof of Theorem \ref{fbreg}]
Up to a translation, we can suppose with no loss of generality that $x_0=0$ and $t_0=0$. Also, in view of \eqref{norm}, we can assume $0 \leq v \leq 1$. We first show a discrete version of Theorem \ref{fbreg}: for $\rho$ and $m_\mu$ as in Proposition \ref{stepone}, there holds
\begin{equation}\label{kstep}
\sup \limits_{I_{\rho^k}^\theta \times B_{\rho^k}^{}} v(t,x) \leq \rho^{\,k\mu}, \quad k \in \mathbb{N}.
\end{equation}
We prove this by induction. The case $k=1$ is precisely Proposition \ref{stepone}. Let us assume that \eqref{kstep} holds for some positive integer $k\geq 1$ and set
$$
v_k(t,x):=\frac{v(\rho^{\,k\theta}t,\rho^{\,k} x)}{\rho^{\,k\mu}}, \quad (t,x) \in G_1.
$$
It is easy to check that $v_k$ solves \eqref{m-eq} in $G_1$ with the same exponent $m$, $1 < m \leq m_\mu$, and with $\theta$ as in \eqref{interp}.
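Indeed, since the exponent $\theta$ in \eqref{interp} is chosen so that $\theta-\mu=2-\mu m$ (consistently, $\theta=2/m$ for $\mu=2/m$, cf. Section \ref{inhomo}), one has
$$
\partial_t v_k(t,x)=\rho^{\,k(\theta-\mu)}\,v_t\big(\rho^{k\theta}t,\rho^{k}x\big)
\quad \mbox{and} \quad
\Delta\left(v_k^{\,m}\right)(t,x)=\rho^{\,k(2-\mu m)}\,\Delta\left(v^{m}\right)\big(\rho^{k\theta}t,\rho^{k}x\big),
$$
and the two prefactors coincide, so $v_k$ inherits equation \eqref{m-eq} from $v$.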
In addition, by the induction hypothesis, \eqref{kstep} holds and so we have
$$
\|v_k\|_{\infty, G_1} \leq 1.
$$
Thus $v_k$ satisfies the hypothesis in Proposition \ref{stepone}. From that, we obtain
$$
\sup \limits_{I_{\rho}^\theta \times B_{\rho}} v_k(t,x) \leq \rho^{\,\mu},
$$
which gives us
$$
\sup \limits_{I_{\rho^{\,k+1}}^{\theta} \times B_{\rho^{\,k+1}}^{}} v(t,x) \leq \rho^{\,\mu(k+1)}.
$$
To complete the proof of Theorem \ref{fbreg}, we argue as follows. For a given $0< r \leq 1/4$, we select the positive integer $k_r$ such that
$$
\rho^{k_r+1}< r \leq \rho^{k_r}.
$$
Finally, remembering that $\rho=\left(\frac{1}{2}\right)^{\frac{2}{\theta}}$, by \eqref{kstep} and \eqref{improv1}, we conclude
$$
\sup \limits_{I_{r}^\theta \times B_{r}^{}} v(t,x) \leq \sup \limits_{I_{\rho^{k_r}}^\theta \times B_{\rho^{k_r}}^{}} v(t,x) \leq \rho^{\,-\mu} \rho^{\,(k_r+1)\mu} < 2^{\frac{2\mu}{\theta}} r^{\,\mu} \leq 4^{\mu} r^{\,\mu}.
$$
\end{proof}
\medskip
\section{The inhomogeneous case}\label{inhomo}
In this section we shall deliver the proofs of Theorem \ref{locreg} and Theorem \ref{fbregnon} which provide regularity estimates for nonnegative bounded weak solutions of \eqref{mf-eq} for parameters $m$ universally close to $1$. Hereafter in this section, we assume the intrinsic cylinder exponent $\theta=\theta(\mu,m)$ for the particular case $\mu=2/m$, see \eqref{interp}. In this case such exponent is given by
$$
\theta=2/m.
$$
Now we state the following approximation lemma.
\begin{lemma}\label{complemma2} Let $v$ be a nonnegative bounded weak solution of \eqref{mf-eq} in $G_1$ with $0 \leq v \leq 1$. Given $\kappa>0$, there exists $\varepsilon>0$, depending on $\kappa$ and $m$ such that if
\begin{equation}\label{lips1}
v(0,0) + \|f\|_{\infty,G_1}\leq \varepsilon,
\end{equation}
we can find $\varpi$, a bounded weak solution of $\eqref{m-eq}$ in $G^1_{1/2}$, satisfying $\varpi(0,0)=0$, such that
\begin{equation}\label{lips9}
\sup\limits_{I^\theta_{1/2} \times B_{1/2}}|v(t,x)-\varpi(t,x)| \leq \kappa.
\end{equation}
\end{lemma}
\begin{proof} As in the proof of Lemma \ref{complem} we assume, for the sake of contradiction, that there exist $\kappa_0>0$ and, for each positive integer $k$, functions $v_k$ and $f_k$ such that
$0 \leq v_k \leq 1$, $v_k(0,0)+\|f_k\|_{\infty,G_1} \leq 1/k$ and $v_k$ is a weak solution of $(f_k,m\mbox{-pme})$ in $G_1$. On the other hand,
\begin{equation}\label{comp11}
\sup \limits_{I_{1/2}^\theta \times B_{1/2}^{}} |v_k(t,x) - \omega(t,x)| > \kappa_0,
\end{equation}
for any bounded weak solution $\omega$ of \eqref{m-eq} in $G^1_{\frac 12}$. By classical local H\"older estimates, $\{v_k\}$ is equicontinuous and bounded in $G^1_{\frac 12} \Subset G_1$. Therefore, up to a subsequence, $v_k$ converges uniformly to some $\overline v$ which satisfies \eqref{m-eq} in $G^1_{\frac 12}$ and, by construction, $\overline v(0,0)=0$. Taking $\omega=\overline v$ in \eqref{comp11} then yields a contradiction for $k \gg 1$.
\end{proof}
\begin{remark}\label{rem}
Thanks to Theorem \ref{fbreg} (applied with $\mu=2$), we observe that there exist a universal parameter $\tilde m>1$ and a universal positive constant $C$ such that, for any nonnegative bounded weak solution $\omega$ of \eqref{m-eq} with $1 < m \leq \tilde m$, there holds
\begin{equation}\label{fb}
\sup\limits_{I^{\theta'}_\rho \times B_{\rho}} \omega(t,x) \leq C\rho^{2},
\end{equation}
for $\theta'=2(1-m)+2$.
\end{remark}
First, we state the following proposition.
\begin{proposition}\label{smallstep1} For the universal parameter $\tilde m > 1$ given in \eqref{fb}, there exist positive universal constants $\varepsilon_0$ and $r_0$ such that if $v$ is a nonnegative bounded weak solution of \eqref{mf-eq} in $G_1$ with
$$
1 < m \leq \tilde m \quad \mbox{and} \quad \|f\|_{\infty,G_1} \leq \varepsilon_0
$$
where $v$ satisfies
$$
0 \leq v(t,x) \leq 1 \quad \mbox{and} \quad v(0,0) \leq \frac 12 \, r_0^{\,\frac{2}{m}},
$$
then
$$
\sup \limits_{I_{r_0}^\theta \times B_{r_0}^{}} v(t,x) \leq r_0^{\,\frac{2}{m}}.
$$
\end{proposition}
\begin{proof}
Hereafter in this section, we consider $v$ to be a nonnegative bounded weak solution of \eqref{mf-eq} for a fixed $m \in (1,\tilde m\,]$, as in Remark \ref{rem}. First, we show the existence of a universal small parameter $r_0$ such that if
\begin{equation}\label{lips4}
\begin{array}{lcc}
v(0,0) \leq \frac 12\, r_0^{\frac{2}{m}} &
\quad \mbox{then} \quad
& \sup\limits_{I^\theta_{r_0} \times B_{r_0}} v(t,x) \leq r_0^{\frac{2}{m}}.
\end{array}
\end{equation}
Indeed, let $\kappa>0$ be a constant to be chosen below and let $\varepsilon=\varepsilon(\kappa)>0$ be given by Lemma \ref{complemma2}, so that, under \eqref{lips1}, the approximation estimate \eqref{lips9} holds. In view of estimate \eqref{fb}, estimate \eqref{lips9} and $\theta' < \theta$, we have
\begin{equation}\nonumber
\begin{array}{ccl}
\sup\limits_{I^\theta_{r} \times B_{r}} v(t,x) & \leq & \kappa + \sup\limits_{I^\theta_{r} \times B_{r}} \varpi(t,x) \\[0.5cm]
& \leq & \kappa + \sup\limits_{I^{\theta'}_{r} \times B_{r}} \varpi(t,x) \\[0.5cm]
& \leq & \kappa + Cr^{2}, \\
\end{array}
\end{equation}
for radii $0<r \leq 1/2$. Therefore, by making the following universal choices
$$
r_0= \left(\frac{1}{2C}\right)^{\frac{m}{2(m-1)}}\quad \mbox{and} \quad \kappa=\frac{1}{2}r_0^{\,\frac{2}{m}},
$$
we obtain \eqref{lips4} as desired.
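For completeness, note that with these choices, taking $r=r_0$ in the chain of inequalities above gives
$$
\kappa+C r_0^{\,2}=\frac{1}{2}\, r_0^{\,\frac{2}{m}}+C r_0^{\,2}\leq r_0^{\,\frac{2}{m}},
$$
since $C r_0^{\,2}\leq\frac{1}{2}\, r_0^{\,\frac{2}{m}}$ is equivalent to $r_0^{\,2-\frac{2}{m}}\leq\frac{1}{2C}$, which holds by the definition of $r_0$.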
\end{proof}
\begin{proposition}\label{small} For the universal parameter $\tilde m > 1$ given in \eqref{fb}, there exist positive universal constants $C_0$, $\varepsilon_0$ and $r_0$ such that if $v$ is a nonnegative bounded weak solution of \eqref{mf-eq} in $G_1$ with
$$
1 < m \leq \tilde m \quad \mbox{and} \quad \|f\|_{\infty,G_1} \leq \varepsilon_0
$$
where $v$ satisfies
\begin{equation}\label{pointwise}
0 \leq v(t,x) \leq 1 \quad \mbox{and} \quad v(0,0) \leq \frac 12 \, r^{\frac{2}{m}}
\end{equation}
for each $0< r \leq r_0$, then
$$
\sup \limits_{I_{r}^\theta \times B_{r}^{}} v(t,x) \leq C_0\, r^{\frac{2}{m}}.
$$
\end{proposition}
\begin{proof} First, let us assume parameters $r_0$ and $\varepsilon_0$ as in Proposition \ref{smallstep1}. We want to show that if the pointwise estimate
\begin{equation}\label{lips12}
v(0,0) \leq \frac 12 r_0^{\,k \frac{2}{m}}
\end{equation}
holds for a positive integer $k$, then
\begin{equation}\label{lips11}
\sup\limits_{I^\theta_{r_0^k} \times B_{r_0^k}} v(t,x) \leq r_0^{k\frac{2}{m}}.
\end{equation}
Indeed, we prove this by induction on $k$. Let us denote by $\mathcal{P}_k$ the following
statement: if estimate \eqref{lips12} is satisfied, then estimate \eqref{lips11} holds. Note that Proposition \ref{smallstep1} corresponds to the case $\mathcal{P}_1$. Now, we assume that the statement $\mathcal{P}_k$ holds for some $k\geq 1$. Under the assumption \eqref{lips12} with $k+1$ in place of $k$, we consider
$$
\tilde{v}(t,x)=\frac{v(r_0^{\theta}t,r_0x)}{r_0^{\frac{2}{m}}}.
$$
Easily, we observe that $\tilde{v}$ is still a nonnegative bounded weak solution of $(\tilde f,m\mbox{-pme})$ for the parameter $1<m \leq \tilde m$ previously fixed and $\|\tilde f\|_\infty \leq \varepsilon_0$, such that
$$
\tilde{v}(0,0) \leq \frac 12 \, r_0^{\,k\frac{2}{m}}.
$$
Therefore, from statement $\mathcal{P}_{k}$, we have
\begin{equation}\nonumber
\sup\limits_{I^\theta_{r_0^{\,k+1}} \times B_{r_0^{\,k+1}}} {v}(t,x) = \sup\limits_{I^\theta_{r_0^{\,k}} \times B_{r_0^{\,k}}} \tilde{v}(t,x) \, r_0^\frac{2}{m} \leq r_0^{\,(k+1)\frac{2}{m}}
\end{equation}
and so, $\mathcal{P}_{k+1}$ is obtained. Therefore $\mathcal{P}_{k}$ holds for every positive integer $k$.
Finally, we are ready to conclude the proof of Proposition \ref{small}. For a fixed radius $0<r \leq r_0$, let us choose the integer $k_r>0$ such that $r_0^{k_r+1} < r \leq r_0^{k_r}$. If
$$
v(0,0) \leq \frac 12 \, r^{\frac{2}{m}} \; \left(\leq \frac 12 \, r_0^{k_r\frac{2}{m}}\right)
$$
then from $\mathcal{P}_{k_r}$, we obtain
$$
\sup\limits_{I^\theta_{r} \times B_{r}} {v}(t,x) \leq \sup\limits_{I^\theta_{r_0^{\,k_r}} \times B_{r_0^{\,k_r}}} {v}(t,x) \leq r_0^{\,k_r\frac{2}{m}} \leq r_0^{-\frac{2}{m}}\, r^{\frac{2}{m}},
$$
which is the desired estimate with $C_0:=r_0^{-\frac{2}{m}}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{fbregnon}]
As observed in \eqref{norm}, we can assume, with no loss of generality, that nonnegative bounded solutions of \eqref{mf-eq} are under the following regime
$$
0 \leq v(t,x) \leq 1 \quad \mbox{with} \quad \|f\|_\infty \leq \varepsilon_0,
$$
for $\varepsilon_0$ as in Proposition \ref{small}. Using a covering argument, it is enough to prove estimate \eqref{fbestnon} for the particular case $t_0=0$ and $x_0=0$. Since the free boundary point $(0,0) \in \mathcal{F}(u)$ directly satisfies the pointwise estimate in \eqref{pointwise}, the proof of Theorem \ref{fbregnon} follows as a particular case of Proposition \ref{small}.
\end{proof}
\medskip
\begin{proof}[Proof of Theorem \ref{locreg}]
It is enough to show estimate \eqref{locest} for the particular case $y=0$ and $s=0$. For this, we have to show that there exist positive constants $\tilde m>1$ and $C$ such that, for a given nonnegative weak solution $v$ of \eqref{mf-eq} with $1<m\leq \tilde m$, there holds
\begin{equation}\label{finalest}
\sup\limits_{I_r^{2}\times B_r} |v(t,x)-v(0,0)| \leq C r,
\end{equation}
for any $0<r\ll 1$. We also note that after a normalization argument, as argued in \eqref{norm}, we can assume with no loss of generality, that the solution $v$ satisfies the smallness conditions required in Proposition \ref{small}. For a universal parameter $r_0$ as in Proposition \ref{small}, let us denote
$$
r_\star:=\left(2v(0,0)\right)^{\frac{m}{2}}.
$$
We split the proof into three cases.
\medskip
\noindent
\textit{Case 1. $r_\star \leq r \leq r_0$}.
For this, we have
$$
v(0,0) \leq \frac 12 \,r^{\frac{2}{m}}.
$$
By Proposition \ref{small}, and assuming that $1<m\leq \tilde m$, we derive
\begin{equation}\nonumber
\sup\limits_{I_r^{2}\times B_r} |v(t,x)-v(0,0)| \leq \sup\limits_{I_r^{\theta}\times B_r} v(t,x) \leq C\, r^{\frac{2}{m}} \leq C r
\end{equation}
and estimate \eqref{finalest} follows.
\medskip
\noindent
\textit{Case 2. $0<r<r_\star \leq r_0$}.
In this case, we define the rescaled function
$$
\overline{v}(t,x):= \dfrac{v(r_\star^{\theta} t,r_\star x)}{r_\star^{\frac{2}{m}}} \quad \mbox{in} \quad G_1,
$$
for $\theta=2/m$. We note that $\overline{v}$ is a nonnegative bounded weak solution of
$$
\overline v_t - div(m \,\overline v^{\,m-1} \nabla \overline v) = \overline f \quad \mbox{in } G_1,
$$
with $\| \overline f \|_\infty = \| f \|_\infty$, satisfying
$$
\overline{v}(0,0)=\frac 12 \quad \mbox{and} \quad \sup\limits_{G_1} \overline{v}(t,x) \leq C,
$$
where the last estimate is obtained by using Proposition \ref{small} for the radius $r_\star$. Hence $\overline{v}$ is continuous with a universal modulus of continuity, and so it is possible to find a universal parameter $\tau_0>0$ such that
$$
\overline{v}(t,x) > \frac{1}{4} \quad \mbox{for} \quad (t,x) \in I^{\theta}_{\tau_0} \times B_{\tau_0}.
$$
In view of this, $\overline{v}$ solves a uniformly parabolic equation of the form
\begin{equation}\label{unifparab}
\overline v_t - div(\mathcal{A}(t,x) \nabla \overline v) = \overline f \quad \mbox{in } I^{\theta}_{\tau_0} \times B_{\tau_0},
\end{equation}
for some continuous coefficient $\lambda \leq \mathcal{A}(t,x) \leq \Lambda$ with universal ellipticity constants $0<\lambda < \Lambda$. By classical regularity estimates, we have $\overline v$ satisfies
$$
\sup\limits_{I_r^{2}\times B_r} |\overline v(t,x)- \overline v(0,0)| \leq C r,
$$
for any $r>0$ such that $I^2_r \subset I^2_{\tau_0} \Subset I^\theta_{\tau_0} $. Therefore
$$
\sup\limits_{I_{rr_\star}^{2}\times B_{rr_\star}} |v(t,x)- v(0,0)| \leq C r r_\star^{\frac{2}{m}} \leq C rr_\star
$$
and so we guarantee that $v$ satisfies estimate \eqref{finalest} for any $0<r \leq r_\star\tau_0$. In order to conclude the argument of this case, we have to prove estimate \eqref{finalest} also holds for radii $r_\star\tau_0 < r \leq r_\star$. Indeed, using Proposition \ref{small} for radius $r_\star$ once more, we get
$$
\sup\limits_{I_{r}^{2}\times B_{r}} |v(t,x)- v(0,0)| \leq \sup\limits_{I_{r_\star}^{2}\times B_{r_\star}} |v(t,x)- v(0,0)| \leq C r_\star \leq \frac{C}{\tau_0} r.
$$
\medskip
\noindent
\textit{Case 3. $r_\star > r_0$}. We easily observe that
$$
v(0,0)> \frac 12 r_0^{\frac{2}{m}}.
$$
By continuity, $v(t,x)$ is universally bounded from below in a small universal cylinder centered at $(0,0)$. Therefore, $v$ solves a uniformly parabolic equation as in \eqref{unifparab}, which implies that, for some universal $r_1>0$, $v$ satisfies estimate \eqref{finalest} for any $0<r\leq r_1$.
\medskip
Finally, we conclude estimate \eqref{finalest} holds for radii $0<r\leq \min\{r_0,r_1\}$.
\end{proof}
\bigskip
{\small \noindent{\bf Acknowledgments.} The author is partially supported by CNPq 427070/2016-3 and grant 2019/0014 Paraiba State Research Foundation (FAPESQ). The author would like to thank the hospitality of the Abdus Salam International Centre for Theoretical Physics (ICTP), where parts of this work were conducted.}
\bigskip
\bibliographystyle{amsplain, amsalpha}
\section{Introduction}\par
Quantum dots are nano-scale crystals which have a discrete energy spectrum and are for this reason often referred to as artificial atoms. Systems based on quantum dots are potentially important for nanoelectronic
applications.
Due to the variety of possible topologies of these systems, multi-dot systems
can show a non-trivial interplay of fundamental effects (e.g., Fano and Kondo effects~\cite{Tanamoto_2007,Jiang_2008}).
An especially rich physics emerges when the geometry of the system allows electron tunneling through closed-loop geometries. Many fundamental effects such as Fano resonances~\cite{Hackenbroich_2001,Guevara_2006,Meden_2006,Zeng_2002}, Aharonov-Bohm oscillations~\cite{Loss_2000,Chi_2006,Delgado_2008,Zeng_2002}, Kondo behavior~\cite{Shang_2015,Oguri_2009,Tanamoto_2007,Aguado_2000} and corresponding quantum phase transitions (QPT) ~\cite{Zitko_2007(2),Wang_2007,Zitko_2008,Tooski_2016,Liu_2010,Zitko_2006,Dias_2006,ZH} have been found in these systems.
The simplest system of this kind is the parallel double quantum dot (DQD) system \cite{Zitko_2007(2),Zitko_2006,Dias_2006,Wang_2007,ZH,PK_2016,PK_2017,Chi_2005,Meden_2006,Trocha_2007,Tanaka_2005,Holleitner_2001}. It was shown that this
system even for moderate values of Coulomb interaction may demonstrate the interaction-induced QPTs to the so-called singular Fermi liquid (SFL) state, associated with the presence of the local magnetic moment in one of the states (so called "odd" state), which is weakly hybridized or decoupled from the conduction bands (leads).
The SFL state remains stable in a wide range of gate voltages near half filling and at some critical gate voltage undergoes QPT into paramagnetic state without local moments.
The type of the QPTs and peculiarities of the electron transport at the transition strongly depend on the type of the system symmetry, as well as on the number of energy levels. In particular, for the parallel double quantum dot system it was found that depending on the
symmetry of the system it can demonstrate either a first-order QPT to SFL state, accompanied by a discontinuous change of the conductance or the second-order QPT, in which the conductance is continuous and exhibits Fano-type asymmetric resonance near the transition point~\cite{PK_2017}. In both cases, the conductance reaches almost unitary limit in the SFL phase. Therefore, the QPTs to SFL state have a significant impact on the electron transport.
The SFL state may occur
also in other closed loop geometries of atoms or quantum dots, appearing in larger nanoscopic systems, e.g.~organic molecules~\cite{Ke_2008,Markussen_2010,Cardamone_2006,Guedon_2012,Quian_2008}, quantum networks~\cite{Foldi_2008,Dey_2011,Fu_2012} etc., where the interference of different paths of electron propagation
may yield non-trivial quantum phase transitions and transport properties.
The electron-electron interaction plays an important role in these systems. At the same time, numerically exact methods such as the numerical renormalization group, experience serious difficulties for large number of interacting sites.
As a simplest multi-dot system with closed loop geometry, in the present paper we study the quadruple quantum dot (QQD) ring system
\cite{Zeng_2002,Shang_2015,Liu_2010,
Lovey_2011,Yan_2006,Ex1,Ex2}. This system appears as a building block of quantum network devices \cite{Foldi_2008,Dey_2011,Fu_2012}. This system can be also viewed as a prototype of cyclobutadiene organic molecule, discussed some time ago from the viewpoint of electronic transport \cite{cybtdn}.
The QQD system
demonstrates a rather rich phase diagram with the possibility of controlling
spin states of electrons
\cite{Ozfidan_2013}, making
it promising for the development of spintronic devices.
It was shown that spin-polarized electron transport~\cite{Fu_2012,Kagan_2017,Wu_2013,Eslami_2014}, e.g. generated by tuning the energy levels or hopping amplitudes in an external magnetic field~\cite{Fu_2012,Kagan_2017}, can be achieved in this structure.
Various spin states found for the isolated QQD system \cite{Ozfidan_2013}
imply a possibility of different magnetic moment states in this system connected to the leads
even in the absence of the (or in the infinitesimal) magnetic field.
In this respect, the study of the possibility of the formation
of spin-split (in a vanishingly small magnetic field) phases, corresponding to the presence of local magnetic moments,
of their connection with the transport properties of the system, and of their evolution under non-equilibrium conditions
opens a way to model larger systems, including quantum networks and organic molecules.
Although the self-energies of the paramagnetic solution of the QQD system were studied in \cite{DGAParquet,Kagan_2017,KA2}, they do not give sufficient information on the formation of local magnetic moments.
Only a limited number of studies have been performed on the non-equilibrium effects in QQD systems, and they mainly focused on the effects of spin polarization, magnification and circulation of the persistent current~\cite{Yi_2012,Yi_2010}, as well as current oscillation phenomena. The current-voltage $(J-V)$ characteristics have been investigated in some particular cases~\cite{Wu_2014,Wu_2013,KA2}, including the possibility of
negative differential conductance (NDC) effects~\cite{KA2}, analogous to those previously found for parallel quantum dots \cite{Aguado_2000,Lara_2008,Fransson_2004,Liu_2007,Chi_2005}. These studies, however, did not investigate in detail the possibility and effects of local moment formation, e.g. away from equilibrium.
From a practical point of view, it is also interesting to consider whether it is possible to obtain a highly spin-polarized current due to the energy difference of the spin-up and -down states, caused by the transition to the magnetic moment state in an infinitesimal magnetic field, without spin-orbit interaction.\par
To study the above mentioned aspects of electronic and transport properties of QQD system we use the functional renormalization-group approach
~\cite{Karrasch_2006,Metzner_2012,Gezzi_2007,Jakobs_2007}. This approach (after introducing the appropriate counterterm, which corresponds to switching off or decreasing magnetic field during the flow) was able to describe both, normal and SFL phases of the DQD system and was found to be in a good agreement with the numerical renormalization group data for a parallel quantum dot system in equilibrium up to intermediate values of the Coulomb interaction~\cite{PK_2016,PK_2017}.
However, generalization of this approach to larger systems is not straightforward, since it yields electron interaction vertices, whose number increases as the fourth power of the number of quantum dots and which are also difficult to treat numerically.
In the present paper we exploit the fRG method, which neglects flow of the electron interaction vertices, to describe one of the simplest systems of quantum dots, forming closed loop. The considered method represents a generalization of the fRG approach~\cite{Karrasch_2006,Metzner_2012} to the Keldysh space \cite{Gezzi_2007,Jakobs_2007} and allows one to reformulate an interacting problem in terms of coupled differential equations for flowing self-energies, which, after several approximations, can be easily integrated even for complex systems.
Among other methods, the non-equilibrium fRG approach has some advantages: it does not require significant computational resources, and the results of its implementation are consistent with the ones obtained through more elaborate methods dealing with non-equilibrium situations~\cite{Karrasch_2010}.
Recently, this method has been successfully applied to several quantum dot systems~\cite{Karrasch_2010,Karrasch_2010_2,Jakobs_2010,Kennes_2013,Rentrop_2014} and the comparative study to other numerical and semi-analytical methods has been done~\cite{Karrasch_2010,Eckel_2010}. Its application to systems with closed loop geometries formed by quantum dots was not however performed so far.
We argue that the considered method is able to describe various aspects of electronic properties
of quantum dot or molecular systems with closed loop geometries, which are exemplified by QQD system.
In particular, we consider both, equilibrium and non-equilibrium regimes of the QQD system in the zero-temperature limit $T=0$.
In equilibrium, we show that depending on the geometry of the QQD system the regimes with zero, one, and two almost local magnetic moments can be realized. Moreover, adding hopping between the opposite quantum dots, attached to the contacts, allows one to use this system as a spin filter even in the absence of the spin-orbit coupling: for sufficiently large hopping in a certain range of gate voltages we find zero conductance for one of the spin projections (oriented {along} the infinitesimally small magnetic field).
We find that the magnetic moments, existing at zero bias voltage, remain stable in the wide range of bias voltages near equilibrium. At the same time, at higher bias voltages the destruction of the magnetic moments occurs and proceeds in one or two stages, depending on the parameters of the QQD system. We present results for the current-voltage characteristic and the differential conductance of the system, which exhibit sharp features at the transition points between different magnetic phases. The occurrence of interaction induced NDC phenomena is demonstrated. The presented method may be therefore used to describe electronic transport in larger systems: quantum networks and organic molecules. \par
The paper is organized as follows. In Sect. II we introduce the model and briefly discuss the non-equilibrium fRG method. In Sect. III we present the results of the fRG calculations in equilibrium and analyze the possibility of the local moments formation (Sect. IIIA) and differential conductance (Sects. IIIB,C). In Sect. IVA we discuss non-equilibrium regime, and in Sect. IVB we present the $J$-$V$ characteristics of the QQD system and discuss the appearance of the NDC phenomenon. Finally, in Sect. V we present conclusions.
\section{Model and method}
We consider the QQD system as depicted in Fig. \ref{sketch}. The corresponding model is defined by the following Hamiltonian
\begin{equation}
\mathcal{H}= \mathcal{H}_{\rm QQD}+\mathcal{H}_{\rm leads}+\mathcal{H}_{\rm T}.
\label{Hamiltonian}
\end{equation}
The term $\mathcal{H}_{\rm QQD}$ in the Eq.~(\ref{Hamiltonian}) describes the isolated QQD cluster,
\begin{eqnarray}
\mathcal{H}_{\rm QQD}&=&\sum_{\sigma}\sum_{j=1}^{4}\left[\left(\epsilon_{j}-\sigma H-
U_{j}/2\right)d^{\dagger}_{j,\sigma}d_{j,\sigma}\right.\notag\\&+&
(U_{j}/2)n_{j,\sigma}n_{j,\bar{\sigma}}\Big]-\sum_{\sigma}{\left[\left(t_{12}d^{\dagger}_{1,\sigma}d_{2,\sigma}\right.\right.}\notag\\&+&{\left.\left.
t_{24}d^{\dagger}_{2,\sigma}d_{4,\sigma}+t_{13}d^{\dagger}_{1,\sigma}d_{3,\sigma}+t_{34}d^{\dagger}_{3,\sigma}d_{4,\sigma}\right.\right.}\notag\\
&+&{\left.\left.t_{14}d^{\dagger}_{1,\sigma}d_{4,\sigma}\right)\right.}+\text{H.c.}\Big],
\label{H_dot}
\end{eqnarray}
where $d^{\dagger}_{j,\sigma} (d_{j,\sigma})$ are the creation (annihilation) operators for electrons with spin $\sigma\in\{\uparrow(1/2),\downarrow(-1/2)\}$ $(\bar{\sigma}=-\sigma)$ on the $j$-th quantum dot, $n_{j,\sigma}=d^{\dagger}_{j,\sigma}d_{j,\sigma}$. The parameters $\epsilon_{j}$ are the level positions, $H$ is the magnetic field, which produces Zeeman splitting of the energy levels (we assume in the following that QQD structure is not affected by magnetic flux), $U_{j}$ and $t_{ij}$ denote the on-site Coulomb interaction of the $j$-th dot and tunnel matrix elements between the nearest-neighbor quantum dots, respectively. In the following we assume that the quantum dots are equal, hence $U_{j}=U$ and $\epsilon_{j}=\epsilon$.\par
\begin{figure}[t]
\center{\includegraphics[width=0.75\linewidth]{QQDv2.png}}
\caption{(Color online). Schematic representation of quadruple quantum dot structure (QD1-QD4) connected to two left (L) and right (R) leads.}
\label{sketch}
\end{figure}
The second $\mathcal{H}_{\rm leads}$ and third $\mathcal{H}_{\rm T}$ terms in Eq.~(\ref{Hamiltonian}) describe the noninteracting leads and the tunneling of electrons between the leads and dots, respectively,
\begin{eqnarray}
\mathcal{H}_{\rm leads}&=&-\sum_{\alpha=L,R}\sum_{k=0}^{\infty}\sum_{\sigma}\left[\mu_{\alpha}c^{\dagger}_{\alpha,k,\sigma}c_{\alpha,k,\sigma}\right.\notag\\
&+&\left.\tau(c^{\dagger}_{\alpha,k+1,\sigma}c_{\alpha,k,\sigma}+\text{H.c.})\right],
\label{H_leads}\\
\mathcal{H}_{\rm T}&=&-\sum_{\sigma}\left[\left(t_{L}c^{\dagger}_{L,0,\sigma}d_{1,\sigma}+t_{R}c^{\dagger}_{R,0,\sigma}d_{4,\sigma}\right)\right.\notag\\&+&\left.\text{H.c.}\right],
\label{H_T}
\end{eqnarray}
where $c^{\dagger}_{\alpha,k,\sigma}(c_{\alpha,k,\sigma})$ is the corresponding creation (annihilation) operator for an electron on the $k$ lattice site of the left $\alpha=L$ or right $\alpha=R$ lead, $\tau$ denotes nearest-neighbor hopping between the sites of the leads, $\mu_{\alpha}$ is the chemical potential and $t_{\alpha}$ is the dot-lead coupling matrix element.\par
In the absence of the electron-electron interaction $U$ for
hopping symmetry $t_{12}/t_{13}=t_{24}/t_{34}$ one of the states (the so called odd state), obtained by an appropriate canonical transformation of the states on QD2,3 to the even-odd basis (see Appendix A, cf. Ref.~\cite{PK_2017}),
can be completely disconnected from the other quantum dots (and, consequently, from the leads). Even in the presence of the Coulomb interaction, this state remains weakly hybridized with the leads, which yields formation of the local moment in that state in the vicinity of half filling ($\epsilon_j$=0), see Sect. IIIA below.
In this respect, the QQD system at $t_{14}=0$ is similar to the double quantum dot system, where the presence of the odd state, weakly hybridized with the leads, provides the possibility for the formation of a correlation-induced local magnetic moment in the system~\cite{Zitko_2007(2),ZH,PK_2016,PK_2017}. However, as will be shown in Sect. IIIC below, apart from the tunneling through the even energy level, which takes place in the DQD system, in the QQD system
resonant tunneling from QD1 to QD4 is possible. This difference becomes especially prominent when the hopping $t_{14}$ is switched on, which will also be discussed in Sect. IIIC.
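For orientation, one convenient choice of the even-odd transformation of the states on QD2,3 (the precise definition is given in Appendix A; the combinations below are quoted only as an assumption consistent with the properties used in the text) is
\begin{equation}\nonumber
d_{e,\sigma}=\frac{t_{12}d_{2,\sigma}+t_{13}d_{3,\sigma}}{\sqrt{t_{12}^2+t_{13}^2}},\qquad
d_{o,\sigma}=\frac{t_{13}d_{2,\sigma}-t_{12}d_{3,\sigma}}{\sqrt{t_{12}^2+t_{13}^2}},
\end{equation}
for which $t_{1o}=0$ identically, $t_{1e}=\sqrt{t_{12}^2+t_{13}^2}$, and $t_{4o}\propto t_{13}t_{24}-t_{12}t_{34}$, so that the odd orbital decouples from both QD1 and QD4 precisely when $t_{12}/t_{13}=t_{24}/t_{34}$.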
The simplest asymmetry, which allows one to focus on the effect of the (local) magnetic moment formation under equilibrium and non-equilibrium conditions and its influence on the electron transport,
is the so-called diagonal hopping asymmetry~\cite{PK_2017} $t_{12}=t_{34}=t$, $t_{13}=t_{24}=\gamma t$, where the parameter $\gamma$ varies from zero to unity. This choice of the geometry allows us to study the evolution of the system from the case of $\gamma=1$, when all hopping matrix elements are equal and the local moment is formed in the odd state in equilibrium, to the case of $\gamma=0$, for which the system splits into two subsystems, each of which is hybridized to only one of the leads, and the local moments are present in both the even and odd states for small hybridization to the leads, or absent otherwise. We do not consider hopping between QD2 and QD3 because for small hoppings it does not qualitatively change the conductances, while for large hoppings it simply destroys the local moments (if they were present without this hopping) due to the mixing of the even and odd states.
By using the Dyson equation and the projection technique the bare Green function of the system in the Keldysh space can be written as
\begin{equation}
\mathcal{G}=\begin{pmatrix}
\mathcal{G}^{--}&\mathcal{G}^{-+}\\
\mathcal{G}^{+-}&\mathcal{G}^{++}
\end{pmatrix}
=\left[\mathcal{G}^{-1}_{\rm dots}-\Sigma_{\rm bath}\right]^{-1},
\label{Gf}
\end{equation}
where
\begin{eqnarray}
\left[\mathcal{G}^{-1}_{\rm dots}\right]^{kk^{'}}_{jj^{'};\sigma}&=&-k\delta_{kk^{'}}\\
&\times&\begin{pmatrix}
\omega-\epsilon_{1,\sigma} & t_{12} & t_{13} & t_{14}\\
t_{12} & \omega-\epsilon_{2,\sigma} & 0 & t_{24}\\
t_{13} & 0 & \omega-\epsilon_{3,\sigma} & t_{34}\\
t_{14} & t_{24} & t_{34} & \omega-\epsilon_{4,\sigma}
\end{pmatrix}_{jj^{'}}\notag
\end{eqnarray}
with $\epsilon_{j,\sigma}=\epsilon_{j}-\sigma H$ is the Green function of the isolated QQD cluster and
\begin{eqnarray}
&&\left[\Sigma_{\rm bath}\right]^{kk^{'}}_{jj^{'};\sigma}=-i\delta_{jj^{'}}\sum_{\alpha}\Theta_j^\alpha\Gamma_{\alpha}\notag\\
&&\times
\begin{pmatrix}
1-2f(\omega-\mu_{\alpha})&2f(\omega-\mu_{\alpha})\\
-2f(-(\omega-\mu_{\alpha}))&1-2f(\omega-\mu_{\alpha})
\end{pmatrix}_{kk^{'}}\notag\\ &&
{=}
-i\delta_{jj^{'}}\sum_{\alpha}\Theta_j^\alpha\Gamma_{\alpha}\notag\\
&&\times\left[(2\delta_{kk^{'}}-1)\rm sign(\omega-\mu_{\alpha})+{\it k}(\delta_{{\it kk}^{'}}-1)\right]
\label{Sigma_bath}
\end{eqnarray}
incorporates effects of the coupling between the dots and leads, where $\Theta_j^\alpha=\delta_{\alpha L}\delta_{j1}+\delta_{\alpha R}\delta_{j4}$. In the above equations $\Gamma_{L(R)}=\pi|t_{1(4)}^{L(R)}|^{2}\rho_{\text{lead}}$ is an energy independent hybridization strength, where $\rho_{\text{lead}}$ represents the local density of states at the last site of the left or right lead (the leads are equivalent), $f\left(\omega-\mu_{\alpha}\right)=\theta(\mu_\alpha-\omega)
$ is the Fermi-Dirac distribution function of the lead $\alpha$ with the chemical potential $\mu_{\alpha}$ at zero temperature $T=0$ ($\theta(x)$ is the Heaviside step function).
Throughout this paper we use the notation $k(k^{'})=\pm 1$ for the Keldysh indices $k(k^{'})=\pm$. Finally, the out of equilibrium regime of the system is set by applying the bias voltage $V$ between the leads and choosing $\mu_{L}=-\mu_{R}=V/2$.\par
In order to approximately determine the self-energy $\Sigma$, which accounts for the effects of the electron interaction $U$, and the corresponding Green function $\mathcal{G}$, which is considered to be a matrix ($8 \times 8$ for each spin projection) in the Keldysh-dots space,
we use the functional renormalization group method in the Keldysh formalism~\cite{Metzner_2012,Gezzi_2007,Jakobs_2007}. This method yields an infinite hierarchy of differential flow equations for the cutoff-parameter ($\Lambda$-) dependent self-energy $\Sigma^\Lambda$, two-particle and higher-order interaction vertices, which has a structure similar
to that of the fRG on the Matsubara frequency axis \cite{Karrasch_2006,Metzner_2012}.
In the present study we consider only the flow of the self-energy and neglect the frequency dependence of the self-energy and the flow of the two-particle and higher order vertex functions. It was shown that neglecting frequency dependence of the self-energy allows one to describe both, equilibrium \cite{Karrasch_2006} and non-equilibrium properties \cite{Gezzi_2007,Jakobs_2007}, as well as to access the SFL state \cite{PK_2016,PK_2017}. On the other hand, neglecting flow of two-particle and higher order vertices is sufficient to reproduce the Kondo behavior of the linear conductance of a single quantum dot\cite{Karrasch_2006}; this approach demonstrates an excellent agreement with the density matrix renormalization group (DMRG) and the NRG results for the interacting resonant level model \cite{Karrasch_2010} and allows us to fulfill exactly charge conservation, which is typically violated in higher order truncations \cite{Gezzi_2007}.
At the considered level of truncation, the above-described approximations lead to the closed zero-temperature fRG flow equation for the
self-energy $\Sigma^{\Lambda}$, which has the form~\cite{Gezzi_2007}
\begin{equation}
\partial_{\Lambda}\Sigma^{kk^{'};\Lambda}_{jj^{'};\sigma}=-ikU\delta_{kk^{'}}\delta_{jj^{'}}\int{\dfrac{d\omega}{2\pi}\mathcal{S}^{kk;\Lambda}_{jj;\bar{\sigma}}}\left(\omega\right),
\label{fRG_Eq}
\end{equation}
where
$
\mathcal{S}^{kk^{'};\Lambda}_{jj^{'};\sigma}=-\sum_{i i^{'}}\sum_{q q^{'}}\mathcal{G}^{kq^{'};\Lambda}_{ji^{'};\sigma}\partial_{\Lambda}\left[
\Sigma^{\Lambda}_{\rm cut}
\right]^{q^{'}q}_{i^{'}i;\sigma}\mathcal{G}^{qk^{'};\Lambda}_{ij^{'};\sigma}
$
is the single-scale propagator and $\mathcal{G}^{\Lambda}=\left[\mathcal{G}^{-1}-\Sigma^{\Lambda}_{\rm cut}-\Sigma^{\Lambda}\right]^{-1}$ is the $\Lambda$-dependent propagator, where
\begin{eqnarray}
\left[\Sigma^{\Lambda}_{\rm cut}\right]^{kk^{'}}_{jj^{'};\sigma}=&-&i\Lambda\delta_{jj^{'}}\left[(2\delta_{kk^{'}}-1){\rm sign(\omega)}\right.\notag\\&+&\left.k(\delta_{kk^{'}}-1)\right]
\end{eqnarray}
introduces the $\Lambda$-dependence of $\mathcal{G}$ through the reservoir cutoff scheme~\cite{Karrasch_2010} (note that here $\Sigma^{\Lambda}_{\rm cut}$ is defined in the contour basis ($k(k^{'})=\pm 1$) instead of the retarded-advanced Keldysh basis ($k(k^{'})\in\{r,a,\rm K\}$)). For some quantities in equilibrium we also compare the results to those from the fRG approach including the flow of the vertices (the corresponding fRG equations can be found, e.g., in Refs.~\cite{Gezzi_2007,Jakobs_2007}).\par
By solving the differential equation (\ref{fRG_Eq}) with the initial condition $\Sigma^{\Lambda_{\rm ini}}=0$, where $\Lambda_{\rm ini}$ is some initial scale chosen to be much larger than all energy scales of the quantum dot system (note that we have included the term $U/2$ into the quadratic part of the Hamiltonian in Eq. (\ref{H_dot})), at the scale $\Lambda=0$ we obtain the energy-independent approximation to the self-energy $\Sigma=\Sigma^{\Lambda\rightarrow 0}$. To induce a small initial spin splitting, which can be further enhanced by correlation effects during the fRG flow (and therefore allows us to obtain local moments), we apply a small magnetic field $H/{\rm max}(\Gamma_{L,R})=0.001$. Due to the truncation (\ref{fRG_Eq}) of the fRG hierarchy at first (self-energy) instead of second (vertex) order, the counterterm technique suggested in previous studies~\cite{PK_2017,PK_2016} is not necessary and does not change the obtained results.
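As an illustration of the structure of the resulting numerical scheme, we include a deliberately crude sketch (in Python) of one possible direct integration of Eq. (\ref{fRG_Eq}); the frequency grid, the initial scale, the simple Euler stepping and all parameter values in this listing are illustrative assumptions only and do not correspond to the implementation used to produce the results presented below.
\begin{verbatim}
import numpy as np

# illustrative parameters in units of Gamma (NOT the values used for the figures)
Gamma_L = Gamma_R = 1.0
U, eps, t, gam, t14, H, V = 2.0, 0.0, 0.5, 0.9, 0.0, 0.001, 0.0
mu = {"L": +V / 2.0, "R": -V / 2.0}

def h_dot(sigma):
    # 4x4 single-particle matrix of H_QQD for spin sigma = +-1/2
    # (hopping enters with a minus sign; the -U/2 shift stays in the quadratic part)
    e = eps - sigma * H - U / 2.0
    t12 = t34 = t
    t13 = t24 = gam * t
    return np.array([[e, -t12, -t13, -t14],
                     [-t12, e, 0.0, -t24],
                     [-t13, 0.0, e, -t34],
                     [-t14, -t24, -t34, e]], dtype=complex)

def lead_block(w, G, mu_a):
    # 2x2 wide-band lead self-energy in the contour (-,+) basis at T = 0 (cf. Sigma_bath)
    f = 1.0 if w < mu_a else 0.0       # f(w - mu)    = theta(mu - w)
    fm = 1.0 if w > mu_a else 0.0      # f(-(w - mu)) = theta(w - mu)
    return -1j * G * np.array([[1.0 - 2.0 * f, 2.0 * f],
                               [-2.0 * fm, 1.0 - 2.0 * f]], dtype=complex)

P1, P4 = np.diag([1.0, 0.0, 0.0, 0.0]), np.diag([0.0, 0.0, 0.0, 1.0])
TAU = np.diag([1.0, -1.0])             # factor -k*delta_{kk'} in the (-,+) basis
KVAL = [-1.0, 1.0]
W = np.linspace(-300.0, 300.0, 6001)   # frequency grid for the integral (crude)

def G_full(w, Lam, Sig, sigma):
    # Lambda-dependent 8x8 propagator (Keldysh x dots) for one spin projection
    Ginv = np.kron(TAU, w * np.eye(4) - h_dot(sigma))
    Ginv -= np.kron(lead_block(w, Gamma_L, mu["L"]), P1)
    Ginv -= np.kron(lead_block(w, Gamma_R, mu["R"]), P4)
    Ginv -= np.kron(lead_block(w, Lam, 0.0), np.eye(4))    # reservoir cutoff Sigma_cut
    Ginv -= Sig                                            # flowing static self-energy
    return np.linalg.inv(Ginv)

def flow_rhs(Lam, Sig):
    # d(Sigma_sigma)/d(Lambda): built from the single-scale propagator of spin sigma-bar
    dSig = [np.zeros((8, 8), complex), np.zeros((8, 8), complex)]
    for s, sigma in ((0, 0.5), (1, -0.5)):
        integ = np.zeros((8, 8), complex)
        for w in W:
            G = G_full(w, Lam, Sig[s], sigma)
            dScut = np.kron(lead_block(w, 1.0, 0.0), np.eye(4))  # d(Sigma_cut)/d(Lambda)
            integ += -G @ dScut @ G                              # single-scale propagator
        integ *= (W[1] - W[0]) / (2.0 * np.pi)
        for k in range(2):
            for j in range(4):
                i = 4 * k + j
                dSig[1 - s][i, i] = -1j * KVAL[k] * U * integ[i, i]
    return dSig

# crude (slow, unoptimized) explicit Euler integration from Lambda_ini down to ~0
Lams = np.logspace(np.log10(50.0), -3.0, 150)
Sig = [np.zeros((8, 8), complex), np.zeros((8, 8), complex)]
for n in range(len(Lams) - 1):
    dSig = flow_rhs(Lams[n], Sig)
    dLam = Lams[n + 1] - Lams[n]       # negative step
    Sig = [Sig[s] + dLam * dSig[s] for s in range(2)]

print("static self-energies ('--' component, spin up / down):")
print(np.real(np.diag(Sig[0])[:4]), np.real(np.diag(Sig[1])[:4]))
\end{verbatim}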
\section{Local moments and conductance in the equilibrium regime $(V=0)$}\par
Let us first consider the results of the application of the outlined fRG approach
in equilibrium $(V=0)$. This case was intensively studied within equilibrium fRG for the DQD system (see, e.g., Refs. \cite{PK_2016,PK_2017}), where good agreement with numerical renormalization-group (NRG) results was obtained.
As in previous studies of two parallel quantum dots~\cite{PK_2017,PK_2016}, it is convenient to perform a transformation of the electronic states on QD2,3 to the even ($e$) and odd ($o$) orbitals, see Appendix A. In numerical calculations, we set $\Gamma_{L}=\Gamma_{R}=\Gamma$, $U/\Gamma=2$, $T=0$ and use $\Gamma$ as the energy unit.
\subsection{Local magnetic moments}
To analyze the presence of the magnetic moment in the system we consider $\epsilon=0$, $t_{14}=0$ case (the results at finite small $\epsilon$ and finite small or moderate $t_{14}$ are qualitatively similar) and calculate the average square of the spin $\langle \mathbf{S}_{e/o}^2 \rangle$, corresponding to the even and odd orbitals, where $\vec{\mathbf{S}}_{p}=({1}/{2})\sum_{\sigma,\sigma^{'}}d^{\dagger}_{p,\sigma}\vec{\bm{\sigma}}_{\sigma\sigma^{'}}d_{p,\sigma^{'}}$ is the spin operator and $\vec{\bm{\sigma}}$ is the vector of the Pauli spin matrices.
Fig.~\ref{S2even_odd_gamma_b_c} shows the dependence of $\langle \mathbf{S}_{e/o}^2 \rangle$ on the parameter $\gamma$ for various values of $t$.
As one can expect,
for small $t$ (see, e.g., $t=0.05$ case) the average $\langle \mathbf{S}_{e/o}^2 \rangle\approx 3/4$, which (together with the filling $\langle n_{e(o),\uparrow}\rangle\approx 1$ and $\langle n_{e(o),\downarrow}\rangle\approx 0$) means that the electron is
almost localized on both the odd and even orbitals ($\langle n_{e/o,\uparrow}n_{e/o,\downarrow}\rangle\approx 0$) due to weak connection of these orbitals with quantum dots 1,4, namely
$t_{pq}\ll U$ ($p\in\{1,4\}$, $q\in\{e,o\}$).
The corresponding
square of the spins on quantum dots QD2 and QD3, $\langle \mathbf{S}_{2,3} ^2\rangle\approx 3/4$.
The average $\langle \mathbf{S}_{o}^2 \rangle$
monotonically increases up to a maximum value of $\langle \mathbf{S}_{o}^2 \rangle=3/4$ at $\gamma=1$ due to
decrease of the coupling $t_{4o}$ between the odd orbital and quantum dot QD4 (for our definition of the odd orbital $t_{4o}=0$ for $\gamma=1$ and $t_{1o}=0$ for any $\gamma$).
In contrast, both hopping parameters $t_{1e}$ and $t_{4e}$,
associated with the even orbital, increase with $\gamma$, which leads to a smooth decrease of $\langle \mathbf{S}_{e}^2 \rangle$.
It is important to note that in this
and the following cases we find $\langle \mathbf{S}_{1(4)}^2 \rangle$ close to its free-electron value $3/8$, which indicates that there are no local magnetic moments in quantum dots QD1 and QD4.
Increase of the hopping strength $t$ leads to delocalization of the electronic states, which yields a gradual decrease of $\gamma=0$ value of $\langle \mathbf{S}_{e/o}^2 \rangle$.
Starting with some sufficiently large value of $t$, we find that $\langle \mathbf{S}_{e/o}^2 \rangle\rightarrow 3/8$ for $\gamma\rightarrow 0$, which means that there are no magnetic local moments present in the even/odd states. At the same time, as shown in Fig.~\ref{S2even_odd_gamma_b_c}, with increase of $\gamma$ from $\gamma=0$ to $\gamma=1$, $\langle \mathbf{S}_{o}^2 \rangle$ increases from
3/8
to the value
3/4, showing presence of the local magnetic moment in the odd state at $\gamma\gtrsim 0.6$ (in this case $\langle{n_{o,\uparrow}}\rangle\approx 1$, $\langle{n_{o,\downarrow}}\rangle\approx 0$, $\langle{n_{e,\sigma}}\rangle\approx 0.5$).
This corresponds to the so-called singular Fermi liquid state~\cite{Zitko_2007(2),PK_2017,PK_2016} and is explained by the fact that, regardless of the choice of $t$, the odd state is almost disconnected from the leads at $\gamma \rightarrow 1$ (in particular, the hopping matrix element $t_{4o}$ associated with the odd state decreases to zero)
and hence the local magnetic moment on the odd orbital is always well defined when $\gamma\rightarrow 1$. At the same time, $\langle \mathbf{S}_{e}^2 \rangle\approx 3/8$ remains almost unchanged
with the variation of $\gamma$, since this orbital remains
strongly coupled to the quantum dots QD1 and QD4, which, in turn, have a direct hybridization with the leads, cf. Ref.~\cite{PK_2017}.
Thus, in contrast to the cases considered above, in this case only the odd orbital is responsible for the appearance of an unscreened local magnetic moment in the system.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{S2even_odd_gamma_b_c_v6.png}
\caption{(Color online). The average square of a magnetic moment $\langle\mathbf{S}_{e/o}^{2}\rangle$ in the even (dashed black lines) and odd (solid red lines) states as a function of $\gamma$ for $t=0.05$ (upper panel) and $t=0.5$ (lower panel), and $t_{14}=\epsilon=0$. Dashed-dotted-dotted blue and dashed-dotted green lines show $\langle\mathbf{S}_{e}^{2}\rangle$ and $\langle\mathbf{S}_{o}^{2}\rangle$, respectively, in the fRG approach with the flow of the two-particle vertex (the corresponding curves are almost indistinguishable for $t=0.05$).}
\label{S2even_odd_gamma_b_c}
\end{figure}
In order to analyze the role of the neglected vertex corrections, we compared the obtained results with those from fRG calculations that account for the flow of the two-particle vertex functions, which for the DQD system yielded agreement with the NRG approach. To eliminate the problem of the divergences of the vertices in the fRG flow, we use the counterterm extension of the fRG approach (a related discussion can be found in Refs.~\cite{PK_2016,PK_2017})
with an initial magnetic field $\tilde{H}/\Gamma=0.02$, which is switched off linearly with $\Lambda$ starting from the scale $\Lambda_{c}/\Gamma=0.02$. It turns out that for intermediate and large hopping parameters between the quantum dots, ${\rm min}(t_{ij})\gtrsim U,\Gamma$
$(i,j\in\{1,2\})$, the renormalization of the two-particle vertex produces only small quantitative changes to the self-energy, obtained from the first-order fRG scheme
(see, e.g., the results for $\langle \mathbf{S}_{e/o}^2 \rangle$ for $t=0.5$ shown in the lower panel of Fig.~\ref{S2even_odd_gamma_b_c}). In the regime of small hopping strength ${\rm max}(t_{ij})\ll U,\Gamma$, the energy splitting between the spin-up and -down components of the self-energy in the fRG approach with account for the flow of the two-particle vertex is somewhat larger than that obtained in the first-order fRG approach, so that the account of the flow of the two-particle vertex leads to an enhancement of the magnetic moments in the QQD system
(see upper panel of Fig.~\ref{S2even_odd_gamma_b_c}). However, even in this case, the physical picture of the formation of the magnetic moment in the quantum dot system remains unchanged.
Note that in the cases when we obtain $\langle {\bf S}^2_{e/o}\rangle\approx 3/4$, the obtained values of the local moments suggest that they are not screened by conduction electrons in the considering case of QQD system (the same applies to DQD system). This can be attributed to presence of the effective hopping between even and odd states via QD4 and strong ferromagnetic correlations between even and odd states, which originate from ferromagnetic correlations between QD2,3 (see Fig. \ref{SiSj_V} below). These ferromagnetic correlations, together with the charge transfer between the leads preclude also the formation of the two-channel Kondo effect (see, e.g., Ref. \cite{TwoChanKondo}).
We have verified that the same fRG approach for a single quantum dot leads to spin unpolarized solution for $H\rightarrow 0$, which mimics
screening of the local moment at $T=0$. This approach also describes aspects of Kondo physics, in particular the Kondo plateau of the conductance, and it can properly estimate the Kondo temperature from the fRG calculation in a finite magnetic field~\cite{Karrasch_2006}. Thus, the considered fRG approach does not lead to an unphysical magnetic solution (even for the first-order truncation of the fRG equations), as happens in the mean-field approximation, and hence in our case the appearance of the (unscreened) local magnetic moment phase at $\gamma$ close to one is not an artifact of the fRG approach.
\subsection{Total conductance}
In Fig.~\ref{G_Vg}
we present the results for the zero-temperature linear conductance $G=\sum_\sigma G_\sigma$, where $G_\sigma=({4e^{2}}/{h})\Gamma_{L}\Gamma_{R}{\left|\mathcal{G}_{14;\sigma}^{r}\left(\omega=0\right)\right|^{2}}$ as a function of the gate voltage $\epsilon$ for $t_{14}=0$, where $\mathcal{G}^{r}=\mathcal{G}^{--;0}-\mathcal{G}^{-+;0}$ is the retarded Green function in the end of the fRG flow, for various hopping parameters $\left(t,\gamma\right)\in\left\{\left(0.05,0.9\right), \left(0.5,0.9\right), \left(0.5,0.1\right)\right\}$, obtained by numerical integration of
Eq.~(\ref{fRG_Eq}); the case of finite $t_{14}$ is considered in the next subsection. We use here the Landauer expression for the conductance, since we consider the $T=0$ case and the imaginary part of the self-energy $\Sigma^\Lambda$ vanishes in our truncation, which implies physically that we map the interacting system onto a renormalized non-interacting one.
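As a minimal illustration of this expression, the following sketch (in Python) evaluates $G_\sigma$ for a given renormalized static level matrix; the static fRG self-energy would enter simply as an additive input, which is omitted here, so the listing corresponds to the noninteracting limit, and all parameter values are purely illustrative.
\begin{verbatim}
import numpy as np

Gamma_L = Gamma_R = 1.0            # lead hybridizations (units of Gamma)

def conductance_per_spin(h_eff):
    # h_eff: 4x4 renormalized level matrix for one spin projection, i.e. the
    # single-particle part of H_QQD plus the (static) fRG self-energy
    leads = 1j * np.diag([Gamma_L, 0.0, 0.0, Gamma_R])   # wide-band leads on QD1, QD4
    Gr = np.linalg.inv(-h_eff + leads)                   # retarded GF at omega = 0
    return 4.0 * Gamma_L * Gamma_R * abs(Gr[0, 3]) ** 2  # G_sigma in units of e^2/h

# illustrative noninteracting example (U = 0): eps = 0, t = 0.5, gamma = 0.9, t14 = 0
eps, t, gam, t14 = 0.0, 0.5, 0.9, 0.0
t12 = t34 = t
t13 = t24 = gam * t
h0 = np.array([[eps, -t12, -t13, -t14],
               [-t12, eps, 0.0, -t24],
               [-t13, 0.0, eps, -t34],
               [-t14, -t24, -t34, eps]])
print(conductance_per_spin(h0))
\end{verbatim}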
It can be seen that in the cases $\left(t,\gamma\right)=\left(0.05,0.9\right)$ and $\left(t,\gamma\right)=\left(0.5,0.9\right)$, which
are characterized by the presence of the almost local magnetic moment(s) in the quantum dots at $\epsilon=0$,
the gate voltage dependence of the linear conductance shows abrupt changes
in the narrow vicinity of some gate voltage.
This behavior of the conductance is associated with the quantum phase transitions at some critical gate voltage $\epsilon_c$ from the local magnetic moment to the "paramagnetic" regime of the system analogous to the ones which take place in the parallel double dot system~\cite{Zitko_2007(2),PK_2016,PK_2017}. The occupation numbers $\langle n_{e,o}\rangle$ and squares of the local moments $\langle S^2_{e,o} \rangle$ are close to their $\epsilon=0$ values at $|\epsilon|<\epsilon_c$, and correspond to paramagnetic state at $|\epsilon|>\epsilon_c$.
The dependence of the linear conductance on the gate voltage exhibits near $|\epsilon|=\epsilon_c$ the presence of the asymmetric Fano-like resonance for $\left(t,\gamma\right)=\left(0.5,0.9\right)$, when for $\epsilon=0$
spin-half local magnetic moment is present
and the sharp peak of the conductance for $\left(t,\gamma\right)=\left(0.05,0.9\right)$ case, which in turn corresponds to two spin-half local magnetic moments in the quantum dot ring at zero gate voltage. For the case $\left(t,\gamma\right)=\left(0.5,0.1\right)$, when no magnetic moments exist in the quantum dots, $G\left(\epsilon\right)$ is a smooth nonmonotonic function of $\epsilon$.
\par
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{G_Vg_v8.png}
\caption{(Color online). Upper/middle panel: The gate voltage dependence of the zero-temperature linear conductance $G$ for $(t, \gamma)=(0.05, 0.9)$ (dashed red line), $(t, \gamma)=(0.5, 0.9)$ (solid black line) and $(t, \gamma)=(0.5, 0.1)$ (dashed-dotted blue line) in the fRG approach without/with the flow of the two-particle vertex.
Lower panel: the conductance in the first-order perturbation theory (solid green line) and in the mean-field approach (dashed-dotted purple line) for $(t, \gamma)=(0.5, 0.9)$. $t_{14}=0$ for all plots.
}
\label{G_Vg}
\vspace{-0.3cm}
\end{figure}
The corresponding results for the linear conductance with account of the vertex flow are presented in the middle panel of Fig.~\ref{G_Vg}.
One can see that the conductance obtained within the
scheme, which does not include the flow of the two-particle vertex functions, qualitatively reproduces the general patterns and the overall features of the corresponding results with the flow of the vertex.
It is also necessary to note that, although in all cases the general behavior of the conductance remained the same in the vicinity of the quantum phase transition after accounting for the flow of the two-particle vertex functions, the quantum phase transition point $\epsilon_c$ shifts toward a lower gate voltage.
\begin{figure}[b]
\centering
\includegraphics[width=0.8\linewidth]{G_Vg_up_down_v2.png}
\caption{(Color online). The gate voltage dependence of the spin-up ($\sigma=\uparrow$, solid red lines) and spin-down ($\sigma=\downarrow$ dashed black lines) zero temperature linear conductance $G_{\sigma}$ for $(t, \gamma)=(0.5, 0.9)$ and $t_{14}=0$ (a), $t_{14}=\Gamma$ (b), $t_{14}=2\Gamma$ (c) in the fRG approach without the flow of the two-particle vertex. }
\label{Gt14}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{Gij_Vg_t14_0_2.png}
\caption{(Color online). Conductances $G_{\uparrow,11}$ (solid black line), $G_{\uparrow,{\rm st}}=G_{\uparrow,22}+G_{\uparrow,44}+2G_{\uparrow,24}$ (dashed red line) and the interference contribution $G_{\uparrow,{\rm if}}=2(G_{\uparrow,12}+G_{\uparrow,14})$ (dashed-dotted blue line) as a function of gate voltage $\epsilon$ for $(t,\gamma)=(0.5, 0.9)$, $t_{14}=0$ (upper panel) and $t_{14}=2\Gamma$ (lower panel).
}
\label{Gij_Vg_t14_0_2}
\end{figure}
To emphasize the importance of using the fRG approach, which yields non-trivial results already in the truncation neglecting the flow of the two-particle vertex, we also show in the lower panel of Fig.~\ref{G_Vg} the results for the linear conductance in the first-order perturbation theory (PT) and the mean-field approach (MF) for $\left(t,\gamma\right)=\left(0.5,0.9\right)$. Within the MF approach, the conductance is strongly suppressed near $\epsilon=0$ compared to the fRG results. This is mainly due to the overestimation of the splitting between the spin-up and spin-down states in the MF approach, which does not allow the conductance to approach, even approximately, the unitary value at small $\epsilon$.
At the same time, the MF approach, yielding substantial spin splitting at small $\epsilon$, is able to predict the existence of the phase with the local magnetic moment. In contrast, the PT theory predicts only the symmetric phase without local moments for all $\epsilon$
although the conductance near $\epsilon=0$ in the PT approach is larger than in the MF and somewhat closer to the unitary limit. Note that the resonance near $\epsilon\approx 0.8\Gamma$ in the PT approach is not related to the transition between different magnetic regimes and arises solely due to the interaction-induced dependence of the perturbation theory energy levels of the QQD system on the gate voltage.
\subsection{Spin-resolved conductances}
In Fig. \ref{Gt14} we consider the spin-resolved conductances $G_\sigma(\epsilon)$ in the case of a single local moment, $(t, \gamma)=(0.5, 0.9)$, which is most interesting for practical applications, since in this case a strong difference between the transport of the two spin projections is expected (we still assume a vanishingly small magnetic field which orients the local moment along the $z$-axis and therefore creates a finite spin splitting of the states, cf. Ref. \cite{PK_2016}). At $t_{14}=0$ we find finite spin-up and spin-down conductances, except in the narrow resonance region. While at finite $t_{14}$ the dependence of the conductance $G_\downarrow(\epsilon)$ for the minority spin projection remains qualitatively similar to that for $t_{14}=0$, the conductance for the majority spin projection $G_\uparrow(\epsilon)$ is suppressed with respect to the $t_{14}=0$ case, and above a certain value of $t_{14}$ it vanishes at some gate voltage, forming a plateau with a small, almost vanishing conductance. This vanishing of the conductance occurs due to destructive interference of different paths of propagation of spin-up electrons (note that the dependence on the spin occurs due to the preferred orientation of the electron spin along the field in the even state, which is favored by ferromagnetic correlations between the even and odd orbitals and the orientation of the local moment along the field).
To get further insight into the mechanism of the conductance in the QQD system and its suppression for the majority spin projection,
we consider partial contributions to the conductance through various states of the system, whose energies $\lambda_m$ ($m=1...4$, including imaginary parts corresponding to the damping due to the connection to the leads) are determined from the diagonalization of the inverse Green function $(\mathcal G^{r}_\sigma(0))^{-1}$ at the end of the flow (due to the frequency independence of the self-energy these eigenvalues also provide the poles of the analytically continued Green function $\mathcal G^{r}_\sigma(\omega)$ in the lower half plane). The obtained eigenstates can be approximately represented as:
\begin{eqnarray}
|\text{es}_1\rangle &\approx& |1\rangle-|4\rangle\notag\\
|{\text{es}}_2\rangle &\approx& \alpha (|2\rangle+|3\rangle)- (|1\rangle+|4\rangle)\notag\\
|\text{es}_3\rangle &\approx& |2\rangle-|3\rangle\notag\\
|{\text{es}}_4\rangle &\approx& \alpha (|1\rangle+|4\rangle)+ (|2\rangle+|3\rangle) \label{QQDstates}
\end{eqnarray}
($\alpha$ depends on the parameters of the system and the spin projection, $|i\rangle$ denotes the state with the considered spin projection $\sigma$ on QD$i$). As shown in Appendix B, the states $|\text{es}_{1,2,4}\rangle$ are similar to those in a three-quantum-dot chain, which corresponds to the QD1$\leftrightarrow$(even state of QD2,3)$\leftrightarrow$QD4 subsystem of the QQD. In particular, the state $|\text{es}_1\rangle$
describes the resonant tunneling between QD1,4; the states $|{\text{es}}_{2,4}\rangle$ describe sequential tunneling through the even state, as well as the tunneling via the hopping $t_{14}$ (when present). Finally, the state $|\text{es}_3\rangle$ is the odd state of QD2,3, discussed above. By representing $G_{\sigma}=\sum_{mm'}G_{\sigma,mm'}$ where
\begin{eqnarray}
G_{\sigma,mm' }&=&(4e^2/h)\Gamma_L \Gamma_R {\rm Re} \left[P^{\sigma}_{m} \left(P^{\sigma}_{m'}\right)^*\right],\end{eqnarray}
we identify the contributions to the conductance through individual eigenstates ($m=m'$) and their interference ($m\ne m'$). Here
$P^{\sigma}_{m}=U^{\sigma}_{1 m}\left[U^{\sigma}\right]^{-1}_{m 4}/\lambda^{\sigma}_{m}$,
where ${U}_{im}^{\sigma}$ is the matrix of the eigenvectors of the inverse Green function $(\mathcal G_\sigma^{r}(0))^{-1}$. We find that the odd state $|\text{es}_3\rangle$ does not contribute to the conductance, except in a narrow region of gate voltages near the resonance.
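The above decomposition is straightforward to evaluate numerically once the zero-frequency inverse retarded Green function is known. As an illustration (not part of the original calculation), a minimal Python sketch of this bookkeeping is given below; the $4\times4$ matrix \texttt{G\_inv} (the inverse retarded Green function at $\omega=0$ for a given spin projection, assumed to be available at the end of the fRG flow) and the hybridizations \texttt{Gamma\_L}, \texttt{Gamma\_R} are the inputs.
\begin{verbatim}
import numpy as np

def partial_conductances(G_inv, Gamma_L, Gamma_R):
    """Split G_sigma = sum_{m,m'} G_{sigma,mm'} into eigenstate
    contributions (diagonal) and interference terms (off-diagonal).

    G_inv : 4x4 complex array, inverse retarded Green function at
            omega = 0 for a given spin projection (assumed to be
            available at the end of the fRG flow).
    Returns the matrix G_{mm'} in units of e^2/h.
    """
    # eigenvalues lambda_m (energy + damping) and eigenvector matrix U
    lam, U = np.linalg.eig(G_inv)
    U_inv = np.linalg.inv(U)

    # P_m = U_{1m} [U^{-1}]_{m4} / lambda_m  (dots 1 and 4 attach to the leads)
    P = U[0, :] * U_inv[:, 3] / lam

    # G_{mm'} = 4 Gamma_L Gamma_R Re[P_m P_{m'}^*]
    return 4.0 * Gamma_L * Gamma_R * np.real(np.outer(P, np.conj(P)))
\end{verbatim}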
The other contributions $G_{\uparrow,mm'}$ are shown in Fig.~\ref{Gij_Vg_t14_0_2} where we group together the contributions of states $|\text{es}_{2,4}\rangle$.
One can see that for $t_{14}=0$ the biggest contribution to the conductance comes from the resonant tunneling ($G_{\uparrow,11}$); for $\sigma=\uparrow$ the sequential tunneling contribution $G_{\sigma,\rm st}=G_{\sigma,22}+G_{\sigma,44}+2G_{\sigma,24}$ and its interference $G_{\sigma,\rm if}=2(G_{\sigma,12}+G_{\sigma,14})$ with the resonant tunneling path are small, similarly to the conductance of the three-dot chain (see Appendix B). When the hopping $t_{14}$ is switched on, the situation changes drastically: the resonant tunneling contribution $G_{\uparrow,11}$ is suppressed due to the shift of the energy levels and becomes comparable to the contribution from the sequential tunneling $G_{\uparrow,{\rm st}}$. At the same time, these two contributions strongly interfere with each other, such that the total conductance vanishes near $\epsilon=0.8\Gamma$. For the other spin projection ($\sigma=\downarrow$, not shown) we find the same resonant tunneling contribution $G_{\downarrow,11}\approx G_{\uparrow,11}$, but for $t_{14}=2\Gamma$ much smaller sequential $G_{\downarrow,\rm st}$ and interference $G_{\downarrow,{\rm if}}$ contributions than the corresponding contributions for $\sigma=\uparrow$; the interference contribution $G_{\downarrow,{\rm if}}$ is also positive for $t_{14}=2\Gamma$. These effects show the possibility of using even a single QQD system as a spin filter in spintronic devices, and they can be further enhanced in quantum networks (cf. Ref. \cite{Fu_2012}), which, however, will be studied elsewhere.
We have verified that for a DQD system in a similar geometry (with direct hopping between the leads included) the suppression of the majority-spin conductance is much smaller than for the QQD system, which is due to the absence of the resonant tunneling eigenstate $|\text{es}_1\rangle$, see Appendix C.
\section{Non-equilibrium regime $(V\neq 0)$}
\subsection{Local magnetic moments}
Let us consider the impact of non-equilibrium zero-temperature conditions, i.e., a finite applied bias voltage $V$, on the local magnetic moments.
We again consider in this subsection the case $t_{14}=\epsilon=0$ (with small finite $\epsilon$ and finite $t_{14}$ yielding qualitatively similar results) and focus on the quantum dot systems with the following hopping parameters $\left(t,\gamma\right)\in\left\{\left(0.05,0.9\right), \left(0.5,0.9\right), \left(0.5,0.1\right)\right\}$, which in the equilibrium case $V=0$ correspond to three different physical situations, discussed in previous subsection: an almost local magnetic moment in both even and odd states (or, equivalently, on the quantum dots QD2 and QD3), the moment in the odd state (i.e. distributed between the QD2 and QD3 quantum dots), and to the absence of a local magnetic moment in the system, respectively.\par
\begin{figure}[b]
\centering
\includegraphics[width=0.8\linewidth]{S2_Vnv4.png}
\caption{(Color online). The average square of a magnetic moment $\langle\mathbf{S}_{e(o)}^{2}\rangle$ in the even (dashed black line) and odd (solid red line) states as a function of bias voltage $V$ for $(t, \gamma)=(0.05, 0.9)$ (a), $(t, \gamma)=(0.5, 0.9)$ (b) and $(t, \gamma)=(0.5, 0.1)$ (c).}
\label{S2_Vn}
\end{figure}
The dependencies of the average square of the spin $\langle \mathbf{S}_{e/o}^2 \rangle$ in the even and odd orbitals on bias voltage $V$ for the first case $\left(t,\gamma\right)=(0.05,0.9)$ are shown in Fig.~\ref{S2_Vn}a.
One can see that increasing bias voltage suppresses the equilibrium value of $\langle \mathbf{S}_{e/o}^2 \rangle$, leading to a double-step behavior, which is related to the strong non-linear change of the renormalized system parameters with the bias voltage. In
Fig.~\ref{SE_e_o} we plot the renormalized energy levels of the even/odd orbitals $\epsilon_{e/o,\sigma}$ and hopping parameters $t^{\sigma}_{eo}$ (the other system parameters are not renormalized) as a function of $V$. We can see that at not too large $V<0.5\Gamma$
the increase of the bias voltage does not lead to a significant change of the renormalized parameters relative to their equilibrium $(V=0)$ values
and $t^{\sigma}_{eo}$ is pinned to zero as shown in the lower panel of Fig.~\ref{SE_e_o}.
Therefore, all non-zero hopping parameters are proportional to $t$ and small because of the initial choice of $t$ ($t/\Gamma=0.05$). In this case, the energy levels of the isolated quantum dot system (eigenvalues of the effective non-interacting Hamiltonian) $E_{j,\sigma}$ ($j=1...4$) can be roughly estimated as a set of one-particle energy levels $\{E_{j,\sigma}\}\approx \{\epsilon_{j,\sigma}\}$. This approximation and the observation that within the considered bias voltage range $\epsilon_{e/o,\uparrow}<\mu_{R}=-V/2$ and $\epsilon_{e/o,\downarrow}>\mu_{L}=V/2$
(see Fig.~\ref{SE_e_o}) allow us to conclude that
$\langle n_{e/o,\uparrow}\rangle \approx 1$ and $\langle n_{e/o,\downarrow}\rangle \approx 0$ for these values of $V$, which reflects the formation of a local magnetic moment with the spin aligned along the infinitesimally small magnetic field.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{SE_e_o_t0_05g0_9.png}
\caption{(Color online). Upper panel: The renormalized energy levels of the odd states $\epsilon_{o,\sigma}$ (thick solid (red) line for $\sigma=\uparrow$ and thin solid (black) line for $\sigma=\downarrow$) and the even states $\epsilon_{e,\sigma}$ (thick dashed (green) line for $\sigma=\uparrow$ and thin dashed (blue) line for $\sigma=\downarrow$) as a function of bias voltage $V$.
Lower panel: The renormalized hopping matrix element $t^{\sigma}_{eo}$ (solid black/dashed red line for $\sigma=\uparrow/\downarrow$) as a function of bias voltage $V$ for $(t, \gamma)=(0.05, 0.9)$.
}
\label{SE_e_o}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{Neo_Vv5.png}
\caption{(Color online). The occupation numbers in the odd orbitals $\langle n_{o,\sigma}\rangle$ (thick solid (red) line for $\sigma=\uparrow$ and thin solid (black) line for $\sigma=\downarrow$) and the even orbitals $\langle n_{e,\sigma}\rangle$ (thick dashed (green) line for $\sigma=\uparrow$ and thin dashed (blue) line for $\sigma=\downarrow$) as a function of bias voltage $V$ for $(t, \gamma)=(0.05, 0.9)$ (a), $(t, \gamma)=(0.5, 0.9)$ (b) and $(t, \gamma)=(0.5, 0.1)$ (c).}
\label{Neo_V}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{NN_V_v3.png}
\caption{(Color online). The total occupation number of the spin-up states $\langle n_{\uparrow}\rangle$ (solid red line) and spin-down states $\langle n_{\downarrow}\rangle$ (dashed black line) as a function of bias voltage $V$ for $(t, \gamma)=(0.05, 0.9)$ (a), $(t, \gamma)=(0.5, 0.9)$ (b) and $(t, \gamma)=(0.5, 0.1)$ (c).}
\label{NN_V}
\end{figure}
With further increase of the bias voltage the renormalized energy levels $\epsilon_{e/o,\sigma}$ corresponding to different spin projections approach each other (see Fig.~\ref{SE_e_o}), and, therefore, the spin splitting
decreases with $V$.
It is important that the spin
splitting does not collapse completely
even for sufficiently large values of the bias voltage.
In Fig.~\ref{SE_e_o} we observe the region of the intermediate voltages $0.5 \lesssim V/\Gamma\lesssim 2.1$ for which the splitting of the energy levels is still significant. In contrast to the above considered case, the
bias voltages in this range lead to the appearance of a nonzero hopping amplitude between the even and odd orbitals $t^{\sigma}_{eo}\gg t$, which increases monotonically with increasing bias voltage, does not depend on the spin orientation and provides additional hybridization of the even/odd states due to the appearance of new possible paths between these states and the leads.
The combined effect of the sharp increase of this amplitude and the decrease of the energy-level splitting
results in an abrupt drop of $\langle \mathbf{S}_{e/o}^2 \rangle$, as seen in Fig.~\ref{S2_Vn}a. The value of the square of the moment $\langle \mathbf{S}_{e/o}^2 \rangle$ in the range $0.5 \lesssim V/\Gamma\lesssim 2.1$
is different from the non-interacting value $3/8$ due to correlations. This intermediate state can be considered as possessing a fractional quasi-local magnetic moment in the even and odd orbitals, whose appearance is possible entirely due to the considered non-equilibrium conditions.
In the regime of high bias voltage $V \gtrsim 2.1\Gamma$ the even/odd spin-up and -down states are only slightly split and $t^{\sigma}_{eo}$ practically does not change with increasing $V$ (see Fig.~\ref{SE_e_o}a). Such a small splitting in the spin space results in the absence of the magnetic moments in the system and we find $\langle \mathbf{S}_{e/o}^2 \rangle \approx \langle \mathbf{S}_{j}^2 \rangle \approx 3/8$.
The calculation of the average occupation numbers confirms the results obtained above (see Figs.~\ref{Neo_V}a and \ref{NN_V}a).
In the range $V\lesssim \Gamma/2$, for the quantum dots QD1 and QD4 we find $\langle n_{1(4),\sigma}\rangle\approx 0.5$ (the corresponding bias voltage dependencies are not presented here). Consequently, we have $\langle n_{\uparrow}\rangle\approx 3$ and $\langle n_{\downarrow}\rangle\approx 1$ for the total occupation number of the states with spin projection $\sigma$, $\langle n_{\sigma}\rangle=\sum_{j}{\langle n_{j,\sigma}\rangle}$ $(\langle n_{2,\sigma}\rangle+\langle n_{3,\sigma}\rangle=\langle n_{e,\sigma}\rangle+\langle n_{o,\sigma}\rangle)$, and therefore $\langle n_{\uparrow}\rangle-\langle n_{\downarrow}\rangle\approx 2$ for these bias voltages (note that we consider only the half-filling case $\epsilon=0$ and $H\rightarrow 0$, which implies $\langle n\rangle=\langle n_{\uparrow}\rangle+\langle n_{\downarrow}\rangle=4$). Thus, one can conclude that at bias voltages $V\lesssim \Gamma/2$
the values of the occupation numbers and spin-spin correlation functions
almost coincide with the equilibrium ones.
For larger $V$
the obtained occupation numbers $\langle n_{e/o,\uparrow}\rangle$ ($\langle n_{e/o,\downarrow}\rangle$)
are less (greater) than those for the case of $V\lesssim \Gamma/2$ (see Fig.~\ref{Neo_V}a). However,
the difference between the occupation numbers of spin-up and spin-down states still remains significant in the range $0.5 \lesssim V/\Gamma\lesssim 2.1$.
As can be seen from the Fig.~\ref{Neo_V}a, in case $V \gtrsim 2.1\Gamma$ we have $\langle n_{e/o,\uparrow}\rangle \approx\langle n_{e/o,\downarrow}\rangle\approx 0.5$.
Let us now consider the case $\left(t,\gamma\right)=(0.5,0.9)$, when the hopping matrix elements $t_{ij}$ are an order of magnitude larger than in the previous case, but have the same ratio between them. In this case the renormalized energy levels $\epsilon_{e/o,\sigma}$ (see Fig.~\ref{SE_e_o_b}) behave near the equilibrium quite analogously to the above considered case $\left(t,\gamma\right)=(0.05,0.9)$; however, despite the presence of the large splitting between the spin-up and spin-down states of the even and odd orbitals, the local magnetic moment appears only in the odd orbital, which is clearly seen from the bias voltage dependence of $\langle \mathbf{S}_{e/o}^2 \rangle$ shown in Fig.~\ref{S2_Vn}b. As in the equilibrium case, for $V\lesssim \Gamma/3$ we obtain $\langle \mathbf{S}_{o}^2 \rangle\approx 3/4$, while $\langle \mathbf{S}_{e}^2 \rangle\approx 3/8$.
In contrast to the above considered case of small $t$, the hopping matrix elements $t^{\sigma}_{eo}$ are non-zero even in the low bias region as shown in the lower panel of Fig.~\ref{SE_e_o_b}. However, the generated hopping parameters $t^{\sigma}_{eo}$ are small enough and do not provide
the hybridization between the odd orbital and the leads sufficient to destroy the magnetic moment. In contrast to the case $(t,\gamma)=(0.05,0.9)$ there is no region of intermediate level splitting, and for $V \gtrsim \Gamma/3$ we have $|\epsilon_{e/o,\uparrow}-\epsilon_{e/o,\downarrow}|\approx H\rightarrow 0$. This leads to the sharp decrease of $\langle \mathbf{S}_{o}^2\rangle$ near the voltage $V=\Gamma/3$ from almost its maximum value of $\langle \mathbf{S}_{o}^2\rangle=3/4$ to $\langle \mathbf{S}_{o}^2\rangle\approx 3/8$ (see Fig.~\ref{S2_Vn}b), such that the magnetic moment is absent for $V \gtrsim \Gamma/3$. As for the above considered case of small $t$,
we have $\langle n_{o,\uparrow(\downarrow)}\rangle\approx 1(0)$ in the regime with the magnetic moment $(V\lesssim \Gamma/3)$ and $\langle n_{o,\uparrow/\downarrow}\rangle\approx 0.5$ for larger $V$. At the same time, we find that $\langle n_{e,\sigma}\rangle\approx 0.5$ for all values of $V$. As a result,
the local moment regime is characterized by a difference in the total occupation numbers for the spin-up and spin-down states approximately equal to one
($\langle n_{\uparrow}\rangle\approx 2.5$ and $\langle n_{\downarrow}\rangle\approx 1.5$, see Fig.~\ref{NN_V}b). It is worth noting that the small difference between the occupation numbers $\langle n_{e,\uparrow}\rangle$ and $\langle n_{e,\downarrow}\rangle$ for $V\lesssim \Gamma/3$ (see Fig.~\ref{Neo_V}b) is likely due to the overestimation of the spin splitting
of the energy levels of the even orbital in the fRG scheme, which does not take into account the renormalization of the non-diagonal self-energy elements in the considered order of truncation. This small splitting is not expected to affect the obtained results regarding the presence of a local magnetic moment at finite $V$.\par
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{SE_e_o_t0_5g0_9.png}
\caption{(Color online).
The same as Fig. \ref{SE_e_o} for
$(t, \gamma)=(0.5, 0.9)$.}
\label{SE_e_o_b}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{SE_e_o_t0_5g0_1_v2.png}
\caption{(Color online). The same as Fig. \ref{SE_e_o} for
$(t, \gamma)=(0.5, 0.1)$.}
\label{SE_e_o_c}
\vspace{-0.5cm}
\end{figure}
Finally, we consider the case $(t,\gamma)=(0.5,0.1)$ in which the quantum dot system has a strong hopping asymmetry and both the even and odd orbitals are coupled to the quantum dots QD1 and QD4 by almost comparable hopping parameters: $t_{1e}\approx t_{4o}\approx 0.5$, $t_{1o}=0$ and $t_{4e}\approx 0.1$. In this case we do not find any splitting between the spin-up and spin-down energy states of the even/odd orbital (see Fig.~\ref{SE_e_o_c}) and, as a consequence, $\langle \mathbf{S}_{e/o}^2 \rangle\approx 3/8$ for an arbitrary bias voltage, as can be seen in Fig.~\ref{S2_Vn}c. In addition, we find a strong renormalization of the energy levels: in particular, $\epsilon_{e,\sigma}$ ($\epsilon_{o,\sigma}$) is proportional to $\mu_{L}$ ($\mu_{R}$) within a wide range of bias voltages near $V=0$ and slowly decreases (increases) with a further increase of the bias voltage. Note that $t^{\sigma}_{eo}$ shows a linear behavior for bias voltages $V\lesssim 3\Gamma$
and becomes almost constant at higher bias voltages (see the lower panel of Fig.~\ref{SE_e_o_c}).
This behavior of the renormalized parameters leads to the possibility of a significant deviation of the occupation numbers $\langle n_{e/o,\sigma}\rangle$ (see Fig.~\ref{Neo_V}c) from their equilibrium values $\langle n_{e/o,\sigma}\rangle\stackrel{V\rightarrow 0}{\approx} 0.5$, while
the occupation numbers $\langle n_{\uparrow,\downarrow}\rangle\approx 2$
are only slightly different from each other (see Fig.~\ref{NN_V}c).
In the limit of large bias voltages $V\gg \Gamma$ the occupation numbers converge to $\langle n_{e,\sigma}\rangle=1$
and $\langle n_{o,\sigma}\rangle=0$
in contrast to the previous cases, where $\langle n_{e/o,\sigma}\rangle\approx 0.5$ for $V\gg \Gamma$. This behavior
originates from the fact that in the considered case
the coupling between the even (odd) orbital and the left (right) lead
is much stronger than the corresponding coupling with the
right (left) lead, which makes the filling of the even (odd) orbital energetically (un)favorable for $V\gg \Gamma$.
Similar conclusions can be made concerning the fillings at the individual quantum dots, and, as expected from the above qualitative discussion, for $V\gg \Gamma$ we find $\langle n_{1(2),\sigma}\rangle\approx 1$, while $\langle n_{3(4),\sigma}\rangle\approx 0$.\par
\begin{figure}[b]
\centering
\includegraphics[width=0.8\linewidth]{SiSj_V_v10.png}
\caption{(Color online). The spin--spin correlation function $\langle \mathbf{S}_{1}\mathbf{S}_{2}\rangle$ (dashed red line), $\langle \mathbf{S}_{1}\mathbf{S}_{3}\rangle$ (solid blue line) and $\langle \mathbf{S}_{2}\mathbf{S}_{3}\rangle$ (dashed-dotted black line) as a function of bias voltage $V$ for (a) $(t, \gamma)=(0.05, 0.9)$, (b) $(t, \gamma)=(0.5, 0.9)$ and (c) $(t, \gamma)=(0.5, 0.1)$. The value of $\langle \mathbf{S}_{e}\mathbf{S}_{o}\rangle$ at zero bias voltage is indicated by the gray circle.}
\label{SiSj_V}
\end{figure}
The spin-spin correlation functions $\langle \mathbf{S}_{i}\mathbf{S}_{j}\rangle$ corresponding to the above-considered regimes of the system are shown in Figs.~\ref{SiSj_V}a-\ref{SiSj_V}c. One can see that the formation of the magnetic moment in the system is accompanied by the appearance of ferromagnetic correlation between spins on the quantum dots QD2 and QD3, $\langle \mathbf{S}_{2}\mathbf{S}_{3}\rangle>0$, which
becomes stronger with increasing value of the magnetic moment. For the regimes
without a magnetic moment we find $\langle \mathbf{S}_{2}\mathbf{S}_{3}\rangle\approx 0$. Thus, in the cases $(t, \gamma)=(0.05, 0.9)$ and $(t, \gamma)=(0.5, 0.9)$, $\langle \mathbf{S}_{2}\mathbf{S}_{3}\rangle$ shows step-like behavior as a function of bias voltage. The spin-spin correlation functions $\langle \mathbf{S}_{1}\mathbf{S}_{2}\rangle$=$\langle \mathbf{S}_{3}\mathbf{S}_{4}\rangle$ and $\langle \mathbf{S}_{1}\mathbf{S}_{3}\rangle$=$\langle \mathbf{S}_{2}\mathbf{S}_{4}\rangle$ are always negative (antiferromagnetic) and are proportional in magnitude to the hopping amplitudes between the corresponding quantum dots, i.e., $|\langle \mathbf{S}_{i}\mathbf{S}_{j}\rangle|\sim t_{ij}$. We also note that for all three considered cases the spin-spin correlation between the quantum dots QD1 and QD4 is almost absent, $\langle \mathbf{S}_{1}\mathbf{S}_{4}\rangle\approx 0$.\par
\subsection{Current $J$}
In this subsection we first present zero-temperature results for the $J-V$ characteristics and the bias voltage dependence of the differential conductance $G=e ({dJ}/{dV})$ for the cases considered in the previous subsection. The current with spin $\sigma$ through the lead $\alpha$ is written as~\cite{Meir_1992}
\begin{multline}
J^{\alpha}_{\sigma}=\dfrac{2ie}{h}\Gamma_{\alpha}\sum_{j}{\Theta_j^\alpha}\int{d\omega}\left\{f\left(\omega-\mu_{\alpha}\right)\left[\mathcal{G}_{jj;\sigma}^{r}\left(\omega\right)\right.\right.\\-\left.\left.\mathcal{G}_{jj;\sigma}^{a}\left(\omega\right)\right]+\mathcal{G}_{jj;\sigma}^{-+;0}\left(\omega\right)\right\},
\end{multline}
where $\mathcal{G}^{a}=\mathcal{G}^{--;0}-\mathcal{G}^{+-;0}$ is the advanced Green function at the end of the fRG flow. Using the explicit form of the propagator $\mathcal{G}$ given by Eq.~(\ref{Gf}), we can reduce the above expression to a more convenient form
\begin{equation}
J^{\alpha=L(R)}_{\sigma}=\dfrac{2ie}{h}{\Gamma_{\alpha}}\sum_j\Theta^\alpha_j\int_{\mu_{r}}^{\mu_{l}}{\mathcal{G}_{jj,\sigma}^{+-(-+);0}\left(\omega\right)d\omega},
\label{Jas}
\end{equation}
where we have used that the non-diagonal components of the self-energy do not flow, $\partial_{\Lambda}\Sigma^{kk^{'};\Lambda}_{jj^{'}}\sim\delta_{kk^{'}}\delta_{jj^{'}}$ (see Eq.~\ref{fRG_Eq}), and we have taken the zero-temperature limit for the Fermi functions.\par
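Equation (\ref{Jas}) reduces the current to a frequency integral of the diagonal Keldysh components over the bias window. A schematic Python quadrature of this expression is sketched below; the callable \texttt{G\_pm\_diag}, returning $\mathcal{G}^{+-;0}_{jj,\sigma}(\omega)$ at the end of the flow, and the weights \texttt{Theta\_L} selecting the dots coupled to the left lead are placeholders for the actual fRG output (they are not defined in the text).
\begin{verbatim}
import numpy as np

def current_left(G_pm_diag, Theta_L, Gamma_L, mu_L, mu_R, n_omega=2000):
    """Schematic quadrature of Eq. (Jas) for the left lead at T = 0.

    G_pm_diag : callable, G_pm_diag(omega) -> length-4 array of the
                diagonal Keldysh components G^{+-;0}_{jj,sigma}(omega)
                at the end of the flow (placeholder for the fRG output).
    Theta_L   : length-4 weights selecting the dots coupled to the left lead.
    Returns J_L in units of e/h.
    """
    omegas = np.linspace(mu_R, mu_L, n_omega)
    integrand = np.array([np.dot(Theta_L, G_pm_diag(w)) for w in omegas])
    integral = np.trapz(integrand, omegas)
    # the diagonal G^{+-} components are purely imaginary, so the
    # prefactor 2i is expected to make the result real (up to noise)
    return np.real(2j * Gamma_L * integral)
\end{verbatim}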
The total current $J$ can be calculated as
\begin{equation}
J=\dfrac{1}{2}\sum_{\sigma}\left(J^{L}_{\sigma}-J^{R}_{\sigma}\right).
\end{equation}
Note that $|J^{R}_{\sigma}|=J^{L}_{\sigma}$ due to the conservation of the current. The dependences of the corresponding currents $J$
on the bias voltage $V$ for $t_{14}=\epsilon=0$ are shown in Fig.~\ref{J_V1}. We also plot the zero-temperature differential conductance $G=\sum_\sigma G_\sigma$, where $G_{\sigma}=e ({dJ^{L}_{\sigma}}/{dV})=-e ({dJ^{R}_{\sigma}}/{dV})$,
in Fig.~\ref{G_V}.
In the equilibrium limit $V\rightarrow 0$ the current vanishes and for the differential conductance we obtain
\begin{equation}
G_{\sigma}^{0}=\dfrac{ie^{2}}{h}\Gamma_{L}\left[\mathcal{G}_{11;\sigma}^{+-;0}\left(\mu_{L}-0\right)+\mathcal{G}_{11;\sigma}^{+-;0}\left(\mu_{R}+0\right)\right]
\label{G1}
\end{equation}
which coincides with the conductance obtained from the equilibrium Matsubara functional renormalization group method within the Landauer formalism (see Appendix D). In the opposite limit of large bias voltage $V\gg \Gamma$, the current saturates and we find that $G\rightarrow 0$ for all regimes of interest.\par
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{J_V1_v5.png}
\caption{(Color online). Zero-temperature current $J$ as a function of bias voltage $V$ for $(t, \gamma)=(0.05, 0.9)$ (a), $(t, \gamma)=(0.5, 0.9)$ (b) and $(t, \gamma)=(0.5, 0.1)$ (c), and $t_{14}=\epsilon=0$.}
\label{J_V1}
\end{figure}
As one can see from Fig.~\ref{J_V1}a, in the case of $(t,\gamma)=(0.05,0.9)$ the $J-V$ curve shows a staircase-like structure with two sharp steps, which take place at the same bias voltages at which $\langle \mathbf{S}_{e/o}^2 \rangle$ shows step-like behavior in Fig.~\ref{S2_Vn}a. As a result, the differential conductance $G$
(see Fig.~\ref{G_V})
exhibits two narrow peaks located near $V\approx 0.5\Gamma$ and $V\approx 2.1\Gamma$; the first conductance peak almost reaches the unitary limit of the conductance $G=2e^{2}/h$. For bias voltages outside the regions of conductance peaks, we find $G\approx 0$.
These two peaks are in contrast to the single peak in the gate voltage dependence of the linear conductance at $V=0$
(see Fig.~\ref{G_Vg}).
It is also important to note that the $J-V$ characteristic contains regions in which the current decreases with the increase of bias voltage, leading to the negative differential conductance (NDC). As will be shown below, the appearance of NDC is associated with a strong dependence of the renormalized system parameters on the bias voltage, which is in turn induced by the electron-electron interaction.\par
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{G_V_v4.png}
\caption{(Color online). Zero-temperature differential conductance $G$ as a function of bias voltage $V$ for $(t, \gamma)=(0.05, 0.9)$ (dashed red line), $(t, \gamma)=(0.5, 0.9)$ (solid black line) and $(t, \gamma)=(0.5, 0.1)$ (dashed-dotted blue line: the result for the conductance $G$ was multiplied by 10), and $t_{14}=\epsilon=0$.
}
\label{G_V}
\end{figure}
For $(t,\gamma)=(0.5,0.9)$, the current shows a small-amplitude abrupt jump (not distinguishable in Fig.~\ref{J_V1}b), which is located, as in the above case, at the transition between the different magnetic regimes and results in the asymmetric resonance peak of the differential conductance
for $V\approx\Gamma/3$. It is interesting to note that the conductance reaches its maximum value in the vicinity of the resonance. Overall, in this case the conductance/current takes significantly higher values compared with those for $(t,\gamma)=(0.05,0.9)$. This holds already for $U=0$ and is related to the large coupling strength between the quantum dots. In addition,
the conductance becomes negative in two regions of bias voltage: the narrow region near the conductance dip and the semi-infinite one for higher voltages.\par
Finally, in the case of $(t,\gamma)=(0.5,0.1)$, where the magnetic moment is absent for any value of $V$, the current does not show any abrupt behavior and changes smoothly with bias voltage, as shown in Fig.~\ref{J_V1}c. However, the $J-V$ characteristic is strongly non-linear, which
is the result of the non-linear behavior of the renormalized system parameters.
The NDC effect is also present in this case.\par
As evident from the above results, each sharp jump in the current indicates a transition between the regimes with different magnetic moment values. At the same time, a negative differential conductance appears even in the regime without local magnetic moments, as we have shown for the $(t, \gamma)=(0.5, 0.1)$ case. In order to get insight into the origin of the NDC behavior, we consider the explicit expression for the zero-temperature conductance $G_{\sigma}$.
Direct differentiation of Eq.~(\ref{Jas}) yields $G_{\sigma}=G_{\sigma}^{0}+G_{\sigma}^{\text{\rm I}}$, where
\begin{equation}
G_{\sigma}^{\text{\rm I}}
=\dfrac{e^{2}}{h}\sum_{p}{K_{p,\sigma}\dfrac{d\epsilon_{p,\sigma}}{dV}}
\end{equation}
with
\begin{equation}
K_{p,\sigma}=2i{\Gamma_{L}}\int_{\mu_{r}}^{\mu_{l}}{\left(\mathcal{G}_{1p;\sigma}^{+-;0}\mathcal{G}_{p1;\sigma}^{--;0}-\mathcal{G}_{1p;\sigma}^{++;0}\mathcal{G}_{p1;\sigma}^{+-;0}\right)d\omega},
\end{equation}
where $p\in\{1,2,3,4\}$.
The contribution $G^{\rm I}$ represents the essentially non-equilibrium part of the conductance (which vanishes in the limit $V\rightarrow0$), corresponding to passing the current through each of the quantum dots $p$, and, as shown below, it is responsible for the NDC phenomenon (we note that the contributions $G^{\rm 0}_\sigma$ are also affected by the finite bias voltage, but always remain positive, see Appendix D).
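Since $G^{\rm I}_\sigma$ is simply a sum of the products $K_{p,\sigma}\,{d\epsilon_{p,\sigma}}/{dV}$, the dot-resolved analysis of the NDC can be assembled in a few lines. The Python sketch below assumes (purely for illustration) that the renormalized levels $\epsilon_{p,\sigma}(V)$ and the coefficients $K_{p,\sigma}(V)$ have already been tabulated on a voltage grid.
\begin{verbatim}
import numpy as np

def nonequilibrium_conductance(V, eps_p, K_p):
    """Assemble D_{p,sigma} = K_{p,sigma} d eps_{p,sigma}/dV and G^I_sigma.

    V     : 1D array of bias voltages.
    eps_p : array (4, len(V)) of renormalized levels eps_{p,sigma}(V).
    K_p   : array (4, len(V)) of the coefficients K_{p,sigma}(V).
    Returns (D, G_I), both in units of e^2/h.
    """
    deps_dV = np.gradient(eps_p, V, axis=1)   # numerical derivative along V
    D = K_p * deps_dV                         # per-dot contributions D_{p,sigma}
    G_I = D.sum(axis=0)                       # G^I_sigma = sum_p D_{p,sigma}
    return D, G_I
\end{verbatim}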
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]
{G12_V_v6.png}
\caption{(Color online). The panels (a)-(c): The bias voltage dependence of ${d\epsilon_{p,\sigma}}/{dV}$ (a), $K_{p,\sigma}$ (b) and $D_{p,\sigma}$ (c) for $\sigma=\uparrow$. The thin solid (blue), thick solid (red), thick dashed (black), and thin dashed (green) lines correspond to $p=1,2,3$ and 4, respectively. The lower panel (d): The bias voltage dependence of the differential conductance $G_{\sigma}$ (thin solid (black) line), $G_{\sigma}^{0}$ (thick solid (red) line) and $G_{\sigma}^{\text{\rm I}}$ (thick dashed (blue) line) for $\sigma=\uparrow$.}
\label{G12_V}
\end{figure}
\par
As an example, let us analyze the magnitude and sign of the contributions $G_{\sigma}^{0}$ and $G_{\sigma}^{\text{\rm I}}$ to the differential conductance $G_{\sigma}$ for the case $(t,\gamma)=(0.5,0.1)$, $t_{14}=\epsilon=0$, and $\sigma=\uparrow$ (for $\sigma=\downarrow$ we obtain the same results). Note that the conductance $G_{\uparrow}$ reproduces all the features of the total conductance $G$ (see Fig.~\ref{G12_V}d).
The term $G_{\uparrow}^{0}$ is positive for any bias voltage $V$ (see Appendix D), and thus does not contribute to the NDC effect.
The sign of $G_{\sigma}^{\text{\rm I}}$ is determined by the
sign of the product $K_{p,\sigma}({d\epsilon_{p,\sigma}}/{dV}).$
As shown in Fig.~\ref{G12_V}a, ${d\epsilon_{p,\uparrow}}/{dV}$ can be positive definite $(p=1)$, negative definite $(p=4)$ or even change sign $(p=2,3)$. Moreover, we find that the coefficients $K_{p,\uparrow}$ are also not sign-definite (see Fig.~\ref{G12_V}b). It is important to note that $\left|{d\epsilon_{2(3),\uparrow}}/{dV}\right|>\left|{d\epsilon_{1(4),\uparrow}}/{dV}\right|$ and $|K_{2(3),\uparrow}|\gg|K_{1(4),\uparrow}|$ in a wide region of intermediate values of $V$, which means that the terms corresponding to the quantum dots $p=2,3$
give the main contribution to $G_{\uparrow}^{\text{\rm I}}$. This is supported by the bias voltage dependence of the functions $D_{p,\uparrow}=K_{p,\uparrow}({d\epsilon_{p,\uparrow}}/{dV})$
shown in Fig.~\ref{G12_V}c. As we can see, $D_{2(3),\uparrow}$ is negative definite (almost everywhere) and has a much greater impact on the conductance, while $D_{1(4),\uparrow}$ is predominantly negative and small in magnitude for all bias voltages. As a result, we find that $G_{\uparrow}^{\text{\rm I}}$ is always negative for an arbitrary value of $V$ and is comparable in magnitude with $G_{\uparrow}^{0}$ (see Fig.~\ref{G12_V}d), strongly suppressing the Landauer-type contribution $G_{\uparrow}^{0}$ to the differential conductance $G_{\uparrow}$ and even changing the sign of the total conductance. This eventually leads to the appearance of
the NDC effect when the non-equilibrium part dominates, $\left|G_{\uparrow}^{\text{\rm I}}\right|>G_{\uparrow}^{0}$.
Comparing the obtained results for $t_{14}=\epsilon=0$ to those for the DQD system (see Appendix C), we find that the double quantum dot system shows a qualitatively similar picture of the magnetic moment(s) and differential conductance as the QQD system. In particular, as for the QQD system, in the DQD system regimes with two, one or none of the magnetic moment(s) in the quantum dots can be realized depending on the choice of the geometry of the system.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{J_V_up_down_0_8_v4.png}
\caption{(Color online). Zero-temperature current $J_{\sigma}=J^{L}_{\sigma}=|J^{R}_{\sigma}|$ for spin-up ($\sigma=\uparrow$, solid red lines) and spin-down ($\sigma=\downarrow$, dashed black lines) electrons as a function of bias voltage $V$ for $(t, \gamma)=(0.5, 0.9)$, $\epsilon=0.8\Gamma$ and $t_{14}=\Gamma$ (upper panel), $t_{14}=2\Gamma$ (lower panel). Insets zoom the $J_{\sigma}(V)$ dependences at small $V$.}
\label{FigJVt14}
\end{figure}
Finally, we also present the results for the bias voltage dependence of the spin-resolved currents at finite $t_{14}$ (see Fig. \ref{FigJVt14}). In this case we choose $\epsilon=0.8\Gamma$, which corresponds to the gate voltage near the minimum of the $G_\uparrow(\epsilon)$ conductance. One can see that for $t_{14}=2\Gamma$, when in equilibrium $G_\uparrow(\epsilon)=0$, the corresponding current $J_\uparrow(V)$ almost vanishes in a finite range of bias voltages $V<0.15\Gamma$, and remains small outside this range up to $V\sim\Gamma$. This shows the possibility of spin filtering by the QQD system even at small finite bias voltages.
\section{Conclusions}
In summary, within the non-equilibrium functional renormalization group approach we have discussed, in the zero-temperature limit, the possibility of the formation of magnetic moments as well as the near-equilibrium and non-equilibrium electron transport in the QQD system coupled to two leads. Our calculations have shown that, depending on the inter-dot coupling (hopping) configuration and the bias voltage $V$, different magnetic regimes can be realized in the QQD system.\par
We have first explored the formation of the magnetic moments in the equilibrium $(V=0)$ case. In that case we have shown that the considered fRG approach neglecting the vertex flow qualitatively correctly reproduces the results obtained within a more sophisticated fRG approach which accounts for the flow of the vertices, and which in turn showed good agreement with the numerical renormalization-group analysis for the DQD system.
We have found three different magnetic regimes that can be achieved in the QQD system by tuning the inter-dot hopping parameters: with two, one or no magnetic moments. As for the parallel double quantum dot system, this difference
can be understood on the basis of the "even-odd" states.
The first case (two magnetic moments) corresponds to the situation, where all inter-dot hopping parameters are small compared to the other parameters of the system.
We have found that the realization of the second and third cases depends on the inter-dot coupling of the "odd" states: a well-defined magnetic moment occurs when the coupling of the "odd" states is sufficiently small.
While the above mentioned properties are similar to those of the DQD system, in the QQD system the possibility of resonant tunneling between the opposite quantum dots yields somewhat different transport properties from those of the DQD system. This difference becomes especially prominent in the presence of direct hopping between the opposite quantum dots attached to the leads. In particular, in the presence of this hopping and one local moment in the ring, the conductance for one of the spin projections, the one oriented along the infinitesimally small magnetic field, is suppressed due to interference effects, such that the QQD system can be used in spintronic devices.
Then we have considered the influence of the non-equilibrium conditions, appearing because of the finite bias voltage, on the above listed magnetic states of the QQD system. We have found that the magnetic moments (if they exist) remain stable in a wide range of voltages near $V=0$. At the same time, for higher bias voltages the destruction of the magnetic state occurs and proceeds in one (two) stage(s) for the QQD systems whose coupling configuration allows the formation of the one (two) local moment regime. For the two-stage process the intermediate state possesses a fractional magnetic moment.
The current-voltage characteristics and the differential conductances of the system exhibit sharp features at the transition points between different magnetic phases and show negative differential conductance (NDC) behavior.
It is important to note that although the frequency-independent fRG approximation used in the present study is applicable for the study of the formation of local magnetic moment(s) and the transport properties of quantum dot systems in the regime of small to intermediate Coulomb interactions, it cannot be used to describe spectral functions of the system, as well as various properties associated with the imaginary part of the self-energy, for example, the spin relaxation processes~\cite{Shnirman_2003}. For the description of these properties numerical approaches, in particular the numerical renormalization group, should be further developed. At the same time, the presented study can help to interpret/achieve new results in experimental realizations of QQD systems, including their use in spintronic devices, as well as serve as a guide for studying larger quantum dot and other nanoscopic systems which include closed-path (ring) geometries, e.g., organic molecules.\par
{\it Acknowledgements} The work is supported by the theme Quant AAAA-A18-118020190095-4 of FASO Russian Federation, RFBR grant 17-02-00942a and the project 18-2-2-11 of Ural Branch RAS.
\section{Introduction}
Combining with one time pad \cite{shannon1949communication}, quantum key distribution (QKD) \cite{bennett1984quantum,ekert1991quantum} can offer a private communication with an information-theoretical security \cite{lo1999unconditional,shor2000simple,mayers2001unconditional,renner2005information}.
In practical QKD implementations, a weak pulsed laser source is utilized in place of an ideal single-photon source. To deal with the security vulnerability coming from the multi-photon components of emitted laser pulses \cite{brassard2000limitations,pns2002quantum}, the decoy-state method was proposed \cite{hwang2003quantum,wang2005beating,lo2005decoy}. With this method, the secure single-photon contribution can be estimated effectively.
In the asymptotic setting (with infinitely long keys), the security of decoy-state QKD has been analyzed \cite{wang2005beating,lo2005decoy}.
In the case of finite-length keys, security bounds against general attacks were first derived in Ref. \cite{hayashi2014security}.
Subsequently, Ref. \cite{lim2014concise} derives concise and tight finite-key security bounds for efficient three-intensity decoy-state protocol by combining the recent security proof technique \cite{tomamichel2011uncertainty,tomamichel2012tight} with a finite-key analysis for decoy-state method.
The simulation results show that these bounds are relatively tight.
Four-intensity decoy-state protocols are researched in \cite{wang2005decoy,hayashi2007general,zhou2014tightened,yu2015decoy}.
However, in those works, the four different intensities are mainly used to obtain a tighter estimation formula for the single-photon error rate \cite{hayashi2007general,zhou2014tightened}, and the secret key rates are calculated in the asymptotic setting.
For practical QKD implementations, the effects due to finite-length keys should be considered, e.g., statistical fluctuations \cite{hayashi2014security,lim2014concise,cai2009finite}.
Thus, in the finite-key setting, statistical fluctuations of four different measurement values (the numbers of quantum bit errors) have to be taken into account when all four intensities are utilized in one estimation formula for the single-photon error rate \cite{hayashi2007general,zhou2014tightened}, which may, on the contrary, bring a lower secret key rate, especially when the transmission distance is large.
Here, we propose an efficient four-intensity decoy-state QKD protocol with biased basis choice.
Unlike previous four-intensity protocols, in this protocol the basis choice is biased and the estimation method for the single-photon contribution is the same as the widely used one \cite{lim2014concise,ma2005practical}.
Additionally, different from efficient three-intensity protocols \cite{lim2014concise,wei2013decoy,lucamarini2013efficient}, the intensities and the bases in our protocol are independent, except for the lowest intensity.
More specifically, in our protocol the $Z$ basis is used for key generation, where three different intensities are utilized, and the $X$ basis is used for testing, where two different intensities are used \cite{Lo2005efficient}.
The two higher intensities in the $Z$ basis (i.e., all except the lowest intensity) are independent of the higher intensity in the $X$ basis.
Compared with efficient three-intensity protocols, the intensities in our protocol can be freely optimized to increase the detected pulses in two bases (for a fixed number of sent pulses $N$) and decrease the statistical deviations caused by finite-length key.
Using the universally composable finite-key analysis method \cite{lim2014concise}, we derive concise security bounds for our efficient four-intensity protocol.
With these bounds and system parameters in Ref. \cite{lim2014concise}, we perform some numerical simulations with full parameter optimization.
When the number of sent pulses $N$ is $10^9$, compared with the efficient three-intensity protocol \cite{lim2014concise}, our protocol can increase the secret key rate by at least $30\%$.
In particular, this improvement of the secret key rate grows with increasing transmission distance.
\section{Protocol description}
In this paper, we consider the efficient BB84 protocol \cite{Lo2005efficient}, i.e., the basis choice is biased.
Our protocol is based on the transmission of phase-randomized laser pulses and uses a four-intensity setting.
Then, we describe our efficient four-intensity protocol in detail.
\textit{1. Preparation and measurement.}
Alice sends four different kinds of weak laser pulses with intensities $\omega, \upsilon_1, \upsilon_2, \mu$ ($\mu > \upsilon_1 + \omega$, $\upsilon_1 > \omega \ge 0$, $\upsilon_2 > \omega \ge 0$),
with probabilities $P_\omega$, $P_{\upsilon_1}$, $P_{\upsilon_2}$ and $P_\mu$ ($P_\omega+P_{\upsilon_1}+P_{\upsilon_2}+P_\mu=1$), respectively.
Specially, the pulses with intensities (\({\upsilon _1}\) and $\mu$) are all prepared in $Z$ basis, the pulses with intensity \({\upsilon _2}\) are all prepared in $X$ basis, and the pulses with the lowest intensity \(\omega\) are prepared in $Z(X)$ basis with probability \({P_{Z|\omega }}({P_{X|\omega }})\) (\(P_{Z|\omega }+P_{X|\omega }=1\)).
Bob chooses $Z(X)$ basis to perform the measurement with probability \(P_Z{(P_X)}\) ($P_Z+P_X=1$).
\textit{2. Basis reconciliation and parameter estimation.}
Alice and Bob announce their basis choices over an authenticated public channel, and accomplish the sifting by reserving the detected signals with the same basis and discarding the others.
After this procedure, Alice and Bob share two bit strings with lengths \({n_Z}\) and ${n_X}$ corresponding to two bases.
Then, Alice announces the intensity information.
Based on that, the shared bit string in \(Z\) basis can be divided into three substrings with lengths,
\({n_{Z,\nu }}\) (\(\nu \in \{ \mu,{\upsilon _1},\omega \}\)), where \({n_Z} = \sum\limits_{\nu \in \{ \mu ,{\upsilon _1},\omega \} } {{n_{Z,\nu }}} \).
As for the $X$ basis, Alice and Bob need to announce their bit strings.
They compare them and obtain the numbers of bit errors, $m_{X,k}$ ($k \in \{ \upsilon_2, \omega \}$).
With these \({n_{Z,\nu }}\) and \({m_{X,k}}\), they can calculate the number of vacuum events \(s_{Z,0}\) [Eq. (\ref{Eq:sZ0})], the number of single-photon events \(s_{Z,1}\) [Eq. (\ref{Eq:sZ1})] and the \textit{phase error rate} \(e_{1}^{PZ}\) [Eq. (\ref{Eq:e1PZ})] associated with single-photon events in \(Z\) basis.
\textit{3. Error correction and privacy amplification.}
In the error-correction step, we assume that the error rate $E_Z$ is predetermined and at most $\lambda_{EC}=f n_Z H(E_Z)$ bits of information are revealed, where $f$ is the error-correction efficiency; $H(x)=- x{\log _2}(x) - (1 - x){\log _2}(1 - x)$ is the binary Shannon entropy function.
Next, an error verification is performed to ensure that Alice and Bob share a pair of identical keys.
\({\varepsilon _{cor}}\) is the probability that a pair of nonidentical keys pass this error-verification step.
Finally, they perform the privacy amplification to extract the \(\varepsilon _{sec}\)-secret keys with length \(l\) [Eq.~\ref{Eq:l}].
\section{Security bounds for efficient four-intensity protocol}
Following the finite-key security analysis in Ref. \cite{lim2014concise}, the length of \(\varepsilon_{sec}\)-secret key \(l\) is given
by
\begin{eqnarray}\label{Eq:l}
l =&
\lfloor{s_{Z,0}} + {s_{Z,1}}[1 - H(e_1^{PZ})] \nonumber\\
&- {\lambda _{EC}}
-6{{\log }_2}\frac{{17}}{{{\varepsilon _{\sec }}}} -{{\log }_2}\frac{2}{{{\varepsilon _{cor}}}}\rfloor.
\end{eqnarray}
Then the secret key rate is $R = l/N$, where $N$ is the number of pulses sent by Alice.
$s_{Z,0}$, $s_{Z,1}$ and $e_{1}^{PZ}$ are, respectively, the number of vacuum events, the number of single-photon events and the \textit{phase error rate} associated with single-photon events in the $Z$ basis.
The values of these three parameters need to be estimated with decoy-state method instead of being measured from the experiment directly.
The estimation formulas of \(s_{Z,0}\), \(s_{Z,1}\) and \(e_{1}^{PZ}\) are the same with the ones in Ref.~\cite{lim2014concise}.
It should be noted that the estimation of the number of single-photon events in $X$ basis \(s_{X,1}\) in our protocol is different from the one in Ref. \cite{lim2014concise}.
From Eq.(\ref{Eq:e1PZ}), one can see that in order to accomplish the calculation of \(e_{1}^{PZ}\), \(s_{X,1}\) needs to be estimated first.
In Ref. \cite{lim2014concise}, \(s_{X,1}\) is estimated by using Eqs. (\ref{Eq:sZ0}) and (\ref{Eq:sZ1}) with statistics from the $X$ basis.
Thus, the bounds \(n_{X,\omega }^ -\), \(n_{X,{\upsilon _1}}^ + \), \(n_{X,{\upsilon _1}}^ -\), \(n_{X,\omega }^ +\) and \(n_{X,\mu }^ +\) should be estimated from the measurement values of \(n_{X,\omega }\), \(n_{X,\upsilon_1 }\) and \(n_{X,\mu }\) in $X$ basis, which leads to 5 error terms \cite{lim2014concise}.
Different from the protocol in \cite{lim2014concise}, only two intensities are used in $X$ basis in our protocol.
Here, we assume that the yields of single-photon state in two bases are equal in asymptotic setting\footnote{This is normally satisfied when all the detectors have the same parameters (dark count rate, detection efficiency and after-pulse probability) and are operating according to specification.}.
Then, in finite-key setting, \(s_{X,1}\) can be estimated from \(s_{Z,1}\) by using the random-sampling theory (without replacement) \cite{korzh2015provably} and the result is shown in Eq. (\ref{Eq:sX1}). In this case, only 1 error term arises when \(s_{X,1}\) is estimated.
Such a change of the estimation method of \(s_{X,1}\) causes a minor modification on the secret key rate formula [Eq. (\ref{Eq:l})].
According to the Eq. (B4) in Ref. \cite{lim2014concise}, $21$ error terms emerge during the secrecy analysis for the efficient three-intensity protocol, where 5 error terms come from the estimation of \(s_{X,1}\).
In our protocol, \(s_{X,1}\) is estimated with \(s_{Z,1}\), where 1 error term needs to be taken into consideration.
Therefore, only $17$ error terms need to be composed into the secrecy parameter \(\varepsilon _{\sec }\) when we apply the same secrecy analysis method in \cite{lim2014concise} to our protocol.
Then, we set each error term to a common value \(\frac{\varepsilon _{\sec }}{17}\) and this value is used in both our secret key rate formula [Eq. (\ref{Eq:l})] and finite-size decoy-state analysis [Eqs. (3, 5, 7, 8)].
Next, we will show how to estimate \(s_{Z,0}\), \(s_{Z,1}\) and \(e_{1}^{PZ}\) in our protocol.
\(s_{Z,0}\) is given by
\begin{equation}\label{Eq:sZ0}
{s_{Z,0}} \ge {\tau _{Z,0}}\frac{{{\upsilon _1}n_{Z,\omega }^ - - \omega n_{Z,{\upsilon _1}}^ + }}{{{\upsilon _1} - \omega }},
\end{equation}
where $\tau_{Z,i} = \sum_{k \in \{\mu, \upsilon_1, \omega\}} P_k P_{Z|k} e^{-k} k^i / i!$ is the probability that Alice sends an $i$-photon pulse in the $Z$ basis, $P_{k}$ and $P_{W|k}$ are the probability to choose intensity $k$ and the conditional probability to choose the $W$ basis ($W \in \{ Z,X\}$) conditional on $k$, and
\begin{equation}\label{Eq:stanz}
n_{Z,k}^ \pm = \frac{{{e^k}}}{{{P_k}{P_{Z|k}}}}[{n_{Z,k}} \pm \sqrt {\frac{{{n_Z}}}{2}\ln \frac{{17}}{{{\varepsilon _{\sec }}}}} ],(k \in \{ \mu ,{\upsilon _1},\omega \} ).
\end{equation}
\(s_{Z,1}\) can be calculated by
\begin{equation}\label{Eq:sZ1}
{s_{Z,1}} \ge \frac{{{\tau _{Z,1}}\mu [n_{Z,{\upsilon _1}}^ - - n_{Z,\omega }^ + - \frac{{\upsilon _1^2 - {\omega ^2}}}{{{\mu ^2}}}(n_{Z,\mu }^ + - \frac{{{s_{Z,0}}}}{{{\tau _{Z,0}}}})]}}{{\mu ({\upsilon _1} - \omega ) - \upsilon _1^2 + {\omega ^2}}}.
\end{equation}
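For concreteness, the finite-key decoy-state bounds of Eqs. (\ref{Eq:sZ0})--(\ref{Eq:sZ1}) can be implemented directly, as in the following Python sketch (an illustration only, not part of the protocol itself); the observed counts, the intensities and the protocol probabilities are the inputs, and the deviation term follows Eq. (\ref{Eq:stanz}).
\begin{verbatim}
import math

def decoy_bounds_Z(counts, intensities, P, P_Z_given, eps_sec):
    """Lower bounds on s_{Z,0} and s_{Z,1}, following Eqs. (2)-(4).

    counts      : {'mu': n_Z_mu, 'v1': n_Z_v1, 'w': n_Z_w} sifted counts.
    intensities : {'mu': mu, 'v1': v1, 'w': w}.
    P, P_Z_given: intensity probabilities P_k and conditional P_{Z|k}.
    """
    n_Z = sum(counts.values())
    dev = math.sqrt(0.5 * n_Z * math.log(17.0 / eps_sec))

    def n_pm(k):  # n_{Z,k}^{+}, n_{Z,k}^{-} of Eq. (3)
        pref = math.exp(intensities[k]) / (P[k] * P_Z_given[k])
        return pref * (counts[k] + dev), pref * (counts[k] - dev)

    def tau(i):   # tau_{Z,i}: probability of an i-photon emission in Z basis
        return sum(P[k] * P_Z_given[k] * math.exp(-intensities[k])
                   * intensities[k] ** i / math.factorial(i)
                   for k in intensities)

    mu, v1, w = intensities['mu'], intensities['v1'], intensities['w']
    n_mu_p, _ = n_pm('mu')
    n_v1_p, n_v1_m = n_pm('v1')
    n_w_p, n_w_m = n_pm('w')

    s_Z0 = tau(0) * (v1 * n_w_m - w * n_v1_p) / (v1 - w)          # Eq. (2)
    num = n_v1_m - n_w_p - (v1**2 - w**2) / mu**2 * (n_mu_p - s_Z0 / tau(0))
    s_Z1 = tau(1) * mu * num / (mu * (v1 - w) - v1**2 + w**2)     # Eq. (4)
    return s_Z0, s_Z1
\end{verbatim}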
By using a random sampling without replacement \cite{korzh2015provably}, \(s_{X,1}\) can be obtained by
\begin{equation}\label{Eq:sX1}
{s_{X,1}} \ge N_{1}^X\frac{{{s_{Z,1}}}}{{N_{1}^Z}} - 2N_{1}^Xg(N_{1}^X,N_{1}^Z,\frac{{{s_{Z,1}}}}{{N_{1}^Z}},\frac{\varepsilon_{sec} }{{17}}),
\end{equation}
where $N_{1}^W = N\tau_{W,1} P_W$,
$C(x,y,z) = \exp\left(\frac{1}{8(x+y)} + \frac{1}{12y} - \frac{1}{12yz+1} - \frac{1}{12y(1-z)+1}\right)$ and
$g(x,y,z,\varepsilon) = \sqrt{\frac{2(x+y)z(1-z)}{xy}\log\frac{\sqrt{x+y}\,C(x,y,z)}{\sqrt{2\pi xyz(1-z)}\,\varepsilon}}$.
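As an illustration of how the random-sampling correction enters, the bound of Eq. (\ref{Eq:sX1}) can be coded with the following Python helpers (a sketch; the logarithm in $g$ is taken as the natural one, which is our assumption):
\begin{verbatim}
import math

def C_func(x, y, z):
    return math.exp(1/(8*(x + y)) + 1/(12*y)
                    - 1/(12*y*z + 1) - 1/(12*y*(1 - z) + 1))

def g_func(x, y, z, eps):
    return math.sqrt(2*(x + y)*z*(1 - z)/(x*y)
                     * math.log(math.sqrt(x + y)*C_func(x, y, z)
                                / (math.sqrt(2*math.pi*x*y*z*(1 - z))*eps)))

def s_X1_lower(s_Z1, N1_Z, N1_X, eps_sec):
    """Lower bound on s_{X,1} by random sampling without replacement, Eq. (5)."""
    z = s_Z1 / N1_Z
    return N1_X * z - 2*N1_X * g_func(N1_X, N1_Z, z, eps_sec / 17.0)
\end{verbatim}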
The number of bit errors ${v_{X,1}}$ associated with the single-photon events in $X$ basis is given by
\begin{equation}\label{Eq:vx1}
{v_{X,1}} \le {\tau _{X,1}}\frac{{m_{X,{\upsilon _2}}^ + - m_{X,\omega }^ - }}{{({\upsilon _2} - \omega )}},
\end{equation}
where \({\tau _{X,1}} = \sum\limits_{k \in \{ {\upsilon _2},\omega \} } {{P_k}{P_{X|k}}{e^{ - k}}{k}} \),
\({m_X} = {m_{X,{\upsilon _2}}} + {m_{X,\omega }}\),
\begin{equation}\label{Eq:mxk}
m_{X,k}^ \pm = \frac{{{e^k}}}{{{P_k}{P_{X|k}}}}[{m_{X,k}} \pm \sqrt {\frac{{{m_X}}}{2}\ln \frac{{17}}{{{\varepsilon _{\sec }}}}} ],k \in \{ {\upsilon _2},\omega \}.
\end{equation}
$e_1^{PZ}$ is computed by
\begin{equation}\label{Eq:e1PZ}
e_1^{PZ} = \frac{c_{Z,1}}{s_{Z,1}} \le \frac{v_{X,1}}{s_{X,1}} + \gamma\left(\frac{\varepsilon_{sec}}{17},\frac{v_{X,1}}{s_{X,1}},s_{X,1},s_{Z,1}\right),
\end{equation}
where
$\gamma(a,b,c,d) = \sqrt{\frac{(c+d)(1-b)b}{cd\log 2}\log_2\!\left(\frac{c+d}{cd(1-b)b\,a^2}\right)}$.
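Putting the pieces together, the phase-error bound of Eq. (\ref{Eq:e1PZ}) and the key length of Eq. (\ref{Eq:l}) amount to a few arithmetic steps, sketched below in Python; the helper names are ours, and the interpretation of $\log 2$ in $\gamma$ as $\ln 2$ is our assumption.
\begin{verbatim}
import math

def H2(x):
    """Binary Shannon entropy."""
    return -x*math.log2(x) - (1 - x)*math.log2(1 - x)

def gamma_func(a, b, c, d):
    return math.sqrt((c + d)*(1 - b)*b/(c*d*math.log(2))
                     * math.log2((c + d)/(c*d*(1 - b)*b*a*a)))

def key_length(s_Z0, s_Z1, s_X1, v_X1, n_Z, E_Z, f, eps_sec, eps_cor):
    """epsilon-secure key length l of Eq. (1)."""
    e1_PZ = v_X1/s_X1 + gamma_func(eps_sec/17.0, v_X1/s_X1, s_X1, s_Z1)  # Eq. (8)
    lambda_EC = f * n_Z * H2(E_Z)                   # error-correction leakage
    l = (s_Z0 + s_Z1*(1 - H2(e1_PZ)) - lambda_EC
         - 6*math.log2(17.0/eps_sec) - math.log2(2.0/eps_cor))
    return max(0, math.floor(l))
\end{verbatim}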
\section{Numerical simulation}
To make a comparison with the efficient three-intensity protocol, we perform some numerical simulations for the fiber-based QKD system with the system parameters in \cite{lim2014concise}, shown in Table \ref{Tab:1}.
These parameters come from recent decoy-state QKD and single-photon detector experiments \cite{frohlich2013quantum,walenta2012sine}.
More specifically, the intensity of the weakest decoy state $\omega=2\times10^{-4}$ and the misalignment error rate $e_{mis}=5\times 10^{-3}$ are both from the experimental work in \cite{frohlich2013quantum}.
Bob uses an active measurement setup with two single-photon detectors and they have a detection efficiency \(\eta _B=0.1\), a dark count rate \(p_{dc}=6\times 10^{-7}\), and an after-pulse probability \(p_{ap}=0.04\)
\cite{walenta2012sine}.
A dedicated optical fiber is used for the quantum channel, and the attenuation coefficient of the fiber is $0.2$ dB/km.
For simulation, \(E_Z\) is set to be the average of the observed error rates in $Z$ basis and the error-correction efficiency $f$ is set to be 1.16.
In practice, the cost of error correction \(\lambda _{EC}\) is the size of the information exchanged during the error correction step.
Regarding the secrecy, we also set \({\varepsilon _{\sec }}\) to be proportional to the key length $l$, that is, \({\varepsilon _{\sec }} = \kappa l\) where \(\kappa \) is a security constant and can be seen as the secrecy leakage per bit in final key.
To reduce the optimization complexity, we set \({P_{Z|\omega }} = {P_Z}\).
The parameters \(\{ \mu ,{\upsilon _1},{\upsilon _2},{P_\mu },{P_{{\upsilon _1}}},{P_{{\upsilon _2}}},{P_Z}\} \) are optimized to maximize the secret key rate.
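Since the key rate depends on seven free parameters, the full parameter optimization can be carried out with any standard global optimizer. A purely illustrative Python sketch is given below; the function \texttt{key\_rate(params)}, which evaluates $R=l/N$ from the channel model and the bounds above and returns 0 for infeasible parameter sets, is an assumption and must be supplied by the user.
\begin{verbatim}
import numpy as np
from scipy.optimize import differential_evolution

# params = [mu, v1, v2, P_mu, P_v1, P_v2, P_Z].  key_rate(params) is assumed
# to evaluate R = l/N from the channel model and the bounds above, returning
# 0 for infeasible points (e.g. P_mu + P_v1 + P_v2 >= 1 or mu <= v1 + omega).
def optimize_key_rate(key_rate, seed=0):
    bounds = [(1e-3, 1.0), (1e-3, 1.0), (1e-3, 1.0),     # mu, v1, v2
              (1e-3, 0.99), (1e-3, 0.99), (1e-3, 0.99),  # P_mu, P_v1, P_v2
              (0.5, 0.99)]                               # P_Z
    res = differential_evolution(lambda x: -key_rate(x), bounds,
                                 seed=seed, tol=1e-7, polish=True)
    return -res.fun, res.x  # (optimal secret key rate, optimal parameters)
\end{verbatim}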
\begin{table}
\caption{
List of parameters for numerical simulations.
\(p_{dc}\) is the dark count rate;
\(p_{ap}\) is the after-pulse probability;
\(\omega\) is the lowest intensity;
\(\kappa \) is the security constant;
${\varepsilon _{cor}}$ is the probability that shared secret keys are nonidentical;
\(e_{mis}\) is the misalignment error rate;
\({\eta _B}\) is the detection efficiency.
}
\label{Tab:1}
\begin{tabular}{ccccccc}
\hline\noalign{\smallskip}
\textrm{\(p_{dc}\)}&
\textrm{\({p_{ap}}\)}&
\textrm{\(\omega\)}&
\textrm{\(\kappa\)}&
\textrm{${\varepsilon _{cor}}$}&
\textrm{\(e_{mis}\)}&
\textrm{\({\eta _B}\)}
\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$6\times 10^{-7}$ & $0.04$ &$2\times 10^{-4}$ &$10^{-15}$&$10^{-15}$ & $5\times 10^{-3}$ &0.1
\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
We set $N$ to be $10^9$ and compare the secret key rates of the efficient three-intensity protocol and our efficient four-intensity protocol.
The results are shown in Fig. \ref{Fig:1}.
Compared with the efficient three-intensity protocol, our protocol can increase the secret key rate by at least 30\% at all transmission distances.
The increasing rates of secret key rate at different transmission distances are shown in Fig. \ref{Fig:2}.
We find that this increasing rate grows monotonically with the transmission distance.
That is, the advantage of our efficient four-intensity protocol is more significant at a large transmission distance.
Particularly, at 100 km, the increasing rate is 60\% and the optimal parameters and secret key rates are shown in Table \ref{Tab:2}.
Additionally, we have also performed simulations with other statistical fluctuation analyses, including the standard error analysis \cite{ma2012statistical} and the Chernoff bound \cite{curty2014finite,yin2014long}, and obtain the same conclusion that the secret key rate is increased with our protocol and the improvement is more significant at a large transmission distance.
Compared with the efficient three-intensity protocol, the secret key rate improvement of our efficient four-intensity protocol mainly comes from the following two aspects:
\begin{figure}
\resizebox{0.45\textwidth}{!}{\includegraphics{fig1.eps}}
\caption{
(Color online)
Secret key rate vs fiber length (dedicated fiber).
Numerically optimized secret key rates are obtained for a fixed number of pulses sent by Alice $N=10^9$.
The optimal parameters at the transmission distance of 100 km are shown in Table \ref{Tab:2}.
The blue dashed line shows the secret key rates of efficient three-intensity protocol \cite{lim2014concise}.
The secret key rates of efficient four-intensity protocol are presented by the red solid line.
Compared with the efficient three-intensity protocol, our efficient four-intensity protocol can increase the secret key rate by at least 30\%.
Particularly, this improvement is more significant when the transmission distance is large.
The increasing rates of secret key rate at different transmission distances are shown in Fig.~\ref{Fig:2}.
}
\label{Fig:1}
\end{figure}
\begin{figure}
\resizebox{0.43\textwidth}{!}{\includegraphics{fig2.eps}}
\caption{
(Color online) The increasing rate of secret key rate vs fiber length (dedicated fiber).
The red solid line shows the increasing rate of secret key rate between efficient four-intensity protocol and efficient three-intensity protocol.
This increasing rate is monotonically increasing with the transmission distance.
}
\label{Fig:2}
\end{figure}
\begin{table}
\caption{
Comparison of parameters at 100 km (standard fiber) between efficient three-intensity protocol \cite{lim2014concise} and our efficient four-intensity protocol.
More general comparison results are shown in Fig. \ref{Fig:1}.
The second and third columns are, respectively, optimal parameters for efficient three-intensity protocol and efficient four-intensity protocol.
Compared with the efficient three-intensity protocol at 100 km, our efficient four-intensity protocol can increase the key rate by 60\%.
}
\label{Tab:2}
\begin{tabular}{ccc}
\hline\noalign{\smallskip}
&
\textrm{Efficient }&
\textrm{Efficient }\\
\textrm{Parameters}&\textrm{three-intensity}&
\textrm{four-intensity}\\
\noalign{\smallskip}\hline\noalign{\smallskip}
{$\mu$}& {$0.551$}& {$0.47$}\\
{$\upsilon_1$}& {$0.188$}& {$0.183$}\\
{$\upsilon_2$}& {$--$}& {$0.32$}\\
{$P_{\mu}$}& {$0.127$}& {$0.16$}\\
{$P_{\upsilon_1}$}& {$0.599$}& {$0.407$}\\
{$P_{\upsilon_2}$}& {$--$}& {$0.22$}\\
{$P_Z$}& {$0.669$}& {$0.82$}\\
{$R$}& {$9.58\times10^{-6}$}& {$1.53\times10^{-5}$}\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
(I) In the efficient three-intensity protocol, the intensity \(\upsilon _1\) in the $Z$ basis and the intensity \(\upsilon _2\) in the $X$ basis are replaced by a single intensity \(\upsilon \), and this intensity \(\upsilon\) participates in the calculations of both $s_{Z,1}$ [Eq. (\ref{Eq:sZ1})] and $v_{X,1}$ [Eq. (\ref{Eq:vx1})].
In the asymptotic setting, the optimal estimations of $s_{Z,1}$ and $v_{X,1}$ are obtained when the intensity \(\upsilon\) is infinitesimal \cite{ma2005practical}.
Nonetheless, in the finite-key setting, with the statistical fluctuations [Eqs. (\ref{Eq:stanz},\ref{Eq:mxk})] taken into consideration, the intensity \(\upsilon\) that gives the best estimation of $s_{Z,1}$ may not yield a tight estimation of $v_{X,1}$.
Therefore, in our protocol, we make the intensities (\(\upsilon _1\) and \(\upsilon _2\)) independent by adding another intensity.
From Table \ref{Tab:2}, the optimal intensities \(\upsilon _1\) and \(\upsilon _2\) are, respectively, 0.183 and 0.32, which are quite different.
In short, the separate optimization of \(\upsilon _1\) and \(\upsilon _2\) helps to achieve a higher key rate.
(II) Compared with the standard balanced-basis protocol, the efficient protocol can substantially improve the secret key rate \cite{lim2014concise,wei2013decoy,lucamarini2013efficient,Lo2005efficient}.
In the asymptotic setting, the increase of the secret key rate can reach 100\% when \(P_Z\) approaches 1 \cite{Lo2005efficient}.
However, in the finite-key setting, compared to the standard three-intensity protocol with a balanced basis choice, the efficient three-intensity protocol increases the secret key rate by 45\% \cite{wei2013decoy}.
As the transmission distance increases, this improvement of the secret key rate decreases.
That is because at a larger transmission distance, more pulses in the \(X\) basis are needed to estimate $v_{X,1}$ in Eq. (\ref{Eq:vx1}) accurately, and Bob has to increase \(P_X\).
When \(P_X\) approaches 0.5, the efficient protocol becomes similar to the standard one, where \(P_Z=P_X=0.5\).
In our protocol, we add an additional intensity and make \(\upsilon _1\) and \(\upsilon _2\) independent.
As Table \ref{Tab:2} shows, we can increase the intensity \(\upsilon _2\) to make the estimation of $v_{X,1}$ more accurate.
That is, with an additional variable to reduce the statistical fluctuation, the efficient four-intensity protocol allows a higher probability \(P_Z\) of choosing the \(Z\) basis for key generation.
The optimal \(P_Z\)s for these two protocols at different transmission distances are shown in Fig. \ref{Fig:3}.
From Fig. \ref{Fig:3}, one can see that for these two protocols the optimal \(P_Z\)s are all monotonically decreasing with the transmission distance.
Nevertheless, the optimal \(P_Z\) of our protocol is always larger than that of the efficient three-intensity protocol.
This is the other important factor which leads to a higher secret key rate.
\begin{figure}
\resizebox{0.48\textwidth}{!}{\includegraphics{fig3.eps}}
\caption{
(Color online) Optimal $P_Z$ vs fiber length (dedicated fiber).
The blue dashed line shows the optimal $P_Z$s of efficient three-intensity protocol \cite{lim2014concise}.
The optimal \(P_Z\)s of our efficient four-intensity protocol are shown by the red solid line.
From the results, we can see that our efficient four-intensity protocol always has a higher \(P_Z\) than the efficient three-intensity protocol.
}
\label{Fig:3}
\end{figure}
\begin{figure}
\resizebox{0.48\textwidth}{!}{\includegraphics{fig4.eps}}
\caption{
(Color online) Secret key rate vs the weakest decoy state \(\omega\).
The optimized secret key rates are obtained for different transmission distances (20 km, 40 km, 60 km, 80 km, 100 km, from top to bottom).
All the solid lines show the results of our efficient four-intensity protocol, e.g., the black solid line shows the optimized secret key rates of our protocol for a fixed transmission distance 20 km.
The results of the efficient three-intensity protocol \cite{lim2014concise} are all presented as dashed lines.
From these results, we find that as long as the intensity \(\omega\) is below \(1\times10^{-3}\), the secret key rates of both the efficient three-intensity protocol and our efficient four-intensity protocol are stable, and our protocol increases the secret key rate by more than 30\% in comparison with the efficient three-intensity protocol at all transmission distances.
}
\label{Fig:4}
\end{figure}
Note that the intensity \(\omega\) of the weakest decoy state in our simulations is set to \(2\times10^{-4}\) instead of 0 (a vacuum state).
That is because, in practice, it is usually difficult to create a perfect vacuum state in decoy-state QKD experiments \cite{rosenberg2007long,dixon2008gigahertz},
although it is optimal to set the weakest decoy state to be vacuum state \cite{ma2005practical}.
In Ref. \cite{lim2014concise}, Lim et al. choose the experimental parameter \(\omega=2\times10^{-4}\) of Ref. \cite{frohlich2013quantum} to perform the numerical simulations.
Following their simulation work, we also set \(\omega=2\times10^{-4}\).
However, what is the effect of the intensity of the weakest decoy state on the secret key rate?
Here, we further optimize the secret key rate over the free \(\omega\).
The results are shown in Fig. \ref{Fig:4}.
We find that as long as the intensity \(\omega\) is below \(1\times10^{-3}\), the secret key rates of both the efficient three-intensity protocol and our efficient four-intensity protocol are stable.
That is, a perfect vacuum state is not essentially required in practical decoy-state QKD experiments.
Meanwhile, we also find that even when the intensity \(\omega\) is optimized freely, our protocol still increases the secret key rate by more than 30\% in comparison with the efficient three-intensity protocol in Ref. \cite{lim2014concise} at all transmission distances.
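As an illustration of how such an optimization can be organized numerically, the sketch below scans \(\omega\) and re-optimizes the remaining free parameters at each value. The function \texttt{key\_rate} is only a smooth toy surrogate (not the finite-key rate formula of this protocol), and the parameter bounds are illustrative assumptions, so the numbers it returns have no physical meaning; only the structure of the scan is meant to carry over.
\begin{verbatim}
# Sketch: scan the weakest decoy intensity omega and re-optimize the other
# parameters at each value.  key_rate() is a toy surrogate standing in for
# the finite-key rate formula; replace it before trusting any numbers.
import numpy as np
from scipy.optimize import differential_evolution

def key_rate(params, omega, loss_dB):
    mu, nu1, nu2, p_mu, p_nu1, p_z = params
    p_nu2 = 1.0 - p_mu - p_nu1            # probabilities must sum to one
    if p_nu2 < 0.0 or not (nu1 < nu2 < mu):
        return 0.0
    eta = 10.0 ** (-loss_dB / 10.0)       # channel transmittance
    # toy surrogate only: a smooth function peaked inside the parameter box
    return (p_z ** 2 * p_mu * mu * eta
            * np.exp(-((nu1 - 0.2) ** 2 + (nu2 - 0.3) ** 2) / 0.02))

bounds = [(0.1, 1.0), (0.01, 0.5), (0.01, 0.5),    # mu, nu1, nu2
          (0.05, 0.9), (0.05, 0.9), (0.5, 0.99)]   # p_mu, p_nu1, p_z

def optimal_rate(omega, loss_dB):
    res = differential_evolution(lambda p: -key_rate(p, omega, loss_dB),
                                 bounds, seed=0)
    return -res.fun, res.x

for omega in np.logspace(-5, -2, 4):
    R, p_opt = optimal_rate(omega, loss_dB=20.0)   # ~100 km at 0.2 dB/km
    print(f"omega = {omega:.0e}  ->  optimal (toy) R = {R:.3e}")
\end{verbatim}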
\section{Discussion and conclusion}
In fact, within the current finite-key analysis of the decoy-state method, our idea that the intensities and the bases should be independent can be exploited further.
Note that the intensity \(\omega\) also participates in the calculations of both $s_{Z,1}$ in $Z$ basis and $v_{X,1}$ in $X$ basis, and we can add the fifth intensity to make the intensities (\(\omega_1\) in $Z$ basis and \(\omega_2\) in $X$ basis) in two bases independent.
From Eq. (\ref{Eq:e1PZ}), we can see that when the numbers of single-photon events in the two bases are close, the sample deviation is small.
A sixth intensity, e.g., with a mean photon number of order 1 in the $X$ basis, would then be needed to make the numbers of single-photon events in the two bases close.
However, in practical implementations, setting more than four different intensities is challenging for experimentalists.
Our efficient four-intensity protocol is feasible and practical for current technology.
In summary, we propose an efficient four-intensity protocol and provide concise finite-key security bounds for this protocol that are valid against general attacks.
Compared with the efficient three-intensity protocol, our efficient four-intensity protocol can increase the secret key rate by at least 30\%.
Particularly, at a large transmission distance, the improvement is more significant.
\section*{Acknowledgements}
This work is supported by the National High Technology Research and Development Program of China
Grant No. 2011AA010803, the National Natural Science Foundation of China Grants No. 61501514 and No. U1204602, and the Open
Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing Grant No. 2015A13.
\section*{Author contributions}
H.J., M.G., B.Y., W.W., and Z.M. all contributed equally to this paper.
\bibliographystyle{unsrt}
\section{Introduction}\label{sec:intro}
We are entering a golden age for Milky Way astronomy. The European
Space Agency (ESA)'s $Gaia$ mission \citep{GaiaMission}, which
launched $19^{th}$ December 2013, has recently published its first
data release \citep{GaiaDR1}, giving us a new window on our Galaxy,
and in particular, the solar neighbourhood. The primary astrometric
catalogue in \emph{Gaia}-DR1 is the Tycho-$Gaia$ Astrometric Solution
\citep[TGAS,][]{M15,Lindegren16a}, which uses data from the $Tycho$-2
catalogue \citep{H00}, to provide a baseline of approximately 30 years
upon which to calculate astrometric values for stars in common between
$Tycho$-2 and $Gaia$. There is also significant overlap between stars
in the TGAS catalogue and stars observed by the Radial Velocity
Experiment \citep[RAVE, e.g.][]{Rave}, enabling the full 6-dimensional
phase space information to be known for over 200,000 stars in the
solar neighbourhood. This enables us to explore local dynamics in
unprecedented detail.
Many aspects of the structure and dynamics of the Milky Way are
difficult to measure, owing to our position within the Galaxy, and
complex observational selection effects such as dust extinction. Thus,
many of the fundamental parameters of the Milky Way carry significant
uncertainty. One such parameter is the velocity at which the Sun
rotates around the Galaxy, for which plausible values span the range
from $\approx240\ensuremath{\,\mathrm{km\ s}^{-1}}$ to $260\ensuremath{\,\mathrm{km\ s}^{-1}}$ \citep[e.g.,][]{Bovy12a,Reid14a} and
are dependent on the data set and technique used.
\cite{CI87} proposed that the rotation velocity of the Sun may be
measured by searching for a lack of stars exhibiting zero angular
momentum\footnote{\citet{CI87} phrased this as a measurement of the
circular velocity. Because the measurement is based on stellar
velocities with respect to the Sun, the quantity that is directly
measured is the Sun's motion around the center, not the circular
velocity. The difference between these two is still quite uncertain
\citep[e.g.,][]{Schoenrich10a,Bovy15b}}. Stars with zero angular
momenta are expected to plunge into the Galactic nucleus and
subsequently experience scattering onto chaotic orbits with a high
scale height, henceforth spending the majority of their time in the
stellar halo \citep{Martinet}. If stars with very low angular momentum
are indeed not present in the solar neighbourhood, the tail of the
tangential velocity distribution will exhibit a dip centred at the
Solar reflex value. This method for measuring the Solar velocity is
attractive because it should depend only on the existence of low
angular momentum orbits within the Milky Way's disk. With
6-dimensional phase space measurements for nearby stars it becomes
possible to calculate the tangential velocity distribution with
respect to the Sun, and thus we can test this prediction by searching
for a dearth of stars around the assumed value of the negative of the
Solar motion.
This paper is constructed as follows. In Section \ref{sec:observation} we discuss our treatment of the data, and the feature observed in the resulting velocity distribution. In Section \ref{sec:modeling} we present simulated models which can explain the feature and make predictions for the size and shape of the observed feature. In Section \ref{sec:detection} we fit our model to the data and present our measurement of $v_{\odot,\rm{reflex}}$. Finally, in Section \ref{sec:conclusion} we discuss the implications of the detection and look forward to future measurements.
\section{Observed feature}\label{sec:observation}
We cross match the TGAS catalogue \citep{Lindegren16a} and the RAVE DR5 data \citep{Kunder16a} to add RAVE line-of-sight velocities to the TGAS astrometric data. Where available, we employ the RAVE spectrophotometric distance estimates, because estimating distances from the TGAS parallax is non-trivial \citep[e.g.][]{BJ}. However, where no RAVE distance is available, we naively invert the TGAS parallax $\pi$ to obtain distance estimates for the remaining stars, removing any star with $\sigma_{\pi}/\pi>0.1$ to avoid large distance uncertainties. The resulting sample consists of 216,201 stars with 6-dimensional phase-space measurements. Then, we convert the velocities from equatorial coordinates to standard Galactic Cartesian coordinates centred on the Sun, $(X,Y,Z,v_X,v_Y,v_Z)$, with $v_X$ positive in the direction of the Galactic centre (toward Galactic longitude $l = 0$) and $v_Y$ positive in the direction of Galactic rotation (toward $l=90^\circ$), both measured with respect to the Sun.
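A minimal sketch of this conversion with \texttt{astropy} is given below; the per-star bookkeeping (and the example values in the final line) are illustrative rather than the actual TGAS+RAVE pipeline.
\begin{verbatim}
# Sketch: equatorial observables -> heliocentric Galactic Cartesian velocities.
# Inputs: ra, dec [deg]; distance [pc]; pm_ra_cosdec, pm_dec [mas/yr]; rv [km/s]
import astropy.units as u
import astropy.coordinates as coord

def galactic_vxyz(ra, dec, distance, pm_ra_cosdec, pm_dec, rv):
    icrs = coord.ICRS(ra=ra * u.deg, dec=dec * u.deg,
                      distance=distance * u.pc,
                      pm_ra_cosdec=pm_ra_cosdec * u.mas / u.yr,
                      pm_dec=pm_dec * u.mas / u.yr,
                      radial_velocity=rv * u.km / u.s)
    gal = icrs.transform_to(coord.Galactic())
    vel = gal.velocity          # CartesianDifferential in the Galactic frame
    # d_x points toward l = 0 (the Galactic centre) and d_y toward l = 90 deg;
    # both are heliocentric: no solar motion is added at this stage.
    return (vel.d_x.to(u.km / u.s), vel.d_y.to(u.km / u.s),
            vel.d_z.to(u.km / u.s))

vX, vY, vZ = galactic_vxyz(229.3, -1.1, 250.0, -12.3, -8.1, 35.0)  # example
\end{verbatim}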
\begin{figure}
\begin{tikzpicture}
\node (img) {\includegraphics[width=0.49\textwidth]{f1.pdf}};
\node[below=of img, node distance=0cm, yshift=2.0cm,font=\color{black}] {$v_Y$ (km s$^{-1}$)};
\node[left=of img, node distance=0cm, rotate=90, anchor=center,yshift=-1.3cm,xshift=3cm,font=\color{black}] {Number of stars per bin};
\node[left=of img, node distance=0cm, rotate=90, anchor=center,yshift=-1.3cm,xshift=-3cm,font=\color{black}] {PDF};
\end{tikzpicture}
\caption{\textbf{Top panel:} Distribution of $v_Y$ for stars observed to be within 700 pc of the Sun. The Sun's velocity has $v_Y = 0$. The low-velocity tail of the distribution displays a clear dip in the range between $-210\ensuremath{\,\mathrm{km\ s}^{-1}}$ and $-270\ensuremath{\,\mathrm{km\ s}^{-1}}$ that is marked with an arrow and overlaid with an exponential for illustration. \textbf{Bottom panel:} Normalised KDE of the above distribution clearly showing the dip.}
\label{fig:disp}
\end{figure}
Figure~\ref{fig:disp} shows the distribution of $v_Y$ for stars within
700 pc as a histogram (top panel) and with Kernel Density Estimation (e.g., \citealt{KDE}; bottom panel) using a Gaussian kernel with bandwidth 9. We chose 700 pc as a balance between quantity of stars, and quality of data. There is a clear underdensity of stars in the approximate region of $-210\ensuremath{\,\mathrm{km\ s}^{-1}}$ to $-270\ensuremath{\,\mathrm{km\ s}^{-1}}$, marked with an arrow (see also the
top panel of Figure~\ref{fig:fit}). Although there are only
$\approx\!10$ stars per bin at these velocities, this dip in the
distribution is clear across $\approx\!8$ bins. We defer a discussion of the significance of the dip until Section 4 where we fit a model for the dip obtained from simulations.
Furthermore, in the two-dimensional distribution of $(v_X,v_Y)$, we
find a lack of stars with both positive $v_X$ and large negative
$v_Y$. That is, there are very few stars observed on disk orbits
($|v_X| < 150\ \ensuremath{\,\mathrm{km\ s}^{-1}}$ and $|v_Z| < 150\ensuremath{\,\mathrm{km\ s}^{-1}}$) plunging towards the
Galactic centre. Quantitatively, there are 24 stars with $v_X > 0$ in
the range $-290\ensuremath{\,\mathrm{km\ s}^{-1}} < v_Y < -230\ensuremath{\,\mathrm{km\ s}^{-1}}$ and 34 stars with $v_X <
0$. These rates are inconsistent with being drawn from the same
distribution at $\gtrsim1.7\sigma$.
Our analysis does not explicitly take into account the uncertainties on the distance or velocity estimates. Propagating the uncertainties through the coordinate transformation results in an uncertainty of approximately 10 per cent on the measured Cartesian velocities. In the $v_Y$ range of the dip, this corresponds to uncertainties of approximately $24\ensuremath{\,\mathrm{km\ s}^{-1}}$. Because the width of the dip feature appears to be approximately $60 \ensuremath{\,\mathrm{km\ s}^{-1}}$, we can neglect the uncertainties in this initial investigation.
\section{Expectation from Galactic models}\label{sec:modeling}
A likely explanation for these missing stars, as mentioned in
Section~\ref{sec:intro}, is that they have been scattered onto chaotic
orbits with larger scale heights by interaction with the Galactic
nucleus. This is expected for disk stars with approximately zero
angular momentum as discussed in \citet{CI87}. Thus, as these stars
would then spend the majority of their orbits far from the Galactic
plane, it is very unlikely that they would be observed in the solar
neighbourhood at any one given time. In Galactocentric coordinates,
such stars have tangential velocities $v_T \approx 0$, corresponding
to heliocentric $v_Y \approx v_{\odot,\rm{reflex}}$, i.e., minus the Solar
tangential velocity measured in the Galactocentric frame.
\cite{CI87} performed an analysis of this effect and matched models of the dip constructed within an analytic potential to data from local stellar catalogues complete to 25 pc. They found the dip to be centred at $250\ensuremath{\,\mathrm{km\ s}^{-1}}$, with a depth greater than 80\,\%. However, they are careful to note that there are only 18 stars with $v_Y<-140\ensuremath{\,\mathrm{km\ s}^{-1}}$ and that a larger sample will improve the measurement. The TGAS+RAVE sample used here has 374 stars with $-310\ensuremath{\,\mathrm{km\ s}^{-1}}\leq v_Y<-150\ensuremath{\,\mathrm{km\ s}^{-1}}$, allowing far greater confidence in our subsequent analysis of the feature.
Firstly, we make a fresh prediction of the feature which we expect to observe in the TGAS data, similar to \cite{CI87}, but with an updated potential, and drawing the distribution of initial positions, radial, and vertical velocities from the observed data to tailor the prediction to the current data set. It is challenging to construct an $N$-body model with sufficient resolution to predict the high velocity tail in the local neighbourhood. Thus, we integrate test particles in a Milky-Way like potential, and observe the resulting orbits.
For our Milky-Way potential we use \texttt{MWPotential2014} from
\texttt{galpy } \citep{Bovy15a}. \texttt{MWPotential2014} consists of
a power-law spherical bulge potential with an exponential cut-off, a
Miyamoto-Nagai disk potential, and a NFW halo potential. The
parameters of this potential have been fit to a wide variety of
dynamical data in the Milky Way; the full parameters are given in
\citet{Bovy15a}. To model the hard Galactic nucleus that is not
included in \texttt{MWPotential2014}, we also include a Plummer
potential $\Phi(R,z)=-M/\sqrt{R^2+z^2+b^2}$,
where $M=2\times10^9$ $M_{\odot}$ and $b=250$ pc. Note that we do
not include a non-axisymmetric bar potential in this initial work,
which may affect the feature slightly. This should be considered when
applying this technique to future $Gaia$ data releases.
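A minimal orbit-integration sketch with \texttt{galpy} is shown below; the initial conditions, integration time, and the $|z|<300$ pc bookkeeping are illustrative choices rather than the exact setup used for Figure~\ref{fig:dips}.
\begin{verbatim}
# Sketch: integrate a low-angular-momentum orbit in MWPotential2014 plus a
# Plummer nucleus and measure the fraction of time spent at |z| < 300 pc.
import numpy as np
from astropy import units as u
from galpy.potential import MWPotential2014, PlummerPotential
from galpy.orbit import Orbit

nucleus = PlummerPotential(amp=2e9 * u.Msun, b=250.0 * u.pc)
pot = MWPotential2014 + [nucleus]

# [R, vR, vT, z, vz, phi] near the Sun; vT ~ 0 gives a plunging orbit.
o = Orbit([8.0 * u.kpc, 0.0 * u.km / u.s, 0.0 * u.km / u.s,
           0.02 * u.kpc, 10.0 * u.km / u.s, 0.0 * u.rad],
          ro=8.0, vo=220.0)

ts = np.linspace(0.0, 2.0, 20001) * u.Gyr
o.integrate(ts, pot)

z = u.Quantity(o.z(ts), u.kpc)                   # vertical excursion [kpc]
frac_near_plane = np.mean(np.abs(z) < 0.3 * u.kpc)
print(f"fraction of time at |z| < 300 pc: {frac_near_plane:.2f}")
\end{verbatim}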
\begin{figure}
\begin{tikzpicture}
\node (img) {\includegraphics[width=0.49\textwidth]{f2.pdf}};
\node[below=of img, node distance=0cm, yshift=2cm,font=\color{black}] {$R$ (kpc)};
\node[below=of img, node distance=0cm, yshift=9cm,xshift=-2.1cm,font=\color{black}] {$v_T=0\ensuremath{\,\mathrm{km\ s}^{-1}}$};
\node[below=of img, node distance=0cm, yshift=4.45cm,xshift=-1.9cm,font=\color{black}] {$v_T=-10\ensuremath{\,\mathrm{km\ s}^{-1}}$};
\node[below=of img, node distance=0cm, yshift=6.7cm,xshift=-2.0cm,font=\color{black}] {$v_T=10\ensuremath{\,\mathrm{km\ s}^{-1}}$};
\node[left=of img, node distance=0cm, rotate=90, anchor=center,yshift=-1.3cm,,font=\color{black}] {$z$ (kpc)};
\end{tikzpicture}
\caption{\textbf{Top panel:} Example orbit of a particle with $v_T=0\ensuremath{\,\mathrm{km\ s}^{-1}}$ exhibiting chaotic behaviour. \textbf{Middle panel:} Example orbit of a particle with $v_T=10\ensuremath{\,\mathrm{km\ s}^{-1}}$. \textbf{Bottom panel:} Example orbit of a particle with $v_T=-10\ensuremath{\,\mathrm{km\ s}^{-1}}$. Orbits that penetrate the galactic nucleus are scattered onto non-disk orbits that spend little time in the solar neighborhood.}
\label{fig:orbits}
\end{figure}
Figure \ref{fig:orbits} displays the orbits of three stars integrated for 2 Gyr. The top panel shows the orbit of a star with $v_T=0\ensuremath{\,\mathrm{km\ s}^{-1}}$, which exhibits chaotic behaviour upon interaction with the galactic nucleus. The middle and lower panels show the orbits of stars with $v_T=10\ensuremath{\,\mathrm{km\ s}^{-1}}$ and $v_T=-10\ensuremath{\,\mathrm{km\ s}^{-1}}$, respectively, which exhibit well-behaved disk orbits. The star with no angular momentum spends little of its orbital period near the galactic plane, whereas the two stars which do not approach the nucleus remain within a few hundred parsecs of the plane.
Without an ab-initio model for the disc, it is difficult to predict
how many stars are expected to be missing near zero angular momentum
and what the exact profile of the dip should be. Therefore, we use the
fraction of the orbital period of the test-particle stars that they
spend near the mid-plane of the disc as a proxy for whether it has
been scattered to a much higher scale height. We integrate many orbits
with positions, radial, and vertical velocities drawn from the
TGAS+RAVE sample and from an initial, uniform distribution in $v_Y$
covering the low-velocity tail. We then re-weight the stars using the
fraction of time they spend near the plane and construct the dip
profile by dividing this weighted, final $v_Y$ distribution by the
initial, uniform distribution.
Figure~\ref{fig:dips} shows the computed dip profiles for velocities
within $80\ensuremath{\,\mathrm{km\ s}^{-1}}$ of $v_T=0$. The top panel shows the results of 7
simulations with reflex solar motion, $v_{\odot,\rm{reflex}}$, of
$-220$, $-230$, $-235$, $-240$, $-245$, $-250$ and $-260\ensuremath{\,\mathrm{km\ s}^{-1}}$,
assuming $|z|<300$ pc as the criterion for `close to the disc plane'. It
is clear that the shape and depth of the overlaid distributions are
very similar and thus not dependent on the value of the solar
motion. The dip profile is not entirely symmetric around zero
$v_Y-v_{\odot,\rm{reflex}}$, because the spatial distribution of the
TGAS+RAVE sample is asymmetric around the Sun. The middle panel of
Figure~\ref{fig:dips} displays the same as the upper panel, for
$v_{\odot,\rm{reflex}}=-240\ensuremath{\,\mathrm{km\ s}^{-1}}$, but assuming $|z|<1$ kpc to weight
the orbits. The profile of the dip is notably shallower, with a depth
of $\sim0.85$, compared to $\sim0.7$ for the models in the top panel,
but the shape of the dip profile is similar. The bottom panel of
Figure~\ref{fig:dips} shows the same as the middle panel, but assuming
$|z|<50$ pc to weight the orbits. The depth is consistent with the
models in the top panel.
We create a function $D(v_Y-v_{\odot,\rm{reflex}})$ of the dip by
smoothly interpolating the model with $v_{\odot,\rm{reflex}}=-240\ensuremath{\,\mathrm{km\ s}^{-1}}$
and $|z|<300$ pc displayed in the top panel of
Figure~\ref{fig:dips}. Because the dip profile computed above is an
approximation, we give the model more freedom by allowing the dip's
amplitude to vary. We model the full distribution $f(v_Y)$ of $v_Y$ as
an exponential multiplied by the dip:
\begin{equation}\label{eq:fit}
\begin{split}
f(&v_Y) \propto\\
& \exp(m\,v_Y+b)\times\left(1-\alpha\,\left[\frac{1-D(v_Y-v_{\odot,\rm{reflex}})}{1-D(0)}\right]\right).
\end{split}
\end{equation}
Written in this manner, the depth is parameterized by a parameter
$\alpha$, normalized such that $\alpha=0$ corresponds to no dip and
$\alpha=1$ corresponds to a dip reaching all the way to zero, i.e., a
complete absence of stars.
\begin{figure}
\begin{tikzpicture}
\node (img) {\includegraphics[width=0.47\textwidth]{f3.pdf}};
\node[below=of img, node distance=0cm, yshift=1.6cm,font=\color{black}] {$v_Y-v_{\odot,\mathrm{reflex}}$ (km s$^{-1}$)};
\node[below=of img, node distance=0cm, yshift=5.9cm,xshift=-2.13cm,font=\color{black}] {$|z|<300$ pc};
\node[below=of img, node distance=0cm, yshift=2.5cm,xshift=-2.2cm,font=\color{black}] {$|z|<50$ pc};
\node[below=of img, node distance=0cm, yshift=4.2cm,xshift=-2.2cm,font=\color{black}] {$|z|<1$ kpc};
\node[left=of img, node distance=0cm, rotate=90, anchor=center,yshift=-1.5cm,,font=\color{black}] {Fraction of stars remaining};
\node[right=of img, node distance=0cm, rotate=270, anchor=center,yshift=-1.0cm,,font=\color{black}] {$v_{\odot,\mathrm{reflex}}$ (km s$^{-1})$};
\end{tikzpicture}
\caption{\textbf{Top panel:} Fraction of stars remaining in stable
orbits with initial velocities close to $v_T=0\ensuremath{\,\mathrm{km\ s}^{-1}}$ for seven
simulations in the range $v_{\odot,\mathrm{reflex}}=-220$ to
$-260\ensuremath{\,\mathrm{km\ s}^{-1}}$. \textbf{Middle panel:} Same as top panel, but for $|z|<1$
kpc. \textbf{Bottom panel:} Same as top panel, but for $|z|<50$
pc.}
\label{fig:dips}
\end{figure}
\section{Detection of a zero-angular-momentum feature in \emph{Gaia}-DR1}\label{sec:detection}
We fit the distribution of $v_Y$ of disk stars ($|v_X| < 150\ensuremath{\,\mathrm{km\ s}^{-1}}$ and
$|v_Z| < 150\ensuremath{\,\mathrm{km\ s}^{-1}}$) in the range $-310\ensuremath{\,\mathrm{km\ s}^{-1}} \leq v_Y < -150\ensuremath{\,\mathrm{km\ s}^{-1}}$ using
the model in Equation~\eqref{eq:fit}. The top panel of
Figure~\ref{fig:fit} displays the best fit $\alpha=0$ model overlaid
on the zoomed in tail of the histogram of the distribution of
$v_Y$. This clearly demonstrates the dearth of stars between
approximately $v_Y=-210\ensuremath{\,\mathrm{km\ s}^{-1}}$ and $-270\ensuremath{\,\mathrm{km\ s}^{-1}}$ when compared with the
exponential model.
The best-fitting dip model is shown in the bottom panel of
Figure~\ref{fig:fit}. The data prefer a dip: the log likelihood of the
best-fit model is $\ln \mathcal{L}=282.0$, compared with $\ln
\mathcal{L}=276.7$ for the best-fit exponential model with two fewer
parameters ($\alpha =0$). For comparison, we also fit a model where we
allow the width to be controlled by a new parameter $w$ that stretches
the profile in Figure~\ref{fig:dips} along the $x$ axis. For this
model we find $\ln \mathcal{L}=282.7$ and $w=1.8$. Although the
likelihood is marginally higher for this model, the significance of
the detection is lower owing to the extra free parameter.
Using the Akaike information criterion \citep[AIC,][]{AIC} the difference in ln likelihood corresponds to a $2.1\sigma$ detection for the $w=1$ model and a $1.8\sigma$ detection for the $w$ free case. When combined with the $1.7\sigma$ detection from the $v_X$ to $v_Y$ analysis using a Bonferroni correction to account for the multiple tests \citep[e.g.][]{Dunn} this gives a $2.7\sigma$ detection for $w=1$ and a $2.5\sigma$ detection when $w$ is left free. We explore the allowed range of models for $w=1$ using a Markov Chain Monte Carlo (MCMC) analysis with \texttt{emcee} \citep{Foreman13a}. We find that $\alpha=0.52\pm0.1$ and $v_{\odot,\rm{reflex}}=-239\pm9\ensuremath{\,\mathrm{km\ s}^{-1}}$ at $1\sigma$.
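A schematic version of this fit is sketched below. The interpolated dip profile $D$ of Section~\ref{sec:modeling} is replaced by a smooth stand-in, the data are placeholders, and the priors and initialisation are illustrative (the offset $b$ is absorbed by the normalisation); only the structure of the unbinned likelihood and of the \texttt{emcee} call is meant to carry over.
\begin{verbatim}
# Sketch: unbinned fit of the exponential-times-dip model defined above.
# D(dv) is a smooth stand-in for the interpolated simulation profile.
import numpy as np
import emcee
from scipy.integrate import quad

def D(dv, width=30.0, depth=0.3):
    """Stand-in dip profile (1 = no suppression), centred at dv = 0."""
    return 1.0 - depth * np.exp(-0.5 * (dv / width) ** 2)

V_LO, V_HI = -310.0, -150.0

def log_prob(theta, vY):
    m, alpha, v_reflex = theta
    if not (0.0 <= alpha <= 1.0 and -280.0 < v_reflex < -200.0 and m > 0.0):
        return -np.inf
    def shape(v):
        dip = 1.0 - alpha * (1.0 - D(v - v_reflex)) / (1.0 - D(0.0))
        return np.exp(m * v) * dip
    norm, _ = quad(shape, V_LO, V_HI)        # normalise over the fit window
    return np.sum(np.log(shape(vY))) - vY.size * np.log(norm)

rng = np.random.default_rng(0)
vY = V_LO + (V_HI - V_LO) * rng.power(5, size=374)   # placeholder tail sample

ndim, nwalkers = 3, 32
p0 = np.array([0.03, 0.5, -240.0]) + 1e-3 * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(vY,))
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)  # alpha, v_reflex posteriors
\end{verbatim}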
Even though we do not have an ab-initio prediction of the dip profile,
we find that the best-fitting $\alpha$ is consistent within its
uncertainty with that obtained using the models (which have $\alpha
\approx 0.3$). Thus, systematics due to our approximate model for the
dip in a simple axisymmetric Milky-Way potential are likely quite
small.
\begin{figure}
\begin{tikzpicture}
\node (img) {\includegraphics[width=0.49\textwidth]{f4.pdf}};
\node[below=of img, node distance=0cm, yshift=8.8cm,xshift=-2.3cm,font=\color{black}] {$\ln\mathcal{L}=276.7$};
\node[below=of img, node distance=0cm, yshift=5.4cm,xshift=-2.23cm,font=\color{black}] {$\ln\mathcal{L}=282.0$};
\node[left=of img, node distance=0cm, rotate=90, anchor=center,yshift=-1.5cm,,font=\color{black}] {Number of stars per bin};
\node[below=of img, node distance=0cm, anchor=center,yshift=1.5cm,,font=\color{black}] {$v_{\odot,\mathrm{reflex}}$ (km s$^{-1})$};
\end{tikzpicture}
\caption{\textbf{Top panel:} Tail of the $v_Y$ distribution overlaid
with the best fit model assuming $\alpha=0$. \textbf{Bottom panel:}
Tail of the $v_Y$ distribution overlaid with the best fit model
(red) allowing the dip depth $\alpha$ to be a free parameter. We
also display 100 models with parameters randomly drawn from the
MCMC chain. Each panel includes the ln likelihood of the best fit. The dip
is detected at $2.1\sigma$ and centered on $-239\ensuremath{\,\mathrm{km\ s}^{-1}}$.}
\label{fig:fit}
\end{figure}
\section{Discussion and outlook}\label{sec:conclusion}
We have shown that a dip in the low-velocity tail of stars in the
solar neighborhood is present at $2.7\sigma$ significance. This feature
can be plausibly explained by the absence of stars on orbits with
approximately zero angular momentum, due to such stars being scattered
onto halo orbits after interaction with the Galactic nucleus as
predicted by \citet{CI87}. Using orbit integration in a realistic
model of the Galactic potential, we have demonstrated that this dip
should be centered on minus the Solar rotational velocity
$v_{\odot,\rm{reflex}} = -v_\odot$. In this way, we were able to
measure $v_\odot=239\pm9\ensuremath{\,\mathrm{km\ s}^{-1}}$. Our modeling does not include the
Galactic bar or spiral arms, which affect the orbits of
stars. However, because the dip is due to zero-angular-momentum stars moving on highly eccentric orbits that are lost from the solar neighborhood over many dynamical times, the gravitational perturbations and current position of the relatively diffuse stellar bar and of the spiral structure should not greatly affect the profile of the dip, and we expect their effect on our derived value of $v_{\odot,\rm{reflex}}$ to be minor.
Observations of the Galactic center have precisely measured the proper
motion $\mu_{\mathrm{Sgr\ A}^*}$ of Sgr A$^*$
\citep{Reid04a}. Assuming that Sgr A$^*$ is at rest with respect to
the dynamical center of the Milky Way, this apparent proper motion is
due to the reflex motion of the Sun. The ratio of the linear and
angular measurements of the reflex motion of the Sun
$v_{\odot,\rm{reflex}}/\mu_{\mathrm{Sgr\ A}^*}$ then becomes a
measurement of the distance $R_0$ to the Galactic centre. For
$\mu_{\mathrm{Sgr\ A}^*} = 30.24\pm0.11\ensuremath{\,\mathrm{km\ s}^{-1}}\ensuremath{\,\mathrm{kpc}}\ensuremath{^{-1}}$, our measurement
of $v_{\odot,\rm{reflex}}$ implies $R_0 =
7.9\pm0.3\ensuremath{\,\mathrm{kpc}}$. This is an essentially dynamical measurement of $R_0$. If the
lack of zero angular momentum stars is confirmed in future \emph{Gaia}
data releases and systematics can be controlled, this feature can in
principle saturate the precision implied by the uncertainty in
$\mu_{\mathrm{Sgr\ A}^*}$, which is $\approx30\ensuremath{\,\mathrm{pc}}$.
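For reference, the central value and uncertainty quoted above follow directly from the ratio and standard error propagation,
\begin{equation}
R_0=\frac{v_{\odot,\rm{reflex}}}{\mu_{\mathrm{Sgr\ A}^*}}=\frac{239\ensuremath{\,\mathrm{km\ s}^{-1}}}{30.24\ensuremath{\,\mathrm{km\ s}^{-1}}\ensuremath{\,\mathrm{kpc}}\ensuremath{^{-1}}}\approx7.9\ensuremath{\,\mathrm{kpc}},\qquad
\frac{\sigma_{R_0}}{R_0}=\sqrt{\left(\frac{9}{239}\right)^{2}+\left(\frac{0.11}{30.24}\right)^{2}}\approx3.8\,\%,
\end{equation}
i.e., $\sigma_{R_0}\approx0.3\ensuremath{\,\mathrm{kpc}}$.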
Future $Gaia$ data releases will have many orders of magnitude more
stars, with substantially higher accuracy on the measured
parameters. Additionally, future $Gaia$ data releases will contain
line-of-sight velocities for stars in the Solar neighbourhood,
removing the asymmetry in the sample introduced by the necessity to
cross match with RAVE. Thus, with future $Gaia$ data releases we hope
that this technique may provide a very precise measurement of the
solar reflex motion with respect to the Galactic centre. We can make a
rough prediction for the strength of the feature expected in future $Gaia$ data releases by using the $Gaia$ Object Generator \citep[GOG,][]{Luri} based on the $Gaia$ Universe Model Snapshot
\citep[GUMS,][]{GUMS}. GOG predicts that approximately 17.5 million stars will be observed with radial velocities within 700 pc, and thus, by weighting that number by the fraction of
stars in the tail compared to the full sample matched with RAVE,
e.g. 374/216,201=0.0017, we can predict that $Gaia$ will observe roughly
30,000 stars in the tail within 700 pc. This should allow us to confirm the existence of this feature, and if present allow us to increase the precision of the measurement of $v_{\odot,\rm{reflex}}$ to within approximately $1\ensuremath{\,\mathrm{km\ s}^{-1}}$, assuming the errors decrease on the order of $\sqrt{N}$. In turn the error on $R_0$ would become dominated by the error in the proper motion measurement. At this level the systematics will then become extremely important, and in future analysis we must take into consideration other factors such as the bar, spiral structure and local giant molecular clouds, although the tangential deflections are expected to be small.
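Spelled out, the rough numbers behind this forecast are
\begin{equation}
N_{\rm tail}\approx1.75\times10^{7}\times\frac{374}{216\,201}\approx3\times10^{4},\qquad
\sigma_{v_{\odot,\rm{reflex}}}\approx9\ensuremath{\,\mathrm{km\ s}^{-1}}\sqrt{\frac{374}{3\times10^{4}}}\approx1\ensuremath{\,\mathrm{km\ s}^{-1}}.
\end{equation}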
Further work is also needed to fully determine the origin of this
feature, and to make more detailed predictions on the shape and depth
which we expect to observe. The exact dimensions of the feature are
likely slightly dependent on the potential of the inner Galaxy, and
thus matching the observed shape in future $Gaia$ data releases to dip
functions constructed from various models for the potential may give
us valuable insight into the potential of the inner Galaxy.
\acknowledgements J.A.S.H. is supported by a Dunlap Fellowship at the
Dunlap Institute for Astronomy \& Astrophysics, funded through an
endowment established by the Dunlap family and the University of
Toronto. J.B. and R.G.C. received partial support from the Natural
Sciences and Engineering Research Council of Canada. J.B. also
received partial support from an Alfred P. Sloan Fellowship. The MCMC
analyses in this work were run using \emph{emcee} \citep{Foreman13a}.
This work has made use of data from the European Space Agency (ESA)
mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed
by the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
\url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for
the DPAC has been provided by national institutions, in particular the
institutions participating in the {\it Gaia} Multilateral
Agreement. Funding for RAVE (\url{http://www.rave-survey.org}) has been provided by
institutions of the RAVE participants and by their national funding
agencies.
\section{Derivation of the maximum post-selection probability}
To maximize $P_{s}\approx|\langle\Psi_{f}|\Psi_{i}\rangle|^{2}$ while
keeping $A_{w}$ and $|\Psi_{i}\rangle$ fixed, we note that the initial
state can be decomposed into a piece parallel to $(\ha-A_{w})|\Psi_{i}\rangle$
and an orthogonal piece in the complementary subspace $\mathcal{V}^{\perp}$:
\begin{align}
|\Psi_{i}\rangle & =\frac{(\ha-A_{w})|\Psi_{i}\rangle\langle\Psi_{i}|(\ha-A_{w}^{*})|\Psi_{i}\rangle}{\langle\Psi_{i}|(\ha-A_{w}^{*})(\ha-A_{w})|\Psi_{i}\rangle}+\left(|\Psi_{i}\rangle-\frac{(\ha-A_{w})|\Psi_{i}\rangle\langle\Psi_{i}|(\ha-A_{w}^{*})|\Psi_{i}\rangle}{\langle\Psi_{i}|(\ha-A_{w}^{*})(\ha-A_{w})|\Psi_{i}\rangle}\right).
\end{align}
Since $|\Psi_{f}\rangle$ must also be in $\mathcal{V}^{\perp}$ by
the definition of the weak value, it follows that the maximum $P_{s}$
can be achieved for the post-selection state parallel to the component
of $|\Psi_{i}\rangle$ in $\mathcal{V}^{\perp}$, i.e.,
\begin{equation}
|\Psi_{f}\rangle\propto|\Psi_{i}\rangle-\frac{(\ha-A_{w})|\Psi_{i}\rangle\langle\Psi_{i}|(\ha-A_{w}^{*})|\Psi_{i}\rangle}{\langle\Psi_{i}|(\ha-A_{w}^{*})(\ha-A_{w})|\Psi_{i}\rangle}.\label{eq:11}
\end{equation}
After some calculation, it follows that
\begin{equation}
\max_{|\Psi_{f}\rangle\in\mathcal{V}^{\perp}}P_{s}=\frac{\text{Var}(\ha)_{|\Psi_{i}\rangle}}{\langle\Psi_{i}|\ha^{2}|\Psi_{i}\rangle-2\langle\Psi_{i}|\hat{A}|\Psi_{i}\rangle\re A_{w}+|A_{w}|^{2}},\label{eq:12}
\end{equation}
where $\text{Var}(\ha)_{|\Psi_{i}\rangle}=\langle\Psi_{i}|\ha^{2}|\Psi_{i}\rangle-[\langle\Psi_{i}|\ha|\Psi_{i}\rangle]^{2}$
is the variance of $\ha$ in the state $|\Psi_{i}\rangle$.
For the purposes of weak value amplification, we usually require $|A_{w}|$ to be much larger than any eigenvalue of $\ha$, i.e., $|A_{w}|\gg|\Lambda|$. In this regime, the maximum $P_{s}$ can be approximated as Eq. (9) in the main text.
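The algebra behind Eqs. (\ref{eq:11}) and (\ref{eq:12}) is easy to verify numerically; the stand-alone sketch below draws a random Hermitian $\ha$ and initial state, fixes a target weak value, and confirms that the post-selection state of Eq. (\ref{eq:11}) realizes that weak value with the success probability of Eq. (\ref{eq:12}).
\begin{verbatim}
# Numerical check: for random Hermitian A, random |psi_i>, and a target weak
# value A_w, the optimal post-selection state reproduces A_w exactly with
# success probability Var(A)/(<A^2> - 2<A>Re A_w + |A_w|^2).
import numpy as np

rng = np.random.default_rng(1)
d = 4
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2                      # random Hermitian observable
psi_i = rng.normal(size=d) + 1j * rng.normal(size=d)
psi_i /= np.linalg.norm(psi_i)
A_w = 20.0 + 5.0j                             # target anomalous weak value

B = A - A_w * np.eye(d)
Bpsi = B @ psi_i
psi_f = psi_i - Bpsi * np.vdot(Bpsi, psi_i) / np.vdot(Bpsi, Bpsi)
psi_f /= np.linalg.norm(psi_f)                # optimal post-selection state

overlap = np.vdot(psi_f, psi_i)
P_s = abs(overlap) ** 2                       # post-selection probability
A_w_realized = np.vdot(psi_f, A @ psi_i) / overlap

mean_A = np.vdot(psi_i, A @ psi_i).real
mean_A2 = np.vdot(psi_i, A @ A @ psi_i).real
P_s_max = (mean_A2 - mean_A ** 2) / (
    mean_A2 - 2 * mean_A * A_w.real + abs(A_w) ** 2)

assert np.isclose(A_w_realized, A_w)          # weak value equals the target
assert np.isclose(P_s, P_s_max)               # probability saturates the bound
\end{verbatim}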
\section{Derivation of the optimal post-selection state}
As noted in the previous section, the optimal post-selection state
should be parallel to the component of $|\Psi_{i}\rangle$ in $\mathcal{V}^{\perp}$.
The post-selection probability is then controlled by the variance
$\text{Var}(\hat{A})_{|\Psi_{i}\rangle}$. This variance is maximized
for a maximally entangled initial state $|\Psi_{i}\rangle=\frac{1}{\sqrt{2}}(|\lambda_{\max}\rangle^{\otimes n}+\et|\lambda_{\min}\rangle^{\otimes n})$.
Hence, we can directly compute the optimal post-selected state to
be
\begin{align}
|\Psi_{f}\rangle & \propto|\Psi_{i}\rangle-\frac{(\at-A_{w})|\Psi_{i}\rangle\langle\Psi_{i}|(\at-A_{w}^{*})|\Psi_{i}\rangle}{\langle\Psi_{i}|(\at-A_{w}^{*})(\at-A_{w})|\Psi_{i}\rangle}\\
& =\frac{1}{\sqrt{2}}(|\lambda_{\max}\rangle^{\otimes n}+\et|\lambda_{\min}\rangle^{\otimes n})-\frac{1}{\sqrt{2}}((n\lambda_{\max}-A_{w})|\lambda_{\max}\rangle^{\otimes n}\nonumber \\
& \qquad+\et(n\lambda_{\min}-A_{w})|\lambda_{\min}\rangle^{\otimes n})\frac{n\lambda_{\max}+n\lambda_{\min}-2A_{w}^{*}}{|n\lambda_{\max}-A_{w}|^{2}+|n\lambda_{\min}-A_{w}|^{2}}\nonumber \\
& \propto(|n\lambda_{\min}-A_{w}|^{2}-(n\lambda_{\max}-A_{w})(n\lambda_{\min}-A_{w}^{*}))|\lambda_{\max}\rangle^{\otimes n}\nonumber \\
 & \qquad+\et(|n\lambda_{\max}-A_{w}|^{2}-(n\lambda_{\min}-A_{w})(n\lambda_{\max}-A_{w}^{*}))|\lambda_{\min}\rangle^{\otimes n}\nonumber \\
& \propto-(n\lambda_{\min}-A_{w}^{*})|\lambda_{\max}\rangle^{\otimes n}+\et(n\lambda_{\max}-A_{w}^{*})|\lambda_{\min}\rangle^{\otimes n}.\nonumber
\end{align}
This is Eq.~(12) in the main text.
\section{Quantum Fisher information}
It is important to determine just how well the weak value amplification
technique can estimate the small parameter $g$. There is some concern
that the post-selection process will lead to a substantial reduction
of the total obtainable information, since a large fraction of the
potentially usable data is being thrown away (e.g., \cite{Ferrie2013}).
To assuage these concerns, we compare the maximum Fisher information
about $g$ that can be obtained without post-selection to the Fisher
information that remains in the post-selected states used for weak
value amplification.
We first recall a few general results from the study of quantum Fisher
information. If one wishes to estimate a parameter $g$, then the
minimum standard deviation of any unbiased estimator for $g$ is given
by the \emph{quantum Cram\'{e}r-Rao bound}: $I(g)^{-1/2}$. The function
$I(g)$ is the \emph{quantum Fisher information} \cite{metrology2}
\begin{equation}
I(g)=4\frac{\d\langle\Phi_{g}|}{\d g}\frac{\d|\Phi_{g}\rangle}{\d g}-4\left|\frac{\d\langle\Phi_{g}|}{\d g}|\Phi_{g}\rangle\right|^{2},\label{eq:2}
\end{equation}
which is determined by a quantum state $|\Phi_{g}\rangle$ that contains
the information about $g$. If this state is prepared with some interaction
Hamiltonian $|\Phi_{g}\rangle=\exp(-ig\hat{H})|\Phi\rangle$ then
the Fisher information reduces to a simpler form \cite{quantum metrology 2}
\begin{equation}
I(g)=4\text{Var}(\hat{H})_{|\Phi\rangle},\label{eq:1-1}
\end{equation}
and is entirely determined by the variance of the Hamiltonian in the
pre-interaction state $|\Phi\rangle$.
\subsection{General Discussion}
In the main text, the relevant Hamiltonian with a meter observable
$\hat{F}$ is $\hat{H}=\hbar g\hat{A}\otimes\hat{F}\delta(t-t_{0})$,
where $\hat{A}$ is a sum of $n$ ancilla observables $\hat{a}$ of
dimension $d$. The joint state $|\Phi\rangle$ is also always prepared
in a product state $|\Phi\rangle=|\Psi_{i}\rangle\otimes|\phi\rangle$
between the ancillas and the meter. If there is no post-selection
then the quantum Fisher information is found to be
\begin{equation}
I(g)=4\left[\langle\hat{A}^{2}\rangle_{|\Psi_{i}\rangle}\langle\hat{F}^{2}\rangle_{|\phi\rangle}-\left(\langle\hat{A}\rangle_{|\Psi_{i}\rangle}\langle\hat{F}\rangle_{|\phi\rangle}\right)^{2}\right].\label{eq:maxinf}
\end{equation}
Now suppose we projectively measure the ancillas in order to make
a post-selection. This measurement will produce $d^{n}$ independent
outcomes corresponding to some orthonormal basis $\{|\Psi_{f}^{(k)}\rangle\}_{k=1}^{d^{n}}$.
In the linear response regime with $g\ll1$, each of these outcomes
prepares a particular meter state
\begin{align}
|\phi'_{k}\rangle & \propto\langle\Psi_{f}^{(k)}|\exp(-ig\hat{H})|\Psi_{i}\rangle|\phi\rangle\approx(\hat{1}-igA_{w}^{(k)}\hat{F})|\phi\rangle\label{eq:stateps}
\end{align}
with probability $P_{s}^{(k)}\approx|\langle\Psi_{f}^{(k)}|\Psi_{i}\rangle|^{2}$
that is governed by a different weak value
\begin{equation}
A_{w}^{(k)}=\frac{\langle\Psi_{f}^{(k)}|\hat{A}|\Psi_{i}\rangle}{\langle\Psi_{f}^{(k)}|\Psi_{i}\rangle}.
\end{equation}
We can then compute the remaining Fisher information contained in
each of the post-selected states $\sqrt{P_{s}^{(k)}}|\phi'_{k}\rangle$
using \eqref{eq:2}, which produces
\begin{align}
I^{(k)}(g) & \approx4\, P_{s}^{(k)}|A_{w}^{(k)}|^{2}\,\left[\text{Var}(\hat{F})_{|\phi\rangle}-\langle\hat{F}^{2}\rangle_{|\phi\rangle}\left(2g\text{Im}A_{w}^{(k)}\langle\hat{F}\rangle_{|\phi\rangle}+|gA_{w}^{(k)}|^{2}\langle\hat{F}^{2}\rangle_{|\phi\rangle}\right)\right].\label{eq:kinf}
\end{align}
Importantly, if we add the information from all $d^{n}$ post-selections
we obtain
\begin{align}
\sum_{k=1}^{d^{n}}I^{(k)}(g) & \approx4\,\langle\hat{A}^{2}\rangle_{|\Psi_{i}\rangle}\,\text{Var}(\hat{F})_{|\phi\rangle}-O(g).
\end{align}
With the condition $\langle\hat{F}\rangle_{|\phi\rangle}=0$, this
saturates the maximum in \eqref{eq:maxinf} up to small corrections,
which indicates that the ancilla measurement does not lose information
by itself. One can always examine all $d^{n}$ ancilla outcomes to
obtain the maximum information, as pointed out in \cite{Ferrie2013}.
Now let us focus on a particular post-selection $k=1$, using an unbiased
meter that satisfies $\langle\hat{F}\rangle_{|\phi\rangle}=0$, as
assumed in the main text. This produces the simplification
\begin{align}
I^{(1)}(g) & \approx4\, P_{s}^{(1)}|A_{w}^{(1)}|^{2}\,\left[1-|gA_{w}^{(1)}|^{2}\text{Var}(\hf)\right].\label{eq:kinfsimple}
\end{align}
Now recall Eq.~(15) of the main text, where we showed that if we
fix $P_{s}^{(1)}\ll1$ and picked a post-selection state that maximizes
$A_{w}^{(1)}$ then we found
\begin{equation}
\max|A_{w}^{(1)}|^{2}\approx\frac{1-P_{s}^{(1)}}{P_{s}^{(1)}}\text{Var}(\hat{A})_{|\Psi_{i}\rangle}\approx\frac{\text{Var}(\hat{A})_{|\Psi_{i}\rangle}}{P_{s}^{(1)}}.
\end{equation}
For this strategically chosen post-selection with small $P_{s}^{(1)}$
and maximized $A_{w}^{(1)}$, it then follows that
\begin{align}
I^{(1)}(g) & \approx4\,\text{Var}(\hat{A})_{|\Psi_{i}\rangle}\,\left[1-|gA_{w}^{(1)}|^{2}\text{Var}(\hf)\right]=I(g)\;\left[\frac{\text{Var}(\hat{A})_{|\Psi_{i}\rangle}}{\langle\hat{A}^{2}\rangle_{|\Psi_{i}\rangle}}\right]\,\left[1-|gA_{w}^{(1)}|^{2}\text{Var}(\hf)\right],\label{eq:kinfopt}
\end{align}
which is Eq.~(16) in the main text. That is, nearly \emph{all} the
Fisher information can be concentrated into a single (but rarely post-selected)
meter state (see also \cite{Jordan2013}). The remaining information
is distributed among the $(d^{n}-1)$ remaining states, and could
be retrieved in principle. The special post-selected meter state suffers
an overall reduction factor of $\eta=\text{Var}(\hat{A})/\langle\hat{A}^{2}\rangle$,
as well as a small loss $|gA_{w}^{(1)}|^{2}\text{Var}(\hf)$. However,
most weak value amplification experiments operate in the linear response
regime $g|A_{w}^{(1)}|\text{Var}(\hf)^{\frac{1}{2}}\ll1$ where this
remaining loss is negligible. Moreover, the overall reduction factor
$\eta$ can even be set to unity by choosing ancilla observables that
satisfy $\langle\hat{A}\rangle_{|\Psi_{i}\rangle}=0$.
As carefully discussed in \cite{Ferrie2013}, one cannot actually
reach the optimal bound of \eqref{eq:maxinf} when making a post-selection.
However, \eqref{eq:kinfopt} shows that one can get remarkably close
by carefully choosing which post-selection to make. It is quite surprising
that one can even approximately saturate \eqref{eq:maxinf} while
discarding the $(d^{n}-1)$ much more probable outcomes. Rare post-selections
can often be advantageous for independent reasons (e.g., to attenuate
an optical beam down to a manageable post-selected beam power), so
this property of weak value amplification makes it an attractive technique
for estimating an extremely small parameter $g$ that permits the
linear response conditions \cite{Jordan2013}.
\subsection{Examples}
To see how this works in more detail, let us examine the ancilla qubit
post-selection examples used in the main text, where $g=\varphi/2$.
For completeness, we will work through two examples. First, we consider
a sub-optimal ancilla observable $\hat{a}=|1\rangle\langle1|$. Second,
we consider an optimal ancilla observable $\hat{a}=\hat{\sigma}_{z}$
to emphasize the practical difference.
\subsubsection{Ancilla Projectors}
A suboptimal choice of ancilla observable is the projector $\hat{a}=|1\rangle\langle1|$
used in controlled qubit operations. From the optimal initial state
given by Eq.~(10) in the main text, we have $\langle\hat{A}^{2}\rangle=n^{2}/2$
and $\langle\hat{A}\rangle=n/2$. Therefore, the maximum quantum Fisher
information from \eqref{eq:maxinf} that we can expect for estimating
$\varphi$ is
\begin{equation}
I(\varphi)=\frac{n^{2}}{2},\label{eq:4}
\end{equation}
where the factor $1/2$ in $g=\varphi/2$ has been taken into account,
and the corresponding quantum Cram\'{e}r-Rao bound is $\sqrt{2}/n$.
This is the best (Heisenberg) scaling of the estimation precision
that can be obtained by using $n$ entangled ancillas with the given
initial states.
Now, let us consider what happens when we make the optimal preparation
and post-selections for weak value amplification. We expect from \eqref{eq:kinfopt}
that the maximum information which can be attained through post-selection
will be reduced by a factor of
\begin{equation}
\eta=\frac{\text{Var}(\hat{A})_{|\Psi_{i}\rangle}}{\langle\hat{A}^{2}\rangle_{|\Psi_{i}\rangle}}=\frac{1}{2}.
\end{equation}
It is in this sense that the choice of $\hat{a}$ as a projector is
suboptimal. We will see in the next section what happens with the
optimal choice of $\hat{\sigma}_{z}$.
In the first case considered in the main text (i.e., increasing the
post-selection probability with the weak value $A_{w}$ fixed), the
optimal post-selected state is
\begin{equation}
|\Psi_{f}\rangle\propto(A_{w}^{*})|1\rangle^{\otimes n}+(n-A_{w}^{*})|0\rangle^{\otimes n}.\label{eq:post}
\end{equation}
Computing the post-selected meter state then produces
\begin{equation}
|\phi'\rangle_{1}=\frac{\left[n-A_{w}[1-\cos(n\varphi/2)]\hat{1}-iA_{w}\sin(n\varphi/2)\hat{\sigma}_{z}\right]|\phi\rangle}{\left(n^{2}+2[|A_{w}|^{2}-n\text{Re}A_{w}][1-\cos(n\varphi/2)]\right)^{1/2}}\approx\left(\hat{1}-iA_{w}\frac{\varphi}{2}\hat{\sigma}_{z}\right)|\phi\rangle,
\end{equation}
where we have used $\langle\phi|\hat{\sigma}_{z}|\phi\rangle=0$,
and then have made the small parameter approximation $n\varphi\ll1$.
This recovers the expected linear response result in \eqref{eq:stateps}.
This state is post-selected with probability
\begin{equation}
p_{1}=\frac{1}{2}-\cos(n\varphi/2)\frac{|A_{w}|^{2}-n\text{Re}A_{w}}{n^{2}+2[|A_{w}|^{2}-n\text{Re}A_{w}]}\approx\frac{n^{2}}{2n^{2}+4[|A_{w}|^{2}-n\text{Re}A_{w}]}\approx\frac{n^{2}}{4}|A_{w}|^{-2},
\end{equation}
where we have made the small parameter approximation $n\varphi\ll1$,
and then the large weak value assumption $n\ll|A_{w}|$.
Now computing the quantum Fisher information (\ref{eq:2}) with the
post-selected meter state $\sqrt{p_{1}}\,|\phi'\rangle_{1}$ yields
the simple expression
\begin{equation}
I_{1}(\varphi)\approx\frac{n^{2}}{4}\left(1-\left|\frac{\varphi A_{w}}{2}\right|^{2}\right)\leq\frac{n^{2}}{4},\label{eq:fisher1}
\end{equation}
in agreement with \eqref{eq:kinfopt}. The maximum achieves the best
possible scaling of $n^{2}$ as in \eqref{eq:4}. Moreover, for the
most frequently used linear response regime with $|A_{w}|\varphi\ll1$,
we achieve the expected maximum information of $\eta I(\varphi)=n^{2}/4$.
For the second case (i.e., increasing the weak value $A_{w}$ with
the post-selection probability fixed), we can obtain the results simply
by rescaling $A_{w}\to\sqrt{n}A_{w}$ to produce $p_{2}\propto n$,
as shown in the main text. This produces,
\begin{equation}
|\phi'\rangle_{2}\approx\left(\hat{1}-i\sqrt{n}A_{w}\frac{\varphi}{2}\hat{\sigma}_{z}\right)|\phi\rangle,
\end{equation}
and
\begin{equation}
p_{2}\approx\frac{n^{2}}{4}|\sqrt{n}A_{w}|^{-2}=\frac{n}{4}|A_{w}|^{-2},
\end{equation}
and yields the Fisher information
\begin{equation}
I_{2}(\varphi)\approx\frac{n^{2}}{4}\left(1-n\left|\frac{\varphi A_{w}}{2}\right|^{2}\right)\leq\frac{n^{2}}{4}.\label{eq:fisher2}
\end{equation}
The increase of the amplification factor $|A_{w}|$ correspondingly
decreases the remaining Fisher information, as expected from \eqref{eq:fisher1}.
However, since $n\varphi\ll1$ and $\varphi|A_{w}|\ll1$ in the linear
response regime, this decrease is still small.
Alternatively, this second case can be computed explicitly as follows.
For a fixed post-selection probability $p$, the post-selected state
must be $|\Psi_{f}\rangle=\sqrt{p}|\Psi_{i}\rangle+\sqrt{1-p}|\Psi_{i}^{\perp}\rangle,$
where the optimal $|\Psi_{i}^{\perp}\rangle$ is parallel to the component
of $\ha|\Psi_{i}\rangle$ in the complementary subspace orthogonal
to $|\Psi_{i}\rangle$. Computing this yields
\begin{equation}
\begin{aligned}|\Psi_{f}\rangle & =\sqrt{p}|\Psi_{i}\rangle+\sqrt{1-p}\frac{\hat{A}|\Psi_{i}\rangle-|\Psi_{i}\rangle\langle\Psi_{i}|\hat{A}|\Psi_{i}\rangle}{\sqrt{\text{Var}(\hat{A})_{|\Psi_{i}\rangle}}}\\
& =\left(\sqrt{\frac{p}{2}}-\sqrt{\frac{1-p}{2}}\right)|0\rangle^{\otimes n}+\left(\sqrt{\frac{p}{2}}+\sqrt{\frac{1-p}{2}}\right)|1\rangle^{\otimes n}.
\end{aligned}
\label{eq:fixp}
\end{equation}
Thus, computing the post-selected meter state yields
\begin{equation}
|\phi'\rangle_{2}\propto\left(\left(\sqrt{\frac{p}{2}}-\sqrt{\frac{1-p}{2}}\right)\hat{1}+\left(\sqrt{\frac{p}{2}}+\sqrt{\frac{1-p}{2}}\right)\e^{-in\varphi\hat{\sigma}_{z}/2}\right)|\phi\rangle\approx\left(\hat{1}-i|A_{w}|\frac{\varphi}{2}\hat{\sigma}_{z}\right)|\phi\rangle,
\end{equation}
where we have defined the effective weak value factor
\begin{equation}
|A_{w}|=\frac{n}{2}\left(1+\sqrt{\frac{1-p}{p}}\right)\approx\frac{n}{2}p^{-1/2},\label{eq:effaw}
\end{equation}
and have used the linear response approximations $n\varphi\ll1$ and
$\varphi|A_{w}|\ll1$, as well as the small probability assumption
$p\ll1$. Computing the quantum Fisher information from (\ref{eq:2})
with the state $\sqrt{p}\,|\phi'\rangle_{2}$ then produces
\begin{equation}
I_{2}(\varphi)\approx p|A_{w}|^{2}\left(1-\left[\frac{\varphi|A_{w}|}{2}\right]^{2}\right)=\frac{n^{2}}{4}\left(1-\left[\frac{n\varphi}{4\sqrt{p}}\right]^{2}\right)\leq\frac{n^{2}}{4}
\end{equation}
using the definition \eqref{eq:effaw}. This result precisely matches
the form of \eqref{eq:kinfsimple}. It is now clear that for quadratic
scaling $p=n^{2}p_{0}$ we recover \eqref{eq:fisher1} with the effective
reference weak value $|A_{w}|=1/(2\sqrt{p_{0}})$, while for linear
scaling $p=np_{0}$ we recover \eqref{eq:fisher2}.
\subsubsection{Ancilla Z-operators}
For contrast, an optimal choice of ancilla observable is $\hat{a}=\hat{\sigma}_{z}$,
as used in the main text. From the optimal initial state given by
Eq.~(10) in the main text, we have $\langle\hat{A}^{2}\rangle=n^{2}$
and $\langle\hat{A}\rangle=0$. Therefore, the maximum quantum Fisher
information from \eqref{eq:maxinf} that we can expect for estimating
$\varphi$ is
\begin{equation}
I(\varphi)=n^{2},\label{eq:4b}
\end{equation}
which is a factor of 2 larger than \eqref{eq:4}. The corresponding
quantum Cram\'{e}r-Rao bound is $1/n$. From \eqref{eq:kinfopt},
we expect that the reduction factor is
\begin{equation}
\eta=\frac{\text{Var}(\hat{A})_{|\Psi_{i}\rangle}}{\langle\hat{A}^{2}\rangle_{|\Psi_{i}\rangle}}=1.
\end{equation}
Thus, it is possible to saturate the optimal bound with this choice
of $\hat{a}$.
In the first case considered in the main text (i.e., increasing the
post-selection probability with the weak value $A_{w}$ fixed), the
optimal post-selected state is
\begin{equation}
|\Psi_{f}\rangle\propto(n+A_{w}^{*})|1\rangle^{\otimes n}+(n-A_{w}^{*})|0\rangle^{\otimes n}.\label{eq:postz}
\end{equation}
Computing the post-selected meter state then produces
\begin{equation}
|\phi'\rangle_{1}=\frac{\left[n\cos(n\varphi/2)\hat{1}-iA_{w}\sin(n\varphi/2)\hat{\sigma}_{z}\right]|\phi\rangle}{\left(n^{2}\cos^{2}(n\varphi/2)+|A_{w}|^{2}\sin^{2}(n\varphi/2)\right)^{1/2}}\approx\left(\hat{1}-iA_{w}\frac{\varphi}{2}\hat{\sigma}_{z}\right)|\phi\rangle,
\end{equation}
where we have used $\langle\phi|\hat{\sigma}_{z}|\phi\rangle=0$,
and then have made the small parameter approximation $n\varphi\ll1$.
This again recovers the expected linear response result in \eqref{eq:stateps}.
This state is post-selected with probability
\begin{equation}
p_{1}=\frac{n^{2}\cos^{2}(n\varphi/2)+|A_{w}|^{2}\sin^{2}(n\varphi/2)}{n^{2}+|A_{w}|^{2}}\approx\frac{n^{2}}{n^{2}+|A_{w}|^{2}}\approx n^{2}|A_{w}|^{-2},
\end{equation}
where we have made the small parameter approximation $n\varphi\ll1$,
and then the large weak value assumption $n\ll|A_{w}|$.
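Both the realized weak value and the $\varphi\to0$ limit of this probability can be checked numerically for small $n$ with a few lines (an illustrative sketch):
\begin{verbatim}
# Sketch: n ancilla qubits with a = sigma_z, GHZ-type |psi_i>, and the
# post-selection state above; check A_w and p_1 = n^2/(n^2 + |A_w|^2).
import numpy as np

n = 3
A_w = 50.0 + 10.0j                      # target weak value, |A_w| >> n
dim = 2 ** n
ket_max = np.zeros(dim, dtype=complex); ket_max[0] = 1.0    # all sigma_z = +1
ket_min = np.zeros(dim, dtype=complex); ket_min[-1] = 1.0   # all sigma_z = -1

psi_i = (ket_max + ket_min) / np.sqrt(2)
psi_f = (n + np.conj(A_w)) * ket_max + (n - np.conj(A_w)) * ket_min
psi_f /= np.linalg.norm(psi_f)

# A = sum_j sigma_z^(j) acts as +n on ket_max and -n on ket_min
A_psi_i = n * (ket_max - ket_min) / np.sqrt(2)

overlap = np.vdot(psi_f, psi_i)
p_1 = abs(overlap) ** 2                 # post-selection probability (phi -> 0)
A_w_realized = np.vdot(psi_f, A_psi_i) / overlap

assert np.isclose(A_w_realized, A_w)
assert np.isclose(p_1, n ** 2 / (n ** 2 + abs(A_w) ** 2))
\end{verbatim}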
Now computing the quantum Fisher information (\ref{eq:2}) with the
post-selected meter state $\sqrt{p_{1}}\,|\phi'\rangle_{1}$ yields
the simple expression
\begin{equation}
I_{1}(\varphi)\approx n^{2}\left(1-\left|\frac{\varphi A_{w}}{2}\right|^{2}\right)\leq n^{2},\label{eq:fisher1z}
\end{equation}
in agreement with \eqref{eq:kinfopt}. The maximum saturates the upper
bound of $n^{2}$ in \eqref{eq:4b}, as expected.
For the second case (i.e., increasing the weak value $A_{w}$ with
the post-selection probability fixed), we can again obtain the results
simply by rescaling $A_{w}\to\sqrt{n}A_{w}$ to produce
\begin{align}
|\phi'\rangle_{2} & \approx\left(\hat{1}-i\sqrt{n}A_{w}\frac{\varphi}{2}\hat{\sigma}_{z}\right)|\phi\rangle,\\
p_{2} & \approx n^{2}|\sqrt{n}A_{w}|^{-2}=n|A_{w}|^{-2},
\end{align}
and the Fisher information
\begin{equation}
I_{2}(\varphi)\approx n^{2}\left(1-n\left|\frac{\varphi A_{w}}{2}\right|^{2}\right)\leq n^{2}.\label{eq:fisher2z}
\end{equation}
Alternatively, computing the optimal post-selection state for a fixed
post-selection probability $p$ yields the same state as \eqref{eq:fixp}.
Hence, computing the post-selected meter state yields
\begin{equation}
|\phi'\rangle_{2}\propto\left(\left(\sqrt{\frac{p}{2}}-\sqrt{\frac{1-p}{2}}\right)e^{in\varphi\hat{\sigma}_{z}/2}+\left(\sqrt{\frac{p}{2}}+\sqrt{\frac{1-p}{2}}\right)\e^{-in\varphi\hat{\sigma}_{z}/2}\right)|\phi\rangle\approx\left(\hat{1}-i|A_{w}|\frac{\varphi}{2}\hat{\sigma}_{z}\right)|\phi\rangle,
\end{equation}
where we have defined the effective weak value factor
\begin{equation}
|A_{w}|=n\sqrt{\frac{1-p}{p}}\approx np^{-1/2},\label{eq:effaw2}
\end{equation}
in contrast to \eqref{eq:effaw}. Computing the quantum Fisher information
from (\ref{eq:2}) with the state $\sqrt{p}\,|\phi'\rangle_{2}$ then
produces
\begin{equation}
I_{2}(\varphi)\approx p|A_{w}|^{2}\left(1-\left[\frac{\varphi|A_{w}|}{2}\right]^{2}\right)=n^{2}\left(1-\left[\frac{n\varphi}{\sqrt{p}}\right]^{2}\right)\leq n^{2},
\end{equation}
using the definition \eqref{eq:effaw2}. As before, this result precisely
matches the form of \eqref{eq:kinfsimple}. It is now clear that for
quadratic scaling $p=n^{2}p_{0}$ we recover \eqref{eq:fisher1z}
with the effective reference weak value $|A_{w}|=1/\sqrt{p_{0}}$,
while for linear scaling $p=np_{0}$ we recover \eqref{eq:fisher2z}.
Therefore, in both post-selected qubit examples considered in the
main text we can nearly saturate the expected maximum of $I(\varphi)=n^{2}$
when the linear response conditions $n\varphi\ll1$, $\varphi|A_{w}|\ll1$,
and the large weak value condition $n\ll|A_{w}|$ are met, despite
the loss of data incurred by the post-selection.
\end{widetext}
\end{document}
\section{Theory of electron beam stability in graphene}
In the canonical problem of plasma-beam instability, the electron beam is collimated in momentum space rather than in real space. Momentum collimation is readily achieved upon electron injection through tunnel junctions~\cite{Cheianov}. The angular distribution of tunnel-injected electrons is Gaussian, and its width shrinks as the barrier transparency is reduced. In the following calculations, we model the distribution function of the beam electrons as a delta function, $f_b({\bf k}) = n_b \delta({\bf k}-{\bf k}_b)$, where $n_b$ is the density of injected electrons with momentum around ${\bf k}_b$, and the delta function is normalized according to $g(2\pi)^{-2}\int{d^2{\bf k}\, \delta({\bf k}-{\bf k}_b)} = 1$ ($g=4$ is the spin-valley degeneracy). The steady-state distribution function of the electrons thus reads
\begin{equation}
\label{Distribution}
f_0 ({\bf k})= f_F({\bf k}) + n_b \delta({\bf k}-{\bf k}_b),
\end{equation}
where $f_F({\bf k})$ is the Fermi function of background equilibrium electrons.
We now analyze the electromagnetic stability of the distribution (\ref{Distribution}). The analysis is based on the evaluation of the polarizability $\Pi({\bf q},\omega)$ and the dielectric function $\varepsilon({\bf q},\omega)$ of the electron system, followed by a search for unstable roots of the plasmon dispersion relation $\varepsilon({\bf q},\omega)=0$.
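Once $\Pi({\bf q},\omega)$ and hence $\varepsilon({\bf q},\omega)$ are specified, the unstable modes are located by finding the complex-$\omega$ roots of $\varepsilon({\bf q},\omega)=0$ with ${\rm Im}\,\omega>0$. A minimal numerical sketch of this step, with a placeholder two-pole dielectric function standing in for the graphene response derived below, reads:
\begin{verbatim}
# Sketch: locate complex roots of eps(q, omega) = 0.  The dielectric function
# here is a generic beam-plasma placeholder; in practice it is assembled from
# the polarizabilities Pi_0 and Pi_b of the background electrons and the beam.
import numpy as np
from scipy.optimize import root

def epsilon(q, omega):
    """Placeholder eps(q, omega) with a background pole and a weak beam pole."""
    omega_p = 1.0        # background plasmon frequency at this q (stub)
    omega_b = 0.9        # beam resonance q . v_b (stub)
    alpha_b = 5e-3       # small beam-density parameter (stub)
    return (1.0 - omega_p ** 2 / omega ** 2
            - alpha_b * omega_p ** 2 / (omega - omega_b) ** 2)

def find_root(q, omega_guess):
    def f(x):                            # complex equation -> two real ones
        eps = epsilon(q, x[0] + 1j * x[1])
        return [eps.real, eps.imag]
    sol = root(f, [omega_guess.real, omega_guess.imag], tol=1e-12)
    return sol.x[0] + 1j * sol.x[1]

w = find_root(q=1.0, omega_guess=0.85 + 0.10j)
print(f"root omega = {w:.4f}, growth rate Im(omega) = {w.imag:.3e}")
\end{verbatim}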
The evolution of the electron distribution function $f$ is governed by the kinetic equation:
\begin{equation}
\frac{\partial f}{\partial t} + {\bf v_k}\frac{\partial f}{\partial {\bf r}} + \frac{\partial V}{\partial {\bf r}}\frac{\partial f}{\partial {\bf k}} = \mathcal{C}_{ee}\{f\}
\end{equation}
where ${\bf v_k} = v_0 {\bf k}/k$ is the electron velocity in graphene and $V({\bf r})$ is the external electric potential. The right-hand side is the electron-electron (e-e) collision integral.
To preserve the main features of e-e collisions while maintaining analytical tractability, we adopt $\mathcal{C}_{ee}$ in the generalized relaxation-time approximation. In this model, all perturbations of the distribution function relax toward the local equilibrium
\begin{gather}
\label{Collision-integral}
\mathcal{C}_{ee}\{f\} = -\frac{f - f_{eq}}{\tau_{ee}},\\
f_{eq}({\bf k}) = \left[1+\exp\left\{\frac{\varepsilon_{\bf k} - {\bf ku}_{eq} - \mu_{eq}}{T_{eq}}\right\}\right]^{-1}
\end{gather}
rather than toward zero. The parameters of this local equilibrium, namely the quasi-Fermi level $\mu_{eq}$, the drift velocity ${\bf u}_{eq}$, and the temperature $T_{eq}$, differ from those of the steady background electrons. They are established after equilibration of the background electron plasma with the beam, and are determined from the particle-number, momentum, and energy conservation laws. If the density of beam electrons is small, the equilibrium drift velocity is $u_{eq} \approx v_0 (n_b/n_0)(k_b/k_F) $.
We now linearize the Boltzmann equation with respect to the small external potential $V({\bf r}) = \delta\varphi_{{\bf q} \omega}e^{i({\bf qr}-\omega t)}$. The distribution function acquires a correction $\delta f_{{\bf q}\omega}({\bf k}) e^{i{\bf qr} - i\omega t}$, and so does the local equilibrium function $f_{eq} = f^{(0)}_{eq} + \delta f_{eq} e^{i{\bf qr} - i\omega t}$. It is then possible to obtain a formal solution for $\delta f_{{\bf q}\omega}({\bf k})$ (the subscript ${\bf q}\omega$ will be suppressed from now on):
\begin{equation}
\label{Delta_f}
\delta f ({\bf k}) = \frac{- {\bf q} e\delta\varphi \frac{\partial}{\partial {\bf k}} \left\{f_F({\bf k}) + f_b({\bf k}) \right\} + i \gamma_{ee} \delta f_{eq}} {\omega + i \gamma_{ee} - {\bf q v_{k}}}.
\end{equation}
Considerable care is required when evaluating the momentum derivative of the beam distribution function, $\partial f_b({\bf k})/\partial{\bf k}$. Since the beam distribution is delta-peaked in momentum space, this derivative is ill-defined. The problem is resolved by recalling that the Boltzmann kinetic equation is derived from the quantum Liouville equation in the quasi-classical limit. Switching to the quantum equations (Appendix A), one finds the replacement rule for the pathological terms:
\begin{equation}
\frac{{\bf q} \partial f_b({\bf k})/\partial{\bf k}}{\omega + i \gamma_{ee} - {\bf q v_k}} \rightarrow \frac{f_{b}({\bf k}+{\bf q}) - f_{b}({\bf k})}{\omega + i \gamma_{ee} - \varepsilon_{\bf k + q} + \varepsilon_{\bf k}}.
\end{equation}
The solution for the distribution function is completed once the parameters of the local-equilibrium correction
\begin{equation}
\label{Eq-function}
\delta f_{eq} = \delta\mu \partial_\mu f^{(0)}_{eq} + \delta{\bf u} \partial_{\bf u} f^{(0)}_{eq} +\delta T \partial_T f^{(0)}_{eq}
\end{equation}
are found using the conservation laws for e-e collisions. More precisely, the time derivatives of the particle number, momentum, and energy must vanish when the collision integral (\ref{Collision-integral}) is evaluated on the distribution functions (\ref{Delta_f}) and (\ref{Eq-function}). This procedure leads to closed-form equations for the local-equilibrium parameters $\delta\mu$, $\delta{\bf u}$, and $\delta T$. These can be called generalized hydrodynamic equations and are valid at an arbitrary value of the Knudsen number ${\rm Kn} = qv_0 \tau_{ee}$. Their final form is quite cumbersome and is presented in Appendix B, yet they yield simple results in the hydrodynamic (${\rm Kn} \ll 1$) and ballistic (${\rm Kn} \gg 1$) limits.
\section{Results}
\subsection{Beam instability in graphene: collisionless case}
The polarization $\Pi({\bf q},\omega)$ of the electron system with an injected beam is, in the absence of collisions, the sum of the individual contributions of the steady electrons, $\Pi_0({\bf q},\omega)$, and of the beam, $\Pi_b({\bf q},\omega)$. The dielectric function $\varepsilon({\bf q},\omega)$ governing the collective response is therefore
\begin{equation}
\varepsilon = 1 +V_0\left[\Pi_0+\Pi_b\right] \equiv \varepsilon_0 + V_0 \Pi_b,
\end{equation}
where we have introduced the Fourier transform of the Coulomb potential in 2D, $V_0 = 2\pi e^2/\kappa|q|$ ($\kappa$ is the background dielectric constant)~\footnote{One can easily account for the presence of a metal gate at distance $d$ from graphene by multiplying $V_0$ by $1-e^{-2|q|d}$}, and the dielectric function of equilibrium graphene electrons, $\varepsilon_0 = 1 + V_0 \Pi_0$. The beam polarization is given by
\begin{equation}
\Pi_b = n_b \left[\frac{1}{\omega + i\delta - \omega^{-}_{b0}} - \frac{1}{\omega + i\delta - \omega^{+}_{b0}}\right].
\end{equation}
\begin{figure}[ht]
\includegraphics[width=0.85\linewidth]{Fig_1.eps}
\caption{\label{Fig1}
Beam instability in a collisionless electron system in graphene. Panel (A) shows the calculated dispersion of the normal plasmon ($\omega_{\rm pl}$, green line) and of the two beam-induced modes ($\omega_b^{\pm}$, red and blue lines). Dashed lines show the ten-fold magnified damping and growth rates of the beam-induced modes. Panel (B) shows the growth rate of the beam-induced mode (scaled by the energy of beam electrons), maximized over the wave vector $q$, the propagation angle $\theta$, and the gate-to-channel separation $d$. The inset shows the momentum distribution of electrons in the beam-instability problem.
}
\end{figure}
The polarization of the beam electrons $\Pi_b$ is proportional to their small density, yet it is highly resonant in frequency. The poles of the beam polarization are located at two beam-induced collective modes
\begin{equation}
\omega^{\pm}_{b0} = {\bf q v}_b \pm q v_0 |\sin\theta| \frac{q|\sin\theta|}{2k_b}.
\end{equation}
The frequency of these modes is {\it almost zero} in the reference frame of the beam, except for a small correction due to quantum effects. Interaction of the beam with the background electrons through the self-consistent field modifies the beam modes. Their frequencies are changed according to
\begin{equation}
\label{Beam_modes}
\omega^{\pm}_{b} = {\bf qv}_b \pm qv_0|\sin\theta| \left[ \left(\frac{q \sin\theta}{2 k_b}\right)^2 + \frac{V_0 (q) n_b/ k_b}{\varepsilon_0({\bf q},{\bf qv}_b)} \right]^{1/2}.
\end{equation}
The frequencies of these beam-induced modes are no longer stable, regardless of the beam density $n_b$. Indeed, the dielectric function of graphene has a non-zero imaginary part at $\omega = {\bf qv}_b < qv_0$, which signifies collisionless intraband absorption (Landau damping). The double sign before the square root in (\ref{Beam_modes}) implies that one mode ($\omega_b^{+}$) decays while the other one ($\omega_b^{-}$) grows in time.
The beam-induced modes always lie in the domain of intraband absorption $\omega < q v_0$, where the dielectric function of the background electrons has a positive imaginary part, ${\rm Im}\,\varepsilon_0 > 0$ (as shown in Fig.~\ref{Fig1}). It may look counter-intuitive that an absorptive dielectric function gives rise to plasmon gain, as follows from Eq.~(\ref{Beam_modes}). The explanation is that the energy of electromagnetic oscillations $W({\bf q},\omega)$ at frequency $\omega_b^{-}$ is negative~\cite{Mikhailovsky}. The time derivative of the oscillation energy is negative in absorptive media, $dW({\bf q},\omega)/dt<0$, which corresponds to growth of the absolute value of the energy.
The plasmon gain appears in a thresholdless manner in the absence of e-e collisions: even a very small density of beam electrons gives rise to a proportionally small growth rate. The thresholdless character of the beam instability is not specific to graphene; already in a collisionless warm three-dimensional Maxwellian plasma, Landau damping similarly gives rise to a beam instability~\cite{Mikhailovsky}. A special property of graphene (and of degenerate 2d electron systems in general) is that Landau damping is never parametrically small and cannot be neglected in the problem of beam instability. By contrast, lowering the temperature of a Maxwellian plasma leads to an exponential decrease of Landau damping. In this context, we note that the neglect of spatial dispersion in the background dielectric function in the first study of beam instability in graphene was not justified and led to quantitatively incorrect conclusions~\cite{Aryal-instab}.
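
As a simple illustration, the Python sketch below evaluates the beam-induced mode frequencies of Eq.~(\ref{Beam_modes}) numerically. The background dielectric function is supplied by the caller; here a constant complex value with ${\rm Im}\,\varepsilon_0>0$ is used only as a toy stand-in (it is {\it not} the RPA response of graphene), the beam is taken along the $x$-axis with $\theta$ the angle between ${\bf q}$ and ${\bf v}_b$, and units with $e^2=\kappa=1$ are assumed. The point of the example is merely that the two branches acquire imaginary parts of opposite sign.
\begin{verbatim}
import numpy as np

def beam_modes(q, theta, vb, v0, kb, nb, eps0, kappa=1.0, e2=1.0):
    """Beam-induced mode frequencies omega_b^{+/-} (Eq. (Beam_modes) in the text)."""
    qvb = q * vb * np.cos(theta)                 # q . v_b, beam taken along x
    V0 = 2.0 * np.pi * e2 / (kappa * abs(q))     # unscreened 2D Coulomb potential
    rad = (q * np.sin(theta) / (2.0 * kb))**2 + V0 * nb / (kb * eps0(q, qvb))
    split = q * v0 * abs(np.sin(theta)) * np.sqrt(rad + 0j)
    return qvb + split, qvb - split              # (omega_b^+, omega_b^-)

# Toy absorptive background with Im eps0 > 0 (a stand-in, NOT the graphene RPA result).
eps0_toy = lambda q, w: 3.0 + 0.4j

wp, wm = beam_modes(q=0.3, theta=0.4, vb=1.0, v0=1.0, kb=1.0, nb=0.01, eps0=eps0_toy)
print("Im omega_b^+ =", wp.imag, "(decaying)")
print("Im omega_b^- =", wm.imag, "(growing)")
\end{verbatim}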
\begin{figure}[ht]
\includegraphics[width=0.85\linewidth]{Fig_2.eps}
\caption{\label{Fig2}
Excitation of normal graphene plasmons by an electron beam at the hydrodynamic-to-ballistic crossover. The figure shows the calculated damping/growth rate of normal graphene plasmons vs the e-e collision frequency at fixed wave vector $q = T/(\hbar v_0)$ and various densities of beam electrons. For small beam densities the viscous damping reaches its maximum at ${\rm Kn}\sim 1$; for large beam densities, so does the growth rate due to momentum transfer between the beam and the normal plasmon modes.
}
\end{figure}
A closer inspection of the dispersion relation for the unstable modes reveals that the growth rate has maxima as a function of the wave vector $q$, the propagation angle $\theta$, the Fermi energy $\varepsilon_F$, and the gate-to-channel separation $d$. The maximum possible growth rate of the beam instability can hardly be found analytically, but the outcome of the numerical maximization can be presented in a universal dimensionless form. In Fig.~\ref{Fig1}B we plot the growth rate in units of $k_b v_0$ vs the dimensionless density of beam electrons $p_1 = 2\pi \alpha_c n_b/k_b^2$. Assuming that the beam is injected slightly above the Fermi level, $k_b \approx k_F$, and taking a realistically small density of beam electrons $n_b/n_{eq} \approx 0.1$ and coupling constant $\alpha_c =0.5$, we find $p_1 \approx 0.1$ and a maximum growth rate of the beam instability $\sim 0.01 \varepsilon_F/\hbar$. It becomes comparable to the e-e collision frequency at temperatures $T \approx 0.1 \varepsilon_F$. For a realistic Fermi energy $\sim 100$ meV, this corresponds to liquid-nitrogen temperature.
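
The order-of-magnitude estimate for $p_1$ can be reproduced in a few lines; the only extra assumption is the standard graphene relation $n_{eq}=k_F^2/\pi$ for spin-valley degeneracy $g=4$.
\begin{verbatim}
import numpy as np

# Order-of-magnitude check of p_1 = 2*pi*alpha_c*n_b/k_b^2,
# assuming n_eq = k_F^2/pi (spin-valley degeneracy g = 4).
alpha_c = 0.5
kF = 1.0
n_eq = kF**2 / np.pi
n_b = 0.1 * n_eq          # n_b / n_eq = 0.1
k_b = kF                  # beam injected just above the Fermi level
p1 = 2 * np.pi * alpha_c * n_b / k_b**2
print("p_1 =", p1)        # ~0.1, as quoted in the text
\end{verbatim}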
\begin{figure}[ht]
\includegraphics[width=0.85\linewidth]{Fig_3.eps}
\caption{\label{Fig3}
Instability of normal graphene plasmons in the nearly-hydrodynamic regime ${\rm Kn} \ll 1$. Panel (A) shows the calculated threshold density of beam electrons for the onset of instability vs the phase velocity at various Fermi energies (solid lines). Dashed lines show the analytical approximation (\ref{Beam-threshold-limit}) to the threshold density. Panel (B) shows a color map of the threshold density vs the phase velocity $s$ and the Fermi energy $\varepsilon_F/T$.
}
\end{figure}
As the temperature is increased, e-e collisions destroy the ordinary beam instability. As long as the beam density is small ($n_b/n_0 \ll 1$) and collisions can be treated perturbatively ($\gamma_{ee} \ll \omega^{(\pm)}_b$), the effect of collisions is trivial and amounts to a shift of the beam-mode frequency by $-\gamma_{ee}$. The in-scattering terms of the collision integral can be neglected in this regime, as they are proportional to the product of the beam density and the scattering rate. Physically, this reflects the smallness of the phase space occupied by the beam electrons, which results in a low probability of electron scattering into the direction of beam propagation. This conclusion holds for an arbitrary model of e-e scattering and is not limited to the generalized relaxation-time approximation analyzed here.
\subsection{Excitation of graphene plasmons by injected electrons: strong e-e collisions}
The situation changes radically for e-e collisions with frequency comparable to that of plasmon modes. Numerically, it corresponds to terahertz frequencies in graphene at room temperature~\cite{Our-hydrodynamic}. For ordinary plasmon modes in equilibrium with $\omega_{pl} \approx v_0 \sqrt{4\alpha_c k_F q}$, this frequency range is characterized by strong viscous damping~\cite{Crossover}. This result is illustrated in Fig. 2 with the black line. It shows that the damping rate of bulk graphene plasmon vs e-e collision frequency has a maximum located at the crossover between ballistic and hydrodynamic regimes.
When the beam is injected into an electron plasma with strong e-e collisions, the maximum in the damping rate is transformed into a maximum of the growth rate, as shown in Fig. 2 with the green and red lines. We have verified that both the plasmon damping and the beam-induced instability disappear in the deep hydrodynamic regime ($\gamma_{ee} \rightarrow \infty$) and in the ballistic regime ($\gamma_{ee} \rightarrow 0$). Both effects appear as first-order corrections to the plasmon dispersion in the Knudsen number ${\rm Kn} = q v_0 \tau_{ee}$; in the absence of the beam, the viscous damping equals $\omega'' = qv_0 {\rm Kn}/4$. The fact that the beam-induced instability appears at the same order as viscosity enables us to interpret it as viscous momentum transfer between the electron beam and the normal plasmon modes.
The growth rate of 'normal' graphene plasmons due to viscous interaction with electron beam can be studied analytically by expansion of generalized hydrodynamic equations in the limit of small Knudsen number. Noting that the real part of 'normal' plasmon frequency is almost unaffected by scattering, we can obtain the damping/growth rate as:
\begin{equation}
\label{Damping-growth}
\gamma = qv_0 {\rm Kn} \frac{\frac{m_{hd}}{m_{b}} P_1(s,\beta_{eq}) + P_2(s,\beta_{eq}) + n_b P_3(s,\beta_{eq})}{P_4(s,\beta_{eq}) + n_b P_5(s,\beta_{eq})},
\end{equation}
where $s = \omega/qv_0$ is the phase velocity scaled by the Fermi velocity, $\beta_{eq} = u_{eq}/v_0$ is the dimensionless velocity of electrons equilibrated with the beam, $m_{hd} = \rho_{eq}v_0^2/n_{eq}$ and $m_{b} = 2 n_{eq}(\partial n_{eq}/\partial\mu)^{-1}$ are the 'proper' electron masses in the hydrodynamic and ballistic regimes, and $P_{i}(s,\beta_{eq})$ are polynomial functions. To the leading order in the beam density (and hence in the drift velocity) they are given by
\begin{gather}
P_{1} = 4 s \left(1-2
s^2\right)^2 - 4\beta_{\text{eq}} \left(12 s^4+8 s^2-1\right) ,\\
P_{2} = -4 \left(4 s^5-5
s^3+s\right) + 6 \left(6 s^4-8 s^2+1\right) \beta_{\text{eq}},\\
P_{3} = -16 s^5+20 s^3+6 s^2-4 s-2,\\
P_{4} = -32 s^3 + 4\beta_{\text{eq}} \left(26 s^2-1\right), \\
P_{5} = -8 s^3-6 s^2+1.
\end{gather}
In the absence of electron beam, equation (\ref{Damping-growth}) readily reproduces damping rate of equilibrium graphene plasmons~\cite{Crossover}
\begin{eqnarray}
\gamma_0 = - \frac{qv_0 {\rm Kn}}{8}\left[1 + \left(\frac{m_{hd}}{m_{b}} - 1\right)\left(\frac{1}{s}-2s\right)^2\right].
\end{eqnarray}
The first term in square brackets is due to viscous damping. The second one can be traced back to the intrinsic conductivity of the Dirac fluid~\cite{Gallagher_High-frequency,Quantum_critical,Lucas_resonances,Polini_intrinsic} which, in turn, results from the non-conservation of electric current in collisions of Dirac electrons.
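
For reference, the following sketch implements Eq.~(\ref{Damping-growth}) with the polynomials $P_1,\ldots,P_5$ listed above and checks numerically that, in the absence of the beam (taken here to mean $n_b=0$ and $\beta_{eq}=0$), it reduces to the rate $\gamma_0$ quoted above. The prefactor $qv_0{\rm Kn}$ is passed as a single parameter; all numerical values are arbitrary test inputs.
\begin{verbatim}
import numpy as np

def polynomials(s, b):
    """P_1,...,P_5 of Eq. (Damping-growth), to leading order in the beam density;
    b stands for beta_eq."""
    P1 = 4*s*(1 - 2*s**2)**2 - 4*b*(12*s**4 + 8*s**2 - 1)
    P2 = -4*(4*s**5 - 5*s**3 + s) + 6*(6*s**4 - 8*s**2 + 1)*b
    P3 = -16*s**5 + 20*s**3 + 6*s**2 - 4*s - 2
    P4 = -32*s**3 + 4*b*(26*s**2 - 1)
    P5 = -8*s**3 - 6*s**2 + 1
    return P1, P2, P3, P4, P5

def gamma_rate(s, beta_eq, qv0Kn, n_b, m_ratio):
    """Damping/growth rate of Eq. (Damping-growth); m_ratio = m_hd/m_b,
    qv0Kn = q*v_0*Kn passed as a single prefactor."""
    P1, P2, P3, P4, P5 = polynomials(s, beta_eq)
    return qv0Kn * (m_ratio*P1 + P2 + n_b*P3) / (P4 + n_b*P5)

# No-beam limit (n_b = 0, beta_eq = 0) should reproduce
# gamma_0 = -(q v0 Kn/8) * [1 + (m_hd/m_b - 1) * (1/s - 2 s)^2].
s, m_ratio, qv0Kn = 1.7, 1.3, 0.05
lhs = gamma_rate(s, beta_eq=0.0, qv0Kn=qv0Kn, n_b=0.0, m_ratio=m_ratio)
rhs = -qv0Kn/8 * (1 + (m_ratio - 1) * (1/s - 2*s)**2)
print(lhs, rhs)   # the two values coincide
\end{verbatim}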
The beam effect on plasmon growth is proportional to the product of two small quantities, the Knudsen number and the relative density of beam electrons. One may question whether the beam can have a pronounced effect on damping compared to viscosity, whose contribution is proportional to $\rm Kn$ alone. Such a situation is possible in the limit of large phase velocity, $s \gg 1$. In this limit the viscous damping disappears but the beam-induced growth persists, while the expression for the damping/growth rate acquires a simple form:
\begin{equation}
\gamma_{s\gg 1} \approx - \frac{1}{2} qv_0 s^2 {\rm Kn} \left[\frac{m_{hd}}{m_b} - 1 - n_b\right].
\end{equation}
We therefore observe that the beam-induced growth has to compete only with the damping due to intrinsic conductivity and not with the viscous damping. Further, in the limit of degenerate carriers, $\varepsilon_F/T \gg 1$, this last damping channel also disappears, and the threshold density of beam electrons for the onset of instability can be relatively small:
\begin{equation}
\label{Beam-threshold-limit}
\frac{n_b}{n_{eq}}\approx \frac{\pi^2}{3}\frac{T^2}{\varepsilon_F^2}.
\end{equation}
The threshold beam density for the onset of plasma instability in the nearly-hydrodynamic regime is a function of only two parameters: the dimensionless phase velocity $s = \omega/qv_0$ and the scaled Fermi energy. These universal dependences are shown in Fig. 3 with solid lines; dashed lines correspond to the analytical low-temperature limit (\ref{Beam-threshold-limit}). For small phase velocities $s \sim 1$, the threshold density becomes unachievably large, as the beam-induced momentum transfer cannot compensate for the viscous dissipation. At large velocities, the instability threshold abruptly goes to zero.
\section{Discussion and conclusions}
The above discussion focused on the stability of a spatially uniform distribution comprising steady electrons and a collimated beam with energy slightly above the Fermi level. This picture is simplified, as the beam electrons undergo scattering and angular spreading upon propagation over the steady Fermi sea. Within the adopted model of the collision integral, the angular spreading occurs over a length $l \sim v_0 \tau_{ee}$.
More advanced models of scattering in two dimensions account for different relaxation rates of the even and odd harmonics of the distribution function, $\tau_{\rm even} \approx \tau_{\rm ee} \approx (T/\varepsilon_F)^2 \tau_{\rm odd}$~\cite{Gurzhi_new_effect}. Within these models, the temporal evolution of the beam features two characteristic stages~\cite{Gurzhi_beam_relaxation}: (1) angular spreading of electrons across $\delta\theta \sim (T/\varepsilon_F)^{1/2}$ and formation of a hole 'tail' in the opposite direction during the time $\tau_{\rm even}$; (2) complete angular equilibration during the time $\tau_{\rm odd}$. We may expect that the spatial evolution of the injected beam similarly features two characteristic lengths, $l_{\rm even} = v_0 \tau_{\rm even}$ and $l_{\rm odd} = v_0 \tau_{\rm odd}$. A stability study of such 'pre-equilibrium' beams with angular width $\delta\theta$ is a subject of forthcoming research.
It is instructive to compare the stability criteria for various distributions of drifting electrons. The above study showed that the electron beam, a distribution of the highest possible anisotropy, is unstable in the absence of collisions without any threshold in the beam density. The opposite limiting case is the locally-equilibrium distribution of drifting electrons, which represents a Fermi sphere shifted by ${\bf ku}_{\rm dr}$ in momentum space. Such patterns of drifting electrons can lead to instabilities in double-layer and grating-gated graphene, the velocity threshold being $u_{dr} \gtrsim v_0/\sqrt{2}$~\cite{Emission_by_drifting_electrons}. The high threshold velocity is compensated by the insensitivity of hydrodynamic distributions to e-e collisions, while electron beams are strongly affected by them. The instability due to viscous momentum transfer from the beam to the normal plasmon modes is an appealing exception to this trade-off.
{\it Acknowledgement.} This work was supported by the grant 18-37-20058 of the Russian Foundation for Basic Research. The author thanks Victor Ryzhii for helpful discussions.
\section{Introduction and statement of main results}
Squared Bessel processes are a family of one-dimensional diffusions on $[0,\infty)$, defined by
continuous solutions $Y = (Y(x), 0 \le x \le \zeta)$ of the stochastic differential equation
\begin{equation}
\label{besqdef}
dY(x)=\delta\,dx+2\sqrt{Y(x)}dB(x),\qquad Y(0)=y\ge 0, \qquad 0 < x < \zeta
\end{equation}
where $\delta$ is a real parameter, $B=(B(x),x\ge 0)$ is standard Brownian motion,
and $\zeta$ is the {\em lifetime} of $Y$, defined by
\begin{equation}
\label{def:sdecases}
\zeta:=\begin{cases}
\infty & \text{if $ \delta > 0 $ }\\
T_0:= \inf \{x\ge 0\colon Y(x) = 0 \} & \text{if $\delta \le 0$}.
\end{cases}
\end{equation}
It is known \cite[Chapter XI]{RevuzYor}, \cite{GoinYor03} that this SDE has a unique strong solution, with
$\zeta < \infty$ and $Y(\zeta) = 0$ almost surely if $\delta \le 0$, when we make the boundary state $0$ absorbing by setting $Y(x) = 0$ for $x \ge \zeta$.
The distribution on the path space $C[0,\infty)$ of the process $Y$ so defined, for each starting state $y \ge 0$ and
each real $\delta$, is denoted ${\tt BESQ}_y(\delta)$. For each real $\delta$, the collection of laws
$({\tt BESQ}_y(\delta), y \ge 0)$ defines a Markovian diffusion process on $[0,\infty)$, the {\em squared Bessel process of dimension $\delta$},
denoted ${\tt BESQ}(\delta)$.
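
For readers who wish to experiment, a minimal Euler--Maruyama sketch (in Python) of the SDE (\ref{besqdef}) with the absorption rule (\ref{def:sdecases}) is given below. It is only a first-order scheme, not an exact simulation of ${\tt BESQ}$ transition laws; the final lines check the elementary fact that the mean of ${\tt BESQ}_y(\delta)$ at time $x$ is $y+\delta x$ for $\delta\ge 0$.
\begin{verbatim}
import numpy as np

def besq_path(y0, delta, T, dt=1e-3, rng=None):
    """Euler--Maruyama sketch of dY = delta dx + 2 sqrt(Y) dB, Y(0) = y0,
    absorbed at the first hitting time of 0 when delta <= 0."""
    rng = np.random.default_rng(rng)
    n = int(T / dt)
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        y[i + 1] = y[i] + delta * dt + 2.0 * np.sqrt(max(y[i], 0.0)) * dB
        if delta <= 0 and y[i + 1] <= 0.0:   # lifetime zeta = T_0
            y[i + 1:] = 0.0
            break
        y[i + 1] = max(y[i + 1], 0.0)        # keep the scheme nonnegative
    return y

# Elementary check: the mean of BESQ_y(delta) at time x is y + delta*x (delta >= 0).
end_vals = np.array([besq_path(1.0, 3.0, T=1.0, rng=k)[-1] for k in range(2000)])
print(end_vals.mean(), "should be close to", 1.0 + 3.0 * 1.0)
\end{verbatim}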
Several other constructions and interpretations of ${\tt BESQ}(\delta)$ are known.
In particular,
\begin{itemize}[leftmargin=.7cm]
\item ${\tt BESQ}(\delta)$ for $\delta = 1,2, \ldots$ is the squared norm of standard Brownian motion in $\mathbb{R}^\delta$;
\item ${\tt BESQ}(\delta)$ may be understood for all real $\delta$ as a continuous-state branching process, with an immigration rate
$\delta$ if $\delta >0$, emigration rate $|\delta|$ if $\delta <0$, and lifetime $\zeta$ at which the population dies out.
\end{itemize}
The case of immigration has been well-studied \cite{KawWat71,ShiWat73,RogWil2,RevuzYor,Lambert2002,Li2006}.
The literature on the case of emigration is rather sparse \cite{Pal13,Paper3}, but scaling limit results for discrete branching processes with emigration \cite{Vatutin1977,VatZub1993} with
${\tt BESQ}(\delta)$ limits for $\delta <0$ can be obtained from
\cite{Alexander2011,BerKor2016,Pal13}.
While dimension $\delta = 0$ is critical for whether the ${\tt BESQ}(\delta)$ process has finite or infinite lifetime,
dimension $\delta = 2$ is well known to be critical in another respect: for a ${\tt BESQ}_v(\delta)$ process $Y$ with hitting times $T_y:= \inf \{x\ge 0\colon Y(x) = y \}$,
\begin{itemize}[leftmargin=.7cm]
\item for $\delta\!>\!2$ the process is upwardly transient, with
$\mathbb{P}_v(T_0\!<\!\infty)\!=\!0$ and $\mathbb{P}_v(T_w\!<\!\infty)\!=\!1$ for $0< v < w$, while
\item for $\delta <2$ the process is either recurrent if $0 < \delta < 2$, or downwardly transient
if $\delta \le 0$, with $\mathbb{P}_v(T_a < \infty) = 1$ for all $0 \le a < v$ in either case.
\end{itemize}
A remarkable duality between ${\tt BESQ}(\delta)$ processes of dimensions $\delta = 2 \pm 2 \alpha $
was pointed out in \cite[Theorem (3.3) and Remark (4.2)(ii)]{MR620995}
and \cite[Section 3]{PitmYor82}:
\begin{itemize}[leftmargin=.7cm]
\item for each real $\alpha \ge 0$ and $0 < u < v$, the conditional distribution of a ${\tt BESQ}_v( 2 + 2 \alpha )$ process up to time $T_u$, given the event $(T_u < \infty)$ which has
probability $(u/v)^{\alpha}$, equals the unconditional distribution of a ${\tt BESQ}_v( 2 - 2 \alpha )$ process up to time $T_u$.
\end{itemize}
There is a similar description of ${\tt BESQ}_u( 2 + 2 \alpha )$ up to $T_v$ for $0 < u < v$ as ${\tt BESQ}_u( 2 - 2 \alpha )$ up to $T_v$ given $T_v < T_0$.
This duality relation between dimensions $2 \pm 2 \alpha $ is best known for $\alpha \in (0,1)$. Then it relates
the recurrent dimensions $2 - 2 \alpha \in (0,2)$, for which the inverse local time process of ${\tt BESQ}_0(2 - 2 \alpha)$ at $0$ is a stable subordinator of index
$\alpha$, to the transient dimensions $2 + 2 \alpha \in (2,4)$. For $\alpha = 1/2$
this is the well known relation between Brownian motion on $[0,\infty)$ with either reflection or absorption at $0$, and the three-dimensional Bessel process,
expressed here in terms of squared Bessel processes.
But as emphasized in \cite[Example (3.5)]{PitmYor82}, the duality relation between dimensions $2 \pm 2\alpha$ holds also for $\alpha \ge 1$, when it relates the downwardly
transient ${\tt BESQ}(- \delta)$ process for $- \delta = 2 - 2 \alpha \le 0$ to the upwardly transient ${\tt BESQ}(4+ \delta)$ process.
It was shown by Shiga and Watanabe \cite{ShiWat73} that the distribution of ${\tt BESQ}_y(\delta)$ for all real $y \ge 0$ and $\delta \ge 0$ is
uniquely determined by the prescription that ${\tt BESQ}_y(1)$ is the distribution of $(\sqrt{y} + B)^2$, and the following additivity property: for $y,y^\prime\ge 0$ and $\delta,\delta^\prime\ge 0$,
and two independent processes $Y$ and $Y'$,
\begin{equation}\label{fulladd}
\mbox{if $Y$ is a ${\tt BESQ}_y(\delta)$ and $Y^\prime$ is a ${\tt BESQ}_{y^\prime}(\delta^\prime)$ then $ Y+Y^\prime$ is a ${\tt BESQ}_{y+y^\prime}(\delta+\delta^\prime)$}.
\end{equation}
The distribution of ${\tt BESQ}_y(-\delta)$ for all $y > 0$ and $\delta >0$ is determined in turn by the duality between dimensions $-\delta$ and $4 + \delta$.
Pitman and Yor \cite{PitmYor82} used the additivity property to construct a ${\tt BESQ}_y(\delta)$ process $Y_y^{(\delta)}$ for $y, \delta \ge 0$
as a sum of points in a $C[0,\infty)$-valued Poisson point process, whose intensity measure involves the local time profile induced by It\^o's law of Brownian excursions.
The $C[0,\infty)$-valued process $(Y_y^{(\delta)}\!,\,y \!\ge\! 0,\delta \!\ge\! 0)$ then has stationary independent increments in both $y \!\ge\! 0$ and $\delta \!\ge\! 0$.
This construction, and the duality between dimensions $0$ and $4$, explained the multiple appearances of ${\tt BESQ}(\delta)$ processes and their bridges for $\delta = 0,2$ and $4$ in the Ray--Knight descriptions of Brownian local time
processes.
This model of Brownian local times and ${\tt BESQ}$ processes, driven by a Poisson point process of local time pulses from Brownian excursions, led to a number of further developments.
In particular, as recalled later in Lemmas \ref{lmplus} and \ref{lmminus}, if a reflecting Brownian motion $|B|$ is perturbed by adding
a multiple $\mu$ of its local time process $\ell$ at $0$, to form $X:= |B| + \mu \ell$, where $\mu$ might be of either sign,
then the resulting {\em perturbed Brownian motion} $X$ has a local time process from which it is possible, by varying $\mu$, and sampling
at suitable random times, to construct ${\tt BESQ}(\delta)$ processes for all real $\delta$.
The more recent notion of a {\em Poisson loop soup}
\cite{MR2815763}
greatly generalizes this construction of local time fields from one-dimensional Brownian motion to one-dimensional diffusions \cite{lupu-diffusion-loops}
and much more general Markov processes.
Despite these constructions of ${\tt BESQ}(\delta)$ for all real $\delta$ in the local time processes of perturbed Brownian motions,
and the general importance of additivity properties in the construction of local time fields \cite{MR2250510}
\cite{MR2815763},
it is known \cite[Exercise XI.(1.33)]{RevuzYor} and \cite[top of p.332]{GoinYor03} that the additivity property \eqref{fulladd}
of ${\tt BESQ}$ processes
fails
without the assumption that both $\delta \ge 0$ and $\delta' \ge 0$.
Our starting point here is a weaker form of additivity of ${\tt BESQ}$ processes, involving both positive and negative
dimensions:
\begin{proposition}
\label{prop:add} For arbitrary real $\delta,\delta^\prime$ and $y,y^\prime\ge 0$, let $Y$, $Y^\prime$ and $Y_1$ be three independent processes, with
\begin{itemize}[leftmargin=.7cm]
\item $Y$ a ${\tt BESQ}_y(\delta)$ with lifetime $\zeta$;
\item $Y'$ a ${\tt BESQ}_{y'}(\delta')$ with lifetime $\zeta'$;
\item $Y_1$ a ${\tt BESQ}_{1}(\delta + \delta')$.
\end{itemize}
Let $T$ be a stopping time relative to the filtration generated by the pair of processes $(Y,Y')$, with $T \le \zeta \wedge \zeta'$, and let
$Z$ be the process\vspace{-0.1cm}
\begin{equation}
\label{def:cases}
Z(x):=\begin{cases}
Y(x) + Y'(x), & \text{if $0 \le x \le T$},\\
Z(T) Y_1( (x - T)/Z(T) ), & \text{if $T < x < \infty$}.
\end{cases}\vspace{-0.1cm}
\end{equation}
Then $Z$ is a ${\tt BESQ}_{y + y'}(\delta + \delta')$.
\end{proposition}
By the scaling property of squared Bessel processes, for each fixed $z >0$ the scaled process $(z Y_1(w/z), w \ge 0)$ is a ${\tt BESQ}_{z}(\delta + \delta')$.
So \eqref{def:cases} sets $Z := Y + Y'$ on $[0,T]$, and makes $Z$ evolve as a ${\tt BESQ}(\delta + \delta')$ on $[T,\infty)$.
This proposition and its proof are a straightforward generalization of the case with $\delta = -1$, $\delta' = 0$ and
$T = \zeta \wedge \zeta'$, which was established as \cite[Lemma 25]{Paper3}.
The proof is by consideration of the SDE \eqref{besqdef}, as in the proof of the additivity property (\ref{fulladd}) in \cite[Theorem XI (1.2)]{RevuzYor}.
If both $\delta, \delta' \ge 0$, the conclusion of Proposition \ref{prop:add} holds even without the assumption $T \le \zeta \wedge \zeta'$,
by combining the simpler additivity property (\ref{fulladd}) with the strong Markov property of ${\tt BESQ}(\delta + \delta')$.
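
The following Python sketch, based on the same Euler discretization as above, implements the concatenation (\ref{def:cases}) with the choice $T=\zeta\wedge\zeta'$ and, for the special case $y=0$, $\delta=1$, $y'=v=1$, $\delta'=-1$, checks that the sample mean of $Z(x)$ is close to $v$, as it must be for ${\tt BESQ}_v(0)$. After $T$ the process is simply evolved as a ${\tt BESQ}(\delta+\delta')$, which by scaling agrees in law with the prescription in (\ref{def:cases}). This is only a sanity check of the construction, not a proof.
\begin{verbatim}
import numpy as np

def Z_path(y, delta, yp, deltap, horizon, dt=1e-3, rng=None):
    """Sketch of the process Z of Proposition 1.1 with T = zeta /\ zeta':
    run Y ~ BESQ_y(delta) and Y' ~ BESQ_{y'}(delta') independently until the
    first of them is absorbed at 0; afterwards evolve Z = Y + Y' as a single
    BESQ(delta + delta')."""
    rng = np.random.default_rng(rng)
    n = int(horizon / dt)
    Y, Yp = y, yp
    merged = False
    Z = np.empty(n + 1)
    Z[0] = y + yp
    for i in range(n):
        if not merged:
            Y  += delta  * dt + 2 * np.sqrt(max(Y, 0.0))  * rng.normal(0, np.sqrt(dt))
            Yp += deltap * dt + 2 * np.sqrt(max(Yp, 0.0)) * rng.normal(0, np.sqrt(dt))
            if (delta <= 0 and Y <= 0) or (deltap <= 0 and Yp <= 0):
                merged = True            # time T = zeta /\ zeta' has been reached
            Y, Yp = max(Y, 0.0), max(Yp, 0.0)
            Z[i + 1] = Y + Yp
        else:
            z = Z[i] + (delta + deltap) * dt \
                + 2 * np.sqrt(max(Z[i], 0.0)) * rng.normal(0, np.sqrt(dt))
            Z[i + 1] = max(z, 0.0)
    return Z

# Sanity check for y = 0, delta = 1, y' = v = 1, delta' = -1: the sum should be
# BESQ_v(0), whose mean at any time is v.
x = 1.0
vals = np.array([Z_path(0.0, 1.0, 1.0, -1.0, x, rng=k)[-1] for k in range(1000)])
print(vals.mean(), "should be close to 1.0")
\end{verbatim}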
We are particularly interested in the instance of Proposition \ref{prop:add} with $y = 0$, $y' = v$,
$\delta' = - \delta<0$ and $T = \zeta'$, which may be paraphrased as follows:
\begin{corollary}
\label{propadd} Let $\delta>0$ and $v\ge 0$. Let $Y^\prime:=
(Y_v^\prime(x),x\ge 0)$ be ${\tt BESQ}_v(-\delta)$
absorbed at $\zeta^\prime(v):=\inf\{x\ge 0\colon Y_v^\prime(x)=0\}$. Conditionally given $Y^\prime$ with $\zeta^\prime(v)=a$, let
$Y:=
(Y_{0,v}^{(\delta)}(x),x\ge 0)$ be a
time-inhomogeneous
Markov process that is ${\tt BESQ}_0(\delta)$ on the time interval $[0,a]$ and then continues on $[a,\infty)$ as ${\tt BESQ}(0)$.\ Then $Y+Y^\prime$ is a ${\tt BESQ}_v(0)$.
\end{corollary}
The subtlety here is
that we create dependence between $Y$ and $Y^\prime$ by specifying that $Y$ only follows
${\tt BESQ}_0(\delta)$
independently
of $Y'$ until time
$\zeta^\prime(v)$, when $Y^\prime$ hits zero,
and then $Y$ continues as needed for the additivity to hold.
In \cite{Paper1}, the authors encountered the case $\delta=1$ of Corollary \ref{propadd} in a more elaborate context which we review in Section
\ref{sec:lit}.
Let $L=(L(x,t),x\in\mathbb{R}, t\ge 0)$ be the jointly continuous space-time local time process of Brownian motion $B=(B(t),t\ge 0)$ and let
$\tau(v)=\inf\{t\ge 0\colon L(0,t)>v\}$ be the inverse local time of $B$ at $0$.
According to one of the Ray--Knight theorems,
the process $(L(x,\tau(v)),x\ge 0)$ is a ${\tt BESQ}_v(0)$. This raises the following question:
\medskip
\begin{center}
Can we find the pair $(Y,Y^\prime)$ of Corollary \ref{propadd} embedded in the local times of $B$?
\end{center}
\medskip
The following theorem provides a positive answer to this question.
See Figure \ref{splitfig} for an illustration of the embedding.
\begin{theorem}\label{emimm} For each $\delta>0$, there is an increasing family of stopping times $S_\delta(x), x \ge 0$
such that the following two families of random variables are independent:\vspace{-0.1cm}
\begin{itemize}[leftmargin=.7cm]
\item $Y_0^{(\delta)}:=(L(x,S_\delta(x)),x\ge 0) \mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}_0(\delta)$;
\item $Y_v^\prime:=(L(x,\tau(v))-L(x,S_\delta(x)\wedge\tau(v)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}_v(-\delta)$ for all $v\ge 0$.
\end{itemize}
For each $v\ge 0$, the random level $\zeta^\prime(v):=\inf\{x\ge 0\colon S_\delta(x)>\tau(v)\}$ is almost surely finite,
and coincides with the
absorption time of $Y_v^\prime$. Conditionally given $\zeta^\prime(v)=a$,
\begin{itemize}[leftmargin=.7cm]
\item
the process
$Y_{0,v}^{(\delta)}:=(L(x,S_\delta(x)\wedge\tau(v)),x\ge 0)$ is independent of $Y_v^\prime$ and a time-inhomogeneous Markov process that is
${\tt BESQ}_0(\delta)$ on the time interval $[0,a]$ and then continues as ${\tt BESQ}(0)$
\end{itemize}
\end{theorem}
\begin{figure}
\begin{picture}(415,190)
\put(30,0){\includegraphics[height=190pt,width=355pt]{2colormoreroots2_cropped.pdf}}
\put(85,130){\line(1,0){40}}
\multiput(125,130)(10,0){21}{\line(1,0){5}}
\put(65,130){$\scriptstyle\zeta^\prime(v)$}
\put(365,100){$\scriptstyle B(t)$}
\put(380,66){$\scriptstyle t$}
\put(128,185){$\scriptstyle x$}
\put(85,85){$\scriptstyle {\tt BESQ}_v(-\delta)$}
\put(35,115){$\scriptstyle {\tt BESQ}_0(\delta)$}
\put(50,110){\vector(3,-2){20}}
\put(65,170){$\scriptstyle {\tt BESQ}(0)$}
\put(90,165){\vector(4,-1){25}}
\put(45,45){$\scriptstyle {\tt BESQ}_v(0)$}
\put(80,50){\vector(5,1){30}}
\put(376,58){\line(0,1){12}}
\put(372,49){$\scriptstyle\tau(v)$}
\end{picture}
\caption{Simulation of Brownian motion $(B(t),0\!\le\! t\!\le\!\tau(v))$ split along a suitable increasing path determined by $\delta >0$.
The red excursions below the path have total local time process ${\tt BESQ}_v(-\delta)$ on $[0,\infty)$. The blue excursions above the path have total local time process on $[0,\infty)$ which is ${\tt BESQ}_0(\delta)$ up to level $\zeta^\prime(v)$, then continue as ${\tt BESQ}(0)$
above level $\zeta^\prime(v)$.\label{splitfig}}
\end{figure}
Thinking of ${\tt BESQ}(\delta)$ as a branching processes with immigration or emigration, according to the sign of $\delta$,
Theorem \ref{emimm} provides a frontier $S_\delta$ varying with $x$,
across which the emigration of ${\tt BESQ}(-\delta)$ is the immigration of ${\tt BESQ}(\delta)$.
\begin{corollary}\label{cor10} In the setting of Theorem \ref{emimm}, the $C[0,\infty)$-valued process $(Y_0^{(\delta)},\delta\ge 0)$, with $Y_0^{(0)}\equiv 0$,
has stationary and independent increments in $\delta\ge 0$.
\end{corollary}
The weaker form of additivity in Theorem \ref{emimm} raises further questions. Here are some:\vspace{0.2cm}
\begin{enumerate}[leftmargin=.7cm]
\item[1.] Is the (right-continuous increasing) process $(S_\delta(x),x\ge 0)$ of stopping times uniquely identified by the distribution of $Y_0^{(\delta)}\!$ specified in the first bullet point of Theorem \ref{emimm}?\vspace{0.2cm}
\item[2.] In Corollary \ref{propadd}, what is the conditional distribution of $Y^\prime$ or of $\zeta^\prime(v)$ given $Y+Y^\prime$?\vspace{0.2cm}
\item[3.] Suppose a non-negative process $Y'$ absorbed at 0 at time $\zeta'$ is such that
$Y + Y'$ is ${\tt BESQ}_v(0)$ for $Y$ conditionally given $Y'$ as in Corollary \ref{propadd}.
Is $Y'$ a ${\tt BESQ}_v(- \delta)$ process?\vspace{0.2cm}
\end{enumerate}
The rest of this article is organized as follows:
Section \ref{sec:proof} presents the proofs of Theorem \ref{emimm} and Corollary \ref{cor10}. In Section \ref{sec:checks}, we explore the implications of Proposition \ref{prop:add}
by checking the laws of some marginals and functionals. We conclude in Section \ref{sec:lit} by pointing out some related developments.
\section{Proofs of Theorem \ref{emimm} and Corollary \ref{cor10}}
\label{sec:proof}
Our proof of Theorem \ref{emimm} exploits the following known variants of the Ray--Knight theorems for perturbed Brownian motions
$R_\mu ^\pm := |B| \pm \mu \ell$, where $\ell$ is the local time of $B$ at $0$ normalized so that $|B| - \ell \mbox{$ \ \stackrel{d}{=}$ } B$,
and we assume $\mu >0$.
\begin{lemma}[Th\'eor\`eme 2 of Le Gall and Yor \cite{LGY1986}, Theorems 3.3-3.4 of \cite{YorAspects1}]
\label{lmplus}
The space-time local time process
$(L^+_\mu(x,t),x\in\mathbb{R},t\ge 0)$ of $R_\mu^+$ is such that for each $a\in[0,\infty]$
$$(L_\mu^+(x,\tau(a/\mu)),x\ge 0) \mbox{ is }{\tt BESQ}_0(2/\mu)\mbox{ on }[0,a]\mbox{ continued on $[a,\infty)$ as }{\tt BESQ}(0).$$
\end{lemma}
\begin{lemma}[Theorem 3.3 of Carmona, Petit and Yor \cite{CPY1994}]
\label{lmminus}
For each fixed $v\ge 0$ the local time process
$(L^-_\mu(x,t),x\!\in\!\mathbb{R},t\!\ge\! 0)$ of $R_\mu^-$ evaluated at
$\tau_\mu^-(v)\!:=\!\inf\{t\!\ge\! 0\colon L_\mu^-(0,t)\!>\!v\}$ yields independent
processes
\vspace{-0.1cm}
$$(L_\mu^-(-x,\tau_\mu^-(v)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}_v(2-2/\mu)\quad\mbox{and}\quad(L_\mu^-(x,\tau_\mu^-(v)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_v(0).$$
\end{lemma}
Now let $\gamma\in[-1,1]$. Consider the excursions away from level 0 of reflected Brownian motion. Independently multiply each excursion by $-1$ with probability
$\frac{1}{2}(1+\gamma)$.
The resulting
process $X_\gamma=(X_\gamma(t),t\ge 0)$ is known as {\em skew Brownian motion}. See \cite{Lejay} for a recent survey of constructions of this process, including the construction of $X_\gamma$ by
Harrison and Shepp \cite{HarrShep81} as
the unique strong solution to the equation
\begin{equation}
\label{skewbm}
X_\gamma(t)=B(t)-\gamma \ell_\gamma(t),\qquad t\ge 0,
\end{equation}
where $B$ is Brownian motion and $\ell_\gamma$ is the local time process at 0 of $X_\gamma$, that is
$$\ell_\gamma(t)=\lim_{h\downarrow 0}\frac{1}{2h}\int_0^t 1 \{-h < X_\gamma(s) < h \} ds. $$
where the limit exists simultaneously for all $t\ge 0$ almost surely.
This choice of local time at 0 is defined so that $\ell_\gamma(\cdot)\overset{d}{=}\ell$ for all $\gamma$, where $\ell:= \ell_0=L(0,\,\cdot\,)$ is the usual
local time of $|B|$ at $0$, normalised as in Lemmas \ref{lmplus} and \ref{lmminus}, so that $|B|-\ell\overset{d}{=}B$.
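
A rough random-walk sketch of this construction is as follows: away from the origin the walk is symmetric, while from the origin it starts a positive excursion with probability $(1-\gamma)/2$, matching the normalization of (\ref{skewbm}); the scaled number of visits to the origin plays the role of $\ell_\gamma$. The Python code below checks that the sample mean of $X_\gamma(1)+\gamma\ell_\gamma(1)$ is close to zero, as it must be if $X_\gamma+\gamma\ell_\gamma$ is a Brownian motion, and that $\ell_\gamma(1)$ has approximately the same mean as $\ell(1)$. Constants are only exact in the diffusive scaling limit, so this is an illustration rather than a verification.
\begin{verbatim}
import numpy as np

def skew_walk(gamma, n, rng):
    """Random-walk sketch of skew BM: symmetric steps away from 0; from 0 the
    walk starts a positive excursion with probability (1-gamma)/2 and a negative
    one otherwise.  Returns (X_gamma(1), ell_gamma(1)) in diffusive scaling."""
    alpha = 0.5 * (1.0 - gamma)
    x, visits = 0, 0
    for _ in range(n):
        if x == 0:
            visits += 1
            x = 1 if rng.random() < alpha else -1
        else:
            x += 1 if rng.random() < 0.5 else -1
    return x / np.sqrt(n), visits / np.sqrt(n)

rng = np.random.default_rng(1)
gamma = 0.5
samples = np.array([skew_walk(gamma, 2500, rng) for _ in range(2000)])
X, ell = samples[:, 0], samples[:, 1]
# If X_gamma = B - gamma*ell_gamma with B a Brownian motion, the mean below is ~0.
print(np.mean(X + gamma * ell))
# ell_gamma(1) has (approximately) the same mean as ell(1) = L(0,1).
print(np.mean(ell), np.sqrt(2 / np.pi))
\end{verbatim}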
\begin{lemma}\label{propskew} For $\gamma\!\in\!(0,1)$, let $X_\gamma$ be the skew Brownian motion driven by $B$ as in {\em (\ref{skewbm})}.
Let $I^+\!:=\! (0,\infty)$ and $I^-\!:=\! (-\infty,0)$, and
consider time changes
$\kappa^\pm_\gamma(s)\!=\!\inf\left\{t\!\ge\! 0\colon A_\gamma^\pm(t)\!>\!s\right\}$
where
$A_\gamma^\pm(t)\!:=\!\int_0^t 1 \{X_\gamma(r) \in I^\pm \}dr$.
Then
\begin{itemize}[leftmargin=.7cm]
\item $W_\gamma^\pm := \pm B\circ\kappa_\gamma^\pm \overset{d}{=}|B| \pm \mu_\gamma^\pm \ell $, where $\mu_\gamma^\pm=2\gamma/(1\mp \gamma) > 0$,\vspace{0.1cm}
\item $W_\gamma^+$ and $W_\gamma^-$ are independent.\pagebreak[2]
\end{itemize}
\end{lemma}
\begin{proof} Denote by $n_{\rm ex}$ the excursion intensity measure of reflecting Brownian motion relative to increments of $\ell$. See e.g.
\cite[Chapter XII]{RevuzYor}.
Relative to increments of the local time process $\ell_\gamma$ of $X_\gamma$, with $\ell_\gamma \mbox{$ \ \stackrel{d}{=}$ } \ell$,
the absolute values of excursions of $X_\gamma$ away from $0$ into $I^\pm$
form independent Poisson point processes with intensity
measures $\frac{1}{2}(1\mp \gamma)\, n_{\rm ex}(d\omega)ds$.
Note that $\ell_\gamma(t)$ splits naturally into the contributions from these positive and negative
excursions:
\begin{align}\label{ltpm}\lim_{h\downarrow 0}\frac{1}{2h}\int_0^t1\{X_\gamma(s) \in I^{\pm} \cap [-h,h] \}ds&=\frac{1}{2}(1\mp \gamma)\ell_\gamma(t).
\end{align}
By Knight's theorem \cite[Theorem V.(1.9)]{RevuzYor}, the time changes
$\kappa_\gamma^\pm$
give rise to two independent
reflecting Brownian motions
$X_\gamma^\pm=\pm X_\gamma\circ\kappa_\gamma^\pm$.
This argument is detailed in \cite[page 242]{RevuzYor} for the case $\gamma = 0$, and
extends easily to general $|\gamma| < 1$. See also the discussion after \cite[Proposition 11]{Lejay}.
By these time changes, the local times of
$X_\gamma^\pm$
at 0 are
the time changes of the limits
\eqref{ltpm},
namely
$\ell_\gamma^\pm(s)=\frac{1}{2}(1\mp \gamma)\ell_\gamma(\kappa_\gamma^\pm (s))$.
We read (\ref{skewbm}) as a decomposition of $B(t)=X_\gamma(t)+\gamma \ell_\gamma(t)$ into excursions away from the
increasing process $(\gamma \ell_\gamma(t),t\ge 0)$. This increasing process is the inverse of a stable subordinator with Laplace exponent
$\sqrt{2\lambda}/\gamma$. Then
$$W_\gamma^\pm(s)= \pm B(\kappa_\gamma^\pm (s))=X_\gamma^\pm (s)\pm\gamma\ell_\gamma(\kappa_\gamma^\pm(s)) $$
so that $W_\gamma^\pm \mbox{$ \ \stackrel{d}{=}$ } R_\mu^\pm := |B| \pm \mu ^\pm \ell$ as in Lemmas \ref{lmplus} and \ref{lmminus} for $\mu^\pm :=2\gamma/(1\mp \gamma)$.
\end{proof}
In this framework of skew Brownian motion, we now give a more explicit statement of the local times decomposition claimed in Theorem \ref{emimm}.
\begin{theorem}\label{emimm2} Let $\delta>0$ and $\gamma:=1/(1+\delta)$. Let $X_\gamma$ be skew Brownian motion driven by $B$ as in (\ref{skewbm}), with local
time $\ell_\gamma$ at zero. Let $S_\delta(x)=\inf\{t\ge 0\colon\gamma\ell_{\gamma}(t)>x\}$, $x\ge 0$. Then the following two families of random
variables are independent
\begin{itemize}[leftmargin=.7cm]
\item $Y_0^{(\delta)}:=(L(x,S_\delta(x)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_0(\delta)$;
\item $Y_v^\prime:=(L(x,\tau(v))-L(x,S_\delta(x)\wedge\tau(v)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_v(-\delta)$ for all $v\ge 0$.
\end{itemize}
For each $v\ge 0$, the random level $\zeta^\prime(v):=\inf\{x\ge 0\colon S_\delta(x)>\tau(v)\}$ is almost surely finite,
and coincides with the absorption time of $Y_v^\prime$. Conditionally given $\zeta^\prime(v)=a$,
\begin{itemize}[leftmargin=.7cm]
\item the process $Y_{0,v}^{(\delta)}:=(L(x,S_\delta(x)\wedge\tau(v)),x\ge 0)$ is independent of $Y_v^\prime$ and a time-inhomogeneous Markov process
that is ${\tt BESQ}_0(\delta)$ on the time interval $[0,a]$ and then continues as ${\tt BESQ}(0)$.
\end{itemize}
\end{theorem}
Note that, with $\gamma=1/(1+\delta)$, we have $B(S_\delta(x))=X_\gamma(S_\delta(x))+\gamma\ell_\gamma(S_\delta(x))=0+x=x$, since $S_\delta(x)$ is
an inverse local time of $X_\gamma$. Hence, $S_\delta$ is a right inverse of $B$. Right inverses of L\'evy processes were studied by
Evans \cite{Evans}, also \cite{Winkel}, to construct stationary local time processes. The main focus has been on the minimal right inverse, which
for $B$ is the first passage process. Theorem \ref{emimm2} involves a family of non-minimal right inverses.
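
The right-inverse property, and its non-minimality, can be illustrated with the same random-walk approximation as before: build the skew walk, set $B:=X_\gamma+\gamma\ell_\gamma$, and compare $S_\delta(x_0)$ with the first passage time of $B$ over a level $x_0$ (chosen here, purely for convenience, as half of the final value of $\gamma\ell_\gamma$). This is only an illustration in a discrete approximation; no claim about rates of convergence is intended.
\begin{verbatim}
import numpy as np

# Discrete illustration of the right-inverse property B(S_delta(x)) = x and of
# its non-minimality, using the random-walk version of the skew BM construction:
# B := X_gamma + gamma * ell_gamma, with gamma = 1/(1+delta).
rng = np.random.default_rng(7)
delta, n = 2.0, 200_000
gamma = 1.0 / (1.0 + delta)
alpha = 0.5 * (1.0 - gamma)          # probability of a positive excursion

X = np.zeros(n + 1)
ell = np.zeros(n + 1)                # scaled number of visits to 0
for k in range(n):
    if X[k] == 0:
        ell[k + 1] = ell[k] + 1 / np.sqrt(n)
        X[k + 1] = 1 if rng.random() < alpha else -1
    else:
        ell[k + 1] = ell[k]
        X[k + 1] = X[k] + (1 if rng.random() < 0.5 else -1)
B = X / np.sqrt(n) + gamma * ell     # Brownian-like path split along gamma*ell

x0 = 0.5 * gamma * ell[-1]           # a level that the increasing path does cross
S = int(np.argmax(gamma * ell > x0))          # discrete S_delta(x0)
first_passage = int(np.argmax(B >= x0))       # minimal right inverse of B at x0
print(B[S], "should be close to", x0)
print(first_passage, "is (typically much) smaller than", S)
\end{verbatim}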
\begin{proof}[Proof of Theorem \ref{emimm2}]
The definition of $S_\delta(x)$ is such that the increasing path in Figure \ref{splitfig}
is a multiple of the local time of the skew Brownian motion $X_\gamma$. We write (\ref{skewbm}) as $B(t)=X_\gamma(t)+\gamma\ell_\gamma(t)$. The
meaning of this expression is that the positive excursions of $X_\gamma$ are found in $B$ as excursions
above $\gamma\ell_\gamma(t)$, while the negative excursions of $X_\gamma$ are found in $B$ as excursions below $\gamma\ell_\gamma(t)$. Recall that $W_\gamma^+$
and $W_\gamma^-$ comprise excursions of $B$ above and below $\gamma\ell_\gamma$, respectively.
The theorem identifies the distributions of these local times by application of Lemmas \ref{lmplus} and \ref{lmminus}, as will now
be detailed.
In the setting of Lemma \ref{propskew}, we can apply Lemma \ref{lmminus} to $W_\gamma^-$ to see that for all
$v>0$, the process $W_\gamma^-$ up to the inverse $\tau_\gamma^-(v)=\inf\{s\ge 0\colon L_\gamma^-(0,s)>v\}$ of the local time
$(L_\gamma^-(0,t),t\ge 0)$ of $W_\gamma^-$ at zero has two independent local time processes
$(L_\gamma^-(x,\tau_\gamma^-(v)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_v(0)$ and $(L_\gamma^-(-x,\tau_\gamma^-(v)),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_v(2-2/\mu_\gamma^-)$,
where $\mu_\gamma^-=2\gamma/(1+\gamma)$, i.e. $2-2/\mu_\gamma^-=1-1/\gamma=-\delta<0$, since $\gamma=1/(1+\delta)\in(0,1)$.
Similarly, Lemma \ref{lmplus} with $a=\infty$, yields that $W_\gamma^+$ has ultimate local time process
$(L_\gamma^+(x,\infty),x\ge 0)\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_0(2/\mu_\gamma^+)$ where $\mu_\gamma^+=2\gamma/(1-\gamma)$, i.e. $2/\mu_\gamma^+=-1+1/\gamma=\delta$.
Let us rewrite the results of the last two paragraphs in terms of the local times $L=(L(x,t),x\in\mathbb{R},t\ge 0)$ of $B$. Recall that
$S_\delta(x)=\inf\{t\ge 0\colon\gamma\ell_\gamma(t)>x\}$, $x\ge 0$, where $\gamma=1/(1+\delta)$, and also set $S_\delta(x):=0$ for $x<0$. Note
that
$$W_\gamma^-(A_\gamma^-(t))\le\gamma\ell_\gamma(t)\le W_\gamma^+(A_\gamma^+(t))\quad\mbox{and}\quad
W_\gamma^-(A_\gamma^-(t))\le B(t)\le W_\gamma^+(A_\gamma^+(t)),$$
where for each $t\ge 0$ and in each of the two statements, at least one of the inequalities is an equality. Let
$$\mathcal{R}_\gamma^+:=\{(t,x)\in[0,\infty)\times\mathbb{R}\colon x\ge\gamma\ell_\gamma(t)\}=\{(t,x)\in[0,\infty)\times\mathbb{R}\colon S_\delta(x)\ge t\}.$$
Then the occupation measure $U_\gamma^+$ of $W_\gamma^+$ can be related to the occupation measure $U$ of $B$ by the usual change of variables
$u=\kappa_\gamma^+(r)$, separately on each excursion interval of $X_\gamma$ of a positive excursion to give
\begin{align*}
U_\gamma^+([0,s]\times[0,x])&=\int_0^s1_{[0,x]}(W_\gamma^+(r))dr
=\int_0^{\kappa_\gamma^+(s)}1_{[0,x]}(B(u))1_{\mathcal{R}_\gamma^+}(u,B(u))du\\
&=U(([0,\kappa_\gamma^+(s)]\times[0,x])\cap\mathcal{R}_\gamma^+).
\end{align*}
By the occupation density formula for $U$ in its general form for time-varying integrands, we obtain
$$
U_\gamma^+([0,s]\times[0,x])=\int_0^x\int_0^{\kappa_\gamma^+(s)}1_{\mathcal{R}_\gamma^+}(u,y)d_yL(y,u)du
=\int_0^xL(y,\kappa_\gamma^+(s)\wedge S_\delta(y))dy.
$$
Hence $(L(y,\kappa_\gamma^+(s)\wedge S_\delta(y)),y\in\mathbb{R},s\ge 0)$ is a local time for $W_\gamma^+$ that is right-continuous in $s$ and in $y$.
In particular, we
deduce from the continuity of $y\mapsto L_\gamma^+(y,\infty)$ that $L_\gamma^+(y,\infty)=L(y,S_\delta(y))$ for all $y\ge 0$ almost surely.
Similarly, we use the joint continuity of local times of $W_\gamma^-$ of \cite[Theorem 3.2]{CPY1994} to obtain that
$L(y,\kappa_\gamma^-(s))-L(y,\kappa_\gamma^-(s)\wedge S_\delta(y))=L_\gamma^-(-y,s)$ almost surely. In particular, we
have $L(0,t)=L_\gamma^-(0,A_\gamma^-(t))$, hence $\tau_\gamma^-(v)=A_\gamma^-(\tau(v))$ for all $v\ge 0$, and hence, for all $x\ge 0,v\ge 0$ almost
surely,
$$L(x,\tau(v))-L(x,\tau(v)\wedge S_\delta(x))
=L_\gamma^-(-x,\tau_\gamma^-(v)).$$
To complete the proof, we consider the random level $\zeta^\prime(v)=\inf\{x\ge 0\colon L(x,\tau(v))=L(x,S_\delta(x)\wedge\tau(v))\}$. Since
$L$ and $S_\delta$ are both increasing and $B(\tau(v))\!=\!0$ while $B(S_\delta(x))\!=\!B(S_\delta(x-))\!=\!x$, we can only have
$L(x,\tau(v))=L(x,S_\delta(x)\wedge\tau(v))$ if $S_\delta(x)>\tau(v)$, and $\zeta^\prime(v)=\inf\{x\ge 0\colon S_\delta(x)>\tau(v)\}$. Note that
$\zeta^\prime(v)=\inf\{x\ge 0\colon L_\gamma^-(-x,\tau_\gamma^-(v))=0\}$, as a function of $W_\gamma^-$, is independent
of $W_\gamma^+$. Conditionally given $\zeta^\prime(v)=a$, we can apply Lemma \ref{lmplus} to obtain that
$(L_\gamma^+(x,A_\gamma^+(\tau(v))),x\ge 0)=(L(x,S_\delta(x)\wedge\tau(v)),x\ge 0)$ also has the desired distribution.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor10}] Let $\delta>\delta^\prime>0$. Let $\Upsilon_{\rm top}=(L(x,S_{\delta^\prime}(x)),x\ge 0)$, $\Upsilon_{\rm mid}=(L(x,S_{\delta}(x))\!-\!L(x,S_{\delta^\prime}(x)),x\!\ge\! 0)$
and $\Upsilon_{\rm rest}\!=\!(L(x,\tau(v))\!-\!L(x,S_{\delta}(x)\!\wedge\!\tau(v)),x\!\ge\! 0,v\!\ge\! 0)$. By the proof of Theorem \ref{emimm2}, we have
$(\Upsilon_{\rm top},\Upsilon_{\rm mid})$ independent of $\Upsilon_{\rm rest}$, and we have $\Upsilon_{\rm top}$ independent of
$(\Upsilon_{\rm mid},\Upsilon_{\rm rest})$. Hence, $\Upsilon_{\rm mid}$ is independent of $\Upsilon_{\rm top}$. By additivity,
$\Upsilon_{\rm mid}\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_0(\delta-\delta^\prime)$, independent of $\Upsilon_{\rm top}\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_0(\delta^\prime)$. A straightforward
induction completes the proof.
\end{proof}
\section{Some checks on Proposition \ref{prop:add}}\label{sec:checks}
To simplify presentation, for any real $\delta$ and $v\ge 0$ we will denote by ${\tt BESQ}^{(\delta)}_v=({\tt BESQ}^{(\delta)}_v(x),x\ge 0)$ a process with law ${\tt BESQ}_v(\delta)$.
For $r \ge 0$ let $\gamma(r)$ denote a gamma variable, with $\gamma(0) = 0$ and
\begin{equation}
\label{gammadens}
\frac{ \mathbb{P}(\gamma(r)\in dt)}{dt} = f_r(t):= \frac{1} {\Gamma(r)} t^{r-1}e^{-t}1(t>0).
\end{equation}
Fix $x >0$.
To check the implication of Corollary \ref{propadd} that $Y(x)+Y^\prime(x) \mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}^{(0)}_v(x)$ for all $v\ge 0$,
by uniqueness of Laplace transforms it suffices to show for all $\mu >0$ that in the modified setting where $Y^\prime\mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}_{\gamma(1)/\mu}^{(-\delta)}$, meaning that $Y^\prime(0)$ is assigned the
exponential distribution of $\gamma(1)/\mu$, we have $Y(x)+Y^\prime(x) \mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}_{\gamma(1)/\mu}^{(0)}(x)$.
To show this, first recall some known facts:
\begin{lemma}
Let $\delta \ge 0$. Then
\begin{enumerate}[leftmargin=.7cm]
\item[ {\em (a)}]
${\tt BESQ}^{(\delta)}_0(x)\mbox{$ \ \stackrel{d}{=}$ } 2x\gamma(\delta/2)$
\item[{\em (b)}] ${\tt BESQ}^{(-\delta)}_{\gamma(1)/\mu}(x)\mbox{$ \ \stackrel{d}{=}$ } \mu^{-1}(2\mu x+1)\gamma(1)I(1/(2\mu x+1)^{1 + \delta/2})$
where $\gamma(1)$ is independent of the indicator variable $I(p)$ with Bernoulli $(p)$ distribution for $p = 1/(2\mu x+1)^{1 + \delta/2}$.
\end{enumerate}
\end{lemma}
Here (a) is a consequence of the additivity property \eqref{fulladd},
while (b) details for $-\delta = 2 - 2 \alpha \le 0$ the entrance law for ${\tt BESQ}(2 - 2 \alpha)$ killed at $T_0$, with $\alpha >0$,
which was identified by \cite[(3.2) and (3.5)]{PitmYor82}.
The case of (b) for $\delta=0$ and $\mu = 1/(2 b)$ is also an easy consequence of the Ray--Knight description of Brownian local times
$(L(x, T_{-b}), x \ge 0 ) \mbox{$ \ \stackrel{d}{=}$ } {\tt BESQ}^{(0)}_{ 2 b \gamma(1)}$.
Applying this instance of (b), we find that ${\tt BESQ}_{\gamma(1)/\mu}^{(0)}(x)$ has Laplace transform (in $\lambda$)
\begin{equation}\label{LHS}
\left(1-\frac{1}{2\mu x+1}\right)
+
\frac{1}{2\mu x+1}\,\frac{1}{1+\lambda(2\mu x+1)/\mu}
=\frac{(2\lambda x+1)\mu}{(2\lambda x+1)\mu+\lambda}.
\end{equation}
On the other hand, we obtain the Laplace transform of $Y(x)+Y^\prime(x)$ by conditioning $Y(x)$ and $Y^\prime(x)$ on
$\zeta^\prime=\inf\{x\ge 0\colon Y^\prime(x)=0\}$. Specifically, now using (b) for $Y^\prime$ and (a) for $Y$, we find
\begin{align}\mathbb{E}\left(\exp\left(-\lambda(Y(x)+Y^\prime(x))\right)\right)
=&\frac{1}{(2\mu x+1)^{1+\delta/2}}\,\frac{1}{1+\lambda(2\mu x+1)/\mu}\,\frac{1}{(1+\lambda 2x)^{\delta/2}}\nonumber\\
&+\int_0^x\frac{(\delta+2)\mu}{(2\mu m+1)^{2+\delta/2}}\mathbb{E}\left(e^{-\lambda{\tt BESQ}^{(0)}_{2m\gamma(\delta/2)}(x-m)}\right)dm.\label{integr}
\end{align}
By the additivity property of ${\tt BESQ}(0)$ and then proceeding as for \eqref{LHS},
we have
$$\mathbb{E}\left(e^{-\lambda{\tt BESQ}^{(0)}_{2m\gamma(\delta/2)}(x-m)}\right)=\left(\mathbb{E}\left(e^{-\lambda{\tt BESQ}_{2m\gamma(1)}^{(0)}(x-m)}\right)\right)^{\delta/2}
=\left(\frac{1+2(x\!-\!m)\lambda}{1+2\lambda x}\right)^{\delta/2}.$$
The change of variables $m = x u$, $dm = x du$ allows the integral in (\ref{integr}) to be expressed as
$$\frac{(\delta+2)\mu x}{(1+2\lambda x)^{\delta/2}}\int_0^1\frac{(1+2\lambda x(1-u))^{\delta/2}}{(1+2\mu xu)^{2+\delta/2}}du.$$
Writing $a=2\lambda x$, $b=2\mu x$ and $q=1+\delta/2$, this integral is of the form
\begin{equation}\label{Wolf}\int_0^1\frac{(1+a(1-u))^{q-1}}{(1+bu)^{q+1}}du=\frac{(1+a)^q-(1+b)^{-q}}{(a+ab+b)q}
\end{equation}
where the integral is evaluated as $F(1) - F(0)$
for the indefinite integral
$$F(u)=-\frac{(1+a(1-u))^q(1+bu)^{-q}}{(a+ab+b)q}.$$
The identification of \eqref{LHS} and \eqref{integr} is now elementary using (\ref{Wolf}).
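
The elementary but slightly tedious identification can also be confirmed numerically. The Python sketch below (using {\tt scipy} quadrature) checks (\ref{Wolf}) and the equality of \eqref{LHS} and \eqref{integr}, with the inner expectation replaced by the closed form displayed above; the values of $\lambda,\mu,x,\delta$ are arbitrary test inputs.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

lam, mu, x, delta = 0.7, 1.3, 0.9, 1.6
a, b, q = 2*lam*x, 2*mu*x, 1 + delta/2

# Check of the definite integral (Wolf).
lhs = quad(lambda u: (1 + a*(1 - u))**(q - 1) / (1 + b*u)**(q + 1), 0, 1)[0]
rhs = ((1 + a)**q - (1 + b)**(-q)) / ((a + a*b + b) * q)
print(lhs, rhs)

# Check that (integr) (first term plus the integral over the absorption time
# of Y') reproduces the Laplace transform (LHS).
first = (2*mu*x + 1)**(-(1 + delta/2)) / (1 + lam*(2*mu*x + 1)/mu) \
        / (1 + 2*lam*x)**(delta/2)
integrand = lambda m: (delta + 2)*mu / (2*mu*m + 1)**(2 + delta/2) \
                      * ((1 + 2*(x - m)*lam) / (1 + 2*lam*x))**(delta/2)
total = first + quad(integrand, 0, x)[0]
print(total, (2*lam*x + 1)*mu / ((2*lam*x + 1)*mu + lam))
\end{verbatim}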
Consider next the distribution of $\int_0^\infty(Y+Y^\prime)(x)dx$ in the setting of Corollary \ref{propadd}. By
the corollary, this is the distribution of the corresponding integral of a ${\tt BESQ}_v(0)$, which according to the Ray--Knight theorem for local
times of $B$ at time $\tau(v)$ is that of
$$\tau_+(v):=\int_0^{\tau(v)}1_{\{B(t)>0\}}dt\overset{d}{=}\tau(v/2).$$
The equivalent equality of Laplace transforms at $\frac{1}{2}\lambda^2$ reads
\begin{equation}
\label{stableint}
\mathbb{E}\left(\exp\left(-\frac{1}{2}\lambda^2\int_0^\infty(Y+Y^\prime)(x)dx\right)\right)=\exp\left(-\frac{v}{2}\lambda\right).
\end{equation}
This formula too can be checked from the construction of $Y$ and $Y^\prime$ by conditioning on $\zeta^\prime$.
For the ${\tt BESQ}_0(\delta)$ process $Y$ on $[0,m]$ continued as ${\tt BESQ}_{Y(m)}(0)$, we have
\begin{equation}\label{intY}
\mathbb{E}\left(\left.\exp\left(-\frac{1}{2}\lambda^2\int_0^\infty Y(x)dx\right)\,\right|\,\zeta^\prime=m\right)=\exp\left(-\frac{1}{2}\delta m\lambda\right),
\end{equation}
by Lemma \ref{lmplus} applied with $a=m$ and $\mu=2/\delta$, since these substitutions make
$$\left(\left.\int_0^\infty Y(x)dx\,\right|\,\zeta^\prime=m\right)\overset{d}{=}\int_0^\infty L_\mu^+(x,\tau(a/\mu))dx=\tau(a/\mu)$$
with $a/\mu=\frac{1}{2}\delta m$. On the other hand, given $\zeta^\prime=m$,
an
application of the formula of \cite[Proposition (5.10)]{PitmYor82} yields the first passage bridge functional
\begin{align}
&\mathbb{E}\left(\left.\exp\left(-\frac{1}{2}\lambda^2\int_0^mY^\prime(x)dx\right)\,\right|\,\zeta^\prime=m\right)\nonumber\\
&=\left(\frac{\lambda m}{\sinh(\lambda m)}\right)^{(4+\delta)/2}\exp\left(-\frac{v}{2m}\left(\lambda m\coth(\lambda m)-1\right)\right).
\label{intYprime}
\end{align}
To complete the calculation of the left side of \eqref{stableint}
we must integrate the product of expressions in (\ref{intY}) and (\ref{intYprime}) with respect to the distribution of $\zeta^\prime$, the absorption
time of ${\tt BESQ}_v(-\delta)$, which is the distribution of $v/(2\gamma(1+\delta/2))$ with density
$$\frac{\mathbb{P}(\zeta^\prime\in dm)}{dm}=\frac{v}{2m^2}f_{1+\delta/2}\left(\frac{v}{2m}\right)$$
where $f_{r}(t)$ is the gamma$(r)$ density at $t$ as in \eqref{gammadens}.
So at the level of the total integral functional, the identification of the distribution of $Y+Y^\prime$ as ${\tt BESQ}_v(0)$ implies the identity
$$
\exp\left(\!-\frac{1}{2}\lambda v\!\right)\!
=\!\int_0^\infty\!\frac{v^{1+\delta/2}\lambda^{2+\delta/2}}{2^{1+\delta/2}\Gamma(1+\delta/2)}\left(\frac{1}{\sinh(\lambda m)}\right)^{2+\delta/2}
\!\exp\left(-\frac{1}{2}\delta\lambda m-\frac{v\lambda}{2}\coth(\lambda m)\!\right)dm.
$$
Make the change of variables $x=\lambda m$, $dm=dx/\lambda$, then set $t=\lambda v/2$, $\delta=2p$, to see that this evaluation shows that
Corollary \ref{propadd} has the following consequence.
\begin{corollary} For all $t>0$ and $p\ge 0$,
$$\int_0^\infty\frac{\exp(-px-t\coth(x))}{(\sinh(x))^{2+p}}dx=\frac{\Gamma(1+p)e^{-t}}{t^{1+p}}.$$
\end{corollary}
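
A quick numerical check of the corollary, for a few arbitrary values of $t$ and $p$ and relying on {\tt scipy} quadrature, is sketched below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

# Quadrature check of the identity in the corollary for a few (t, p).
for t, p in [(0.5, 0.0), (1.2, 1.0), (0.8, 2.5)]:
    val = quad(lambda x: np.exp(-p*x - t/np.tanh(x)) / np.sinh(x)**(2 + p),
               0, np.inf, limit=200)[0]
    print(val, Gamma(1 + p) * np.exp(-t) / t**(1 + p))
\end{verbatim}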
The simplest case of the Corollary is for $p=0$. Then it is some variation of Knight's analysis of the joint distribution of $\tau(v)$ and
$M(\tau(v)):=\max\{|B(s)|,0\le s\le\tau(v)\}$. See Section 11.3 of \cite{YenYor}, especially formula (11.3.1).
For $p=0$ or $p=1$, the integrals are easily evaluated using the elementary indefinite integrals
\begin{align*}\int\frac{e^{-t\coth(x)}}{\sinh^2(x)}dx&=\frac{e^{-t\coth(x)}}{t};\\
\quad\int\frac{e^{-x-t\coth(x)}}{\sinh^3(x)}dx&=\frac{e^{-t\coth(x)}(1-t+t\coth(x))}{t^2}.
\end{align*}
According to {\em Mathematica}, there are similar expressions for $p=2,3,\ldots$, but they get more complicated and their general structure is not
readily apparent.
This kind of argument can be extended to a full proof of Proposition \ref{prop:add} by the method of
computing the Laplace functional
\begin{equation}\label{laplace}\mathbb{E}\left(\exp\left(-\int_0^\infty Z(x)\rho(dx)\right)\right)
\end{equation}
for suitable measures $\rho$ on $(0,\infty)$, and showing that it equals the known Laplace functional of ${\tt BESQ}_{y+y^\prime}(\delta-\delta^\prime)$, found in \cite{PitmYor82} when $\delta-\delta^\prime\ge 0$, for enough such $\rho$.
Let us briefly sketch
this here for the (slightly easier) case when $\delta-\delta^\prime\ge 0$ and $y=0$, $y^\prime=v$. Without loss of generality, $\delta^\prime>0$,
as otherwise the statement follows from full additivity. The case $y>0$ is then also straightforward, while the case $\delta-\delta^\prime<0$
follows similarly. We claim that for any function $f\colon[0,\infty)\rightarrow[0,\infty)$ that is Lebesgue
integrable on $[0,z]$ for all $z\ge 0$, the Laplace functional (\ref{laplace}) reduces to the known Laplace functional of ${\tt BESQ}_v(\delta-\delta^\prime)$
when $\rho(dx)=f(x)dx$. By \cite[Theorem XI.(1.7)]{RevuzYor}, the latter is
\begin{equation}\label{RHS}(\phi(\infty))^{(\delta-\delta^\prime)/2}\exp\left(\frac{v}{2}\phi^\prime(0)\right)
\end{equation}
where $\phi$ is the unique positive, non-increasing solution to the Sturm--Liouville equation
\begin{equation}\label{SL}\phi^{\prime\prime}=2f\phi,\qquad\phi(0)=1.
\end{equation}
This solution is
convex and converges at $\infty$ to some $\phi(\infty)\in(0,1]$. We compute the Laplace functional in the
setting of Proposition \ref{prop:add} by conditioning first on $\zeta^\prime(v)=x$ and
then on $Y(x)=y$. We use notation $\mathbb{P}_y^{(\gamma)}:={\tt BESQ}_y(\gamma)$ and also $\mathbb{P}_{a,b}^{(\gamma),x}$ for the distribution of
a ${\tt BESQ}(\gamma)$ bridge of length $x$ from $a\ge 0$ to $b\ge 0$. Specifically, for $\delta^\prime>0$, for each $a >0$ we define $\mathbb{P}_{a,0}^{(-\delta'),x}$ for $x \ge 0$
to be the first passage bridge obtained as the weakly continuous conditional distribution of $\mathbb{P}^{(-\delta^\prime)}_a(\,\cdot\,|\,T_0=x)$. By duality,
this equals $\mathbb{P}^{(4+\delta^\prime),x}_{a,0}$, which is the time reversal of $\mathbb{P}^{(4+\delta^\prime),x}_{0,a}$. See
\cite{PitmYor82}. We need several expectations of quantities of the form $\mathcal{L}(f,w):=\exp(-\int_0^wY(u)f(u)du)$ and also use notation $\theta_x(f)=f(x+\cdot)$. In this notation, we want to compute
\begin{equation}\label{goal}
\int_0^\infty\!\mathbb{P}_{v,0}^{(-\delta^\prime),x}(\mathcal{L}(f,x))\!\left(\int_0^\infty\!\mathbb{P}_{0,y}^{(\delta),x}(\mathcal{L}(f,x))\mathbb{P}_y^{(\delta-\delta^\prime)}(\mathcal{L}(\theta_xf,\infty))\mathbb{P}(Y(x)\!\in\! dy)\!\right)\mathbb{P}(\zeta^\prime(v)\!\in\! dx).
\end{equation}
The key technical formula is a generalisation of \cite[Theorem XI.(3.2)]{RevuzYor} from unit-length bridges to bridges of length $x$, which we
express in terms of the solution of $\phi^{\prime\prime}= 2 f\phi$ without restricting $f$ to support in $[0,x]$. We obtain for all $\gamma>0$, $a\ge 0$, $b\ge 0$, $x>0$
\begin{equation}\label{bridgefunctional}
\mathbb{P}_{a,b}^{(\gamma),x}(\mathcal{L}(f,x))
=\left(\phi(x)\right)^{\gamma/2}\exp\left(\frac{a}{2}\phi^\prime(0)-\frac{b}{2}\,\frac{\phi^\prime(x)}{\phi(x)}\right)
\frac{q^{(\gamma)}_{\sigma^2(x)}(a(\phi(x))^2,b)}{q_x^{(\gamma)}(a,b)},
\end{equation}
where $\sigma^2(x)=(\phi(x))^2\int_0^x(\phi(u))^{-2}du$, and $q_u^{(\gamma)}(v,y)=\mathbb{P}_v^{(\gamma)}(Y(u)\in dy)/dy$ is the continuous ${\tt BESQ}(\gamma)$ transition density on $(0,\infty)$. Using $\mathbb{P}_{v,0}^{(-\delta^\prime),x}=\mathbb{P}_{v,0}^{(4+\delta^\prime),x}$ for the first, this
yields the two bridge functionals of (\ref{goal}) while the remaining functional can be obtained from \cite[Theorem XI.(1.7)]{RevuzYor}. We leave
the remaining details to the reader.
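As an elementary consistency check of (\ref{RHS}), consider the special case $f=\lambda\,\mathbf{1}_{[0,z]}$ with $\lambda>0$. The corresponding solution of (\ref{SL}) is
\[\phi(x)=\frac{\cosh\!\left(\sqrt{2\lambda}\,(z-x)\right)}{\cosh\!\left(\sqrt{2\lambda}\,z\right)},\quad 0\le x\le z,\qquad\quad \phi(x)=\phi(z)=\frac{1}{\cosh(\sqrt{2\lambda}\,z)},\quad x\ge z,\]
so that $\phi^\prime(0)=-\sqrt{2\lambda}\tanh(\sqrt{2\lambda}\,z)$, and (\ref{RHS}) reduces to
\[\left(\cosh(\sqrt{2\lambda}\,z)\right)^{-(\delta-\delta^\prime)/2}\exp\left(-\frac{v}{2}\sqrt{2\lambda}\tanh(\sqrt{2\lambda}\,z)\right),\]
the classical expression for $\mathbb{E}\big(\exp(-\lambda\int_0^z Y(u)\,du)\big)$ under ${\tt BESQ}_v(\delta-\delta^\prime)$.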
\section{Related developments in the literature}\label{sec:lit}
We discuss three related developments in the literature. These are interval partition
diffusions, an instance of a weaker form of additivity related to sticky Brownian motion, and local time flows generated by skew Brownian motion.
Motivated by Aldous's conjectured diffusion on a space of continuum trees, the authors of \cite{Paper1} study interval partition diffusions in
which interval lengths evolve as independent ${\tt BESQ}(-1)$ processes until absorption at 0, while new intervals are created according to a
Poisson point process of ${\tt BESQ}(-1)$ excursions. See also \cite{Paper3} and further references there. Specifically,
the construction for one initial interval of length $v$ is illustrated in Figure \ref{FPRWfig} (left). Next to a ${\tt BESQ}_v(-1)$ process
$Y^\prime$ with absorption time $\zeta^\prime(v)$, the total sums of all other interval lengths form a process $Y$ that is shown to be
${\tt BESQ}_0(1)$ up to $\zeta^\prime(v)$ continuing as ${\tt BESQ}(0)$, as in Corollary \ref{propadd}. See
\cite[Theorem 1.5 and Corollary 5.19]{Paper1}.
A generalisation to ${\tt BESQ}(-\delta)$ for $\delta\in(0,2)$ is indicated in \cite[Section 6.4]{Paper0}, to be taken up elsewhere.
\begin{figure}[t]
\begin{picture}(415,100)
\put(0,0){\includegraphics[scale=0.44]{ScafSpind.png}}\put(205,0){\includegraphics[height=100pt,width=200pt]{3colorequil3_cropped}}
\put(40,88){\line(1,0){145}}
\put(20,88){$\scriptstyle\zeta^\prime(v)$}
\put(49.5,-2){$\scriptstyle v$}
\put(47.5,0){\vector(-1,0){21.5}}
\put(55.5,0){\vector(1,0){21.5}}
\put(35,15){$\scriptstyle {\tt BESQ}_v(-\delta)$}
\put(148,78){$\scriptstyle {\tt BESQ}_0(\delta)$}
\put(150,92){$\scriptstyle {\tt BESQ}(0)$}
\end{picture}
\caption{Left: The left-most shaded area is $Y^\prime\!\sim\!{\tt BESQ}_v(-\delta)$ for some $\delta\!\in\!(0,2)$, represented
in the widths of a symmetric ``spindle'' shape. Other shaded areas form a Poisson point process of ${\tt BESQ}(-\delta)$ excursions
placed on a ``scaffolding'', an induced ${\tt Stable}(1\!+\!\delta/2)$ process whose jump heights are the excursion lifetimes%
. Simulation, as in \cite{Paper3}, due to \cite{WXMLCRP}.
Right: Simulation of Brownian motion split along two increasing paths. \label{FPRWfig}}
\end{figure}
Shiga and Watanabe \cite{ShiWat73}
showed that families of one-dimensional diffusions with the additivity property
can be parameterised by three real parameters, one of which corresponds to a linear time-change parameter affecting the diffusion
coefficient, which we fix here without loss of generality. The family formed by the other two parameters, $\delta$ and $\mu$, are the strong
solutions to the stochastic differential equation
\begin{equation}\label{genbesqdef}
dY(t)=(\delta-\mu Y(t))\,dt+2\sqrt{Y(t)}dB(t),\quad Y(0)=y,\quad y\ge 0,\qquad \delta\ge 0,\mu\in\mathbb{R}.
\end{equation}
We observe that these families may be extended to $\delta < 0$ with absorption at $0$ much as in \eqref{besqdef},
and that the statement and proof of Proposition \ref{prop:add} generalise straightforwardly to this case.
Warren \cite[Proposition 3]{Warren1997} establishes an additivity in the case $\delta=0$ that involves the parameter $\mu$ of (\ref{genbesqdef}),
where the second process
$Y^\prime=(Y^\prime(t),t\ge 0)$ is driven by a Brownian motion $(B^\prime(t),t\ge 0)$ independent of the first process $Y$, but the first process $(Y(t),t\ge 0)$ appears in the $dt$-part of its
stochastic differential equation:
$$dY^\prime(t)=\mu Y(t)\,dt+2\sqrt{(Y^\prime(t))^+}dB^\prime(t),\qquad Y^\prime(0)=0.$$
Specifically, $Y+Y^\prime\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_y(0)$. Furthermore, \cite[Theorems 10-11 and Proposition 12]{Warren1997} demonstrate how to find this
decomposition embedded in the local times of a given Brownian motion, using the Brownian motion to drive a stochastic differential equation whose
strong solution is sticky Brownian motion of parameter $\mu\ge 0$.
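This additivity is easy to probe numerically. The following minimal Euler--Maruyama sketch (our illustration, with an ad hoc reflection at $0$; it is not taken from \cite{Warren1997}) checks the martingale property $\mathbb{E}(Y(t)+Y^\prime(t))=y$ implied by $Y+Y^\prime\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}_y(0)$.
\begin{verbatim}
import numpy as np
# Illustrative Euler-Maruyama discretisation of
#   dY  = -mu*Y dt + 2*sqrt(Y)  dB,   Y(0)  = y0
#   dY' = +mu*Y dt + 2*sqrt(Y') dB',  Y'(0) = 0
# If Y + Y' is BESQ_{y0}(0), then E[Y(t) + Y'(t)] stays close to y0.
rng = np.random.default_rng(0)
y0, mu, T, nsteps, npaths = 1.0, 0.5, 1.0, 2000, 50000
dt = T / nsteps
Y, Yp = np.full(npaths, y0), np.zeros(npaths)
for _ in range(nsteps):
    dB  = rng.normal(0.0, np.sqrt(dt), npaths)
    dBp = rng.normal(0.0, np.sqrt(dt), npaths)
    Ynew  = Y  - mu * Y * dt + 2.0 * np.sqrt(Y)  * dB
    Ypnew = Yp + mu * Y * dt + 2.0 * np.sqrt(Yp) * dBp
    Y, Yp = np.maximum(Ynew, 0.0), np.maximum(Ypnew, 0.0)  # reflect at 0
print(np.mean(Y + Yp))   # should be close to y0 = 1.0
\end{verbatim}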
Burdzy et al. \cite{MR1880238,MR2094439}
treat other aspects of what they call the local time flow generated by skew {B}rownian motion. They study solutions to uncountably many coupled variants of (\ref{skewbm}) jointly. Specifically, \cite{MR2094439} focusses on
$(X_\gamma^{s,x}(t),L_\gamma^{s,x}(t))$, $t\ge s$, $x\in\mathbb{R}$, for $B(t)$ in (\ref{skewbm}) replaced by $x+B(t)-B(s)$, while \cite{MR1880238} exhibits various
one-dimensional families indexed by $x$ or by $\gamma$ that form Markov processes in a way reminiscent of Ray--Knight theorems.
Taking $s=0$, one viewpoint is to read these coupled solutions as joint
decompositions of $B(t)=X_\gamma^{s,x}(t)-x+\gamma L_\gamma^{s,x}(t)$ along increasing paths $-x+\gamma L_\gamma^{s,x}(t)$. Consider the
coupling in $\gamma\in(0,1)$ of \cite[Theorem 1.3 and 1.4]{MR1880238} when $x=0$. They note for
$\gamma_1<\gamma_2$ that $\gamma_1\ell_{\gamma_1}(t)\le\gamma_2\ell_{\gamma_2}(t)$ for all $t\ge 0$, cf. Figure \ref{FPRWfig} (right), and they establish a phase transition
when $\gamma_1=\gamma_2/(1+2\gamma_2)$. By our Corollary \ref{cor10}, applied to an increment
$Y^{(\delta_1)}_0-Y^{(\delta_2)}_0\mbox{$ \ \stackrel{d}{=}$ }{\tt BESQ}(\delta_1-\delta_2)$, we identify the
same phase transition with the behaviour around the critical dimension $\delta=2$ of ${\tt BESQ}_0(\delta)$, since for $\delta_i=-1+1/\gamma_i$, $i=1,2$, we have $\gamma_1=\gamma_2/(1+2\gamma_2)$ if and only if $\delta_1-\delta_2=2$.
\bibliographystyle{abbrv}
\section{Introduction}
In this letter we investigate the Chern--Simons (C--S) field
theories \cite{csft,csftt,witten}
with gauge group $SU(N)$ in the Coulomb gauge
using Dirac's formalism for constrained
systems \cite{dirac,hrt}.
As in the case of the more popular covariant gauges, the C--S functional
contains self-interactions in this gauge as well,
but the Feynman rules simplify considerably and can be derived explicitly
even on
space-times with a non--flat spatial section \cite{ffprd}.
Another advantage of the Coulomb gauge is that there are no
time derivatives in the gauge fixed action, so that the
C--S theory becomes
in practice a two-dimensional model.
Despite the
many physical and mathematical applications of the
C--S field theories \cite{witten,csappl}, however,
only a few calculations
have so far been performed in the Coulomb gauge \cite{ffprd,cgincs,gg}.
One of the main reasons is that,
already in the case of the Yang--Mills field theories,
several difficulties arise concerning the use of this
gauge \cite{taylor,leibbrandt,chetsa}.
Analogous problems are unfortunately present also in C--S field theories,
but in a milder form, so that
these models
provide an important laboratory in order to study the possible remedies.
For example, in the abelian case it is known that
the so-called Maxwell--Chern--Simons (MCS) theory is affected by
the presence of infrared divergences in the Coulomb gauge
\cite{csft,csftt}. Nevertheless, it has recently been shown
in ref. \cite{gg} that the theory can be consistently
worked out and that for instance the M\"oller scattering
amplitudes computed in the Coulomb gauge and in the
covariant gauges coincide at all orders in perturbation theory.
Other tests confirming the safety of the Coulomb gauge in the MCS
models can also be found in \cite{gg}.
On the other hand,
the ambiguities in the Yang-Mills Feynman integrals
pointed out in \cite{taylor}
arise as well in the nonabelian C--S field theories due
to the absence of time derivatives
in the action \cite{ffprd}. A simple recipe to regularize such ambiguities
has been proposed and successfully tested
in first order calculations \cite{ffprd}.
However, a detailed investigation of the consistency of the
nonabelian C--S field
theories in the Coulomb gauge
at any perturbative order is still missing. As one of the steps
to fill this gap,
we exploit in this letter the formalism of
Dirac's canonical approach to constrained systems \cite{dirac,hrt}.
We notice that, besides some subtleties already pointed out in
\cite{linni}, the derivation of the final Dirac brackets in the
Coulomb gauge requires some care with distributions.
Moreover, the final commutation
relations (CR's) between the fields obtained here
are rather involved. At first sight, this is surprising in topological
field theories with vanishing
Hamiltonian and without degrees of freedom. However, at
least in the case considered here,
in which there are no interactions with matter fields,
we show that
this contradiction is only apparent. As a matter of fact,
taking into account the Gauss law
and the Coulomb gauge fixing, it can be proved that the commutation
relations between the gauge fields vanish identically
at any perturbative order as expected.
In this way we discover that the Chern--Simons field theories in the Coulomb
gauge are not only perturbatively finite as has been checked
in the covariant gauges \cite{covgau},
but also free.
This is not a priori evident, because in the Coulomb gauge the
C--S functional contains non--trivial self--interaction terms.
The material presented in this paper is divided as follows.
In the next Section we will present our results. In the Conclusions
we will discuss the open problems and the possible further developments.
\section{Canonical Quantization of the C--S Field Theory in the
Coulomb Gauge}
\vspace{1cm} The Lagrangian of the pure $SU(N)$ Chern--Simons (C--S) field
theory in three dimensions is given by
\begin{equation}
L_{CS}=\frac s{8\pi }\epsilon ^{\mu \nu \rho }
\left( A_\mu ^a\partial _\nu A_\rho
^a-\frac 13f^{abc}A_\mu ^aA_\nu ^bA_\rho ^c\right) \label{lagrangian}
\end{equation}
where $s$ is a dimensionless coupling constant and $A_\mu ^a$ is the gauge
potential. Greek letters $\mu ,\nu ,\rho ,\ldots =0,1,2$ will denote
space--time indices while the first latin letters $a,b,c,\ldots =1,\cdots ,N$
will denote color indices. Moreover, the totally antisymmetric tensor $%
\epsilon ^{\mu \nu \rho }$ is defined by the convention $\epsilon ^{012}=1$.
Finally, a Minkowski metric $g_{\mu \nu }=$diag$(1,-1,-1)$ will be assumed.
To derive the C--S Hamiltonian, we have to compute the canonical momenta:
\begin{equation}
\pi ^{\mu ,a}\left( {\bf x},t\right) =\frac{\delta S_{CS}}{\delta \left(
\partial _0A_\mu \left( {\bf x},t\right) \right) } \label{cmomdef}
\end{equation}
Here we have put $S_{CS}=\int d^3xL_{CS}$, $t=x^0$
and ${\bf x}=\left( x^1,x^2\right) $.
The nonvanishing
Poisson brackets (PB) among canonical variables are:
\[
\left\{ A_\mu ^a\left( {\bf x},t\right) ,\pi _\nu ^b\left( {\bf y},t\right)
\right\} =\delta ^{ab}g_{\mu \nu }\delta ^{(2)}\left( {\bf x}-{\bf y}\right)
\]
From eqs. (\ref{lagrangian}) and (\ref{cmomdef})
we obtain:
\begin{equation}
\pi ^{0,a} =0 \qquad\qquad\qquad
\pi ^{i,a} =\frac s{8\pi }\epsilon ^{ij}A_j^a \label{canmom}
\end{equation}
where $\epsilon ^{ij}$, $i,j=1,2$, is the two dimensional totally
antisymmetric tensor satisfying the definition $\epsilon ^{12}=1$. A
straightforward calculation shows that the C--S Hamiltonian is given by:
\begin{equation}
H_{CS}=-\int d^2{\bf x}A_0^a\left( D_i^{ab}\pi ^{i,b}+\partial _i\pi
^{i,a}\right) \label{wrrr}
\end{equation}
In the above equation $D_i^{ab}$ denotes the spatial components of the
covariant derivative:
\[
D_\mu ^{ab}\equiv \partial _\mu \delta ^{ab}+f^{abc}A_\mu ^c
\]
From eqs. (\ref{canmom}) we obtain the constraints:
\begin{eqnarray}
\varphi ^{0,a} &=&\pi ^{0,a} \label{fpconstr} \\
\varphi ^{i,a} &=&\pi ^{i,a}-\frac s{8\pi }\epsilon ^{ij}A_j^a\qquad \qquad
\qquad i=1,2 \label{spconstr}
\end{eqnarray}
Following the Dirac procedure for constrained systems,
the latter will be imposed in the weak sense:
$$\varphi^{\mu,a}\approx 0$$
To this purpose, we
construct the
extended Hamiltonian:
\begin{equation}
\widetilde{H}=H_{CS}+\int \lambda _\mu ^a\varphi ^{\mu ,a}d^2{\bf x}
\label{extham}
\end{equation}
where the $\lambda _\mu ^a$'s represent the
Lagrange multipliers corresponding to the
primary constraints $\varphi^{\mu,a}$.
From the consistency conditions
$\dot \varphi
^{\mu ,a}=\left\{ \varphi ^{\mu ,a},\widetilde{H}_{CS}\right\} \approx 0$,
we obtain the secondary constraint:
\newfont\prova{eusm10 scaled\magstep1}
\begin{equation}
\text{{\prova G}}^a=
D_i^{ab}\pi ^{i,b}+\partial _i\pi
^{i,a}\approx 0 \qquad\qquad\qquad \text{Gauss law} \label{glaw}
\end{equation}
and two relations which determine the Lagrange multipliers $\lambda_1$ and
$\lambda_2$:
\begin{equation}
\frac s{4\pi }\epsilon ^{ij}\left( D_j^{ab}A_0^b-\lambda _j^a\right)\approx
0\qquad \qquad \qquad i=1,2 \label{lagdet}
\end{equation}
It is possible to see that the
consistency condition \.{\prova G}$^a\approx 0$ does not lead to any
further independent equation.
Let us notice that
the operators $\text{\prova G}^a$ generate the $SU(N)$ group of gauge
transformations. To show this,
we introduce the
Dirac brackets (DB's)
associated to
the second class
constraints $\varphi _i^a$ of eq. (\ref{spconstr}):
\begin{equation}
\left\{ A({\bf x}),B({\bf y})\right\} ^{*}=\left\{ A({\bf x}),B({\bf y}%
)\right\} -\sum_{i,j=1}^2\int d^2{\bf x}^{\prime }d^2{\bf y}^{\prime
}\left\{ A({\bf x}),\varphi ^i({\bf x}^{\prime })\right\} \left(
C^{-1})_{ij}({\bf x}^{\prime },{\bf y}^{\prime }\right) \left\{ \varphi ^j(%
{\bf y}^{\prime }),B({\bf y})\right\} \label{dbdef}
\end{equation}
where $\left( C^{-1}\right) _{ij}({\bf x},{\bf y})$ is the inverse of the
matrix $C^{ij}({\bf x},{\bf y})=\left\{ \varphi ^i({\bf x}),\varphi ^j({\bf y%
})\right\} $. For simplicity, color indices and the time variable have been
omitted in the above equations.\\ Computing $\left( C^{-1}\right) _{ij}({\bf %
x},{\bf y})$ explicitly, we find:
\[
\left( C^{-1}\right) _{ij}^{ab}({\bf x},{\bf y})=\frac{4\pi }s\delta
^{ab}\epsilon _{ij}\delta ({\bf x}-{\bf y})
\]
The Dirac brackets among the canonical variables are now given by
\begin{eqnarray}
\left\{ A_i^a(t,{\bf x}),\pi ^{j,b}(t,{\bf y})\right\} ^{*} &=&\frac 12%
\delta ^{ab}\delta _i^j\delta ({\bf x}-{\bf y}) \label{bone} \\
\left\{ A_i^a(t,{\bf x}),A_j^b(t,{\bf y})\right\} ^{*} &=&\frac{4\pi }s%
\delta ^{ab}\epsilon _{ij}\delta ({\bf x}-{\bf y}) \label{btwo} \\
\left\{ \pi ^{i,a}(t,{\bf x}),\pi ^{j,b}(t,{\bf y})\right\} ^{*} &=&\frac s{%
16\pi }\delta ^{ab}\epsilon ^{ij}\delta ({\bf x}-{\bf y}) \label{bthree}
\end{eqnarray}
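As a simple consistency check, note that once the constraints (\ref{spconstr}) are imposed strongly, i.e. $\pi ^{j,b}=\frac s{8\pi }\epsilon ^{jk}A_k^b$, the brackets (\ref{bone})--(\ref{bthree}) are compatible with one another; for instance, (\ref{btwo}) reproduces (\ref{bone}):
\[
\left\{ A_i^a(t,{\bf x}),\pi ^{j,b}(t,{\bf y})\right\} ^{*}=\frac s{8\pi }\epsilon ^{jk}\left\{ A_i^a(t,{\bf x}),A_k^b(t,{\bf y})\right\} ^{*}=\frac 12\,\delta ^{ab}\epsilon ^{jk}\epsilon _{ik}\,\delta ({\bf x}-{\bf y})=\frac 12\,\delta ^{ab}\delta _i^j\,\delta ({\bf x}-{\bf y}).
\]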
In the following, the DB's defined in
(\ref{dbdef}) will be written without the
superscript $^*$.
Exploiting the DB's (\ref{bone})--(\ref{bthree}), we
obtain the relations:
\begin{eqnarray}
\left\{ \text{{\prova G}}^a(t,{\bf x}),A_i^b(t,{\bf y})\right\}
&=&-D_i^{ab}(x)\delta ({\bf x}-{\bf y})\label{ffcone} \\
\left\{ \text{{\prova G}}[\psi ],A_i^a(t,{\bf x})\right\}
&=& D_i^{ab}(x)\psi ^b({\bf x})\label{ffctwo} \\
\left\{ \text{{\prova G}}^a(t,{\bf x}),\text{{\prova G}}^b(t,{\bf y}%
)\right\} &=&-f^{abc}\text{{\prova G}}^c(t,{\bf x})\delta ({\bf x-y)}
\label{ffcthree}
\end{eqnarray}
where $\text{\prova G}\left[ \psi \right] =\int d^2{\bf x}{\prova G}^a(t,{\bf
x})\psi ^a({\bf x})$.
This shows that the $\text{\prova G}^a(t,{\bf x})$ are the generators of the $%
SU(N)$ gauge transformations as desired.
At this point, we are left with the constraints given by eq.
(\ref{fpconstr}) and by the Gauss law
(\ref{glaw}).
However, the former constraint, which involves the conjugate momentum of
$A_0^a$, can be ignored.
As a matter of fact, the field
$A_0^a$ plays just
the role of the Lagrange multiplier associated to the Gauss law
in the Hamiltonian (\ref{wrrr}) and has no dynamics.
From eqs. (\ref{ffcone})--(\ref{ffcthree}) it turns out that the Gauss law
(\ref{glaw}) is
a first class constraint. To make it second class,
we introduce the Coulomb gauge
fixing:
\begin{equation}
\partial _iA^{i,a}\approx 0 \label{coulombgauge}
\end{equation}
and the new extended Hamiltonian:
\begin{equation}
\check H_{CS}=\int d^2{\bf x}\left[ -A_0^a\text{{\prova G}}^a+\frac s{8\pi }%
A_i^a\partial ^iB^a+\lambda _0^a\pi ^{0,a}\right] \label{neham}
\end{equation}
Here the fields $B^a$ play the role of the Lagrange multipliers associated
with the gauge condition (\ref{coulombgauge}).
From the condition $\{\partial_i A^{i,a},\check H_{CS}\}
\approx 0$, we obtain an equation for $A_0^a$:
\begin{equation}
\partial^iD_i^{ab}A_0^b\approx 0\label{sfour}
\end{equation}
Moreover, the requirement
$\left\{ \partial^iD_i^{ab}A_0^b(x),\check H_{CS}
\right\}\approx 0$ determines the
Lagrange multiplier $\lambda_0$:
\begin{equation}
-\bigtriangleup\lambda_0^a
-\left\{\partial_i({\bf A}_i\times{\bf A}_0)^a,\check H_{CS}\right\}\approx0
\label{lzdet}
\end{equation}
In the above equation the symbol $\bigtriangleup$ denotes the two dimensional
Laplacian $\bigtriangleup=-\partial_i\partial^i$ and
$$({\bf A}_i\times{\bf A}_0)^a\equiv f^{abc}A_i^bA_0^c$$
Another independent equation, which fixes the Lagrange multipliers $B^a$, is
provided by the requirement \.{\prova G}$^a\approx 0$:
\begin{equation}
\left\{{\text{\prova G}}^a,\check H_{CS}\right\}
\approx-\frac{s}{8\pi}D_i^{ab}\partial^i
B^b\approx 0\label{fixb}
\end{equation}
Let us notice that the above relations (\ref{glaw}), (\ref{coulombgauge}) and
(\ref{sfour})--(\ref{fixb}) are compatible with the equations of motion of the
gauge potentials:
\begin{equation} \epsilon^{ij}(D_i^{ab}A_j^{b}-\partial_jA_i^a)=0\label{eqone}
\end{equation}
\begin{equation} D_j^{ab}A_0^b-\partial_0A_j^a=0\label{eqtwo}
\end{equation}
As a matter of fact (\ref{eqone}) is equivalent to the condition
${\text{\prova G}}^a=0$. Moreover, multiplying for instance eq. (\ref{eqtwo})
with the differential operator $\partial^k$, we obtain the
relation:
$$\partial_0\partial^kA_k^a-\partial^kD_k^{ab}A_0^b=0$$
which is consistent with the Coulomb gauge
and the condition (\ref{sfour}) on $A_0^a$.
It is now possible to realize that the Gauss law
(\ref{glaw}) and the Coulomb gauge fixing
(\ref{coulombgauge}) form a set of second class constraints, so that we can
impose them
in the strong sense by computing the final Dirac brackets.
Putting
\[
\chi_1^a=\text{\prova G}^a\qquad\qquad\qquad
\chi_2^a=
\partial_i A^{i,a}
\]
and labelling these constraints by the indices $\alpha,\beta=1,2$, we have for
any two observables $A({\bf x})$ and
$B({\bf y})$ \footnotemark\footnotetext{In the following, the time
variable will be omitted from our equations.}:
\[
\left\{ A^a({\bf x}),B^b({\bf y})\right\}^{*}=\left\{ A^a({\bf x}),B^b({\bf y}%
)\right\}-
\]
\begin{equation}
\sum_{\alpha,\beta=1}^2\sum_{c,d}
\int d^2{\bf x}^{\prime }d^2{\bf y}^{\prime
}\left\{ A^a({\bf x}),\chi_\alpha^c({\bf x}^{\prime })\right\}(
C^{-1})^{\alpha\beta,cd}({\bf x}^{\prime },{\bf y}^{\prime })
\left\{ \chi ^{\beta,d}(%
{\bf y}^{\prime }),B^b({\bf y})\right\} \label{dbndef}
\end{equation}
The matrix
$(C^{-1})^{\alpha\beta,cd}({\bf x},{\bf y})$ denotes the inverse of the
$2\times 2$ matrix $C_{\alpha\beta}^{ab}({\bf x},{\bf y})=\{\chi^a_\alpha
({\bf x}),\chi^b_\beta({\bf y})\}$.
After some manipulations and remembering that the gauge potentials
satisfy the Coulomb gauge constraint, we obtain:
\[
{\bf C}^{ab}({\bf x},{\bf y})=\left(
\begin{array}{c c }
0 & -D^{ab}_i({\bf x})\partial_{\bf x}^i\delta({\bf x}-{\bf y})\\
D^{ab}_i({\bf x})\partial_{\bf x}^i\delta({\bf x}-{\bf y}) & 0\\
\end{array}\right)
\]
To invert the above matrix, it
is convenient to introduce the function $\text{\prova D}^{cb}
({\bf x},{\bf y})$, defined by the following equation \cite{schwinger}:
\begin{equation}
D^{ac}_i({\bf x})\partial_{\bf x}^i\text{\prova D}^{cb}({\bf x},{\bf y})=
\delta^{ab}\delta({\bf x}-{\bf y})\label{dstorta}
\end{equation}
Supposing that
the Green function $\text{\prova D}^{ab}({\bf x},{\bf y})$ has a sufficiently
good behavior at infinity, it is easy to prove that
\begin{equation}
({\bf C}^{-1})^{ab}({\bf x},{\bf y})=\left(
\begin{array}{c c }
0 & \text{\prova D}^{ab}({\bf x},{\bf y})\\
-\text{\prova D}^{ab}({\bf x},{\bf y}) & 0\\
\end{array}\right)\label{invnew}
\end{equation}
After imposing the constraints (\ref{glaw}) and (\ref{coulombgauge}) in the
strong sense, the Hamiltonian $\check H_{CS}$ vanishes, but the commutation
relations (CR's) between the fields remain complicated.
From eqs. (\ref{dbndef}) and (\ref{invnew}), in fact, the basic DB's
between the canonical variables $A_i^a$ have the following form:
\[
\left\{A_i^a({\bf x}),A_j^b({\bf y})\right\}^*=
-\frac{4\pi}{s}\delta^{ab}\epsilon_{ij}\delta({\bf x}-{\bf y})+
\]
\begin{equation}
\frac{4\pi}{s}\epsilon_{ik}\partial_{\bf x}^kD_j^{bc}({\bf y})
\text{\prova D}^{ac}
({\bf x},{\bf y})-\frac{4\pi}{s}
\epsilon_{kj}D_i^{ac}({\bf x})\partial_{\bf y}^k\text{\prova D}^{cb}
({\bf x},{\bf y})\label{maincomrel}
\end{equation}
Let us study the main properties of the above DB's. First of all, they
are antisymmetric as expected:
\begin{equation}
\{ A_i^a({\bf x}), A_j^b({\bf y})\}^*=-
\{ A_j^b({\bf y}), A_i^a({\bf x})\}^*\label{antisym}
\end{equation}
The antisymmetry of the right hand side of eq. (\ref{maincomrel}) is not
explicit, but can be verified with the help of the relation:
\begin{equation}
\label{propsym}
\text{\prova D}^{ab}({\bf x},{\bf y})=\text{\prova D}^{ba}({\bf y},{\bf x})
\end{equation}
The above symmetry of the Green function $\text{\prova D}^{ab}({\bf x},{\bf
y})$ in its arguments is a consequence of the selfadjointness of the
defining equation (\ref{dstorta}) \cite{schwinger}.
Moreover, the CR's (\ref{maincomrel}) are consistent
with the Coulomb gauge. As a matter of fact, it is easy to prove that:
$$\{ A_i^a({\bf x}), \partial^jA_j^b({\bf y})\}^*=
\{ \partial^iA_i^a({\bf x}), A_j^b({\bf y})\}^*=0$$
The case of a Chern--Simons field theory with abelian gauge group $U(1)$ is
particularly instructive in order to understand the meaning of the CR's
(\ref{maincomrel}). Let $U_\mu$ denote the abelian gauge fields.
Then the Lagrangian (\ref{lagrangian}) reads:
$$L_{CS}={s\over 8\pi}\epsilon^{\mu\nu\rho}U_\mu\partial_\nu U_\rho$$
It is now possible to decompose the gauge potentials $U_i$, $i=1,2$ into
transverse and longitudinal components:
$$U_i=\epsilon^{ij}\partial_j\varphi+\partial_i \rho$$
where $\varphi$ and $\rho$ are two real scalar fields.
Exploiting the Coulomb gauge condition it turns out that $\rho=0$.
The canonical momenta are given by:
$$\pi^i=\frac{s}{8\pi}\epsilon^{ij}U_j$$
As a consequence,
from the Gauss law $\partial_i\pi^i=0$, we obtain the relation $\partial_i
\partial^i\varphi=0$. This implies that $\varphi=0$ and thus
there is no dynamics in the C--S
field theory as expected.
The CR's (\ref{maincomrel}) must be consistent with that fact.
Indeed, in the abelian case it is easy to derive the Green function
$\text{\prova D}({\bf x},{\bf y})$ solving eq. (\ref{dstorta}). The result is:
\begin{equation}
\text{\prova D}({\bf x},{\bf y})=-{1\over 2\pi} {\rm log}|
{\bf x}-{\bf y}|\label{dsabelian}
\end{equation}
Substituting the right hand side of the above equation in (\ref{maincomrel}),
we obtain:
$$[U_i(t,{\bf x}),U_j(t,{\bf y})]=0$$
so that the fields do not propagate as expected.
To conclude the discussion of the abelian case, let us notice that
eqs. (\ref{lagdet}) and (\ref{sfour})--(\ref{fixb}) admit only the
trivial solutions $U_0=\lambda_\mu=B=0$ in agreement with the fact that,
in absence of couplings with matter fields,
the C--S theory is topological and there
are no degrees of freedom. \smallskip
In the nonabelian case the situation is analogous, but the equations
of motion of the constraints become nonlinear and can in general be solved
only using a perturbative approach. The relevant equations determining the
fields $A_i^a(z)$, with $i=1,2$, are given by:
\begin{equation}
F_{12}^a=\partial_1A_2^a-\partial_2A_1^a-gf^{abc}A_1^bA_2^c=0\label{glclone}
\end{equation}
and
\begin{equation}
\partial_1A_1^a+\partial_2A_2^a=0\label{cgclone}
\end{equation}
With respect to eq. (\ref{lagrangian}),
we have introduced here the new coupling
constant $g^2=\frac{8\pi}{9s}$ and the fields $A_\mu$ have been rescaled
in such a way that the new action becomes:
$$
L=\epsilon^{\mu \nu \rho }\left( A_\mu ^a\partial _\nu A_\rho
^a-gf^{abc}A_\mu ^aA_\nu ^bA_\rho ^c\right)
$$
In the following, we will also suppose that $g$ is so small that
a perturbative treatment of the C--S field theory makes sense.
Under this hypothesis, the fields $A_i^a$ can be expanded in powers of $g$:
$$A_i^a(x)=\sum_{n=0}^\infty g^nA_i^{a (n)}(x)$$
where, from eqs. (\ref{glclone}) and (\ref{cgclone}), the $A_i^{a (n)}$'s
satisfy the following equations:
$$\partial_1A_2^{a(0)}-\partial_2A_1^{a(0)}=0\qquad\qquad\qquad
\partial_1A_1^{a(0)}+\partial_2A_2^{a(0)}=0$$
and
\begin{equation}
\partial_1A_2^{a(n)}-\partial_2A_1^{a(n)}-gf^{abc}
A_1^{b(n-1)}A_2^{c(n-1)}=0\qquad\qquad\qquad n=1,\ldots,\infty
\label{hoone}
\end{equation}
\begin{equation}
\partial_1A_1^{a(n)}+\partial_2A_2^{a(n)}=0
\qquad\qquad\qquad n=1,\ldots,\infty
\label{hotwo}
\end{equation}
Assuming that the gauge fields vanish at infinity, the solution of the above
equations at the zeroth order is
\begin{equation}
A_1^{a(0)}(t,{\bf x})=A_2^{a(0)}(t,{\bf x})=0\label{cvd}
\end{equation}
as shown in the abelian case.
Moreover, from eqs. (\ref{hoone}) and (\ref{hotwo}), it turns out that $A_i^{a(n)}(t,{\bf x})=0$
for $n=1,\ldots,\infty$, so that all the field configurations solving
eqs. (\ref{glclone})--(\ref{cgclone}) vanish identically.
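To make the inductive step explicit, note that the gauge condition (\ref{hotwo}) allows one to write
$A_1^{a(n)}=\partial_2\psi^{a(n)}$ and $A_2^{a(n)}=-\partial_1\psi^{a(n)}$ for some function $\psi^{a(n)}$,
in terms of which eq. (\ref{hoone}) becomes the Poisson equation
\[-\left(\partial_1^2+\partial_2^2\right)\psi^{a(n)}=gf^{abc}A_1^{b(n-1)}A_2^{c(n-1)}.\]
Since the right hand side vanishes by the induction hypothesis, $\psi^{a(n)}$ is harmonic; the requirement
that the gauge fields vanish at infinity then forces $\partial_i\psi^{a(n)}=0$ and hence $A_i^{a(n)}=0$.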
Pure gauge solutions obtained by performing gauge transformations
are not allowed because, at least within
perturbation theory, the Coulomb gauge fixes the gauge freedom completely.
As a consequence, the right hand side
of (\ref{maincomrel}) is equal to zero. Indeed, due to eq. (\ref{cvd}), the
Green function $\text{\prova D}^{ab}({\bf x},{\bf y})$
is given by:
\begin{equation}
\text{\prova D}^{ab}({\bf x},{\bf y})=-\delta^{ab}{1\over 2\pi} {\rm log}|
{\bf x}-{\bf y}|\label{dab}
\end{equation}
and, substituting in eq. (\ref{maincomrel}), we obtain:
\begin{equation}
\{ A_i^a({\bf x}), A_j^b({\bf y})\}^*=0\label{physbra}
\end{equation}
as expected.\\
Of course, the vanishing of the gauge fields leads to the
trivial solutions $A_0^a=\lambda_\mu^a=B^a=0$ for the Lagrange multipliers
as in the abelian case.\\
It is worth remarking that the would-be Poincar\'e
algebra becomes trivial {\it a posteriori}, that is,
when computed on the ``physical'' solutions
of the theory (eq. (\ref{cvd})), characterized by the ``strong'' validity of the
constraints and of the brackets given in eq. (\ref{physbra}).
That means that the Poincar\'e covariance is recovered through the
trivial representation of the Poincar\'e group \footnotemark{} \footnotetext{
It is worth mentioning that in the case of the Maxwell--Chern--Simons
theory the Poincar\'e covariance takes place through a {\bf nontrivial}
representation of the Poincar\'e group \cite{gg}.}.
We stress that the algebra must be evaluated in such an {\it a posteriori} way,
as otherwise one finds ``extra'' terms proportional to the
constraints. For instance, the intermediate Dirac brackets
(\ref{bone})--(\ref{bthree}) yield for
the generators of the time and the space translations the following result:
$$\{P_0, P_k\} = \int d^2x A_0^a \partial_k \text{\prova G}^a$$
where $\text{\prova G}$ is given in (\ref{glaw}).
Let us notice that in the case of the MCS theory the Poincar\'e invariance
has been proved in the Coulomb gauge within the framework of the canonical
formalism in ref. \cite{gg}.
To quantize the theory, we have to replace
the Dirac brackets (\ref{maincomrel}) with commutators.
At least in the absence of coupling with matter fields, we obtain
trivial commutation relations between the gauge potentials:
\begin{equation}
\left[A_i^a({\bf x}),A_j^b({\bf y})\right]=0\label{quantumcr}
\end{equation}
\section{Conclusions}
In this paper the C--S field theories have been quantized in the Coulomb gauge
within Dirac's canonical approach to constrained systems.
All the constraints coming from the Hamiltonian procedure and from
Dirac's consistency requirements have been derived.
As anticipated in the Introduction, the C--S theories become
in this gauge two dimensional
models. Only the fields $A_i^a$, for $i=1,2$, have in fact a dynamics, which
is governed by the commutation relations (\ref{maincomrel}).
If no interactions with matter fields are present, we have shown
that these CR's vanish at all perturbative orders.
Thus the C--S field theories in the Coulomb gauge are not only finite,
but also free.
This result has been verified with explicit perturbative
calculations of the correlation functions in \cite{flnew}.
A natural question that arises at this point is whether analogous conclusions
can be drawn for the covariant gauges.
For this reason it would be
interesting to repeat the procedure of canonical quantization developed
here
also in this case.
The situation becomes different
if the interactions with other fields are switched on.
Adding for instance a coupling with a current $J^a_\mu$ of the kind
$\int d^2{\bf x}A_\mu^a J^{\mu,a}$ to the Hamiltonian (\ref{neham}),
it is possible to see that the Gauss law (\ref{glaw}) is modified as follows:
$$D_i^{ab}\pi ^{i,b}+\partial _i\pi
^{i,a}+J_0^a\approx 0$$
Thus eqs. (\ref{cvd}) are no longer valid and we have to consider the
full commutation relations (\ref{maincomrel}).
Remarkably,
they vanish trivially at zeroth order in the coupling constant $g$.
Moreover, the CR's (\ref{maincomrel})
are perfectly well defined and do not lead
to ambiguities in the quantization of the
C--S models in the Coulomb gauge.
In particular, we have verified here the consistency of (\ref{maincomrel})
with the Coulomb gauge fixing and their antisymmetry under the
exchange of the fields.
A physical application of our results,
which is currently under consideration, is the
investigation of the statistics of fermionic and bosonic matter
fields interacting with nonabelian C--S theories at high temperatures
\cite{higtemp}. Other interesting applications are $(2+1)$
quantum gravity and the calculation of the new link invariants
from C--S field theories quantized
on Riemann surfaces, whose existence has been formally shown
in \cite{cotta}. In these latter two cases,
the possibility offered by the Coulomb gauge of performing explicit
calculations also on non--flat space--times \cite{ffprd}
can be exploited.
\section{Introduction}
Sustained magnetism in astrophysical objects is due to the dynamo mechanism which relies on the generation of electrical currents by fluid motion~\cite{brandenburg2005}. The secular cooling of the Earth's interior and the release of light elements at the boundary of the solid inner core provide buoyancy sources that drive convection, leading to the generation of electrical currents~\cite{roberts2013}. It has been more than two decades since the idea of modeling the geomagnetic field using computer simulations was successfully demonstrated~\cite{glatzmaier1995a,glatzmaier1995b}. These pioneering simulations were able to reproduce the dipole dominant nature of the geomagnetic field and showed reversals of the geomagnetic dipole. Since then computer simulations have become a primary tool for studying the properties of the geomagnetic field~\cite{christensen2004,gubbins2011,aubert2013, olson2014,aubert2014}.
The range of flow length scales present in the liquid outer core is enormous due to the very small viscosity of the fluid. To model this aspect in geodynamo simulations one would require tremendous computing power that is not available even in the foreseeable future. Therefore, all geodynamo simulations must use unrealistically large viscosity to reduce the level of turbulence. One quantity that epitomizes this discrepancy is the Ekman number $E=\nu\Omega^{-1}D^{-2}$ ($\nu$ is the viscosity, $\Omega$ is the Earth's rotation rate, and $D$ is the thickness of the liquid outer core) which roughly quantifies the ratio of the viscous force ${F}_V$ and the Coriolis force ${F}_C$. The Ekman number is about $10^{-15}$ in the core while simulations typically use $10^{-4}$ \cite{roberts2013}.
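For orientation, inserting representative values $\nu\sim10^{-6}\,{\rm m^2\,s^{-1}}$, $\Omega\simeq7.3\times10^{-5}\,{\rm s^{-1}}$, and $D\simeq2.3\times10^{6}\,{\rm m}$ (quoted here only as an illustrative estimate) into the definition above gives $E=\nu\Omega^{-1}D^{-2}\approx 10^{-6}/(7.3\times10^{-5}\times5.3\times10^{12})\approx3\times10^{-15}$, consistent with the order of magnitude quoted above.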
The Coriolis force tends to suppress changes of the flow in the direction of the rotation axis, i.e., makes the flow nearly geostrophic~\cite{proudman1916, taylor1923}. This is known as the ``Proudman-Taylor constraint'' (PTC). Because the boundary of the fluid core is inclined relative to the direction of rotation (except at the poles), convective motions cannot be purely geostrophic and therefore the PTC impedes convection~\cite{greenspan1968}. In the absence of a magnetic field, the viscous force or the inertial force $F_I$ must compensate the part of the Coriolis force that cannot be balanced by the pressure force $F_P$. $F_V$ or $F_I$ may still be significantly smaller than the Coriolis force. For example, at the onset of non-magnetic convection in a sphere, $F_V$ is smaller than $F_C$ by a factor $E^{1/3}$. Nonetheless, it is of the same order as $|{\bf F}_C + {\bf F}_P|$ and plays a key role in the force balance. The buoyancy force $F_A$ (Archimedean) is comparable to $F_V$ and the state can be referred to as being in a VAC-balance (Viscous, Archimedean, Coriolis) \cite{king2013b}.
In the Earth's core, the buoyancy force and the Lorentz force $F_L$ due to the geomagnetic field are expected to be comparable to the Coriolis force \cite{chandrasekhar1954, stevenson1979,starchenko2002, roberts2013}. This state is commonly referred to as a ``MAC'' state. Here, the dynamo presumably selects a magnetic field that leads to an efficient relaxation of the PTC. This is expected to occur at $\Lambda\approx\mathcal{O}(1)$, where the Elsasser number is $\Lambda = {B^2}(\rho\mu\lambda\Omega)^{-1}$ ($B$ is the mean magnetic field, $\rho$ is density, $\mu$ is magnetic permeability, $\lambda$ is magnetic diffusivity) \cite{chandrasekhar1954, stevenson1979}. Note that here we use the term MAC-balance in the sense that $F_L$ and $F_A$ are of the same order as the uncompensated Coriolis force $|{\bf F}_C + {\bf F}_P|$, not necessarily the total Coriolis force.
Although a MAC state has long been expected from theoretical considerations, its existence in geodynamo simulations has not been demonstrated so far. A recent study of geodynamo models at an Ekman number of $10^{-4}$ explicitly calculated the value of the various forces \cite{soderlund2012}. The authors show that the viscous force was actually comparable to the other forces. Furthermore, the analysis of convection properties suggested that a VAC state exists in contemporary geodynamo simulations rather than a MAC state \cite{king2013b}. The presence of a VAC state promotes the idea that cost-efficient simulations might produce geodynamo-like features for the wrong reasons \cite{roberts2013}. A natural question then arises: How small should the viscosity be for a MAC state to appear? Due to the very nature of this question a detailed parameter study is called for that systematically explores the parameter regime of geodynamo simulations.
\section{Methods}
We carry out a detailed study of geodynamo models in which we analyze data from our recent study \cite{yadav2016} and perform new simulations at more extreme values of the control parameters. The basic setup is geodynamo-like and we consider a spherical shell where the ratio of the inner ($r_i$) and the outer ($r_o$) radius is 0.35. The thickness $D$ of the shell is given by $r_o-r_i$. The convection in the shell is driven by a superadiabatic temperature contrast $\Delta T$ across the two boundaries. The shell rotates about the $\hat z$ axis with an angular frequency $\Omega$. We work with non-dimensional equations and we use $D$ as the standard length scale, $D^2/\nu$ as the time scale, $\Delta T$ as the temperature scale, and $\sqrt{\rho\mu\lambda\Omega}$ as the magnetic field scale.
We employ the Boussinesq approximation and the equations governing the velocity $\mathbf{u}$, magnetic field $\mathbf{B}$, and temperature perturbation $T$ are:
\begin{gather}
E\left(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u\cdot\nabla\mathbf{u}}\right)+2\hat{z}\times\mathbf{u}= -\nabla P + \frac{Ra\,E}{P_r}\,{g(r)\,T\,\hat{\bf r}} \nonumber \\
+{\frac{1}{P_m}}(\nabla\times\mathbf{B})\times\mathbf{B}+E\nabla^{2}\mathbf{u}, \label{eq:MHD_vel} \\
\nabla \cdot \mathbf{u} = 0, \\
\frac{\partial T}{\partial t}+\mathbf{u\cdot\nabla}T = \frac{1}{P_r}\nabla^{2}T, \\
\frac{\partial\mathbf{B}}{\partial t} = \nabla\times(\mathbf{u}\times\mathbf{B})+\frac{1}{P_m}\nabla^{2}\mathbf{B}, \label{eq:MHD_mag} \\
\nabla \cdot \mathbf{B} = 0, \label{eq:div_B_0}
\end{gather}
where $g(r)$ is the gravity that varies as $r/r_o$, and $P$ is the pressure. The control parameters that govern the system are:
\begin{gather}
\text{Prandtl number } \,\,\,\, P_r=\frac{\nu}{\kappa}, \\
\text{magnetic Prandtl number } \,\,\,\, P_m=\frac{\nu}{\lambda}, \\
\text{Rayleigh number } \,\,\,\, Ra=\frac{\alpha\,g_o\,D^3\Delta T}{\nu\,\kappa},
\end{gather}
where $\alpha$ is the thermal expansivity, $g_o$ is the gravity at the outer boundary, and $\kappa$ is the thermal diffusivity.
Both boundaries have fixed temperature, are no-slip, and are electrically insulating. The open-source code MagIC (available at \href{https://github.com/magic-sph/magic}{\tt www.github.com/magic-sph/magic}) is used to simulate the models \cite{wicht2002}. The code uses spherical harmonic decomposition in latitude and longitude and Chebyshev polynomials in the radial direction. MagIC uses the SHTns library \cite{schaeffer2013} to efficiently calculate the spherical harmonic transforms. Since we employ non-dimensional equations, the relative influence of viscosity is mainly expressed by the value of the Ekman number. To explore the effect of the magnetic field we perform hydrodynamic (HD) simulations, i.e. without a magnetic field, in parallel to the dynamo models.
The results of simulations with $E=10^{-4},\,10^{-5}$ are taken from our earlier study \cite{yadav2016} and are extended here to runs at $E=10^{-6}$. In all of our simulations, the fluid Prandtl number $P_r$ is unity. The magnetic Prandtl number $P_m$ is also unity for cases with $E=10^{-4}$ and $E=10^{-5}$. At $E=10^{-6}$, we ran five dynamo simulations with $P_m$ of 2, 1, 0.5, 0.5, and 0.4 (in order of increasing $Ra$). To reduce the time spent in calculating the transient stages for the $E=10^{-6}$ simulation with the highest $Ra$, we use a scaled $E=10^{-5}$ dynamo simulation as the initial condition. The scaling factors for the magnetic field and the velocity are calculated using the scaling laws by Christensen \& Aubert \cite{christensen2006}. Furthermore, the other $E=10^{-6}$ simulations at lower $Ra$ use an initial condition from a higher $Ra$ case. Data tables that contain useful globally-averaged quantities, grid resolutions, and simulation run-time are provided as online supplementary material.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{Fig1a} \\
\includegraphics[width=0.8\linewidth]{Fig1b}
\caption{Panels a, b, and c show the variation of the forces governing the dynamo simulations as a function of the convective supercriticality $Ra/Ra_c$. The $Ra_c$ values assumed for $E=10^{-4},\,10^{-5},\,10^{-6}$ are $6.96\times 10^5$, $1.06\times 10^7$, $1.79\times 10^8$, respectively (Christensen \& Aubert \cite{christensen2006}). The magnitudes of Coriolis force and the pressure gradient force are similar for most Rayleigh numbers and the data points overlap. The legend describing the data in panels a, b, and c is shown at the top. Panels d, e, and f show the behavior of various force ratios as a function of the dynamo generated Elsasser number $\Lambda$. The different colors in lower panels represent different Ekman numbers that are indicated in panel d. The curves connecting the $E=10^{-4}$ data points in the lower panels follow increasing $Ra$ trend. Therefore, as $E=10^{-4}$ dipolar dynamos become unstable at certain $\Lambda$, the curve turns back even though the $Ra$ increases.}
\label{fig:fig1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{Fig2}
\caption{Panels a, b, c, and d show the radial velocity, given in terms of the Reynolds number ($u\,D/\nu$, where $u$ is the local velocity), in the equatorial plane of the hydrodynamic simulations. Panels e, f, g, and h show the same for the corresponding magnetohydrodynamic cases. The Rayleigh number of all the cases shown is about 10 times $Ra_c$. The color maps are saturated at values lower than the extrema to highlight fainter structures.}
\label{fig:fig2}
\end{figure*}
\section{Results}
We begin our analysis by explicitly calculating the various forces involved in the system, namely, the Coriolis force $F_C$, the buoyancy (Archimedean) force $F_A$, the Lorentz force $F_L$, the inertial force $F_I$, the viscous force $F_V$, and the pressure gradient force $F_P$. We compare the root-mean-square values of these forces, averaged in space and in time. Since our main goal is to compare the importance of various forces for the flow dynamics, care must be exercised in choosing the appropriate quantities. The spherically-symmetric component of any force is dynamically irrelevant; we thus exclude the harmonic order $m$=0 component from the force values. The PTC implies that the Coriolis force is largely compensated by the pressure gradient. For our purpose, we only concentrate on that part of $F_C$ that is not balanced by the pressure gradient force. Therefore, we consider $\mathbf{F}_C + \mathbf{F}_P$ rather than $\mathbf{F}_C$ alone.
Since we employ no-slip boundary conditions, Ekman layers are formed at the boundaries \cite{greenspan1968}. Within these layers, the viscous force is dominant. Due to the larger viscosity, contemporary geodynamo simulations have much thicker Ekman boundary layers than those present in the Earth's core. This leads to a rather substantial contribution of the boundary layer viscous force to the total viscous force (e.g.~see \cite{stellmach2014,plumley2016}). To correct for this, we choose to exclude thin boundary layers, one below the outer boundary and one above the inner boundary, from the force calculation. The thickness of the excluded layers is 1, 2, 3\% of the shell thickness for $E=10^{-6}$, $10^{-5}$, $10^{-4}$, respectively. The chosen thickness of the layers is a rough estimate and the values are such that any larger value does not lead to further appreciable change in the bulk viscous force. For the sake of consistency, boundary layers are excluded from the averaging procedure for all other force types as well. Sometimes it is argued that Ekman suction in the viscous boundary layer \cite{greenspan1968} plays an essential role in creating flow helicity as an important prerequisite for magnetic field generation \cite{davidson2015}. However, we note that geodynamo simulations with a stress-free boundary, which lack Ekman suction, show quite similar results compared to models with rigid boundaries \cite{yadav2013a, aubert2014}, hence viscous boundary layer effects do not seem to play an essential role.
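The averaging just described can be summarised schematically as follows (an illustrative sketch only; the array layout is an assumption and does not correspond to the actual MagIC output format).
\begin{verbatim}
import numpy as np
# force_lm[r, l, m] : spherical-harmonic coefficients of one force component
# r_grid            : radial grid points with r_i <= r <= r_o
def rms_bulk_force(force_lm, r_grid, r_i, r_o, cut=0.01):
    shell = r_o - r_i
    # exclude thin layers near both boundaries (1--3% of the shell, see text)
    bulk = (r_grid > r_i + cut * shell) & (r_grid < r_o - cut * shell)
    f = force_lm[bulk].copy()
    f[:, :, 0] = 0.0   # drop the dynamically irrelevant m = 0 contribution
    # rms over the retained coefficients (radial weights omitted in this sketch)
    return np.sqrt(np.mean(np.abs(f) ** 2))
\end{verbatim}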
The various forces calculated from the simulations are portrayed in Fig.~\ref{fig:fig1} (a,b,c) as a function of the convective supercriticality $Ra/Ra_c$ ($Ra_c$ is the critical $Ra$ where convection starts). First, notice that our choice of using ${\bf F}_C+{\bf F}_P$ rather than ${\bf F}_C$ makes a substantial difference since both $F_C$ and $F_P$ are very strong, however, they cancel each other to a large extent. This implies that to the zeroth-order the system is in a geostrophic state, where $F_C$ and $F_P$ are dominant. The first-order deviations are balanced by other weaker forces; these may be Lorentz, viscous, or inertial forces. One may call this state a `quasi-geostrophic' one \cite{calkins2015}. In the $E=10^{-4}$ simulations, the various forces remain comparable to each other to within an order of magnitude. This series of runs spans a large range of $Ra/Ra_c$, covering the transition from dipole-dominant dynamos to multipolar ones (occurring at around $Ra/Ra_c\approx 30$ for $E=10^{-4}$). With decreasing Ekman number the transition shifts to higher values of $Ra/Ra_c$ \cite{kutzner2002}, which are not reached in our simulations with $E\le 10^{-5}$. The latter all have a dominantly dipolar magnetic field. As convection becomes more turbulent, the inertial force eventually becomes the most dominant force in our $E=10^{-4}$ simulations. For low convective supercriticalities ($Ra/Ra_c <$ 10), ${F}_C$ and ${F}_A$ are comparable for all $E$. The Lorentz force ${F}_L$ starts to match these two forces as $Ra$ increases. At $E=10^{-5}$ and more obviously at $E=10^{-6}$ a clear hierarchy of forces becomes apparent for $Ra/Ra_c \gtrsim 10$. Inertial and viscous forces are at least a factor of 10 weaker than the others. Lorentz, Archimedean and (uncompensated) Coriolis forces are very similar in amplitude and must balance each other, i.e. the bulk of the fluid is in a dynamical MAC state. We reiterate that since Coriolis and pressure forces are individually rather strong, the zeroth-order force balance is largely geostrophic and the notion of a MAC state in our simulations is a first-order effect.
We plot the ratio of ${F}_L$ and ${F}_V$ as a function of the Elsasser number $\Lambda$ in Fig.~\ref{fig:fig1}d. In simulations with $E=10^{-4}$, as the dynamo generated field strength increases, the ratio $F_L/F_V$ reaches a maximum of about 8. Lowering $E$ to $10^{-5}$ and $10^{-6}$ increases this maximum ratio to about 30 and 45, respectively. The largest ratios between $F_L$ and $F_V$ is reached for cases with Elsasser numbers of order one. As shown in Fig.~\ref{fig:fig1}e, the ratio $F_L/F_I$ also follows the same qualitative trend as $F_L/F_V$. Note that a MAC state can be disturbed by the viscous force, however, with increasing flow turbulence, the inertial force can also do the same \cite{hughes2016}. Therefore, it is appropriate to compare Lorentz force and the sum of viscous and inertial force. As Fig.~\ref{fig:fig1}f shows, such a comparison provides a succinct way of highlighting the overall dominance of the Lorentz force. In this context, it is worth pointing out that assuming a higher magnetic Prandtl number may help to increase the strength of the magnetic field, and, in turn, its influence on the flow \cite{christensen2006, yadav2013a, dormy2016}. However, whether such an approach is justified or not remains to be tested.
The trends in the forces highlighted above have important consequences for the properties of convection. When a VAC balance holds in rapidly rotating convection, the characteristic flow length scale $l_u$ is proportional to $D\,E^{1/3}$, i.e length scales become smaller with decreasing $E$ \cite{jones2000b, king2013b, roberts2013}. As shown in Fig.~\ref{fig:fig2}(a,b,c,d), the convective structures in our hydrodynamic (HD) simulations do follow this trend qualitatively as $E$ decreases. On the other hand, in the MAC regime, $l_u$ is expected to be similar to the system size and to remain independent of $E$ \cite{jones2000a, starchenko2002, roberts2013}. For simulation with $E\ge10^{-4}$, both HD and dynamo cases have rather similar convective length scales (Fig.~\ref{fig:fig2}e,f). At $E=10^{-5}$, the dynamo case has a higher tendency for elongated structures in the radial direction and fewer up- and down-wellings in azimuthal direction (Fig.~\ref{fig:fig2}g) as compared to the HD case (Fig.~\ref{fig:fig2}c). At $E=10^{-6}$, the dynamo case has significantly larger length scales (Fig.~\ref{fig:fig2}h) than the corresponding HD setup (Fig.~\ref{fig:fig2}d). This increased influence of the magnetic field is also reflected in the total magnetic energy which exceeds the total kinetic energy more and more as $E$ is decreased (Supplementary figure \ref{fig:figS1}). Another interesting feature in the $E=10^{-6}$ dynamo case is the presence of a layer of small scale convection near the outer boundary. This is caused by a relatively weaker Lorentz force in these regions (Supplementary figure \ref{fig:figS2}). We conclude that hints of a MAC regime appear at $E=10^{-5}$ \cite{takahashi2012, teed2015} but this regime is more prominent at $E=10^{-6}$. Furthermore, in a single system, there might be regions where a MAC state prevails while in some other regions it may not (also see \cite{sreenivasan2006, dormy2016}).
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{Fig3}
\caption{Perspective view of a hydrodynamic (panel {\bf a}) and dynamo case (panel {\bf b}) with $E=10^{-6}$, $P_m=0.5$, $Ra=2\times10^9$. The radial velocity on the equatorial plane is given in terms of the Reynolds number. The blue and light orange contours represent radial velocity of -300 and 300, respectively.}
\label{fig:fig3}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Fig4}
\caption{Ratio of Nusselt number $Nu$ of dynamo and hydrodynamic cases (with otherwise same control parameters) as a function of the Elsasser number.} \label{fig:fig4}
\end{figure}
In Fig.~\ref{fig:fig3} we present the 3-dimensional morphology of the convection in the HD and in the dynamo case for the lowest viscosity simulation with the largest ratio of Lorentz force to viscous and inertial forces. The HD setup has small axially-aligned tube-like convection columns. In the dynamo case, however, the convection occurs in the form of thin sheets stretched in the cylindrically-radial direction. It is also clear that as compared to the HD case the convective structures vary more along the rotation axis. Both features demonstrate the influence of the Lorentz forces on the convection morphology.
Another way to quantify the relaxed influence of the Proudman-Taylor condition in the dynamo cases is to analyze the total heat transferred from the bottom boundary to the top. This stems from the notion that rotation quenches the efficiency of convection by suppressing motions along the rotation axis \cite{greenspan1968}. Any relaxation of this constraint will lead to a gain in heat-transfer efficiency. We utilize the ratio of the Nusselt number $Nu$ (ratio of the total heat and the conductive heat transferred from the bottom to the top boundary) for dynamo and HD cases as a function of the dynamo generated average magnetic field strength (Fig.~\ref{fig:fig4}). At $E=10^{-4}$, the $Nu$-ratio remains close to unity, implying that the convective heat transport in dynamo and HD cases is similar. At $E=10^{-5}$, the $Nu$-ratio peaks for $\Lambda\approx 3$ and reaches a value of about 1.3 \cite{yadav2016}. This enhancement of heat transport by the presence of a magnetic field is more pronounced when we further decrease $E$ to $10^{-6}$. Here, the heat flow is doubled for $\Lambda\approx 1$. Comparing this figure with Fig.~\ref{fig:fig1}(d,e,f) highlights that the gain in the heat-transfer efficiency in the dynamo cases is largest when the Lorentz force is maximally dominant over viscous and inertial forces.
\section{Discussion}
To summarize, we used a systematic parameter study to test the existence of a dynamical state in dynamo simulations where magnetic forces play a crucial role together with Coriolis and buoyancy forces (MAC-state), as is expected to be present in the Earth's core. We lowered the viscosity to a small value, close to the limit allowed by today's computational resources, and found that Lorentz forces become equal in strength to (uncompensated) Coriolis and buoyancy forces and, for a limited range of Rayleigh numbers, far exceed viscous and inertial forces. This leads to large scale convection, substantial axial variation in the convection structures, and a 100\% increase in the heat-transfer efficiency as compared to the corresponding hydrodynamic setup. All of these features are expected theoretically \cite{roberts2013}. For higher viscosity values, the convection is much less affected by the magnetic field \cite{soderlund2012}.
We note that in our simulations at the lowest Ekman number the Lorentz force is substantially smaller than the Coriolis force or the pressure force (taken individually). Hence, the state can be called quasi-geostrophic \cite{calkins2015}. Nonetheless, a completely geostrophic state is impossible and the essential question is what balances the residual Coriolis force. Since these are the Lorentz and Archimedean forces, with an insignificant role for viscosity and inertia, it is also justified to speak of a MAC-balance. We also note that although a MAC-balance is satisfied globally, this does not imply that the residual Coriolis force, Lorentz force and buoyancy force are pointwise of the same magnitude. For example, strong Lorentz forces seem to be rather localized (see Supplementary figure \ref{fig:figS2}), as found in previous studies (e.g.~\cite{sreenivasan2006}). In regions where the Lorentz force is weak, the balance could be almost perfectly geostrophic or buoyancy alone could balance the residual Coriolis force.
Our results show some similarities with earlier studies done in a similar context. Larger scale convection in dynamo simulations compared to their HD counterparts has been reported in rotating convection in Cartesian geometry \cite{stellmach2004}; there, the dynamo simulation with $E=10^{-6}$ showed about 60\% increase in $Nu$. A recent laboratory experiment of rotating magnetoconvection (imposed magnetic field) in a cylinder also showed about 30\% increase in $Nu$ due to the presence of the magnetic field (at $E=4\times 10^{-6}$ and $\Lambda\approx 2$) \cite{king2015}.
In the context of geodynamo simulations, studies at Ekman numbers comparable to the lowest value used in our study have been reported before. A substantial change in the convection length scale due to the dynamo generated magnetic field was found, but it only occurred in cases with constant heat-flux boundary conditions \cite{sakuraba2009}. In contrast, we find the same enlargement of flow length scales also for fixed temperature conditions. Differences in the model setup and parameter values prevent us from elucidating the exact cause for these differences. Miyagoshi et al. \cite{miyagoshi2010, miyagoshi2011} also performed geodynamo simulations with $E\approx 10^{-6}$ (in our definition) and observed a ``dual convection" morphology where the deeper convecting regions had thin cylindrically-radial structures and the outer regions had very large scale spiraling features embedded into a prominent zonal flow. We also found such convection morphology at $E=10^{-6}$, in both hydrodynamic and dynamo simulations, but only at low Rayleigh numbers ($Ra/Ra_c <$ 10). Again, our simulations and these studies \cite{miyagoshi2010, miyagoshi2011} are significantly different in model details, for example they assumed gravity to drop sharply with radius whereas in our case it linearly increases from the inner to the outer boundary as it is appropriate for the Earth's core. A geodynamo simulation at the lowest Ekman number reached so far has been performed by Nataf and Schaeffer \cite{nataf2015} and shows rather small flow scales. Because hardly any details of the simulation are available it is difficult to assess the reasons. Possibly, strong driving could make inertial forces significant, leading to a compromised MAC state.
Our parameter study has shown that at an Ekman number of $10^{-6}$ a MAC-state, as is expected in the Earth's core, is very nearly reached, albeit only in a limited range of moderate Rayleigh numbers. As a consequence, the magnetic dipole dominates more strongly over higher multipoles at the outer boundary than it does in the geomagnetic field. Furthermore, the dipolar mode in the $E=10^{-6}$ simulation appears to be rather stable and does not show indications of reversals, unlike the geomagnetic field. In previous dynamo simulations, the onset of reversals has been associated with a growing influence of the inertial force at higher Rayleigh number \cite{sreenivasan2006,christensen2006}. We expect that pushing the Ekman number to even lower values would expand the range where a MAC-state exists towards more strongly supercritical values of the Rayleigh number \cite{christensen2010}, but this does not necessarily imply that inertia becomes significant. It remains an open question whether inertial effects are responsible for triggering reversals in the geodynamo (which would then not be in a pure MAC state), or if some other effects associated with a more strongly supercritical Rayleigh number play a role in reversals. Another challenge to tackle is the extreme value of the magnetic Prandtl number which is also fundamentally important for the geodynamo mechanism \cite{roberts2013}. In the Earth, $P_m$ is expected to be about $10^{-6}$, implying a large difference in the typical length scales of the velocity and the magnetic field (the latter varying on larger scales). To have a magnetic Reynolds number large enough to sustain a dynamo at low $P_m$, the convection must generate Reynolds number in excess of a million. In order to keep the system rotationally dominant {\em and} very turbulent one must inevitably decrease the Ekman number to much smaller values than what we could reach in this study. Therefore, a way forward in future is to strive for even lower Ekman numbers and lower magnetic Prandtl numbers to approach the conditions of the geodynamo.
\begin{acknowledgments}
We thank the two anonymous referees for very constructive comments. Funding from NASA (through the {\em Chandra} grant GO4-15011X) and DFG (through SFB 963/A17) is acknowledged. S.J.W. was supported by NASA contract NAS8-03060. Simulations were performed at GWDG and RZG.\end{acknowledgments}
\section{Introduction}
At present there are several methods for relativistic correlation calculations of atoms, such as multiconfiguration Dirac-Fock \cite{Des75,JGBF13,FGBJ16}, configuration interaction (CI) \cite{Gu08,FFFG02,JMCB16,Tup03,Tup05}, many-body perturbation theory (MBPT) \cite{DFSS87,BGJS87,Sap98}, CI+MBPT \cite{DFK96,KaBe19,KPST15}, coupled cluster \cite{Eliav2010,Saue-DIRAC20,BJLS89,SaJo08,OZSE20}, and others. Calculations are usually done in the no-virtual-pair approximation using the Dirac-Coulomb or Dirac-Coulomb-Breit approximations \cite{Johnson07}. QED corrections may be included using the radiative potential method developed by \citet{FlaGin05} (see also \cite{GiBe16a}) or the QEDMOD potential \cite{STY13,TKSSD16}.
The coupled cluster method is one of the most popular and effective methods for calculations of atoms with a small number of open-shell electrons (or holes). Calculations of the spectra of atoms and ions with many valence electrons (e.~g.\ transition metals, lanthanides, and actinides) are very difficult and usually not very accurate. The reason for that is a combination of strong correlations and a very large configuration space. To account for strong correlations one needs non-perturbative methods, such as CI. On the other hand, a large configuration space makes such calculations very expensive. As a compromise, one can try to combine CI with perturbation theory (PT). We first assume that all closed atomic shells are frozen. Then we treat only valence correlations and consider a combination of valence CI with valence perturbation theory (VPT). Later we will see that this approach can also be used to treat core-valence correlations.
Recently there have been several attempts \cite{DBHF17,GCKB18,DFK19,ImaKoz18} to develop an effective and fast CI+VPT method to speed up calculations for such systems, where straightforward CI calculations are impossible. Application of these methods to systems with a large number of valence electrons was demonstrated in Refs.\ \cite{CSPK20,LiDz20}. The general idea of all these calculation schemes is to perform CI in a smaller subspace $P$ and calculate corrections from a complementary subspace $Q$ using VPT. In Refs.\ \cite{DBHF17,GCKB18,DFK19} it is suggested to neglect the non-diagonal blocks of the CI matrix in the subspace $Q$, which is equivalent to using VPT. All these methods require summation over all determinants of the complementary subspace $Q$. Though calculating this sum is much easier than calculating and diagonalizing the whole CI matrix, it is still too expensive when the number of valence electrons approaches, or exceeds, ten.
In the paper \cite{DFK19}, the sum over determinants was partly replaced by a sum over configurations, which led to a significant increase in calculation speed. Here we want to make another step in this direction. To this end, we will partly replace VPT with many-body perturbation theory (MBPT). The method we propose here is similar to the old CI+MBPT method \cite{DFK96} but uses a different splitting of the problem into the CI and MBPT parts. In particular, we suggest accounting for double excitations (D) from the subspace $P$ by means of MBPT and treating single excitations (S) within VPT, or, if possible, including them directly in CI. We think that this variant is not only more efficient for treating valence correlations, but may also be used for the core-valence correlations.
\section{Formalism}
\subsection{Valence correlations}
Consider a many-electron atom or ion with $N$ valence electrons, where $N\gg 1$. Let us first assume that the other electrons always occupy closed core shells, which is known as the frozen-core approximation. Our aim is to solve the $N$-electron Schr\"odinger equation and find the spectrum of this system.
We split the $N$-electron configuration space into two orthogonal subspaces $P$ and $Q$. The subspace $P$, which we call valence, includes the most important shells. It may not be obvious from the start which orbitals are `important'. We definitely must include in the subspace $P$ all orbitals with occupation numbers of the order of unity in the physical states we are interested in. The complementary subspace $Q$ includes single (S), double (D), and higher excitations from the valence shells to the virtual ones; thus $Q=Q_S+Q_D+\dots$. We start by solving the matrix equation in the subspace $P$,
\begin{align}\label{eq_CI}
\hat{P}H\hat{P}\Psi_a &= E_a\hat{P}\Psi_a\,,
\end{align}
where $H$ is the Hamiltonian for the valence electrons and $\hat{P}$ is the projector on the subspace $P$. We can find a correction from the complementary subspace $Q$ using second-order
perturbation theory:
\begin{align}\label{eq_VPT}
\delta E_a &= \sum_{n \in Q} \frac{\langle \Psi_a |\hat{P}H\hat{Q}|n\rangle\langle n|\hat{Q}H\hat{P}| \Psi_a \rangle}{E_a-E_n},
\end{align}
where $|n\rangle$ are $N$-electron determinants in the complementary subspace $Q$ and $E_n= \langle n|\hat{Q}H\hat{Q}| n\rangle$.
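As a minimal illustration of \Eref{eq_VPT} (not part of our production codes), the following Python sketch evaluates the second-order shift from a vector of couplings and unperturbed energies and compares it with the exact lowest eigenvalue of a small toy matrix; all numbers in the example are arbitrary.
\begin{verbatim}
import numpy as np

def second_order_shift(e_a, couplings, e_n):
    # delta E_a = sum_n |<n|H|Psi_a>|^2 / (E_a - E_n), with n running over Q
    return np.sum(np.abs(couplings) ** 2 / (e_a - e_n))

# toy check: a 1-dimensional "P space" coupled to a 2-dimensional "Q space"
H = np.array([[0.0, 0.1, 0.2],
              [0.1, 2.0, 0.0],
              [0.2, 0.0, 3.0]])
e_exact = np.linalg.eigvalsh(H)[0]
e_pt = H[0, 0] + second_order_shift(H[0, 0], H[0, 1:], np.diag(H)[1:])
print(e_exact, e_pt)   # the perturbative estimate is close to the exact lowest eigenvalue
\end{verbatim}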
The wavefunction $\Psi_a$ is a linear combination of the determinants:
\begin{align}\label{eq_Psi}
\Psi_a &= \sum_{m \in P} C^a_m |m\rangle
= \sum_{p,m_p} C^a_{p,m_p} |m_p\rangle
\,.
\end{align}
Here and below the indices $p$ and $q$ run over configurations in the subspaces $P$ and $Q$ respectively, and the indices $m_p$ and $n_q$ enumerate determinants within one configuration. Now \Eref{eq_VPT} takes the form:
\begin{multline}\label{eq_VPTa}
\delta E_a =
\sum_{p,m_p}
\sum_{p',m_{p'}}
C^a_{p,m_p} C^a_{p',m_{p'}}
\\ \times
\sum_{q,n_q}
\frac{\langle m_p |H|n_q\rangle\langle n_q|H| m_{p'} \rangle}{E_a-E_{n_q}},
\end{multline}
where the sum over the subspace $Q$ is also split in two.
For an atom with $N \sim 10$, the dimension of the space $Q$ is very large, which makes the evaluation of expression \eqref{eq_VPTa} very lengthy. Therefore, our aim is to replace the double sum over $q$ and $n_q$ by a single sum over $q$. To this end we make the following approximation: we replace the energy $E_{n_q}$ in the denominator with the configuration average:
\begin{align}\label{eq_conf-av}
\Bar{E}_q &= \frac{1}{N_q}\sum_{n_q=1}^{N_q} E_{n_q}\,,
\end{align}
where $N_q$ is the number of determinants in configuration $q$.
Using this approximation we rewrite \eqref{eq_VPTa} in the form:
\begin{multline}\label{eq_VPTb}
\delta E_a =
\sum_{p,m_p} \sum_{p',m_{p'}}
C^a_{p,m_p} C^a_{p',m_{p'}}
\\
\times
\sum_{q} \frac{\langle m_p |H
\left({\sum_{n_q} |n_q\rangle\langle n_q|}\right)
H| m_{p'} \rangle}{E_a-\Bar{E}_q}
\,.
\end{multline}
Below we will show that in some very important cases one can get rid of the internal sum over $n_q$.
The Hamiltonian $H$ includes one-particle and two-particle parts. The former consists of the kinetic term and the core potential, while the latter corresponds to the Coulomb (or Coulomb-Breit) interaction between valence electrons. Thus, only configurations which differ by no more than two electrons from configurations $p$ and $p'$ remain in the sum over $q$. This means that within this approximation the subspace $Q$ is actually truncated to $Q_S+Q_D$. All non-zero contributions correspond to the diagrams shown in Fig.\ \ref{fig:full_set}.
\begin{figure}[htb]
\includegraphics[width=0.95\columnwidth]{Set_of_diagrams1.jpg}
\caption{Set of connected second-order diagrams. Black dots correspond to the core potential and wavy lines to the Coulomb interaction. Double and single lines denote electrons in valence and virtual orbitals respectively. Non-symmetric diagrams $(b)$ and $(e)$ have mirror twins.
\label{fig:full_set}}
\end{figure}
According to our definition of the spaces $P$ and $Q$, the latter must include at least one electron in a virtual shell. Diagrams $(a)$, $(b)$, and $(e)$ include only one intermediate line, so they describe single excitations from the subspace $P$. Diagrams $(c)$ and $(d)$ include two intermediate lines, but only diagram $(d)$ describes double (D) excitations, as both of its intermediate lines correspond to virtual shells.
Figure \ref{fig:full_set} shows that all many-electron matrix elements in \Eref{eq_VPTb} are reduced to the effective one-electron, two-electron, and three-electron contributions. Effective one-electron contributions are described by diagram $(a)$; diagrams $(b)$, $(c)$, and $(d)$ correspond to the two-electron contributions; finally, diagram $(e)$ describes effective three-electron contributions, see Figs.\ \ref{fig:many-el-me} and \ref{fig:3e-term}.
\begin{figure}[tb]
\includegraphics[width=0.95\columnwidth]{Eff_2.jpg}
\caption{Many-electron second-order expression in \Eref{eq_VPTd} (left) is reduced to the two-particle expression (middle), which, in turn, is reduced to the effective two-particle interaction (right).
\label{fig:many-el-me}}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=0.95\columnwidth]{Eff_3.jpg}
\caption{The case when many-electron second-order expression in \Eref{eq_VPTb} (left) is reduced to the three-particle expression (middle), which, in turn, is reduced to the effective three-particle interaction (right). The initial configuration on the left differs from the intermediate configuration by the upper two electrons. The final configuration differs from the intermediate one by the second and third electron from the top.
\label{fig:3e-term}}
\end{figure}
For combinatorial reasons the number of configurations with two excited electrons is much bigger than the number of those with only one such electron. Therefore the vast majority of terms in \Eref{eq_VPTb} correspond to two-electron excitations from configurations $p$ and $p'$. For these terms only the two-electron interaction $V$ in the Hamiltonian $H$ can contribute, so we can neglect the one-electron part and make the substitution $H\to V$. As we saw above, all such terms are described by the single diagram $(d)$ from Fig.\ \ref{fig:full_set}.
Let us consider the sum over doubly excited configurations. It can be written as:
\begin{multline}\label{eq_VPTd}
\delta E_a^D =
\sum_{p,m_p} \sum_{p',m_{p'}}
C^a_{p,m_p} C^a_{p',m_{p'}}
\\
\times
\sum_{q \in Q_D}\frac{\langle m_p |V
\left({\sum_{n_q} |n_q\rangle\langle n_q|}\right)
V| m_{p'} \rangle}{E_a-\Bar{E}_q}
\,,
\end{multline}
Non-zero contributions come from determinants $|n_q\rangle$ which differ from both determinants $|m_p\rangle$ and $|m_{p'}\rangle$ by two electrons. It is clear that it must be the same two electrons, see Fig.\ \ref{fig:many-el-me}. In this case the second-order expression from \eqref{eq_VPTd} reduces to the effective two-particle interaction \cite{DFK96}:
\begin{align}\label{eq_Veff}
\delta E_a^D &=
\sum_{p,m_p} \sum_{p',m_{p'}}
C^a_{p,m_p} C^a_{p',m_{p'}}
\langle m_p |V_\mathrm{eff}| m_{p'} \rangle
\,.
\end{align}
This effective interaction can be expressed in terms of effective radial integrals, which are similar to the Coulomb radial integrals. The latter appear when we expand the Coulomb interaction in spherical multipoles,
\begin{align}
V=\sum_{k,\varkappa} V^k_\varkappa.
\end{align}
The matrix element of each multipole component $V^k_\varkappa$ has the form:
\begin{widetext}
\begin{multline}\label{eq_coulomb_me}
\langle c,d| V_\varkappa^k |a,b \rangle =
(-1)^{m_c+m_b+1} \delta_p
\sqrt{(2j_a+1)(2j_b+1)(2j_c+1)(2j_d+1)}
\\
\left(\!\begin{array}{ccc} j_c & j_a & k\\
-m_c & m_a & \varkappa \\ \end{array} \!\right)
\left(\!\begin{array}{ccc} j_b & j_d & k\\
-m_b & m_d & \varkappa \\ \end{array} \!\right)
\left(\!\begin{array}{ccc} j_c & j_a & k\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)
\left(\!\begin{array}{ccc} j_b & j_d & k\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)
R^k_{a,b,c,d}\,,
\end{multline}
where round brackets denote 3j-symbols, $R^k_{a,b,c,d}$ denotes the Coulomb radial integral, and $\delta_p$ ensures the parity selection rule: $\delta_p = \xi(l_a+l_c+k) \xi(l_b+l_d+k)$ and $\xi(n) = 1,0$ for $n=\mbox{even, odd}$. A similar multipole expansion holds for the effective interaction $V_\mathrm{eff}$, with the effective radial integral given by \cite{DFK96}:
\begin{multline}\label{eq_box_diag}
R^{k,\mathrm{eff}}_{a,b,c,d}
=\sum_{k_1,k_2}\sum_{m,n} (-1)^\chi
(2j_m+1)(2j_n+1)(2k+1)
\left\{\!\begin{array}{ccc} j_c & j_a & k\\
k_1 & k_2 & j_m \\ \end{array}\! \right\}
\left\{\!\begin{array}{ccc} j_b & j_d & k\\
k_2 & k_1 & j_n \\ \end{array} \!\right\}
\left(\!\begin{array}{ccc} j_m & j_a & k_1\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)
\left(\!\begin{array}{ccc} j_b & j_n & k_1\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)
\\
\left(\!\begin{array}{ccc} j_c & j_m & k_2\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)
\left(\!\begin{array}{ccc} j_n & j_d & k_2\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)
\left(\!\begin{array}{ccc} j_c & j_a & k\\
\frac12 &-\frac12 & 0 \\ \end{array} \!\right)^{-1}
\!\!\left(\begin{array}{ccc} j_b & j_d & k\\
\frac12 &-\frac12 & 0 \\ \end{array} \right)^{-1}
\frac{R^{k_1}_{a,b,m,n}R^{k_2}_{c,d,m,n}}{\Delta_{E}}\,,
\end{multline}
where curly brackets denote 6j-coefficients, the phase is $\chi={j_a+j_b+j_c+j_d+j_m+j_n+k_1+k_2+k+1}$, and $\Delta_E$ is the energy denominator, which we will discuss later. For the effective interaction there is no link between parity and multipolarity $k$, so for $V_\mathrm{eff}$ we do not have the factor $\delta_p$ as in \Eref{eq_coulomb_me}. The sum in \eqref{eq_box_diag} runs over multipolarities $k_1$ and $k_2$, which satisfy the triangle rule $|k_1-k_2|\le k \le k_1+k_2$.
\end{widetext}
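The angular part of \eqref{eq_box_diag} is straightforward to evaluate with standard Wigner-symbol routines. As an illustration (not our production code), the Python sketch below, based on \texttt{sympy.physics.wigner}, returns the coefficient multiplying $R^{k_1}_{a,b,m,n}R^{k_2}_{c,d,m,n}/\Delta_{E}$ for one set of angular momenta; the values of $j$ and $k$ used in the example are arbitrary.
\begin{verbatim}
from sympy import Rational
from sympy.physics.wigner import wigner_3j, wigner_6j

half = Rational(1, 2)

def reduced_3j(j1, j2, k):
    # the 3j-symbol with projections (1/2, -1/2, 0) entering the formulas above
    return wigner_3j(j1, j2, k, half, -half, 0)

def angular_factor(ja, jb, jc, jd, jm, jn, k, k1, k2):
    # coefficient of R^{k1}_{a,b,m,n} R^{k2}_{c,d,m,n} / Delta_E in the expression
    # for the effective radial integral above (one term of the sums over m, n, k1, k2)
    phase = (-1) ** int(ja + jb + jc + jd + jm + jn + k1 + k2 + k + 1)
    num = ((2 * jm + 1) * (2 * jn + 1) * (2 * k + 1)
           * wigner_6j(jc, ja, k, k1, k2, jm)
           * wigner_6j(jb, jd, k, k2, k1, jn)
           * reduced_3j(jm, ja, k1) * reduced_3j(jb, jn, k1)
           * reduced_3j(jc, jm, k2) * reduced_3j(jn, jd, k2))
    den = reduced_3j(jc, ja, k) * reduced_3j(jb, jd, k)
    return phase * num / den

# arbitrary example: j_a..j_d = 1/2, j_m = j_n = 3/2, k = 0, k_1 = k_2 = 1
print(angular_factor(half, half, half, half, Rational(3, 2), Rational(3, 2), 0, 1, 1))
\end{verbatim}
Summing this coefficient over $m$, $n$, $k_1$, and $k_2$ together with the radial integrals and the energy denominator of Sec.\ \ref{sec_denom} reproduces \eqref{eq_box_diag}.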
All single excitations are described by the remaining diagrams from Fig.\ \ref{fig:full_set}. Diagram $(a)$ has the form of an effective one-electron radial integral, while diagrams $(b)$ and $(c)$ are reduced to two-electron effective radial integrals. In principle, these effective radial integrals can be calculated and stored. However, diagram $(e)$ corresponds to an effective three-particle interaction. It is difficult to include such interactions in the CI matrix for several reasons:
\begin{itemize}
\item When $N> 3$ the number of such effective three-particle integrals is huge.
\item It is difficult to store and retrieve them.
\item The number of non-zero matrix elements drastically increases. The matrix becomes less sparse and its diagonalization is much more difficult and time-consuming.
\end{itemize}
Because of all that, it is inefficient to use the MBPT approach for three-particle diagrams, and it is much easier to treat them within the determinant-based PT. However, it is then difficult to separate them from the other contributions, which correspond to single excitations. Thus, it is better not to use MBPT for single excitations at all. We suggest using instead any form of the determinant-based VPT described in Refs.\ \cite{DBHF17,GCKB18,ImaKoz18,DFK19}. This means that we do VPT in the subspace $Q_S$. Note that the dimension of this subspace is incomparably smaller than the dimension of the $Q_D$ subspace. In some cases it may be so small that we can include $Q_S$ in the subspace $P$, where we do CI.
\subsection{Core-valence correlations}
\begin{figure}[htb]
\includegraphics[width=0.95\columnwidth]{sigma.jpg}
\caption{Set of one-electron second-order diagrams accounting for the excitations from the core. Diagrams $(e)$ and $(f)$ have mirror twins. Diagrams $(c)$ and $(d)$ describe double excitations from the core.
\label{fig:sigma}}
\end{figure}
It is easy to use the scheme described above for the core-valence correlations as well. Now the $P$ subspace corresponds to the frozen-core approximation, and the subspaces $Q_S$ and $Q_D$ include single and double excitations from the core respectively. This means that these subspaces include many-electron states with one and two holes in the core. As before, the second-order MBPT corrections are described by one-electron, two-electron, and three-electron diagrams. All one-electron diagrams are given in Fig.\ \ref{fig:sigma}. Excitations from the core correspond to the hole lines with arrows pointing to the left. It is easy to see that only diagrams $(c)$ and $(d)$ describe double excitations. Therefore, we need to calculate them and store them as one-electron effective radial integrals, see Fig.\ \ref{fig:sigma_eff} (note that there are no one-electron contributions for the valence excitations). Expressions for these diagrams were given in Ref.\ \cite{DFK96}.
There is only one two-electron diagram which corresponds to double excitations from the core. This diagram must be calculated and added to the similar diagram for valence excitations, which was discussed in the previous section, see Fig.\ \ref{fig:R_eff}. Finally, in analogy with the valence correlations, the three-particle diagrams correspond to single excitations from the core.
\begin{figure}[htb]
\includegraphics[width=0.95\columnwidth]{sigma_eff.jpg}
\caption{Diagrams which correspond to double excitations from closed shells. These diagrams are described by the effective one-electron radial integrals, designated by a black circle. \label{fig:sigma_eff}}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=0.95\columnwidth]{R_eff.jpg}
\caption{Diagrams contributing to the effective two-electron radial integrals. The first diagram accounts for double excitations to the virtual shells and the second for double excitations from closed shells. \label{fig:R_eff}}
\end{figure}
We conclude that in order to account for both valence and core-valence correlations we need to calculate one-electron and two-electron effective radial integrals, which correspond to the diagrams from Figs.\ \ref{fig:sigma_eff} and \ref{fig:R_eff}. At the same time, we need to include all single excitations from the core shells and all single excitations to the virtual shells either in the subspace $P$ or in the subspace $Q_S$. After that, we perform a CI calculation with the effective radial integrals, possibly followed by a VPT calculation in the $Q_S$ subspace.
\subsection{Sketch of the possible calculation scheme}
Let us describe the most general computational scheme.
\begin{itemize}
\item Basis set orbitals are divided into four groups: inner core, outer core, valence, and virtual orbitals. The inner core is kept frozen at all stages of the calculation.
\item Effective radial integrals are calculated for the valence orbitals, which account for the double excitations from the outer core and the double excitations from the valence orbitals to the virtual ones.
\item A full CI calculation is done for the valence electrons. The effective radial integrals are added to the conventional radial integrals when the Hamiltonian matrix is formed.
\item Determinant-based PT is used in the complementary subspace $Q_S$, which includes single excitations from the outer core and single excitations to the virtual states.
\end{itemize}
Depending on the number of valence electrons and the size of the core, this scheme can be simplified. If there are only two valence electrons, one can include all virtual basis states in the valence space. Single excitations from the core can also be added to the valence space. Double excitations from the core are accounted for through the effective radial integrals, while single excitations are included explicitly in the CI matrix. Formally this means that we replace the $P$, $Q$ decomposition with the $P'$, $Q_D$ decomposition:
\begin{align}
\label{PQ}
&P+Q=P+Q_S+Q_D=P'+Q_D\,,
\\
\label{PQD}
&P'\equiv P+Q_S\,.
\end{align}
In the new valence space $P'$, we solve the matrix equation with the energy-dependent effective Hamiltonian \cite{DFK96}:
\begin{align}
\label{Heff}
&H_\mathrm{eff}(E)=H+V_\mathrm{eff}(E)\,,
\\
\label{HeffPsi}
&\hat{P'}H_\mathrm{eff}(E_a)\hat{P'}\Psi_a= E_a \hat{P'}\Psi_a\,,
\end{align}
where $\hat{P'}$ is the projector on the subspace $P'$. When the size of the matrix $H_\mathrm{eff}$ becomes too large, one can neglect the non-diagonal part of the matrix in the $Q_S$ space, as in the emu CI method \cite{GCKB18}.
\section{Energy denominators}\label{sec_denom}
Let us discuss the energy denominator $\Delta_E$ in \Eref{eq_box_diag}. For simplicity we will consider the Rayleigh-Schr\"odinger perturbation theory, where the denominator in \Eref{eq_VPTb} would be $\Bar{E}_p-\Bar{E}_q$. Here $\Bar{E}_p$ and $\Bar{E}_q$ are average energies \eqref{eq_conf-av} for configurations $p$ and $q$. Note that in order to return to the Brillouin-Wigner perturbation theory we will need to add $E_a-\Bar{E}_p$, which can be approximately done using the method suggested in \cite{DFK96}.
In the conventional MBPT the denominator $\Bar{E}_p-\Bar{E}_q$ is reduced to the difference of the Hartree-Fock energies of the orbitals $\varepsilon_i$ which are different in these two configurations. That would give the following energy denominator in \Eref{eq_box_diag}:
\begin{align} \label{eq_denom}
\Delta_E \equiv\Delta_E(ab\to mn) = \varepsilon_a +\varepsilon_b -\varepsilon_m -\varepsilon_n\,,
\end{align}
where we assume that configuration $q$ differs from $p$ by the excitation of two electrons from shells $a$ and $b$ to virtual shells $m$ and $n$ respectively. This expression neglects the interaction of the electrons with each other and depends on the choice of the Hartree-Fock potential. In order to improve this approximation, we will consider the general expression for the average energy of a relativistic electronic configuration.
\subsection{Average energy of the relativistic configuration}
The average energy of the relativistic configuration $\Bar{E}_p$ is given by \cite{Mann73,Grant70}:
\begin{multline}\label{eq_Eav}
\Bar{E}_p =
\sum_{a\in p} q_a \, I_a + \tfrac{1}{2} \,
\sum_{a\in p} q_a\,(q_a-1) \, U_{aa}
\\
+ \sum_{a < b;\, a,b\in p} q_a \, q_b \, U_{ab},
\end{multline}
where $q_a$ and $q_b$ are the occupation numbers of the shells $a$ and $b$ in configuration $p$, and the matrix elements of the potential $U$ are given by:
\begin{align}\label{eq_Uab}
U_{ab} =
\left \{
\begin{array}{ll}
\displaystyle F^0(a,a)+\sum_{k>0}
2 \, f^k_{a,a} \, F^k(a,a) \,,\quad & a=b\,,
\\[5mm] \displaystyle
F^0(a,b)+\sum_k g^k_{a,b} \, G^k(a,b) \,,\quad & a \ne b\,.
\end{array} \right .
\end{align}
In these equations $I_a$ is the one-electron radial integral, while $F^k(a,b)$ and $G^k(a,b)$ are standard Coulomb and exchange two-electron radial integrals \cite{Grant70}. The angular factors $f^k_{a,a}$ and $g^k_{a,b}$ are also defined in agreement with Ref.\ \cite{Grant70}:
\begin{align}\label{eq_f&g}
\begin{array}{lll}
f^k_{a,a} &= \displaystyle
- \, \frac{1}{2} \,\frac{2j_a+1}{2j_a} \,
\left (
\begin{array}{llll}
j_a & j_a & k \\
\frac{1}{2} & -\frac{1}{2} & 0
\end{array} \right )^2\,,%
\\[5mm] \displaystyle
g^k_{a,b} &= \displaystyle
-
\left (
\begin{array}{llll}
j_a & j_b & k \\
\frac{1}{2} & -\frac{1}{2} & 0
\end{array} \right )^2\,,%
\end{array}
\end{align}
where $j_a$ and $j_b$ are the one-electron total angular momenta.
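For reference, the angular factors $f^k_{a,a}$ and $g^k_{a,b}$ and the configuration average \eqref{eq_Eav} translate into a few lines of Python. The sketch below is only an illustration: the shell labels and dictionary conventions are hypothetical, and the radial integrals $I_a$, $F^k$, $G^k$, and the matrix $U_{ab}$ built from them via \eqref{eq_Uab} must be supplied by the user.
\begin{verbatim}
from sympy import Rational
from sympy.physics.wigner import wigner_3j

half = Rational(1, 2)

def f_k(ja, k):
    # diagonal angular factor f^k_{a,a} defined above
    return -half * (2 * ja + 1) / (2 * ja) * wigner_3j(ja, ja, k, half, -half, 0) ** 2

def g_k(ja, jb, k):
    # exchange angular factor g^k_{a,b} defined above
    return -wigner_3j(ja, jb, k, half, -half, 0) ** 2

def average_energy(q, I, U):
    # configuration average: q = {shell: occupation}, I = {shell: I_a},
    # U = {sorted pair (a, b): U_ab}, with U_ab built from F^k, G^k and f_k, g_k
    key = lambda x, y: tuple(sorted((x, y)))
    shells = list(q)
    E = sum(q[a] * I[a] for a in shells)
    E += sum(0.5 * q[a] * (q[a] - 1) * U[key(a, a)] for a in shells)
    E += sum(q[a] * q[b] * U[key(a, b)]
             for i, a in enumerate(shells) for b in shells[i + 1:])
    return E
\end{verbatim}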
Let us use \Eref{eq_Eav} to calculate the energy difference between configurations $p$ and $q$ which differ by the excitation of two electrons from shells $a,b$ to shells $m,n$. In other words, we need to calculate how the energy changes when the occupation numbers change in the following way: $\delta q_a=\delta q_b=-1$ and $\delta q_m=\delta q_n=1$. To this end, we can use the Taylor expansion of \Eref{eq_Eav} near the initial configuration $p$:
\begin{align}\label{eq_Taylor}
\Bar{E}_q=\Bar{E}_p
+ \sum_a \frac{\partial \Bar{E}_p}{\partial q_a} \, \delta q_a
+ \frac{1}{2} \sum_{a,b} \frac{\partial^2 \Bar{E}_p}{\partial q_a \partial q_b} \,
\delta q_a \, \delta q_b\,,
\end{align}
where derivatives are given by:
\begin{align}
\frac{\partial \Bar{E}_p}{\partial q_a}
&= I_a+(q_a-\tfrac12) U_{aa}
+\sum_{b\neq a}q_b U_{ab}
\nonumber
\\
&= \label{eq_der1}
I_a-\tfrac12 U_{aa}
+\sum_{b}q_b U_{ab}\,,
\\
\frac{\partial^2 \Bar{E}_p}{\partial q_a \partial q_b}
&= \label{eq_der2}
U_{ab}\,.
\end{align}
Note that all higher derivatives vanish, so expression \eqref{eq_Taylor} is exact. With its help we get:
\begin{multline}\label{eq_ab-mn}
\Delta_E(ab\to mn)
= I_a+I_b-I_m-I_n
\\
+\sum_{c\in p} q_c\, (U_{ac}+U_{bc}-U_{mc}-U_{nc})
\\
-U_{aa}-U_{bb}-U_{ab}-U_{mn}
\\
+U_{am}+U_{bn}+U_{an}+U_{bm}\,.
\end{multline}
This expression can also be used for the special cases $a=b,\,\delta q_a=-2$ and/or $m=n,\,\delta q_m=2$.
Equation \eqref{eq_ab-mn} includes a sum over the occupied shells of the initial configuration $p$. Let us introduce one-electron energies with respect to this configuration as:
\begin{align}\label{eq_eps}
\varepsilon_a &= I_a +\sum_{c\in p} q_c\, U_{ac}- (1-\delta_{q_a,0})\, U_{aa}\,.
\end{align}
Then \Eref{eq_ab-mn} simplifies to
\begin{multline}\label{eq_ab-mn2}
\Delta_E(ab\to mn)
= \varepsilon_a+\varepsilon_b-\varepsilon_m-\varepsilon_n
\\
-U_{ab}-U_{mn}+U_{am}+U_{bn}+U_{an}+U_{bm}\,.
\end{multline}
The first line here reproduces the conventional MBPT denominator \eqref{eq_denom}, while the second line gives corrections caused by the interactions of the electrons with each other. It is important that in this form we do not have explicit sums over all electrons, which significantly simplifies calculations.
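With the same hypothetical conventions as in the previous sketch, Eqs.~\eqref{eq_eps} and \eqref{eq_ab-mn2} become (again an illustration rather than our implementation; the dictionary $U$ must contain all required pairs, including those involving the virtual shells $m$ and $n$):
\begin{verbatim}
def eps(a, q, I, U):
    # one-electron energy with respect to configuration q, as defined above
    key = lambda x, y: tuple(sorted((x, y)))
    e = I[a] + sum(q[c] * U[key(a, c)] for c in q)
    if q.get(a, 0) != 0:          # the (1 - delta_{q_a,0}) U_aa term
        e -= U[key(a, a)]
    return e

def delta_E(a, b, m, n, q, I, U):
    # corrected denominator Delta_E(ab -> mn) for the excitation a,b -> m,n
    key = lambda x, y: tuple(sorted((x, y)))
    d = eps(a, q, I, U) + eps(b, q, I, U) - eps(m, q, I, U) - eps(n, q, I, U)
    d += (- U[key(a, b)] - U[key(m, n)]
          + U[key(a, m)] + U[key(b, n)] + U[key(a, n)] + U[key(b, m)])
    return d
\end{verbatim}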
In relativistic calculations the non-relativistic configurations are typically not used. However, sometimes one may need to find the average energy of a non-relativistic configuration. In Appendix \ref{Appendix_non-rel} we derive the necessary expressions for this case.
\section{Numerical tests}
We made four test calculations for very different systems. In the first two calculations, for He I and B I, there was no core and we tested our method for the valence correlations. Then we applied our method to the highly charged ion Fe XVII, where there is a very strong central field, correlation corrections are rather small, and perturbation theory must be quite accurate. In this system we had the $1s^2$ core, so we calculated core-valence correlation corrections as well as valence ones. Finally, we made calculations for Sc I, where the valence $3d$ electrons have a large overlap with the core shell $3p^6$ and core-valence correlation corrections are as important as valence ones.
\begin{table*}[htb]
\caption{Ground state binding energy of He I (in a.u.). CI calculations are made for three spaces: $P$, $P+Q_S$, and $P+Q$. $\Delta_{P+Q}$ is the difference from the CI result in the $P+Q$ space. Three variants of PT calculations are made based on the CI calculation in $P+Q_S$ space: (a) determinant-based PT; (b) effective Hamiltonian with Hartree-Fock denominators \eqref{eq_denom}; (c) effective Hamiltonian with corrected denominators \eqref{eq_ab-mn2}. Experimental binding energy is given for comparison in the last column \cite{NIST}.}
\label{tab:HeI}
\begin{tabular}{lrrrrrrr}
\hline\hline\\[-8pt]
&\multicolumn{1}{c}{$P$}
&\multicolumn{1}{c}{$P+Q_S$}
&\multicolumn{1}{c}{$P+Q$}
&\multicolumn{3}{c}{PT}
&\multicolumn{1}{c}{NIST}\\
\cline{5-7}
&&&&\multicolumn{1}{c}{(a)}&\multicolumn{1}{c}{(b)}&\multicolumn{1}{c}{(c)}
&\multicolumn{1}{c}{Ref.\ \cite{NIST}}\\
$E(1s^2)$ &$ 2.8626 $&$ 2.8700 $&$ 2.9010 $&$ 2.9021 $&$ 2.9064 $&$ 2.9031 $&$ 2.9034 $ \\
$\Delta_{P+Q}$ &$ 0.0384 $&$ 0.0310 $&$ 0.0000 $&$-0.0011 $&$-0.0054 $&$-0.0021 $&$-0.0024 $ \\
\hline\hline
\end{tabular}
\end{table*}
\begin{table*}[htb]
\caption{Ground state binding energy of B I (in a.u.). CI calculations are made for the valence spaces $P$ and $\tilde{P}$, which include the lowest 3 and 4 shells respectively.
The experimental binding energy is given for comparison in the last column \cite{NIST}.}
\label{tab:B_I}
\begin{tabular}{lrrrrrrr}
\hline\hline\\[-8pt]
&\multicolumn{1}{c}{$P$}
&\multicolumn{2}{c}{$P+Q_S$}
&\multicolumn{1}{c}{$\tilde{P}$}
&\multicolumn{2}{c}{$\tilde{P}+\tilde{Q}_S$}
&\multicolumn{1}{c}{NIST}\\
\cline{3-4} \cline{6-7}
&&\multicolumn{1}{c}{$H$}&\multicolumn{1}{c}{$H_\mathrm{eff}$}
&&\multicolumn{1}{c}{$H$}&\multicolumn{1}{c}{$H_\mathrm{eff}$}
&\multicolumn{1}{c}{Ref.\ \cite{NIST}}
\\
$E({}^2P_{1/2})$ &$ 24.5683 $&$ 24.5976 $&$ 24.6595 $&$ 24.5721 $&$ 24.5999 $&$ 24.6581 $&$ 24.6581 $ \\
$\Delta_{\mathrm{NIST}}$ &$ 0.0898 $&$ 0.0605 $&$ -0.0014 $&$ 0.0860 $&$ 0.0582 $&$ 0.0000 $&$ 0.0000 $ \\
\hline\hline
\end{tabular}
\end{table*}
\begin{table*}[htb]
\caption{Low-lying energy levels of Fe XVII in respect to the ground state (in cm$^{-1}$). The subspace $Q_S$ includes single excitations to virtual shells $n=5-17$. The subspace $Q_S^\prime$ in addition includes single excitations from the $1s$ shell. Effective Hamiltonians account for the respective double excitations. For each calculation we also give relative accuracy in percent.}
\label{tab:Fe_XVII}
\begin{tabular}{lcrrrrrrrrrrr}
\hline\hline\\[-8pt]
\multicolumn{1}{c}{Config.}&\multicolumn{1}{c}{Level}
&\multicolumn{1}{c}{NIST}
&\multicolumn{4}{c}{CI($P$)}
&\multicolumn{4}{c}{CI$_\mathrm{emu}(P+Q_S)$}
&\multicolumn{2}{c}{CI$_\mathrm{emu}(P+Q^\prime_S)$}
\\
&&\multicolumn{1}{c}{Ref.\ \cite{NIST}}
&\multicolumn{2}{c}{$H$} &\multicolumn{2}{c}{$H_\mathrm{eff}$}
&\multicolumn{2}{c}{$H$} &\multicolumn{2}{c}{$H_\mathrm{eff}$} &\multicolumn{2}{c}{$H^\prime_\mathrm{eff}$}
\\
\hline
\\[-8pt]
\\[-8pt]
$2p^6 $ & ${}^1S_0 $&$ 0$&$ 0$&$ $&$ 0$&$ $&$ 0$&$ $&$ 0$&$ $&$ 0$&$ $\\
$2p^5 3p$ & ${}^3S_1 $&$6093450$&$6076370$&$-0.28\%$&$6083540$&$-0.16\%$&$6088405$&$-0.08\%$&$6095600$&$ 0.04\%$&$6095086$&$ 0.03\%$\\
$2p^5 3p$ & ${}^3D_2 $&$6121690$&$6105049$&$-0.27\%$&$6111933$&$-0.16\%$&$6117307$&$-0.07\%$&$6124215$&$ 0.04\%$&$6123709$&$ 0.03\%$\\
$2p^5 3p$ & ${}^3D_3 $&$6134730$&$6118010$&$-0.27\%$&$6125056$&$-0.16\%$&$6130067$&$-0.08\%$&$6137137$&$ 0.04\%$&$6136602$&$ 0.03\%$\\
$2p^5 3p$ & ${}^1P_1 $&$6143850$&$6127278$&$-0.27\%$&$6134193$&$-0.16\%$&$6139345$&$-0.07\%$&$6146283$&$ 0.04\%$&$6145772$&$ 0.03\%$\\[2pt]
$2p^5 3s$ & $ 2^o $&$5849490$&$5830778$&$-0.32\%$&$5838679$&$-0.18\%$&$5842900$&$-0.11\%$&$5850823$&$ 0.02\%$&$5850330$&$ 0.01\%$\\
$2p^5 3s$ & $ 1^o $&$5864770$&$5846269$&$-0.32\%$&$5854109$&$-0.18\%$&$5858397$&$-0.11\%$&$5866260$&$ 0.03\%$&$5865678$&$ 0.02\%$\\
$2p^5 3s$ & $ 1^o $&$5960870$&$5942198$&$-0.31\%$&$5950103$&$-0.18\%$&$5954316$&$-0.11\%$&$5962244$&$ 0.02\%$&$5961601$&$ 0.01\%$\\
$2p^5 3d$ & ${}^3P_1^o$&$6471800$&$6455306$&$-0.25\%$&$6462010$&$-0.15\%$&$6463149$&$-0.13\%$&$6469882$&$-0.03\%$&$6468962$&$-0.04\%$\\
$2p^5 3d$ & ${}^3P_2^o$&$6486400$&$6470075$&$-0.25\%$&$6476738$&$-0.15\%$&$6477839$&$-0.13\%$&$6484531$&$-0.03\%$&$6483612$&$-0.04\%$\\
$2p^5 3d$ & ${}^3F_4^o$&$6486830$&$6471630$&$-0.23\%$&$6478532$&$-0.13\%$&$6478129$&$-0.13\%$&$6485057$&$-0.03\%$&$6484147$&$-0.04\%$\\
$2p^5 3d$ & ${}^3F_3^o$&$6493030$&$6477585$&$-0.24\%$&$6484338$&$-0.13\%$&$6484319$&$-0.13\%$&$6491101$&$-0.03\%$&$6490177$&$-0.04\%$\\
$2p^5 3d$ & ${}^1D_2^o$&$6506700$&$6491383$&$-0.24\%$&$6498026$&$-0.13\%$&$6498360$&$-0.13\%$&$6505032$&$-0.03\%$&$6504101$&$-0.04\%$\\
\hline\hline
\end{tabular}
\end{table*}
\begin{table*}[htb]
\caption{Low-lying energy levels of Sc I (in cm$^{-1}$). For each calculation we also give the differences with NIST \cite{NIST} and the average absolute difference $|\Delta|_\mathrm{av} = \frac{1}{k}\sum_{i=1}^k |\Delta_i|$. For the CI calculations in the $P+Q_S$ space we use the emu CI approach \cite{GCKB18} where we neglect non-diagonal matrix elements in the $Q_S$ subspace. On the diagonal we use averaging over relativistic configurations, see \Eref{eq_Eav}.}
\label{tab:Sc_I}
\begin{tabular}{lcrrrrrrr}
\hline\hline\\[-8pt]
\multicolumn{1}{c}{Config.}&\multicolumn{1}{c}{Level}
&\multicolumn{1}{c}{NIST}
&\multicolumn{2}{c}{CI($P$)}
&\multicolumn{4}{c}{CI$_\mathrm{emu}(P+Q_S)$}
\\
&&\multicolumn{1}{c}{Ref.\ \cite{NIST}}
&\multicolumn{2}{c}{$H$}&\multicolumn{2}{c}{$H$}
&\multicolumn{2}{c}{$H_\mathrm{eff}$}
\\
&&\multicolumn{1}{c}{$E$}&\multicolumn{1}{c}{$E$}&\multicolumn{1}{c}{$\Delta$}
&\multicolumn{1}{c}{$E$}&\multicolumn{1}{c}{$\Delta$}
&\multicolumn{1}{c}{$E$}&\multicolumn{1}{c}{$\Delta$}
\\[2pt]
$3d 4s^2$ & ${}^2D_{3/2}$ &$ 0$&$ 0$&$ 0$&$ 0$&$ 0$&$ 0$&$ 0$\\
$ $ & ${}^2D_{5/2}$ &$ 168$&$ 147$&$ -21$&$ 157$&$ -11$&$ 155$&$ -13$\\[2pt]
$3d^2 4s$ & ${}^4F_{3/2}$ &$ 11520$&$ 14945$&$ 3425$&$ 7361$&$ -4159$&$ 11786$&$ 266$\\
$ $ & ${}^4F_{5/2}$ &$ 11558$&$ 14968$&$ 3410$&$ 7422$&$ -4136$&$ 11847$&$ 290$\\
$ $ & ${}^4F_{7/2}$ &$ 11610$&$ 15001$&$ 3391$&$ 7489$&$ -4121$&$ 11914$&$ 304$\\
$ $ & ${}^4F_{9/2}$ &$ 11677$&$ 15047$&$ 3370$&$ 7541$&$ -4136$&$ 11963$&$ 285$\\[2pt]
$3d^2 4s$ & ${}^2F_{5/2}$ &$ 14926$&$ 17368$&$ 2442$&$ 11331$&$ -3595$&$ 15661$&$ 735$\\
$ $ & ${}^2F_{7/2}$ &$ 15042$&$ 17455$&$ 2413$&$ 11453$&$ -3589$&$ 15781$&$ 739$\\[2pt]
$3d^2 4s$ & ${}^2D_{5/2}$ &$ 17013$&$ 19972$&$ 2960$&$ 14574$&$ -2439$&$ 17475$&$ 462$\\
$ $ & ${}^2D_{3/2}$ &$ 17025$&$ 19980$&$ 2955$&$ 14601$&$ -2424$&$ 17500$&$ 475$\\[2pt]
$3d^2 4s$ & ${}^4P_{1/2}$ &$ 17226$&$ 20329$&$ 3103$&$ 14606$&$ -2620$&$ 17472$&$ 246$\\
$ $ & ${}^4P_{3/2}$ &$ 17255$&$ 20339$&$ 3084$&$ 14679$&$ -2576$&$ 17552$&$ 297$\\
$ $ & ${}^4P_{5/2}$ &$ 17307$&$\quad 20380$&$ 3073$&$\quad 14739$&$ -2568$&$\quad 17606$&$ 299$\\[2pt]
$3d4s4p $ & ${}^4F_{3/2}^o$ &$ 15673$&$ 13921$&$ -1751$&$ 16019$&$ 346$&$ 15872$&$ 200$\\
$ $ & ${}^4F_{5/2}^o$ &$ 15757$&$ 14002$&$ -1754$&$ 16099$&$ 342$&$ 15953$&$ 197$\\
$ $ & ${}^4F_{7/2}^o$ &$ 15882$&$ 14139$&$ -1743$&$ 16211$&$ 330$&$ 16064$&$ 183$\\
$ $ & ${}^4F_{9/2}^o$ &$ 16027$&$ 14290$&$ -1737$&$ 16340$&$ 314$&$ 16194$&$ 168$\\[2pt]
$3d4s4p $ & ${}^4D_{1/2}^o$ &$ 16010$&$ 14265$&$ -1745$&$ 16318$&$ 308$&$ 16448$&$ 438$\\
$ $ & ${}^4D_{3/2}^o$ &$ 16022$&$ 14311$&$ -1711$&$ 16351$&$ 329$&$ 16517$&$ 495$\\
$ $ & ${}^4D_{5/2}^o$ &$ 16141$&$ 14375$&$ -1766$&$ 16403$&$ 262$&$ 16559$&$ 418$\\
$ $ & ${}^4D_{7/2}^o$ &$ 16211$&$ 14458$&$ -1753$&$ 16503$&$ 292$&$ 16621$&$ 410$\\[2pt]
$3d4s4p $ & ${}^2D_{3/2}^o$ &$ 16023$&$ 14172$&$ -1851$&$ 16516$&$ 493$&$ 16442$&$ 419$\\
$ $ & ${}^2D_{5/2}^o$ &$ 16097$&$ 14189$&$ -1907$&$ 16525$&$ 428$&$ 16449$&$ 352$\\[2pt]
$3d4s4p $ & ${}^4P_{1/2}^o$ &$ 18504$&$ 16854$&$ -1650$&$ 18528$&$ 24$&$ 18529$&$ 25$\\
$ $ & ${}^4P_{3/2}^o$ &$ 18516$&$ 16930$&$ -1586$&$ 18538$&$ 23$&$ 18543$&$ 27$\\
$ $ & ${}^4P_{5/2}^o$ &$ 18571$&$ 17007$&$ -1565$&$ 18577$&$ 6$&$ 18572$&$ 1$\\[2pt]
\multicolumn{2}{c}{$|\Delta|_\mathrm{av}$}&
&\multicolumn{2}{c}{2247}&\multicolumn{2}{c}{1595}&\multicolumn{2}{c}{310}
\\
\hline\hline
\end{tabular}
\end{table*}
\subsection{Ground state of He I}
Helium is the simplest system where correlation effects can be tested. We calculate the ground state energy, for which the correlation corrections are the largest. We choose the space $P$ to include the shells $n=1\dots3$. The space $Q$ includes virtual $s,p,d$ shells with $4\le n\le20$. For this model problem, we can easily do CI in the whole space $P+Q$, thus producing the ``exact'' solution, and compare these results with the different variants of the perturbation theory discussed above. Results are listed in Table \ref{tab:HeI}.
One can see that the valence CI provides accuracy on the order of 1\%. The accuracy does not improve when we account for the single excitations to the virtual shells. However, when we include double excitations, the agreement with the ``exact'' answer is significantly better. The determinant-based PT gives the best result. The results obtained with the effective Hamiltonian are less accurate, but corrections to the denominators reduce the discrepancy. Even the uncorrected variant of the MBPT is an order of magnitude closer to the ``exact'' answer than the valence CI.
\subsection{Ground state of B I}
Boron is a five-electron system. The full CI calculation here is already very expensive. The determinant-based PT is also rather lengthy, so we made calculations only with the effective Hamiltonian and compared our results with the experiment \cite{NIST}. The effective radial integrals were calculated using the Hartree-Fock denominators. We tested two variants of the valence space: the first one, $P$, included the shells $n=1\dots 3$, and the second one, $\tilde{P}$, also included the shell $n=4$. The corresponding $Q$ and $\tilde{Q}$ spaces included $s,p,d,f,g$ shells up to $n=20$. Results of these calculations for the ground state ${}^2P_{1/2}$ are given in Table \ref{tab:B_I}. We see that the accuracy of the CI calculation does not change much when we include an extra shell in the subspace $P$. The accuracy of the CI calculations in the subspaces $P+Q_S$ and $\tilde{P}+\tilde{Q}_S$ is only slightly better than that of similar calculations in the subspaces $P$ and $\tilde{P}$. Only the inclusion of double excitations by means of the MBPT improves the agreement with the experiment by more than an order of magnitude.
\subsection{Spectrum of Fe XVII}
The ten-electron ion Fe XVII plays an important role in astrophysics and plasma physics, see Ref.\ \cite{Kuhn2020} and references therein. The spectrum of this ion was calculated within several different approaches \cite{Kuhn2020suppl} with a relative accuracy of about 0.03\%. Here we repeat these calculations using the new method. We use the basis set $[17spdfg]$. Virtual orbitals starting from $4s$ and up are formed from B-splines using the method from Ref.\ \cite{KozTup19}. The valence subspace $P$ includes the shells $2s,2p,3s,3p,3d,4s,4p,4d$, and $4f$, while the $1s$ shell is frozen. Single excitations to all higher orbitals are included in the subspace $Q_S$, and the subspace $Q_S^\prime$ in addition includes single excitations from the $1s$ shell. We make two CI calculations, in the spaces $P$ and $P+Q_S$ respectively. Then we repeat these calculations using the effective Hamiltonian, which accounts for the excitations to the subspace $Q_D$. Finally, we make a CI calculation in the space $P+Q_S^\prime$ for the effective Hamiltonian $H_\mathrm{eff}^\prime$, which accounts for the double excitations from the $1s$ shell as well as for the double excitations to the virtual shells with $n \ge 5$. Results of all these calculations are given in Table \ref{tab:Fe_XVII}.
One can see that already the CI calculation in the subspace $P$ is quite accurate here, the relative errors being about 0.3\%. This is not surprising for such a strong central field. When we increase the size of the configuration space by adding single excitations to the virtual shells $n=5\dots 17$, the errors substantially decrease but remain of the same order of magnitude. The same happens when we do CI for the effective Hamiltonian in the subspace $P$. Only when we include both single and double excitations to the virtual shells, by doing CI for the effective Hamiltonian in the subspace $P+Q_S$, do we increase the accuracy by an order of magnitude, the errors being 0.04\% or less. Adding S and D excitations from the $1s$ shell leads to corrections to the transition energies within $0.01\%$. Our final accuracy is similar to the accuracy obtained in Ref.\ \cite{Kuhn2020suppl}, where the CI space included all double and some triple excitations to all virtual shells (the basis set there was different, but of the same length). In our present calculation, the size of the space $P+Q_S$ is about 1.4 million determinants, and the size of the space $P+Q_S^\prime$ is close to 2 million determinants, which is significantly less than the CI space of Ref.~\cite{Kuhn2020}.
\subsection{Spectrum of Sc I}
The ground state configuration of Sc I is $\mathrm{[Ar]} 3d^1 4s^2$, and the lowest excited states belong to the configurations $3d^2 4s$ and $3d 4s 4p$. The $3d$ shell has a large overlap with the core shells $3s$ and $3p$. Because of that, the frozen-core approximation cannot reproduce even the lowest part of the spectrum. Including the $3s$ and $3p$ shells in the valence space makes its size extremely large. Therefore, this is a good system for applying our method.
We use a short basis set $[9spdfgh]$, which is constructed as described in Ref.\ \cite{KozTup19}. In the valence space $P$, the shells $n\le 3$ are closed and the virtual shells $n\ge 8$ and all $h$ orbitals are empty. The space $Q_S$ includes single excitations from the upper core shells $n=3$ and single excitations to the virtual shells. We keep the core shells with $n\le 2$ frozen at all stages. Results of the calculation of the spectrum are presented in Table \ref{tab:Sc_I}, where excitation energies from the ground state in cm$^{-1}$ are shown for approximately 10 of the lowest levels of each parity. The sizes of the spaces $P$ and $P+Q_S$ are about $6\times 10^4$ and $1\times 10^6$ determinants respectively. We list the results of three calculations: the full CI in the valence space $P$, and emu CI \cite{GCKB18} in the space $P+Q_S$ for the bare and the effective Hamiltonians. The effective radial integrals were calculated with the Hartree-Fock denominators. For each of these calculations we also give the differences from the experimental values \cite{NIST} and the average absolute difference.
One can see that all the levels in the CI calculation are shifted from their experimental energies: the levels of the configuration $3d^2 4s$ lie higher by about 3 thousand inverse centimeters, while the levels of the configuration $3d4s4p$ lie lower by about 2 thousand inverse centimeters. The picture changes drastically when we add single excitations and solve the problem in the space $P+Q_S$. Now the levels of the configuration $3d^2 4s$ lie lower by about 3 thousand inverse centimeters, while the levels of the configuration $3d4s4p$ are almost in place. Finally, when we use the effective Hamiltonian, which accounts for the double excitations, the levels get closer to their places, with an average deviation of about 300 cm$^{-1}$, which is 7 times smaller than for the CI calculation.
In this test calculation, we used a rather short basis set and were probably rather far from saturation. Therefore we cannot reliably estimate the ultimate accuracy of the method for scandium. Looking at the results, we see that the size of the PT corrections is very large and there is also a large cancellation between the contributions of the single and double excitations. Therefore it is unlikely that converged results would be significantly better than what we obtained here. On the other hand, we see a systematic improvement in our final results compared to the pure valence calculation. It is also worth mentioning that if one tried to include all double excitations in the CI calculation, the size of the configuration space would be well above $1\times 10^8$ even for a basis set as short as this one.
\section{Conclusions}
We suggest a new version of the CI+MBPT method \cite{DFK96} with a different division of the many-electron space into parts where non-perturbative and perturbative methods are used. This new division may be more practical for atoms with many valence electrons, where the size of the valence space may be too big for solving the matrix eigenvalue problem. The method can be used in all-electron calculations for light atoms as well as in calculations with a frozen core. In the latter case, the single and double excitations from (some of) the core shells can be treated perturbatively. We ran four rather different tests, which showed a systematic one-order-of-magnitude improvement of the results when we added MBPT corrections to the CI calculations.
\section*{Acknowledgements}
We thank Marianna Safronova, Charles Cheung, and Sergey Porsev for their constant interest in this work and very useful discussions. This work was supported by the Russian Science Foundation (Grant No. 19-12-00157). I.I.T. acknowledges the support from the Resource Center ``Computer Center of SPbU'', St. Petersburg, Russia.
\section{Introduction}
One of the important tasks in many-body physics is to understand the emergence of
collective features, as well as their structure, in terms of the individual motion of the
constituents. The steady progress of experimental methods now opens
the possibility of studying very neutron-rich nuclei, beyond the limits of stability.
The goal is to have a unified picture of the evolution of various nuclear properties with mass and
isospin and to test the validity of our theoretical understanding over an extended domain
of analysis.
New exotic collective excitations show up when one moves away from the valley of
stability \cite{paar2007}. Their experimental characterization and theoretical
description are a challenge for modern nuclear physics. Recent experiments provided
several pieces of evidence for their existence, but the available information is still
incomplete and their nature is a matter of debate.
An interesting exotic mode is the Pygmy Dipole Resonance (PDR), which was observed as
an unusually large concentration of the dipole response at energies
clearly below the values associated with the
Giant Dipole Resonance (GDR). The latter is one of the most prominent and robust collective
motions, present in all nuclei, whose centroid position varies, for
medium-heavy nuclei, as $80 A^{-1/3} MeV$.
Adrich et al. \cite{adrich2005} reported the observation of a resonant-like shape distribution
with a pronounced peak around $10~MeV$ in the $^{130}Sn$ and $^{132}Sn$ isotopes.
A concentration of dipole excitations near and below the particle emission threshold was
also observed in stable Sn nuclei, a systematics of the
PDR in these systems being presented in \cite{ozel2007}. It was
concluded that the strongest transitions are located at energies between
$5$ and $8.5~MeV$ and that a sizable fraction of the Energy-Weighted Sum
Rule (EWSR) is exhausted by these states. From a comparison of the
available data for stable and unstable $Sn$ isotopes a correlation
between the fraction of pygmy strength and isospin asymmetry was
noted \cite{klimkiewicz2007}. In general the exhausted sum-rule
increases with the proton-to-neutron asymmetry. This behavior was
related to the symmetry energy properties below saturation and
therefore connected to the size of the neutron skin
\cite{yoshida2004,piekarewicz2006,carbone2010}. However other
theoretical analyses suggest a weak connection between the PDR and
skin thickness \cite{reinhard2010}.
In spite of the theoretical progress in the interpretation of
this mode within phenomenological studies based on hydrodynamical equations
\cite{mohan1971,bastrukov2008}, non-relativistic microscopic
approaches using Random Phase Approximation (RPA) with various
effective interactions \cite{tsoneva2008,co2009,yoshida2009} or
relativistic quasi-particle RPA
\cite{vretenar2001,litvinova2008}, and new experimental
information \cite{savran2008,
wieland2010,tonchev2010,makinaga2010}, a number of critical
questions concerning the nature of the PDR still remain. These
include the macroscopic picture of neutron and proton vibrations,
the exact location of the PDR excitation energy, the degree of
collectivity of the low-energy dipole states,
and the role of the symmetry energy \cite{paar2010}.
Some microscopic studies predict a large fragmentation of the GDR strength and
the absence of collective states in the low-lying region in $^{132}Sn$ \cite{sarchi2004}.
The purpose of this Letter is to address the important issue of
the collective nature of the PDR. In the first part, within the
Harmonic Oscillator Shell Model (HOSM) for neutron-rich nuclei,
we show that the coordinates associated with the vibration of the neutron excess
against the core and with the core dipole mode, respectively, are separable,
and we derive
the EWSR exhausted by each of them.
Then we adopt a description based on the Fermi liquid theory
with effective interactions and investigate the dynamics and the interplay
between the dipole modes identified in the HOSM. This self-consistent
treatment allows us to inquire into the role of the symmetry energy
and of its density dependence in the dipole response.
In a seminal paper \cite{brink1957}, Brink has shown that for a
system of $A=N+Z$ nucleons moving in a harmonic oscillator well with the Hamiltonian
$ \displaystyle H_{sm} = \sum_{i=1}^{A} \frac{\vec{p}_i^2}{2 m}+
\frac{K}{2} \sum_{i=1}^{A} \vec{r}_i^2~, $
it is possible to perform a separation into four independent parts
$\displaystyle H_{sm} = H_{n~int}+H_{p~int}+H_{CM}+H_{D} $.
The first two terms
determine the internal motion of protons and neutrons
respectively, depending only on proton-proton and neutron-neutron relative coordinates.
The Hamiltonian
$ \displaystyle H_{CM} = \frac{1}{2Am} \vec{P}_{CM}^2+\frac{KA}{2}\vec{R}_{CM}^2 $
characterizes the nucleus center of mass (CM) motion, while
$ \displaystyle H_{D} = \frac{A}{2mNZ} \vec{P}^2+\frac{K NZ}{2A}\vec{X}^2,$
describes a Goldhaber-Teller (G-T) \cite{goldhaber1948} vibration of protons against neutrons.
The oscillator constant $K$ can be determined by fitting the nuclear size \cite{bohr1998}.
In the expressions above
$\vec{X}$ and $\vec{R}_{CM}$ denote the neutron-proton relative coordinate
and the center of mass position, and we have
introduced the conjugate momenta
$\displaystyle \vec{P}=\frac{N Z}{A}(\frac{1}{Z} \vec{P}_Z- \frac{1}{N} \vec{P}_N) $
and
$\displaystyle \vec{P}_{CM}=\vec{P}_{Z}+\vec{P}_{N}$,
where $\vec{P}_{Z}$ ($\vec{P}_{N}$) are proton (neutron) total momenta.
Correspondingly, the eigenstates of the nucleus are
represented as a product of four wave functions $\displaystyle \Psi=\psi_{n~int}
\chi_{p~int} \alpha (\vec{R}_{CM}) \beta(\vec{X})$,
simultaneous eigenvectors of the four Hamiltonians constructed above.
For an $E1$ absorption, a G-T collective motion with a specific linear
combination of single-particle excitations is produced
and the wave function $\beta(\vec{X})$ changes from the ground state
to the one-phonon GDR state.
Denoting by $E_i$ the energy eigenvalues of the system
and by $D$ the dipole operator,
with the help of the Thomas-Reiche-Kuhn (TRK) sum rule the total absorption cross section is given by:
$\displaystyle \sigma_D = \int_0^\infty \sigma(E) dE=
\frac{4 \pi^2 e^2}{\hbar c} \sum_i E_i |\langle i |D| 0 \rangle|^2
=\frac{4 \pi^2 e^2}{\hbar c} \frac{1}{2} \langle 0 |[D,[H_{sm},D]] |0 \rangle
= 60 \frac{NZ}{A} mb \cdot MeV $.
Now let us turn to the physical situation, corresponding to the case of very neutron-rich nuclei,
where the system is conveniently described in terms of a bound core containing
all protons and $N_c$ neutrons, plus some (less bound) excess neutrons $N_e$.
Thus the total neutron number $N$ is split into the sum $N = N_c + N_e$ and
we denote by $A_c = Z + N_c$ the number of nucleons contained in the core.
In this case we have worked out an exact separation of the HOSM
Hamiltonian into a sum of six independent (commuting) quantities:
$\displaystyle H_{sm}= H_{n_c~int}+H_{p_c~int}+H_{e~int}+ H_{CM}+H_{c} + H_{y}$.
The first three terms
contain only relative coordinates and momenta among nucleons of each ensemble,
i.e. core neutrons, core protons
and excess neutrons and, as before, describe their internal motion.
$\displaystyle H_{c}=\frac{A_c}{2Z~N_c m}\vec{P}_{c}^2+\frac{K N_c Z}{2A_c}\vec{X}_{c}^2,$
characterizes the core dipole vibration, while the relative
motion of the excess neutrons against the core, usually associated with the pygmy mode, is determined
by $ \displaystyle H_{y}=\frac{A}{2A_c N_e m}\vec{P}_{y}^2 + \frac{K N_e A_c}{2 A}\vec{Y}^2$.
Here $\displaystyle \vec{X}_{c} $ denotes the distance between neutron and proton centers of
mass in the core, while
$\displaystyle \vec{Y}$
is the distance between the core center of mass and the center of mass of the excess neutrons.
The corresponding canonically conjugate momenta are
$\displaystyle \vec{P}_{c}=\frac{N_c Z}{A_c}(\frac{1}{Z} \vec{P}_{Z}-\frac{1}{N_c}\vec{P}_{N_c})$,
and $\displaystyle \vec{P}_{y}=\frac{N_e A_c}{A}(\frac{1}{A_c}(\vec{P}_{Z}+\vec{P}_{N_c})-\frac{1}{N_e}
\vec{P}_{N_e})$.
The eigenstates of $H_{c}$ and
$H_{y}$ describe two independent collective excitations, and
both of them will contribute to the dipole response since
the total dipole moment can be expressed as:
$\displaystyle \vec{D}=\frac{N Z}{A} \vec{X}=
\frac{Z~ N_c}{A_c} \vec{X}_{c} + \frac{Z~ N_e}{A} \vec{Y} \equiv \vec{D}_c+\vec{D}_y.$
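This identity follows from elementary algebra; as a quick consistency check, the following symbolic Python snippet (one Cartesian component only, written purely for illustration) verifies it:
\begin{verbatim}
from sympy import symbols, simplify

Z, Nc, Ne, Rz, Rnc, Rne = symbols('Z N_c N_e R_Z R_Nc R_Ne', positive=True)
N, A, Ac = Nc + Ne, Z + Nc + Ne, Z + Nc

Rn    = (Nc * Rnc + Ne * Rne) / N     # centre of mass of all neutrons
Rcore = (Z * Rz + Nc * Rnc) / Ac      # centre of mass of the core

X, Xc, Y = Rz - Rn, Rz - Rnc, Rcore - Rne
D  = N * Z / A * X                    # total dipole
Dc = Z * Nc / Ac * Xc                 # core dipole
Dy = Z * Ne / A * Y                   # pygmy dipole

print(simplify(D - Dc - Dy))          # prints 0: D = D_c + D_y identically
\end{verbatim}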
In this picture the PDR results in a collective motion of
G-T type with the excess neutrons oscillating against the core.
The $E1$ absorption also leads to a change of the wave function
associated with the coordinate $\vec Y$. The total cross section for the
PDR is:
$\displaystyle \sigma_y = \frac{4 \pi^2 e^2}{\hbar c} \frac{1}{2} \langle 0 |[D_y,[H_{sm},D_y]]|0 \rangle =
\frac{N_e Z}{N A_c} \sigma_D.$
This shows that a fraction $ \displaystyle f_y=\frac{N_e Z}{N A_c}$
of the EWSR is exhausted by the pygmy mode. It is worth mentioning
that this result is consistent with the molecular sum rule introduced by
Alhassid et al. \cite{alhassid1982}. For the tin isotope $^{132}Sn$,
if the excess neutrons were simply defined as the difference between
neutron and proton numbers, i.e. $N_e=32$,
one would expect $f_y = 19.5 \%$. This is
greater than the value estimated experimentally, which is around $5
\%$. A possible explanation for this difference is that
only a part of the excess neutrons, $N_{y}$, with $N_{y}<N_e$,
contributes to the PDR, the rest being still bound to the core
\cite{ncore}.
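For orientation, the TRK value and the HOSM pygmy fraction quoted above follow from a trivial evaluation (Python, purely illustrative):
\begin{verbatim}
Z, N, Ne = 50, 82, 32                 # 132Sn with all excess neutrons in the pygmy mode
A, Ac = Z + N, Z + (N - Ne)

sigma_D = 60.0 * N * Z / A            # TRK sum rule (mb MeV)
f_y = Ne * Z / (N * Ac)               # EWSR fraction carried by the pygmy coordinate
print(sigma_D, 100 * f_y)             # about 1864 mb MeV and 19.5 %
\end{verbatim}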
Therefore it is important to test this assumption within a more
sophisticated analysis of the dipole response. Indeed, a more
accurate picture of the GDR in nuclei corresponds to an admixture of
G-T and Steinwedel-Jensen (S-J) vibrations. The latter, in
symmetric nuclear matter, is a volume-type oscillation of the
isovector density $\rho_i= \rho_n - \rho_p$ keeping the total
density $\rho=\rho_n+\rho_p$ constant \cite{steinwedel1950}.
A microscopic, self-consistent study of the collective features
and of the role of the nuclear effective interaction
upon the PDR can be performed within the Landau theory of Fermi liquids.
This is based on two coupled Landau-Vlasov kinetic equations for neutron and proton one-body
distribution functions $f_q(\vec{r},\vec{p},t)$ with $q=n,p$:
\begin{equation}
\frac{\partial f_q}{\partial t}+\frac{\bf p}{m}\frac{\partial f_q}{\partial {\bf r}}-
\frac{\partial U_q}{\partial {\bf r}}\frac{\partial f_q}{\partial {\bf p}}=I_{coll}[f] ,
\label{vlasov}
\end{equation}
and was applied quite successfully in describing various features of the GDR,
including pre-equilibrium dipole excitation in fusion reactions \cite{baran1996}.
Within a linear response approach, it was also considered to investigate
properties of the PDR \cite{abrosimov2009}.
However, it should be noticed that
within such a semi-classical description
shell effects, which are certainly important in shaping the fine structure
of the dipole response \cite{maza2012}, are absent.
We neglect here the two-body collision effects, and hence the main ingredient of
the dynamics is the nuclear mean field, for which we consider a Skyrme-like ($SKM^*$) parametrization
$\displaystyle U_{q} = A\frac{\rho}{\rho_0}+B(\frac{\rho}{\rho_0})^{\alpha+1} + C(\rho)
\frac{\rho_n-\rho_p}{\rho_0}\tau_q
+\frac{1}{2} \frac{\partial C}{\partial \rho} \frac{(\rho_n-\rho_p)^2}{\rho_0}$,
where $\tau_q = +1 (-1)$ for $q=n (p)$ and $\rho_0$ denotes the saturation density.
The saturation properties of symmetric nuclear matter are reproduced
with the values of the coefficients
$A=-356 MeV$, $B=303 MeV$, $\alpha=1/6$,
leading to a compressibility modulus $K=200 MeV$. For the isovector sector we employed
three different parameterizations of $C(\rho)$ with the density: the asysoft,
the asystiff, and the asysuperstiff; see \cite{baran2005} for a detailed description.
The value of the symmetry energy,
$\displaystyle E_{sym}/A = {\epsilon_F \over 3}+{C(\rho) \over 2}{\rho \over \rho_0}$,
at saturation, as well as
the slope parameter, $\displaystyle L = 3 \rho_0 \frac{d E_{sym}/A}{d \rho} |_{\rho=\rho_0}$,
are reported in Table \ref{table1} for each of these asy-EoS. Just below the saturation density
the asysoft mean field has a weak variation with density while the asysuperstiff shows
a rapid decrease.
Then, due to surface
contributions to the collective oscillations, we expect to see
some differences in the energy position of the dipole response of the system.
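The definitions of $E_{sym}/A$ and $L$ above translate directly into a few lines of code; in the Python sketch below the density-independent $C(\rho)=32$ MeV is only a hypothetical stand-in and not one of the actual parameterizations of Ref.~\cite{baran2005}.
\begin{verbatim}
import numpy as np

HBARC, M_N, RHO0 = 197.327, 938.92, 0.16      # MeV fm, MeV, fm^-3

def eps_fermi(rho):
    # Fermi energy of symmetric matter at density rho
    kf = (1.5 * np.pi ** 2 * rho) ** (1.0 / 3.0)
    return (HBARC * kf) ** 2 / (2.0 * M_N)

def esym(rho, C):
    # E_sym/A(rho) = eps_F(rho)/3 + C(rho)/2 * rho/rho0
    return eps_fermi(rho) / 3.0 + 0.5 * C(rho) * rho / RHO0

def slope_L(C, h=1.0e-4):
    # L = 3 rho0 d(E_sym/A)/drho at rho0, by central finite difference
    return 3.0 * RHO0 * (esym(RHO0 + h, C) - esym(RHO0 - h, C)) / (2.0 * h)

C_example = lambda rho: 32.0                  # hypothetical, density-independent (MeV)
print(esym(RHO0, C_example), slope_L(C_example))   # about 28 MeV and 73 MeV
\end{verbatim}
Any density-dependent form of $C(\rho)$ can be inserted in place of the constant used in this example.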
The numerical procedure to integrate the transport equations is based on the
test-particle (t.p.) method. For a good spanning of phase-space we work with $1200$ t.p. per nucleon.
We consider the
neutron-rich nucleus $^{132}Sn$ and we
determine its ground state configuration as the equilibrium (static)
solution of Eq.(\ref{vlasov}). Then proton and neutron densities
$\displaystyle \rho_q(\vec{r},t)=\int \frac{2 d^3 {\bf p}}{(2\pi\hbar)^3}f_q(\vec{r},\vec{p},t)$
can be evaluated.
The radial density profiles
for two asy-EOS
are reported in Fig. \ref{densprof}.
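Schematically, such radial profiles can be obtained by histogramming the test-particle radii into spherical shells; the following minimal Python sketch (with a toy random sample standing in for the actual test-particle configuration) illustrates this step.
\begin{verbatim}
import numpy as np

def radial_density(positions, n_tp_per_nucleon=1200, r_max=12.0, n_bins=60):
    # estimate rho(r) from test-particle coordinates (fm); each test particle
    # carries a weight 1/n_tp_per_nucleon so the volume integral gives the
    # number of real particles represented by the sample
    r = np.linalg.norm(positions, axis=1)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 * np.pi / 3.0 * (edges[1:] ** 3 - edges[:-1] ** 3)
    return 0.5 * (edges[1:] + edges[:-1]), counts / (n_tp_per_nucleon * shell_vol)

# toy usage: Gaussian cloud of 50*1200 proton test particles (not a real ground state)
rng = np.random.default_rng(1)
r_c, rho_p = radial_density(rng.normal(scale=3.0, size=(50 * 1200, 3)))
\end{verbatim}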
\begin{figure}
\begin{center}
\includegraphics*[scale=0.27]{sn132_i23_denprof_prl.eps}
\end{center}
\caption{(Color online) The total (black), neutron (blue), and
proton (red) radial density profiles for the asysoft (solid lines) and asysuperstiff (dashed lines) EoS.}
\label{densprof}
\end{figure}
As an additional check of our initialization procedure,
the neutron and proton mean square radii
$\displaystyle \langle r_q^2 \rangle = \frac{1}{N_q} \int r^2 \rho_q(\vec{r},t) d^3 {\bf r}$,
as well as the skin thickness
$\displaystyle \Delta R_{np}= \sqrt{\langle r_n^2 \rangle}-\sqrt{\langle r_p^2 \rangle}$,
were also calculated in the ground state and shown in Table \ref{table1}.
\begin{table}
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|} \hline
asy-EoS & $E_{sym}/A$(MeV) & $L$(MeV) & $R_n$(fm) & $R_p$(fm) & $\Delta R_{np}$(fm) \\ \hline
asysoft & 30. & 14. & 4.90 & 4.65 & 0.25 \\ \hline
asystiff & 28. & 73. & 4.95 & 4.65 & 0.30 \\ \hline
asysuperstiff & 28. & 97. & 4.96 & 4.65 & 0.31 \\ \hline
\end{tabular}
\caption{The symmetry energy at saturation, the slope parameter $L$ (both in MeV), the neutron rms radius,
the proton rms radius, and the neutron skin thickness (in fm) for the three asy-EoS.}
\label{table1}
\end{center}
\end{table}
The values obtained with our semi-classical approach
are in a reasonable agreement with those reported by employing other
models for similar interactions \cite{paar2005}.
The neutron skin thickness is increasing with the slope parameter,
as expected from a faster reduction of the symmetry term on the surface \cite{yoshida2004,baran2005}.
This feature has been discussed in detail in \cite{carbone2010}.
To investigate the collective properties of the pygmy dipole we excite the nuclear system
at the initial time $t=t_0 = 30fm/c$ by boosting along the $z$
direction all excess neutrons and in the opposite direction all core
nucleons, while keeping the CM of the nucleus at rest (Pygmy-like
initial conditions).
The excess neutrons were identified as the $N_e=32$ neutrons most
distant from the nucleus CM. Then the system is left to
evolve and the evolution of the collective coordinates $Y$, $X_c$ and $X$,
associated with the different dipole modes,
is followed for $600 fm/c$
by solving numerically the equations (\ref{vlasov}).
The simple estimate of the EWSR provided
by the HOSM suggests, when compared to experiments,
that some of the $N_e$ neutrons boosted
in the initial conditions are still bound to the core.
This is confirmed by the transport simulations.
Indeed, apart from the quite undamped oscillations of the $Y$ coordinate,
we also remark that the core does not remain inert.
In Fig. \ref{diptime} we plot the time evolution of the dipole
$D_y$, of the total dipole $D$ and core dipole $D_c$ moments, for two asy-EoS.
As observed,
while $D_y$ approaches its maximum
value, an oscillatory motion of the dipole $D_c$ sets in, and
this response is symmetry energy dependent: the larger the slope
parameter $L$, the more delayed the isovector core reaction.
This can be explained in terms of low-density (surface)
contributions to the vibration and therefore of the density behavior
of the symmetry energy below normal density: a larger L corresponds
to a larger neutron presence in the surface and so to a smaller
coupling to the core protons.
\begin{figure}
\begin{center}
\includegraphics*[scale=0.32]{dip_i23_prl.eps}
\end{center}
\caption{(Color online) The time evolution of the total dipole $D$ (top)
of the dipole $D_y$ (middle) and of core dipole $D_c$
for asysoft (blue, solid) and asysuperstiff (red, dashed) EoS. Pygmy-like initial excitation.}
\label{diptime}
\end{figure}
We see that the total dipole $D(t)$ is strongly affected by the presence of
isovector core oscillations,
mostly related to the isovector part of the effective interaction.
Indeed, $D(t)$ oscillates at a higher frequency than
$D_y$, with a clear sensitivity to the asy-EoS. The fastest
vibrations are observed in the asysoft case, which gives the largest
value of the symmetry energy below saturation. By contrast, the
frequency of the pygmy mode appears to be little affected by the
trend of the symmetry energy below saturation, see also Fig.
\ref{dipspectrum}, clearly showing the different nature,
isoscalar-like, of this oscillation. For each case we calculate the
power spectrum of $D_y$:
$\displaystyle |D_y (\omega)| ^2 = |\int_{t_0}^{t_{max}} D_y(t) e^{-i\omega t} dt|^2$
and similarly for $D$. The results are shown in Fig.
\ref{dipspectrum}. The position of the centroids corresponding to
the GDR shifts toward larger values when we move from the asysuperstiff (largest slope parameter $L$)
to the asysoft EoS.
This evidences the importance of the volume, S-J component of the GDR in
$^{132}Sn$.
The energy centroid associated with the PDR is situated below the GDR peak, at around $8.5 MeV$,
quite insensitive to the asy-EOS,
pointing to an isoscalar-like nature of this mode.
A similar conclusion
was reported within a relativistic mean-field approach
\cite{liang2007}. While in the schematic HOSM all
dipole modes are degenerate, with an energy $E=41 A^{-1/3} \approx
8MeV$ for $^{132}Sn$, within the Vlasov approach the GDR energy is
pushed up by the isovector interaction. Hence the structure of the
dipole response can be explained in terms of the development of
isoscalar-like (PDR) and isovector-like (GDR) modes, as
observed in asymmetric systems \cite{baran2001a}.
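As an illustration of how spectra of this kind are extracted, a minimal sketch of the finite-time Fourier integral applied to a sampled dipole signal is given below; the signal is synthetic (two damped cosines standing in for the pygmy and GDR components) and is not an output of the transport code.
\begin{verbatim}
import numpy as np

# Minimal sketch of the power spectrum |D_y(w)|^2 = |int D_y(t) exp(-iwt) dt|^2
# for a sampled dipole signal.  The signal and all parameters are illustrative.
hbarc = 197.327                        # MeV fm
dt = 0.5                               # fm/c
t = np.arange(30.0, 630.0, dt)         # window similar to the 600 fm/c of the text
E_pdr, E_gdr = 8.5, 15.0               # MeV, illustrative centroids
D_y = np.exp(-t / 400.0) * (np.cos(E_pdr / hbarc * t)
                            + 0.3 * np.cos(E_gdr / hbarc * t))

E = np.linspace(2.0, 25.0, 400)        # MeV
omega = E / hbarc                      # c/fm
phase = np.exp(-1j * np.outer(omega, t))
D_w = (phase * D_y).sum(axis=1) * dt   # finite-time Fourier integral (Riemann sum)
power = np.abs(D_w) ** 2

print("peak at E =", round(float(E[np.argmax(power)]), 1), "MeV")  # near E_pdr
\end{verbatim}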
\begin{figure}
\begin{center}
\includegraphics*[scale=0.34]{pygmy132_fou_i123_prl.eps}
\end{center}
\caption{(Color online) The power spectrum of total dipole (left) and
of the dipole $D_y$ (right) (in $fm^4/c^2$), for asysoft
(blue, solid line), asystiff (black, dot-dashed line) and asysuperstiff (red, dashed line)
EoS. Pygmy-like initial conditions.
}
\label{dipspectrum}
\end{figure}
Both modes are excited in the considered, pygmy-like initial conditions. Looking
at the total dipole mode direction, that is close to the isovector-like normal mode,
one observes a quite large contribution in the GDR region. On the other hand,
considering the $Y$ direction, more closely related to the isoscalar-like mode, a larger
response amplitude is detected in the pygmy region.
\begin{figure}
\begin{center}
\includegraphics*[scale=0.34]{gdr_pygmy_fou_prl.eps}
\end{center}
\caption{(Color online) The same as in Fig. \ref{dipspectrum} but for a GDR-like
initial excitation.}
\label{gdrspectrum}
\end{figure}
To check the influence of the initial conditions on the dipole
response, let us consider the case of a GDR-like excitation, corresponding
to a boost of all neutrons against all protons, keeping the CM at rest.
The initial collective energy corresponds to the first GDR excited state, around $15 MeV$.
Now the initial excitation favours the isovector-like mode and
even in the $Y$ direction we observe a sizeable contribution in the
GDR region, see the Fourier spectrum of $D_y$ in Fig. \ref{gdrspectrum}.
From this result it clearly emerges that
a part of the $N_e$ excess neutrons is involved in a GDR type motion
and the relative weight depends on the symmetry
energy: more neutrons
are involved in the pygmy mode in the
asysuperstiff EOS case, in connection to the larger neutron skin size.
We have also checked that, if the coordinate Y is constructed
by taking the $N_y$ most distant neutrons (with $N_y < N_e$),
the relative weight increases in the PDR region.
In any case, since part of the excess nucleons
contributes to the GDR mode, a lower EWSR value than
the HOSM predictions corresponding to $N_y=N_e$ is expected.
Indeed, in the Fourier power spectrum of $D$ in Fig.
\ref{gdrspectrum}, a weak response is seen at the pygmy frequency.
These investigations also raise the question of the appropriate way to
excite the PDR. Nuclear rather than electromagnetic probes
can induce neutron skin excitations closer to our first class of initial conditions
\cite{vitturi2010}.
In the case of the GDR-like initial excitation we can relate the strength function
to $Im(D(\omega))$ \cite{suraud1997} and then the corresponding
cross section can be calculated. Our estimate of the integrated cross section
over the PDR region represents $2.7 \%$
for asysoft, $4.4 \%$ for asystiff and $4.5 \%$ for asysuperstiff,
out of the total cross section. Hence the EWSR
exhausted by the PDR is proportional to the skin thickness, in agreement
with the results of \cite{inakura2011}.
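Schematically, such integrated fractions can be obtained from any sampled strength distribution as in the following sketch, where the spectrum and the PDR window limits are purely illustrative.
\begin{verbatim}
import numpy as np

# Share of a strength distribution lying in a PDR energy window, on a uniform
# energy grid (so that the grid spacing cancels).  Spectrum and window limits
# below are illustrative only.
def window_fraction(E, strength, E_lo=6.0, E_hi=11.0):
    sel = (E >= E_lo) & (E <= E_hi)
    return strength[sel].sum() / strength.sum()

E = np.linspace(2.0, 25.0, 400)
strength = (np.exp(-((E - 15.0) / 2.5) ** 2)
            + 0.05 * np.exp(-((E - 8.5) / 1.0) ** 2))
print(window_fraction(E, strength))    # a few percent for this toy spectrum
\end{verbatim}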
The fraction of photon emission probability associated with the PDR region
can be estimated from the total dipole acceleration, within a
bremsstrahlung approach \cite{baran2001}.
We obtain a percentage
of $4.7 \%$ for asysoft, $7.7 \%$ for asystiff and $9 \%$ for
asysuperstiff EOS, consistent with the previous interpretation.
Summarizing, in this work we evidence, both within HOSM and a
semi-classical Landau-Vlasov approach, the existence, in neutron
rich nuclei, of a collective pygmy dipole mode determined by the
oscillations of some excess neutrons against the nuclear
core. From the transport simulation results the PDR energy centroid
for $^{132}Sn$ appears around $8.5$ $MeV$, rather insensitive to the
density dependence of the symmetry energy and well below the GDR peak.
This supports the isoscalar-like character of this collective
motion. A complex pattern, involving the coupling of the neutron skin with
the core dipole mode, is noticed.
While HOSM can provide some predictions of the EWSR fraction exhausted
by the pygmy mode, $\displaystyle f_y=\frac{N_y Z}{N A_c}$, depending on the
number $N_y\leq N_e$ of neutrons involved, the transport model
indicates that part of the excess neutrons $N_e$ are coupled to the
GDR mode and gives some hints about
the number of neutrons, $N_y$, actually participating in the pygmy mode.
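For orientation, taking for $^{132}Sn$ the values $Z=50$, $N=82$, $N_y=N_e=32$ and a core mass number $A_c=A-N_e=100$, the nominal HOSM fraction would be $f_y=(32\cdot 50)/(82\cdot 100)\simeq 0.20$, i.e. about $20\%$ of the EWSR.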
This coupling considerably reduces the EWSR acquired by the PDR, our numerical
estimate providing values well below $10 \%$, but proportional to the
symmetry energy slope parameter $L$, that affects the number of
excess neutrons on the nuclear surface. We consider these effects as
related also to the S-J component of the dipole dynamics
in medium-heavy nuclei. It is therefore interesting to extend the
present analysis to lighter nuclei, like Ni or Ca isotopes, where
the Goldhaber-Teller component can be more important. We would like
to mention that such self-consistent, transport approaches,
can be valuable in exploring the collective response of other
mesoscopic systems where similar normal modes may manifest, see
\cite{mesoscopic}.
This work for V. Baran was supported by a grant of the Romanian National
Authority for Scientific Research, CNCS - UEFISCDI, project number PN-II-ID-PCE-2011-3-0972.
For B. Frecus this work was supported by the strategic grant POSDRU/88/1.5/S/56668.
\section{Introduction}
\label{sec:intro}
In our previous work \cite{ouroscillationcompensation}, we investigate the vanishing-step subgradient method applied to a nonsmooth, nonconvex objective function $f$ in the hope of finding
\[\argmin_{x\in\mathbb{R}^n}f(x).\]
This paper is intended as a companion to \cite{ouroscillationcompensation}, as it presents two examples that show that the results obtained there are sharp in several senses. We also aim here to provide insight into the types of dynamics that the subgradient algorithm presents in the asymptotic limit, and we evaluate some of the ideas that are believed to show promise towards a proof of convergence of the algorithm, such as the Kurdyka--\L{}ojasiewicz inequality. We refer the reader to \cite{ouroscillationcompensation} for some discussion of the historical background.
\medskip
We shall now give some definitions that will allow us to discuss our results.
For a locally Lipschitz function $f\colon\mathbb{R}^n\to\mathbb{R}$, we denote by $\partial^cf(x)$ the \emph{Clarke subdifferential} of $f$ at $x\in\mathbb{R}^n$, that is, the convex envelope of the set of vectors $v\in\mathbb{R}^n$ such that there is a sequence $\{y_i\}_i\subset\mathbb{R}^n$ such that $f$ is differentiable at $y_i$, $y_i\to x$ and $\nabla f(y_i)\to v$.
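For instance, for the function $f(x)=|x|$ on $\mathbb{R}$ one has $\partial^cf(x)=\{\mathrm{sign}(x)\}$ for $x\neq0$ and $\partial^cf(0)=[-1,1]$.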
\begin{defn}[Small-step subgradient method]\label{def:subgradientmethod}
Let $f\colon\mathbb{R}^n\to\mathbb{R}$ be a locally Lipschitz function, and $\{\varepsilon_i\}_i$ be a sequence of positive step sizes such that
\[\sum_{i=0}^\infty\varepsilon_i=+\infty\qquad\textrm{and}\qquad \lim_{i\to+\infty}\varepsilon_i=0.\]
Given $x_0\in\mathbb{R}^n$, consider the recursion, for $i\geq0$,
\[x_{i+1}=x_i-\varepsilon_iv_i,\qquad v_i\in\partial^cf(x_i).\]
Here, $v_i$ is chosen freely among $\partial^cf(x_i)$. The sequence $\{x_i\}_{i\in\mathbb{N}}$ is called a \emph{subgradient sequence}. %
\end{defn}
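For illustration, a minimal numerical sketch of this iteration, applied to the distance-to-the-circle function $\phi(x)=\bigl|1-\|x\|\bigr|$ that reappears in Section \ref{sec:circle}, could read as follows; the subgradient selection is one admissible choice and the step sizes $\varepsilon_i=1/(i+1)$ are only an example.
\begin{verbatim}
import numpy as np

# Sketch of the small-step subgradient iteration x_{i+1} = x_i - eps_i * v_i
# on phi(x) = |1 - ||x|||; any Clarke subgradient may be selected at each step.
def subgradient_phi(x):
    r = np.linalg.norm(x)
    if r == 0.0 or r == 1.0:
        return np.zeros_like(x)    # 0 lies in the Clarke subdifferential there
    return np.sign(r - 1.0) * x / r

x = np.array([2.0, 0.5])
for i in range(10000):
    eps_i = 1.0 / (i + 1)          # vanishing, non-summable step sizes
    x = x - eps_i * subgradient_phi(x)

print(np.linalg.norm(x))           # close to 1: the iterates settle near crit(phi)
\end{verbatim}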
Since the dynamics of the subgradient method in the case of $f$ locally Lipschitz had been shown \cite{daniilidisdrusvyatskiy,borweinetal} to be too unwieldy, in \cite{ouroscillationcompensation} we instead discuss the dynamics of the subgradient method for $f$ path-differentiable.
\begin{defn}[Path-differentiable functions]\label{def:pathdiff}
A locally Lipschitz function $f\colon\mathbb{R}^n\to\mathbb{R}$ is \emph{path-dif\-fer\-en\-tia\-ble} if for each Lipschitz\footnote{In other parts of the literature (see e.g. \cite{boltepauwels}), this definition is given with absolutely-continuous curves, and this is equivalent because such curves can be reparameterized (for example, by arclength) to obtain Lipschitz curves, without affecting their role in the definition.} curve $\gamma\colon\mathbb{R}\to\mathbb{R}^n$, for almost every $t\in\mathbb{R}$, the composition $f\circ\gamma$ is dif\-fer\-en\-tia\-ble at $t$ and the derivative is given by
\[(f\circ\gamma)'(t)=v\cdot \gamma'(t)\]
for all $v\in\partial^cf(\gamma(t))$.
\end{defn}
\begin{defn}[Weak Sard condition]\label{def:weaksard}
We will say that $f$ satisfies the \emph{weak Sard condition} if it is constant on each connected component of its critical set $\crit f=\{x\in\mathbb{R}^n:0\in\partial^c f(x)\}$.
\end{defn}
Recall that the \emph{accumulation set $\acc\{x_i\}_i$} of the sequence $\{x_i\}_i$ is the set of points $x\in\mathbb{R}^n$ such that, for every neighborhood $U$ of $x$, the intersection $U\cap\{x_i\}_i$ is an infinite set. Its elements are known as \emph{limit points}.
\begin{defn}[Essential accumulation set]\label{def:essacc}
Given sequences $\{x_i\}_i\subset\mathbb{R}^n$ and $\{\varepsilon_i\}_i\subset\mathbb{R}_{\geq0}$, the \emph{essential accumulation set $\essacc\{x_i\}_i$} is the set of points $x\in\mathbb{R}^n$ such that, for every neighborhood $U$ of $x$,
\begin{equation}\label{eq:essaccdef}
\limsup_{N\to+\infty}\frac{\displaystyle\sum_{\substack{0\leq i\leq N\\ x_i\in U}}\varepsilon_i}{\displaystyle\sum_{0\leq i\leq N}\varepsilon_i}>0.\end{equation}
\end{defn}
\begin{defn}[Whitney stratifiable functions]\label{def:stratification}
Let $X$ be a nonempty subset of $\mathbb{R}^m$ and $0<p\leq +\infty$. A \emph{$C^p$ stratification} $\mathcal X=\{X_i\}_{i\in I}$ of $X$ is a locally finite partition of $X=\bigsqcup_iX_i$ into connected submanifolds $X_i$ of $\mathbb{R}^m$ of class $C^p$ such that for each $i\neq j$
\[\overline{X_i}\cap X_j\neq \emptyset \Longrightarrow X_j\subset \overline{X_i}\setminus X_i.\]
A $C^p$ stratification $\mathcal X$ of $X$ \emph{satisfies Whitney's condition A} if, for each $x\in \overline{X_i}\cap X_j$, $i\neq j$, and for each sequence $\{x_k\}_k\subset X_i$ with $x_k\to x$ as $k\to+\infty$, and such that the sequence of tangent spaces $\{T_{x_k}X_i\}_k$ converges (in the usual metric topology of the Grassmanian) to a subspace $V\subset T_x\mathbb{R}^m$, we have that $T_xX_j\subset V$. A $C^p$ stratification is \emph{Whitney} if it satisfies Whitney's condition A.
With the same notations as above,
a function $f\colon \mathbb{R}^n\to\mathbb{R}^k$ is \emph{Whitney $C^p$-stratifiable} if there exists a Whitney $C^p$ stratification of its graph as a subset of $\mathbb{R}^{n+k}$.
\end{defn}
\paragraph{Summary of the results.}
Let
\begin{itemize}
\item $n>0$,
\item $f\colon\mathbb{R}^n\to\mathbb{R}$ be a locally Lipschitz, path-differentiable function,
\item the sequence $\{\varepsilon_{i}\}_i\subset\mathbb{R}_{>0}$ of step sizes satisfy $\lim_{i\to+\infty}\varepsilon_i=0$, and
\item $\{x_i\}_i$ be a bounded subgradient sequence with stepsizes $\{\varepsilon_i\}_i$.
\end{itemize}
The main questions we address here are the following:
\begin{enumerate}[label=Q\arabic*.,ref=Q\arabic*,series=Q]
\item\label{q:1}\label{q:prevfirst} \emph{Does the sequence $\{x_i\}_i$ converge in general?}
While it is tempting to hope for the sequence to converge since we have proven \cite[Theorems 6(i),7(i),7(ii)]{ouroscillationcompensation} that the sequence slows down indefinitely, in Section \ref{sec:circle} we give an example in which the sequence forever accumulates around a circle and never converges. The function we construct satisfies the weak Sard condition, so even with that assumption there is no hope for the convergence of $\{x_i\}_i$. The function also satisfies the Kurdyka--\L{}ojasiewicz inequality; see \ref{q:9}.
In contrast, it can be proven \cite{bolte2010characterizations} that if $f$ satisfies the weak Sard condition and the Kurdyka--\L{}ojasiewicz inequality, then the flow lines $x\colon\mathbb{R}\to\mathbb{R}^n$ of the continuous-time subgradient flow, which satisfy
\[-\dot x(t)\in\partial^cf(x(t)),\]
always converge. Thus the example in Section \ref{sec:circle} shows that the convergence of the continuous-time process may not guarantee the convergence of the discrete subgradient sequence.
\item\label{q:5} \emph{Do the values $\{f(x_i)\}_i$ converge for a general path-differentiable function $f$?}
Although this convergence can be proved when $f$ satisfies the weak Sard condition \cite[Theorem 7(v)]{ouroscillationcompensation}, the example in Section \ref{sec:sardcounterexample} shows that the convergence of the values $f(x_i)$ fails in general. In fact, in that example we have $f(\acc\{x_i\}_i)=[0,1]=f(\essacc\{x_i\}_i)$.
\item\label{q:2} \emph{Must $\acc\{x_i\}_i$ be a subset of $\crit f$ in general?}
The example in Section \ref{sec:sardcounterexample} shows that in general the set $\acc\{x_i\}_i\setminus\essacc\{x_i\}_i$ may not intersect $\crit f$. This contrasts with results that $\essacc\{x_i\}_i$ is always contained in $\crit f$ \cite[Theorem 6(iii)]{ouroscillationcompensation}, and that $\acc\{x_i\}_i$ is contained in $\crit f$ if $f$ satisfies the weak Sard condition \cite[Theorem 7(iv)]{ouroscillationcompensation}.
\item\label{q:4} \emph{Do we always have $\essacc\{x_i\}_i=\acc\{x_i\}_i$?}
No, in the example in Section \ref{sec:sardcounterexample} we have a situation in which the set $\essacc\{x_i\}_i$ is strictly smaller than $\acc\{x_i\}_i$. We do not know the answer to this question with more stringent assumptions, such as $f$ satisfying the weak Sard condition.
\item\label{q:6}\emph{Can the essential accumulation set $\essacc\{x_i\}_i$ be disconnected?}
Yes. Although for simplicity we do not construct an example here, the reader will surely understand that the example in Section \ref{sec:sardcounterexample} can be easily modified (by taking several copies of $\Gamma$ and joining them with curves having roles similar to the one played by $J$) to produce a situation in which $\essacc\{x_i\}_i$ is disconnected. This contrasts with the fact that $\acc\{x_i\}_i$ is always connected because $\dist(x_i,x_{i+1})\leq \varepsilon_i\lip(f)\to 0$ as $i\to+\infty$, where $\lip(f)$ is the Lipschitz constant for $f$ in a compact set that contains $\{x_i\}_i$.
\item\label{q:3} \emph{A certain spontaneous slowdown phenomenon is proved in \cite[Theorem 6(i)]{ouroscillationcompensation} of the fragments of the subgradient sequence as (roughly speaking) it traverses the piece of $\acc\{x_i\}_i$ starting at a point $x$ and ending at another point $y$, such that $x,y\in\acc\{x_i\}$ verify $f(x)\leq f(y)$ (see the precise statement below). }
\emph{Is there any hope of proving, for general $f$,
that this phenomenon always occurs uniformly throughout the accumulation set, regardless of the restriction $f(x)\leq f(y)$? }
%
No, the example in Section \ref{sec:sardcounterexample} shows that the speed of drift of the sequence can remain high forever between points that do not satisfy this inequality.
To be precise, the result in \cite[Theorem 6(i)]{ouroscillationcompensation} is this: Let $x$ and $y$ be two distinct points in $\acc\{x_i\}_i$ satisfying $f(x)\leq f(y)$, and take subsequences $\{x_{i_k}\}_k$ and $\{x_{i'_k}\}_k$ such that $x_{i_k}\to x$, $x_{i'_k}\to y$ as $k\to+\infty$, and $i'_k>i_k$ for all $k$. Then
\[\lim_{k\to+\infty}\sum_{p=i_k}^{i'_k}\varepsilon_p=+\infty.\]
This is verified independently of the subsequences taken.
On the other hand, the endpoints $x$ and $y$ of the curve $J$ in the example in Section \ref{sec:sardcounterexample} are contained in $\acc\{x_i\}_i$, satisfy $f(x)>f(y)$, and we can take subsequences $\{x_{i_k}\}_k$ and $\{x_{i'_k}\}_k$ converging to $x$ and $y$, respectively, and with $i'_k>i_k$,
for which we additionally have
\[\sup_{k}\sum_{p=i_k}^{i'_k}\varepsilon_p <+\infty.\]
\item\label{q:7} \emph{Does the oscillation compensation phenomenon described in \cite[Theorem 6(ii)]{ouroscillationcompensation} occur on the entire accumulation set in general?}
While we are able to prove an oscillation compensation result \cite[Theorem 7(iii)]{ouroscillationcompensation} that holds throughout $\acc\{x_i\}_i$ with the assumption that $f$ satisfies the weak Sard condition, the example in Section \ref{sec:sardcounterexample} shows that in general, in the absence of the weak Sard condition, there need not be any oscillation compensation on $\acc\{x_i\}_i\setminus\essacc\{x_i\}_i$, which in the example corresponds to the curve $J$. For a precise statement, please refer to \ref{itex:osccomp} in Section \ref{sec:sardcounterexample}.
\item\label{q:8} \emph{Can the perpendicularity of the oscillations of $\{x_i\}_i$ verified around $\essacc\{x_i\}_i$ \cite[Remark 9]{ouroscillationcompensation} be proved on the entire accumulation set?}
No, as is shown in the example of Section \ref{sec:sardcounterexample} this may fail on $\acc\{x_i\}_i\setminus\essacc\{x_i\}_i$ for general $f$. The perpendicularity can, however, be proved to happen on $\essacc\{x_i\}_i$ or, if $f$ satisfies the weak Sard condition, on all of $\acc\{x_i\}_i$; see \cite[Remark 9]{ouroscillationcompensation}.
\item\label{q:9} \emph{Would it be possible to prove the convergence of $\{x_i\}_i$ if $f$ is Whitney stratifiable (cf. Definition \ref{def:stratification}) and satisfies a Kurdyka--\L{}o\-ja\-sie\-wicz inequality?}
No; more assumptions are necessary. The objective function $f$ in the example in Section \ref{sec:circle} is Whitney $C^\infty$ stratifiable and satisfies a Kurdyka--\L{}ojasiewicz inequality of the form
\[\|\nabla f(x)\|\geq \frac12\quad\textrm{for all $x\notin \crit f$,}\]
but we also construct a bounded subgradient sequence that fails to converge. However, in the case of $f$ smooth, the Kurdyka--\L{}ojasiewicz inequality does suffice to prove convergence of the subgradient method \cite{attouchboltesvaiter}.
\item\label{q:10} Recall that the Hausdorff dimension of a set $X$ is
\[\dim X=\inf\{d\in\mathbb{R}:\mathcal H^d(X)=0\}, \]
where $\mathcal H^d(X)$ is the $d$-dimensional Hausdorff outer measure,
\begin{equation*}
\mathcal H^d(X)\coloneqq\lim_{r\to0}\,\inf\{\textstyle\sum_ir_i^d:
\textrm{there is a cover of $X$ by balls of radii $0<r_i<r$}\}.
\end{equation*}
\emph{Must the Hausdorff dimension of the accumulation set of $\{x_i\}_i$ be $\dim\acc\{x_i\}_i\leq n-1$?}
No, the example in Section \ref{sec:sardcounterexample} gives a function $f\colon\mathbb{R}^2\to\mathbb{R}$ and a subgradient sequence $\{x_i\}_i$ such that the Hausdorff dimension satisfies
\begin{equation}\label{eq:fractaldimension}
1<\dim\acc\{x_i\}_i=\dim\essacc\{x_i\}_i\leq \frac{\log 4}{\log 3}\approx 1.26,
\end{equation}
and actually depends on a parameter $\alpha$ that can be tweaked to produce any value of the Hausdorff dimension in this range; see Lemma \ref{lem:dimgamma}.
Although the function $f$ in that example does not satisfy the weak Sard condition, the example can be easily modified (by changing the value of $f$ on $\Gamma\cup J$ to a constant) to satisfy also this condition and still have the dimension attain any value in the range \eqref{eq:fractaldimension}.
This contrasts with the result \cite[Remark 10]{ouroscillationcompensation} that, if $f$ is Whitney $C^n$ stratifiable, then
\[\dim\acc\{x_i\}_i\leq n-1.\]
\item\label{q:11}\label{q:prevlast}\emph{Can the set of limit closed measures of the interpolant curve be infinite?}
Yes. This is the case in the situation of the example in Section \ref{sec:circle} (and also in the example of Section \ref{sec:sardcounterexample}, but for simplicity we will not prove it in that case). Please refer to Section \ref{sec:circlemeasures} for the full definitions and an explanation.
\item\label{q:12}\emph{Would the answer to any of the previous questions \ref{q:prevfirst}--\ref{q:prevlast} be different if one enforced that
the sequence be contained in the (full measure) set of
differentiability points of the function $f$? }
No, all our claims are based on
constructive existence proofs of subgradient sequences $\{x_i\}_i$
such that each point $x_i$ is contained in a ball in which the objective function $f$ is $C^\infty$.
\end{enumerate}
\paragraph{Notation.}
Given two sets $A$ and $B$, denote by $B^c$ the complement of $B$ and by $A\setminus B=A\cap B^c$.
Let $n$ be a positive integer, and let $\mathbb{R}^n$ denote $n$-dimensional Euclidean space. %
For two vectors $u=(u_1,\dots,u_n)$ and $v=(v_1,\dots,v_n)$ in $\mathbb{R}^n$, we let $u\cdot v=\sum_{i=1}^nu_iv_i$ and $\|u\|=\sqrt{u\cdot u}$. We will denote the gradient of $f$ at $x$ by $\nabla f(x)$. We denote by $\log_ba=\log a/\log b$ the logarithm of $a$ in base $b$. We denote the unit circle by $S^1$, and the open ball of radius $r$ centered at $x$ by $B_r(x)$. A number with a subscript $b$ is written in base $b$; for example, $0.12_9=1/9+2/81$. For a Lipschitz function $g\colon\mathbb{R}^n\to\mathbb{R}^m$, we denote its Lipschitz constant by
\[\lip(g)=\sup_{x,y\in\mathbb{R}^n}\frac{\|g(x)-g(y)\|}{\|x-y\|}.\]
\section{Example on the circle}
\label{sec:circle}
We construct a path-differentiable function $f\colon \mathbb{R}^2\to\mathbb{R}$ and a subgradient sequence $\{x_i\}_i$ that does not converge and instead accumulates around a circle. The function $f$ additionally has the property that it is Whitney $C^\infty$ stratifiable and satisfies a Kurdyka-\L{}ojasiewicz inequality. The construction is given in Section \ref{sec:circleconstruction} and the main properties are collected in Proposition \ref{prop:propertiesf}.
In the context of the theory developed in \cite[\S4.2]{ouroscillationcompensation}, it is also interesting that the dynamics in this example induce, through the interpolant curve, infinitely-many limiting closed measures. This is discussed in Section \ref{sec:circlemeasures}.
\subsection{Construction and main properties}\label{sec:circleconstruction}
For $i\geq 2$, let (see Figure \ref{fig:example})
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{img/ex}
\caption{The unit circle and the path joining $\{x_i\}_i$ in the example of Section \ref{sec:circle}.}
\label{fig:example}
\end{figure}
\[x_i=[1+\tfrac{(-1)^i}i](\cos\vartheta_i,\sin\vartheta_i)\quad\textrm{with}\quad \vartheta_i=\sum_{k=2}^i\frac{1}{k\log k}\]
and
\[\varepsilon_i=\|x_{i+1}-x_i\|,\quad v_i=-\frac{x_{i+1}-x_i}{\|x_{i+1}-x_i\|},\]
so that $x_{i+1}=x_i-\varepsilon_iv_i$. Note that $\varepsilon_i$ satisfies, for large $i$,
\[%
\frac{2}{i+1}<\varepsilon_i<\frac1i+\frac1{i+1}+\frac1{i\log i}< \frac2i,
\]
so that $\varepsilon_i\to0$, $\sum_i\varepsilon_i=+\infty$, and $\sum_i\varepsilon_i^2<+\infty$.
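To get a feeling for how slowly the sequence travels along the circle, note that comparing the sum defining $\vartheta_N$ with $\int dx/(x\log x)$ gives $\vartheta_N\leq\tfrac1{2\log2}+\log\log N-\log\log2$, so that, for instance, $\vartheta_N<4.2<2\pi$ for all $N\leq10^9$: even after a billion iterations the sequence has not completed a single turn around the circle, and a full revolution requires more than $10^{78}$ iterations.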
We want to obtain a function $f$ that is very close to the function $\phi$ given by the distance to the circle,
\[\phi(x)= |1-\|x\||,\]
yet satisfies
\begin{equation}\label{eq:wishes}
\nabla f(x)= v_i\quad\textrm{for all $x\in B_{1/2^{i}}(x_i)$}.
\end{equation}
Let $\psi\colon\mathbb{R}^2\to[0,1]$ be a $C^\infty$ function with radial symmetry (i.e. $\psi(x)=\psi(y)$ for $\|x\|=\|y\|$), such that $\psi(x)=1$ for $x\in B_1(0)$, $\psi(x)=0$ for $\|x\|\geq 2$, and decreases monotonically on rays emanating from the origin. Let
\[\psi_i(x)=\psi(2^i(x-x_i)),\]
so that $\psi_i$ equals 1 on $B_{1/2^i}(x_i)$ and vanishes outside $B_{1/2^{i-1}}(x_i)$. %
Note that the supports of the functions $\psi_i$ are pairwise disjoint.
Define
\[V_i(x)=(x-x_i)\cdot v_i+\frac1i.\]
\begin{prop}\label{prop:propertiesf} Let $i_0\geq 2$ and
\begin{equation}\label{eq:deff}
f(x)=\left(1-\sum_{i=i_0}^\infty\psi_i(x)\right)\phi(x)+\sum_{i=i_0}^\infty\psi_i(x)V_i(x).
\end{equation}
Then we have:
\begin{enumerate}[label=\roman*.,ref=(\roman*)]
\item\label{f:regularity} The function $f$ is $C^\infty$
on $\mathbb{R}^2\setminus (S^1\cup\{0\})$.
\item\label{f:gradientsequence} The function $f$ satisfies \eqref{eq:wishes}, so that $\{x_i\}_i$ is a subgradient sequence with stepsizes $\{\varepsilon_i\}_i$.
\item\label{f:clarke} Let $p$ be a point on the unit circle; then $\partial^c f(p)=\{ap:a\in[-1,1]\}=\partial^c\phi(p)$.
\item\label{f:crit} The critical set of $f$ is $\crit f=S^1\cup \{0\}$.
\item\label{f:pathdiff} The function $f$ is Lipschitz path-differentiable.
\item\label{f:whitney} The function $f$ is Whitney $C^\infty$ stratifiable.
\item\label{f:KL} If $i_0$ is large enough, $f$ satisfies a Kurdyka-\L{}ojasiewicz inequality of the form
\[\|\nabla f(x)\|>1/2\]
for $x\in\mathbb{R}^2\setminus \crit f$.
\end{enumerate}
\end{prop}
To prove the proposition we need
\begin{lem}\label{lem:estimates}
For $i$ large enough we have the estimates
\begin{equation}\label{it:estimate3} \left\|v_i-(-1)^i\frac{x_i}{\|x_i\|}\right\|\leq \frac{6}{\log i}
\end{equation}
and, if $\dist(x_i,y)\leq 2^{1-i}$,
\begin{equation}\label{it:estimate4}
\left\|\frac{x_i}{\|x_i\|}-\frac{y}{\|y\|}\right\|\leq 3\dist(x_i,y).
\end{equation}
\end{lem}
\begin{proof}
To show \eqref{it:estimate3}, first observe that, in the definition of $x_i$, the jump in the direction tangential to the circle has magnitude $\vartheta_{i}-\vartheta_{i-1}=1/i\log i$, while the jump in the direction normal to the circle has magnitude $\frac1i+\frac{1}{i+1}$. It follows that
\begin{gather*}
\frac{1}{2i\log i}\leq (x_{i+1}-x_i)\cdot \frac{x_i^\perp}{\|x_i\|}\leq \frac{2}{i\log i},\\
\frac{2}{i+1}\Big(1-\frac{1}{\log i}\Big)\leq \frac{2}{i+1}\sqrt{1-\frac{1}{\log^2i}}\leq (x_{i+1}-x_i)\cdot\frac{x_i}{\|x_i\|}\leq \|x_{i+1}-x_i\|,
\end{gather*}
where $(a,b)^\perp=(-b,a)$ and we have used the Cauchy--Schwarz inequality.
Since $1/i\leq\varepsilon_i=\|x_{i+1}-x_i\|\leq 2/i$, together with $v_i=-(x_{i+1}-x_i)/\varepsilon_i$ and the estimates above, we also have
\begin{equation}\label{it:estimate2}
\frac{1}{2\log i}\leq\left|v_i\cdot \frac{x_i^\perp}{\|x_i\|}\right|\leq \frac{2}{\log i},\end{equation}
and
\begin{equation}\label{it:estimate1}
\frac{i}{i+1}\left(1-\frac2{\log i}\right)\leq \left|v_i\cdot\frac{x_i}{\|x_i\|}\right|\leq 1.
\end{equation}
The estimate \eqref{it:estimate3} follows from \eqref{it:estimate2} and \eqref{it:estimate1}:
\begin{align*}
\left\|v_i-(-1)^i\frac{x_i}{\|x_i\|}\right\|
&=\sqrt{\left(v_i\cdot \frac{x_i}{\|x_i\|}-1\right)^2+\left(v_i\cdot \frac{x^\perp_i}{\|x_i\|}\right)^2}\\
&\leq \sqrt{\left(\frac{i}{i+1}\left(\frac{2}{\log i}+1\right)-1\right)^2+\left(\frac{2}{\log i}\right)^2}\\
%
&\leq \frac{4}{\log i}+\frac2i\leq \frac{6}{\log i}.
\end{align*}
Estimate \eqref{it:estimate4} can be deduced by letting $w=y-x_i$, so that $\|w\|=\dist(x_i,y)$ and observing that
\[1-\tfrac2i\leq\|x_i\|\leq 1+\tfrac2i\quad\textrm{and}\quad \|x_i+w\|=\|y\|\geq 1-\tfrac2i,\]
which means that, for $i$ large, we have
\begin{align*}
\left\|\frac{x_i}{\|x_i\|}-\frac{y}{\|y\|}\right\|
&= \left\|\frac{x_i}{\|x_i\|}-\frac{x_i+w}{\|x_i+w\|}\right\|\\
&=\frac{\big\|\,x_i(\|x_i+w\|-\|x_i\|)+w\|x_i\|\,\big\|}{\|x_i\|\,\|x_i+w\|}\\
&\leq \frac{2\|x_i\|\,\|w\|}{\|x_i\|\,\|x_i+w\|}\\
&\leq 2\frac{1+\frac2i}{(1-\frac2i)^2}\|w\|\\
&\leq 3\|w\|.\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:propertiesf}]
Item \ref{f:regularity} becomes evident once we realize that the sum \eqref{eq:deff} reduces to $f(x)=(1-\psi_i(x))\phi(x)+\psi_i(x)V_i(x)$ for $x$ in $B_{1/2^{i-1}}(x_i)$ and to $f(x)=\phi(x)$ elsewhere, since $\psi_i$, $V_i$ and $\phi$ are $C^\infty$ on $\mathbb{R}^2\setminus (S^1\cup\{0\})$.
To prove item \ref{f:gradientsequence}, note that, for $x\in B_{1/2^i}(x_i)$, we have $f(x)=V_i(x)$ and $\nabla f(x)=\nabla V_i(x)=v_i$ so that $x_{i+1}-x_i=-\varepsilon_iv_i=-\varepsilon_i\nabla f(x_i)$.
In order to prove item \ref{f:clarke}, let $p\in S^1$. Let us first show that, as $y\in\mathbb{R}^2$ with $\|y\|<1$ tends to $p$, $\nabla f(y)\to -p$. If $y\notin \bigcup_i B_{1/2^{i-1}}(x_i)$ is near $p$, then
\[\|\nabla f(y)+p\|=\|\nabla \phi(y)+p\|=\left\|-\frac{y}{\|y\|}+p\right\|,\]
which clearly tends to 0 as $y\to p$.
If $y\in B_{1/2^{i-1}}(x_i)$ (and since $\|y\|<1$ we must have $i$ odd), then
we have, using a Taylor expansion of $\phi$ around $x_i$, the identity $\nabla\phi(x_i)=-x_i/\|x_i\|$, the Cauchy--Schwarz inequality, and \eqref{it:estimate3},
\begin{align*}
|V_i(y)-\phi(y)|&\leq\left|(y-x_i)\cdot v_i+\tfrac1i-\phi(x_i)-\nabla\phi(x_i)\cdot(y-x_i)\right|+2\|y-x_i\|^2\\
&=\left|(y-x_i)\cdot (v_i+\frac{x_i}{\|x_i\|})+\tfrac1i-\tfrac1i\right|+2\|y-x_i\|^2\\
&\leq 2\|y-x_i\|\left\|v_i+\frac{x_i}{\|x_i\|}\right\|+2\left(\frac{1}{2^{i-1}}\right)^2\\
&\leq 2\frac1{2^{i-1}}\frac{6}{\log i}=\frac{12}{2^{i-1}\log i}
\end{align*}
and, since also $\nabla \phi(y)=-y/\|y\|$, $\nabla V_i(y)=v_i$, $\lip(\nabla\psi_i)=2^i\lip(\nabla\psi)$, $|\psi_i(y)|\leq 1$, the triangle inequality, the estimates from Lemma \ref{lem:estimates}, and $y\in B_{1/2^{i-1}}(x_i)$,
\begin{align*}
\left\|\nabla f(y)+\frac{y}{\|y\|}\right\|&=\left\|\nabla[(1-\psi_i(y))\phi(y)+\psi_i(y)V_i(y)]+\frac{y}{\|y\|}\right\|\\
&=\left\|\nabla \psi_i(y)(V_i(y)-\phi(y))+\psi_i(y)\left(\nabla V_i(y)+\frac{y}{\|y\|}\right)\right\| \\
&\leq \lip(\nabla \psi_i)|V_i(y)-\phi(y)|+\left\|v_i+\frac{y}{\|y\|}\right\| \\
&\leq 2^i\lip(\nabla \psi)|V_i(y)-\phi(y)|+\left\|v_i+\frac{x_i}{\|x_i\|}\right\|+\left\|\frac{x_i}{\|x_i\|}-\frac{y}{\|y\|}\right\| \\
&\leq 2^i\lip(\nabla\psi)\frac{12}{2^{i-1}\log i}+\frac{6}{\log i}+\frac3{2^{i-1}}\\
&=(12\lip(\nabla\psi)+6)\frac{2}{\log i} +\frac3{2^{i-1}}\to 0\quad\textrm{as $i\to +\infty$.}
\end{align*}
It follows from the triangle inequality that
\[\left\|\nabla f(y)+p\right\|\leq \left\|\nabla f(y)+\frac{y}{\|y\|}\right\|+\left\|p-\frac{y}{\|y\|}\right\|\]
so that, as $y\to p$ with $\|y\|<1$, we have $\nabla f(y)\to -p$.
A similar argument yields that, as $y\to p$ with $\|y\|>1$, we have $\nabla f(y)\to p$, which proves item \ref{f:clarke}.
To prove item \ref{f:pathdiff}, note that, by items \ref{f:regularity} and \ref{f:clarke}, if a Lipschitz curve $\gamma$ satisfies either $\gamma(t)\in S^1$ and $\gamma'(t)$ tangent to $S^1$, or $\gamma(t)\in\mathbb{R}^2\setminus (S^1\cup\{0\})$, then indeed we have $(f\circ\gamma)'(t)=v\cdot\gamma'(t)$ for all $v\in\partial^cf(\gamma(t))$; the same holds when $\gamma(t)=0$ and $\gamma'(t)=0$, since $f$ is Lipschitz and hence $(f\circ\gamma)'(t)=0$ in that case. On the other hand, the set of points $t$ in the domain of $\gamma$ such that either $\gamma(t)\in S^1$ but $\gamma'(t)$ is not tangent to $S^1$, or $\gamma(t)=0$ but $\gamma'(t)\neq0$, is at most countable (these points $t$ can be covered by disjoint open sets) and hence has measure zero; see also the proof of \cite[Theorem 5.3]{Davis2019}. It follows that the chain rule condition for path differentiability is satisfied for almost all $t$. Since this is true for all curves $\gamma$, $f$ is path-differentiable.
Item \ref{f:whitney} is clear in view of items \ref{f:regularity} and \ref{f:clarke}.
It follows from item \ref{f:clarke} that $S^1\subseteq \crit f$.
Recall $f=\phi$ in a neighborhood of $0$ and $0\in\crit \phi$, so $0\in\crit f$. If $x\notin \bigcup_iB_{1/2^{i-1}}(x_i)$, then $\|\nabla f(x)\|=\|\nabla \phi(x)\|=1$ and $\nabla f(x)$ is the only element of $\partial^cf(x)$, so $x\notin\crit f$. If $x\in B_{1/2^{i-1}}(x_i)$, then, taking $i_0$ large enough, we can ensure that, for $i\geq i_0$, we have, by the triangle inequality and the estimates above,
\[\|\nabla f(x)\|\geq \left\|\frac{x}{\|x\|}\right\|- \left\|\nabla f(x)-\frac{x}{\|x\|}\right\|%
>\frac12.
\]
This settles items \ref{f:crit} and \ref{f:KL}.
\end{proof}
\subsection{Limiting measures}\label{sec:circlemeasures}
Here we recall some of the theory of \cite[Section 4]{ouroscillationcompensation}, and we show that in the example constructed in Section \ref{sec:circleconstruction}, the set of limiting measures is uncountable. We also compute those measures explicitly.
\paragraph{The interpolating curve and its associated closed measures.}
Given a measure $\xi$ on $X$ and a measurable map $g\colon X\to Y$, the \emph{pushforward $g_*\xi$} is defined to be the measure on $Y$ such that, for $A\subset Y$ measurable, $g_*\xi(A)=\xi(g^{-1}(A))$.
Recall that the \emph{support $\supp\mu$} of a positive Radon measure $\mu$ on $\mathbb{R}^n$ is the set of points $x\in \mathbb{R}^n$ such that $\mu(U)>0$ for every neighborhood $U$ of $x$. It is a closed set.
\begin{defn}\label{def:closedmeasure}
A compactly-supported, positive, Radon measure on $\mathbb{R}^n\times\mathbb{R}^n$ is \emph{closed} if, for all functions $f\in C^\infty(\mathbb{R}^n)$,
\[\int_{\mathbb{R}^n\times\mathbb{R}^n}\nabla f(x)\cdot v\,d\mu(x,v)=0.\]
\end{defn}
Let $\pi\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^n$ be the projection $\pi(x,v)=x$. To a measure $\mu$ in $\mathbb{R}^n\times\mathbb{R}^n$ we can associate its \emph{projected measure $\pi_*\mu$}. We have $\supp\pi_*\mu=\pi(\supp\mu)\subseteq\mathbb{R}^n$.
Let $\gamma\colon\mathbb{R}_{\geq 0}\to\mathbb{R}^n$ be the curve linearly interpolating the sequence $\{x_i\}_i$ with $\gamma(t_i)=x_i$ for $t_i=\sum_{j=0}^{i-1}\varepsilon_j$ and $\gamma'(t)=-v_i$ for $t_i<t<t_{i+1}$.
\medskip
For a bounded set $B\subset\mathbb{R}_{\geq0}$, we define a measure on $\mathbb{R}^n\times\mathbb{R}^n$ by
\[\meas{\gamma}{B}=\frac{1}{|B|}(\gamma,\gamma')_*\mathsf{Leb}_{B},\]
where $|B|=\int_B 1\,dt$ is the length of $B$, and $\mathsf{Leb}_B$ is the Lebesgue measure on $B$ (so that $\mathsf{Leb}_B(A)=|A|$ for $A\subseteq B$ measurable).
If $\varphi\colon\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}$ is measurable, then
\[\int_{\mathbb{R}^n\times\mathbb{R}^n}\varphi\,d\meas{\gamma}{B}=\frac1{|B|}\int_B\varphi(\gamma(t),\gamma'(t))\,dt.\]
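For instance, if $B=[0,T]$ and $\gamma$ is a Lipschitz loop with $\gamma(0)=\gamma(T)$, then $\meas{\gamma}{[0,T]}$ is closed: for every $f\in C^\infty(\mathbb{R}^n)$,
\[\int_{\mathbb{R}^n\times\mathbb{R}^n}\nabla f(x)\cdot v\,d\meas{\gamma}{[0,T]}(x,v)=\frac1T\int_0^T\nabla f(\gamma(t))\cdot\gamma'(t)\,dt=\frac{f(\gamma(T))-f(\gamma(0))}T=0.\]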
\begin{lem}[{\cite[Lemmas 20 and 21]{ouroscillationcompensation}}]\label{lem:longintervals}
In the weak* topology, the set of limit points of the sequence $\{\meas{\gamma}{[0,N]}\}_N$ is nonempty, and its elements are closed probability measures. Also,
\[\overline{\bigcup_{\mu\in\acc\{\meas{\gamma}{[0,N]}\}_N}\pi(\supp\mu)}=\essacc\{x_i\}_i.\]
\end{lem}
A measure $\mu$ on $\mathbb{R}^n\times\mathbb{R}^n$ can be fiberwise disintegrated as
\[\mu=\int_{\mathbb{R}^n}\mu_x\,d\pi_*\mu(x),\]
where $\mu_x$ is a probability on $\mathbb{R}^n$ for each $x\in\mathbb{R}^n$.
We define the \emph{centroid field $\bar v_x$} of $\mu$ by
\[\bar v_x=\int_{\mathbb{R}^n}v\,d\mu_x(v).\]
An important intermediate result of \cite{ouroscillationcompensation} is
\begin{thm}[Subgradient-like closed measures are trivial {\cite[Theorem 23]{ouroscillationcompensation}}]\label{thm:vxvanishes}
Assume that $f\colon\mathbb{R}^n\to\mathbb{R}$ is a path-differentiable function.
Let $\mu$ be a closed measure on $\mathbb{R}^n\times\mathbb{R}^n$, and assume that every $(x,v)\in\supp\mu$ satisfies $-v\in\partial^cf(x)$. Then the centroid field $\bar v_x$ of $\mu$ vanishes for $\pi_*\mu$-almost every $x$.
\end{thm}
\paragraph{Analysis of the example.}
Let $\gamma$ be the interpolating curve of the sequence $\{x_i\}_i$, as defined in Section \ref{sec:circleconstruction}.
In this example, the set of limit points of the sequence $\{\meas{\gamma}{[0,N]}\}_N$ consists of all measures on $T\mathbb{R}^2$ given by
\begin{equation}\label{eq:mutheta}\mu^{\theta_0}=\int_{\theta_0-2\pi}^{\theta_0}\frac{\delta_{(r(\theta),r(\theta))}+\delta_{(r(\theta),-r(\theta))}}2\frac{e^{\theta-\theta_0}}{1-e^{-2\pi}}\,d\theta,\quad \theta_0\in\mathbb{R},
\end{equation}
where $r(\theta)=(\cos \theta,\sin \theta)$ and $\delta_{(u,v)}$ denotes the Dirac delta in $\mathbb{R}^2\times\mathbb{R}^2$ concentrated at $(u,v)$. This is the measure that captures the dynamics occurring whenever $x_N$ has angle $\vartheta_N$ close to $\theta_0$. Of course, we have $\mu^{\theta_1}=\mu^{\theta_2}$ if $\theta_2-\theta_1$ is an integer multiple of $2\pi$, as well as $R^\xi_*\mu^{\theta_0}=\mu^{\xi+\theta_0}$ for $R^\xi$ the rotation by angle $\xi$.
Before proving \eqref{eq:mutheta}, we remark that in accordance with Theorem \ref{thm:vxvanishes} we have, for $x\in S^1$,
\[\mu_x^{\theta_0}=\frac{\delta_{x}+\delta_{-x}}2\]
and
\[\bar v_x=\int_{\mathbb{R}^2} v\,d\mu_x(v)=x-x=0.\]
Also the conclusion of Lemma \ref{lem:longintervals} is verified: we have \[\essacc\{x_i\}_i=\pi(\supp\mu^{\theta_0})=S^1,\]
and each $\mu^{\theta_0}$ is a closed probability measure.
Let us see how to arrive at \eqref{eq:mutheta}.
From the construction, it is clear that these measures must have the form \begin{equation*}
\int_{\theta_0-2\pi}^{\theta_0}\frac{\delta_{(r(\theta),r(\theta))}+\delta_{(r(\theta),-r(\theta))}}2\varrho(\theta)\,d\theta
\end{equation*}
for some density $\varrho$ on $\mathbb{R}$;
the sum of Dirac deltas in \eqref{eq:mutheta} can be deduced from the fact that the vectors $v_i$ asymptotically approach $y$ and $-y$ as $x_i\to y\in S^1$ (with a subsequence), together with $\gamma'(t)=-v_i$ for $t_i<t<t_{i+1}$.
Let us compute the density $\varrho$.
Let $I\subset\mathbb{R}$ be an interval of length $0<\alpha=|I|\leq 2\pi$. Considering $I$ as an arc in the circle, we will write \[\beta\in I\!\!\!\!\mod 2\pi\]
if $\beta\in\mathbb{R}$ and there is some $k\in\mathbb{Z}$ such that $\beta+2\pi k\in I$.
Let
\[m_0=\min\{i:\vartheta_i\in I\!\!\!\!\mod 2\pi\}.\]
%
Writing $P\approx Q$ if $P/Q\to1$ as $N\to+\infty$, if $m<n$ are two integers such that $\alpha=\vartheta_{n}-\vartheta_m=\sum_{k=m+1}^{n}\frac1{k\log k}$, then
\[\alpha\approx\int_m^n\frac{dx}{x\log x}=\log \log n-\log\log m;\]
thus $n\approx m^{e^\alpha}$.
In other words, the intervals $J\subset\mathbb{N}$ of indices $i$ with $\vartheta_i\in I\!\!\!\!\mod 2\pi$ are approximately
\[[m_0,m_0^{e^\alpha}],\;[m_0^{e^{2\pi}},m_0^{e^{\alpha+2\pi}}],\;[m_0^{e^{4\pi}},m_0^{e^{\alpha+4\pi}}],\;\dots,\;[m_0^{e^{2k\pi}},m_0^{e^{\alpha+2k\pi}}],\;\dots\]
Letting $k_N\in\mathbb{N}$ be such that $N=m_0^{e^{\alpha+2\pi k_N}}$, we compute
\begin{align*}
\frac{\sum_{\substack{\vartheta_i\in I\\i\leq N}}\varepsilon_i}{\sum_{i=2}^N\varepsilon_i}
&\approx \frac{\sum_{\substack{\vartheta_i\in I\\i\leq N}}2/i}{\sum_{i=2}^N2/i}\\
&\approx\frac{1}{\log N}\sum_{k=0}^{k_N}\int_{m_0^{e^{2k\pi}}}^{m_0^{e^{\alpha+2k\pi}}}\frac{dx}x\\
&=\frac{1}{\log N}\sum_{k=0}^{k_N}(e^\alpha-1)e^{2k\pi}\log m_0\\
&=\frac{(e^\alpha-1)\log m_0}{\log N}\frac{e^{2\pi( k_N+1)}-1}{e^{2\pi}-1}\\
&=\frac{(e^\alpha-1)\log m_0}{\log N}\frac{e^{2\pi-\alpha}\log N/\log{m_0}-1}{e^{2\pi}-1}\\
&\to\frac{1-e^{-\alpha}}{1-e^{-2\pi}}\eqqcolon p(\alpha)
\end{align*}
as $N\to+\infty$.
To compute $\varrho$, we apply that to an interval $I$ of the form $[\theta,\theta_0]=[\theta_0-\alpha,\theta_0]$ and we take the derivative
\[\varrho(\theta)=\frac{dp(\theta_0-\theta)}{d\theta}=\frac{d}{d\theta}\frac{1-e^{-(\theta_0-\theta)}}{1-e^{-2\pi}}=\frac{e^{\theta-\theta_0}}{1-e^{-2\pi}},\quad \theta\in[\theta_0-2\pi,\theta_0).\]
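For instance, $p(\pi)=(1-e^{-\pi})/(1-e^{-2\pi})\approx0.96$: along the subsequence singled out by a given $\theta_0$, roughly $96\%$ of the accumulated step length is spent on the trailing half-circle $\{r(\theta):\theta\in[\theta_0-\pi,\theta_0]\}$, in agreement with the exponential concentration of $\varrho$ near $\theta_0$.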
\section{Example on a fractal set}
\label{sec:sardcounterexample}
In the spirit of Whitney's counterexample \cite{whitney} to the Morse--Sard theorem, we construct a function $f\colon\mathbb{R}^2\to\mathbb{R}$ and a bounded subgradient sequence $\{x_i\}_i$ satisfying:
\begin{enumerate}[label=C\arabic*.,ref=C\arabic*]
\item\label{itex:pathdiff}\label{itex:first} $f$ is path-differentiable,
\item\label{itex:nonconstcrit} $f(\crit f)\supset f(\essacc\{x_i\}_i)= f(\acc\{x_i\}_i)=[0,1]$,
%
\item\label{itex:essacc} The accumulation set $\acc\{x_i\}_i$ is not contained in $\crit f$, and \[\essacc\{x_i\}_i\neq \acc\{x_i\}_i.\]
\item\label{itex:fconv} $\{x_i\}_i$ and $\{f(x_i)\}_i$ do not converge.
\item \label{itex:dim} The Hausdorff dimensions of $\essacc\{x_i\}_i$ and $\acc\{x_i\}_i$ are greater than 1 and satisfy \eqref{eq:fractaldimension}.
\item \label{itex:slowdown} There are points $x$ and $y$ in $\acc\{x_i\}_i$ such that we can take subsequences $\{x_{i_k}\}_k$ and $\{x_{i'_k}\}_k$ converging to $x$ and $y$, respectively, with $i_k< i'_k<i_{k+1}$ for all $k$ and \[\sup_k\sum_{p=i_k}^{i'_k}\varepsilon_p<+\infty.\]
\item \label{itex:osccomp} There is no oscillation compensation on $\acc\{x_i\}_i\setminus\essacc\{x_i\}_i$. This means, precisely, that there is a continuous function $Q\colon\mathbb{R}^n\to[0,1]$ such that
\begin{equation}\label{eq:speedaverages}
\liminf_{N\to+\infty}\left\|\frac{\sum_{i=0}^{N}\varepsilon_iv_i Q(x_i)}{\sum_{i=0}^{N}\varepsilon_i Q(x_i)}\right\|>0.
\end{equation}
Crucially, since we strive to show that the dynamics on $\acc\{x_i\}_i\setminus\essacc\{x_i\}_i$ may be very different to the one displayed on $\essacc\{x_i\}_i$, we are not requiring the condition from \cite[Theorem 6(ii)]{ouroscillationcompensation}, namely, the existence of a sequence $\{N_i\}_i$ with
\[\liminf_{j\to+\infty}\frac{\sum_{i=1}^{N_j}\varepsilon_i Q(x_i)}{\sum_{i=0}^{N_j}\varepsilon_i}>0,\]
which would force the focus to be on the dynamics around $\essacc\{x_i\}_i$.
\item \label{itex:perp}\label{itex:last} The oscillations near $\acc\{x_i\}_i\setminus\essacc\{x_i\}_i$ are not asymptotically perpendicular to $\acc\{x_i\}_i$.
\end{enumerate}
\paragraph{Outline.}
To construct the function $f$, we will first define a fractal curve $\Gamma$ and $f$ on it, aiming to have $\Gamma\subset\crit f$ and $f(\Gamma)=[0,1]$. We will also define a curve $J$ such that $\Gamma\cup J$ is a closed loop and $J$ only intersects $\crit f$ at its endpoints. We will construct an auxiliary path-differentiable function $h$ coinciding with $f$ on the curve $\Gamma$, and in Lemma \ref{lem:defh} we will prove some properties of $h$. We will next construct a series of loops $T_0,T_1, T_2,\dots$ that will help us define the sequence $\{x_i\}_i$, which we carefully specify so that it is almost a subgradient sequence of $h$. The dynamics of $\{x_i\}_i$ around $\Gamma$ will mimic that of the sequence in the example of Section \ref{sec:circle}, and near $J$ it will instead move relatively fast. To obtain $f$, we modify $h$ slightly in a way that ensures that $\{x_i\}_i$ is a subgradient sequence. In Proposition \ref{prop:propertiesf2} we show that $f$ has certain properties, which we will finally link, in our concluding remarks, to claims \ref{itex:first}--\ref{itex:last} above.
The reader will find this example easier to follow after having looked at the construction of Section \ref{sec:circleconstruction}. The role of the function $\phi$ in that construction is taken by the function $h$ in the one presented below.
\paragraph{ A fractal curve.} Pick $\frac14<\alpha\leq\frac13$.
We begin by constructing a set $\Gamma\subset\mathbb{R}^2$ recursively as illustrated in Figure \ref{fig:Gamma}. For the first step, we pick four disjoint squares of side $\alpha$ inside the unit square, and we let $\Gamma_1$ be the closed set consisting of the five disjoint paths joining the left and bottom sides of the unit square with those four squares successively, as in the figure. In each of the following inductive steps, we rescale the set $\Gamma_{i}$ we had for the previous step and we place new copies inside each of the four squares, perhaps rotated by an angle $\pi/2$, so that the paths making up $\Gamma_1$ connect with those of each rescaled copy of $\Gamma_i$. The set $\Gamma_{i+1}$ is then the union of $\Gamma_1$ with the four rescaled and appropriately rotated copies of $\Gamma_i$. This defines an increasing sequence of sets $(\Gamma_i)_{i\in \mathbb{N}}$ and $\Gamma=\overline{\bigcup_i\Gamma_i}$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/whitney-critf}
\caption{The first three steps $\Gamma_1$, $\Gamma_2$, and $\Gamma_3$ in the construction of the set $\Gamma$.}
\label{fig:Gamma}
\end{figure}
We proceed to parameterize $\Gamma$ with a continuous curve $\phi\colon[0,1]\to\mathbb{R}^2$. To do this, we will imitate the procedure in the construction of the Cantor staircase. Thus, we first divide $[0,1]$ into nine contiguous intervals of equal length, namely, the nine intervals (we write in base 9)
\[[0_9,0.1_9),[0.1_9,0.2_9),\dots,[0.7_9,0.8_9),[0.8_9,1_9].\]
We define the map $\phi$ on each of the five odd-numbered intervals
\[[0_9,0.1_9),[0.2_9,0.3_9),[0.4_9,0.5_9),[0.6_9,0.7_9),[0.8_9,1_9]\]
to map the corresponding interval to one of the intervals making up $\Gamma_1$ (in Figure \ref{fig:Gamma}, these are the five blue curves in the left-hand diagram). Then iteratively, at step $i$, we divide each of the remaining intervals into nine equal subintervals, and we map the odd-numbered subintervals into the pieces of $\Gamma_i\setminus\Gamma_{i-1}$.
Thus for example the interval $[0.1_9,0.2_9)$ gets divided into
\[[0.1_9,0.11_9),[0.11_9,0.12_9),\dots,[0.17_9,0.18_9),[0.18_9,0.2_9),\]
and the images of the intervals $[0.1_9,0.11_9)$ and $[0.18_9,0.2_9)$ will touch the images of the intervals $[0.0_9,0.1_9)$ and $[0.2_9,0.3_9)$, but the intervals
\[[0.12_9,0.13_9),[0.14_9,0.15_9),[0.16_9,0.17_9)\]
will not touch the image of the curve defined in the previous step; refer to the middle diagram in Figure \ref{fig:Gamma}.
The map $\phi$ is the unique continuous extension of the thus-defined function.
The resulting curve $\phi$ has infinite arc length. Indeed at each construction step of the $\Gamma_i$, the paths in $\Gamma_i\setminus\Gamma_{i-1}$ are contained in $4^{i-1}$ squares, each of them contributing in an increase of at least $2\alpha^i$ in the total length. This results in a global increase of at least $(4\alpha)^i/2>1/2$ in the $i$-th step.
Let $p\colon[0,1]\to\mathbb{N}\cup\{+\infty\}$ be the function that assigns to a number $t$ the first appearance of an even digit after the decimal point in its base 9 expansion, so that for example $p(0_9)=1$ and $p(0.757823_9)=4$. Thus if $t\in[0,1]$ and $p(t)<+\infty$, then $\phi(t)\in\Gamma_{p(t)}\setminus\Gamma_{p(t)-1}$, and if $p(t)=+\infty$ then $\phi(t)$ is a point in the Cantor set $\Gamma\setminus\bigcup_i\Gamma_i$ at the intersection of all the squares used in the construction.
\begin{lem}\label{lem:dimgamma}
The Hausdorff dimension of $\Gamma$ is $\log_\alpha\frac14$.
\end{lem}
\begin{proof}
The definition of Hausdorff dimension was recalled in \ref{q:10} in Section \ref{sec:intro}.
Let $r>0$. As explained above, the length of $\Gamma_i\setminus\Gamma_{i-1}$ is at least $(4\alpha)^i/2$. Thus, a lower bound on the number of balls of radius $r$ necessary to cover $\Gamma_i\setminus\Gamma_{i-1}$ is $(4\alpha)^i/2r-1$ balls, for $i$ such that $\alpha^i>r$, i.e., $i<\log_\alpha r$. We have, for $d>0$,
\begin{align*}
\mathcal H^d(\Gamma)&\geq\mathcal H^d(\textstyle\bigcup_i\Gamma_i)\\
& \geq\liminf_{r\to 0}\sum_{i=1}^{\log_\alpha r-1}r^d\left(\frac{(4\alpha)^i}{2r}-1\right)\\
&=\liminf_{r\to0}\frac12r^{d-1}\frac{(4\alpha)^{\log_\alpha r}-1}{4\alpha-1}-(\log_\alpha r-1)r^d\\
&=\liminf_{r\to0}\frac12r^{d-1}\frac{(4\alpha)^{\log_\alpha r}}{4\alpha-1}\\
&=\liminf_{r\to0}\frac{1/2}{4\alpha-1}\exp(\left(d-1+\log_\alpha(4\alpha)\right)\log r)\\
&=\liminf_{r\to0}\frac{1/2}{4\alpha-1}\exp(\left(d-\log_\alpha\tfrac14\right)\log r).
\end{align*}
Hence in order to have $\mathcal H^d(\Gamma)=0$ it is necessary that $d>\log_\alpha\frac14$ because this $\liminf$ must vanish and $\log r\to-\infty$. This translates to $\dim \Gamma\geq\log_\alpha\frac14$.
Let us prove the opposite inequality.
For $r>0$ we cover $\Gamma_i\setminus\Gamma_{i-1}$ with $A(4\alpha)^i/r$ balls of radius $r$ for $i$ such that $\alpha^i\geq r$; here $A>0$ is taken so that $4A\alpha^i$ is an upper bound for the contribution of the paths in each of the $4^{i-1}$ squares added.
Since $\Gamma\setminus\bigcup_i\Gamma_i$ is also the intersection of the squares in the construction above, we know that it can be covered by $4^k$ balls of radius $2\alpha^k$, and these balls will cover the remaining part of $\bigcup_i\Gamma_i$. Hence we have, with $r=2\alpha^k$ and its consequence $\log_\alpha r=k+\log_\alpha2$,
\begin{align*}
\mathcal H^d(\Gamma)
&\leq\mathcal H^d(\Gamma\setminus\textstyle\bigcup_i\Gamma_i)+\mathcal H^d(\textstyle\bigcup_i\Gamma_i)\\
&\leq \displaystyle\liminf_{k\to+\infty} 4^kr^d+\liminf_{k\to+\infty} \sum_{i=1}^{\log_\alpha r}r^d\frac{A(4\alpha)^i}{r}\\
&= \displaystyle\liminf_{k\to+\infty} 4^k\big(2\alpha^k\big)^d+\liminf_{k\to+\infty} \sum_{i=1}^{k+\log_\alpha 2}(2\alpha^k)^d\frac{A(4\alpha)^i}{2\alpha^k}\\
&\leq\liminf_{k\to+\infty}e^{k(\log4+d\log\alpha)+d\log 2}+\liminf_{k\to+\infty} A(2\alpha^k)^{d-1}\frac{(4\alpha)^{k+\log_\alpha 2+1}-1}{4\alpha-1},
\end{align*}
which vanishes unless $\log4+d\log\alpha>0$, that is, unless $d<\log_\alpha\frac14$. This gives $\dim \Gamma\leq\log_\alpha\frac14$.
\end{proof}
\paragraph{Defining $f$ on $\Gamma\cup J$.} We define $f$ on $\Gamma$ imitating the construction of the Cantor staircase as follows. For a point $q\in\Gamma_i$, we let $t=\phi^{-1}(q)$, and we express $t$ in base 9, so that the first $k=p(t)-1$ numbers $a_1$, $a_2$, \dots $a_{k}$ in the base 9 expansion $t=(0.a_1a_2a_3\dots)_9$ are odd. We then let, for $1\leq i\leq k$, $b_i=(a_i-1)/2$, and $f(q)=(0.b_1b_2\dots b_k)_4$ in base 4. The values so-assigned for $f$ are illustrated in Figure \ref{fig:fvalues}. The reader will convince herself that with this definition, $f$ is constant on each path-connected component of $\bigcup_i\Gamma_i$ and can be uniquely extended to a continuous function on all of $\Gamma$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/whitney-values}
\caption{Values of $f$ on the set $\Gamma_3$. All numbers are in base 4.}
\label{fig:fvalues}
\end{figure}
We remark that the function $f\circ\phi\colon[0,1]\to[0,1]$, just like the Cantor staircase, is continuous but not absolutely continuous; indeed, since it is constant on the intervals where $p$ is constant, its derivative $(f\circ\phi)'$ vanishes almost everywhere on $[0,1]$, yet $f\circ\phi$ is not constant, contradicting the fundamental theorem of calculus, which is valid for absolutely continuous functions. %
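The digit manipulation defining $f$ on $\Gamma$ can be made concrete with the following short sketch, which takes (a prefix of) the base-9 digits of $t$ and returns $f(\phi(t))$; it is meant only as an illustration of the rule above.
\begin{verbatim}
# Sketch of the digit map t -> f(phi(t)): read the base-9 digits of t up to the
# first even digit, send each preceding odd digit a to (a-1)/2, and read the
# resulting digits in base 4.
def f_of_phi(base9_digits):
    value, scale = 0.0, 1.0
    for a in base9_digits:
        if a % 2 == 0:             # first even digit: the remaining digits are irrelevant
            break
        scale /= 4.0
        value += ((a - 1) // 2) * scale
    return value

print(f_of_phi([7, 5, 7, 8, 2, 3]))   # t = 0.757823_9, p(t) = 4  ->  0.323_4 = 0.921875
print(f_of_phi([1, 1, 2]))            # t = 0.112_9,    p(t) = 3  ->  0.0
\end{verbatim}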
Let $J\subset \mathbb{R}^2\setminus((0,1)\times(0,1))$ be a smooth, non-self-intersecting curve joining the two intersections of $\Gamma$ with the boundary of the unit square. We define $f$ on $J$ to smoothly and strictly monotonously take the values between $0$ and $1$, keeping $f$ continuous.
\paragraph{Lipschitz continuity of $f$ on $\Gamma\cup J$.}
Let $j\colon(0,1)\times(0,1)\to\mathbb{N}\cup\{+\infty\}$ be, for each pair of points $s$ and $t$ in $(0,1)$, the position of the first digit in which the base-9 expansions of $s$ and $t$ differ; thus for example $j(0.112_9,0.1223_9)=2$.
Since we have $|s-t|>9^{-j(s,t)-1}$, as $|s-t|\to0$ we necessarily have $j(s,t)\to+\infty$.
Note also that if $j(s,t)>0$, then $\phi(s)$ and $\phi(t)$ must be contained in the same square of side $\alpha^{j(s,t)-1}$.
Thus, for some $A,B>0$,
$|\phi(s)-\phi(t)|\geq A\alpha^{j(s,t)}$ and $|f\circ\phi(s)-f\circ\phi(t)|\leq B 4^{-j(s,t)}$.
Thus if $x,y\in\Gamma$, letting $s$ and $t$ be such that $\phi(s)=x$ and $\phi(t)=y$, we have $\|x-y\|=|\phi(s)-\phi(t)|\geq A\alpha^{j(s,t)}$ or, equivalently, $-j(s,t)\leq-\log_\alpha \frac{\|x-y\|}A$. Also,
\begin{align*}
|f(x)-f(y)|
&=|f\circ\phi(s)-f\circ\phi(t)|\\
&\leq B4^{-j(s,t)}\\
&\leq B4^{-\log_\alpha\frac{\|x-y\|}A}\\
&=\frac BA\|x-y\|^{\log_\alpha\frac14} \\
&\leq \frac BA\|x-y\|.
\end{align*}
because $\alpha>1/4$ so $\log_\alpha1/4>1$. This, together with the smoothness of $f$ on $J$ implies that $f$ on $\Gamma\cup J$ is Lipschitz. Let $\lip(f|_{\Gamma\cup J})$ be the Lipschitz constant of $f$ on $\Gamma\cup J$.
\paragraph{The auxiliary function $h$.}
Let $C$ be a connected component of $J\cup\Gamma_i$ for some $i$, without its endpoints. As such, $C$ is a smooth, non-self-intersecting curve, diffeomorphic to an open interval. As is well known (see for example \cite[p. 109]{lang}) there exists a tubular neighborhood $W_C$ around $C$, by which we mean specifically:
\begin{itemize}
\item there is an open set $W_C\subset\mathbb{R}^2$ that contains $C$,
\item there is an open set $U\subset\mathbb{R}^2$ of the form $(a,b)\times (-c,c)$ for some $a,b,c\in\mathbb{R}$, $c>0$, and
\item there is a smooth, bijective function $\varphi_C\colon \overline U\to \overline{W_C}$ such that
\begin{itemize}
\item the map $x\mapsto\varphi_C(x,0)$ is a parameterization of $C$ by arclength,
\item the map $y\mapsto\varphi_C(x,y)$ is a parameterization, by arclength, of the segment perpendicular to $C$ and passing through $x$.
\end{itemize}
\end{itemize}
We will refer to $\varphi_C$ as the \emph{chart} of $W_C$, and to the number $c>0$ as the \emph{thickness} of $W_C$.
The statement of existence of the tubular neighborhoods $W_C$ is obvious if we choose all $\Gamma_i$ and $J$ to be composed of straight line segments and circle arcs, so readers unfamiliar with the general case may assume that this is the case.
\begin{lem}\label{lem:defh}
There is a function $h\colon\mathbb{R}^2\to\mathbb{R}$ such that
\begin{enumerate}[label=\roman*.,ref=(\roman*)]
\item \label{ith2:pathdiff} $h$ is locally Lipschitz and path-differentiable.
\item \label{ith2:extendsf} $h$ coincides with $f$ on $\Gamma\cup J$.
\item \label{ith2:C1} $h$ is $C^1$ on $\mathbb{R}^2\setminus (\Gamma\cup J)$.
\item \label{ith2:piecewisesmooth} On a tubular neighborhood $W_C$ of each connected component $C$ of $J\cup \Gamma_i$, $h$ is defined by
\begin{equation}\label{eq:defh}
h(\varphi_C(x,y))=f(\varphi_C(x,0))+2L|y|,
\end{equation}
where $\varphi_C$ is the chart of $W_C$ and $L=\lip(f|_{\Gamma\cup J})$. Hence
$h$ is piecewise $C^\infty$ in $W_C$, with the singular locus of $h$ within $W_C$ coinciding exactly with $C$.
\item \label{ith2:clarke} If $p\in \Gamma_i$ for some $i$, and if $\mathbf n$ is a unit vector normal to $\Gamma_i$ at $p$, then
\[\partial^ch(p)=\{\lambda \mathbf n:-2L\leq \lambda\leq 2L\}, \quad p\in \textstyle\bigcup_i \Gamma_i.\]
More precisely, the gradients of $h$ on each side of $\Gamma_i$ at $p$ are asymptotically equal to $2L\mathbf n$ and $-2L\mathbf n$, respectively, pointing away from $\Gamma_i$.
Similarly, if now $p\in J$, $\mathbf n$ is a unit vector normal to $J$ at $p$, and $\mathbf t$ is the unit vector tangent to $J$ at $p$ that points in the clockwise direction (for the loop $\Gamma\cup J$) and if $a>0$ is the magnitude of the derivative of $f|_J$ at $p$, then
\[\partial^ch(p)=\{-a\mathbf t+\lambda \mathbf n:-2L\leq \lambda\leq 2L\},\quad p\in J.\]
More precisely, the gradients of $h$ on each side of $J$ at $p$ are asymptotically equal to $-a\mathbf t+2L\mathbf n$ and $-a\mathbf t-2L\mathbf n$, respectively, pointing away from $J$.
\item \label{ith2:hessian} The norm of the Hessian of $h$ is bounded on each connected component of $W_C\setminus C$, for $C$ and $W_C$ as in item \ref{ith2:piecewisesmooth}.
\item \label{ith2:crit} $\Gamma\subseteq\crit h$.
\end{enumerate}
\end{lem}
This lemma will be proved in Appendix \ref{sec:proofh}.
\paragraph{A skeleton curve for the sequence.} We shall now define a sequence of smooth loops $T_0,T_1,T_2,\dots$ that will guide the trajectory of the sequence $\{x_i\}_i$. Figure~\ref{fig:trajectories} illustrates the shapes of the first elements of the sequence of closed curves that we now proceed to construct.
The first one, $T_0$, will simply be a small loop around the origin, containing $J=T_0\setminus((0,1)\times(0,1))$ and closing it up with a circular arc contained in $[0,1]\times[0,1]$.
For $i>0$, the path $T_i$ will be equal to $\Gamma_i\cup J$ together with some small circular arcs glued to close up the loose ends in such a way that we obtain a smooth loop that does not touch the $4^{i+1}$ smaller squares of side $\alpha^{i+1}$ involved in the construction of $\Gamma_{i+1}$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{img/whitney-trajectory}
\caption{The first three loops, $T_0$, $T_1$, $T_2$, used to define the subgradient sequence $\{x_i\}_i$. The loop $T_i$ contains $J$ and, for $i>0$, it also contains $\Gamma_i$. }
\label{fig:trajectories}
\end{figure}
\paragraph{Specification of the sequence $\{x_i\}_i$.}
Unlike what we did for the example described in Section \ref{sec:circle}, we will not try here to define $\{x_i\}_i$ explicitly; instead, we will take the lesson from that example as to what this sequence should look like. We pick $\{x_i\}_i$ to be a sequence of distinct points with $\|x_{i+1}-x_i\|\to0$ that successively bounces around each path $T_0, T_1, T_2,\dots$. Thus, the sequence will start near $J$ and go around $T_0$ a few times; while it is at $J$, it will start going around $T_1$, which it will again do a few times, and then $T_2$, and so on.
Let $L=\lip(f|_{\Gamma\cup J})$ and let $I_0, I_1,\dots\subset\mathbb{N}$ be the intervals during which $x_i$ will be going around each of the paths $T_j$, respectively. We will choose an initial value $i_0>0$ such that the sequence $\{x_i\}_{i=i_0}^\infty$ will satisfy:
\begin{enumerate}[label=S\arabic*.,ref=S\arabic*]
\item\label{S:first}\label{S:notselfaccum} Not self-accumulating. We require the sequence $\{x_i\}_i$ to be such that, for each $i$, there is some $r>0$ such that $B_r(x_i)\cap\{x_j\}_{j\neq i}=\emptyset$.
\item\label{S:thickness} If $i\in I_j$ then
\[\frac 2{iL}\leq \operatorname{thickness}(W_C)\]
for all connected components $C$ of $\Gamma_j\cup J$.
\item\label{S:bouncing} Bouncing. If $j>0$ and $i,i+1\in I_j$, the points $x_i$ and $x_{i+1}$ are on opposite sides of $T_j$.
\item\label{S:closetoTi} Distance to $T_j$. For $i\in I_j$, we require the points $x_i$ to remain at distance roughly $\frac1{iL}$ from $T_j$, in the sense that
\[\left|\dist(x_i,T_j)-\frac{1}{iL}\right|\leq \frac1{i^2}.\]
\item\label{S:aroundsmooth} Around $\Gamma_j\cup J$. Recall from Lemma \ref{lem:defh} that $h$ is piecewise smooth near $\Gamma_j\cup J$. If $j>0$, $i\in I_j$, and the closest point of $T_j$ to $x_i$ is $y\in \Gamma_j\cup J$, and if $\mathbf t$ is the unit vector tangent to $\Gamma_j\cup J$ at $y$ pointing in the clockwise direction, then we require $h$ to be differentiable at $x_i$ and
\[\left|\Bigl[x_{i+1}-x_i+\frac1{iL}\nabla h(x_i)\Bigr]\cdot \mathbf t-\frac{1}{i\log i}\right|\leq\frac1{i^2}.\]
\item\label{S:aroundcloseoff} Around the circle arcs $T_j\setminus(\Gamma_j\cup J)$.
If $j>0$, $i\in I_j$, and the point $y$ of $T_j$ closest to $x_i$ is in $T_j\setminus(\Gamma_j\cup J)$, and if $\mathbf t$ is a unit vector tangent to $T_j$ at $y$ pointing in the clockwise direction, we require
\[\frac{3}{iL}\leq (x_{i+1}-x_i)\cdot \mathbf t\leq \frac{4}{iL}.\]
\item\label{S:last}\label{S:nobigjumps} Small jumps.
For all $i$ in the situation of \ref{S:aroundsmooth},
\[\|x_{i+1}-x_i+\frac1{iL}\nabla h(x_i)\|\leq \frac{3}{iL}.\]
For all $i$ in the situation of \ref{S:aroundcloseoff},
\[\|x_{i+1}-x_i\|\leq \frac{6}{iL}.\]
\end{enumerate}
Let us explain how such a sequence can be constructed. First, we choose $i_0>0$ large enough that if $C_1$ is the connected component of $J\cup \Gamma_1$ containing $J$, then $2/(i_0L)\leq \operatorname{thickness}(W_{C_1})$. We then choose $x_{i_0}$ in $W_{C_1}$ such that the point of $C_1$ closest to $x_{i_0}$ is in $J$ and such that \ref{S:closetoTi} is satisfied with $j=0$.
By induction, assuming that for some $i\geq i_0$ we have chosen $x_i$ satisfying \ref{S:first}--\ref{S:last}, we let $x_{i+1}$ be a point, in the component on the opposite side of $T_j$ (thus complying with \ref{S:bouncing}), of the nonempty set $X_i$ determined by \ref{S:closetoTi} and \ref{S:nobigjumps} together with either \ref{S:aroundsmooth} or \ref{S:aroundcloseoff}, depending on the location of $x_i$. The set $X_i$ is indeed nonempty because the inequality in \ref{S:closetoTi} determines two stripes running parallel to $T_j$, while \ref{S:aroundsmooth} and \ref{S:aroundcloseoff} determine stripes perpendicular to $T_j$. So they intersect (with at least one connected component of the intersection on each side of $T_j$) as long as the step size is small enough with respect to the curvature of $T_j$; this can be ensured in the case of $j=0$ by increasing $i_0$, and in the case of $j>0$ by increasing the number of times the sequence goes around $T_{j-1}$ before moving on to $T_j$. Although the set determined by the condition in \ref{S:closetoTi} and either \ref{S:aroundsmooth} or \ref{S:aroundcloseoff} may also include points located far from $x_i$, \ref{S:nobigjumps} forces the choice of a connected component that lies directly ahead along $T_j$, so the sequence cannot jump very far. Thus \ref{S:bouncing}--\ref{S:nobigjumps} can be complied with.
To see that \ref{S:notselfaccum} can be complied with as well, note that, by \ref{S:closetoTi} together with \ref{S:aroundsmooth} and \ref{S:aroundcloseoff}, a ball of radius $r=1/(2i^2)$ works automatically once the other conditions have been satisfied.
To ensure \ref{S:thickness}, we let the sequence go around each $T_j$ enough times that, by the time it starts going around $T_{j+1}$, the index $i$ has grown enough for the inequality in \ref{S:thickness} to hold.
We remark that the precise form of \ref{S:aroundcloseoff} will not be used explicitly, and its only purpose is to keep the sequence moving around the circular arcs $T_j\setminus (\Gamma_j\cup J)$ at a moderate rate.
\paragraph{Construction of $f$.}
Choose real numbers $\{r_i\}_i\subset\mathbb{R}$ such that $0<r_i<1/i^2$, such that the disk $B_{3r_i}(x_i)$ of radius $3r_i$ centered at $x_i$ does not intersect $\Gamma$, and such that the disks $B_{3r_i}(x_i)$ are pairwise disjoint. This is possible because of our specification \ref{S:notselfaccum}.
Let $\psi\colon\mathbb{R}^2\to[0,1]$ be a $C^\infty$ function with radial symmetry, $\psi(x)=\psi(y)$ whenever $\|x\|=\|y\|$, such that $\psi(x)=1$ for $x\in B_1(0)$, $\psi(x)=0$ for $\|x\|\geq 2$, and such that $\psi$ decreases monotonically along rays emanating from the origin. Let
\[\psi_i(x)=\psi\Big(\frac{x-x_i}{r_i}\Big),\]
so that $\psi_i$ equals 1 on $B_{r_i}(x_i)$ and vanishes outside $B_{2r_i}(x_i)$. Denote by $\lip(\psi)>0$ the Lipschitz constant of $\psi$, and by $\lip(\nabla\psi)>0$ the Lipschitz constant of its gradient. Note that the supports of the functions $\psi_i$ are pairwise disjoint and that $\lip(\psi_i)=\frac{1}{r_i}\lip(\psi)$, so that $\|\nabla\psi_i\|\leq\frac1{r_i}\lip(\psi)$ everywhere.
Let %
\[v_i=iL(x_i-x_{i+1}),\]%
and define, for $h$ as in Lemma \ref{lem:defh},
\[V_i(x)=(x-x_i)\cdot v_i+h(x_i).\]
\begin{prop}\label{prop:propertiesf2} Let
\begin{equation}\label{eq:deff2}
f(x)=\left(1-\sum_{i=0}^\infty\psi_i(x)\right)h(x)+\sum_{i=0}^\infty\psi_i(x)V_i(x).
\end{equation}
Then we have
\begin{enumerate}[label=\roman*.,ref=(\roman*)]
\item\label{f2:regularity} $f$ is piecewise $C^\infty$ in a tubular neighborhood $W_C$ of each connected component $C$ of $\Gamma_i\cup J$, $i>0$.
\item\label{f2:gradientsequence} $\{x_i\}_i$ is a subgradient sequence for $f$ with stepsizes
\[\varepsilon_i=\frac1{iL}.\]
In particular, $\sum_{i}\varepsilon_i=+\infty$ and $\sum_{i}\varepsilon_i^2<+\infty$.
\item\label{f2:acc} $\acc\{x_i\}_i=\Gamma\cup J$.
\item\label{f2:clarke} Let $p$ be a point in $\Gamma_i\cup J$ for some $i>0$. Then
\[\partial^c f(p)=\partial^ch(p).\]
\item\label{f2:crit} The critical set of $f$ contains $\Gamma$, but $J\cap\crit f$ consists only of the two endpoints of $J$.
\item\label{f2:pathdiff} $f$ is locally Lipschitz and path-differentiable.
\end{enumerate}
\end{prop}
\begin{proof}
By item \ref{ith2:piecewisesmooth} in Lemma \ref{lem:defh} we know that $h$ is piecewise $C^\infty$ in a tubular neighborhood $W_C$ of each connected component $C$ of $\Gamma_i\cup J$. Item \ref{f2:regularity} then follows from the fact that $V_i$ is $C^\infty$, the fact that the supports of the functions $\psi_i$ are pairwise disjoint, and the form of \eqref{eq:deff2}.
From \eqref{eq:deff2} and the fact that $\nabla \psi_j(x_i)=0=1-\sum_k\psi_k(x_i)$ for all $i$ and all $j$, we have
\[\nabla f(x_i)=\nabla V_i(x_i)=v_i.\]
Thus $\partial^cf(x_i)=\{v_i\}$ and
\[x_i-\varepsilon_iv_i=x_i-\frac1{iL}\,iL(x_i-x_{i+1})=x_{i+1},\]
which is the statement of item \ref{f2:gradientsequence}.
Note that \ref{S:aroundsmooth} and \ref{S:aroundcloseoff} force the sequence to always advance around each $T_j$ and finish the loop.
Item \ref{f2:acc} is then clear from the construction of $\Gamma$ and the loops $T_j\supset J$, together with the specification \ref{S:closetoTi} that forces the sequence $\{x_i\}_i$ to get ever closer to $\Gamma\cup J$.
Let us prove item \ref{f2:clarke}. Fix $j>0$ and $p\in \Gamma_j\cup J$, and denote by $C$ the connected component of $\Gamma_j\cup J$ that contains $p$. Consider a point $y$ near $p$. In particular, we may assume that $y$ is not in the situation described in \ref{S:aroundcloseoff}. If $y\notin\bigcup_iB_{2r_i}(x_i)$, then $f=h$ on a neighborhood of $y$ and we have nothing to prove. Otherwise, we have $y\in B_{2r_i}(x_i)$ for some $i\geq 0$, and by \ref{S:thickness} we may assume that $B_{2r_i}(x_i)$ is contained in the neighborhood $W_C$ of item \ref{ith2:piecewisesmooth} in Lemma \ref{lem:defh}. Item \ref{ith2:hessian} in Lemma \ref{lem:defh} states that the Hessian of $h$ is bounded on each connected component of $W_C\setminus C$, so that $\nabla h$ is Lipschitz on each such component; moreover, since $2r_i< 2/i^2$ is eventually smaller than $\dist(x_i,C)\geq\dist(x_i,T_j)\geq\frac1{iL}-\frac1{i^2}$ (see \ref{S:closetoTi}), for $i$ large enough the ball $B_{2r_i}(x_i)$ lies in a single such component. In other words, there is some $K>0$, depending only on $C$, such that, for all $z\in B_{2r_i}(x_i)$,
\[\|\nabla h(z)-\nabla h(x_i)\|\leq K\|z-x_i\|.\]
Note that it follows from \ref{S:bouncing}, \ref{S:closetoTi}, and \ref{S:aroundsmooth} and item \ref{ith2:clarke} of Lemma \ref{lem:defh} that, if $i$ is large enough,
\begin{equation}\label{eq:oldS4}
\varepsilon_i\left\|v_i-\nabla h(x_i)-\frac L{\log i}\mathbf t\right\|\leq \frac2{i^2}.
\end{equation}
By \eqref{eq:deff2}, the bound $\|\nabla\psi_i\|\leq\lip(\psi_i)$, the fact that $0\leq \psi_i\leq 1$, a Taylor expansion of $h$ with $w$ a point in the segment joining $x_i$ and $y$, the definition of $K$, the Cauchy--Schwarz and triangle inequalities, and \eqref{eq:oldS4},
\begin{align*}
\|\nabla f(y)-\nabla h(y)\|&=\left\|\nabla[(1-\psi_i(y))h(y)+\psi_i(y)V_i(y)]-\nabla h(y)\right\|\\
&=\left\|\nabla \psi_i(y)(V_i(y)-h(y))+\psi_i(y)\left(\nabla V_i(y)-\nabla h(y)\right)\right\| \\
&\leq \lip(\psi_i)\,|V_i(y)-h(y)|+\left\|v_i-\nabla h(y)\right\| \\
&\leq \frac1{r_i}\lip(\psi)\,\bigl|h(x_i)+v_i\cdot(y-x_i)\\
&\qquad\qquad\qquad-h(x_i)-\nabla h(w)\cdot(y-x_i)\bigr|+\|v_i-\nabla h(y)\|\\
&\leq \frac1{r_i}\lip(\psi)\,\|v_i-\nabla h(w)\|\,\|y-x_i\|+\|v_i-\nabla h(y)\|\\
&\leq \frac1{r_i}\lip(\psi)(2r_i)(\|v_i-\nabla h(x_i)\|+\|\nabla h(x_i)-\nabla h(w)\|)\\
&\qquad\qquad\qquad+\|v_i-\nabla h(x_i)\|+\|\nabla h(x_i)-\nabla h(y)\|\\
&\leq 2\lip(\psi)(\|v_i-\nabla h(x_i)\|+2Kr_i)+\|v_i-\nabla h(x_i)\|+2Kr_i\\
&\leq (2\lip(\psi)+1)\left(\frac{4L}{\log i} +2Kr_i\right)\to 0,
\end{align*}
as $y\to p$ because, in that case, $i\to +\infty$. So item \ref{f2:clarke} follows.
Item \ref{f2:crit} follows from item \ref{f2:clarke} together with the same being true for $h$; see items \ref{ith2:clarke} and \ref{ith2:crit} in Lemma \ref{lem:defh}.
Item \ref{f2:pathdiff} follows from item \ref{ith2:pathdiff} in Lemma \ref{lem:defh}, the form of \eqref{eq:deff2} on $\mathbb{R}^2\setminus(\Gamma\cup J)$, which ensures that the path differentiability of $h$ is inherited by $f$ on that region, and from item \ref{f2:clarke} above, which ensures the modification \eqref{eq:deff2} of $h$ does not change the path differentiability property on $\Gamma\cup J$.
\end{proof}
\begin{lem}\label{lem:essacc}
$\essacc\{x_i\}_i=\Gamma$.
\end{lem}
\begin{proof}
We will first show that $\bigcup_i \Gamma_i\subseteq\essacc\{x_i\}_i$, and from the fact that $\essacc\{x_i\}_i$ is closed it will follow that $\Gamma$ is contained in it. We use the notation $P\approx Q$ to mean that $P/Q\to1$.
Let $j>0$, let $p\in\Gamma_j$ be a point that is not an endpoint of the connected component $C$ of $\Gamma_j$ containing $p$, and let $\{N_i\}_i\subset\mathbb{N}$ be a subsequence such that $\lim_ix_{N_i}=p$. Let $\beta>0$ be smaller than the distance between $p$ and the closest of the two endpoints of $C$. Let also $\{M_i\}_i\subset \mathbb{N}$ be a subsequence such that $q=\lim_{i}x_{M_i}$ is a point on $C$ at arclength $\beta$ from $p$, $M_i<N_i$ for all $i$, and $\dist(x_k,C)<2/(kL)$ for all $M_i\leq k\leq N_i$. In view of \ref{S:closetoTi}, for each $i$ the sequence $x_{M_i},x_{M_i+1},\dots,x_{N_i}$ is bouncing around the arc of $C$ of length $\beta$ that starts at $q$ and ends at $p$. By item \ref{ith2:clarke} of Lemma \ref{lem:defh}, we know that $\partial^ch$ on the points of $C$ contains only vectors that are normal to $C$, so \ref{S:aroundsmooth} implies that
\[\tfrac12\beta\leq \sum_{k=M_i}^{N_i}\frac{1}{k\log k}\approx \log\log N_i-\log\log M_i.\]
This means that
\[\log M_i\leq \exp(\log\log N_i-\tfrac12\beta)=e^{-\beta/2}\log N_i.\]
Hence also
\[\sum_{k=M_i}^{N_i}\varepsilon_k=\sum_{k=M_i}^{N_i}\frac1{kL}\approx\frac1L(\log N_i-\log M_i)\geq \frac1L(1-e^{-\beta/2})\log N_i.\]
Similarly,
\[\sum_{k=1}^{N_i}\varepsilon_k=\sum_{k=1}^{N_i}\frac1{kL}\approx\frac1L\log N_i.\]
Thus the $\limsup$ in the definition \eqref{eq:essaccdef} of $\essacc\{x_i\}_i$ is at least $1-e^{-\beta/2}>0$. This proves that $p\in \essacc\{x_i\}_i$, and thus also that $\Gamma\subseteq\essacc\{x_i\}_i$.
In view of item \ref{f2:acc} of Proposition \ref{prop:propertiesf2} and the fact that $\essacc\{x_i\}_i\subseteq\acc\{x_i\}_i=\Gamma\cup J$, we now need to show that if $p'\in J\setminus \Gamma$ then $p'\notin\essacc\{x_i\}_i$.
For such $p'$ we pick an open ball $U$ containing $p'$ such that $\overline U\cap \Gamma=\emptyset$ and
\[\kappa_U\coloneqq\inf_{x\in U}\dist(0,\partial^cf(x))>0,\]
as is possible because of item \ref{f2:clarke} of Proposition \ref{prop:propertiesf2}, together with the fact that $f$ is strictly monotone on $J$. Let $a>0$ be the arclength of $J\cap U$. Then from \ref{S:aroundsmooth} it follows that if $i_1<i_2$ are such that for all $i_1\leq k\leq i_2$ we have $x_k\in U$, while $x_{i_1-1},x_{i_2+1}\notin U$, then
\[2a\geq \sum_{k=i_1}^{i_2}\varepsilon_k\|v_k\|\geq \sum_{k=i_1}^{i_2}\varepsilon_k\kappa_U.\]%
For $\ell\geq0$, let $p_\ell$ denote the number of times the sequence goes around $T_\ell$.
If $N>0$ is in $I_j$, so that the sequence is bouncing around $T_j$, then
\[\sum_{\substack{x_k\in U\\k\leq N}}\varepsilon_k\leq \frac{2a}{\kappa_U}\sum_{\ell=0}^jp_\ell.\]
On the other hand, to estimate $j$ as a function of $N$, we compute a lower bound on the length of the path traversed by $x_1,\dots, x_N$:
\begin{align*}
\sum_{i=1}^{j-1} p_i\arclength \Gamma_i&\geq
\sum_{i=1}^{j-1}p_i\sum_{k=1}^i\frac{(4\alpha)^k}2\\
&=\sum_{i=1}^{j-1}p_i\,\frac{(4\alpha)^{i+1}-4\alpha}{2(4\alpha-1)}\\
&\geq\sum_{i=1}^{j-1}\frac{(4\alpha)^{i+1}-4\alpha}{2(4\alpha-1)}\\
&=\frac{(4\alpha)^{j+1}-16\alpha^2}{2(4\alpha-1)^2}-\frac{4\alpha(j-1)}{2(4\alpha-1)}\\
&=\frac{1}{2(1-4\alpha)^2}(4\alpha)^{j+1}+O(j).
\end{align*}
To turn this lower bound on the length of the path into a lower bound on the number $N$ of steps, we use \ref{S:aroundsmooth} and the fact that $\nabla h$ is normal to $\Gamma_k$, so that we have
\[
\frac{1}{2(1-4\alpha)^2}(4\alpha)^{j+1}+O(j)\leq\sum_{k=2}^N\frac1{k\log k}\approx\log\log N.
\]
Whence
\[j\leq A\log\log\log N\]
for some $A>0$, and \eqref{eq:essaccdef} can be bounded by
\[
\frac{\sum_{\substack{x_k\in U\\k\leq N}} \varepsilon_k}{\sum_{k=1}^N\varepsilon_k}\leq \frac{2a/\kappa_U}{(\log N)/L}\sum_{\ell=0}^jp_\ell=O\left(\frac{\sum_{\ell=0}^jp_\ell}{e^{e^{j/A}}}\right).
\]
Because of the fractal form of the construction of $\Gamma$, the thicknesses of the tubular neighborhoods around the connected components of $\Gamma_i\cup J$ and those around the connected components of $\Gamma_{i+1}\cup J$ are related by a factor $\alpha$. From our calculation above we conclude that the number of steps it takes to traverse each $T_j$ increases rapidly, so that, in view of \ref{S:thickness}, the numbers $p_\ell$ can be uniformly bounded. This means that $\sum_{\ell=0}^jp_\ell\leq Cj$ for some $C>0$, and hence, as $j\to+\infty$,
\[\frac{\sum_{\ell=0}^jp_\ell}{e^{e^{j/A}}}\leq \frac{Cj}{e^{e^{j/A}}}\to 0.\]
This proves that $J\setminus \Gamma$ is not in $\essacc\{x_i\}_i$, and concludes the proof of the lemma.
\end{proof}
\paragraph{Conclusion.}
Claim \ref{itex:pathdiff} was proved as item \ref{f2:pathdiff} of Proposition \ref{prop:propertiesf2}.
It follows from item \ref{f2:crit} in Proposition \ref{prop:propertiesf2} and Lemma \ref{lem:essacc} that $\Gamma=\essacc\{x_i\}_i\subset\crit f$, and since $f(\Gamma)=[0,1]$, $f$ satisfies claim \ref{itex:nonconstcrit}.
Claim \ref{itex:essacc} is true by item \ref{f2:acc} of Proposition \ref{prop:propertiesf2} and Lemma \ref{lem:essacc}. %
Since $f(\Gamma)=[0,1]$ and $\{x_i\}_i$ bounces endlessly around $\Gamma\cup J$ by item \ref{f2:acc} in Proposition \ref{prop:propertiesf2}, the sequence $\{f(x_i)\}_i$ also does not converge, which is claim \ref{itex:fconv}.
Claim \ref{itex:dim} follows from Lemmas \ref{lem:dimgamma} and \ref{lem:essacc} and item \ref{f2:acc} of Proposition \ref{prop:propertiesf2}.
Claim \ref{itex:slowdown} requires some analysis. Let $x$ and $y$ be distinct points in $J$ with $f(x)>f(y)$. Let $\{x_{i_k}\}_k$ and $\{x_{i_k'}\}_k$ be subsequences that converge to them, respectively, and such that $i_k<i_k'$ for all $k$. Let $u\colon[0,T]\to J$ be a parameterization of $J$ such that $\|u'(t)\|=-(f\circ u)'(t)$ (this determines $T>0$), so that $u$ is a gradient curve, that is, $-u'(t)\in\partial^cf(u(t))$. Then it follows from item \ref{ith2:clarke} in Lemma \ref{lem:defh}, item \ref{f2:clarke} in Proposition \ref{prop:propertiesf2}, and \ref{S:aroundsmooth} that the subgradient sequence $\{x_i\}_i$ goes along $J$ at about the same speed as the neighboring curve $u$, so a very rough estimate of the amount of time the sequence takes to go from $x$ to $y$ is $\sup_k\sum_{m=i_k}^{i_k'}\varepsilon_m\leq 2T$, which is claim \ref{itex:slowdown}.
Claim \ref{itex:osccomp} is true because, if we choose the function $Q$ so that its support intersects $J$ but not $\Gamma$, then it follows from item \ref{ith2:clarke} in Lemma \ref{lem:defh}, item \ref{f2:clarke} in Proposition \ref{prop:propertiesf2}, and assumption \ref{S:aroundsmooth} that the averages in the $\liminf$ in \eqref{eq:speedaverages} asymptotically approach
\[\left\|\frac{\int_0^TQ(u(t))u'(t)dt}{\int_0^TQ(u(t))dt}\right\|\neq 0,\]
with $u$ as in our discussion of claim \ref{itex:slowdown} above, which immediately implies inequality \eqref{eq:speedaverages}.
Claim \ref{itex:perp} follows immediately from item \ref{ith2:clarke} in Lemma \ref{lem:defh}, and assumptions \ref{S:closetoTi} and \ref{S:aroundsmooth}.
\label{sec:intro}
A cycle cover of a graph is a spanning subgraph that consists solely of cycles
such that every vertex is part of exactly one cycle. Cycle covers are an
important tool for the design of approximation algorithms for different variants
of the traveling salesman problem~\cite{Blaeser:ATSPZeroOne:2004,BlaeserEA:ATSP:2006,
BlaeserEA:MetricMaxATSP:2005,BoeckenhauerEA:SharpenedIPL:2000,
ChandranRam:Parameterized:2007,ChenNagoya:MetricMaxTSP:2007,
ChenEA:ImprovedMaxTSP:2005,KaplanEA:TSP:2005},
for the shortest common superstring problem from computational
biology~\cite{BlumEA:Superstrings:1994,Sweedyk:ApproximationSuperstring:1999},
and for vehicle routing problems~\cite{HassinRubinstein:VehicleRouting:2005}.
In contrast to Hamiltonian cycles, which are special cases of cycle covers,
cycle covers of minimum weight can be computed efficiently. This is exploited in
the above mentioned algorithms, which in general start by computing a cycle
cover and then join cycles to obtain a Hamiltonian cycle (this technique is
called \emph{subtour patching}~\cite{GilmoreEA:WellSolved:1985}).
Short cycles limit the approximation ratios achieved by such algorithms. Roughly
speaking, the longer the cycles in the initial cover, the better the
approximation ratio. Thus, we are interested in computing cycle covers without
short cycles. Moreover, there are algorithms that perform particularly well if
the cycle covers computed do not contain cycles of odd
length~\cite{BlaeserEA:ATSP:2006}. Finally, some vehicle routing
problems~\cite{HassinRubinstein:VehicleRouting:2005} require covering vertices
with cycles of bounded length.
Therefore, we consider \emph{restricted cycle covers}, where cycles of certain
lengths are ruled out a priori: For a set $L \subseteq \ensuremath{\mathbb{N}}$, an
\emph{$L$-cycle cover} is a cycle cover in which the length of each cycle is
in~$L$.
Unfortunately, computing $L$-cycle covers is hard for almost all sets
$L$~\cite{HellEA:RestrictedTwoFactors:1988,Manthey:RestrictedCCWAOA:2006,
Manthey:RestrictedCC:2007ECCC}.
Thus, in order to fathom the possibility of designing approximation algorithms
based on computing cycle covers, our aim is to find out how well $L$-cycle
covers can be approximated.
Beyond being a basic tool for approximation algorithms, cycle covers are
interesting in their own right. Matching theory and graph factorization are
important topics in graph theory. The classical matching problem is the
problem of finding one-factors, i.~e., spanning subgraphs in which every vertex
is incident to exactly one edge. Cycle covers of undirected graphs are also
called two-factors since every vertex is incident to exactly two edges in a
cycle cover. Both structural properties of graph factors and the complexity of
finding graph factors have been the topic of a considerable amount of research
(cf.\ Lov{\'a}sz and Plummer~\cite{LovaszPlummer:Matching:1986} and
Schrijver~\cite{Schrijver:CombOpt:2003}).
\subsection{Preliminaries}
\label{ssec:prelim}
Let $G=(V,E)$ be a graph with vertex set $V$ and edge set $E$. If $G$ is
undirected, then a \bemph{cycle cover} of $G$ is a subset $C \subseteq E$ of the
edges of $G$ such that all vertices in $V$ are incident to exactly two edges in
$C$. If $G$ is a directed graph, then a cycle cover of $G$ is a subset
$C \subseteq E$ such that all vertices are incident to exactly one incoming and
one outgoing edge in $C$. Thus, the graph $(V,C)$ consists solely of
vertex-disjoint cycles. The length of a cycle is the number of edges it consists
of. We are concerned with simple graphs, i.~e., the graphs do not contain
multiple edges or loops. Thus, the shortest cycles of undirected and directed
graphs are of length three and two, respectively. We call a cycle of length
$\lambda$ a \bemph{$\lambda$-cycle} for short.
An \bemph{$L$-cycle cover} of an undirected graph is a cycle cover in which the
length of every cycle is in the set $L \subseteq {\mathcal{U}} = \{3,4,5,\ldots\}$.
An $L$-cycle cover of a directed graph is analogously defined except that
$L \subseteq {\mathcal{D}} = \{2,3,4,\ldots\}$. A special case of $L$-cycle covers
are \bemph{$k$-cycle covers}, which are $\{k, k+1, \ldots\}$-cycle covers. Let
$\ensuremath{\overline{L}} = {\mathcal{U}} \setminus L$ in the case of undirected graphs, and let
$\ensuremath{\overline{L}} = {\mathcal{D}} \setminus L$ in the case of directed graphs (whether we consider
undirected or directed cycle covers will be clear from the context).
Given edge weights $w: E \rightarrow \ensuremath{\mathbb{N}}$, the \bemph{weight $w(C)$} of a
subset $C \subseteq E$ of the edges of $G$ is $w(C) = \sum_{e \in C} w(e)$. In
particular, this defines the weight of a cycle cover since we view cycle covers
as sets of edges.
\bemph{\minug L} is the following optimization problem: Given an undirected
complete graph with non-negative edge weights that satisfy the triangle
inequality ($w(\{u,v\}) \leq w(\{u,x\})+w(\{x,v\})$ for all $u,x,v \in V$), find
an $L$-cycle cover of minimum weight. \bemph{\minug{k}} is defined for
$k \in {\mathcal{U}}$ like \minug{L} except that $k$-cycle covers rather than
$L$-cycle covers are sought. The triangle inequality is not only a natural
restriction, it is also necessary: If finding $L$-cycle covers in graphs is
\class{NP}-hard, then \minug L\ without the triangle inequality does not allow for any
approximation at all.
\bemph{\mindg L} and \bemph{\mindg k} are defined for directed graphs like
\minug{L} and \minug{k} for undirected graphs except that $L \subseteq {\mathcal{D}}$
and $k \in {\mathcal{D}}$ and the triangle inequality is of the form
$w(u,v) \leq w(u,x) + w(x,v)$.
Finally, \bemph{\maxug L}, \bemph{\maxug k}, \bemph{\maxdg L}, and
\bemph{\maxdg k} are analogously defined except that cycle covers of maximum
weight are sought and that the edge weights do not have to fulfill the triangle
inequality.
\subsection{Previous Results}
\label{ssec:previous}
\paragraph{Undirected Cycle Covers.}
\minug{{\mathcal{U}}}, i.~e., the undirected cycle cover problem without any
restrictions, can be solved in polynomial time via Tutte's reduction to the
classical perfect matching problem~\cite{LovaszPlummer:Matching:1986}. By a
modification of an algorithm of Hartvigsen~\cite{Hartvigsen:PhD:1984}, also
4-cycle covers of minimum weight in graphs with edge weights one and two can be
computed efficiently. For \minug{k} restricted to graphs with edge weights one
and two, there exists a factor $7/6$ approximation algorithm for all
$k$~\cite{BlaeserSiebert:CycleCovers:2001}. Hassin and
Rubinstein~\cite{HassinRubinstein:TrianglePacking:2006Erratum}
presented a randomized approximation algorithm for \maxug{\{3\}} that achieves
an approximation ratio of $83/43 + \epsilon$. \maxug{L} admits a factor $2$
approximation algorithm for arbitrary sets
$L$~\cite{Manthey:ImprovedCC:2006,Manthey:RestrictedCC:2007ECCC}.
Goemans and Williamson~\cite{GoemansWilliamson:ConstrainedForest:1995} showed
that \minug k and \minug{\{k\}} can be approximated within a factor of $4$.
\minug L is \class{NP}-hard and \class{APX}-hard if $\ensuremath{\overline{L}} \not\subseteq \{3\}$, i.~e., for all
but a finite number of sets $L$~\cite{HellEA:RestrictedTwoFactors:1988,
Manthey:RestrictedCCWAOA:2006,Manthey:RestrictedCC:2007ECCC,
Vornberger:EasyHard:1980}.
This means that for almost all $L$, these problems are unlikely to possess
polynomial-time approximation schemes (PTAS, see Ausiello et
al.~\cite{AusielloEA:ComplApprox:1999} for a definition).
If \minug L is \class{NP}-hard, then the triangle inequality is necessary for efficient
approximations of this problem; without the triangle inequality, \minug L cannot
be approximated at all.
\paragraph{Directed Cycle Covers.}
\mindg{{\mathcal{D}}}, which is also known as the \emph{assignment problem}, can be
solved in polynomial time by a reduction to the minimum weight perfect matching
problem in bipartite graphs~\cite{AhujaEA:NetworkFlows:1993}. The only other $L$
for which \mindg{L} can be solved in polynomial time is $L = \{2\}$. For all
$L \subseteq {\mathcal{D}}$ with $L \neq \{2\}$ and $L \neq {\mathcal{D}}$, \mindg L and
\maxdg L are \class{APX}-hard and \class{NP}-hard, even if only two different edge weights are
allowed~\cite{Manthey:RestrictedCCWAOA:2006,Manthey:RestrictedCC:2007ECCC}.
There is a $4/3$ approximation algorithm for
\maxdg 3~\cite{BlaeserEA:MetricMaxATSP:2005} as well as for \mindg{k} for
$k \geq 3$ with the restriction that the only edge weights allowed are one and
two~\cite{BlaeserManthey:MWCC:2005}. \maxdg{L} can be approximated within a
factor of $8/3$ for all $L$~\cite{Manthey:RestrictedCC:2007ECCC}.
Analogously to \minug L, \mindg L cannot be approximated at all without the
triangle inequality.
\subsection{New Results}
\label{ssec:new}
While $L$-cycle covers of \emph{maximum} weight allow for constant factor
approximations, little is known so far about the approximability of
computing $L$-cycle covers of \emph{minimum} weight. Our aim is to close this gap.
We present an approximation algorithm for \minug L that works for all sets
$L \subseteq {\mathcal{U}}$ and achieves a constant approximation ratio
(Section~\ref{ssec:goewill}). Its running-time is $O(n^2 \log n)$.
On the other hand, we show that the problem cannot be approximated within a
factor of $2-\varepsilon$ for general $L$ (Section~\ref{ssec:undinapp}).
Our approximation algorithm for \mindg L achieves a ratio of $O(n)$, where $n$
is the number of vertices (Section~\ref{ssec:directedalg}). This is
asymptotically optimal: There exist sets $L$ for which no algorithm can
approximate \mindg L within a factor of $o(n)$ (Section~\ref{ssec:inappdir}).
Furthermore, we argue that \mindg L is harder to approximate than the other
three variants even for more ``natural'' sets $L$ than the sets used to show the
inapproximability (Section~\ref{ssec:directedremarks}).
Finally, to contrast our results for \minug L and \mindg L, we show that
\maxug L and \maxdg L can be approximated arbitrarily well at least in principle
(Section~\ref{sec:maxgood}).
\section{\boldmath Approximability of \minug L}
\label{sec:appund}
\subsection{\boldmath An Approximation Algorithm for \minug L}
\label{ssec:goewill}
The aim of this section is to devise an approximation algorithm for \minug L
that works for all sets $L \subseteq {\mathcal{U}}$. The catch is that for most $L$
it is impossible to decide whether some cycle length is in $L$ since there are
uncountably many sets $L$: If, for instance, $L$ is not a recursive set, then
deciding whether a cycle cover is an $L$-cycle cover is impossible. One option
would be to restrict ourselves to sets $L$ such that the unary language
$\{1^\lambda \mid \lambda \in L\}$ is in \class{P}. For such $L$, \minug L and
\mindg L are \class{NP}\ optimization problems (see Ausiello et
al.~\cite{AusielloEA:ComplApprox:1999} for a definition). Another possibility
for circumventing the problem would be to include the permitted cycle lengths in
the input. While such restrictions are mandatory if we want to compute optimum
solutions, they are not needed for our approximation algorithms.
A complete $n$-vertex graph contains an $L$-cycle cover as a spanning subgraph
if and only if there exist (not necessarily distinct) lengths
$\lambda_1, \ldots, \lambda_k \in L$ for some $k \in \ensuremath{\mathbb{N}}$ with
$\sum_{i=1}^k \lambda_i = n$. We call such an $n$ \bemph{$L$-admissible} and
define $\close L = \{n \mid \text{$n$ is $L$-admissible}\}$. Although $L$ can be
arbitrarily complicated, $\close L$ always allows efficient membership testing
according to the following lemma.
\begin{lemma}[\mbox{Manthey~\cite[Lem.~3.1]{Manthey:RestrictedCC:2007ECCC}}]
\label{lem:finite}
For all $L \subseteq \ensuremath{\mathbb{N}}$, there exists a finite set $L' \subseteq L$ with
$\close{L'} = \close L$.
\end{lemma}
Let $g_L$ be the greatest common divisor of all numbers in $L$. Then $\close L$
is a subset of the set of natural numbers divisible by $g_L$. The proof of
Lemma~\ref{lem:finite} shows that there exists a minimum $p_L \in \ensuremath{\mathbb{N}}$ such
that $\eta g_L \in \close L$ for all $\eta > p_L$. The number $p_L$ is the
Frobenius number~\cite{RamirezAlfonsin:Frobenius:2006} of the set
$\{\lambda \mid g_L \lambda \in L\}$, which is $L$ scaled down by $g_L$. For
instance, if $L = \{8,10\}$, then $g_L =2$ and $p_L = 11$ since the Frobenius
number of $\{4,5\}$ is $11$.
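To make the closure test concrete, the following Python sketch (the function name and interface are ours and are not part of the algorithms analyzed in this paper) decides membership in $\close L$ for a finite set $L$ via the standard coin-problem dynamic program:
\begin{verbatim}
def admissible(n, L):
    # reachable[m] == True iff m is a sum of (not necessarily distinct)
    # elements of the finite set L; reachable[0] is True (empty sum).
    reachable = [False] * (n + 1)
    reachable[0] = True
    for m in range(1, n + 1):
        reachable[m] = any(lam <= m and reachable[m - lam] for lam in L)
    return reachable[n]

# Example from the text: for L = {8, 10} we have g_L = 2 and p_L = 11,
# so 2 * 11 = 22 is the largest even number that is not L-admissible.
assert not admissible(22, {8, 10})
assert admissible(24, {8, 10})
\end{verbatim}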
In the following, it suffices to know such a finite set $L' \subseteq L$. The
$L$-cycle covers computed by our algorithm will in fact be $L'$-cycle covers. In
order to estimate the approximation ratio, this cycle cover will be compared to
an optimal $\close{L'}$-cycle cover. Since
$L' \subseteq L \subseteq \close{L'}$, every $L'$- or $L$-cycle cover is also a
$\close{L'}$-cycle cover. Thus, the weight of an optimal $\close{L'}$-cycle
cover provides a lower bound for the weight of both an optimal $L'$- and an
optimal $L$-cycle cover. For simplicity, we do not mention $L'$ in the
following. Instead, we assume that already $L$ is a finite set, and we compare
the weight of the $L$-cycle cover computed to the weight of an optimal
$\close L$-cycle cover to bound the approximation ratio.
Goemans and Williamson have presented a technique for approximating constrained
forest problems~\cite{GoemansWilliamson:ConstrainedForest:1995}, which we will
exploit. Let $G=(V,E)$ be an undirected graph, and let $w: E \rightarrow \ensuremath{\mathbb{N}}$
be non-negative edge weights. Let $2^V$ denote the power set of $V$. A
function $f: 2^V \rightarrow \{0,1\}$ is called a \bemph{proper function} if it
satisfies
\begin{itemize}
\item $f(S) = f(V \setminus S)$ for all $S \subseteq V$ (symmetry),
\item if $A$ and $B$ are disjoint, then $f(A) = f(B) = 0$ implies
$f(A \cup B) = 0$ (disjointness), and
\item $f(V) = 0$.
\end{itemize}
The aim is to find a set $F$ of edges such that there is an edge connecting $S$
to $V \setminus S$ for all $S \subseteq V$ with $f(S) = 1$. (The name
``constrained forest problems'' comes from the fact that it suffices to consider
forests as solutions; cycles only increase the weight of a solution.) For
instance, the minimum spanning tree problem corresponds to the proper function
$f$ with $f(S) = 1$ for all $S$ with $\emptyset \subsetneq S \subsetneq V$.
Goemans and Williamson have presented an approximation
algorithm~\cite[Fig.~1]{GoemansWilliamson:ConstrainedForest:1995} for
constrained forest problems that are characterized by proper functions. We will
refer to their algorithm as \algo{GoeWill}.
\begin{theorem}[\mbox{Goemans and
Williamson~\cite[Thm.~2.4]{GoemansWilliamson:ConstrainedForest:1995}}]
\label{thm:gw}
Let $\ell$ be the number of vertices $v$ with $f(\{v\}) = 1$. Then \algo{GoeWill}\ is a
$(2-\frac 2\ell)$-approximation for the constrained forest problem defined by
a proper function $f$.
\end{theorem}
In particular, the function $f_L$ given by
\[
f_L(S) =
\left\{
\begin{array}{ll}
1 & \text{if $|S| \not\equiv 0 \pmod{g_L}$ and} \\
0 & \text{if $|S| \equiv 0 \pmod{g_L}$}
\end{array}
\right.
\]
is proper if $|V| = n$ is divisible by $g_L$. (If $n$ is not divisible by $g_L$,
then $G$ does not contain an $L$-cycle cover at all.) Given this function, a
solution is a forest $H=(V,F)$ such that the size of every connected component
of $H$ is a multiple of~$g_L$. In particular, if $g_L = 1$, then $f_L(S) = 0$
for all $S$, and an optimum solution consists of $n$ isolated vertices.
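For instance, for $L=\{8,10\}$ we have $g_L=2$, so $f_L(S)=1$ exactly for the sets $S$ of odd size, and a feasible solution is a forest each of whose connected components has an even number of vertices.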
If the sizes of all components of the solution obtained are in $\close L$, we are
done: By duplicating all edges, we obtain Eulerian components. Then we construct
an $\close L$-cycle cover by traversing the Eulerian components and taking
shortcuts whenever we come to a vertex that we have already visited. Finally, we
divide each $\lambda$-cycle into paths of lengths
$\lambda_1-1, \ldots, \lambda_k-1$ for some $k$ such that
$\lambda_1+ \ldots+ \lambda_k = \lambda$ and $\lambda_i \in L$ for all $i$. By
connecting the respective endpoints of each path, we obtain cycles of lengths
$\lambda_1, \ldots, \lambda_k$. We perform this for all components to get an
$L$-cycle cover. A straightforward analysis yields an approximation ratio of
$8$. A more careful analysis shows that the actual ratio achieved is $4$. The
details for the special case of $L = \{k\}$ are spelled out by Goemans and
Williamson~\cite{GoemansWilliamson:ConstrainedForest:1995}.
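The cycle-splitting step at the end of this procedure can be made concrete as follows; the Python sketch below (the helper name is ours) only manipulates the vertex sequence of a cycle, since, under the triangle inequality, each edge that closes a path weighs at most as much as the path it replaces.
\begin{verbatim}
def split_cycle(cycle, lengths):
    # cycle: list of vertices in traversal order; lengths: values from L
    # summing to len(cycle).  Returns the vertex lists of the small
    # cycles; each list of lam vertices is closed by the edge
    # (last vertex, first vertex).
    assert sum(lengths) == len(cycle)
    pieces, start = [], 0
    for lam in lengths:
        pieces.append(cycle[start:start + lam])
        start += lam
    return pieces

# Example: a 10-cycle split into two 5-cycles (e.g., for L = {5}).
print(split_cycle(list(range(10)), [5, 5]))
\end{verbatim}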
However, this procedure does not work for general sets $L$ since the sizes of
some components may not be in $\close L$. This can happen if $p_L > 0$ (for
$L = \{k\}$, for which the algorithm works, we have $p_L = 0$). At the end of
this section, we argue why it seems to be difficult to generalize the approach
of Goemans and Williamson in order to obtain an approximation algorithm for
\minug L whose approximation ratio is independent of $L$.
In the following, our aim is to add edges to the forest $H=(V,F)$ output by \algo{GoeWill}\
such that the size of each component is in $\close L$. This will lead to an
approximation algorithm for \minug L with a ratio of $4 \cdot (p_L+4)$, which is
constant for each $L$. Let $F^\ast$ denote the set of edges of a minimum-weight
forest such that the size of each component is in $\close L$. The set $F^\ast$
is a solution to $G$, $w$, and~$f_L$, but not necessarily an optimum solution.
By Theorem~\ref{thm:gw}, we have $w(F) \leq 2 \cdot w(F^\ast)$ since $w(F^\ast)$
is at least the weight of an optimum solution to $G$, $w$, and $f_L$. Let
$C=(V', F')$ be any connected component of $F$ with $|V'| \notin \close L$. The
optimum solution $F^\ast$ must contain an edge that connects $V'$ to
$V \setminus V'$. The weight of this edge is at least the weight of the
minimum-weight edge connecting $V'$ to $V \setminus V'$.
We will add edges until the sizes of all components is in $\close L$. Our
algorithm acts in phases as follows: Let $H=(V, F)$ be the graph at the
beginning of the current phase, and let $C_1, \ldots, C_a$ be its connected
components, where $V_i$ is the vertex set of $C_i$. We will construct a new graph
$\tilde H=(V, \tilde F)$ with $\tilde F \supseteq F$. Let $C_1, \ldots, C_b$ be
the connected components with $|V_i| \notin \close L$. We call these components
\bemph{illegal}. For $i \in \{1, \ldots, b\}$, let $e_i$ be the cheapest edge
connecting $V_i$ to $V\setminus V_i$. (Note that $e_i = e_j$ for $i \neq j$ is
allowed.)
We add all these edges to $F$ to obtain
$\tilde F = F \cup \{e_1, \ldots, e_b\}$. Since $e_i$ is the cheapest edge
connecting $V_i$ to $V \setminus V_i$, the graph $\tilde H = (V,\tilde F)$ is
a forest. (If some $e_i$ are not uniquely determined, cycles may occur. We can
avoid these cycles by discarding some of the $e_i$ to break the cycles. For the
sake of simplicity, we ignore this case in the following analysis.) If
$\tilde H$ still contains illegal components, we set $H$ to be $\tilde H$ and
iterate the procedure.
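The following Python sketch (the names and the data layout are ours, and, as in the analysis above, ties among cheapest edges are ignored) summarizes one such phase:
\begin{verbatim}
def augmentation_phase(components, vertices, w, legal):
    # components: list of vertex sets of the current forest H
    # w(u, v):    symmetric edge weight
    # legal(s):   True iff the component size s lies in the closure of L
    new_edges = set()
    for comp in components:
        if legal(len(comp)):
            continue                     # only illegal components act
        outside = [v for v in vertices if v not in comp]
        u, v = min(((a, b) for a in comp for b in outside),
                   key=lambda e: w(e[0], e[1]))
        new_edges.add((min(u, v), max(u, v)))   # cheapest edge leaving comp
    return new_edges                     # the edges e_1, ..., e_b to add
\end{verbatim}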
\begin{lemma}
Let $F$ and $\tilde F$ be as described above. Then
$w(\tilde F) \leq w(F) + 2\cdot w(F^\ast)$.
\end{lemma}
\begin{proof}
We observe that $F^\ast$ contains at least one edge $e^\ast_i$ connecting
$V_i$ to $V \setminus V_i$ for every $i \in \{1,\ldots, b\}$. If
$e^\ast_i = e^\ast_j$ for $i \neq j$, then $e_k^\ast \neq e_i^\ast$ for all
$k \neq i,j$. This means that every edge occurs at most twice among
$e_1^\ast, \ldots, e_b^\ast$, which implies
\[
\sum_{i=1}^b w(e_i^\ast) \leq 2 \cdot w(F^\ast).
\]
By the choice of $e_i$, we have $w(e_i) \leq w(e_i^\ast)$. Putting everything
together yields
\[
w(\tilde F)
\leq w(F) + \sum_{i=1}^b w(e_i)
\leq w(F) + \sum_{i=1}^b w(e_i^\ast)
\leq w(F) + 2w(F^\ast).
\]
\end{proof}
Let us bound the number of phases that are needed in the worst case.
\begin{lemma}
After at most $\lfloor p_L/2\rfloor +1$ phases, $\tilde H$ does not contain
any illegal components.
\end{lemma}
\begin{proof}
In the beginning, all components of $H=(V,F)$ contain at least $g_L$ vertices.
If $g_L \in L$, no phases are needed at all. Thus, we can assume that
$\min(L) \geq 2g_L$.
To bound the number of phases needed, we will estimate the size of the
smallest illegal component. Consider any of the smallest illegal components
before some phase $t$, and let $s$ be the number of its vertices. In phase
$t$, this component will be connected either to another illegal component,
which results in a component with a size of at least $2s$, or to a legal
component, which results in a component with a size of at least $s+2g_L$.
(It can happen that more than two illegal components are connected to a single
component in one phase.)
In either case, except for the first phase, the size of the smallest illegal
component increases by at least $2g_L$ in every step. Thus, after at most
$\lfloor p_L/2\rfloor +1$ phases, the size of every illegal component is at
least $(p_L+1)g_L$. Hence, there are no more illegal components since
components that consist of at least $(p_L +1)g_L$ vertices are not illegal.
\end{proof}
Eventually, we obtain a forest that consists solely of components whose sizes
are in $\close L$. We call this forest $\tilde H=(V, \tilde F)$. Then we proceed
as already described above: We duplicate each edge, thus obtaining Eulerian
components. After that, we take shortcuts to obtain an $\close L$-cycle cover.
Finally, we break edges and connect the endpoints of each path to obtain an
$L$-cycle cover. The weight of this $L$-cycle cover is at most $4 \cdot w(\tilde F)$.
Overall, we obtain \algo{ApxUndir}\ (Algorithm~\ref{algo:undirected}) and the following theorem.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\item[\textbf{Input:}] undirected complete graph $G = (V, E)$, $|V| = n$;
edge weights $w: E \rightarrow \ensuremath{\mathbb{N}}$ satisfying the triangle inequality
\item[\textbf{Output:}] an $L$-cycle cover $C^{\operatorname{apx}}$ of $G$ if $n$ is $L$-admissible,
$\bot$ otherwise
\If{$n \notin \close L$}
\State return $\bot$
\EndIf
\State run \algo{GoeWill}\ using the function $f_L$ described in the text to obtain
$H=(V,F)$
\While{the size of some connected component of $H$ is not in $\close L$}
\State let $C_1, \ldots, C_a$ be the connected components of $H$, where
$V_i$ is the vertex set of $C_i$; let $C_1, \ldots, C_b$ be its
illegal components
\State let $e_i$ be the lightest edge connecting $V_i$ to $V \setminus V_i$
\State add $e_1, \ldots, e_b$ to $F$
\While{$H$ contains cycles}
\State remove one $e_i$ to break a cycle
\EndWhile
\EndWhile
\State duplicate each edge to obtain a multi-graph consisting of Eulerian
components
\ForAll{components of the multi-graph}
\State walk along an Eulerian cycle
\State take shortcuts to obtain a Hamiltonian cycle
\State discard edges to obtain a collection of paths, the number of vertices
of each of which is in $L$
\State connect the two endpoints of every path in order to obtain cycles
\EndFor
\State the union of all cycles constructed forms $C^{\operatorname{apx}}$; return $C^{\operatorname{apx}}$
\end{algorithmic}
\caption{\algo{ApxUndir}.}
\label{algo:undirected}
\end{algorithm}
\begin{theorem}
\label{thm:au}
For every $L \subseteq {\mathcal{U}}$, \algo{ApxUndir}\ is a factor $(4\cdot (p_L +4))$
approximation algorithm for \minug L. Its running-time is
$O(n^2 \log n)$.
\end{theorem}
\begin{proof}
Let $C^\ast$ be a minimum-weight $\close L$-cycle cover. The weight of
$\tilde F$ is bounded from above by
\[
w(\tilde F)
\leq \left(\left\lfloor \frac{p_L}2 \right\rfloor +1 \right) \cdot 2 \cdot w(F^\ast)
+ 2 \cdot w(F^\ast)
\leq \bigl(p_L+4\bigr) \cdot w(C^\ast).
\]
Combining this with $w(C^{\operatorname{apx}}) \leq 4 \cdot w(\tilde F)$ yields the
approximation ratio.
Executing \algo{GoeWill}\ takes time $O(n^2 \log n)$. All other operations can be
implemented to run in time $O(n^2)$.
\end{proof}
We conclude the analysis of this algorithm by providing an example that shows
that the approximation ratio of the algorithm depends indeed linearly on $p_L$.
To do this, let $p \in \ensuremath{\mathbb{N}}$ be even. We choose
$L = \{4, 2p+2, 2p+4, 2p+6, \ldots\}$. Thus, $g_L = 2$ and $p_L = p-1$.
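Indeed, scaling $L$ down by $g_L=2$ yields the set $\{2,p+1,p+2,p+3,\ldots\}$; every even number at least $2$ and every integer at least $p+1$ is a sum of elements of this set, whereas the odd numbers $1,3,\ldots,p-1$ are not, so the Frobenius number of the scaled set is $p-1$.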
Figure~\ref{fig:graphopt} shows the graph that we consider and its optimal
$L$-cycle cover. The graph consists of $4p + 4$
vertices. The weights of the edges, which satisfy the triangle inequality,
are as follows:
\begin{itemize}
\item Solid, bold edges have a weight of $1$.
\item Dashed, bold edges have a weight of $1+\varepsilon$, where
$\varepsilon > 0$ can be made arbitrarily small.
\item Solid, non-bold edges have a weight of $\varepsilon$.
\item Dashed, non-bold edges have a weight of $2 \varepsilon$.
\item The weight of the edges not drawn is given by the shortest path between
the respective vertices.
\end{itemize}
The weight of the optimum $L$-cycle cover is
$2 + (6p+4)\varepsilon$: The four central
vertices contribute $2 + 4 \varepsilon$, and each of the $p$
remaining $4$-cycles contributes $6 \varepsilon$. By decreasing $\varepsilon$,
the weight of the optimum $L$-cycle cover can get arbitrarily close to $2$.
\begin{figure}
\centering
\subfigure[The graph.]{%
\label{fig:tightgraph}\includegraphics{TightGraph}}
\qquad \quad
\subfigure[The optimal $L$-cycle cover.]{%
\label{fig:tightoptimal}\includegraphics{TightOptimal}}
\caption{An example on which \algo{ApxUndir}\ achieves only a ratio of roughly $p_L/2$.}
\label{fig:graphopt}
\end{figure}
Figure~\ref{fig:bad} shows what \algo{ApxUndir}\ computes. Let us assume that \algo{GoeWill}\ returns
the optimum $L$-forest shown in Figure~\ref{fig:tightforest}. \algo{GoeWill}\ might also
return a different forest of the same weight: Instead of creating a component of
size four, it can take two vertical edges of weights $\varepsilon$ and
$2 \varepsilon$. However, the resulting $L$-cycle covers will be equal.
Starting with the output of \algo{GoeWill}, \algo{ApxUndir}\ chooses greedily the bold edges, which
have a weight of $1$, rather than the two edges of weight $1+\varepsilon$
(Figure~\ref{fig:tightfinal}). From the forest thus obtained, it constructs an
$L$-cycle cover (Figure~\ref{fig:tightcycles}). The weight of this $L$-cycle
cover is $2 (p/2 +1) + (4p +2)\varepsilon$. For sufficiently small
$\varepsilon$, this is approximately $p+2 = p_L+3$, which is roughly $p_L/2 + 3/2$
times as large as the weight of the optimum $L$-cycle cover.
\begin{figure}
\centering
\subfigure[The output of \algo{GoeWill}.]{%
\label{fig:tightforest}\includegraphics{TightForest}}
\qquad \quad
\subfigure[The final forest.]{%
\label{fig:tightfinal}\includegraphics{TightFinal}}
\\
\subfigure[The $L$-cycle cover $C^{\operatorname{apx}}$.]{%
\label{fig:tightcycles}\includegraphics{TightCycles}}
\caption{How \algo{ApxUndir}\ computes an $L$-cycle cover of the graph of
Figure~\ref{fig:tightgraph}.}
\label{fig:bad}
\end{figure}
Of course, it would be desirable to have an approximation algorithm with a ratio
that does not depend on $L$. Directly adapting the technique of Goemans and
Williamson~\cite{GoemansWilliamson:ConstrainedForest:1995} does not seem to
work: The function $f(S) = 1$ if and only if $|S| \notin \close L$ is not
proper because it violates symmetry. To force it to be symmetric, we can modify
it to $f'(S) = 1$ if and only if $|S| \notin \close L$ or
$|V \setminus S| \notin \close L$. But $f'$ does not satisfy disjointness. There
are generalizations of Goemans and Williamson's approximation technique to
larger classes of functions~\cite{GoemansWilliamson:PrimalDualNetwork:1997}.
However, it seems that $L$-cycle covers can hardly be modeled even by these more
general functions.
An alternative approach might be to grow a forest greedily without prior
execution of \algo{GoeWill}. This works if $g_L = 1$. In this case, \algo{GoeWill}\ outputs an empty
forest anyway since $f(S) = 0$ for all $S \subseteq V$, and \algo{ApxUndir}\ boils down to a
greedy algorithm. However, if $g_L > 1$, then it is not guaranteed that we
obtain a feasible forest at all.
\subsection{\boldmath Unconditional Inapproximability of \minug L}
\label{ssec:undinapp}
In this section, we provide a lower bound for the approximability of \minug L as
a counterpart to the approximation algorithm of the previous section. We show that
the problem cannot be approximated within a factor of $2-\varepsilon$. This
inapproximability result is unconditional, i.~e., it does not rely on complexity
theoretic assumptions like $\class{P} \neq \class{NP}$.
The key to the inapproximability of \minug L is the notion of
\bemph{immune sets}~\cite{Odifreddi:Recursion1:1989}: An infinite set
$L \subseteq \ensuremath{\mathbb{N}}$ is called an immune set if $L$ does not contain an infinite
recursively enumerable subset. Such sets exist. One might want to argue that
inapproximability results based on immune sets are more of a theoretical
interest. But our result limits the possibility of designing general
approximation algorithms for $L$-cycle covers. To obtain algorithms with a ratio
better than 2, we have to design algorithms tailored to specific sets $L$.
Finite variations of immune sets are again immune sets. Thus for every
$k \in \ensuremath{\mathbb{N}}$, there exist immune sets $L$ containing no number smaller than $k$.
\begin{theorem}
\label{thm:inappu}
Let $\varepsilon > 0$ be arbitrarily small. Let $k > 2/\varepsilon$, and let
$L \subseteq \{k, k+1, \ldots\}$ be an immune set. Then \minug{L} cannot be
approximated within a factor of $2 - \varepsilon$.
\end{theorem}
\begin{proof}
Let $G_n$ be an undirected complete graph with $n$ vertices
$\{1,2, \ldots,n\}$. The weight of an edge $\{i,j\}$ for $i < j$ is
$\min\{j-i, n+i-j\}$. This means that the vertices are ordered along an
undirected cycle, and the distance from $i$ to $j$ is the number of edges that
have to be traversed in order to get from $i$ to $j$. These edge weights
fulfill the triangle inequality.
For all $n \in L$, the optimal $L$-cycle cover of $G_n$ is a Hamiltonian cycle
of weight $n$. Furthermore, the weight of every cycle $c$ that traverses
$\ell \leq n/2$ vertices has a weight of at least $2\ell-2$: Let $i$ and $j$
be two vertices of $c$ that are farthest apart according to the edge lengths
of $G_n$. Assume that $i < j$. By the triangle inequality, the weight of $c$
is at least $2 \cdot \min\{j-i, n+i-j\}$. Since $\ell \leq n/2$ and by the
choice of $i$ and $j$, we have $\min\{j-i, n+i-j\} \geq \ell -1$, which proves
$w(c) \geq 2\ell -2$.
Consider any approximation algorithm \algo{Approx}\ for \minug{L}. We run \algo{Approx}\
on $G_n$ for $n \in \ensuremath{\mathbb{N}}$. By outputting the cycle lengths occurring in the
$L$-cycle cover of $G_n$ for all $n$, we obtain an enumeration of a subset
$S \subseteq L$. Since $L$ is immune, $S$ must be a finite set, and
$s= \max(S)$ exists. Let
$n \geq 2s$. The $L$-cycle cover output for $G_n$ consists of cycles whose
lengths are at most $s \leq n/2$. Since $\min(L) \geq k$, we also have
$\min(S) \geq k$ and the $L$-cycle cover output for $G_n$ consists of at most
$n/k$ cycles. Hence, the weight of the cycle cover computed by \algo{Approx}\ is at
least $\frac nk \cdot (2k-2)$. For $n \in L$, this is a factor of
$2-\frac 2k > 2 - \varepsilon$ away from the optimum solution.
\end{proof}
Theorem~\ref{thm:inappu} is tight since $L$-cycle covers can be approximated
within a factor of $2$ by $L'$-cycle covers for every set $L' \subseteq L$ with
$\close{L'} = \close L$. For finite sets $L'$, all $L'$-cycle cover problems are
\class{NP}\ optimization problems. This means that in principle optimum solutions can
be found, although this may take exponential time. The following
Theorem~\ref{thm:inappundtight} holds in particular for finite sets $L'$. In
order to actually get an approximation algorithm for \minug L out of it, we have
to solve \minug{L'} for finite $L'$, which is \class{NP}-hard and \class{APX}-hard. But the proof
of Theorem~\ref{thm:inappundtight} shows also that any approximation algorithm
for \minug{L'} for finite sets $L'$ that achieves an approximation ratio of $r$
can be turned into an approximation algorithm for the general problem with a
ratio of $2r$.
Let $\min_L(G,w)$ denote the weight of a minimum-weight $L$-cycle cover of $G$
with edge weights $w$, which have to fulfill the triangle inequality.
\begin{theorem}
\label{thm:inappundtight}
Let $L \subseteq {\mathcal{U}}$ be a non-empty set, and let $L' \subseteq L$ with
$\close{L'} = \close L$. Then we have
$\min_{L'}(G,w) \leq 2 \cdot \min_{L}(G,w)$ for all undirected graphs $G$ with
edge weights $w$ that satisfy the triangle inequality.
\end{theorem}
\begin{proof}
Let $C$ be an arbitrary $L$-cycle cover. To prove the theorem, we show how to
obtain an $L'$-cycle cover $C'$ from $C$ with $w(C') \leq 2 \cdot w(C)$.
Consider any cycle $c$ of $C$, and let $\lambda \in L$ be its length.
If $\lambda \in L'$, we simply put $c$
into $C'$. Otherwise, since $\close{L'} = \close L \supseteq L$, there exist
$\lambda_1, \ldots, \lambda_k \in L'$ for some $k \in \ensuremath{\mathbb{N}}$ such that
$\sum_{i=1}^k \lambda_i = \lambda$. We remove $k$ edges from $c$ to obtain $k$
paths consisting of $\lambda_1, \ldots, \lambda_k$ vertices. No additional
weight is incurred in this way. Then we connect the respective endpoints of
each path to obtain $k$ cycles of lengths $\lambda_1, \ldots, \lambda_k$. By
the triangle inequality, the weight of an edge added to close a cycle is at
most the weight of the corresponding path. By performing this for every cycle
of $C$, we obtain an $L'$-cycle cover $C'$ as claimed.
\end{proof}
\section{\boldmath Approximability of \mindg L}
\label{sec:inappdir}
\subsection{\boldmath An Approximation Algorithm for \mindg L}
\label{ssec:directedalg}
\begin{algorithm}[t]
\begin{algorithmic}[1]
\item[\textbf{Input:}] directed complete graph $G = (V, E)$, $|V| = n$;
edge weights $w: E \rightarrow \ensuremath{\mathbb{N}}$ satisfying the triangle inequality
\item[\textbf{Output:}] an $L$-cycle cover $C^{\operatorname{apx}}$ of $G$ if $n$ is $L$-admissible,
$\bot$ otherwise
\If{$n \notin \close L$}
\State return $\bot$
\EndIf
\State construct an undirected complete graph $G_U = (V,E_U)$ with edge weights
$w_U(\{u,v\}) = w(u,v) + w(v,u)$
\State run \algo{ApxUndir}\ on $G_U$ and $w_U$ to obtain $C_U^{\operatorname{apx}}$
\ForAll{cycles $c_U$ of $C_U^{\operatorname{apx}}$}
\State $c_U$ corresponds to a cycle of $G$ that can be oriented in two ways;
put the orientation $c$ that yields less weight into $C^{\operatorname{apx}}$
\EndFor
\State return $C^{\operatorname{apx}}$
\end{algorithmic}
\caption{\algo{ApxDir}.}
\label{algo:directed}
\end{algorithm}
In this section, we present an approximation algorithm for \mindg L. The
algorithm exploits \algo{ApxUndir}\ to achieve an approximation ratio of $O(n)$. The hidden
factor depends on $p_L$ again. This result matches asymptotically the lower
bound of Section~\ref{ssec:inappdir} and shows that \mindg L can be approximated
at least to some extent. (For instance, without the triangle inequality, no
polynomial-time algorithm achieves a ratio of $O(\exp(n))$ for an \class{NP}-hard
$L$-cycle cover problem unless $\class{P} = \class{NP}$.)
In order to approximate \mindg L, we reduce the problem to a variant of
\minug L, where also $2$-cycles are allowed: We obtain a $2$-cycle of an
undirected graph by taking an edge $\{u,v\}$ twice. Let $G=(V,E)$ be a directed
complete graph with $n$ vertices and edge weights $w:E \rightarrow \ensuremath{\mathbb{N}}$ that
fulfill the triangle inequality. The corresponding undirected complete graph
$G_U = (V, E_U)$ has weights $w_U: E_U \rightarrow \ensuremath{\mathbb{N}}$ with
$w_U(\{u,v\}) = w(u,v) + w(v,u)$.
Let $C$ be any cycle cover of $G$. The corresponding cycle cover $C_U$ of $G_U$
is given by $C_U = \{\{u,v\} \mid (u,v) \in C\}$. Note that we consider $C_U$ as
a multiset: If both $(u,v)$ and $(v,u)$ are in $C$, i.~e., $u$ and $v$ form a
$2$-cycle, then $\{u,v\}$ occurs twice in $C_U$. Let us bound the weight of $C_U$
in terms of the weight of $C$.
\begin{lemma}
\label{lem:nbound}
For every cycle cover $C$ of $G$, we have $w_U(C_U) \leq n \cdot w(C)$.
\end{lemma}
\begin{proof}
Consider any edge $e= (u,v) \in C$, and let $c$ be the cycle of length
$\lambda$ that contains $e$. By the triangle inequality, we have
$w_U(\{u,v\}) = w(u,v) + w(v,u) \leq w(c)$. Let $c_U$ be the cycle of $C_U$
that corresponds to $c$. Since $c$ consists of $\lambda$ edges, we obtain
$w_U(c_U) \leq \lambda \cdot w(c) \leq n \cdot w(c)$. Summing over all cycles
of $C$ completes the proof.
\end{proof}
Our algorithm computes an $L'$-cycle cover for some finite $L' \subseteq L$
with $\close{L'} = \close L$. As in Section~\ref{ssec:goewill}, the weight of
the cycle cover computed is compared to an optimum $\close L$-cycle
cover rather than an optimum $L$-cycle cover. Thus, we can again assume that
already $L$ is a finite set.
The algorithm \algo{ApxUndir}, which was designed for undirected graphs, remains an
$O(1)$ approximation if we allow $2 \in L$. The numbers $p_L$ and $g_L$ are
defined in the same way as in Section~\ref{ssec:goewill}.
Let $C_U^{\operatorname{apx}}$ be the $L$-cycle cover output by \algo{ApxUndir}\ on $G_U$. We transfer
$C_U^{\operatorname{apx}}$ into an $L$-cycle cover $C^{\operatorname{apx}}$ of $G$. For every cycle $c_U$ of
$C_U^{\operatorname{apx}}$, we can orient the corresponding directed cycle $c$ in two
directions. We take the orientation that yields less weight, thus
$w(C^{\operatorname{apx}}) \leq w_U(C_U^{\operatorname{apx}})/2$. Overall, we obtain \algo{ApxDir}\
(Algorithm~\ref{algo:directed}), which achieves an approximation ratio of
$O(n)$ for every $L$.
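For illustration, the reduction can be sketched as follows (not part of the formal analysis; the helper names \texttt{apx\_undir} and \texttt{L\_closure\_test} are placeholders for an assumed implementation of \algo{ApxUndir}\ and a membership test for $\close L$, respectively).
\begin{verbatim}
# Illustrative sketch of ApxDir (Algorithm 2); apx_undir is assumed to
# return a list of cycles, each given as a list of vertices.

def apx_dir(n, w, L_closure_test, apx_undir):
    """w(u, v): directed edge weights satisfying the triangle inequality."""
    if not L_closure_test(n):          # n not in <L>: no L-cycle cover exists
        return None
    # symmetrized undirected weights w_U({u,v}) = w(u,v) + w(v,u)
    w_U = lambda u, v: w(u, v) + w(v, u)
    cover = []
    for cycle in apx_undir(n, w_U):    # undirected L-cycle cover of G_U
        fwd = list(cycle)
        bwd = list(reversed(cycle))
        cost = lambda c: sum(w(c[i], c[(i + 1) % len(c)])
                             for i in range(len(c)))
        # keep the cheaper of the two possible orientations
        cover.append(fwd if cost(fwd) <= cost(bwd) else bwd)
    return cover
\end{verbatim}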
\begin{theorem}
\label{thm:algodirected}
For every $L \subseteq {\mathcal{D}}$, \algo{ApxDir}\ is a factor $(2n \cdot (p_L +4))$
approximation algorithm for \mindg L. Its running-time is
$O(n^2 \log n)$.
\end{theorem}
\begin{proof}
We start by estimating the approximation ratio. Theorem~\ref{thm:au} yields
$w_U(C_U^{\operatorname{apx}}) \leq 4\cdot (p_L +4) \cdot w_U(C_U^\ast)$, where $C_U^\ast$ is
an optimal $\close L$-cycle cover of $G_U$. Now consider an optimum
$\close L$-cycle cover $C^\ast$ of $G$. Lemma~\ref{lem:nbound} yields
$w_U(C_U^\ast) \leq n \cdot w(C^\ast)$. Overall,
\[
w(C^{\operatorname{apx}}) \leq \frac 12 \cdot w_U(C_U^{\operatorname{apx}})
\leq 2\cdot (p_L +4) \cdot w_U(C_U^\ast)
\leq 2\cdot (p_L +4) \cdot n \cdot w(C^\ast).
\]
The running-time is dominated by the time needed to execute \algo{GoeWill}\ in \algo{ApxUndir}, which is
$O(n^2 \log n)$.
\end{proof}
\subsection{\boldmath Unconditional Inapproximability of \mindg L}
\label{ssec:inappdir}
For undirected graphs, both $\maxug L$ and $\minug L$ can be approximated
efficiently to within constant factors. Surprisingly, in case of directed
graphs, this holds only for
the maximization variant of the directed $L$-cycle cover problem. \mindg L
cannot be approximated within a factor of $o(n)$ for certain sets $L$, where $n$
is the number of vertices of the input graph. In particular, \algo{ApxDir}\ achieves
asymptotically optimal approximation ratios for \mindg L.
One might again want to argue that such an inapproximability result is mainly of
theoretical interest. But, similar to the case of \minug L, this result shows
that to find approximation algorithms, specific properties of the sets $L$ have
to be exploited. A general algorithm with a good approximation ratio for all
sets $L$ does not exist. Furthermore, as we will discuss in
Section~\ref{ssec:directedremarks}, \mindg L seems to be a much harder problem
than the other three variants, even for more practical sets~$L$.
\begin{theorem}
\label{thm:inapp}
Let $L \subseteq {\mathcal{U}}$ be an immune set. Then no approximation algorithm
for \mindg L achieves an approximation ratio of $o(n)$, where $n$ is the
number of vertices of the input graph.
\end{theorem}
\begin{proof}
Let $G_n$ be a directed complete graph with $n$ vertices $\{1,2, \ldots,n\}$.
The weight of an edge $(i,j)$ is $(j-i) \bmod n$. This means that the
vertices are ordered along a directed cycle, and the distance from $i$ to $j$
is the number of edges that have to be traversed in order to get from $i$ to
$j$. These edge weights fulfill the triangle inequality.
For all $n \in L$, the optimal $L$-cycle cover of $G_n$ is a Hamiltonian cycle
of weight $n$. Furthermore, every cycle that traverses some of
$G_n$'s vertices has a weight of at least $n$: Let $i$ and $j$ be two
traversed vertices with $i<j$. By the triangle inequality, the part of the cycle from $i$
to $j$ has a weight of at least $j-i$, while the part from $j$ to $i$ has a
weight of at least $i-j+n = (i-j) \bmod n$; together, this gives a weight of at least $n$.
Consider any approximation algorithm \algo{Approx}\ for \mindg L. We run \algo{Approx}\ on
$G_n$ for $n \in \ensuremath{\mathbb{N}}$. By outputting the cycle lengths occurring in the
$L$-cycle cover of $G_n$ for all $n = 1, 2, \ldots$, we obtain an enumeration
of a subset $S \subseteq L$.
Since $L$ is immune, $S$ is a finite set, and $s = \max(S)$ exists. Thus, the
$L$-cycle cover output for $G_n$ consists of at least $n/s$ cycles and has a
weight of at least $n^2/s$. For $n \in L$, this is a factor of $n/s$ away from
the optimum solution, where $s$ is a constant that depends only on \algo{Approx}.
Thus, no recursive algorithm can achieve an approximation ratio of $o(n)$.
\end{proof}
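The hard instances used in the proof are easy to generate explicitly. The following sketch (our own illustration) builds the weight function of $G_n$ and checks, for one concrete choice of $n$ and cycle length $s$, that the Hamiltonian cycle has weight $n$ while a cover by cycles of length $s$ has weight at least $n^2/s$.
\begin{verbatim}
# The hard instance G_n from the proof: vertices 1,...,n arranged on a
# directed cycle, w(i, j) = (j - i) mod n.

def make_weights(n):
    return lambda i, j: (j - i) % n

def cycle_weight(cycle, w):
    return sum(w(cycle[k], cycle[(k + 1) % len(cycle)])
               for k in range(len(cycle)))

if __name__ == "__main__":
    n, s = 24, 4
    w = make_weights(n)
    # optimal solution when n is in L: the Hamiltonian cycle, weight n
    assert cycle_weight(list(range(1, n + 1)), w) == n
    # a cover by n/s cycles of length s: every cycle costs at least n
    cover = [list(range(i, i + s)) for i in range(1, n + 1, s)]
    assert all(cycle_weight(c, w) >= n for c in cover)
    assert sum(cycle_weight(c, w) for c in cover) >= n * n // s
\end{verbatim}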
\mindg{L'} for a finite set $L'$ is an \class{NP}\ optimization problem. Thus, it can
be solved, although this may take exponential time. Therefore, the following
result shows that \mindg L can be approximated for all $L$ within
a ratio of~$n/s$ for arbitrarily large constants $s$, although this may also
take exponential time. In this sense, Theorem~\ref{thm:inapp} is tight.
We will first prove a lemma, which we will also use to prove Theorem~\ref{thm:maxptas}.
\begin{lemma}
\label{lem:numbertheory}
For every $L \subseteq \ensuremath{\mathbb{N}}$ and every $s > 1$, there exists a finite set
$L' \subseteq L$ with $\close{L'} = \close L$ and the following property:
For every $\lambda \in L \setminus L'$, there exist
$\lambda_1, \ldots, \lambda_z \in L'$ with $z \leq \lambda/s$ such that
$\sum_{i=1}^z \lambda_i = \lambda$.
\end{lemma}
\begin{proof}
If $L$ is finite, we simply choose $L' = L$. So we assume that $L$ is
infinite. Let again $g_L$ denote the greatest common divisor of all numbers of
$L$. Let us first describe how to proceed if $g_L \in L$. After that we deal
with the case that $g_L \notin L$.
Let $L' = \{\lambda \in L \mid \lambda \leq m\}$, and let $\ell \in L'$. If
$m$ is sufficiently large, then $\close{L'} = \close L$ (this follows from
the proof of
Lemma~\ref{lem:finite}~\cite[Lem.~3.1]{Manthey:RestrictedCC:2007ECCC} and also
implicitly from this proof). We will specify $\ell$ and $m$, which depend on
$s$, later on.
Let $\lambda \in L \setminus L'$. Thus, $\lambda > m$. Let
$r = \bmod(\lambda, \ell)$. Since $\lambda$ and $\ell$ are divisible by
$g_L$, also $r$ is divisible by $g_L$. Since $\lambda \notin L'$, we have to
find $\lambda_1, \lambda_2, \ldots \in L'$ that add up to $\lambda$. We have
$\lambda = \lfloor \lambda/\ell \rfloor \cdot \ell + (r/g_L) \cdot g_L$. Now
we choose
$\lambda_1 = \ldots = \lambda_{\lfloor \lambda/\ell \rfloor } = \ell$ and
$\lambda_{\lfloor \lambda/\ell \rfloor +1} = \ldots =
\lambda_{\lfloor \lambda/\ell \rfloor + r/g_L} = g_L$. What remains is to
show that $\lfloor \lambda/\ell \rfloor + r/g_L \leq \lambda/s$.
To do this, we choose $\ell > s$. Since $r/g_L$ is bounded from above by
$\ell/g_L$, which does not depend on $\lambda$, we obtain
$\lfloor \lambda/\ell \rfloor + r/g_L \leq \lambda/s$ for all $\lambda > m$
for some sufficiently large $m$.
The case that $g_L \notin L$ remains to be considered. There exist
$\pi_1, \ldots, \pi_p \in L$ and $\xi_1, \ldots, \xi_p \in \ensuremath{\mathbb{Z}}$ for some
$p \in \ensuremath{\mathbb{N}}$ with $g_L = \sum_{i=1}^p \xi_i \pi_i$. Without loss of
generality, we assume that $\xi_1 = \min_{1 \leq i \leq p} \xi_i$. We have
$\xi_1 < 0$ since $g_L \notin L$.
As above, let $L' = \{\lambda \in L \mid \lambda \leq m\}$, and let
$\ell \in L'$. Let $\ell^\ast = -\xi_1 \ell \cdot \sum_{i=1}^p \pi_i > 0$. We
choose $m$ to be larger than $\ell^\ast$. Let $\lambda > m$, and let
$r = \bmod(\lambda - \ell^\ast,\ell)$. Then
\begin{eqnarray*}
\lambda & = &
\left\lfloor \frac{\lambda - \ell^\ast}{\ell} \right\rfloor \cdot \ell
+ r + \ell^\ast \: = \: \left\lfloor \frac{\lambda - \ell^\ast}{\ell} \right\rfloor \cdot \ell
+ \frac r{g_L} \cdot \sum_{i=1}^p \pi_i \xi_i - \xi_1 \ell
\cdot \sum_{i=1}^p \pi_i \\
& = &
\left\lfloor \frac{\lambda - \ell^\ast}{\ell} \right\rfloor \cdot \ell
+ \sum_{i=1}^p \pi_i \cdot \left(\frac{r \xi_i}{g_L} - \xi_1 \ell\right) .
\end{eqnarray*}
We have $\rho_i = \frac{r \xi_i}{g_L} - \xi_1 \ell \geq 0$: Since $\xi_1 < 0$,
we have $-\xi_1 \ell> 0$. If $\xi_i > 0$, then of course $\rho_i \geq 0$. If
$\xi_i<0$, then $-\xi_i \leq -\xi_1$, and $\rho_i \geq 0$ follows from
$r < \ell$.
According to the deliberations above, we choose $\lambda_1 = \ldots =
\lambda_{\lfloor (\lambda - \ell^\ast)/\ell \rfloor} = \ell$. In addition, for every
$1 \leq i \leq p$, we set $\rho_i$ of the remaining $\lambda_j$ to the value $\pi_i$.
It remains to be shown that $\lfloor (\lambda - \ell^\ast)/\ell \rfloor
+ \sum_{i=1}^p \rho_i \leq \lambda/s$. This follows from the fact that
$\rho_i \leq \ell \cdot (|\xi_i|/g_L - \xi_1)$ for all $i$, which is independent
of $\lambda$. Again, we choose $\ell > s$ and $m$ sufficiently large to
complete the proof.
\end{proof}
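For the case $g_L \in L$, the decomposition used in the proof can be made explicit. The following sketch (an illustration with parameters of our own choice) writes $\lambda$ as $\lfloor \lambda/\ell \rfloor$ copies of $\ell$ plus $r/g_L$ copies of $g_L$ and checks that the number of parts is at most $\lambda/s$ once $\ell > s$ and $\lambda$ is sufficiently large.
\begin{verbatim}
# Decomposition from the proof for the case g_L in L: lambda is written as
# floor(lambda / ell) copies of ell plus (r / g_L) copies of g_L.

def decompose(lam, ell, g_L):
    q, r = divmod(lam, ell)
    assert r % g_L == 0                 # lambda and ell are multiples of g_L
    return [ell] * q + [g_L] * (r // g_L)

if __name__ == "__main__":
    # example: L = all multiples of 3, so g_L = 3; target ratio s = 10, ell = 33
    g_L, s, ell = 3, 10, 33
    for lam in range(144, 5000, g_L):   # lambda large enough (here > 143)
        parts = decompose(lam, ell, g_L)
        assert sum(parts) == lam
        assert len(parts) <= lam / s
\end{verbatim}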
\begin{theorem}
\label{thm:dirtight}
For every $L$ and every $s > 1$, there exists a finite set $L' \subseteq L$
with $\close{L'} = \close L$ such that
$\min_{L'}(G,w) \leq \frac ns \cdot \min_{L}(G,w)$ for all directed graphs $G$
with edge weights $w$.
\end{theorem}
\begin{proof}
Let $s > 1$ and $L \subseteq {\mathcal{D}}$ be given. We choose $L'\subseteq L$
as described in the proof of Lemma~\ref{lem:numbertheory}. In order to prove
the theorem, let $G$ be a directed complete graph, and let $C$ be an $L$-cycle
cover of minimum weight of $G$. We show that we can find an $L'$-cycle cover
$C'$ with $w(C') \leq \frac ns \cdot w(C)$.
The $L'$-cycle cover $C'$ contains all cycles of $C$ whose lengths are in
$L'$. Now consider any cycle $c$ of length $\lambda \in L \setminus L'$.
According to Lemma~\ref{lem:numbertheory}, there exist
$\lambda_1, \ldots, \lambda_z \in L'$ with $\sum_{i=1}^z \lambda_i = \lambda$
and $z \leq \lambda/s$. We decompose $c$ into $z$ cycles of lengths
$\lambda_1, \ldots, \lambda_z$. By the triangle inequality, the
weight of each of these new cycles is at most $w(c)$. Thus, the total weight
of all $z$ cycles is at most
$z \cdot w(c) \leq (\lambda/s) \cdot w(c) \leq (n/s) \cdot w(c)$. By
performing this for all cycles of $C$, we obtain an $L'$-cycle cover $C'$ with
$\min_{L'}(G) \leq w(C') \leq (n/s) \cdot w(C) = (n/s) \cdot \min_L(G)$.
\end{proof}
\subsection{\boldmath Remarks on the Approximability of \mindg L}
\label{ssec:directedremarks}
It might seem surprising that \mindg L is much harder to
approximate than \minug L or the maximization problems \maxug L and \maxdg L. In
the following, we give some reasons why \mindg L is more difficult than the
other three $L$-cycle cover problems. In particular, even for ``easy'' sets $L$,
for which membership testing can be done in polynomial time, it seems that
\mindg L is much harder to approximate than the other three variants.
Why is minimization harder than maximization? To get a good approximation ratio
in the case of maximization problems, it suffices to detect a few ``good'',
i.~e., heavy edges. If we have a decent fraction of the heaviest edges, their
total weight is already within a constant factor of the weight of an optimal
$L$-cycle cover.
In order to form an $L$-cycle cover, we have to connect the heavy edges using
other edges. These other edges might be of little weight, but they do not
decrease the weight that we have already obtained from the heavy edges.
Now consider the problem of finding cycle covers of minimum weight. It does not
suffice to detect a couple of ``good'', i.~e., light edges: Once we have
selected a couple of good edges, we might have to connect them with heavy-weight
edges. These heavy-weight edges can worsen the approximation ratio dramatically.
Why is \mindg L harder than \minug L? If we have a cycle in an undirected graph
whose length is in $\close L$ but not in $L$ (or not in $L'$ but we do not know
whether it is in $L$), then we can decompose it into smaller cycles all lengths
of which are in $L$. This can be done such that the weight at most doubles (see
Section~\ref{sec:appund}). However, this does not work for directed cycles as we
have seen in the proof of Theorem~\ref{thm:inapp}: By decomposing a long cycle
into smaller ones, the weight can increase tremendously.
Finally, a question that arises naturally is whether we can do better if all
allowed cycle lengths are known a priori. This can be achieved by restricting
ourselves to sets $L$ that allow efficient membership testing. Another option is
to include the allowed cycle lengths in the input, i.~e., in addition to an
$n$-vertex graph and edge weights, we are given a subset of $\{2,3,\ldots, n\}$
of allowed cycle lengths.
The cycle cover problem with cycle lengths included in the input contains the
ATSP as a special case: for an $n$-vertex graph, we allow only cycles of length
$n$. Any constant factor approximation for this variant would thus immediately
lead to a constant factor approximation for the ATSP. Despite a considerable
amount of research devoted to the ATSP in the past decades, no such algorithm
has been found yet. This is an indication that finding a constant factor
approximation for the more general problem of computing directed cycle covers
might be difficult.
Now consider the restriction to sets $L$ for which
$\{1^\lambda \mid \lambda \in L\}$ is in $\class{P}$ (\mindg L is an \class{NP}\ optimization
problem for all such $L$). If we had a factor $r$ approximation algorithm for
\mindg L for such $L$, where $r$ is independent of $L$, we would obtain a
$c \cdot \log n$ approximation algorithm for the ATSP, where $c> 0$ can be made
arbitrarily small: In particular, such an algorithm for \mindg L would allow for
an $r$ approximation of \mindg k for all $k \in \ensuremath{\mathbb{N}}$. A close look at the
$(\log n)$ approximation algorithm for ATSP of Frieze et
al.~\cite{FriezeEA:TSP:1982} shows that an $r$-approximation for $k$-cycle
covers would yield an $(r \cdot \log_kn)$ approximation for the ATSP. We have
$r \cdot \log_kn = \frac{r}{\log k} \cdot \log n$. Thus, by increasing $k$, we
can make $c = \frac{r}{\log k}$ arbitrarily small. This would improve
dramatically over the currently best approximation ratio of
$0.842 \cdot \log_2 n$~\cite{KaplanEA:TSP:2005}.
\section{\boldmath Properties of Maximum-weight Cycle Covers}
\label{sec:maxgood}
To contrast our results for \minug L and \mindg L, we show that their
maximization counterparts \maxug L and \maxdg L can, at least in principle, be
approximated arbitrarily well; their inapproximability is solely due to their
\class{APX}-hardness and not to the difficulties arising from undecidable sets $L$.
In other words, the lower bounds for \minug L and \mindg L presented in this
paper are based on the hardness of deciding whether certain lengths are in $L$.
The inapproximability of \maxug L and \maxdg L is based on the difficulty of
finding good $L$-cycle covers rather than testing whether they are $L$-cycle
covers.
Let $\max_L(G,w)$ be the weight of a maximum-weight $L$-cycle cover of $G$ with
edge weights $w$. The edge weights $w$ do not have to fulfill the triangle inequality.
We will show that $\max_L(G,w)$ can be approximated arbitrarily well by
$\max_{L'}(G,w)$ for finite sets $L' \subseteq L$ with $\close{L'} = \close L$.
Thus, any approximation
algorithm for \maxug{L'} or \maxdg{L'} for finite sets $L'$ immediately yields
an approximation algorithm for general sets $L$ with an only negligibly worse
approximation ratio.
The following theorem for directed cycle covers contains the case of
undirected graphs as a special case.
\begin{theorem}
\label{thm:maxptas}
Let $L \subseteq {\mathcal{D}}$ be any non-empty set, and let $\varepsilon > 0$.
Then there exists a finite subset $L' \subseteq L$ with
$\close{L'} = \close L$ such that
$\max_{L'}(G,w) \geq (1-\varepsilon) \cdot \max_L(G,w)$ for all graphs $G$
with edge weights $w$.
\end{theorem}
\begin{proof}
Let $\varepsilon> 0$ be given. We choose $s > 1$ with $1/s \leq \varepsilon$.
According to Lemma~\ref{lem:numbertheory}, there exists a finite set
$L' \subseteq L$ with $\close{L'} = \close L$ with the following property: For
all $\lambda \in L \setminus L'$, there exist
$\lambda_1, \ldots, \lambda_z \in L'$ for
$z \leq \lambda/s \leq \varepsilon \lambda$ that sum up to $\lambda$. Let us
compare $\max_{L'}(G)$ and $\max_L(G)$. To this end, let $C$ be an optimum
$L$-cycle cover. We show how to obtain an $L'$-cycle cover $C'$ from $C$.
The $L'$-cycle cover $C'$ contains all cycles of $C$ whose lengths are in
$L'$. Let us consider any cycle $c$ of length $\lambda \in L \setminus L'$.
There exist $\lambda_1, \ldots, \lambda_z \in L'$ for some
$z \leq \varepsilon \lambda$ that sum up to $\lambda$. We break $z$ edges of
$c$ to obtain a collection of paths of lengths
$\lambda_1-1, \ldots, \lambda_z-1$. Since we remove only $z \leq \varepsilon \lambda$
of $c$'s $\lambda$ edges, an averaging argument over the $\lambda$ rotations of the
splitting pattern shows that the removed edges can be chosen such that at most an
$\varepsilon$ fraction of $w(c)$ is lost. Then we connect the respective
endpoints of each path to obtain $z$ cycles of lengths
$\lambda_1, \ldots, \lambda_z$. No weight is lost in this way.
We have lost at most $\varepsilon \cdot w(c)$ of the weight of every cycle $c$
of $C$, thus $\max_{L'}(G) \geq w(C') \geq (1-\varepsilon) \cdot w(C) =
(1-\varepsilon) \cdot \max_L(G)$.
\end{proof}
\section{Concluding Remarks}
\label{sec:concl}
First of all, we would like to know whether there is a general upper bound for
the approximability of \minug L: Does there exist an $r$ (independent of $L$)
such that \minug L can be approximated within a factor of $r$? We conjecture
that such an algorithm exists. If such an algorithm works also for the slightly
more general problem \minug L with $2 \in L$ (see
Section~\ref{ssec:directedalg}), then we would obtain a factor $rn/2$
approximation for $\mindg L$ as well.
While the problem of computing an $L$-cycle cover of minimum weight can be
approximated efficiently in the case of undirected graphs, the directed variant
seems to be much harder. We are interested in developing approximation
algorithms for \mindg L for particular sets $L$ or for certain classes of sets
$L$. For instance, how well can \mindg L be approximated if $L$ is a finite set?
Are there non-constant lower bounds for the approximability of \mindg L, for
instance bounds depending on $\max(L)$? Because of the similarities between
\mindg L and ATSP, an answer to either question would hopefully also shed some
light on the approximability of the ATSP.
\bibliographystyle{plain}
\section{Introduction}
In most real-world problems, parameters are uncertain at the optimization phase and decisions need to be made in the face of uncertainty. Stochastic and robust optimization are two widely used paradigms to handle uncertainty. In the stochastic optimization approach, uncertainty is modeled as a probability distribution and the goal is to optimize an expected objective~\cite{Dantzig55}. We refer the reader to Kall and Wallace~\cite{KW94}, Prekopa~\cite{Prekopa95}, Shapiro~\cite{Shapiro08}, Shapiro et al.~\cite{SDR09} for a detailed discussion on stochastic optimization. On the other hand, in the robust optimization approach, we consider an adversarial model of uncertainty using an uncertainty set and the goal is to optimize over the worst-case realization from the uncertainty set. This approach was first introduced by Soyster~\cite{SA73} and has been extensively studied in the recent past. We refer the reader to Ben-Tal and Nemirovski~\cite{BN98,BN99,Ben-Tal02}, El Ghaoui and Lebret~\cite{EL97}, Bertsimas and Sim~\cite{BS03,BS04}, Goldfarb and Iyengar~\cite{GI03}, Bertsimas et al.~\cite{BBC08} and Ben-Tal et al.~\cite{BNE10} for a detailed discussion of robust optimization. However, in both these paradigms, computing an optimal dynamic solution is intractable in general due to the ``curse of dimensionality''.
This intractability of computing the optimal adjustable solution necessitates considering approximate solution policies such as static and affine policies where the decision in any period $t$ is restricted to a particular function of the sample path until period $t$. Both static and affine policies have been studied extensively in the literature and can be computed efficiently for a large class of problems. While the worst-case performance of such approximate policies can be significantly bad as compared to the optimal dynamic solution, the empirical performance, especially of affine policies, has been observed to be near-optimal in a broad range of computational experiments. Our goal in this paper is to address this stark contrast between the worst-case performance bounds and near-optimal empirical performance of affine policies.
In particular, we consider the following two-stage adjustable robust linear optimization problems with uncertain demand requirements:
\begin{equation}\label{eq:ar}
\begin{aligned}
z_{\sf AR}\left( \mb c, \mb d, \mb A, \mb B, {\cal U} \right) = \min_{\mb x} \; & \mb{c}^T \mb{x} + \max_{\mb{h}\in {\cal U}} \min_{\mb{y}(\mb{h})} \mb{d}^T \mb{y}(\mb{h}) \\
& \mb{A}\mb{x} + \mb{B}\mb{y}(\mb{h}) \; \geq \; \mb{h} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{y}(\mb{h}) \in {\mathbb R}^{n}_+ \; \; \; \forall \mb h \in {\cal U} \\
& \mb{x} \in {\mathbb R}^{n}_+
\end{aligned}
\end{equation}
where $\mb{A} \in {\mathbb R}_+^{m {\times} n}, \mb{c}\in {\mathbb R}^{n}_+, \mb{d}\in {\mathbb R}^{n}_+, \mb{B} \in {\mathbb R}^{m {\times} n}_+$. The right-hand-side $\mb{h}$ belongs to a compact convex uncertainty set ${\cal U}\subseteq{\mathbb R}^m_+$. The goal in this problem is to select the first-stage decision $\mb{x}$, and the second-stage recourse decision, $\mb y(\mb{h})$, as a function of the uncertain right hand side realization, $\mb{h}$ such that the worst-case cost over all realizations of $\mb h \in {\cal U}$ is minimized. We assume without loss of generality that $\mb c= \mb e$ and $ \mb d = \bar d \cdot\mb e$ (by appropriately scaling $\mb A$ and $\mb B$). Here, $\bar d$ can interpreted as the inflation factor for costs in the second-stage.
This model captures many important applications including set cover, facility location and network design problems under uncertain demand. Here, the right hand side $\mb h$ models the uncertain demand, and the covering constraints capture the requirement of satisfying this demand. However, the adjustable robust optimization problem \eqref{eq:ar} is intractable in general. In fact, Feige et al.~\cite{FJMM07} show that the adjustable robust problem~\eqref{eq:ar} is hard to approximate within any factor that is better than $\Omega(\log n)$.
Both static and affine policy approximations have been studied in the literature for~\eqref{eq:ar}. In a static solution, we compute a single optimal solution $(\mb{x},\mb{y})$ that is feasible for all realizations of the uncertain right hand side. Bertsimas et al.~\cite{BGS10} relate the performance of static solution to the symmetry of the uncertainty set and show that it provides a good approximation to the adjustable problem if the uncertainty is close to being centrally symmetric. However, the performance of static solutions can be arbitrarily large for a general convex uncertainty set with the worst case performance being $\Omega (m)$. El Housni and Goyal \cite{elhousni2015piecewise} consider piecewise static policies for two-stage adjustable robust problem with uncertain constraint coefficients. These are a generalization of static policies where we divide the uncertainty set into several pieces and specify a static solution for each piece. However, they show that, in general, there is no piecewise static policy with a polynomial number of pieces that has a significantly better performance than an optimal static policy.
An affine policy restricts the second-stage decisions $\mb{y}(\mb{h})$ to be an affine function of the uncertain right-hand-side $\mb{h}$, i.e., $\mb{y}(\mb{h})=\mb{P}\mb{h}+\mb{q}$, where $\mb{P}\in{\mathbb R}^{n\times m}$ and $\mb{q}\in{\mathbb R}^n$ are decision variables. Affine policies in this context were introduced in Ben-Tal et al.~\cite{Ben-Tal04} and can be formulated as:
\begin{equation}\label{eq:aff}
\begin{aligned}
z_{\sf Aff}\left( \mb c, \mb d, \mb A, \mb B, {\cal U} \right) = \min_{\mb x} \; & \mb{c}^T \mb{x} + \max_{\mb{h}\in {\cal U}} \min_{\mb P, \mb q} \mb{d}^T\left( \mb{P}\mb{h}+\mb{q}\right) \\
& \mb{A}\mb{x} + \mb{B}\left( \mb{P}\mb{h}+\mb{q}\right) \; \geq \; \mb{h} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{P}\mb{h}+\mb{q} \; \geq \; \mb{0} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{x} \in {\mathbb R}^{n}_+\\
\end{aligned}
\end{equation}
An optimal affine policy can be computed efficiently for a large class of problems. Bertsimas and Goyal~\cite{BG10} show that affine policies give a $O(\sqrt{m})$-approximation to the optimal dynamic solution for~\eqref{eq:ar}. Furthermore, they show that the approximation bound $O(\sqrt m)$ is tight. However, the observed empirical performance for affine policies is near-optimal for a large set of synthetic instances of~\eqref{eq:ar}.
\subsection{Our Contributions}
Our goal in this paper is to address this stark contrast by providing a theoretical analysis of the performance of affine policies on synthetic instances of the problem generated from a probabilistic model. In particular, we consider random instances of the two-stage adjustable problem \eqref{eq:ar} where the entries of the constraint matrix $\mb B$ are random from a given distribution and analyze the performance of affine policies for a large class of distributions. Our main contributions are summarized below.
\vspace{2mm}
\noindent {\bf Independent and Identically distributed Constraint Coefficients}. We consider random instances of the two-stage adjustable problem where the entries of $\mb B$ are generated i.i.d. according to a given distribution and show that an affine policy gives a good approximation for a large class of distributions including distributions with bounded support and unbounded distributions with Gaussian and sub-gaussian tails.
In particular, for distributions with bounded support in $[0,b]$ and expectation $\mu$, we show that for sufficiently large values of $m$ and $n$, affine policy gives a $b/\mu$-approximation to the adjustable problem~\eqref{eq:ar}. More specifically, with probability at least $(1-1/m)$, we have that
\[ z_{\sf Aff}(\mb c, \mb d, \mb A, \mb B, {\cal U}) \leq \frac{b}{\mu (1- \epsilon)} \cdot z_{\sf AR}(\mb c, \mb d, \mb A, \mb B, {\cal U}),\]
where $\epsilon = b/\mu \sqrt{\log m/n}$ (Theorem~\ref{thm:bounded}). Therefore, if the distribution is {\em symmetric}, affine policy gives a $2$-approximation for the adjustable problem~\eqref{eq:ar}. For instance, for the uniform distribution or a Bernoulli distribution with parameter $p=1/2$, affine policies give nearly a $2$-approximation for~\eqref{eq:ar}.
While the above bound leads to a good approximation for many distributions, the ratio $\frac{b}{\mu}$ can be significantly large in general; for instance, for distributions where extreme values of the support are extremely rare and significantly far from the mean. In such instances, the bound $b/\mu$ can be quite loose. We can tighten the analysis by using the concentration properties of distributions and can extend the analysis even for the case of unbounded support. More specifically, we show that if $B_{ij}$ are i.i.d. according to an unbounded distribution with a sub-gaussian tail, then for sufficiently large values of $m$ and $n$, with probability at least $(1-1/m)$,
\[ z_{\sf Aff}(\mb c, \mb d, \mb A, \mb B, {\cal U}) \leq O(\sqrt{\log mn}) \cdot z_{\sf AR}(\mb c, \mb d, \mb A, \mb B, {\cal U}).\]
Here we assume that the parameters of the distributions are constants independent of the problem dimension. We prove the case of {\em folded normal} distribution in Theorem~\ref{thm:gaus}.
We would like to note that the above performance bounds are in stark contrast with the worst case performance bound $O(\sqrt{m})$ for affine policies which is tight. For the random instances where $B_{ij}$ are i.i.d. according to above distributions, the performance is significantly better. Therefore, our results provide a theoretical justification of the good empirical performance of affine policies and close the gap between worst case bound of $O(\sqrt{m})$ and observed empirical performance. Furthermore, surprisingly these performance bounds are independent of the structure of the uncertainty set, ${\cal U}$ unlike in previous work where the performance bounds depend on the geometric properties of ${\cal U}$. Our analysis is based on a {\em dual-reformulation} of~\eqref{eq:ar} introduced in~\cite{bertsimas2016duality} where~\eqref{eq:ar} is reformulated as an alternate two-stage adjustable optimization and the uncertainty set in the alternate formulation depends on the constraint matrix $\mb B$. Using the probabilistic structure of $\mb B$, we show that the alternate {\em dual} uncertainty set is close to a simplex for which affine policies are optimal.
We would also like to note that our performance bounds are not necessarily tight and the actual performance on particular instances can be even better. We test the empirical performance of affine policies for random instances generated according to uniform and folded normal distributions and observe that affine policies are nearly optimal with a worst optimality gap of $4\%$ (i.e. approximation ratio of $1.04$) on our test instances as compared to the optimal adjustable solution that is computed using a MIP.
\vspace{2mm}
\noindent {\bf Worst-case distribution for Affine policies.} While for a large class of commonly used distributions, affine policies give a good approximation with high probability for random i.i.d. instances according to the given distribution, we present a distribution where the performance of affine policies is $\Omega(\sqrt m)$ with high probability for instances generated from this distribution. Note that this matches the worst-case deterministic bound for affine policies. We would like to remark that in the worst-case distribution, the coefficients $B_{ij}$ are not identically distributed. Our analysis suggests that to obtain bad instances for affine policies, we need to generate instances using a structured distribution where the structure of the distribution might depend on the problem structure.
\section{Random instances with i.i.d. coefficients}
In this section, we theoretically characterize the performance of affine policies for random instances of $\eqref{eq:ar}$ for a large class of generative distributions including both bounded and unbounded support distributions. In particular, we consider the two-stage problem where constraint coefficients $\mb A$ and $\mb B$ are i.i.d. according to a given distribution. We consider a polyhedral uncertainty set ${\cal U}$ given as
\begin{equation} \label{def:U}
{\cal U}=\{\mb{h}\in{\mathbb R}^m_+\;|\; \mb R \mb h \leq \mb r \}
\end{equation}
where $\mb{R} \in {\mathbb R}_+^{L {\times} m}$ and $ \mb{r}\in {\mathbb R}^{L}_+$. This is a fairly general class of uncertainty sets that includes many commonly used sets such as hypercube and {\em budget uncertainty} sets.
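For concreteness (our own illustration, not needed for the analysis), a standard budget of uncertainty set $\{\mb h \in [0,1]^m \mid \sum_{i=1}^m h_i \le k\}$ can be written in the form \eqref{def:U} as follows.
\begin{verbatim}
import numpy as np

def budget_uncertainty_set(m, k):
    """(R, r) with {h >= 0 : R h <= r} = {h >= 0 : h <= e, sum(h) <= k}."""
    R = np.vstack([np.eye(m),           # h_i <= 1 for every i
                   np.ones((1, m))])    # sum_i h_i <= k
    r = np.concatenate([np.ones(m), np.array([float(k)])])
    return R, r
\end{verbatim}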
Our analysis of the performance of affine policies does not depend on the structure of the first-stage constraint matrix $\mb A$ or cost $ \mb c$. The second-stage cost, as already mentioned, is wlog of the form $\mb d= \bar d \mb e$. Therefore, we restrict our attention only to the distribution of coefficients of the second stage matrix $\mb B$. We will use the notation $\tilde{\mb B}$ to emphasize that $\mb B$ is random. For simplicity, we refer to $z_{\sf AR}\left( \mb c, \mb d, \mb A, \mb B, {\cal U} \right)$ as $ z_{\sf AR}\left( \mb B \right)$ and to $z_{\sf Aff}\left( \mb c, \mb d, \mb A, \mb B, {\cal U} \right)$ as $z_{\sf Aff}\left( \mb B \right)$.
\subsection{Distributions with bounded support}
We first consider the case when $\tilde{B}_{ij}$ are i.i.d. according to a bounded distribution with support in $[0,b]$ for some constant $b$ independent of the dimension of the problem. We show a performance bound of affine policies as compared to the optimal dynamic solution. The bound depends only on the distribution of $\tilde{\mb B}$ and holds for any polyhedral uncertainty set $\cal U$. In particular, we have the following theorem.
\
\begin{theorem} \label{thm:bounded}
Consider the two-stage adjustable problem \eqref{eq:ar} where $\tilde{B}_{ij}$ are i.i.d. according to a bounded distribution with support in $ [0,b] $ and $\mathbb{E} [{\tilde{B}}_{ij} ]=\mu$ $\forall i \in [m] \; \forall j \in [n]$. For $n$ and $m$ sufficiently large, we have with probability at least $1- \frac{1}{m}$,
$$z_{\sf AR}(\tilde{\mb B}) \leq z_{\sf Aff}(\tilde{\mb B}) \leq \frac{b}{\mu ( 1- \epsilon)} \cdot z_{\sf AR}(\tilde{\mb B})$$
where $\epsilon = \frac{b}{\mu} \sqrt{\frac{\log m}{n}} $.
\end{theorem}
The above theorem shows that for sufficiently large values of $m$ and $n$, the performance of affine policies is at most $b/\mu$ times the performance of an optimal adjustable solution. This shows that affine policies give a good approximation (and significantly better than the worst-case bound of $O(\sqrt m)$) for many important distributions. We present some examples below.
\vspace{2mm}
\noindent
{\bf Example 1. [Uniform distribution]} \label{ex:uniform}
Suppose for all $ i \in [m]$ and $ j \in [n]$ $\tilde{B}_{ij}$ are i.i.d. uniform in $ [0,1] $. Then $\mu = 1/2$ and from Theorem \ref{thm:bounded} we have with probability at least $1- 1/m$,
$$z_{\sf AR}(\tilde{\mb B}) \leq z_{\sf Aff}(\tilde{\mb B}) \leq \frac{2}{1- \epsilon} \cdot z_{\sf AR}(\tilde{\mb B})$$
where $\epsilon =2\sqrt{\log m/n}$. Therefore, for sufficiently large values of $n$ and $m$ affine policy gives a $2$-approximation to the adjustable problem in this case. Note that the approximation bound of $2$ is a conservative bound and the empirical performance is significantly better. We demonstrate this in our numerical experiments.
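As a numerical illustration of Example 1 (our own computation, not part of the analysis), the bound can be evaluated for concrete problem sizes:
\begin{verbatim}
import math

def uniform_bound(m, n, b=1.0, mu=0.5):
    eps = (b / mu) * math.sqrt(math.log(m) / n)
    return (b / mu) / (1.0 - eps)      # the bound b / (mu * (1 - eps))

# e.g. m = n = 1000: eps is about 0.166, so the bound is roughly 2.4
print(uniform_bound(1000, 1000))
\end{verbatim}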
\vspace{2mm}
\noindent
{\bf Example 2. [Bernoulli distribution]} \label{exr:ber}
Suppose for all $ i \in [m]$ and $ j \in [n]$, $\tilde{B}_{ij}$ are i.i.d. according to a Bernoulli distribution of parameter $p$. Then $\mu = p$, $b=1$ and from Theorem \ref{thm:bounded} we have with probability at least $1- \frac{1}{m}$,
$$z_{\sf AR}(\tilde{\mb B}) \leq z_{\sf Aff}(\tilde{\mb B}) \leq \frac{1}{p(1- \epsilon)} \cdot z_{\sf AR}(\tilde{\mb B})$$
where $\epsilon = \frac{1}{p}\sqrt{ \frac{\log m}{n} }$. Therefore for constant $p$, affine policy gives a constant approximation to the adjustable problem (for example $2$-approximation for $p=1/2$).
Note that these performance bounds are in stark contrast with the worst case performance bound $O(\sqrt{m})$ for affine policies which is tight. For these random instances, the performance is significantly better. We would like to note that the above distributions are very commonly used to generate instances for testing the performance of affine policies and exhibit good empirical performance. Here, we give a theoretical justification of the good empirical performance of affine policies on such instances, thereby closing the gap between worst case bound of $O(\sqrt{m})$ and observed empirical performance. We discuss the intuition and the proof of Theorem \ref{thm:bounded} in the following subsections.
\subsubsection{Preliminaries}
In order to prove Theorem \ref{thm:bounded}, we need certain preliminary results. We first introduce the following reformulation of the adjustable problem \eqref{eq:ar} based on ideas in Bertsimas and de Ruiter \cite{bertsimas2016duality}.
\begin{equation}\label{eq:dual}
\begin{aligned}
z_{\sf d-AR}(\mb B)= \min_{\mb x} \; & \mb{c}^T \mb{x} + \max_{\mb{w}\in {\cal W}} \min_{\mb{\lambda}(\mb{w})} - (\mb A \mb x)^T \mb w + \mb{r}^T \mb{\lambda}(\mb{w}) \\
& \mb{R}^T \mb{\lambda}(\mb{w}) \; \geq \; \mb{w} \; \; \; \forall \mb w \in {\cal W} \\
& \mb{\lambda}(\mb{w}) \in {\mathbb R}^{L}_+, \; \forall \mb w \in {\cal W} \\
& \mb{x} \in {\mathbb R}^{n}_+
\end{aligned}
\end{equation}
where the set $\cal W$ is defined as
\begin{equation} \label{def:W}
{\cal W}=\{\mb{w}\in{\mathbb R}^m_+\;|\; \mb B^T \mb w \leq \mb d \}.
\end{equation}
We show that the above problem is an equivalent formulation of \eqref{eq:ar}.
\begin{lemma} \label{lem:reform}
Let $z_{\sf AR}(\mb B)$ be as defined in \eqref{eq:ar} and $z_{\sf d-AR}(\mb B) $ as defined in \eqref{eq:dual}.
Then,
$$z_{\sf AR}(\mb B)= z_{\sf d-AR}(\mb B).$$
\end{lemma}
The proof follows from \cite{bertsimas2016duality}. For completeness, we present it in Appendix \ref{apx-proofs:lem:reform}. Reformulation \eqref{eq:dual} can be interpreted as a new two-stage adjustable problem over {\em dualized} uncertainty set ${\cal W}$ and decision $\mb{\lambda}(\mb{w})$. Following \cite{bertsimas2016duality}, we refer to \eqref{eq:dual} as the {\em dualized} formulation and to \eqref{eq:ar} as the {\em primal} formulation.
Bertsimas and de Ruiter \cite{bertsimas2016duality} show that even the affine approximations of \eqref{eq:ar} and \eqref{eq:dual} (where recourse decisions are restricted to be affine functions of respective uncertainties) are equivalent.
In particular, we have the following Lemma which is a restatement of Theorem 2 in \cite{bertsimas2016duality}.
\begin{lemma}{\bf (Theorem 2 in Bertsimas and de Ruiter~\cite{bertsimas2016duality})}\label{lem:berti-de Ruiter}
Let $z_{\sf d-Aff}(\mb B)$ be the objective value when $\mb{\lambda}(\mb{w}) $ is restricted to be affine function of $\mb{w}$ and $z_{\sf Aff}(\mb B) $ as defined in \eqref{eq:aff}. Then, $$z_{\sf d-Aff}(\mb B) =z_{\sf Aff}(\mb B) .$$
\end{lemma}
Bertsimas and Goyal \cite{BG10} show that affine policy is optimal for the adjustable problem \eqref{eq:ar} when the uncertainty set ${\cal U}$ is a simplex. In fact, optimality of affine policies for simplex uncertainty sets holds for a more general formulation than the one considered in \cite{BG10}. In particular, we have the following lemma.
\begin{lemma}\label{lem:berti-goyal}
Suppose the set $ {\cal W} $ is a simplex, i.e., the convex hull of $m+1$ affinely independent points. Then affine policy is optimal for the adjustable problem \eqref{eq:dual}, i.e., $z_{\sf d-Aff}(\mb B) =z_{\sf d-AR}(\mb B) $.
\end{lemma}
The proof proceeds along similar lines as in \cite{BG10}. For completeness, we provide it in Appendix \ref{apx-proofs:lem:berti-goyal}. In fact, if the uncertainty set is not a simplex but can be approximated by a simplex within a small scaling factor, affine policies can still be shown to be a good approximation. In particular, we have the following lemma.
\begin{lemma} \label{lem:inclusion}
Denote ${\cal W}$ the dualized uncertainty set as defined in \eqref{def:W} and suppose there exists a simplex ${\cal S}$ and $ \kappa \geq 1$ such that
$ {\cal S} \subseteq {\cal W} \subseteq \kappa\cdot {\cal S}$. Therefore,
$$ z_{\sf d-AR}(\mb B) \leq z_{\sf d-Aff}(\mb B) \leq \kappa \cdot z_{\sf d-AR}(\mb B).$$
Furthermore,
$$ z_{\sf AR}(\mb B) \leq z_{\sf Aff}(\mb B) \leq \kappa \cdot z_{\sf AR}(\mb B).$$
\end{lemma}
The proof of Lemma \ref{lem:inclusion} is presented in Appendix \ref{apx-proofs:lem:inclusion}.
\subsubsection{Proof of Theorem \ref{thm:bounded}}
We consider instances of problem \eqref{eq:ar} where $\tilde{B}_{ij}$ are i.i.d. according to a bounded distribution with support in $ [0,b] $ and $\mathbb{E} [{\tilde{B}}_{ij} ]=\mu$ for all $i\in[m], j \in [n].$ Denote the dualized uncertainty set $\tilde{{\cal W}}= \{\mb{w}\in{\mathbb R}^m_+\;|\; \mb {\tilde{B}}^T \mb w \leq \bar d \cdot \mb e \}$. Our performance bound is based on showing that $\tilde{{\cal W}}$ can be sandwiched between two simplices with a small scaling factor. In particular, consider the following simplex,
\begin{equation}\label{def:simplex}
{\cal S}= \left\{ \mb w \in \mathbb{R}_+^m \; \Bigg\vert \; \sum_{i=1}^m w_i \leq \frac{\bar d}{b} \right\}.
\end{equation}
We will show that $ {\cal S} \subseteq \tilde{{\cal W}} \subseteq \frac{b}{\mu ( 1- \epsilon)} \cdot {\cal S}$ with probability at least $1 - \frac{1}{m}$,
where $\epsilon = \frac{b}{\mu} \sqrt{\frac{\log m}{n}} $.
First, we show that $ {\cal S} \subseteq \tilde{{\cal W}} $. Consider any $ \mb w \in {\cal S}$. For any $i=1, \ldots , n $,
$$ \sum_{j=1}^m {\tilde{B}}_{ji} w_j \leq b \sum_{j=1}^m w_j \leq \bar d.$$
The first inequality holds because all components of $\mb {\tilde{B}}$ are upper bounded by $b$ and the second one follows from $ \mb w \in {\cal S}$. Hence, we have $\mb {\tilde{B}}^T \mb w \leq \bar d \mb e$ and consequently ${\cal S} \subseteq \tilde{{\cal W}}$.
Now, we show that the other inclusion holds with high probability. Consider any $ \mb w \in \tilde{{\cal W}}$. We have $\mb {\tilde{B}}^T \mb w \leq \bar d \cdot \mb e $. Summing up all the inequalities and dividing by $n$, we get
\begin{equation}\label{pr:sum}
\sum_{j=1}^m \left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } \right) \cdot w_j \leq \bar d .
\end{equation}
Using Hoeffding's inequality \cite{hoeffding1963probability} (see Appendix \ref{apx-proofs:Hof-ineq}) with $\tau = b\sqrt{ \frac{\log m}{n}}$, we have
\begin{align*}
\mathbb{P} \left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } - \mu \geq - \tau \right) \geq 1 - \exp \left( \frac{-2n{\tau}^2}{b^2} \right) = 1- \frac{1}{m^2} \\
\end{align*}
and a union bound over $j=1,\ldots,m$ gives us
\begin{equation*}
\mathbb{P} \left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } \geq \mu - \tau \; \; \forall j=1,\ldots,m \right) \geq \left( 1- \frac{1}{m^2} \right)^m \geq 1- \frac{1}{m} .
\end{equation*}
where the last inequality follows from Bernoulli's inequality. Therefore, with probability at least $1-\frac{1}{m} $, we have
$$ \sum_{j=1}^m w_j \leq \sum_{j=1}^m \frac{1}{\mu - \tau }\left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } \right) \cdot w_j \leq \frac{\bar d}{(\mu - \tau) } = \frac{b}{\mu(1 - \epsilon)} \cdot \frac{\bar{d}}{b} $$
where the second inequality follows from \eqref{pr:sum}. Note that for $m$ and $n$ sufficiently large, we have $ \mu - \tau> 0$.
Then, $ \mb w \in \frac{b}{\mu(1 - \epsilon)} \cdot {\cal S} $ for any $\mb w \in \tilde{\cal W}$ and consequently $ {\cal S} \subseteq \tilde{\cal W} \subseteq \frac{b}{\mu(1 - \epsilon)} \cdot {\cal S} $ with probability at least $ 1 - 1/m$. Finally, we apply the result of Lemma \ref{lem:inclusion} to conclude.$ \hfill \square$
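The sandwiching argument can also be checked numerically. The following sketch (an illustration using \texttt{numpy} and \texttt{scipy}, not part of the proof) draws a random $\tilde{\mb B}$ with i.i.d. uniform entries and verifies the inclusion $\tilde{\cal W} \subseteq \frac{b}{\mu(1-\epsilon)} \cdot {\cal S}$ by maximizing $\sum_i w_i$ over $\tilde{\cal W}$ with a linear program; the inclusion ${\cal S} \subseteq \tilde{\cal W}$ holds deterministically since every entry of $\tilde{\mb B}$ is at most $b$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

m, n, b, mu, d_bar = 50, 200, 1.0, 0.5, 1.0
rng = np.random.default_rng(0)
B = rng.uniform(0.0, b, size=(m, n))          # B[i, j] ~ U[0, b], i.i.d.

# W contained in kappa * S: maximize sum(w) over {w >= 0 : B^T w <= d_bar e}
res = linprog(c=-np.ones(m), A_ub=B.T, b_ub=d_bar * np.ones(n),
              bounds=[(0, None)] * m)
eps = (b / mu) * np.sqrt(np.log(m) / n)
kappa = b / (mu * (1.0 - eps))
print(-res.fun, "<=", kappa * d_bar / b)      # empirical check of the sandwich
\end{verbatim}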
\subsection{ Unbounded distributions}
While the approximation bound in Theorem~\ref{thm:bounded} leads to a good approximation for many distributions, the ratio $b/\mu$ can be significantly large in general. We can tighten the analysis by using the concentration properties of distributions and can extend the analysis even for the case of distributions with unbounded support and sub-gaussian tails. In this section, we consider the special case where $\tilde{B}_{ij}$ are i.i.d. according to absolute value of a standard Gaussian, also called the {\em folded normal } distribution, and show a logarithmic approximation bound for affine policies.
In particular, we have the following theorem.
\begin{theorem} \label{thm:gaus}
Consider the two-stage adjustable problem \eqref{eq:ar} where $\forall i \in [n] , j \in [m]$, $\tilde{B}_{ij} = \vert \tilde{G}_{ij} \vert $ and $\tilde{G}_{ij} $ are i.i.d. according to a standard Gaussian distribution. For $n$ and $m$ sufficiently large, we have with probability at least $1- \frac{1}{m}$,
$$z_{\sf AR}(\tilde{\mb B}) \leq z_{\sf Aff}(\tilde{\mb B}) \leq \kappa \cdot z_{\sf AR}(\tilde{\mb B})$$
where $\kappa = O\left( \sqrt{ \log m + \log n} \right)$.
\end{theorem}
\begin{proof}
Denote $\tilde{{\cal W}}= \{\mb{w}\in{\mathbb R}^m_+\;|\; \mb {\tilde{B}}^T \mb w \leq \bar d \cdot \mb e \}$ and ${\cal S}= \{ \mb w \in \mathbb{R}_+^m \; \big\vert \; \sum_{i=1}^m w_i \leq \bar d \}$. Our goal is to sandwich $\tilde{{\cal W}}$ between two simplices and use Lemma \ref{lem:inclusion}. Using the following tail inequality for Gaussian random variables $ \tilde{G} \sim {\cal N} ( \mu , \sigma^2)$, $ \mathbb{P}( \vert \tilde{G} - \mu \vert \geq t) \leq 2 e^ { - \frac{t^2}{2 \sigma^2}}$, we have
\begin{align*}
\mathbb{P}( \tilde{B}_{ij} \leq \sqrt{6 \log(mn)} ) &= 1- \mathbb{P} \left( \vert \tilde{G}_{ij} \vert \geq \sqrt{ 6\log(mn)} \right)\\
& \geq 1- 2 \exp\left( \frac{- 6 \log(mn)}{2} \right) = 1- \frac{2}{(mn)^3} \geq 1- \frac{1}{(mn)^2}
\end{align*}
Therefore by taking a union bound,
$$\mathbb{P} \left( \tilde{B}_{ij} \leq \sqrt{6 \log(mn)} \; \; \forall i \in [n], \forall j \in [m] \right) \geq \left( 1- \frac{1}{(mn)^2} \right)^{mn} \geq 1- \frac{1}{mn}$$
where the last inequality follows from Bernoulli's inequality. Therefore for any $w \in{\cal S}$, we have with probability at least $1-\frac{1}{mn }$,
$$ \sum_{j=1}^m \tilde{B}_{ji} w_j \leq \sqrt{6 \log(mn)} \sum_{j=1}^m w_j \leq \sqrt{6 \log(mn)} \cdot \bar{d} \qquad \forall i \in [n] $$
Hence, with probability at least $1-\frac{1}{mn }$ we have,
$ {\cal S} \subseteq \sqrt{6 \log(mn)} \cdot \tilde{\cal W}$.
Now, we want to find a simplex that includes $ \tilde{\cal W}$. We follow a similar approach to the proof of Theorem \ref{thm:bounded}. Consider any $ \mb w \in \tilde{{\cal W}}$. We have similarly to equation \eqref{pr:sum}
\begin{equation} \label{toz}
\sum_{j=1}^m \left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } \right) \cdot w_j \leq \bar d .
\end{equation}
We have the following concentration inequality for non-negative random variables (see Theorem 7 in \cite{chung2006concentration}),
\begin{align*}
\mathbb{P} \left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } \geq \mu - \tau \right) \geq 1- \exp \left( \frac{-n \tau^2}{2 \mathbb{E}(\tilde{B}_{11}^2)} \right) = 1- \exp \left( \frac{-n \tau^2}{2 } \right) =1- \frac{1}{m^2} \\
\end{align*}
where $\tau = 2 \sqrt{ \frac{\log m}{n}}$ and $ \mu = \mathbb{E}[ \tilde{B}_{ji} ] = \sqrt{\frac{2}{\pi}}$ is the expectation of a folded standard normal distribution.
Then, union bound over $j=1,\ldots,m$ gives us
\begin{equation*} \label{pr:prob}
\mathbb{P} \left( \frac{\sum_{i=1}^n \tilde{B}_{ji}}{n } \geq \mu - \tau \; \; \forall j=1,\ldots,m \right) \geq \left( 1- \frac{1}{m^2} \right)^m \geq 1- \frac{1}{m} .
\end{equation*}
where the last inequality follows from Bernoulli's inequality. Therefore, combining this result with inequality \eqref{toz}, we have with probability at least $1-\frac{1}{m}$,
$ \tilde{\cal W} \subseteq \frac{1}{\mu - \tau} {\cal S}$. Denote ${\cal S'} = \frac{1}{ \sqrt{ 6\log(mn)}} {\cal S}$. Then, we have with probability at least $1-\frac{1}{m}$, $ {\cal S'} \subseteq \tilde{\cal W} \subseteq \kappa \cdot {\cal S'} $ where
$$ \kappa = \frac{ \sqrt{ 6\log(mn)}}{ \sqrt{\frac{2}{\pi}} - 2 \sqrt{ \frac{\log m}{n}} }= O\left( \sqrt{ \log m + \log n } \right),$$ for sufficiently large values of $m$ and $n$.
We finally use Lemma \ref{lem:inclusion} to conclude.
\end{proof}
We can extend the analysis and show a similar bound for the class of distributions with sub-gaussian tails. The bound of $O\left( \sqrt{ \log m + \log n} \right)$ depends on the dimension of the problem, unlike the case of bounded distributions. However, it is significantly better than the worst-case bound of $O(\sqrt{m})$ \cite{BG10} for general instances. Furthermore, this bound holds for all uncertainty sets with high probability. We would like to note, though, that the bounds are not necessarily tight. In fact, in our numerical experiments where the uncertainty set is a {\em budget of uncertainty} set, we observe that affine policies are near optimal.
\section{Family of worst-case distribution: perturbation of i.i.d. coefficients}
For any $m$ sufficiently large, the authors in \cite{BG10} present an instance where affine policy is $\Omega( m^{\frac{1}{2}- \delta } )$ away from the optimal adjustable solution. The parameters of the instance in \cite{BG10} were carefully chosen to achieve the gap $\Omega(m^{\frac{1}{2}- \delta})$. In this section, we show that the family of worst-case instances is not a set of measure zero. In fact, we exhibit a distribution and an uncertainty set such that a random instance from that distribution achieves a worst-case bound of $\Omega(\sqrt{m})$ with high probability. The coefficients $\tilde{B}_{ij}$ in our bad family of instances are independent but not identically distributed. The instance can be given as follows.
\begin{equation} \label{ex:bad:dist}
\begin{aligned}
&n=m, \; \; \mb A= \mb 0, \; \; \mb c= \mb 0, \; \; \mb d = \mb e \\
&{\cal U} = {\sf{conv}} \left( \mb 0, \mb e_1 , \ldots ,\mb e_m, \mb \nu_1, \ldots, \mb \nu_m \right) \;
\text{ where } \mb \nu_i = \frac{1}{\sqrt{m}}( \mb e- \mb e_i) \; \forall i \in [m]. \\
&\tilde{B}_{ij}= \left\{
\begin{array}{ll}
1 & \mbox{if} \; \; i=j \\
\frac{1}{\sqrt{m}} \cdot \tilde{u}_{ij} & \mbox{if} \; \; i \neq j \\
\end{array}
\right.
\text{where for all } i \neq j, \tilde{u}_{ij} \text{ are i.i.d. uniform} [0,1].
\end{aligned}
\end{equation}
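The random family \eqref{ex:bad:dist} is straightforward to generate; the following sketch (our own illustration, not part of the analysis) constructs $\tilde{\mb B}$ together with the extreme points of ${\cal U}$.
\begin{verbatim}
import numpy as np

def worst_case_instance(m, seed=0):
    """Draw B from the family above and return the extreme points of U."""
    rng = np.random.default_rng(seed)
    B = rng.uniform(0.0, 1.0, size=(m, m)) / np.sqrt(m)   # u_ij / sqrt(m)
    np.fill_diagonal(B, 1.0)                               # B_ii = 1
    I = np.eye(m)
    nu = (np.ones((m, m)) - I) / np.sqrt(m)                # rows: (e - e_i)/sqrt(m)
    vertices = np.vstack([np.zeros((1, m)), I, nu])        # 0, e_i, nu_i
    return B, vertices
\end{verbatim}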
\begin{theorem} \label{thm:bad:dist}
For the instance defined in \eqref{ex:bad:dist}, we have with probability at least $1-1/m$,
$$z_{\sf Aff}({\tilde{\mb B}})= \Omega ( \sqrt{m}) \cdot z_{\sf AR}({\tilde{\mb B}}).$$
\end{theorem}
As a byproduct, we also tighten the lower bound on the performance of affine policy to $\Omega(\sqrt m)$ improving from the lower bound of $\Omega( m^{\frac{1}{2}- \delta} ) $ in \cite{BG10}. We would like to note that both uncertainty set and distribution of coefficients in our instance \eqref{ex:bad:dist} are carefully chosen to achieve the worst-case gap. Our analysis suggests that to obtain bad instances for affine policies, we need to generate instances using a structured distribution as above and it may not be easy to obtain bad instances in a completely random setting.
To prove Theorem \ref{thm:bad:dist}, we introduce the following Lemma which shows a deterministic bad instance where the optimal affine solution is $\Theta (\sqrt{m})$ away from the optimal adjustable solution.
\begin{lemma} \label{lem:worst-case}
Consider the two-stage adjustable problem \eqref{eq:ar} where: $n=m, \mb c = \mb 0, \; \mb d = \mb e, \mb A= \mb 0$,
\begin{equation} \label{matrix:B}
{B}_{ij}= \left\{
\begin{array}{ll}
1 & \mbox{if} \; \; i=j \\
\frac{1}{\sqrt{m}} & \mbox{if} \; \; i \neq j \\
\end{array}
\right.
\end{equation}
and the uncertainty set is defined as
\begin{equation} \label{eq:worstU}
{\cal U} = {\sf{conv}} \left( \mb 0, \mb e_1 , \ldots ,\mb e_m, \mb \nu_1, \ldots, \mb \nu_m \right)
\end{equation}
where $\mb \nu_i = \frac{1}{\sqrt{m}}( \mb e- \mb e_i)$ for $i=1, \ldots,m$.
Then, $z_{\sf Aff}({\mb B})= \Omega ( \sqrt{m}) \cdot z_{\sf AR}({\mb B}).$
\end{lemma}
\begin{proof}
First, let us prove that $z_{\sf AR}({\mb B}) \leq 1$. It is sufficient to define an adjustable solution only for the extreme points of ${\cal U}$ because the constraints are linear. We define the following solution for all $i=1,\ldots,m$.
$$ \mb x = \mb 0 , \qquad \mb y ( \mb 0) = \mb 0, \qquad \mb y ( \mb e_i) = \mb e_i, \qquad \mb y ( \mb \nu_i) = \frac{1}{m} \mb e.$$
We have $\mb B \mb y ( \mb 0) = \mb 0$ and for $i \in [m]$
$$\mb B \mb y ( \mb e_i) = \mb e_i + \frac{1}{\sqrt{m}} ( \mb e - \mb e_i) \geq \mb e_i$$
and
$$\mb B \mb y (\mb \nu_i) = \frac{1}{m} \mb B \mb e = \left( \frac{1}{m}+ \frac{m-1}{m \sqrt{m}} \right) \mb e \geq \frac{1}{\sqrt{m}} \mb e \geq \mb \nu_i$$
Therefore, the solution defined above is feasible. Moreover, the cost of our feasible solution is $1$ because for all $i \in [m]$, we have
$$ \mb d^T \mb y ( \mb e_i)= \mb d^T \mb y ( \mb \nu_i)= 1.$$
Hence, $z_{\sf AR}({\mb B}) \leq 1.$
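The feasibility and cost of the adjustable solution constructed above can be verified numerically; the following sketch (an illustration only) checks the constraints at all extreme points of ${\cal U}$ for a small value of $m$.
\begin{verbatim}
import numpy as np

m = 16
B = np.full((m, m), 1.0 / np.sqrt(m))
np.fill_diagonal(B, 1.0)                      # B_ii = 1, B_ij = 1/sqrt(m)

def y(h):                                     # the adjustable solution above
    if not h.any():                           # h = 0
        return np.zeros(m)
    if np.count_nonzero(h) == 1:              # h = e_i
        return h.copy()
    return np.ones(m) / m                     # h = nu_i

eps = 1e-9
for i in range(m):
    e_i = np.eye(m)[i]
    nu_i = (np.ones(m) - e_i) / np.sqrt(m)
    for h in (np.zeros(m), e_i, nu_i):
        assert np.all(B @ y(h) >= h - eps)    # covering constraint
        assert y(h).sum() <= 1 + eps          # second-stage cost at most 1
\end{verbatim}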
Now, it is sufficient to prove that $z_{\sf Aff}({\mb B})= \Omega ( \sqrt{m})$. Our instance is symmetric, i.e., both ${\cal U}$ and the dualized uncertainty set ${\cal W}$ are permutation invariant. Hence, by Lemma 8 in Bertsimas and Goyal \cite{BG10}, there exists an optimal solution for the affine problem \eqref{eq:aff} of the form $ \mb y( \mb h) = \mb P \mb h + \mb q$ for $\mb h \in {\cal U}$, where
\begin{equation} \label{matrix:P}
\mb P= \left(
\begin{matrix}
\theta & \mu & \ldots & \mu \\
\mu & \theta & \ldots & \mu \\
\vdots & \vdots & \ddots & \vdots\\
\mu & \mu &\ldots & \theta
\end{matrix}
\right)
\end{equation}
and $ \mb q = \lambda \mb e$.
We have $ \mb y(\mb 0) = \lambda \mb e \geq \mb 0$ hence
\begin{equation}\label{eq:lambda}
\lambda \geq 0.
\end{equation}
We know that
\begin{equation}\label{eq:lem:1}
z_{\sf Aff}({\mb B}) \geq \mb d^T \mb y( \mb 0) = \lambda m.
\end{equation}
{\bf Case 1:} If $\lambda \geq \frac{1}{6 \sqrt{m}}$, then from \eqref{eq:lem:1} we have $z_{\sf Aff}({\mb B}) \geq \frac{\sqrt{m}}{6}$.
{\bf Case 2:} If $ \lambda \leq \frac{1}{6 \sqrt{m}}$. We have
$$ \mb y( \mb e_1) = ( \theta+ \lambda ) \mb e_1 + ( \mu+ \lambda) ( \mb e - \mb e_1).$$
By feasibility of the solution, we have $ \mb B \mb y ( \mb e_1) \geq \mb e_1$, hence
$$ (\theta+ \lambda) +\frac{1}{\sqrt{m}} (m-1)(\mu+ \lambda) \geq 1$$
Therefore $ \theta + \lambda \geq \frac{1}{2}$ or $\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda) \geq \frac{1}{2}$.
{\bf Case 2.1:} Suppose $\frac{1}{\sqrt{m}} (m-1)(\mu + \lambda) \geq \frac{1}{2}$.
Therefore,
$$ z_{\sf Aff}({\mb B}) \geq \mb d^T \mb y (\mb e_1) = \theta+ \lambda + (m-1)(\mu+ \lambda) \geq \frac{\sqrt{m}}{2}.$$
where the last inequality holds because $\theta+ \lambda \geq 0 $ as $\mb y( \mb e_1) \geq \mb 0$.
{\bf Case 2.2:} Now suppose we have the other inequality i.e. $ \theta + \lambda \geq \frac{1}{2}$. Recall that we have $ \lambda \leq \frac{1}{6\sqrt{m}}$ as well. Therefore,
$$ \theta \geq \frac{1}{2}- \frac{1}{6\sqrt{m}} \geq \frac{1}{3}.$$
We have,
$$ \mb y ( \mb \nu_1 ) = \frac{1}{\sqrt{m}} \left( ( \theta + (m-2) \mu ) ( \mb e- \mb e_1) + (m-1) \mu \mb e_1 \right) + \lambda \mb e. $$
In particular we have ,
\begin{align} \label{eq:up} \nonumber
z_{\sf Aff}({\mb B}) \geq \mb d^T \mb y (\mb \nu_1) &= \frac{1}{\sqrt{m}} ( (m-1) \theta +(m-1)^2 \mu) + \lambda m \\
& \geq \frac{m-1}{\sqrt{m}} \left( \frac{1}{3} + (m-1) \mu \right).\\ \nonumber
\end{align}
where the last inequality follows from $\lambda \geq 0$ and $ \theta \geq \frac{1}{3}.$
{\bf Case 2.2.1:} If $ \mu \geq 0$ then from \eqref{eq:up}
$$ z_{\sf Aff}({\mb B}) \geq \frac{m-1}{3\sqrt{m}} = \Omega ( \sqrt{m}).$$
{\bf Case 2.2.2:} Now suppose that $ \mu < 0$, by non-negativity of $ \mb y ( \mb \nu_1) $ we have
$$ \frac{m-1}{\sqrt{m}} \mu + \lambda \geq 0$$
i.e. $$ \mu \geq \frac{-\lambda \sqrt{m}}{m-1}$$
and from \eqref{eq:up}
\begin{align*}
z_{\sf Aff}({\mb B}) &\geq \frac{m-1}{\sqrt{m}} \left( \frac{1}{3} + (m-1) \mu \right) \\
& \geq \frac{m-1}{\sqrt{m}}\left( \frac{1}{3} - \lambda \sqrt{m}\right) \\
& \geq \frac{m-1}{\sqrt{m}}\left( \frac{1}{3} - \frac{1}{6}\right) = \frac{m-1}{6 \sqrt{m}} = \Omega ( \sqrt{m}). \\
\end{align*}
We conclude that in all cases $z_{\sf Aff}({\mb B})= \Omega ( \sqrt{m})$ and consequently $z_{\sf Aff}({\mb B})= \Omega ( \sqrt{m}) \cdot z_{\sf AR}({\mb B}).$
\end{proof}
\subsection*{Proof of Theorem \ref{thm:bad:dist}}\label{apx-proofs:thm:bad:dist}
\begin{proof}
Denote ${\cal W}=\{\mb{w}\in{\mathbb R}^m_+\;|\; \mb B^T \mb w \leq \bar d \mb e\}$ and $\tilde{\cal W}=\{\mb{w}\in{\mathbb R}^m_+\;|\; \tilde{\mb B}^T \mb w \leq \bar d \mb e\}$
where $\mb B$ is defined in \eqref{matrix:B} and $\tilde{\mb B}$ is defined in \eqref{ex:bad:dist}. For all $i,j$ in $\{1,\ldots,m\}$ we have $\tilde{B}_{ij} \leq {B}_{ij}$. Hence, for any $\mb w \in {\cal W}$, we have $ \tilde{\mb B}^T \mb w \leq {\mb B}^T \mb w \leq \bar d \mb e$, so that $ \mb w \in \tilde{\cal W}$ and consequently
${\cal W} \subseteq \tilde{\cal W}$.
Now suppose $\mb w \in \tilde{\cal W}$. Then, for all $i=1, \ldots,m$,
\begin{equation}\label{eq:thm:worst3}
w_i + \frac{1}{\sqrt{m}} \sum_{\underset{j \neq i}{j=1} }^m \tilde{u}_{ji} w_j \leq \bar d .
\end{equation}
By taking the sum over $i$, dividing by $m$ and rearranging, we get
\begin{equation}\label{eq:thm:worst}
\sum_{i=1}^m w_i \left(\frac{1}{m} + \frac{1}{m\sqrt{m}} \sum_{\underset{j \neq i}{j=1} }^m \tilde{u}_{ij} \right) \leq \bar d.
\end{equation}
Here, similarly to the proof of Theorem \ref{thm:bounded}, we apply Hoeffding's inequality \cite{hoeffding1963probability} (see Appendix \ref{apx-proofs:Hof-ineq}) with $\tau=\sqrt{\frac{\log m}{m-1}}$: for each $i=1,\ldots,m$,
\begin{align*}
\mathbb{P} \left( \frac{\sum_{j \neq i} \tilde{u}_{ij}}{m-1 } \geq \frac{1}{2}- \tau \right) \geq 1 - \exp \left( -2(m-1){\tau}^2 \right) = 1- \frac{1}{m^2} ,
\end{align*}
and we take a union bound over $i=1,\ldots,m$:
\begin{equation} \label{eq:thm:worst2}
\mathbb{P} \left( \frac{\sum_{j \neq i} \tilde{u}_{ij}}{m-1} \geq \frac{1}{2} - \tau \; \; \forall i=1,\ldots,m \right) \geq \left( 1- \frac{1}{m^2} \right)^m \geq 1- \frac{1}{m} .
\end{equation}
where the last inequality follows from Bernoulli's inequality. Therefore, we conclude from \eqref{eq:thm:worst} and \eqref{eq:thm:worst2}, that with probability at least $1- \frac{1}{m}$ we have $ \beta \sum_{i=1}^m w_i \leq \bar{d}$ where $\beta = \frac{1}{m}+ \frac{m-1}{m\sqrt{m}}( \frac{1}{2}- \tau) \geq \frac{1}{4\sqrt{m}}$ for $m$ sufficiently large.
Note from \eqref{eq:thm:worst3} that for all $i$ we have $w_i \leq \bar d$. Hence with probability at least $1- \frac{1}{m}$, we have for all $i=1,\ldots,m$
$$ \mb B_i^T \mb w = w_i + \frac{1}{\sqrt{m}} \sum_{\underset{j \neq i}{j=1} }^m w_j \leq \bar{d}+ \frac{\bar{d}}{\beta \sqrt{m}} \leq 5 \cdot \bar{d}.$$
Therefore, $\mb w \in 5 \cdot {\cal W}$ for any $\mb w$ in $\tilde{\cal W}$ and consequently, with probability at least $1 - \frac{1}{m}$, $ \tilde{\cal W} \subseteq 5 \cdot {\cal W}$. Altogether, we have proved that, with probability at least $1 - \frac{1}{m}$, $ {\cal W} \subseteq \tilde{\cal W} \subseteq 5 \cdot {\cal W}$.
This implies with probability at least $1 - \frac{1}{m}$, that $z_{\sf d-Aff}(\tilde{\mb B}) \geq z_{\sf d-Aff}({\mb B})$ and $\ z_{\sf d-AR}({\mb B}) \geq \frac{z_{\sf d-AR}(\tilde{\mb B}) }{5}$.
We know from Lemma \ref{lem:berti-de Ruiter} and Lemma \ref{lem:reform} that the dualized and primal formulations have the same optimal value, both for the adjustable problem and for the affine problem. Hence,
with probability at least $1 - \frac{1}{m}$, we have $z_{\sf Aff}(\tilde{\mb B}) \geq z_{\sf Aff}({\mb B})$ and $\ z_{\sf AR}({\mb B}) \geq \frac{z_{\sf AR}(\tilde{\mb B}) }{5}$.
Moreover, we know from Lemma \ref{lem:worst-case} that $z_{\sf Aff}({\mb B}) \geq \Omega( \sqrt{m} ) \cdot z_{\sf AR}({\mb B})$. Therefore,
$z_{\sf Aff}(\tilde{\mb B}) \geq \Omega( \sqrt{m} ) z_{\sf AR}(\tilde{\mb B}) $ with probability at least $1 - \frac{1}{m}$.
\end{proof}
\section{Performance of affine policy: Empirical study}
In this section, we present a computational study to test the empirical performance of affine policy for the two-stage adjustable problem \eqref{eq:ar} on random instances.
\vspace{1mm}
\noindent {\bf Experimental setup.} We consider two classes of distributions for generating random instances: $i)$ Coefficients of $\tilde{\mb B}$ are i.i.d. uniform $[0,1]$, and $ii)$ Coefficients of $\tilde{\mb B}$ are absolute value of i.i.d. standard Gaussian. We consider the following {\em budget of uncertainty} set.
\begin{equation}\label{set-ones}
{\cal U}= \left\{ \mb h \in [0,1]^m \; \bigg\vert \; \sum_{i=1}^m h_i \leq \sqrt{m} \right\}.
\end{equation}
Note that the set \eqref{set-ones} is widely used in both theory and practice and arises naturally as a consequence of concentration of sums of independent uncertain demand requirements. We would also like to note that the adjustable problem over this budget of uncertainty set ${\cal U}$ is hard to approximate within a factor better than $O(\log n)$~\cite{FJMM07}. We consider
$n=m$, $\mb d = \mb e$, and $\mb c= \mb 0$, $\mb A= \mb 0$. We restrict to this case in order to compute the optimal adjustable solution in a reasonable time by solving a single MIP; for the general problem, computing the optimal adjustable solution requires solving a sequence of MIPs, each of which is significantly challenging to solve. We would like to note, though, that our analysis does not depend on the first stage cost $\mb c$ and the matrix $\mb A$, and that the affine policy can be computed efficiently even without this assumption.
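For concreteness, the following is a minimal sketch (in Python with \texttt{numpy}; the function name and its interface are our own illustrative choices, not part of the formulation) of how the random instances described above can be generated.
\begin{verbatim}
import numpy as np

def generate_instance(m, dist="uniform", seed=None):
    # Random instance with n = m, d = e and c = 0, A = 0, so only B,
    # the second-stage cost d and the budget sqrt(m) are needed.
    rng = np.random.default_rng(seed)
    if dist == "uniform":
        # coefficients of B are i.i.d. uniform on [0, 1]
        B = rng.uniform(0.0, 1.0, size=(m, m))
    else:
        # coefficients of B are absolute values of i.i.d. standard Gaussians
        B = np.abs(rng.standard_normal(size=(m, m)))
    d = np.ones(m)        # second-stage cost d = e
    budget = np.sqrt(m)   # right-hand side of the budget constraint in U
    return B, d, budget
\end{verbatim}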
We consider values of $m$ from $10$ to $50$ and $20$ instances for each value of $m$. We report the ratio $r = z_{\sf{Aff}}(\tilde{\mb B}) / z_{\sf AR} (\tilde{\mb B}) $ in Table \ref{tab}. In particular, for each value of $m$, we report the average ratio $r_{\sf avg}$, the maximum ratio $r_{\sf max}$, the running time of the adjustable policy $T_{\sf AR}(s)$ and the running time of the affine policy $T_{\sf Aff}(s)$. We first give a compact LP formulation for the affine problem \eqref{eq:aff} and a compact MIP formulation for the separation problem of the adjustable problem~\eqref{eq:ar}.
\vspace{1mm}
\noindent {\bf LP formulations for the affine policies.}
The affine problem \eqref{eq:aff} can be reformulated as follows
\iffalse
\[
z_{\sf Aff}({\mb B}) = \min \; \left\{ \mb{c}^T \mb{x} + z \; \left | \; \begin{array}{ll}
& z \geq \mb{d}^T\left( \mb{P}\mb{h}+\mb{q}\right) \; \; \; \forall \mb h \in {\cal U} \\
& \mb{A}\mb{x} + \mb{B}\left( \mb{P}\mb{h}+\mb{q}\right) \; \geq \; \mb{h} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{P}\mb{h}+\mb{q} \; \geq \; \mb{0} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{x} \in {\mathbb R}^{n}_+
\end{array}
\right. \right \}.
\]
\fi
\begin{equation*}
\begin{aligned}
z_{\sf Aff}({\mb B}) = \min_{\mb x, \mb P, \mb q, z} \; & \mb{c}^T \mb{x} + z\\
\text{s.t.} \;\; & z \geq \mb{d}^T\left( \mb{P}\mb{h}+\mb{q}\right) \; \; \; \forall \mb h \in {\cal U} \\
& \mb{A}\mb{x} + \mb{B}\left( \mb{P}\mb{h}+\mb{q}\right) \; \geq \; \mb{h} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{P}\mb{h}+\mb{q} \; \geq \; \mb{0} \; \; \; \forall \mb h \in {\cal U} \\
& \mb{x} \in {\mathbb R}^{n}_+.
\end{aligned}
\end{equation*}
Note that this formulation has infinitely many constraints but we can write a compact LP formulation using standard techniques from duality. For example, the first constraint is equivalent to
$$z - \mb d^T \mb q \geq \max \; \{ \mb{d}^T\mb{P}\mb{h}\; | \; \mb R \mb h \leq \mb r, \; \mb h \geq \mb 0\}.$$
By taking the dual of the maximization problem, the constraint becomes $$z - \mb d^T \mb q \geq \min \; \{\mb{r}^T\mb v \; | \; \mb R^T \mb v \geq \mb P^T \mb d, \; \mb v \geq \mb 0\}.$$
We can then drop the min and introduce $\mb v $ as a variable, hence we obtain the following linear constraints
$$ z- \mb d^T \mb q \geq \mb r^T \mb v, \qquad \mb R^T \mb v \geq \mb P^T \mb d, \qquad \mb{v} \geq \mb 0.$$ We can apply the same technique to the other constraints. The complete LP formulation and its proof of correctness are presented in Appendix \ref{apx-LP-MIP}.
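As an illustration, for the budget of uncertainty set \eqref{set-ones} used in our experiments we can write ${\cal U}=\{\mb h \geq \mb 0 \; | \; \mb h \leq \mb e, \; \mb e^T \mb h \leq \sqrt{m}\}$, so that the dual of the inner maximization has a multiplier $\mb v_1 \in {\mathbb R}^m_+$ for the constraints $\mb h \leq \mb e$ and a multiplier $v_0 \geq 0$ for the budget constraint. The first constraint of the affine problem is then equivalent to the existence of $\mb v_1 \geq \mb 0$ and $v_0 \geq 0$ such that
$$ z - \mb d^T \mb q \geq \mb e^T \mb v_1 + \sqrt{m} \, v_0, \qquad \mb v_1 + v_0 \, \mb e \geq \mb P^T \mb d.$$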
\vspace{1mm}
\noindent {\bf MIP Formulation for the adjustable problem~\eqref{eq:ar}}.
For the adjustable problem~\eqref{eq:ar}, we show that the separation problem \eqref{eq:sep} can be formulated as a mixed integer program. The separation problem reads as follows:
\noindent
Given $ \hat{\mb x}$ and $ \hat{z}$ decide whether
\begin{equation}\label{eq:sep}
\max \; \{ (\mb h - \mb A \hat{\mb x})^T \mb w \; | \; \mb w \in {\cal W}, \mb h \in {\cal U}\} > \hat{z}
\end{equation}
The correctness of formulation \eqref{eq:sep} follows from equation \eqref{eq:proof:sep} in the proof of Lemma \ref{lem:reform} in Appendix \ref{apx-proofs:lem:reform}. The constraints in~\eqref{eq:sep} are linear but the objective function contains a bilinear term, ${\mb h}^T \mb w$. We linearize this using a standard {\em digitized reformulation}. In particular, we consider finite bit representations of the continuous variables $h_i$ and $w_i$ to the desired accuracy and introduce additional binary variables $\alpha_{ik}$, $\beta_{ij}$, where $\alpha_{ik}$ and $\beta_{ij}$ represent the $k^{th}$ bit of $h_i$ and the $j^{th}$ bit of $w_i$, respectively. Now, for any $i\in [m]$, $h_i \cdot w_i$ can be expressed as a bilinear expression with products of binary variables $\alpha_{ik} \cdot \beta_{ij}$, which can be linearized using additional variables $\gamma_{ijk}$ and the standard linear inequalities: $ \gamma_{ijk} \leq \beta_{ij}$, $ \gamma_{ijk} \leq \alpha_{ik} $, $ \gamma_{ijk} +1 \geq \alpha_{ik} + \beta_{ij} $. The complete MIP formulation and the proof of correctness are presented in Appendix \ref{apx-LP-MIP}.
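As a rough illustration of this separation step, the following sketch (in Python with \texttt{gurobipy}, the Python interface of the Gurobi solver used in our experiments) solves $\max \{ \mb h^T \mb w \; | \; \mb h \in {\cal U}, \, \mb w \in {\cal W}\}$ in the experimental setting $\mb A=\mb 0$, $\mb d=\mb e$. For brevity it digitizes only $\mb h$ and linearizes the resulting binary--continuous products exactly, a simplified variant of the two-sided digitization described above; the function name, the number of bits \texttt{s} and the bound \texttt{w\_max} on the components of $\mb w$ over ${\cal W}$ are our own illustrative choices.
\begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def separation_value(B, w_max, s=10):
    # Sketch of the separation problem  max { h^T w : h in U, w in W }
    # for A = 0 and d = e, so that W = { w >= 0 : B^T w <= e }.
    # h is digitized to s bits; the only error is the h-digitization.
    # w_max must be a valid upper bound on every component of w over W
    # (e.g. obtained by maximizing each coordinate over W with m small LPs).
    m = B.shape[0]
    mod = gp.Model("separation")
    mod.Params.OutputFlag = 0

    w = mod.addVars(m, lb=0.0, ub=w_max, name="w")
    alpha = mod.addVars(m, s, vtype=GRB.BINARY, name="alpha")  # bits of h
    gamma = mod.addVars(m, s, lb=0.0, ub=w_max, name="gamma")  # gamma[i,k] = alpha[i,k]*w[i]

    # w in W:  B^T w <= e
    for j in range(m):
        mod.addConstr(gp.quicksum(B[i, j] * w[i] for i in range(m)) <= 1.0)

    # h_i = sum_k 2^{-(k+1)} alpha[i,k] lies in [0,1];  h in U:  sum_i h_i <= sqrt(m)
    h = [gp.quicksum(2.0 ** (-(k + 1)) * alpha[i, k] for k in range(s))
         for i in range(m)]
    mod.addConstr(gp.quicksum(h) <= np.sqrt(m))

    # exact linearization of the products alpha[i,k] * w[i]
    for i in range(m):
        for k in range(s):
            mod.addConstr(gamma[i, k] <= w_max * alpha[i, k])
            mod.addConstr(gamma[i, k] <= w[i])
            mod.addConstr(gamma[i, k] >= w[i] - w_max * (1 - alpha[i, k]))

    # objective  h^T w  (there is no  -A x  term since A = 0)
    mod.setObjective(gp.quicksum(2.0 ** (-(k + 1)) * gamma[i, k]
                                 for i in range(m) for k in range(s)),
                     GRB.MAXIMIZE)
    mod.optimize()
    # compare the returned value with the current z_hat to decide whether a cut is needed
    return mod.ObjVal
\end{verbatim}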
\iffalse
Let $ \mb h = \sum_{i=1}^m h_i \mb e_i$. For all $i \in [m]$ we digitize the component $ h _i$ as follows
$$h _i= \sum_{k= -\Delta_{\cal U}}^s \frac{\alpha_{ik}}{2^k}$$
where $s= \ceil{ \log_2 \left( \frac{m}{\epsilon } \right) }$, $\Delta_{\cal U}$ is an upper bound on any $ h_i$ and $\alpha_{ik}$ are binary variables. This digitization gives an approximation to $h_i$ within $\frac{\epsilon }{m}$ which translates to an accuracy of $\epsilon$ in the objective function. We have
$$ \mb h = \sum_{i=1}^m \sum_{k= -\Delta_{\cal U}}^s \frac{\alpha_{ik}}{2^k} \cdot \mb e_i \qquad
\text{and similarly,} \qquad
\mb w = \sum_{i=1}^m \sum_{j= -\Delta_{\cal W}}^s \frac{\beta_{ij}}{2^j} \cdot \mb e_i $$
where $\Delta_{\cal W}$ is an upper bound on any component of $ w \in {\cal W}$. Therefore, the first term in the objective function becomes
$$ \mb h^T \mb w= \sum_{i=1}^m \sum_{j= -\Delta_{\cal W}}^s \sum_{k= -\Delta_{\cal U}}^s \frac{1}{2^{j+k}} \cdot \alpha_{ik} \beta_{ij}$$
The final step is to linearize the term $\alpha_{ik} \beta_{ij}$. We set, $\alpha_{ik} \beta_{ij}=\gamma_{ijk}$ where again $\gamma_{ijk}$ is a binary variable. We can express $\gamma_{ijk}$ using only linear constraints as follows $ \gamma_{ijk} \leq \beta_{ij}$, $ \gamma_{ijk} \leq \alpha_{ik} $ and $ \gamma_{ijk} +1 \geq \alpha_{ik} + \beta_{ij} $. The complete MIP formulation in presented in Appendix \ref{apx-LP-MIP}.
\fi
For general $\mb A \neq \mb 0$, we need to solve a sequence of MIPs to find the optimal adjustable solution. In order to compute the optimal adjustable solution in a reasonable time, we assume $\mb A=\mb 0$, $\mb c=\mb 0$ in our experimental setting, so that we only need to solve one MIP.
\vspace{2mm}
\noindent {\bf Results.}
In our experiments, we observe that the empirical performance of affine policy is near-optimal. In particular, the performance is significantly better than the theoretical performance bounds implied in Theorem~\ref{thm:bounded} and Theorem~\ref{thm:gaus}. For instance, Theorem~\ref{thm:bounded} implies that affine policy is a 2-approximation with high probability for random instances from a uniform distribution. However, in our experiments, we observe that the optimality gap for affine policies is at most $4\%$ (i.e. an approximation ratio of at most $1.04$). The same observation holds for Gaussian distributions as well, where Theorem \ref{thm:gaus} gives an approximation bound of $O(\sqrt{\log(mn)})$. We would like to remark that we are not able to report the ratio $r$ for large values of $m$ because the adjustable problem is computationally very challenging and, for $m \geq 40$, the MIP does not solve within a time limit of $3$ hours for most instances. On the other hand, affine policy scales very well and the average running time is a few seconds even for large values of $m$. This demonstrates the power of affine policies that can be computed efficiently and give good approximations for a large class of instances.
\begin{table}[htp]
\begin{subtable}{.5\linewidth}\centering
{\begin{tabular}[t]{|l|l|l|c|c|}
\hline
$m$ & $r_{\sf avg}$ & $r_{\sf max}$ & $T_{\sf AR}(s)$ & $T_{\sf Aff}(s)$ \\ \hline
10 &1.01 &1.03 & 10.55 & 0.01 \\ \hline
20 & 1.02& 1.04 & 110.57 & 0.23\\ \hline
30 & 1.01 & 1.02 & 761.21 & 1.29 \\ \hline
50 &** & **& ** & 14.92 \\ \hline
\end{tabular}}
\caption{Uniform}\label{tab:uniform}
\end{subtable}%
\begin{subtable}{.5\linewidth}\centering
{\begin{tabular}[t]{|l|l|l|c|c|}
\hline
$m$ & $r_{\sf avg}$ & $r_{\sf max}$ & $T_{\sf AR}(s)$ & $T_{\sf Aff}(s)$\\ \hline
10 & 1.00 & 1.03& 12.95& 0.01 \\ \hline
20 &1.01 & 1.03& 217.08& 0.39\\ \hline
30 & 1.01&1.03 &594.15 & 1.15 \\ \hline
50 &** & **&** & 13.87 \\ \hline
\end{tabular}}
\caption{Folded Normal}\label{tab:gaussian}
\end{subtable}
\caption{Comparison of the performance and computation time of affine policy and optimal adjustable policy for uniform and folded normal distributions. For 20 instances, we compute $ z_{\sf{Aff}} (\tilde{\mb B}) /z_{\sf{AR}} (\tilde{\mb B}) $ and present the average and $\max$ ratios. Here, $T_{\sf AR}(s)$ denotes the running time for the adjustable policy and $T_{\sf Aff}(s)$ denotes the running time for the affine policy in seconds. ** denotes the cases where the optimal adjustable solution could not be computed within the time limit of 3 hours. These results are obtained using Gurobi 7.0.2 on a 16-core server with 2.93GHz processor and 56GB RAM.}
\label{tab}
\end{table}
\small
\newpage
\bibliographystyle{abbrv}
\section{Introduction}
In the present paper we are concerned with solutions to
\begin{equation} \label{Liouville-Quasilinear}
-\Delta_n u = h(x) e^u \hbox{ in } \Omega ,
\end{equation}
where $\Omega \subset \mathbb{R}^n$, $n \geq 2$, is a bounded open set and $\Delta_n u = \hbox{div} ( |\nabla u|^{n-2} \nabla u) $ stands for the $n$-Laplace operator. Solutions are meant in a weak sense and by elliptic estimates \cite{Dib,Ser1,Tol} such solutions are in $C^{1,\alpha}(\Omega)$ for some $\alpha \in (0,1)$.
\medskip
When $n=2$ problem~\eqref{Liouville-Quasilinear} reduces to the so-called Liouville equation, see \cite{Liou}, that represents the simplest case of ``Gauss curvature equation" on a two-dimensional surface arising in differential geometry.
In the higher dimensional case similar geometrical problems have led to different type of curvature equations. Recently, it has been observed that the $n$-Laplace operator comes into play
when expressing the Ricci curvature after a conformal change of the metric \cite{MaQing}, leading to another class of curvature equations that are of relevance. Moreover, the $n-$Liouville equation \eqref{Liouville-Quasilinear} represents a simplified version of a quasilinear fourth-order problem arising \cite{EsMa} in the theory of log-determinant functionals, that are relevant in the study of the conformal geometry of a $4-$dimensional closed manifold. In order to understand some of the bubbling phenomena that may occur in such geometrical contexts, we are naturally led to study the simplest situation given by~\eqref{Liouville-Quasilinear}.
\medskip Starting from the seminal work of Brezis and Merle~\cite{BrMerle} in dimension two, the asymptotic behavior of a sequence $u_k$ of solutions to
\begin{equation} \label{758}
-\Delta_n u_k= h_k (x) e^{u_k} \hbox{ in }\Omega,
\end{equation}
with
\begin{equation} \label{758bis}
\sup_k \int_\Omega e^{u_k}<+\infty
\end{equation}
and $h_k $ in the class
\begin{equation} \label{1059}
\Lambda_{a,b} =\{h \in C (\Omega): \ a \leq h \leq b \hbox{ in } \Omega\},
\end{equation}
can be generally described by a ``concentration-compactness" alternative. Extended \cite{AgPe} to the quasi-linear case, it reads as follows.
\medskip
{\bf Concentration-Compactness Principle:}
{\it Consider a sequence of functions $u_k$ such that \eqref{758}-\eqref{758bis} hold with $h_k \in \Lambda_{0,b}$. Then, up to a subsequence, the following alternative holds:
\begin{enumerate}
\item[(i)] $u_k$ is bounded in $L^{\infty}_{loc} (\Omega)$;
\item[(ii)] $u_k \to - \infty$ locally uniformly in $\Omega$ as $k\to +\infty$;
\item[(iii)] the blow-up set $\mathcal{S}$ of the sequence $u_k$, defined as
$$
\mathcal{S} = \{ p \in \Omega: \hbox{ there exists } x_k \in \Omega \hbox{ s.t. } \, x_k \to p , u_k (x_k) \to \infty \hbox{ as }k \to +\infty \},
$$
is finite, $u_k \to - \infty$ locally uniformly in $\Omega \setminus S$ and
\begin{equation} \label{253}
h_k e^{u_k} \rightharpoonup \sum_{p \in {\mathcal S}} \beta_p \delta_p
\end{equation}
weakly in the sense of measures as $k \to +\infty$ for some coefficients $\beta_p \geq n^{n} \omega_n$,
where $\omega_n$ stands for the volume of the unit ball in $\mathbb R^n$.
\end{enumerate}
}
\medskip
The compact case, in which the sequence $e^{u_k}$ does converge locally uniformly in $\Omega$, is expressed by alternatives (i) and (ii), thanks to elliptic estimates \cite{Dib,Tol}; alternative (iii) describes the non-compact case and the characterization of the possible values for the Dirac masses $\beta_p$ becomes crucial towards an accurate description of the blow-up mechanism.
\medskip
When a boundary control on $u_k$ is assumed, the answer is generally very simple. If one assumes that the oscillation of $u_k$ on $\partial B_{\delta} (p)$, $p \in \mathcal S$, is uniformly bounded for some $\delta>0$ small, using a Pohozaev identity it has been shown \cite{EsMo} that
$\beta_p = c_n \omega_n $, $c_n =n(\frac{n^2}{n-1})^{n-1}$, provided $h_k$ is in the class
\begin{equation} \label{1100}
\Lambda_{a,b}'=\{h \in C^1 (\Omega): \ a\leq h \leq b,\ |\nabla h| \leq b \hbox{ in }\Omega\}
\end{equation}
with $a>0$. Moreover, in the two-dimensional situation and under the condition
\begin{equation} \label{eq:Peso1}
0\leq h_k \to h \hbox{ in } C_{loc} (\Omega) \hbox{ as } k \to +\infty,
\end{equation}
a general answer has been found by Y.Y.~Li and Shafrir \cite{LiSh} showing that, for any $p \in \mathcal S$, $h(p)>0$ and the concentration mass $\beta_p$ is quantized as follows:
\begin{equation} \label{1011}
\beta_p \in 8\pi \mathbb{N}.
\end{equation}
\medskip \noindent The meaning of the value $8\pi$ in \eqref{1011} can be roughly understood as follows: the sequence $u_k$ develops several sharp peaks collapsing in $p$, each of them looking, after a proper rescaling, like a solution $U$ of
\begin{equation} \label{Limiting2}
- \Delta U = h(p) e^U \hbox{ in }\mathbb{R}^2,
\quad
\int_{\mathbb R^2} e^U < \infty,
\end{equation}
with $h(p)>0$. Using the complex representation formula obtained by Liouville \cite{Liou} or the more recent PDE approach by Chen-Li~\cite{ChenLi}, the solutions of~\eqref{Limiting2} are explicitly known and they all have the same mass: $\int_{\mathbb R^2} h(p) e^U = 8 \pi$. Therefore the value of $\beta_p$ in \eqref{1011} just represents the sum of the masses $8\pi$ carried by each of such sharp peaks collapsing in $p$.
\medskip \noindent When $n\geq 3$ a similar classification result for solutions $U$ of
\begin{equation} \label{Limitingn}
- \Delta_n U = h(p) e^U \hbox{ in }\mathbb{R}^n,
\quad
\int_{\mathbb R^n} e^U < \infty,
\end{equation}
with $h(p)>0$, has been recently provided by the first author in \cite{Esp}. For later convenience, observe in particular that the unique solution to
\begin{equation} \label{limitpb}
-\Delta_n U=h(p) e^U \quad \hbox{ in }\mathbb{R}^n, \quad U \leq U(0)=0, \quad \int_{\mathbb{R}^n} e^U<+\infty ,
\end{equation}
is given by
\begin{equation} \label{eq:Bubble}
U(y)=-n \log \left(1+c_n^{-\frac{1}{n-1}} h(p)^{\frac{1}{n-1}}|y|^{\frac{n}{n-1}}\right)
\end{equation}
and satisfies
\begin{equation} \label{quantization}
\int_{\mathbb{R}^n} h(p) e^U =c_n \omega_n , \quad c_n =n(\frac{n^2}{n-1})^{n-1}.
\end{equation}
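Let us point out that \eqref{quantization} can be verified directly: setting $a=\big(\frac{h(p)}{c_n}\big)^{\frac{1}{n-1}}$ and using the change of variables $t=a r^{\frac{n}{n-1}}$ one gets
$$\int_{\mathbb{R}^n} e^U= n\omega_n \int_0^{+\infty} \frac{r^{n-1}\,dr}{(1+a r^{\frac{n}{n-1}})^n}
= (n-1)\,\omega_n\, a^{-(n-1)} \int_0^{+\infty} \frac{t^{n-2}\,dt}{(1+t)^n}
=\frac{c_n \omega_n}{h(p)},$$
since $a^{-(n-1)}=\frac{c_n}{h(p)}$ and $\int_0^{+\infty} \frac{t^{n-2}}{(1+t)^n}\, dt=B(n-1,1)=\frac{1}{n-1}$, where $B(\cdot,\cdot)$ denotes the Beta function.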
Due to the invariance of \eqref{Limitingn} under translations and scalings, all solutions to \eqref{Limitingn} are given by the $(n+1)-$parameter family
$$
U_{a, \lambda} (y) = U \left(\lambda (y - a) \right) + n \log \lambda = \log \frac{ \lambda^n}{ ( 1 + c_n^{-\frac{1}{n-1}} h(p)^{\frac{1}{n-1}} \lambda^{\frac{n}{n-1}} |y - a |^{\frac{n}{n-1}} )^n } ,\quad (a,\lambda) \in \mathbb R^n \times (0,\infty),
$$
and satisfy $\int_{\mathbb R^n} h(p) e^{U}=c_n \omega_n $. As a by-product, under the condition
\eqref{eq:Peso1} we necessarily have in \eqref{253} that
\begin{equation} \label{1718}
\beta_p \geq c_n \omega_n, \end{equation}
a bigger value than the one appearing in the alternative (iii) of the Concentration-Compactness Principle.
\medskip
The blow-up mechanism that leads to the quantization result \eqref{1011} relies on an almost scaling-invariance property of the corresponding PDE, which guarantees that all the involved sharp peaks carry the same mass. Since it is also shared by the $n$-Liouville equation, a similar quantization property is expected to hold for the quasilinear case too:
\begin{equation} \label{10111}
\beta_p \in c_n\omega_n \mathbb{N}.
\end{equation}
However, the main point in proving \eqref{1011} is the limiting vanishing of the mass contribution coming from the neck regions between the sharp peaks. In the two-dimensional situation such a crucial property follows from a Harnack inequality of $\sup+\inf$ type, established first in \cite{Shafrir} through an isoperimetric argument and an analysis of the mean average for a solution $u$ to \eqref{Liouville-Quasilinear}$_{n=2}$. A different proof can be given, following \cite{Rob}, through Green's representation formula (see Remark \ref{commento} for more details), and a sharp form of such an inequality was later established in \cite{BrLiSh,ChenLin,ChenLi1} via isoperimetric arguments or moving planes/spheres techniques. However, none of these approaches carries over to the $n-$Liouville equation, due to the nonlinearity of the differential operator; for instance, the Green representation formula is no longer available in the quasilinear context, and in nonlinear potential theory an alternative has been found in \cite{KiMa} in terms of the Wolff potential, which however fails to provide the sharp constants needed to derive precise asymptotic estimates on blowing-up solutions to \eqref{758}-\eqref{758bis}. We refer the interested reader to \cite{HKM,KuMi} for an overview of nonlinear potential theory.
\medskip In order to establish the validity of \eqref{10111}, the first main contribution of our paper is represented by a new and very simple blow-up approach to $\sup+\inf$ inequalities. Since the limiting profiles have the form \eqref{eq:Bubble}, near a blow-up point we are able to compare in an effective way a blowing-up sequence $u_k$ with the radial situation, in which sharp constants are readily available. Using the notations \eqref{1059} and \eqref{1100} our first main result reads as follows:
\begin{thm} \label{711intro}
Given $0<a\leq b <\infty$, let $ \Lambda \subset \Lambda_{a,b}$ be a set which is equicontinuous at each point of $\Omega$ and consider
$$ {\mathcal U} :=
\{ u \in C^{1,\alpha} (\Omega) \, : \, u \hbox{ solves } \eqref {Liouville-Quasilinear} \hbox{ with } h \in \Lambda \}.$$
Given a compact set $K \subset \Omega$ and $C_1>n-1$, there exists $C_2 =C_2( \Lambda, K, C_1) >0$ so that
\begin{equation}\label{317bis}
\max_K u+C_1 \inf_\Omega u \leq C_2,
\quad
\forall u \in {\mathcal U}.
\end{equation}
In particular, the inequality \eqref{317bis} holds for the solutions $u$ of \eqref{Liouville-Quasilinear} with $h \in \Lambda_{a,b}'$.
\end{thm}
By combining the $\sup+ \inf-$inequality with a careful blow-up analysis, we are able to prove our second main result:
\begin{thm} \label{thm:Main}
Let $u_k$ be a sequence of solutions to \eqref{758} so that \eqref{758bis}-\eqref{253} hold. If one assumes~\eqref{eq:Peso1}, then $h(p)>0$ and $\beta_p $ satisfies \eqref{10111} for any $p \in \mathcal S$.
\end{thm}
Our paper is structured as follows. Section~\ref{sec:SupInf} is devoted to establishing the $\sup+\inf$ inequality. Starting from a basic description of the blow-up mechanism, reported in the appendix for the reader's convenience, a refined asymptotic analysis is carried out in Section~\ref{sec:Simple} to establish Theorem~\ref{thm:Main} when the blow-up point is ``isolated", according to some well-established terminology as used for instance in \cite{YYLi}. The quantization result in its full generality will be the object of Section~\ref{sec:Quantization}.
\section{The $\sup+\inf$ inequality} \label{sec:SupInf}
\noindent
When $n=2$ the so-called ``sup+inf" inequality has been first derived by Shafrir~\cite{Shafrir}: given $a,b>0$ and $K \subset \Omega$ a non-empty compact set, there exist constants $C_1,C_2> 0 $ so that
\begin{equation} \label{0317}
\sup_K u + C_1 \inf_{\Omega} u \leq C_2
\end{equation}
does hold for any solution $u$ of \eqref{Liouville-Quasilinear}$_{n=2}$ with $0<a\leq h \leq b$ in $\Omega$; moreover one can take $C_1=1$ if $h\equiv 1$. Later on, Brezis, Li and Shafrir showed \cite{BrLiSh} the validity of \eqref{0317} in its sharp form with $C_1=1$ for any $h \in \Lambda_{a,b}'$, $a>0$.
\medskip \noindent To reach this goal, a first tool needed is a general Harnack inequality that holds for solutions $u$ of $-\Delta_n u=f \geq 0 $ in $\Omega$. By means of the so-called nonlinear Wolff potential in \cite{KiMa} it is proved that there exists a constant $c_1>0$ such that
$$u(x)-\inf_{\Omega} u \geq c_1 \int_0^{\delta} \Big[ \int_{B_t(x)} f \Big]^{\frac{1}{n-1}} \frac{dt}{t} $$
holds for each ball $B_{2\delta}(x) \subset \Omega$. Since $f \geq 0$, note that the above inequality implies that
\begin{equation} \label{0617bis}
u(x)-\inf_{\Omega} u \geq c_1 \Big[ \int_{B_r(x)} f \Big]^{\frac{1}{n-1}} \log \frac{\delta}{r}
\end{equation}
for all $0<r<\delta$. The constant $c_1$ is not explicit and our argument could be significantly simplified if we knew $c_1= (n \omega_n)^{-\frac{1}{n-1}}$, see Remark \ref{commento} for a thorough discussion. However, in the class of radial functions, the following lemma shows that indeed \eqref{0617bis} holds with the sharp constant $c_1=(n \omega_n)^{-\frac{1}{n-1}}$:
\begin{lm} \label{radial}
Let $u \in C^1 (B_{R_2}(a))$ and let $0\leq f \in C(\overline{B_{R_2}(a)}) $ be a radial function with respect to $a \in \mathbb{R}^n$ so that
$$-\Delta_n u \geq f \quad \hbox{in } B_{R_2}(a).$$
Then
\begin{equation} \label{1122}
u(a)-\inf_{B_{R_2}(a)} u \geq (n \omega_n)^{-\frac{1}{n-1}} \int_0^{R_2} \left( \int_{B_{t} (a)} f \right)^{\frac{1}{n-1}} \frac{dt}{t}.
\end{equation}
In particular, for each $0<R_1<R_2$ there holds
$$u(a)-\inf_{B_{R_2}(a)} u \geq (n \omega_n)^{-\frac{1}{n-1}} \left( \int_{B_{R_1} (a)} f \right)^{\frac{1}{n-1}} \log \frac{R_2}{R_1}.$$
\end{lm}
\begin{proof} Consider the radial solution $u_0$ solving
$$-\Delta_n u_0=f \hbox{ in }B_{R_2}(a),\quad u_0=0 \hbox{ on }\partial B_{R_2}(a).$$
Since
$$-\Delta_n u \geq -\Delta_n u_0 \hbox{ in }B_{R_2}(a),\quad u-\inf_{B_{R_2}(a)} u \geq u_0 \hbox{ on }\partial B_{R_2}(a),$$
by comparison principle there holds
\begin{equation} \label{1127}
u-\inf_{B_{R_2}(a)} u \geq u_0 \qquad \hbox{in }B_{R_2}(a).
\end{equation}
Furthermore, $u_0$ is radial with respect to $a$ and can be explicitly written as $(r=|x-a|)$:
\begin{equation} \label{1130}
u_0(r)=\int_r^{R_2} \left( \int_0^t s^{n-1}f(s) ds \right)^{\frac{1}{n-1}} \frac{dt}{t} =
\int_r^{R_2} \left( \frac{1}{n \omega_n} \int_{B_t(a)} f \right)^{\frac{1}{n-1}} \frac{dt}{t} .
\end{equation}
By \eqref{1127}-\eqref{1130} we deduce the validity of \eqref{1122}. Since the function $t \to \int_{B_t(a)}f$ is non decreasing in view of $f \geq 0$, we have for each $0<R_1<R_2$ that
\begin{eqnarray*}
u_0(a) \geq (n \omega_n)^{-\frac{1}{n-1}}
\int_{R_1}^{R_2} \left( \int_{B_t(a)} f \right)^{\frac{1}{n-1}} \frac{dt}{t}
\geq (n \omega_n)^{-\frac{1}{n-1}}
\left( \int_{B_{R_1} (a)} f \right)^{\frac{1}{n-1}} \log \frac{R_2}{R_1}
\end{eqnarray*}
and the proof is complete thanks to \eqref{1127}.
\end{proof}
\medskip This lemma is helpful to extend \eqref{0317} to the quasilinear case for all $C_1>n-1$ and this will be enough to establish the quantization result \eqref{10111}. It is an interesting open question to know whether or not the sharp inequality with $C_1=n-1$ is valid for a reasonable class of weights $h$, as when $n=2$ \cite{BrLiSh}.
The $\sup+\inf$ inequality in Theorem \ref{711intro} will be an immediate consequence of the following result.
\begin{thm} \label{711}
Let $0 < a \leq b < \infty$, consider the sets $\Lambda$ and ${\mathcal U}$ defined in Theorem \ref{711intro}. Then given
$K \subset \Omega$ a nonempty compact set and $C_1>n-1$, there exists a constant $C_3 >0$ such that
$\displaystyle \max_K u \leq C_3$ holds for all $u \in {\mathcal U}$ satisfying
$\displaystyle \max_K u + C_1 \inf_\Omega u \geq 0$
(Theorem \ref{711intro} then follows by taking $C_2 = (1+C_1)\, C_3$).
\end{thm}
\begin{proof} Choose $\delta>0$ so that $K_\delta=\{ x \in \Omega:\ \hbox{dist}(x,K) \leq 2 \delta\} \subset \Omega$.
Let $u$ be a solution to \eqref{Liouville-Quasilinear} with $h \geq 0$ so that
\begin{equation} \label{1253}
\max_K u+C_1 \inf_\Omega u \geq 0.
\end{equation}
Denote by $\bar x \in K$ a maximum point of $u$ in $K$: $u(\bar x)=\displaystyle \max_K u$.
It follows from \eqref{0617bis} that
\begin{equation} \label{0617}
u(\bar x)-\inf_{\Omega} u \geq c_1 \Big[ \int_{B_r(\bar x)} h e^{u} \Big]^{\frac{1}{n-1}} \log \frac{\delta}{r}
\end{equation}
for all $0<r<\delta$ in view of $B_{2\delta} (\bar x) \subset \Omega$. Therefore, we deduce that
$$ c_1 \Big[ \int_{B_r(\bar x)} h e^{u} \Big] ^{\frac{1}{n-1}}
\leq
\left\{
\frac{u(\bar x)-\displaystyle \inf_{\Omega} u}{\log \frac{\delta}{r}}
\right\} \leq (1+\frac{1}{C_1}) \frac{u(\bar x)}{\log \frac{\delta}{r}}
$$
for all $0<r<\delta$ in view of \eqref{1253}-\eqref{0617}.
\medskip \noindent Arguing by contradiction, if the conclusion of the theorem is wrong, we can find a sequence
$u_k \in {\mathcal U}$ satisfying \eqref{1253} such that, as $k \to \infty$, we have
\begin{equation} \label{blow}
\max_K u_k \to +\infty.
\end{equation}
Letting $\bar x_k \in K$ so that $u_k(\bar x_k)=\displaystyle \max_K u_k$ and $\bar \mu_k =e^{-\frac{u_k(\bar x_k)}{n}}$, we have that $\bar \mu_k \to 0$ as $k \to +\infty$ in view of \eqref{blow}. Since for each $R>0$ we can find $k_0 \in \mathbb{N}$ so that $R\bar \mu_k < \delta$ for all $k \geq k_0$, by \eqref{0617} we deduce that
\begin{equation} \label{1224}
c_1 \limsup_{k \to \infty}
\Big[ \int_{B_{R \bar \mu_k}(\bar x_k)} h_k e^{u_k} \Big] ^{\frac{1}{n-1}}
\leq n \left( 1 + \frac{1}{C_1} \right).
\end{equation}
By applying Ascoli-Arzela, we can further assume, up to a subsequence, that
\begin{equation} \label{0437}
\bar x_k \to p \in K, \quad h_k \to h \geq a>0 \hbox{ in }C_{loc}(\Omega)
\end{equation}
as $k \to +\infty$.
\medskip \noindent Once \eqref{1224} is established, in order to reach a contradiction we aim to replace $\bar x_k$ by a nearby local maximum point $x_k \in \Omega$ of $u_k$ with $u_k(x_k)\geq u_k(\bar x_k)=\displaystyle \max_K u_k$. We can argue as follows: the function $\bar U_k (y) = u_k(\bar \mu_k y+\bar x_k)+n\log \bar \mu_k$ satisfies
$$
- \Delta_n \bar U_k = h_k (\bar \mu_k y+\bar x_k) e^{\bar U_k}
\hbox{ in } \Omega_k=\frac{\Omega-\bar x_k}{ \bar \mu_k}
$$
and
\begin{equation} \label{1058bis}
\bar U_k \leq \bar U_k (0) = 0 \, \, \hbox{ in } \frac{K - \bar x_k}{\bar \mu_k} ,
\qquad
\limsup_{k \to +\infty} \int_{B_R(0)} e^{\bar U_k}\leq \frac{1}{a} \big( \frac{n}{c_1} \big)^{n-1} \left( 1 + \frac{1}{C_1} \right)^{n-1}
\end{equation}
in view of \eqref{1224}-\eqref{0437}. From \eqref{1058bis}, the Concentration-Compactness Principle and $\bar U_k(0)=0$ we deduce, up to a subsequence, that:
\medskip
\begin{enumerate}
\item[{\bf (i)}]
either, $\bar U_k$ is bounded in $L^\infty_{loc}(\mathbb{R}^n)$
\item[{\bf (ii)}]
or, $h_k (\bar \mu_k y+\bar x_k) e^{\bar U_k} \rightharpoonup \beta_0 \delta_0+\displaystyle \sum_{i=1}^I \beta_i \delta_{p_i}$ weakly in the sense of measures in $\mathbb{R}^n$, for some $\beta_i \geq n^n \omega_n$, $i \in \{0,\ldots, I\}$, and distinct points $p_1,\ldots,p_I \in \mathbb{R}^n \setminus \{0\}$, and $\bar U_k \to -\infty$ locally uniformly in $\mathbb{R}^n \setminus\{0, p_1,\ldots,p_I\}$.
\end{enumerate}
\medskip \noindent \underline{\bf{Case (i)}}: $\bar U_k$ is bounded in $L^\infty_{loc}(\mathbb{R}^n)$
\medskip \noindent
By elliptic estimates \cite{Dib,Tol} we deduce that $ \bar U_k \to \bar U $ in $C^1_{loc}(\mathbb{R}^n)$ as $k\to +\infty$, where $\bar U$ satisfies \eqref{Limitingn} with $h(p)>0$ and $\bar U(0)=0$ in view of \eqref{0437}-\eqref{1058bis}. Since in general $\frac{K-\bar x_k}{\bar \mu_k}$ does not tend to $\mathbb{R}^n$ as $k \to +\infty$, by \eqref{1058bis} we cannot guarantee that $\bar U$ achieves the maximum value at $0$. However, by the classification result in \cite{Esp} we have that $\bar U=U_{a,\lambda}$ for some $(a,\lambda)\in \mathbb{R}^n \times (0,\infty)$. Since $\bar U$ is a radially strictly decreasing function with respect to $a$, we can find a sequence $a_k \to a$ such that as $k \to +\infty$
\begin{equation}\label{804}
\bar U_k(a_k)=\max_{B_R(a_k)}\bar U_k,
\quad
\bar U_k (a_k) \to \bar U (a) =\max_{\mathbb{R}^n} \bar U
\end{equation}
for all $R>0$ and $k$ large (depending on $R$). Setting $x_k=\bar \mu_k a_k+\bar x_k$ and $\mu_k=e^{-\frac{u_k(x_k)}{n}}$, we have that
$$
u_k(x_k)=\bar U_k(a_k)-n\log \bar \mu_k \geq \bar U_k(0)-n \log \bar \mu_k=u_k(\bar x_k)$$
and
\begin{equation} \label{750}
1 \leq \frac{\bar \mu_k}{\mu_k}=e^{\frac{u_k(x_k)-u_k(\bar x_k)}{n}}=e^{\frac{\bar U_k(a_k)}{n}} \xrightarrow{ \, k \to \infty \,}
e^{\frac{\max_{\mathbb{R}^n} \bar U}{n}}
\end{equation}
in view of \eqref{804}. Let us now rescale $u_k$ with respect to $x_k$ by setting
$$U_k(y)=u_k(\mu_k y+x_k)+n\log \mu_k.
$$
Since \eqref{1058bis}-\eqref{804} re-write in terms of $U_k$ as
\begin{eqnarray}\label{1020}
&& \limsup_{k \to +\infty} \int_{B_{R \frac{\bar \mu_k}{\mu_k}}(- \frac{\bar \mu_k}{\mu_k} a_k)} e^{U_k}
\leq \frac{1}{a} \big( \frac{n}{c_1} \big)^{n-1} \left( 1 + \frac{1}{C_1} \right)^{n-1} \\
\label{2146}
&& U_k(0)=\max_{B_{R \frac{\bar \mu_k}{\mu_k}}(0)}U_k=0
\end{eqnarray}
for all $R>0$, thanks to the uniform convergence \eqref{0437}, by \eqref{750}-\eqref{2146} and elliptic estimates \cite{Dib,Tol} we have that $U_k \to U $ in $C^1_{loc}(\mathbb{R}^n)$ as $k\to +\infty$, where $U$ satisfies \eqref{limitpb} with $h(p)>0$. Then $U$ takes precisely the form \eqref{eq:Bubble} and satisfies \eqref{quantization}.
\medskip \noindent Therefore for each $R >0$ and $\epsilon\in (0,1)$, there exists $k_0 = k_0 (R, \varepsilon) >0$ so that for all $k \geq k_0$ there hold $B_{R\mu_k}(x_k) \subset B_{\delta}(x_k) \subset B_{2\delta}(\bar x_k)$ and
\begin{equation} \label{1159bis}
h_k(x) \geq \sqrt{1-\epsilon} \ h(p), \:\: u_k(x) \geq U_{x_k, \mu_k^{-1} } + \log \sqrt{1-\epsilon} \qquad \hbox{in }B_{R \mu_k}(x_k)
\end{equation}
in view of \eqref{0437} and $U_k \geq U + \log \sqrt{1-\epsilon}$ in $B_{R}(0)$. Setting $f_k = (1-\epsilon) h(p) e^{U_{ x_k, \mu_{k}^{-1} } } \chi_{B_{R \mu_k}(x_k)}$, by \eqref{1159bis} we have that $h_k e^{u_k}\geq f_k$ in $B_{\delta}(x_k)$, and then Lemma \ref{radial} implies the following lower bound for all $k \geq k_0$:
\begin{eqnarray}
u_k(x_k)-\inf_{B_{\delta}(x_k)}u_k \geq \left( \frac{1-\epsilon}{n \omega_n} \int_{B_{R} (0)} h(p) e^U \right)^{\frac{1}{n-1}} \log \frac{ \delta}{R \mu_k}
\label{1231}
\end{eqnarray}
in view of $\int_{B_{R\mu_k} (x_k)} f_k= (1-\epsilon) \int_{B_R(0)} h(p) e^U$. Recalling that $\mu_k=e^{-\frac{u_k(x_k)}{n}}$, by \eqref{1231} we deduce that
$$
\left( \frac{1-\epsilon}{n \omega_n} \int_{B_{R} (0)} h(p) e^U \right)^{\frac{1}{n-1}}
\leq
\frac{u_k(x_k)-\inf_{B_{\delta}(x_k)} u_k}{ \log \frac{\delta}{R} + \frac{ u_k (x_k)}{n} }.
$$
Since
$$u_k(x_k)+C_1 \inf_{B_{\delta}(x_k)} u_k \geq u_k(\bar x_k)+C_1 \inf_\Omega u_k=\max_K u_k+C_1 \inf_\Omega u_k\geq 0$$
in view of \eqref{1253}, letting $k \to \infty$ we deduce
$$
\left( \frac{1-\epsilon}{n \omega_n} \int_{B_{R} (0)} h(p) e^U \right)^{\frac{1}{n-1}}
\leq
n \limsup_{k \to \infty} \left\{ 1 -\frac{\inf_{B_{\delta}(x_k)} u_k}{u_k (x_k)} \right\}
\leq
n \left( 1 + \frac{1}{C_1} \right) .
$$
Since this holds for each $R, \varepsilon >0$ we deduce that
$$
\frac{1}{n \omega_n} \int_{\mathbb{R}^n } h(p) e^U \leq \left[n ( 1 + \frac{1}{C_1}) \right]^{n-1}
< \left( \frac{n^2}{n-1} \right)^{n-1}
$$
in view of the assumption $C_1>n-1$. On the other hand, by \eqref{quantization} the left hand side is precisely $\left( \frac{n^2}{n-1} \right)^{n-1}$ and this is a contradiction.
\medskip \noindent
\underline{\bf{Case (ii)}}: $h_k (\bar \mu_k y+\bar x_k) e^{\bar U_k} \rightharpoonup \beta_0 \delta_0+\displaystyle \sum_{i=1}^I \beta_i \delta_{p_i}$ weakly in the sense of measures in $\mathbb{R}^n$, for some $\beta_i \geq n^n \omega_n$, $i \in \{0,\ldots, I\}$, and distinct points $p_1,\ldots,p_I \in \mathbb{R}^n \setminus \{0\}$, and $\bar U_k \to -\infty$ locally uniformly in $\mathbb{R}^n \setminus\{ 0, p_1,\ldots,p_I\}$
\medskip \noindent If $I \geq 1$, w.l.o.g. assume that $p_1,\dots,p_I \notin \overline{B_1(0)}$. Since $\bar U_k \to -\infty$ locally uniformly in $\overline{B_1(0)} \setminus \{0\}$ and $\displaystyle \max_{B_1(0)} \bar U_k \to +\infty$ as $k \to +\infty$, we can find $a_k \to 0$ so that
\begin{equation} \label{903}
\bar U_k(a_k)=\displaystyle \max_{B_1(0)} \bar U_k \to +\infty
\end{equation}
as $k \to +\infty$. We now argue in a similar way as in case (i). Setting $x_k=\bar \mu_k a_k+\bar x_k$ and $\mu_k=e^{-\frac{u_k(x_k)}{n}}$, we have that $u_k(x_k)=\bar U_k(a_k)-n\log \bar \mu_k \geq \bar U_k(0)-n \log \bar \mu_k=u_k(\bar x_k)$ and
\begin{equation} \label{905}
\frac{\bar \mu_k}{\mu_k}=e^{\frac{\bar U_k(a_k)}{n}} \to +\infty
\end{equation}
as $k \to +\infty$ in view of \eqref{903}. Setting
$$U_k(y)=u_k(\mu_k y+x_k)+n\log \mu_k,
$$
by \eqref{1058bis} and \eqref{903} we have that
\begin{eqnarray} \label{1020bis}
&& \limsup_{k \to +\infty} \int_{B_{R \frac{\bar \mu_k}{\mu_k}}(- \frac{\bar \mu_k}{\mu_k} a_k)} e^{U_k}
\leq \frac{1}{a} \big( \frac{n}{c_1} \big)^{n-1} \left( 1 + \frac{1}{C_1} \right)^{n-1} \\
\label{2146bis}
&& U_k(0)=\max_{B_{\frac{\bar \mu_k}{\mu_k}}(0)}U_k=0
\end{eqnarray}
for all $R>0$. Since $B_R(0) \subset B_{R \frac{\bar \mu_k}{\mu_k}}(- \frac{\bar \mu_k}{\mu_k} a_k) $ for all $k$ large in view of \eqref{905} and $\displaystyle \lim_{k\to +\infty}a_k=0$, by \eqref{905}-\eqref{2146bis} and elliptic estimates \cite{Dib,Tol} we have that $U_k \to U $ in $C^1_{loc}(\mathbb{R}^n)$ as $k\to +\infty$, where $U$ satisfies \eqref{limitpb}-\eqref{quantization}. We now proceed exactly as in case (i) to reach a contradiction. The proof is complete.
\end{proof}
\begin{oss} \label{commento} When $n=2$, the ``sup+inf" inequality was first derived by Shafrir~\cite{Shafrir} through an isoperimetric argument. It becomes clear in \cite{Rob}, when dealing with a fourth-order exponential PDE in $\mathbb{R}^4$, that the main point comes from the linear theory, which there allows one to avoid the extra work needed in our framework. For instance, in the two dimensional case,
inequality \eqref{0617bis} is an easy consequence of the Green representation formula:
given a solution $u$ to $-\Delta u=f$ in a domain containing $B_1(0)$, we can use the fundamental solution of the Laplacian to obtain
$$u(x)-\inf_{B_1(0)} u \geq -\frac{1}{2\pi}\int_{B_1(0)} \log \frac{|x-y|}{||x|y-\frac{x}{|x|}|} f(y) \qquad \forall \ x \in B_1(0),$$
which through an integration by parts gives
$$ u(0)-\inf_{\Omega} u \geq -\frac{1}{2\pi}\int_{B_1(0)} \log |y| f(y) = \frac{1}{2\pi} \int_0^1 [\int_{B_t(0)} f] \frac{dt}{t}.$$
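Indeed, setting $F(t)=\int_{B_t(0)} f$ one has $\int_{B_1(0)} \log |y| \, f(y)\, dy=\int_0^1 \log t \; F'(t)\, dt$, and an integration by parts yields $-\int_0^1 \log t \; F'(t)\, dt=\int_0^1 F(t)\, \frac{dt}{t}$, the boundary terms vanishing since $\log 1=0$ and $F(t)=O(t^2)$ as $t \to 0$.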
This linear argument also provides the optimal constant $c_1=\frac{1}{2\pi}$, which can be exploited to simplify the proof of Theorem \ref{711} as follows. The estimates~\eqref{1224} re-writes as
\begin{eqnarray} \label{1907}
\limsup_{k \to +\infty} \int_{B_R(0)} h_k(\bar \mu_ky+\bar x_k) e^{\bar U_k} &=&\limsup_{k \to +\infty} \int_{B_{R\bar \mu_k}(\bar x_k)} h_k e^{u_k}
\leq \left[\frac{n}{c_1}(1+\frac{1}{C_1})\right]^{n-1}\\
& < & \Big[ \frac{n^2}{ c_1 (n-1)} \Big]^{n-1} \nonumber
\end{eqnarray}
for all $R>0$ when $C_1 > n-1$. Since $c_1 = \frac{1}{2 \pi}$ and $[\frac{n^2}{ c_1 (n-1)}]^{n-1}=8\pi$ when $n =2$, in case (i) of the above proof we deduce that $\displaystyle \int_{\mathbb{R}^2} h(p) e^{\bar U} <8\pi$, in contrast with the quantization property $\int_{\mathbb{R}^2} h(p) e^{U}=8\pi$ for every solution $U$ of \eqref{Limiting2}. Assuming w.l.o.g. $p_1,\dots,p_I \notin \overline{B_1(0)}$ if $I\geq 1$, in case (ii) of the above proof we deduce from \eqref{1907} with $R=1$ that $\beta_0 <8\pi$, in contrast with the lower estimate $\beta_0\geq 8\pi$ coming from \eqref{eq:Peso1} and \eqref{1718} when $n=2$. Therefore, the proof of Theorem \ref{711} in dimension two becomes considerably simpler.
\medskip
When $n \geq 3$ Green's representation formula is not available for $\Delta_n$ and \eqref{0617bis} does hold \cite{KiMa} with some constant $0<c_1 \leq (n \omega_n)^{-\frac{1}{n-1}}$. Since $c_1$ is in general strictly below the optimal one $(n \omega_n)^{-\frac{1}{n-1}}$, we need to fill the gap thanks to the exponential form of the nonlinearity through a blow-up approach. With this strategy a comparison argument with the radial case is exploited, since in the radial context inequality \eqref{0617bis} does hold with optimal constant $c_1=(n \omega_n)^{-\frac{1}{n-1}}$ thanks to Lemma \ref{radial}.
\end{oss}
As a consequence of the $\sup+\inf$ estimates in Theorem \ref{711}, we deduce the following useful decay estimate:
\begin{cor} \label{decay} Let $u_k$ be a sequence of solutions to \eqref{758}, satisfying \eqref{eq:Peso1} with $h_k \geq \epsilon_0 >0$ in $B_{4r_0}(x_k) \subset \Omega$ and
\begin{equation} \label{1311}
|x-x_k|^n e^{u_k} \leq C \qquad \hbox{in }B_{2 b_k}(x_k) \setminus B_{a_k}(x_k)
\end{equation}
for $0<2a_k< b_k \leq 2 r_0$. Then, there exist $\alpha,C>0$ such that
\begin{equation}\label{1315}
u_k \leq C-\frac{\alpha}{n} u_k(x_k)-(n+\alpha)\log |x-x_k|
\end{equation}
for all $2a_k \leq |x-x_k| \leq b_k$. In particular, if $e^{-\frac{u_k(x_k)}{n}}=o(a_k)$ as $k \to +\infty$ we have that
\begin{equation} \label{1703}
\lim_{k \to +\infty} \int_{B_{b_k}(x_k) \setminus B_{2a_k}(x_k)} h_ke^{u_k} =0.
\end{equation}
\end{cor}
\begin{proof} Letting $V_k(y)=u_k(ry+x_k)+n\log r$ for any $0<r\leq b_k$, we have that $-\Delta_n V_k=h_k(ry+x_k) e^{V_k}$ does hold in $\Omega_k=\frac{\Omega-x_k}{r}$ and \eqref{1311} implies that
\begin{equation} \label{1311 1}
\sup_{B_2(0) \setminus B_{\frac{1}{2}}(0)} |y|^n e^{V_k}\leq C <+\infty
\end{equation}
for all $2a_k \leq r\leq b_k$. Since $V_k$ is uniformly bounded from above in $B_2(0) \setminus B_{\frac{1}{2}}(0)$ in view of \eqref{1311 1}, by the Harnack inequality \cite{Ser1,Tru} it follows that there exist $C>0$ and $C_0 \in (0,1]$ so that
\begin{equation} \label{1539}
C_0 \sup_{|y|=1}V_k\leq \inf_{|y|=1} V_k+C
\end{equation}
for all $2a_k\leq r \leq b_k$.
\medskip \noindent Up to a subsequence, assume that $\displaystyle \lim_{k \to +\infty} x_k =x_0$. By assumption we have that $h_k(ry+x_k) \to h(ry+x_0)\geq \epsilon_0>0$ in $C_{loc}(B_1(0))$ as $k \to +\infty$ for all $0<r\le 2 r_0$. For any given $C_1>n-1$, by Theorem \ref{711intro} applied to $V_k$ in $B_1(0)$ with $K=\{0\}$ we obtain the existence of $C_2>0$ so that
\begin{equation} \label{1546}
V_k(0)+C_1 \inf_{B_1(0)} V_k= V_k(0)+C_1 \inf_{|y|=1} V_k \leq C_2
\end{equation}
does hold for all $k$ and all $0<r\leq 2 r_0$. Inserting \eqref{1546} into \eqref{1539} we deduce that
$$\sup_{|y|=1}V_k \leq C-\frac{\alpha}{n} V_k(0)$$
for all $2a_k\leq r \leq b_k$, with $\alpha=\frac{n}{C_0 C_1}>0$ and some $C>0$, which re-writes in terms of $u_k$ as \eqref{1315}. In particular, by \eqref{1315} we deduce that
$$0\leq \int_{2a_k\leq |x-x_k|\leq b_k} h_ke^{u_k} \leq C e^{-\frac{\alpha}{n}u_k(x_k)}\int_{2a_k\leq |x-x_k|\leq b_k} \frac{dx}{|x-x_k|^{n+\alpha}}=\frac{C n \omega_n}{\alpha 2^{\alpha}} [a_k e^{\frac{u_k(x_k)}{n}} ]^{-\alpha}\to 0$$
provided $e^{-\frac{u_k(x_k)}{n}}=o(a_k)$ as $k \to +\infty$, in view of \eqref{eq:Peso1} and $B_{4r_0}(x_k) \subset \Omega$.
\end{proof}
\section{The case of isolated blow-up} \label{sec:Simple}
\noindent
\noindent The following basic description of the blow-up mechanism is very well known, see \cite{LiSh} in the two-dimensional case and for example \cite{DHR} in a related higher-dimensional context, and is the starting point for performing a more refined asymptotic analysis. For reader's convenience its proof is reported in the appendix.
\begin{thm} \label{352}
Let $u_k$ be a sequence of solutions to \eqref{758} which satisfies \eqref{758bis} and
\begin{equation}\label{824}
h_k e^{u_k} \rightharpoonup \beta \delta_0 \hbox{ weakly in the sense of measures in }B_{3\delta}(0) \subset \Omega \end{equation}
for some $\beta>0$ as $k \to \infty$. Assuming \eqref{eq:Peso1}, then $h(0)>0$ and, up to a subsequence, we can find a finite number of points $x_k^1,\dots,x_k^N$ so that for all $i \not= j$
\begin{eqnarray}
&& |x_k^i| +\mu_k^i+ \frac{\mu_k^i+\mu_k^j}{|x_k^i-x_k^j|}\to 0 \label{339}\\
&& u_k(\mu_k^i y+x_k^i)+n \log \mu_k^i \to U(y) \hbox{ in }C^1_{loc}(\mathbb{R}^n) \label{340}
\end{eqnarray}
as $k \to +\infty$ and
\begin{eqnarray}
\min\{ |x-x_k^1|^n,\dots,|x-x_k^N|^n \} e^{u_k} \leq C \hbox{ in }B_{2 \delta}(0) \label{341}
\end{eqnarray}
for all $k$ and some $C>0$, where $U$ is given by \eqref{eq:Bubble} with $p=0$ and
\begin{equation} \label{828}
u_k(x_k^i)=\max_{B_{\mu_k^i}(x_k^i)} u_k, \quad \mu_k^i=e^{-\frac{u_k(x_k^i)}{n}}.
\end{equation}
\end{thm}
In this section we consider the case of an ``isolated" blow-up point, corresponding to having $N=1$ in Theorem \ref{352}, namely
\begin{equation} \label{simple}
|x-x_k|^n e^{u_k} \leq C \hbox{ in }B_{2 \delta }(0)
\end{equation}
for all $k$ and some $C>0$, where $x_k$ simply denotes $x_k^1$. The following result, corresponding to Theorem \ref{thm:Main} for the case of an isolated blow-up, extends the analogous two-dimensional one \cite[Prop. $2$]{LiSh} to $n \geq 2$.
\begin{thm} \label{700}
Let $u_k$ be a sequence of solutions to \eqref{758} which satisfies \eqref{758bis}, \eqref{eq:Peso1}, \eqref{824} and \eqref{simple}. Then
$$\beta=c_n \omega_n.$$
\end{thm}
\begin{proof} First, notice that $x_k \to 0$ as $k \to +\infty$ and $h(0)>0$ in view of Theorem \ref{352}. Since $h \in C(\Omega)$ take $0<r_0 \leq \frac{\delta}{2}$ and $\epsilon_0>0$ so that $h \geq 2 \epsilon_0$ for all $y \in B_{5r_0}(0)$. By \eqref{eq:Peso1} we then deduce that $h_k \geq \epsilon_0>0$ in $B_{4r_0}(x_k)\subset \Omega$. Letting $\mu_k=e^{-\frac{u_k(x_k)}{n}}$ and $U_k=u_k(\mu_ky+x_k)+n \log \mu_k$, there holds
$$ \lim_{k \to +\infty} \int_{B_{R\mu_k}(x_k)} h_k e^{u_k} dx=\lim_{k \to +\infty} \int_{B_R(0)}h_k(\mu_ky+x_k) e^{U_k} dy = \int_{B_R(0)} h(0)e^U dy$$
in view of \eqref{eq:Peso1} and \eqref{340}. Therefore we can construct $R_k \to +\infty$ so that $R_k \mu_k \leq r_0$ and
\begin{equation} \label{1749}
\lim_{k \to +\infty} \int_{B_{R_k \mu_k}(x_k)} h_k e^{u_k}dx=c_n \omega_n
\end{equation}
in view of \eqref{quantization} with $p=0$. Since \eqref{simple} implies the validity of \eqref{1311} with $b_k=r_0$ and $a_k=\frac{R_k \mu_k}{2}$, we can apply Corollary \ref{decay} to deduce by \eqref{1703} that
\begin{equation} \label{1752}
\lim_{k \to +\infty} \int_{B_{r_0}(x_k) \setminus B_{R_k \mu_k}(x_k)} h_ke^{u_k} =0
\end{equation}
in view of $\mu_k=e^{-\frac{u_k(x_k)}{n}}=o(a_k)$ as $k \to +\infty$. Since by the Concentration-Compactness Principle we have that $u_k \to -\infty$ locally uniformly in $B_{3\delta}(0) \setminus \{0\}$ as $k \to +\infty$, we finally deduce that $\beta$ in \eqref{824} satisfies
$$\beta=\lim_{k \to +\infty} \int_{B_{r_0}(x_k)} h_k e^{u_k}=c_n \omega_n$$
in view of \eqref{1749}-\eqref{1752}, and the proof is complete.
\end{proof}
\section{General quantization result} \label{sec:Quantization}
\noindent In order to address quantization issues in the general case where $N\geq 2$ in Theorem \ref{352}, in the following result let us consider a more general situation.
\begin{thm} \label{3522}
Let $u_k$ be a sequence of solutions to \eqref{758} which satisfies \eqref{758bis} and \eqref{824}. Assume \eqref{eq:Peso1} and the existence of a finite number of points $x_k^1,\dots,x_k^N$ and radii $r_k^1,\dots,r_k^N$ so that
for all $i \not= j$
\begin{eqnarray} \label{858}
|x_k^i| +\frac{\mu_k^i}{r_k^i} +\frac{r_k^i+r_k^j}{|x_k^i-x_k^j|}\to 0
\end{eqnarray}
as $k\to +\infty$, where $\mu_k^i=e^{-\frac{u_k(x_k^i)}{n}}$, and
\begin{eqnarray}
\min\{ |x-x_k^1|^n,\dots,|x-x_k^N|^n \} e^{u_k} \leq C \hbox{ in }B_{2\delta}(0) \setminus \bigcup_{i=1}^N B_{r_k^i}(x_k^i) \label{811}
\end{eqnarray}
for all $k$ and some $C>0$. If $\displaystyle \lim_{k \to +\infty} \int_{B_{2 r_k^i}(x_k^i)} h_k e^{u_k}=\beta_i$ for all $i=1,\dots,N$, then
\begin{equation} \label{tquant}
\lim_{k \to +\infty} \int_{B_{\frac{\delta}{2}}(0)} h_k e^{u_k}=\sum_{i=1}^N \beta_i.
\end{equation}
\end{thm}
\begin{proof}
First of all, by applying the Concentration-Compactness Principle to $u_k(r_k^i y+x_k^i)+n \log r_k^i$ we obtain that $\beta_i>0$, $i=1,\dots,N$, in view of $\frac{\mu_k^i}{r_k^i}\to 0$ as $k \to +\infty$. Since $h(0)>0$ by Theorem \ref{352} and $h \in C(\Omega)$, we can find $0<r_0 \leq \frac{\delta}{2}$ so that $h_k \geq \epsilon_0>0$ in $B_{4r_0}(x_k)\subset \Omega$ in view of \eqref{eq:Peso1}. The case $N=1$ follows the same lines as in Theorem \ref{700}: since \eqref{858}-\eqref{811} imply the validity of \eqref{1311} with $b_k=r_0$ and $a_k=r_k$, by Corollary \ref{decay} we get that
$$\lim_{k \to +\infty} \int_{B_{r_0}(x_k) \setminus B_{2r_k}(x_k)} h_ke^{u_k} =0$$
in view of $\mu_k=o(r_k)$. Since $u_k \to -\infty$ locally uniformly in $B_{3\delta}(0) \setminus \{0\}$ as $k \to +\infty$ in view of the Concentration-Compactness Principle, \eqref{tquant} is then established when $N=1$.
\medskip \noindent We proceed by strong induction in $N$ and assume the validity of Theorem \ref{3522} for a number of points $\leq N-1$. Given $x_k^1,\dots,x_k^N$, define their minimal distance as $d_k=\min\{ |x_k^i-x_k^j|: \: i,j=1,\dots,N,\: i \not=j \}$. Since $B_{\frac{d_k}{2}}(x_k^i) \cap B_{\frac{d_k}{2}}(x_k^j)=\emptyset$ for $i \not= j$, we deduce that $|x-x_k^i|\leq |x-x_k^j|$ in $B_{\frac{d_k}{2}}(x_k^i)$ for all $i \not= j$ and then \eqref{811} gets rewritten as $ |x-x_k^i|^n e^{u_k} \leq C$ in $B_{\frac{d_k}{2}}(x_k^i) \setminus B_{r_k^i}(x_k^i)$ for all $i=1,\dots,N$. By \eqref{858} and Corollary \ref{decay} with $b_k=\frac{d_k}{4}$ and $a_k=r_k^i$ we deduce that $\displaystyle \int_{B_{\frac{d_k}{4}}(x_k^i) \setminus B_{2 r_k^i}(x_k^i)} h_k e^{u_k} \to 0$ as $k \to +\infty$ and then
\begin{equation} \label{1038}
\lim_{k \to +\infty} \int_{B_{\frac{d_k}{4}}(x_k^i)} h_k e^{u_k}=\beta_i \qquad \forall \: i=1,\dots,N.
\end{equation}
Up to relabelling, assume that $d_k=|x_k^1-x_k^2|$ and consider the following set of indices
$$I=\{i=1,\dots,N: \: |x_k^i-x_k^1| \leq C d_k \hbox{ for some }C>0\}$$
of cardinality $N_0 \in [2,N]$ since $1,2 \in I$ by construction. Up to a subsequence, we can assume that
\begin{equation} \label{1314}
\frac{|x_k^j-x_k^i|}{d_k} \to +\infty \hbox{ as }k\to +\infty
\end{equation}
for all $i \in I$ and $j \notin I$. Letting $\tilde u_k(y)=u_k(d_ky+x_k^1)+n \log d_k$, notice that
\begin{equation} \label{1845}
\tilde u_k (\frac{x_k^i-x_k^1}{d_k})=u_k(x_k^i)+n \log d_k=n \log \frac{d_k}{\mu_k^i} \to +\infty
\end{equation}
as $k \to +\infty$ in view of \eqref{858}, and \eqref{811} re-writes as
\begin{equation} \label{1842}
\min\{ |y-\frac{x_k^i-x_k^1}{d_k}|^n: i \in I \} e^{\tilde u_k} \leq C_R \hbox{ uniformly in } B_R(0) \setminus \bigcup_{i\in I} B_{\frac{r_k^i}{d_k}}(\frac{x_k^i-x_k^1}{d_k})
\end{equation}
for any $R>0$ thanks to \eqref{1314}. Since $\frac{r_k^i}{d_k} \to 0$ as $k \to +\infty$ in view of \eqref{858}, by \eqref{1845}-\eqref{1842} and the Concentration-Compactness Principle we deduce that
$$\tilde u_k \to -\infty \hbox{ uniformly on } B_R(0) \setminus \bigcup_{i\in I} B_{\frac{1}{4}}(\frac{x_k^i-x_k^1}{d_k})$$
as $k \to +\infty$ and then
\begin{equation} \label{1041}
\lim_{k \to +\infty} \int_{B_{Rd_k}(x_k^1) \setminus \displaystyle \bigcup_{i \in I} B_{\frac{d_k}{4}}(x_k^i)} h_k e^{u_k}=0.
\end{equation}
By \eqref{1038} and \eqref{1041} we finally deduce that
$$ \lim_{k \to +\infty} \int_{B_{Rd_k}(x_k^1)} h_k e^{u_k}=\sum_{i \in I} \beta_i$$
since the balls $B_{\frac{d_k}{4}}(x_k^i)$, $i \in I$, are disjoint.
\medskip \noindent Set $x_k'=x_k^1$, $r_k'=\frac{Rd_k}{2}$ and $\beta'=\displaystyle \sum_{i \in I}\beta_i$. We apply the inductive assumption with the $N-N_0+1$ points $x_k'$ and $\{ x_k^j\}_{j \notin I}$, radii $r_k'$ and $\{r_k^j\}_{j \notin I}$, masses $\beta'$ and $\{\beta_j\}_{j \notin I}$ thanks to the following reduced form of assumption \eqref{811}:
$$\min\{ |x-x_k'|^n,\: |x-x_k^j|^n: \: j \notin I \} e^{u_k} \leq C \hbox{ in }B_{2\delta}(0) \setminus [B_{r_k'}(x_k') \cup \bigcup_{j \notin I} B_{r_k^j}(x_k^j)]$$
provided $R$ is taken sufficiently large. It finally shows the validity of \eqref{tquant} for the index $N$, and the proof is achieved by induction.
\end{proof}
\medskip \noindent We are now in position to establish Theorem \ref{thm:Main} in full generality.
\begin{proof} We first apply Theorem \ref{352} to have a first blow-up description of $u_k$. We have that $\beta_p=N c_n \omega_n$ for all $p \in \mathcal S$ in view of Theorem \ref{3522}, provided we can construct radii $r_k^i$, $i=1,\dots,N$, satisfying \eqref{858} and
\begin{equation} \label{1615}
\lim_{k \to +\infty} \int_{B_{2 r_k^i}(x_k^i)} h_k e^{u_k}=c_n \omega_n.
\end{equation}
Since by \eqref{eq:Peso1} and \eqref{340} we deduce that
\begin{equation} \label{1619}
\int_{B_{R \mu_k^i}(x_k^i)} h_k e^{u_k} \to \int_{B_R(0)}h(p) e^U
\end{equation}
as $k \to +\infty$, by \eqref{quantization} for all $i=1,\dots N$ we can find $R_k^i \to +\infty$ so that $R_k^i \mu_k^i \leq \delta$ and
\begin{equation} \label{1931}
\int_{B_{2 R_k^i \mu_k^i}(x_k^i)} h_k e^{u_k} \to c_n \omega_n.
\end{equation}
If $N=1$ we simply set $r_k=R_k \mu_k$ (omitting the index $i=1$). When $N\geq 2$, by \eqref{339} we deduce that $\mu_k^i=o(d_k^i)$, where $d_k^i=\min \{|x_k^j-x_k^i|: \: j \not=i \}$, and we can set $r_k^i= \min\{ \sqrt{d_k^i \mu_k^i}, R_k^i \mu_k^i\}$ in this case. By construction the radii $r_k^i$ satisfy \eqref{858} and \eqref{1615} easily follows by \eqref{quantization} and \eqref{1619} and \eqref{1931}, in view of the chain of inequalities
$$\int_{B_{R \mu_k^i}(x_k^i)} h_k e^{u_k} \leq \int_{B_{2 r_k^i}(x_k^i)} h_k e^{u_k} \leq \int_{B_{2 R_k^i \mu_k^i}(x_k^i)} h_k e^{u_k}$$
for all $R>0$ and $k$ large (depending on $R$).
\end{proof}
\section{Appendix} \label{Appendix}
\noindent
For the sake of completeness, we give below the proof of Theorem \ref{352}.
\begin{proof}
By the Concentration-Compactness Principle and \eqref{824} we know that
\begin{equation} \label{0930}
\max_{\overline{B_{2 \delta}(0)}}u_k \to +\infty,\qquad u_k \to -\infty \hbox{ locally uniformly in }\overline{B_{2\delta}(0)} \setminus \{0\}.
\end{equation}
Let $x_k=x_k^1$ be the sequence of maximum points of $u_k$ in $\overline{B_{2\delta}(0)}$: $u_k(x_k)=\displaystyle \max_{\overline{B_{2\delta}(0)}}u_k$. If \eqref{341} already holds, the result is established by simply taking $N=1$ and $\mu_k=\mu_k^1$ according to \eqref{828}, since \eqref{339} follows from \eqref{0930} and the proof of \eqref{340} is classical and independent of the validity of \eqref{341}. Indeed, $U_k(y)=u_k(\mu_k y+x_k)+n\log \mu_k$ satisfies $U_k(y)\leq U_k(0)=0$ in $B_{\frac{2\delta}{\mu_k}}(0)$ and
\begin{equation} \label{1040}
-\Delta_n U_k=h_k(\mu_k y+x_k) e^{U_k} \hbox{ in }\frac{\Omega -x_k}{\mu_k},\qquad \int_{\frac{\Omega -x_k}{\mu_k} }e^{U_k}= \int_{\Omega} e^{u_k}.
\end{equation}
Since $\frac{\Omega -x_k}{\mu_k} \to \mathbb{R}^n$ as $k \to +\infty$ in view of \eqref{339} and $B_{3\delta}(0) \subset \Omega$, by \eqref{758bis}, \eqref{eq:Peso1} and elliptic estimates \cite{Dib,Tol} we deduce that, up to a subsequence, $U_k \to U$ in $C^1_{loc}(\mathbb{R}^n)$, where $U$ solves \eqref{limitpb} with $p=0$. Notice that $h(0)=0$ would imply that $U$ is an upper-bounded $n-$harmonic function in $\mathbb{R}^n$ and therefore a constant function (see for instance Corollary 6.11 in \cite{HKM}), contradicting $\int_{\mathbb{R}^n} e^U<\infty$. As a consequence, we deduce that $h(0)>0$ and $U$ is the unique solution of \eqref{limitpb} given by \eqref{eq:Bubble} with $p=0$.
\medskip \noindent Assume that \eqref{341} does not hold with $x_k=x_k^1$ and proceed by induction. Suppose we have found $x_k^1,\dots,x_k^l$ so that \eqref{339}-\eqref{340} and \eqref{828} hold. If \eqref{341} is not valid for $x_k^1,\dots,x_k^l$, in view of \eqref{0930} we construct $\bar x_k \in B_{2\delta}(0)$ as
\begin{equation} \label{0957}
u_k(\bar x_k)+n \log \min_{i=1,\dots,l} |\bar x_k-x_k^i| =\max_{x \in \overline{B_{2\delta}(0)}} [u_k(x)+n \log \min_{i=1,\dots,l} |x-x_k^i|] \to +\infty
\end{equation}
and have that \eqref{339} is still valid for $x_k^1,\dots,x_k^l,\bar x_k$ with $\bar \mu_k=e^{-\frac{u_k(\bar x_k)}{n}}$, as follows from \eqref{340} for $i=1,\dots,l$ and from \eqref{0957}.
\medskip
Let us argue in a way similar to the proof of Theorem \ref{711}. Observe that
$\displaystyle \min_{i=1,\dots,l} |\bar x_k+\bar \mu_k y-x_k^i| \geq \frac{1}{2} \displaystyle \min_{i=1,\dots,l} |\bar x_k-x_k^i|$ and
$$u_k(\bar \mu_k y+\bar x_k) +n \log \bar \mu_k \leq n \log \min_{i=1,\dots,l} |\bar x_k-x_k^i|-n \log \min_{i=1,\dots,l} |\bar \mu_k y+\bar x_k-x_k^i| \leq n \log 2$$
for $|y|\leq R_k=\frac{1}{2 \bar \mu_k} \displaystyle \min_{i=1,\dots,l} |\bar x_k-x_k^i|$ in view of \eqref{0957}. Hence $\bar U_k(y)=u_k(\bar \mu_k y+\bar x_k) +n \log \bar \mu_k $ satisfies the analogue of \eqref{1040} with $\bar U_k \leq n \log 2$ in $B_{R_k}(0)$. Since $R_k \to +\infty$ in view of \eqref{339} for $x_k^1,\dots,x_k^l,\bar x_k$, up to a subsequence, by elliptic estimates \cite{Dib,Tol} $\bar U_k \to \bar U$ in $C^1_{loc}(\mathbb{R}^n)$, where $\bar U$ is a solution of \eqref{Limitingn} with $p=0$. By the classification result \cite{Esp} we know that $\bar U=U_{a,\lambda}$ for some $(a,\lambda) \in \mathbb R^n \times (0,\infty)$. Since $\bar U$ is radially strictly decreasing about $a$, there exists a sequence $a_k \to a$ as $k \to +\infty$ so that
\begin{equation}\label{1058}
\bar U_k(a_k)=\max_{B_R(a_k)}\bar U_k
\end{equation}
for all $R>0$ and $k$ large (depending on $R$). Setting $x_k^{l+1}=\bar \mu_k a_k+\bar x_k$, since
$\mu_k^{l+1}=e^{-\frac{u_k(x_k^{l+1})}{n}}$ satisfies
\begin{equation} \label{1152}
\frac{\bar \mu_k}{\mu_k^{l+1}}=e^{\frac{\bar U_k(a_k)}{n}} \to
e^{\frac{\max_{\mathbb{R}^n} \bar U}{n}}
\end{equation}
as $k \to +\infty$, we deduce that \eqref{339} is valid for $x_k^1,\dots,x_k^{l+1}$ and \eqref{828} follows from \eqref{1058} with some $R> \displaystyle e^{- \frac{\max_{\mathbb{R}^n} \bar U}{n}}$. Since $U_k^{l+1}=u_k(\mu_k^{l+1}y+x_k^{l+1})+n \log \mu_k^{l+1}$ satisfies $U_k^{l+1}(y)\leq U_k^{l+1}(0)=0$ in $B_{R \frac{\bar \mu_k}{\mu_k^{l+1}}}(0)$ in view of \eqref{1058}, by
\eqref{758bis}, \eqref{eq:Peso1}, \eqref{1152} and elliptic estimates \cite{Dib,Tol} we deduce that, up to a subsequence, $U_k^{l+1} \to U$ in $C^1_{loc}(\mathbb{R}^n)$, where $U$ is the unique solution of \eqref{limitpb} given by \eqref{eq:Bubble} with $p=0$, establishing the validity of \eqref{340} for $i=l+1$ too.\\
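\medskip \noindent For the reader's convenience, notice that the first identity in \eqref{1152} is a direct consequence of the definitions of $\bar \mu_k$, $\mu_k^{l+1}$ and $\bar U_k$:
$$e^{\frac{\bar U_k(a_k)}{n}}=e^{\frac{u_k(\bar \mu_k a_k+\bar x_k)}{n}}\, \bar \mu_k=e^{\frac{u_k(x_k^{l+1})}{n}}\, \bar \mu_k=\frac{\bar \mu_k}{\mu_k^{l+1}}.$$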
\medskip \noindent Since \eqref{339}-\eqref{340} and \eqref{828} for $x_k^1,\dots,x_k^l$ imply
\begin{eqnarray*}
\lim_{k \to +\infty} \int_{B_{3\delta}(0)} h_k e^{u_k} &\geq& \lim_{R \to +\infty} \lim_{k \to +\infty} \sum_{i=1}^l \int_{B_{R \mu_k^i}(x_k^i)} h_k e^{u_k}
= l c_n \omega_n
\end{eqnarray*}
thanks to \eqref{eq:Peso1}, \eqref{quantization} and \eqref{340}, in view of \eqref{824} the inductive process must stop after a finite number of iterations, say $N$, yielding the validity of Theorem \ref{352} with $x_k^1,\dots,x_k^N$.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
Because of its proximity, youth, richness, and location in the
northern hemisphere, the Pleiades has long been a favorite target of
observers. The Pleiades was one of the first open clusters to have
members identified via their common proper motion \citep{trumpler21},
and the cluster has since then been the subject of more than a dozen
proper motion studies. Some of the earliest photoelectric photometry
was for members of the Pleiades \citep{cummings21}, and the cluster
has been the subject of dozens of papers providing additional optical
photometry of its members. The youth and nearness of the Pleiades
make it a particularly attractive target for identifying its
substellar population, and it was the first open cluster studied for
those purposes \citep{jameson89,stauffer89}. More than 20 papers
have been subsequently published, identifying additional substellar
candidate members of the Pleiades or studying their properties.
We have three primary goals for this paper. First, while
extensive optical photometry for Pleiades members is available in the
literature, photometry in the near and mid-IR is relatively spotty.
We will remedy this situation by using new 2MASS $JHK_s$ and Spitzer IRAC
photometry for a large number of Pleiades members. We will use these
data to help identify cluster non-members and to define the
single-star locus in color-magnitude diagrams for stars of 100 Myr
age. Second, we will use our new IR imaging photometry of the center
of the Pleiades to identify a new set of candidate substellar members
of the cluster, extending down to stars expected to have masses of
order 0.04 M$_{\sun}$. Third, we will use the IRAC data to briefly
comment on the presence of circumstellar debris disks in the Pleiades
and the interaction of the Pleiades stars with the molecular cloud
that is currently passing through the cluster.
In order to make best use of the IR imaging data,
we will begin with a necessary digression. As noted
above, more than a dozen proper motion surveys of the Pleiades
have been made in order to identify cluster members. However,
no single catalog of the cluster has been published which
attempts to collect all of those candidate members in a single
table and cross-identify those stars. Another problem is that
while there have been many papers devoted to providing optical
photometry of cluster members, that photometry has been
bewilderingly inhomogeneous in terms of the number of
photometric systems used. In Sec.\ 3 and in the Appendix,
we describe our efforts to create a reasonably complete catalog
of candidate Pleiades members and to provide optical photometry
transformed to the best of our ability onto a
single system.
\section{New Observational Data}
\label{sec:observations}
\subsection{2MASS ``6x" Imaging of the Pleiades}
During the final months of Two Micron All Sky Survey (2MASS; \citet{skrutskie06})
operations, a series of special observations were carried out that
employed exposures six times longer than those used for the primary survey.
These so-called ``6x" observations targeted 30 regions of scientific interest
including a 3 deg $\times$ 2 deg area centered on the Pleiades cluster. The 2MASS
6x data were reduced using an automated processing pipeline similar to that
used for the main survey data, and a calibrated 6x Image Atlas and extracted
6x Point and Extended Source Catalogs (6x-PSC and 6x-XSC) analogous to the
2MASS All-Sky Atlas, PSC and XSC have been released as part of the 2MASS
Extended Mission. A description of the content and formats of the 6x image
and catalog products, and details about the 6x observations and data reduction
are given by Cutri et al. (2006; section A3).
\footnote{http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html}
The 2MASS 6x Atlas and Catalogs
may be accessed via the on-line services of the NASA/IPAC Infrared Science
Archive (http://irsa.ipac.caltech.edu).
Figure 1 shows the area on the sky imaged by the 2MASS 6x observations
in the Pleiades field. The region was covered by two rows of scans, each
scan being one degree long (in declination) and 8.5' wide in right
ascension. Within each row, the scans overlap by approximately one arcminute
in right ascension. There are small gaps in coverage in the declination
boundary between the rows, and one complete scan in the southern row is
missing because the data in that scan did not meet the minimum required
photometric quality. The total area covered by the 6x Pleiades observations
is approximately 5.3 sq. degrees.
There are approximately 43,000 sources extracted from the 6x Pleiades
observations in the 2MASS 6x-PSC, and nearly 1,500 in the 6x-XSC. Because
there are at most about 1000 Pleiades members expected in this region, only
$\sim$2\% of the 6x-PSC sources are cluster members, and the rest are field stars
and background galaxies. The 6x-XSC objects are virtually all resolved
background galaxies. Near infrared color-magnitude and color-color diagrams
of the unresolved sources from the 2MASS 6x-PSC and of all sources in the 6x-XSC
from the Pleiades region are shown in Figures 2 and 3, respectively.
The extragalactic sources tend to be redder than most stars, and the galaxies
become relatively more numerous towards fainter magnitudes. Unresolved
galaxies dominate the point sources fainter than $K_s$ = 15.5 and redder
than $J-K_s$ = 1.2 mag.
The 2MASS 6x observations were conducted using the same freeze-frame scanning
technique used for the primary survey \citep{skrutskie06}. The longer
exposure times were achieved by increasing the ``READ2-READ1" integration to
7.8 sec from the 1.3 sec used for primary survey. However, the 51 ms ``READ1"
exposure time was not changed for the 6x observations. As a result,
there is an effective ``sensitivity gap" in the 8-11 mag region where objects
may be saturated in the 7.8 sec READ2-READ1 6x exposures, but too faint to
be detected in the 51 ms READ1 exposures. Because the sensitivity gap can
result in incompleteness and/or flux bias in the photometric overlap regime,
the near infrared photometry for sources brighter than J=11 mag in the 6x-PSC
was taken from the 2MASS All-Sky PSC during compilation of the catalog
of Pleiades candidate members presented in Table 2 (c.f. Section 3).
\subsection{Shallow IRAC Imaging}
Imaging of the Pleiades with Spitzer was obtained in April 2004
as part of a joint GTO program conducted by the IRAC instrument
team and the MIPS instrument team. Initial results of the MIPS
survey of the Pleiades have already been reported in
\citet{gorlova06}. The IRAC observations were obtained as two
astronomical observing requests (AORs). One of them was
centered near the cluster center, at RA=03h47m00.0s and
Dec=24d07m (2000), and consisted of a 12 row by 12 column map,
with ``frametimes" of 0.6 and 12.0 seconds and two dithers at
each map position. The map steps were 290$\arcsec$ in both
the column and row direction. The resultant map covers a region
of approximately one square degree, and a total integration time
per position of 24 sec over most of the map. The second AOR
used the same basic mapping parameters, except it was smaller (9
rows by 9 columns) and was instead centered to the northwest of the
cluster center at RA=03h44m36.0s and Dec=25d24m. A two-band
color image of the AOR covering the center of the Pleiades is
shown in Figure~\ref{fig:pleIRAC}. A pictorial guide to the
IRAC image providing Greek names for a few of the brightest
stars, and \citet{hertzsprung47} numbers for several stars
mentioned in Section 6 is provided in Figure~\ref{fig:cartoon}.
We began our analysis with the basic calibrated data (BCDs) from
the Spitzer pipeline, using the S13 version of the Spitzer Science
Center pipeline
software. Artifact mitigation and masking was done using the
IDL tools provided on the Spitzer contributed software website.
For each AOR, the artifact-corrected BCDs were combined into
single mosaics for each channel using the post-BCD ``MOPEX"
package \citep{makovoz05}. The mosaic images were constructed
with 1.22$\times$1.22 arcsecond pixels (i.e., approximately the
same pixel size as the native IRAC arrays).
We derived aperture photometry for stars present in these IRAC
mosaics using both APEX (a component of the MOPEX package) and
the ``phot" routine in DAOPHOT. In both cases, we used a 3 pixel
radius aperture and a sky annulus from 3 to 7 pixels (except
that for Channel 4, for the phot package we used a 2 pixel
radius aperture and a 2 to 6 pixel annulus because that provided
more reliable fluxes at low flux levels). We used the flux for
zero magnitude calibrations provided in the IRAC data handbook
(280.9, 179.7, 115.0 and 64.1 Jy for Ch 1 through Ch 4,
respectively), and the aperture corrections provided in the same
handbook (multiplicative flux correction factors of
1.124, 1.127, 1.143 and 1.584 for Ch 1-4, inclusive; the Ch4
correction factor is much bigger because it is for an aperture
radius of 2 rather than 3 pixels).
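
As an illustration of this calibration, the following minimal sketch converts an aperture flux into an IRAC magnitude using the flux-for-zero-magnitude values and aperture corrections quoted above; it assumes the aperture flux has already been converted to Jy, and the function name is ours.
\begin{verbatim}
import math

F0_JY = [280.9, 179.7, 115.0, 64.1]      # flux for zero magnitude, Ch 1-4 (Jy)
APCOR = [1.124, 1.127, 1.143, 1.584]     # multiplicative aperture corrections

def irac_mag(flux_jy, channel):
    # Aperture-correct the measured flux (assumed to be in Jy already)
    # and compare it with the zero-magnitude flux of the given channel.
    corrected = flux_jy * APCOR[channel - 1]
    return -2.5 * math.log10(corrected / F0_JY[channel - 1])

print(round(irac_mag(1.0e-3, 1), 2))     # a 1 mJy source in Ch 1 -> ~13.5 mag
\end{verbatim}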
Figure~\ref{fig:plecomp1} and Figure~\ref{fig:plecomp2} provide
two means to assess the accuracy of the IRAC photometry. The
first figure compares the aperture photometry from APEX to that
from phot, and shows that the two packages yield very similar
results when used in the same way. For this reason, we have
simply averaged the fluxes from the two packages to obtain our
final reported value. The second figure shows the difference
between the derived 3.6 and 4.5 $\mu$m\ magnitudes for Pleiades
members. Based on previous studies (e.g. \citet{allen04}),
we expected this difference
to be essentially zero for most stars, and the Pleiades data
corroborate that expectation. For [3.6]$<$10.5, the RMS
dispersion of the magnitude difference between the two channels
is 0.024 mag. Assuming that each channel has similar
uncertainties, this indicates an internal 1-$\sigma$ accuracy of
order 0.017 mag. The absolute calibration uncertainty for the
IRAC fluxes is currently estimated to be of order 0.02 mag.
Figure~\ref{fig:plecomp2} also shows that fainter than [3.6]=10.5
(spectral type later than about M0), the [3.6]$-$[4.5] color for
M dwarfs departs slightly from zero, becoming increasingly redder
to the limit of the data (about M6).
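
The internal single-channel accuracy quoted above follows from the measured dispersion of the [3.6]$-$[4.5] color under the stated assumption of equal (and independent) uncertainties in the two channels:
$$\sigma_{[3.6]}\simeq\sigma_{[4.5]}\simeq\frac{0.024}{\sqrt{2}}\simeq 0.017~{\rm mag}.$$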
\section{A Catalog of Pleiades Candidate Members}
\label{sec:catalog}
If one limits oneself to only stars visible with the naked eye,
it is easy to identify which stars are members of the Pleiades --
all of the stars within a degree of the cluster center that have
$V<$ 6 are indeed members. However, if one were to try to
identify the M dwarf stellar members of the cluster (roughly 14
$<V<$ 23), only of order 1\% of the stars towards the
cluster center are likely to be members, and it is much harder
to construct an uncontaminated catalog. The problem is
exacerbated by the fact that the Pleiades is old enough that
mass segregation through dynamical processes has occurred, and
therefore one has to survey a much larger region of the sky in
order to include all of the M dwarf members.
The other primary difficulty in constructing a comprehensive
member catalog for the Pleiades is that the pedigree of the
candidates varies greatly. For the best studied stars,
astrometric positions can be measured over temporal baselines
ranging up to a century or more, and the separation of cluster
members from field stars in a vector point diagram (VPD) can be
extremely good. In addition, accurate radial velocities and
other spectral indicators are available for essentially all of
the bright cluster members, and these further allow membership
assessment to be essentially definitive. Conversely, at the
faint end (for stars near the hydrogen burning mass limit in the
Pleiades), members are near the detection limit of the existing
wide-field photographic plates, and the errors on the proper
motions become correspondingly large, causing the separation of
cluster members from field stars in the VPD to become poor.
These stars are also sufficiently faint that spectra capable
of discriminating members from field dwarfs can only be
obtained with 8m class telescopes, and only a very small
fraction of the faint candidates have had such spectra obtained.
Therefore, any comprehensive catalog created for the Pleiades
will necessarily have stars ranging from certain members to
candidates for which very little is known, and where the
fraction of spurious candidate members increases to lower
masses.
In order to address the membership uncertainties and biases, we
have chosen a sliding scale for inclusion in our
catalog. For all stars, we require that the available photometry
yields location in color-color and color-magnitude diagrams
consistent with cluster membership. For the
stars with well-calibrated photoelectric photometry, this means
the star should not fall below the Pleiades single-star locus by
more than about 0.2 mag or above that locus by more than
about 1.0 mag (the expected displacement for a
hierarchical triple with three nearly equal mass components).
For stars with only photographic optical photometry, where the
1-$\sigma$ uncertainties are of order 0.1 to 0.2 mag, we
still require the star's photometry to be consistent with
membership, but the allowed displacements from the single star
locus are considerably larger. Where accurate radial
velocities are known, we require that the star be considered a
radial velocity member based on the paper where the radial
velocities were presented. Where stars have been previously
identified as non-members based on photometric or spectroscopic
indices, we adopt those conclusions.
Two other relevant pieces of information are sometimes available.
In some cases, individual proper motion membership probabilities
are provided by the various membership surveys. If no other
information is available, and if the membership probability for
a given candidate is less than 0.1, we exclude that star from
our final catalog. However, often a star appears in several
catalogs; if it appears in two or more proper motion
membership lists we include it in the final catalog even if P
$<$ 0.1 in one of those catalogs. Second, an entirely
different means to identify candidate Pleiades members is via
flare star surveys towards the cluster \citep{haro82,jones81}.
A star with a formally low membership probability in one catalog
but whose photometry is consistent with membership and that was
identified as a flare star is retained in our catalog.
Further details of the catalog construction are provided in the
appendix, as are details of the means by which the $B$, $V$, and
$I$ photometry have been homogenized. A full discussion and listing
of all of the papers from which we have extracted astrometric and
photometric information is also provided in the appendix.
Here we simply provide a
very brief description of the inputs to the catalog.
We include candidate cluster members from the following proper
motion surveys: \citet{trumpler21}, \citet{hertzsprung47},
\citet{jones81}, Pels and Lub -- as reported in
\citet{vanlee86}, \citet{stauffer91}, \citet{artyukhina69},
\citet{hambly93}, \citet{pinfield00}, \citet{adams01} and \citet{deacon04}.
Another important compilation which provides the initial
identification of a significant number of low mass cluster
members is the flare star catalog of \citet{haro82}. Table 1
provides a brief synopsis of the characteristics of the
candidate member catalogs from these papers. The Trumpler paper
is listed twice in Table 1 because there are two membership
surveys included in that paper, with differing spatial coverages
and different limiting magnitudes.
In our final catalog, we have attempted to follow the standard
naming convention whereby the primary name
is derived from the paper where it was first identified
as a cluster member. An exception to this arises for stars with
both \citet{trumpler21} and \citet{hertzsprung47} names, where
we use the Hertzsprung numbers as the standard name because that
is the most commonly used designation for these stars in the
literature. That the Trumpler numbers have not been given
precedence perhaps stems from the fact that
the Trumpler catalog was published in the Lick Observatory
Bulletins as opposed to a refereed journal. In addition to
providing a primary name for each star, we provide
cross-identifications to some of the other catalogs,
particularly where there is existing photometry or spectroscopy
of that star using the alternate names. For the brightest
cluster members, we provide additional cross-references (e.g.,
Greek names, Flamsteed numbers, HD numbers).
For each star, we attempt to include an estimate for Johnson $B$
and $V$, and for Cousins $I$ ($I_{\rm C}$). Only a very small fraction of
the cluster members have photoelectric photometry in these
systems, unfortunately. Photometry for many of the stars has
often been obtained in other systems, including Walraven,
Geneva, Kron, and Johnson. We have used previously published
transformations from the appropriate indices in those systems to
Johnson $BV$ or Cousins $I$. In other cases, photometry is
available in a natural $I$ band system, primarily for some of the
relatively faint cluster members. We have attempted to transform
those $I$ band data to $I_{\rm C}$\ by deriving our own
conversion using stars for which we already have an $I_{\rm C}$\ estimate
as well as the natural $I$ measurement. Details of these issues
are provided in the Appendix.
Finally, we have cross-correlated the cluster candidates catalog
with the 2MASS All-Sky PSC and also with the 6x-PSC
for the Pleiades. For every star in the catalog, we
obtain $JH$$K_{\rm s}$\ photometry and 2MASS positions. Where we have
both main survey 2MASS data and data from the 6x catalog, we
adopt the 6x data for stars with $J>$11, and data from the
standard 2MASS catalog otherwise. We verified that
the two catalogs do not have any obvious photometric or
astrometric offsets relative to each other. The
coordinates we list in our catalog are entirely from these 2MASS
sources, and hence they inherit the very good and homogeneous
2MASS positional accuracies of order 0.1 arcseconds RMS.
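
As a concrete illustration of this adoption rule (a sketch only; the field names are ours), the near-infrared photometry for a cross-matched star can be selected as follows:
\begin{verbatim}
def select_2mass_photometry(allsky_row, sixx_row):
    # Adopt the 6x catalog photometry for stars with J > 11 and the
    # All-Sky PSC photometry otherwise; fall back to the All-Sky entry
    # when no 6x match is available.
    if sixx_row is not None and sixx_row["J"] > 11.0:
        return sixx_row
    return allsky_row
\end{verbatim}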
We have then plotted the candidate Pleiades members in a variety
of color-magnitude diagrams and color-color diagrams, and
required that a star must have photometry that is consistent
with cluster membership. Figure~\ref{fig:ple1695} illustrates
this process, and indicates why (for example) we have excluded
HII 1695 from our final catalog.
Table 2 provides the collected data for the 1417 stars we have
retained as candidate Pleiades members. The first two columns are the
J2000 RA and Dec from 2MASS; the next are the 2MASS $JH$$K_{\rm s}$\ photometry and
their uncertainties, and the 2MASS photometric quality flag (``ph-qual").
If the number
following the 2MASS quality flag is a 1, the 2MASS data come from
the 2MASS All-Sky PSC; if it is a 2, the data come from the 6x-PSC.
The next three columns provide the $B$, $V$ and $I_{\rm C}$\
photometry, followed by a flag which indicates the provenance of that
photometry. The last column provides the most commonly used names
for these stars. The hydrogen burning mass limit for the Pleiades
occurs at about $V$=22, $I$=18, $K_{\rm s}$=14.4. Fifty-three of the
candidate members in the catalog are fainter than this limit, and
hence should be sub-stellar if they are indeed Pleiades members.
Table 3 provides the IRAC [3.6], [4.5], [5.8] and [8.0] photometry we
have derived for Pleiades candidate members included within the region
covered by the IRAC shallow survey of the Pleiades (see section 2).
The brightest stars are saturated even in our short integration frame
data, particularly for the more sensitive 3.6 and 4.5 $\mu$m\ channels.
At the faint end, we provide photometry only for 3.6 and 4.5 $\mu$m\
because the objects are undetected in the two longer wavelength
channels. At the ``top" and ``bottom" of the survey region, we have
incomplete wavelength coverage for a band of width about 5$\arcmin$,
and for stars in those areas we report only photometry in either the
3.6 and 5.8 bands or in 4.5 and 8.0 bands.
Because Table 2 is an amalgam of many previous catalogs, each of
which has different spatial coverage, magnitude limits and
other idiosyncrasies, it is necessarily incomplete and
inhomogeneous. It also certainly includes some non-members. For
$V<$ 12, we expect very few non-members because of the extensive
spectroscopic data available for those stars; the fraction of
non-members will likely increase to fainter magnitudes,
particularly for stars located far from the cluster center. The
catalog is simply an attempt to collect all of the available
data, identify some of the non-members and eliminate
duplications. We hope that it will also serve as a starting
point for future efforts to produce a ``cleaner" catalog.
Figure~\ref{fig:plespatial2} shows the distribution on the sky
of the stars in Table 2. The complete spatial distribution of
all members of the Pleiades may differ slightly from what is
shown due to the inhomogeneous properties of the proper motion
surveys. However, we believe that those effects are relatively
small and the distribution shown is mostly representative of the
parent population. One thing that is evident in Figure
\ref{fig:plespatial2} is mass segregation -- the highest mass
cluster members are much more centrally located than the lowest
mass cluster members. This fact is reinforced by calculating
the cumulative number of stars as a function of distance from
the cluster center for different absolute magnitude bins.
Figure~\ref{fig:ple_segreg} illustrates this trend. Another
property of the Pleiades illustrated by Figure
\ref{fig:plespatial2} is that the cluster appears to be
elongated parallel to the galactic plane, as expected from n-body
simulations of galactic clusters \citep{terlevich87}. Similar
plots showing the flattening of the cluster and evidence for
mass segregation for the V $<$ 12 cluster members were provided
by \citet{raboud98}.
\section{Empirical Pleiades Isochrones and Comparison to Model Isochrones}
Young, nearby, rich open clusters like the Pleiades can and
should be used to provide template data which can
help interpret observations of more distant clusters or to
test theoretical models. The identification of candidate
members of distant open clusters is often based on plots of
stars in a color-magnitude diagram, overlaid upon which is a
line meant to define the single-star locus at the distance of
the cluster. The stars lying near or slightly above the locus
are chosen as possible or probable cluster members. The data we
have collected for the Pleiades provide a means to define the
single-star locus for 100 Myr, solar metallicity stars in a
variety of widely used color systems down to and slightly below
the hydrogen burning mass limit. Figure~\ref{fig:cmd_vmi} and
Figure~\ref{fig:cmd_km1} illustrate the appearance of the
Pleiades stars in two of these diagrams, and the single-star
locus we have defined. The curve defining the single-star
locus was drawn entirely ``by eye.'' It is displaced slightly
above the lower envelope to the locus of stars to
account for photometric uncertainties (which increase to fainter
magnitudes). We attempted to use all of the information
available to us, however. That is, there should also be an
upper envelope to the Pleiades locus in these diagrams, since
equal mass binaries should be displaced above the single star
sequence by 0.7 magnitudes (and one expects very few systems of
higher multiplicity). Therefore, the single star locus was
defined with that upper envelope in mind. Table 4 provides the
single-star loci for the Pleiades for $BVI_{\rm c}JK_{\rm s}$
plus the four IRAC channels. We have dereddened the
empirical loci by the canonical mean extinction to the Pleiades
of $A_V$\ = 0.12 (and, correspondingly, A$_B$ = 0.16, A$_I$ =
0.07, A$_J$ = 0.03, A$_K$ = 0.01, as per the reddening law
of \citet{rieke85}).
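
For reference, this dereddening simply subtracts the band-dependent extinctions listed above from the empirical loci; a minimal sketch (our own function and band labels) is:
\begin{verbatim}
# Mean Pleiades extinction per band (mag), as quoted in the text.
EXTINCTION = {"B": 0.16, "V": 0.12, "Ic": 0.07, "J": 0.03, "K": 0.01}

def deredden(mags):
    # Subtract the mean cluster extinction from each observed magnitude;
    # bands without a listed extinction are returned unchanged.
    return {band: m - EXTINCTION.get(band, 0.0) for band, m in mags.items()}

print(deredden({"V": 10.50, "Ic": 9.80}))
\end{verbatim}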
The other benefit to constructing the new catalog is that it can
provide an improved comparison dataset to test theoretical
isochrones. The new catalog provides homogeneous photometry in
many photometric bands for stars ranging from several solar
masses down to below 0.1 M$_{\sun}$.
We take the distance to the Pleiades
as 133 pc, and refer the reader to \citet{soderblom05} for a
discussion and a listing of the most recent determinations. The
age of the Pleiades is not as well-defined, but is probably
somewhere between 100 and 125 Myr \citep{meynet93, stauffer98}.
We adopt 100 Myr for the purposes of this discussion; our
conclusions relative to the theoretical isochrones would not be
affected significantly if we instead chose 125 Myr. As noted
above, we adopt $A_V$=0.12 as the mean Pleiades extinction, and
apply that value to the theoretical isochrones. A small number
of Pleiades members have significantly larger extinctions
\citep{breger86, stauffer87}, and we have dereddened those
stars individually to the mean cluster reddening.
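
With these choices, the apparent magnitudes of the theoretical isochrones are offset from the absolute magnitudes by the distance modulus
$$m-M=5\log_{10}\left(\frac{133~{\rm pc}}{10~{\rm pc}}\right)\simeq 5.62~{\rm mag}$$
in addition to the adopted extinction in each band.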
Figures \ref{fig:super_vmi} and \ref{fig:super_kik} compare
theoretical 100 Myr isochrones from \citet{siess00} and
\citet{baraffe98} to the Pleiades member photometry from Table 2
for stars for which we have photoelectric photometry. Neither
set of isochrones is a good fit to the $V-I$ based
color-magnitude diagram. For \citet{baraffe98} this is not a
surprise because they illustrated in that paper that their isochrones are too
blue in $V-I$ for cool stars, and ascribed the
problem to what is most likely an incomplete line list,
resulting in too little absorption in the $V$ band. For
\citet{siess00}, the poor fit in the $V-I$ CMD is somewhat
unexpected in that they transform from the theoretical to the
observational plane using empirical color-temperature
relations. In any event, it is clear that neither model
isochrones match the shape of the Pleiades locus in the $V$ vs.\
$V-I$ plane, and therefore use of these $V-I$ based isochrones for
younger clusters is not likely to yield accurate results (unless
the color-$T_{\rm eff}$\ relation is recalibrated, as described for
example in \citet{jeffries05}). On the other hand, the
\citet{baraffe98} model provides a quite good fit to the
Pleiades single star locus for an age of 100 Myr in the $K$ vs.\
$I-K$ plane.\footnote{These isochrones are calculated for the
standard K filter, rather than $K_{\rm s}$. However, the difference in
location of the isochrones in these plots because of this should
be very slight, and we do not believe our conclusions are significantly
affected.}
This perhaps lends support to the hypothesis that
the misfit in the $V$ vs.\ $V-I$ plane is due to missing opacity in
the $V$ band in their model atmospheres for low mass stars (see also \citet{chabrier00}
for further evidence in support of this idea). The
\citet{siess00} isochrones do not fit the Pleiades locus in the
$K$ vs.\ $I-K$ plane particularly well, being too faint near
$I-K$=2 and too bright for $I-K >$ 2.5.
\section{Identification of New Very Low Mass Candidate Members}
The highest spatial density for Pleiades members of any mass
should be at the cluster center. However, searches for
substellar members of the Pleiades have generally avoided the
cluster center because of the deleterious effects of scattered
light from the high mass cluster members and because of the
variable background from the Pleiades reflection nebulae. The
deep 2MASS and IRAC 3.6 and 4.5 $\mu$m\ imaging provide accurate
photometry to well below the hydrogen burning mass limit, and
are less affected by the nebular emission than shorter
wavelength images. We therefore expect that it should be
possible to identify a new set of candidate Pleiades substellar
members by combining our new near and mid-infrared photometry.
The substellar mass limit in the Pleiades occurs at about
$K_{\rm s}$ =14.4, near the limit of the 2MASS All-Sky PSC. As
illustrated in Figure \ref{fig:2macmd}, the deep 2MASS survey
of the Pleiades should easily detect objects at least two
magnitudes fainter than the substellar limit. The key to
actually identifying those objects and separating them from the
background sources is to find color-magnitude or color-color
diagrams which separate the Pleiades members from the other
objects. As shown in Figure~\ref{fig:cmd3dot6}, late-type
Pleiades members separate fairly well from most field stars
towards the Pleiades in a $K_{\rm s}$\ vs.\ $K_s-[3.6]$ color-magnitude
diagram. However, as illustrated in Figure~\ref{fig:2macmd}, in
the $K_s$ magnitude range of interest there is also a large
population of red galaxies, and they are in fact the primary
contaminants to identifying Pleiades substellar objects in the $K_{\rm s}$\
vs.\ $K_s-[3.6]$ plane. Fortunately, most of the contaminant
galaxies are slightly resolved in the 2MASS and IRAC imaging,
and we have found that we can eliminate most of the red galaxies
by their non-stellar image shape.
Figure~\ref{fig:cmd3dot6} shows the first step in our process of
identifying new very low mass members of the Pleiades. The
red plus symbols are the known Pleiades members from Table 2.
The red open circles are candidate Pleiades substellar members
from deep imaging surveys published in the literature, mostly of
parts of the cluster exterior to the central square degree,
where the IRAC photometry is from \citet{lowrance07}. The blue,
filled circles are field M and L dwarfs, placed at the distance
of the Pleiades, using photometry from \citet{patten06}.
Because the Pleiades is $\sim$100 Myr old, its very low mass stellar
and substellar objects will be displaced about 0.7 mag above the
locus of the field M and L dwarfs according to the
\citet{baraffe98} and \citet{chabrier00} models, in accord with the location in the
diagram of the previously identified, candidate VLM and
substellar objects. The trapezoidal shaped region outlined with
a dashed line is the region in the diagram which we define as
containing candidate new VLM and substellar members of the
Pleiades. We place the faint limit of this region at $K_{\rm s}$ =16.2
in order to avoid the large apparent increase in faint, red
objects for $K_{\rm s}$ $>$ 16.2, caused largely by increasing errors in
the $K_{\rm s}$\ photometry. Also, the 2MASS extended object flags cease
to be useful fainter than about $K_{\rm s}$= 16.
We took the following steps to identify a set of candidate substellar
members of the Pleiades (a schematic version of this selection is sketched after the list):
\begin{itemize}
\item keep only objects which fall in the trapezoidal region in
Figure~\ref{fig:cmd3dot6};
\item remove objects flagged as non-stellar by the 2MASS pipeline software;
\item remove objects which appear non-stellar to the eye in the IRAC images;
\item remove objects which do not fall in or near the locus of
field M and L dwarfs in a $J-H$ vs.\ $H-K_s$ diagram;
\item remove objects which have 3.6 and 4.5 $\mu$m\ magnitudes that differ
by more than 0.2 mag;
\item remove objects which fall below the ZAMS in a $J$ vs.\ $J-K_s$ diagram.
\end{itemize}
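
As an illustration only, the automated part of the culling above can be summarized by the following sketch, in which the helper predicates \texttt{in\_trapezoid}, \texttt{near\_ml\_dwarf\_locus} and \texttt{above\_zams} stand in for the selection regions shown in the figures (their exact boundaries are not reproduced here) and the field names are ours:
\begin{verbatim}
def is_vlm_candidate(star, in_trapezoid, near_ml_dwarf_locus, above_zams):
    # Automated part of the culling; 'star' is a dict of magnitudes with
    # keys J, H, Ks, ch1 ([3.6]) and ch2 ([4.5]) plus a boolean
    # 'extended_2mass' flag.  Visual inspection of the IRAC images is a
    # separate, manual step.
    if not in_trapezoid(star["Ks"], star["Ks"] - star["ch1"]):
        return False                  # outside the Ks vs. Ks-[3.6] region
    if star["extended_2mass"]:
        return False                  # 2MASS classifies it as non-stellar
    if not near_ml_dwarf_locus(star["J"] - star["H"], star["H"] - star["Ks"]):
        return False                  # off the field M/L dwarf locus
    if abs(star["ch1"] - star["ch2"]) > 0.2:
        return False                  # [3.6] and [4.5] disagree by > 0.2 mag
    if not above_zams(star["J"], star["J"] - star["Ks"]):
        return False                  # below the ZAMS in J vs. J-Ks
    return True
\end{verbatim}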
As shown in Figure~\ref{fig:cmd3dot6}, all stars earlier than
about mid-M have $K_s-[3.6]$ colors bluer than 0.4. This ensures
that for most of the area of the trapezoidal region, the primary
contaminants are distant galaxies. Fortunately, the 2MASS
catalog provides two types of flags for identifying extended
objects. For each filter, a chi-square flag measures the match
between the object's shape and the instrumental PSF, with values
greater than 2.0 generally indicative of a non-stellar object.
In order not to be misguided by an image artifact in one filter,
we throw out the most discrepant of the three flags and average
the other two. We discard objects with mean $\chi^2$ greater
than 1.9. The other indicator is the 2MASS extended object
flag, which is the synthesis of several independent tests of the
object's shape, surface brightness and color (see \citet{jarrett00}
for a description of this process). If one
simply excludes the objects classified as extended in the 2MASS
6x image by either of these techniques, the number of
candidate VLM and substellar objects lying inside the
trapezoidal region decreases by nearly half.
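
The combination of the per-band fit flags described above can be written compactly as in the following sketch (we interpret ``most discrepant'' as the value furthest from the median of the three):
\begin{verbatim}
def mean_psf_chi2(chi2_j, chi2_h, chi2_k):
    # Drop the most discrepant of the three per-band chi-square values
    # (taken here as the one furthest from their median) and average
    # the remaining two.
    values = [chi2_j, chi2_h, chi2_k]
    median = sorted(values)[1]
    kept = sorted(values, key=lambda v: abs(v - median))[:2]
    return 0.5 * sum(kept)

def passes_psf_test(chi2_j, chi2_h, chi2_k, threshold=1.9):
    # Objects with a mean chi-square above the threshold are treated
    # as extended and discarded.
    return mean_psf_chi2(chi2_j, chi2_h, chi2_k) <= threshold
\end{verbatim}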
We have one additional means to demonstrate that many
of the identified objects are probably Pleiades members, and
that is via proper motions. The mean Pleiades proper motion is
$\Delta$RA = 20 mas yr$^{-1}$ and $\Delta$Dec = $-$45 mas
yr$^{-1}$ \citep{jones73}. With an epoch difference of only 3.5
years between the deep 2MASS and IRAC imaging, the expected
motion for a Pleiades member is only 0.07 arcseconds in RA and
$-$0.16 arcseconds in Dec. Given the relatively large pixel
size for the two cameras, and the undersampled nature of the
IRAC 3.6 and 4.5 $\mu$m\ images, it is not a priori obvious that
one would expect to reliably detect the Pleiades motion.
However, both the 2MASS and IRAC astrometric solutions have been
very accurately calibrated. Also, for the present purpose, we
only ask whether the data support a conclusion that most of the
identified substellar candidates are true Pleiades members
(i.e., as an ensemble), rather than that each star is well
enough separated in a VPD to derive a high membership
probability.
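
The expected displacements quoted above follow directly from the mean cluster proper motion and the 3.5 year epoch difference:
$$\Delta{\rm RA}\simeq 20~{\rm mas\,yr^{-1}}\times 3.5~{\rm yr}=70~{\rm mas}\simeq 0.07'',\qquad
\Delta{\rm Dec}\simeq -45~{\rm mas\,yr^{-1}}\times 3.5~{\rm yr}\simeq -0.16''.$$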
Figure~\ref{fig:super_propmo} provides a set of plots that we
believe support the conclusion that the majority of the
surviving VLM and substellar candidates are Pleiades members.
The first plot shows the measured motions between the epochs of
the 2MASS and IRAC observations for all known Pleiades members
from Table 2 that lie in the central square degree region and
have 11 $<$ $K_{\rm s}$\ $<$ 14 (i.e., just brighter than the substellar
candidates). The mean offset of the Pleiades stellar members
from the background population is well-defined and is
quantitatively of the expected magnitude and sign (+0.07 arcsec
in RA and $-$0.16 arcsec in Dec). The RMS dispersion of the
coordinate difference for the field population in RA and Dec is
0.076 and 0.062 arcseconds, supportive of our claim that the
relative astrometry for the two cameras is quite good. Because
we expect that the background population should have essentially
no mean proper motion, the non-zero mean ``motion" of the field
population of about $\langle \Delta {\rm RA} \rangle = 0.3$ arcseconds is
presumably not real. Instead, the offset is probably due to
the uncertainty in transferring the Spitzer coordinate
zero-point between the warm star-tracker and the cryogenic focal
plane. Because it is simply a zero-point offset applicable to
all the objects in the IRAC catalog, it has no effect on the
ability to separate Pleiades members from the field star
population.
The second panel in Figure~\ref{fig:super_propmo} shows the
proper motion of the candidate Pleiades VLM and substellar
objects. While these objects do not show as clean a
distribution as the known members, their mean motion is clearly
in the same direction. After removing 2-$\sigma$ deviants, the
median offsets for the substellar candidates are 0.04 and
$-$0.11 arcseconds in RA and Dec, respectively. The objects
whose motions differ significantly from the Pleiades mean may be
non-members or they may be members with poorly determined
motions (since a few of the high probability members in the
first panel also show discrepant motions).
The other two panels in Figure~\ref{fig:super_propmo} show the proper
motions of two possible control samples. The first control sample was
defined as the set of stars that fall up to 0.3 magnitudes below the
lower sloping boundary of the trapezoid in
Figure~\ref{fig:cmd3dot6}. These objects should be late type dwarfs
that are either older or more distant than the Pleiades or red galaxies.
We used the 2MASS data to remove extended or blended objects from the
sample in the same way as for the Pleiades candidates. If the
objects are nearby field stars, we expect to see large proper motions;
if galaxies, the real proper motions would be small -- but relatively
large apparent proper motions due to poor centroiding or different
centroids at different effective wavelengths could be present. The
second control set was defined to have $-0.1 < K - [3.6] < 0.1$ and
$14.0 < K < 14.5$, and to be stellar based on the 2MASS flags. This
control sample should therefore be relatively distant G and K dwarfs
primarily. Both control samples have proper motion distributions
that differ greatly from the Pleiades samples and that make sense for,
respectively, a nearby and a distant field star sample.
Figure~\ref{fig:cmd3dot6memb} shows the Pleiades members from
Table 2 and the 55 candidate VLM and substellar members that
survived all of our culling steps. We cross-correlated this list
with the stars from Table 2 and with a list of the previously
identified candidate substellar members of the cluster from
other deep imaging surveys. Fourteen of the surviving objects
correspond to previously identified Pleiades VLM and substellar
candidates. We provide the new list of candidate members in
Table 5. The columns marked as $\mu$(RA) and $\mu$(DEC) are
the measured motions, in arcsec over the 3.5 year epoch difference
between the 2MASS-6x and IRAC observations.
Forty-two of these objects have $K_{\rm s}$ $>$ 14.0, and hence
inferred masses less than about 0.1 M$_{\sun}$; thirty-one of them
have $K_{\rm s}$ $>$ 14.4, and hence have inferred masses below the
hydrogen burning mass limit.
Our candidate list could be contaminated by foreground late type
dwarfs that happen to lie in the line of sight to the Pleiades. How
many such objects should we expect? In order to pass our culling
steps, such stars would have to be mid to late M dwarfs, or early to
mid L dwarfs. We use the known M dwarfs within 8 pc to estimate how
many field M dwarfs should lie in a one square degree region and at
distances between 70 and 100 parsecs (so they would be coincident in a
CMD with the 100 Myr Pleiades members). The result is $\sim$3 such
field M dwarf contaminants. \citet{cruz06} estimate that the volume
density of L dwarfs is comparable to that for late-M dwarfs, and
therefore a very conservative estimate is that there might also be 3
field L dwarfs contaminating our sample. We regard this (6
contaminating field dwarfs) as an upper limit because our various
selection criteria would exclude early M dwarfs and late L dwarfs.
\citet{bihain06} made an estimate of the number of contaminating
field dwarfs in their Pleiades survey of 1.8 square degrees; for the
spectral type range of our objects, their algorithm would have
predicted just one or two contaminating field dwarfs for our survey.
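
The geometric part of this estimate is easy to reproduce; in the sketch below the adopted space density of mid-to-late M dwarfs is a placeholder of ours (the text instead counts the known M dwarfs within 8 pc), chosen only to show that a value of this order gives a few contaminants:
\begin{verbatim}
import math

# Volume of a one-square-degree cone section between 70 and 100 pc.
omega = (math.pi / 180.0) ** 2                     # 1 deg^2 in steradians
volume = omega / 3.0 * (100.0 ** 3 - 70.0 ** 3)    # about 67 pc^3

# Placeholder space density of mid-to-late M dwarfs (pc^-3), NOT taken
# from the text; a value of this order yields ~3 contaminants.
density = 0.05
print(round(volume, 1), round(density * volume, 1))
\end{verbatim}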
How many substellar Pleiades members should there be in the
region we have surveyed? That is, of course, part of the
question we are trying to answer. However, previous studies
have estimated that the Pleiades stellar mass function for M $<$
0.5 M$_{\sun}$\ can be approximated as a power-law with an exponent
of -1 (dN/dM $\propto$ M$^{-1}$). Using the known Pleiades
members from Table 2 that lie within the region of the IRAC
survey and that have masses of 0.2 $<$ M/M$_{\sun}$ $<$ 0.5 (as
estimated from the \citet{baraffe98} 100 Myr isochrone) to
normalize the relation, the M$^{-1}$\ mass function predicts
about 48 members in our search region and with 14 $<$ K $<$ 16.2
(corresponding to 0.035 $<$ M/M$_{\sun}$ $<$ 0.1). Other studies
have suggested that the mass function in the Pleiades becomes
shallower below 0.1 M$_{\sun}$, dN/dM $\propto$ M$^{-0.6}$. Using
the same normalization as above, this functional form for the
Pleiades mass function for M $<$ 0.1 M$_{\sun}$\ yields a prediction
of 20 VLM and substellar members in our survey. The number of
candidates we have found falls between these two estimates.
Better proper motions and low-resolution spectroscopy will almost
certainly eliminate some of these candidates as non-members.
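
For the steeper slope, the prediction can be reproduced from the ratio of the power-law integrals alone; in the sketch below the reference count \texttt{n\_ref} is a placeholder of ours, since the number of 0.2--0.5 M$_{\sun}$ members used for the normalization is not quoted explicitly:
\begin{verbatim}
import math

# For dN/dM ~ M^-1 the predicted count in a mass interval scales with the
# logarithm of the mass ratio, normalized to the reference interval.
ratio = math.log(0.1 / 0.035) / math.log(0.5 / 0.2)   # about 1.15

# Placeholder: number of known members with 0.2 < M/Msun < 0.5 in the
# survey region; a value near 42 reproduces the ~48 quoted above.
n_ref = 42
print(round(n_ref * ratio))                           # -> 48
\end{verbatim}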
\section{Mid-IR Observations of Dust and PAHS in the Pleiades}
\label{sec:discussion}
Since the earliest days of astrophotography, it has been clear
that the Pleiades stars are in relatively close proximity to
interstellar matter whose optical manifestation is the
spider-web like network of filaments seen particularly strongly
towards several of the B stars in the cluster. High resolution
spectra of the brightest Pleiades stars as well as CO maps
towards the cluster show that there is gas as well as dust
present, and that the (primary) interstellar cloud has a
significant radial velocity offset relative to the Pleiades
\citep{white03, federman84}. The gas and dust, therefore, are
not a remnant from the formation of the cluster but are simply
evidence of a transitory event as this small cloud passes by
the cluster in our line of sight (see also \citet{breger86}).
There are at least two claimed morphological signatures of a
direct interaction of the Pleiades with the cloud.
\citet{white93} provided evidence that the IRAS 60 and 100 $\mu$m\
image of the vicinity of the Pleiades showed a dark channel
immediately to the east of the Pleiades, which they interpreted
as the ``wake" of the Pleiades as it plowed through the cloud
from the east. \citet{herbig01} provided a detailed analysis of
the optically brightest nebular feature in the Pleiades -- IC
349 (Barnard's Merope nebula) -- and concluded that the shape
and structure of that nebula could best be understood if the
cloud was running into the Pleiades from the southeast.
\citet{herbig01} concluded that the IC 349 cloudlet, and by
extension the rest of the gas and dust enveloping the Pleiades,
are relatively distant outliers of the Taurus molecular clouds
(see also \citet{eggen50} for a much earlier discussion describing
the Merope nebulae as outliers of the Taurus clouds).
\citet{white03} has more recently proposed a hybrid model, where
there are two separate interstellar cloud complexes with very
different space motions, both of which are colliding
simultaneously with the Pleiades and with each other.
\citet{breger86} provided polarization measurements for a sample
of member and background stars towards the Pleiades, and argued
that the variation in polarization signatures across the face of
the cluster was evidence that some of the gas and dust was
within the cluster. In particular, Figure 6 of that paper showed
a fairly distinct interface region, with little residual
polarization in the NE portion of the cluster and an L-shaped
boundary running EW along the southern edge of the cluster and
then north-south along the western edge of the cluster. Stars
to the south and west of that boundary show relatively large
polarizations and consistent angles (see also our Figure
\ref{fig:cartoon} where we provide a few polarization vectors
from \citet{breger86} to illustrate the location of the
interface region and the fact that the position angle of the
polarization correlates well with the location in the
interface).
There is a general correspondence between the polarization map
and what is seen with IRAC, in the sense that the B stars in the
NE portion of the cluster (Atlas and Alcyone) have little
nebular emission in their vicinity, whereas those in the
western part of the cluster (Maia, Electra and Asterope) have
prominent, filamentary dust emission in their vicinity. The
L-shaped boundary is in fact visible in Figure~\ref{fig:pleIRAC}
as enhanced nebular emission running between and below a line
roughly joining Merope and Electra, and then making a right
angle and continuing roughly parallel to a line running from
Electra to Maia to HII1234 (see Figure~\ref{fig:cartoon}).
\subsection{Pleiades Dust-Star Encounters Imaged with IRAC}
\label{sec: dust structures}
The Pleiades dust filaments are most prominent in IRAC's
8 $\mu$m\ channel, as evidenced by the distinct red color of the
nebular features in Figure~\ref{fig:pleIRAC}. The dominance at
8 $\mu$m\ is an expected feature of reflection nebulae, as
exemplified by NGC 7023 \citep{werner04}, where most of the
mid-infrared emission arises from polycyclic aromatic
hydrocarbons (PAHs) whose strongest bands in the 3 to 10 $\mu$m\
region fall at 7.7 and 8.6 $\mu$m. One might expect that if
portions of the passing cloud were particularly near to one of
the Pleiades members, it might be possible to identify such
interactions by searching for stars with 8.0 $\mu$m\ excesses or
for stars with extended emission at 8 $\mu$m. Figure
\ref{fig:dusty1} provides two such plots. Four stars stand out
as having significant extended 8 $\mu$m\ emission, with two of
those stars also having an 8 $\mu$m\ excess based on their
[3.6]$-$[8.0] color. All of these stars, plus IC 349, are
located approximately along the interface region identified by
\citet{breger86}.
We have subtracted a PSF from the 8 $\mu$m\ images for the stars
with extended emission, and those PSF-subtracted images are
provided in Figure~\ref{fig:psfsub}. The image for HII 1234 has
the appearance of a bow-shock. The shape is reminiscent of
predictions for what one should expect from a collision between
a large cloud or a sheet of gas and an A star as described in
\citet{artymowicz97}. The \citet{artymowicz97} model posits that
A stars encountering a cloud will carve a paraboloidal shaped
cavity in the cloud via radiation pressure. The exact size and
shape of the cavity depend on the relative velocity of the
encounter, the star's mass and luminosity and properties of the
ISM grains. For typical parameters, the predicted
characteristic size of the cavity is of order 1000 AU,
quite comparable to the size of the structures around HII 652 and
HII 1234. The observed appearance of the cavity depends on the
view angle to the observer. However, in any case, the
direction from which the gas is moving relative to the star can
be inferred from the location of the star relative to the curved
rim of the cavity; the ``wind" originates approximately from
the direction connecting the star and the apex of the rim. For
HII 1234, this indicates the cloud which it is encountering has a
motion relative to HII 1234 from the SSE, in accord with a
Taurus origin and not in accord with a cloud impacting
the Pleiades from the west as posited by \citet{white03}. The
nebular emission for HII 652 is less strongly bow-shaped, but the
peak of the excess emission is displaced roughly southward
from the star, consistent with the Taurus model and
inconsistent with gas flowing from the west.
Despite being the brightest part of the Pleiades nebulae in the
optical, IC 349 appears to be undetected in the 8 $\mu$m\ image.
This is not because the 8 $\mu$m\ image is insensitive to the
nebular emission - there is generally good agreement between
the structures seen in the optical and at 8 $\mu$m, and most
of the filaments present in optical images of the Pleiades
are also visible on the 8 $\mu$m\ image (see Figures
\ref{fig:pleIRAC} and \ref{fig:psfsub}) and even the psf-subtracted
image of Merope shows well-defined nebular filaments.
The lack of enhanced 8 $\mu$m\
emission from the region of IC 349 is probably
because all of the small particles have been scoured away from this cloudlet,
consistent with Herbig's model to explain the HST surface
photometry and colors. There is no PAH emission from
IC 349 because there are none of the small molecules that are the
postulated source of the PAH emission.
IC349 is very bright in the optical, and undetected to a good sensitivity
limit at 8 $\mu$m; it must be detectable via imaging at some wavelength
between 5000 \AA\ and 8 $\mu$m. We checked our 3.6 $\mu$m\ data for
this purpose. In the standard BCD mosaic image, we were unable to
discern an excess at the location of IC349 either simply by displaying
the image with various stretches or by doing cuts through the image.
We performed a PSF subtraction of Merope from the image in order to
attempt to improve our ability to detect faint, extended emission
30" from Merope - unfortunately, bright stars have ghost images
in IRAC Ch. 1, and in this case the ghost image falls almost
exactly at the location of IC349. IC349 is also not detected in
visual inspection of our 2MASS 6x images.
\subsection{Circumstellar Disks and IRAC}
As part of the Spitzer FEPS (Formation and Evolution of Planetary
Systems) Legacy program,
using pointed MIPS photometry, \citet{stauffer05} identified
three G dwarfs in the Pleiades as having 24 $\mu$m\ excesses
probably indicative of circumstellar dust disks.
\citet{gorlova06} reported results of a MIPS GTO survey of the
Pleiades, and identified nine cluster members that appear to
have 24 $\mu$m\ excesses due to circumstellar disks. However,
it is possible that in a few cases these apparent excesses could
be due instead to a knot of the passing interstellar dust
impacting the cluster member, or that the 24 $\mu$m\ excess could
be flux from a background galaxy projected onto the line of
sight to the Pleiades member. Careful analysis of the IRAC
images of these cluster members may help confirm that the MIPS
excesses are evidence for debris disks rather than the other
possible explanations.
Six of the Pleiades members with probable 24 $\mu$m\ excesses are
included in the region mapped with IRAC. However, only four of
them have data at 8 $\mu$m\ -- the other two fall near the edge of
the mapped region and only have data at 3.6 and 5.8 $\mu$m.
None of the six stars appear to have significant local nebular
dust from visual inspection of the IRAC mosaic images. Also,
none of them appear problematic in Figure \ref{fig:dusty1}.
For a slightly more quantitative analysis of possible nebular
contamination, we also constructed aperture growth curves for
the six stars, and compared them to other Pleiades members. All
but one of the six show aperture growth curves that are normal
and consistent with the expected IRAC PSF. The one exception is
HII 489, which has a slight excess at large aperture sizes as is
illustrated in Figure \ref{fig:ap_grow2}. Because HII 489 only
has a small 24 $\mu$m\ excess, it is possible that the 24 $\mu$m\
excess is due to a local knot of the interstellar cloud material
and is not due to a debris disk. For the other five 24 $\mu$m\
excess stars we find no such problem, and we conclude that their
24 $\mu$m\ excesses are indeed best explained as due to debris
disks.
\section{Summary and Conclusions}
We have collated the primary membership catalogs for the Pleiades to
produce the first catalog of the cluster extending from its highest
mass members to the substellar limit. At the bright end, we expect
this catalog to be essentially complete and with few or no non-member
contaminants. At the faint end, the data establishing membership are
much sparser, and we expect a significant number of objects will be
non-members. We hope that the creation of this catalog will spur
efforts to obtain accurate radial velocities and proper motions for
the faint candidate members in order to eventually provide a
well-vetted membership catalog for the stellar members of the
Pleiades. Towards that end, it would be useful to update the current
catalog with other data -- such as radial velocities, lithium
equivalent widths, x-ray fluxes, H$\alpha$ equivalent widths, etc.\ --
which could be used to help accurately establish membership for the
low mass cluster candidates. It is also possible to make more use of
``negative information" present in the proper motion catalogs. That
is, if a member from one catalog is not included in another study but
does fall within its areal and luminosity coverage, that suggests that
it likely failed the membership criteria of the second study. For a
few individual stars, we have done this type of comparison, but a
systematic analysis of the proper motion catalogs should be
conducted. We intend to undertake these tasks, and plan to establish
a website where these data would be hosted.
We have used the new Pleiades member catalog to define the
single-star locus at 100 Myr for $BVI_c$$K_{\rm s}$\ and the four IRAC
bands. These curves can be used as empirical calibration curves
when attempting to identify members of less well-studied, more
distant clusters of similar age. We compared the Pleiades
photometry to theoretical isochrones from \citet{siess00} and
\citet{baraffe98}. The \citet{siess00} isochrones are not, in
detail, a good fit to the Pleiades photometry, particularly for
low mass stars. The \citet{baraffe98} 100 Myr isochrone does
fit the Pleiades photometry very well in the $K$ vs.\ $I-K$ plane.
We have identified 31 new substellar candidate members of the
Pleiades using our combined seven-band infrared photometry, and
have shown that the majority of these objects appear to share the
Pleiades proper motion. We believe that most of the objects
that may be contaminating our list of candidate brown dwarfs are
likely to be unresolved galaxies, and therefore low resolution
spectroscopy should be able to provide a good criterion for
culling our list of non-members.
The IRAC images, particularly the 8 $\mu$m\ mosaic, provide vivid
evidence of the strong interaction of the Pleiades stars and the
interstellar cloud that is passing through the Pleiades. Our
data are supportive of the model proposed by \citet{herbig01}
whereby the passing cloud is part of the Taurus cloud complex
and hence is encountering the Pleiades from the SSE direction.
\citet{white93} had proposed a model whereby the cloud was encountering
the Pleiades from the west and used this to explain features in
the IRAS 60 and 100 $\mu$m images of the region as the wake of
the Pleiades moving through the cloud. Our data do not appear to
support that hypothesis, which therefore leaves the apparent
structure in the IRAS maps unexplained.
\acknowledgments
Most of the support for this work was provided by the Jet
Propulsion Laboratory, California Institute of Technology, under
NASA contract 1407. This research has made use of NASA's
Astrophysics Data System (ADS) Abstract Service, and of the
SIMBAD database, operated at CDS, Strasbourg, France. This
research has made use of data products from the Two Micron
All-Sky Survey (2MASS), which is a joint project of the
University of Massachusetts and the Infrared Processing and
Analysis Center, funded by the National Aeronautics and Space
Administration and the National Science Foundation. These data
were served by the NASA/IPAC Infrared Science Archive, which is
operated by the Jet Propulsion Laboratory, California Institute
of Technology, under contract with the National Aeronautics and
Space Administration. The research described in this paper was
partially carried out at the Jet Propulsion Laboratory,
California Institute of Technology, under contract with the
National Aeronautics and Space Administration.
|
1,116,691,499,835 | arxiv | \section{Introduction}
Modern machine learning techniques, specifically deep neural networks (DNNs), have
enabled tremendous progress in diverse applications, ranging from speech
recognition, natural language processing, and image classification to data
analytics, self-driving cars, and many more. In this article, we ask the
following question: Is there a role for machine learning in physical-layer
wireless communications system design? If so, where do opportunities lie, and
where would the potential benefits come from?
Fundamental to the phenomenal success of the machine learning techniques across
a wide range of applications is its apparent universal ability to approximate
any functional mapping from an input space to an output space, given sufficiently
complex neural network structure and enough training data \cite{hornik1989multilayer}.
In fact, a common characteristic of the application domains where
machine learning has made the most impact is that the inputs to these tasks
are high-dimensional complex data, whose structure needs to be explored, while
the outputs of these tasks can either be categorical (e.g., classification,
segmentation, sentiment analysis) or have complex structures themselves
(e.g., machine translation, image labelling). The field of machine learning has developed
myriad techniques to enable automatic feature extraction and to explore the
structure of the problem in order to efficiently train a DNN to
map the input to the desired output. The machine learning paradigm essentially
solves optimization problems by pattern matching. This is a vastly different
philosophy from the traditional model-based information theoretical
approach to communication system design.
This article aims to illustrate that machine learning has an important role
to play even in the physical-layer wireless communications, which has traditionally
been dominated by model-based design and optimization approaches. This is so
for several reasons:
\begin{itemize}
\item First, traditional wireless communication design methodologies typically
rely on the channel model, but models are inherently only an approximation to
the reality. In applications where the models are complex and the channels are
difficult to estimate, a data-driven methodology that allows the system design to
bypass explicit channel estimation can potentially be a better approach.
\item Second, modern wireless communication
applications often involve optimization problems that are high dimensional,
nonconvex, and difficult to solve efficiently. By exploiting the availability of
training data, a machine learning approach may be able to learn the
solutions of the optimization problems directly. This can lead to a more
efficient way to explore the nonconvex optimization landscape than the
traditional model-based optimization approaches.
\item Third, traditional
communication system designs are based on the principle of source-channel
separation and the optimal design of compression and channel codes. But when
the encoder and the decoder are block-length and/or complexity constrained, or
when the overall communication scenario involves multiple transmitters and
multiple receivers, the optimal design of practical encoder and decoder is highly
challenging. In this realm, there is the potential for discovering better
source and channel encoders and decoders using machine learning, as many of
these code design problems boil down to solving optimization problems over the
codebook structure for which data-driven methods may be able to identify
better solutions more efficiently.
\end{itemize}
The field of machine learning for communication system design has exploded
in recent years \cite{o2017introduction,qin2019deep,8755300,eldar2022machine}. We mention some of the references here, e.g., in source and channel coding \cite{8723589,8242643,kim2020physical}, waveform design \cite{aoudia2022waveform}, signal detection \cite{ye2017power,farsad2018neural,9735332}, resource allocation \cite{8444648,8664604,lee2019graph,shen2020graph,9448070, 9783100} and channel estimation \cite{he2018deep,8272484}, etc.
This article does not attempt to do justice in surveying the entire
literature and the recent progress on this topic. Instead, we focus on the
questions of why and how machine learning can benefit wireless communication
system design by presenting the following three specific examples.
First, we consider communication scenarios in which a naive parameterization
of the channel would involve a large number of parameters, thus making channel
estimation a challenging task. Specifically, we show that in a wireless
communication system involving a reconfigurable intelligent surface (RIS)
comprising a large number of reflective elements, a machine learning approach that
directly optimizes the reflection coefficients without first estimating the
channel can significantly improve the overall performance \cite{9427148}.
Second, we consider a distributed source coding problem in the context of
channel estimation and feedback for a massive multiple-input
multiple-output (MIMO) system, and show that short block-length code design for
distributed data compression with system-level objective is feasible and can
result in significant performance improvements over the
single-user data compression codebook design \cite{9347820}.
Third, we use an active sensing problem for millimeter wave (mmWave) initial
alignment to illustrate the role of machine learning in exploring the
optimization landscape in a complex sequential learning problem \cite{9724252}.
We show that selecting the right neural network architecture to match the
problem structure is crucial for its success.
\section{Information Theoretical Approach to Communication System Design}
Information theory has been the guiding principle in the development of
communication system design in the past seventy years. The driving philosophy
in information theory has always been reductionist---putting it in words of a
famous quote: {\it everything should be as simple as possible, but no simpler}.
A celebrated example of this philosophy is the additive white Gaussian noise
(AWGN) channel model, in which the choice of the Gaussian noise distribution is
justified both by a central limit theorem argument based on the assumption that
the overall noise is comprised of many independent small components and by the
fact that the Gaussian distribution is the worst-case noise distribution for
the additive channel. The AWGN model is cherished in the research community
and has played a central role in many historical developments in communication
theory (e.g., from time-domain equalization, to orthogonal frequency-division
multiplex, to multiuser detection), in coding theory (e.g., from maximum
likelihood decoding, to Viterbi algorithm, to Turbo, low-density parity-check,
and polar codes), and in multiuser information theory (e.g., from multiple-access,
to broadcast, and to interference channel models).
Wireless channels are, however, much more complicated than the AWGN channel
model. The wireless channel can be frequency selective; it is inherently
time-varying; it often involves multiple users and multiple antennas.
Historically, communication engineers have invested heavily in developing
models for various types of wireless channels. These models are often based on the
physics of electromagnetic wave propagation; many of these models are
statistical in nature; these channel models have played an important role in the design, analysis,
performance evaluation, and standardization of generations of wireless systems \cite{3GPP_channel_model}.
Channel modelling is important in wireless communication engineering
because most modern wireless systems
operate under the framework of first estimating the channel, then feeding back the
estimated channel to the transmitter, and finally optimizing transmission and
reception strategies to maximize the mutual information between the input and
the output.
In this article, we argue however that this model-then-optimize approach is not necessarily always the best approach.
\section{From Model-Based Optimization to Learning-Based Design}
\begin{figure*}[t]
\centering
\includegraphics[width=12cm]{fig/Fig1}
\caption{Traditional wireless system design follows the paradigm of
model-then-optimize, as shown in the top branch. The design problem is modelled
mathematically; the model parameters are then estimated, which allows the
associated optimization landscape to be characterized; finally, the optimal
solution is obtained by mathematical programming. The machine learning approach
aims to directly learn the optimal solution from a representation of
the problem instance, as shown in the bottom branch. The neural network is
trained over many problem instances, by adjusting its weights according to
the overall system objective as a function of the representation of
the problem instances.}
\label{fig:ML_opt}
\end{figure*}
\subsection{Model-Based Communication System Design}
In traditional communication system design, maximizing the capacity of a wireless link typically requires channel estimation; the
process of channel estimation always depends on the channel model. Choosing which
model to use is, however, an art rather than a science. This is because wireless channels
often have inherent structures that make certain models more appropriate than
others. For example, a MIMO channel with $M$
transmit antennas and $N$ receive antennas can simply be modelled as an $M\times
N$ matrix. But a mmWave massive
MIMO channel often has a sparsity structure, corresponding to the finite number of propagation paths from the transmitter to the receiver, so that a
sparse path-based model in the spatial domain is a more efficient representation
of the channel. Likewise, a frequency-selective channel can be modelled by its
channel response across the frequencies. But, the frequency selectivity is usually
a consequence of the different delays across the multiple paths, so the
channel variations across the frequencies are correlated. Instead of estimating
the channel in the frequency domain, a multipath time-domain model may be more
appropriate.
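To make this point concrete, the following Python snippet (a toy example of
our own, with arbitrary parameter values) builds the frequency response of a
channel with a few delayed paths; all of the subcarrier responses are
generated by just a handful of path gains and delays, which is why a
time-domain multipath description can be far more parsimonious than a
per-subcarrier one.
\begin{verbatim}
# Toy example: a few delayed paths generate a correlated
# frequency response across all subcarriers.
import numpy as np

rng = np.random.default_rng(0)
num_paths, num_subcarriers, delta_f = 3, 256, 15e3
gains = (rng.standard_normal(num_paths)
         + 1j * rng.standard_normal(num_paths)) / np.sqrt(2)
delays = rng.uniform(0, 1e-6, num_paths)   # path delays (s)

freqs = np.arange(num_subcarriers) * delta_f
H = sum(g * np.exp(-2j * np.pi * freqs * tau)
        for g, tau in zip(gains, delays))

# A few path gains and delays describe all num_subcarriers
# complex frequency-domain coefficients.
print(H.shape)
\end{verbatim}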
Moreover, the channel estimation process requires specifying a loss function.
The squared-error metric is often adopted for tractability reasons, but
minimizing the mean-squared-error (MSE) of the estimates of the channel parameters
does not necessarily correspond to maximizing the overall system objective.
For example, some parts of the channel may be more important
to describe than others. Clearly, the specific parameterization of the
channel and the choice of the estimation error metric have a significant impact
on the ultimate system performance.
Traditionally, wireless researchers rely on experience and engineering
judgement in choosing the best channel model and the best optimization
formulation. The design decisions need to balance the inherent trade-offs
between: (i) how complex the model is, e.g., the number of parameters in the
model; (ii) how well the model approximates the reality; (iii) how easy it is to
estimate the model parameters; (iv) how easily the model can be used for
subsequent transmitter and receiver optimization. We emphasize that in a
wireless fading channel with limited coherence time/frequency, model estimation
comes at a significant cost in term of the coherence slots occupied by pilot
transmissions. For example, a highly complex model may better approximate the
reality, but may require too many pilots for parameter estimation, hence may
not be worth the effort. The point is that there is no universal theory about
how to choose the best channel model and how to best perform channel
estimation. To characterize and to take advantage of the underlying channel
structure in the design of the channel estimation process require
engineering intuition and are highly nontrivial tasks.
In contrast, this article shows that a machine learning approach can
be used to allow an automatic discovery of the appropriate representation of
the channel based on training data. Further, it allows the optimization of the
system metric that actually matters (e.g., the achievable rate as opposed to
the MSE of the channel reconstruction) without having to first explicitly estimate the
channel. This can have a
significant advantage as illustrated in the example of optimizing the RIS
coefficients directly based on received pilots in Section \ref{sec:RIS} and
the application of neural networks for
channel feedback for the massive MIMO system in Section \ref{sec:CSI_feedback}.
\subsection{Model-Based Optimization}
In many communication system design problems, even if the model parameters are
perfectly estimated, the resulting transmitter and receiver optimization
problem may still not be easy to solve.
The formulation of the optimization problem is also an art rather than a science.
In fact, wireless engineers often adopt optimization formulations, \emph{because}
the resulting mathematical programming problem is amenable to either an analytic
or a computationally efficient numerical solution. We remark that a
mathematical optimization problem can often be parameterized in many different
ways. The ``holy grail'' of mathematical optimization is often thought of
as to transform a problem into a convex form, so that computationally efficient
numerical procedures can be developed to find the global optimal solution of
the resulting mathematical programming problem. But there is no universal
theory about how best to transform the optimization landscape.
In contrast, this article shows that a machine learning approach can
be used for the automatic discovery of the mapping from the problem
representation to the optimal solution based on training data, as illustrated
in the examples of optimizing RIS coefficients based on received pilots in Section
\ref{sec:RIS}, optimizing beamformers based on channel feedback
in Section \ref{sec:CSI_feedback}, and finally optimizing a sequence of
active sensing strategies in Section \ref{sec:active_sensing}.
\subsection{Data-Driven Communication System Design}
The article advocates the viewpoint that a data-driven approach can
circumvent many of the modelling and optimization difficulties for wireless
system design as mentioned in the previous section. The main idea is as shown
in Fig.~\ref{fig:ML_opt}. Instead of the traditional
model-then-optimize approach, which involves choosing an appropriate parameter
space, then characterizing the associated optimization landscape, and finally
performing the resulting mathematical optimization, we adopt a data-driven
approach to directly map the problem instances to the corresponding optimized solutions.
By training such a neural network over many problem instances, the task of
optimization is essentially turned into \emph{pattern matching}. When a new
optimization task comes along, the trained neural network can then simply output the
corresponding solution. This is akin to a human learner who is trained to use
past experience to perform future optimization tasks.
The advantages of the proposed data-driven paradigm are:
\begin{itemize}
\item It allows direct system-level optimization without the intermediary
step of channel estimation. The modelling uncertainty and the channel
estimation error are implicitly taken into account in the overall
optimization process.
\item It allows an end-to-end design with a realistic system-level objective
function, instead of relying on some arbitrary metric in the model parameter estimation process.
\item It allows the problem instances to be represented in an arbitrary
fashion. Additional side information which is often not easy to
incorporate into a model can now be accounted for in the optimization process.
\item By using a large number of problem instances as training data, it allows
the optimization process to efficiently explore the high-dimensional optimization
landscape in the training stage.
\item Once trained, the neural network can efficiently output the optimized
solution for new problem instances. In effect, the computational
complexity is moved from the optimization stage to the neural network training process.
\end{itemize}
Thus, instead of using a mathematical optimization approach that requires
highly structured models over well-defined problems and relies on the specific
(e.g., convex) structure of the optimization landscape, a machine
learning approach is capable of solving relatively poorly defined problems and
exploring high-dimensional optimization space without first identifying the
problem structure. This is made possible because of the ability of the neural
network to find patterns in the vast amount of training data, thanks to the nowadays
prevalent highly parallel computer architectures for both neural
network training and implementation processes \cite{GPU_2010,tensorflow2016}.
Machine learning is about approximating functions---its broad impact comes from
the fact that it is particularly effective in processing high-dimensional data.
The phenomenal success of deep learning in domains such as image and speech
processing is due to the fact that the specific task at hand is often governed by some low-dimensional characteristics
(e.g., labels) embedded in high-dimensional observations (e.g., images).
As we shall see in the examples in the sequel, the wireless communications
scenarios in which the data-driven optimization can be shown to substantially
outperform the traditional model-driven design are also precisely the situations in which
the problem instances have some low-dimensional structure and are observable
only through a limited number of high-dimensional outputs. In the communications setting,
the observations are typically the received pilots; the low-dimensional problem
structure is typically due to the sparsity of the underlying wireless channel.
The benefit of machine learning comes from bypassing the explicit modelling of
the channel structure and instead using a DNN to directly process the
high-dimensional received pilots to arrive at a desired communication action.
The remainder of this article uses three examples to illustrate the success of
machine learning in wireless applications.
\section{Capacity Maximization for Reconfigurable Intelligent Surface System}
\label{sec:RIS}
\begin{figure*}[t]
\centering
\includegraphics[width=0.89\linewidth]{fig/Fig2.pdf}
\caption{The deep learning framework for directly designing the multiuser beamformers and reflection coefficients based on the received uplink pilots for a downlink RIS-assisted multiuser system.}
\label{fig_ris:dnn_overall_arch}
\end{figure*}
Wireless channels are often high dimensional. This is the case for massive
MIMO systems in which the transmitters and the receivers are equipped with large
antenna arrays, and is also true of emerging devices such as a class of metasurfaces
known as RIS, which consists of a large number of reflecting elements and can be
dynamically reconfigured to refocus the electromagnetic waves to the intended receivers \cite{di2020smart}.
The physical electromagnetic propagation environment of a wireless channel
is also often sparse, especially as compared to the number of elements in
the antenna array or the reflective surface. This is because
the propagation characteristics typically only depend on a small number of scatterers,
and the number of propagation paths in the environment can be significantly less
than the number of transmit, receive, or reflecting elements.
On the other hand, due to the limited number of radio-frequency (RF) chains and the finite
pilot overhead, the available observations of the channel are typically limited.
How can we estimate a sparse high-dimensional channel through a limited number of
observations? The traditional approach is to take advantage of the channel
sparsity and to build a channel model with a small number of parameters, then
proceed with estimating the parameters of the channel based on the received
pilots, followed by optimizing the system according to the estimated channel.
How well such an approach works would depend on how well the model approximates
the actual channel. In this section, we advocate an alternative data-driven approach
that bypasses the explicit modelling stage and directly optimizes the system
using a neural network with the received pilots as inputs. We use the RIS as
an example in which explicit channel estimation is especially challenging, but
the proposed approach can be adopted equally well in many other scenarios, including the
conventional massive MIMO system.
A commonly used model for RIS is to regard it as a device consisting of a large
number of tunable elements that can reflect incoming signals with arbitrary
phase shifts. The goal is to dynamically reconfigure the phase shifts at the
RIS according to the channel realizations of the users in order to maximize a
system-level metric, e.g., the system downlink throughput. The problem is
that channel estimation is highly nontrivial for the RIS system. Assuming a
time-division duplex (TDD) system with channel reciprocity, channel estimation
can be done using uplink pilots. However, the large number of RIS
elements gives rise to a high-dimensional channel, which would require many
pilots to estimate. Further, even if the channel can be accurately estimated,
the optimization of the RIS coefficients is a highly complex and nonconvex
optimization problem, which is difficult to solve.
We show that the approach of using machine learning to directly map the
received pilots to the optimized RIS reflective coefficients can yield a
significant performance improvement as compared to the traditional channel
estimation based approach \cite{9427148}. The performance gain comes from the
fact that channel models are only an approximation of the reality and
that traditional channel estimation always needs to assume an estimation error
metric (such as the MSE), but such metric does not perfectly match
the system-level objective. This problem can be alleviated by bypassing the
modelling stage, by using the true system objective as the loss function, and
by training a neural network to directly output the optimized reflective
coefficients based on the received pilots. Essentially, the wireless channel
is now represented by the received pilots. The complexity of high-dimensional
optimization is shifted to the training stage, where a large number of
channel instances and the corresponding reflecting coefficients are processed
by the neural network so that it can produce a desired solution when a new
channel realization is observed.
Choosing the right architecture for the neural network turns out to be important.
For this application, we experimentally find that the best system-level performance is obtained
by adopting a graph neural network (GNN) \cite{xu2019powerful,lee2019graph,shen2020graph} that captures the spatial relationship
between the {base-station (BS)}, the RIS, and the users. The proposed approach
and the interpretations of the solutions are presented below.
\subsection{System Model and Problem Formulation}
Consider an RIS-assisted {MIMO system} with a BS equipped with $M$ antennas serving $K$ single-antenna users. An RIS consisting of $N$ elements is deployed between the BS and the users to enable a reflection link. Let $\mathbf h_k^{\rm d}\in\mathbb{C}^{M}$ denote the direct channel from the BS to {user $k$}, and $\mathbf h_k^{\rm r}\in\mathbb{C}^{N}$ denote the channel from the RIS to user $k$, and $\mathbf G\in\mathbb{C}^{M\times N}$ denote the channel from the RIS to the BS. We assume a block-fading channel model.
In the downlink, the BS sends the data symbol $s_k\in\mathbb{C}$ with $\mathbb{E}[|s_k|^2]=1$ to {user $k$} using a {beamforming} vector $\mathbf w_k\in\mathbb{C}^M$, which satisfies a total power constraint $\sum_{k=1}^K\|\mathbf w_k\|_2^2\le P_d$. The RIS reflection coefficients are denoted by $\mathbf v=[e^{j\omega_1},e^{j\omega_2},\cdots,e^{j\omega_N}]^\top$, where $\omega_i\in(-\pi,\pi]$ is the phase shift of the $i$-th element. Then, the received signal at user $k$ is represented as:
\begin{align}
r_k &= \sum_{j=1}^K(\mathbf h_k^{\rm d}+ \mathbf G\operatorname{diag}(\mathbf v)\mathbf h_k^{\rm r})^\top\mathbf w_j s_j + n_k \nonumber\\
&= \sum_{j=1}^K(\mathbf h_k^{\rm d}+ \mathbf A_k\mathbf v )^\top\mathbf w_j s_j + n_k,
\end{align}
where $\mathbf A_k = \mathbf G\operatorname{diag}(\mathbf h_k^{\rm r}) \in\mathbb{C}^{M\times N}$ denotes the cascaded channel from the BS to user $k$ through reflection at the RIS, and $ n_k\sim\mathcal{CN}( 0,\sigma_0^2)$ is the additive white Gaussian noise.
The $k$-th user's achievable rate $R_k$ is computed as:
\begin{equation}
R_k = \log_2\left(1+\frac{|(\mathbf h_k^{\rm d}+ \mathbf A_k\mathbf v )^\top\mathbf w_k|^2}{\sum_{i\neq k} |(\mathbf h_k^{\rm d}+ \mathbf A_k\mathbf v )^\top\mathbf w_i|^2+\sigma_0^2}\right).
\end{equation}
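For concreteness, the achievable rate expression above can be evaluated with a
few lines of Python; the array shapes and variable names below are our own
illustrative conventions and are not taken from \cite{9427148}.
\begin{verbatim}
# Per-user achievable rates for given channels, RIS
# coefficients v, and beamformers W (columns are the w_k's).
import numpy as np

def user_rates(h_d, A, v, W, sigma0_sq):
    # h_d: (K, M) direct channels; A: (K, M, N) cascaded
    # channels; v: (N,) unit-modulus RIS coefficients;
    # W: (M, K) beamformers.
    K = h_d.shape[0]
    rates = np.zeros(K)
    for k in range(K):
        h_eff = h_d[k] + A[k] @ v      # effective channel
        gains = np.abs(h_eff @ W) ** 2  # |h_eff^T w_j|^2
        sinr = gains[k] / (gains.sum() - gains[k] + sigma0_sq)
        rates[k] = np.log2(1.0 + sinr)
    return rates
\end{verbatim}
A network utility such as the sum rate or the minimum rate is then a simple
function of the returned vector.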
The overall problem is to maximize some network utility {function ${\mathcal U}(R_1,\ldots,R_K)$} by optimizing the beamforming vectors at the BS $\{\mathbf{w}_k\}_{k=1}^K$ and the RIS
reflection coefficients $\mathbf{v}$. Now, since the channel coefficients are not known,
we need to use a pilot training phase to learn the channel. Assuming
a TDD system with channel reciprocity, we let each user $k$ send an uplink pilot
sequence $x_k(\ell),$ $\ell=1,\ldots,L$, with $|x_k(\ell)|^2\le P_u$, to the BS.
Then, the received pilots at the BS can be denoted as:
\begin{equation}\label{eq:uplink}
\mathbf y(\ell)
=\sum_{k=1}^K\left(\mathbf h_k^{\rm d}+\mathbf A_k\mathbf{\tilde{v}}(\ell)\right)x_k(\ell)+\mathbf n(\ell),
\end{equation}
where $\mathbf{\tilde{v}}(\ell)$ is the vector of RIS reflection coefficients at the uplink transmission slot $\ell$ and can be thought of as part of the pilot, and
$\mathbf n(\ell)\sim\mathcal{CN}(\mathbf 0,\sigma_1^2\mathbf I)$ is the additive Gaussian noise. Denoting $\mathbf{Y} = [\mathbf y(1),\mathbf y(2),\cdots,\mathbf y(L)]\in\mathbb{C}^{M\times L}$ and $\mathbf W=[\mathbf w_1,\cdots,\mathbf w_K]$,
our goal is to design the downlink beamformers $\mathbf{W}$ and the reflection coefficients $\mathbf{v}$, based on the received uplink pilots $\mathbf{Y}$, which contains information about the channel.
This overall process can be thought of as solving the following optimization problem over the mappings
from $\mathbf{Y}$ to $(\mathbf{W}, \mathbf{v})$:
\begin{subequations}
\label{prob:formulation_ris}
\begin{align}
\underset{\begin{subarray}{c}
(\mathbf W, \mathbf v)= \mathcal{G}(\mathbf{Y})
\end{subarray}}{\operatorname{maximize}}\quad &\mathbb{E}\left[{\mathcal U}( R_1(\mathbf W,\mathbf v),\ldots, R_K(\mathbf W,\mathbf v) )\right] \\
\operatorname{subject~to}\quad& \displaystyle{\sum}_k\|\mathbf w_k\|_2^2\le P_d,\\
&|v_i| = 1,~~i=1,2,\ldots,N,
\end{align}
\end{subequations}
where the function $\mathcal{G}(\cdot):\mathbb{C}^{M\times L}\rightarrow \mathbb{C}^{M\times K}\times\mathbb{C}^{N}$ is the mathematical representation of the mapping to be optimized over, and the expectation is taken over the random channel realizations and the uplink noise.
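In the data-driven approach, each training example is simply a realization of
the received pilots together with the underlying channels. A minimal sketch of
how such examples can be generated according to the uplink pilot model in
\eqref{eq:uplink} is given below; the choice of constant pilot symbols and
random training reflection patterns $\mathbf{\tilde{v}}(\ell)$ is an
illustrative assumption of ours, not the scheme used in \cite{9427148}.
\begin{verbatim}
# Generate one training example: received pilots Y for a
# given channel realization (h_d, A).
import numpy as np

def received_pilots(h_d, A, L, P_u, sigma1_sq, rng):
    K, M, N = A.shape
    x = np.sqrt(P_u) * np.ones((K, L))        # pilot symbols
    v_tilde = np.exp(1j * rng.uniform(-np.pi, np.pi, (L, N)))
    Y = np.zeros((M, L), dtype=complex)
    for ell in range(L):
        for k in range(K):
            Y[:, ell] += (h_d[k] + A[k] @ v_tilde[ell]) * x[k, ell]
        Y[:, ell] += np.sqrt(sigma1_sq / 2) * (
            rng.standard_normal(M) + 1j * rng.standard_normal(M))
    return Y, v_tilde
\end{verbatim}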
Directly solving problem \eqref{prob:formulation_ris} is challenging, because it
involves optimizing over the high-dimensional mapping $\mathcal{G}(\cdot)$.
The conventional approach is to first estimate the
channels from the received pilots $\mathbf Y$, then to solve the subsequent
network utility maximization problem based on the estimated channel.
Instead, we propose a machine learning approach to directly learn such a mapping
using a GNN.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/Fig3.pdf}
\caption{Geographic layout of an RIS-assisted downlink system. \cite{9427148}}
\label{fig_ris:simulation_layout}\vspace{0.3cm}
\end{figure}
\begin{figure}[t]
\includegraphics[width=9.5cm]{fig/Fig4}
\caption{Sum rate versus pilot length for an RIS-assisted multiuser downlink system with an 8-antenna BS, a 100-element RIS, and 3 single-antenna users, comparing end-to-end deep learning approach to the conventional approach of channel estimation (CE) followed by RIS coefficients and BS beamforming optimization. \cite{9427148} }
\label{fig_ris:rate_vs_pilot}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Array response of the BS.]{ \includegraphics[width=7.7cm]{fig/Fig5a}\label{fig_ris:bs_array_respons_mu}}
\subfigure[Array response of the RIS.]{\includegraphics[width=7.8cm]{fig/Fig5b}\label{fig_ris:irs_array_respons_mu}}
\caption{The array response of the BS and the RIS obtained from the GNN for $N=100$ and $M=8$ for a 3-user system for maximizing the minimum rate.
The true $(\phi_3^\ast,\theta_3^\ast)$ are $(-1.176, -0.994)$, $(0,-0.980)$, and $(1.176,-0.994)$, respectively, for the 3 users. \cite{9427148}}\label{fig_ris:array_response}
\end{figure}
\subsection{Learning to Beamform and to Reflect}
The overall learning framework is shown in Fig.~\ref{fig_ris:dnn_overall_arch},
where the received pilots after matched filtering {$\{\tilde{\mathbf{Y}}_k\}_{k=1}^K$}
is the input to a neural network that learns the optimized reflection coefficients
$\mathbf v$ and the beamforming matrix $\mathbf W$ without the intermediary
channel estimation step.
The remaining key question is how to choose the neural network architecture.
In theory, a fully connected neural network can already learn the mapping from
the received pilots to the optimization variables.
However, a more efficient architecture is one that captures the structure of
the network utility maximization problem \eqref{prob:formulation_ris}.
Specifically, observe that in \eqref{prob:formulation_ris}, if the indices of
users permute, the optimal RIS coefficients $\mathbf v$ should remain the same,
while the optimal beamforming vectors $\{\mathbf w_k\}_{k=1}^K$ should permute
in the same manner. These properties are known as \emph{permutation invariance} and
\emph{permutation equivariance}.
It is possible to design a neural network to automatically
enforce these properties. This can be done using a GNN based on a graph
representation of the RIS and the users. The details of the GNN structure are
described in \cite{9427148}. The idea is to associate a representation vector $\mathbf{z}_k^d$
with each user and also with the RIS. The representation vectors are updated
layer-by-layer, but the connections between the layers are based on aggregation and combination operations that are invariant with respect to input permutation, e.g., the ${\rm mean}()$ or $\max()$ functions.
After multiple layer iterations, the node representation vectors
are mapped to the beamforming matrix $\mathbf W$ and
the RIS coefficients $\mathbf{v}$.
To make the architecture generalizable with respect to the number of users, the neural network weights across the users are tied together.
The overall neural network can be trained to maximize the network utility function.
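To give a flavor of how permutation equivariance can be built in, the
following PyTorch snippet sketches one simplified GNN-style update with shared
weights and mean aggregation; the actual layer widths, aggregation functions,
and RIS-node updates used in \cite{9427148} differ from this toy version.
\begin{verbatim}
# One simplified permutation-equivariant update for K user
# nodes and one RIS node with d-dimensional representations.
import torch
import torch.nn as nn

class SimpleGNNLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.f_user = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU())
        self.f_ris = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())

    def forward(self, z_users, z_ris):
        # z_users: (K, d); z_ris: (d,)
        agg = z_users.mean(dim=0)     # invariant to user order
        z_ris_new = self.f_ris(torch.cat([z_ris, agg]))
        ctx = torch.cat([agg, z_ris]).unsqueeze(0)
        ctx = ctx.expand(z_users.shape[0], -1)
        # Shared weights across users make the update equivariant.
        z_users_new = self.f_user(torch.cat([z_users, ctx], dim=1))
        return z_users_new, z_ris_new
\end{verbatim}
Permuting the rows of \texttt{z\_users} permutes the rows of the output in the
same way while leaving the RIS update unchanged, which is exactly the symmetry
of problem \eqref{prob:formulation_ris}.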
\subsection{Numerical Results}
To illustrate the performance of the machine learning approach for optimizing the
beamformers and the reflective coefficients, we report the simulation results\footnote{The code for this simulation is available at \url{https://github.com/taojiang-github/GNN-IRS-Beamforming-Reflection}}
in \cite{9427148} on a scenario with $M=8$ antennas at the BS, $N=100$ elements at the RIS, and $3$ users. The direct-link channel $\mathbf{h}_k^{\rm d}$ is assumed to be Rayleigh fading, and the BS-RIS and RIS-users channels are assumed to be Rician fading with Rician factor set as $10$. The geographic locations of the BS, RIS, and users are shown in Fig.~\ref{fig_ris:simulation_layout}.
The uplink pilot transmit power and the downlink data transmit power are respectively set to be $15$dBm and $20$dBm. The uplink and downlink noise power {are} $-100$dBm and $-85$dBm, respectively.
Fig.~\ref{fig_ris:rate_vs_pilot} plots the average sum rate versus pilot length for different approaches. As can be seen from Fig.~\ref{fig_ris:rate_vs_pilot}, the performance of the {linear minimum mean-squared-error (LMMSE)} channel estimation based method is able to
approach the perfect channel state information (CSI) baseline as the pilot length increases. However, the end-to-end deep learning method approaches the perfect CSI baseline much faster, showing that the GNN can utilize the pilots in a more efficient way.
We also provide the simulation results on the model-then-optimize approach in which a GNN is used for explicit channel estimation, and the beamforming matrix and RIS coefficients are optimized based on the estimated channel. While this method shows better performance as compared to the LMMSE based approach, its performance is still much worse than the GNN approach that directly learns the solution from the pilots. This shows the benefit of bypassing explicit channel estimation.
Moreover, additional information such as the locations of the users can be easily incorporated in the end-to-end deep learning framework, which can further improve the performance as shown in Fig.~\ref{fig_ris:rate_vs_pilot}.
The GNN produces interpretable solutions.
Fig.~\ref{fig_ris:array_response} shows the array responses
learned by the GNN for a maximizing minimum rate problem for three users
at different locations.
We observe from Fig.~\ref{fig_ris:irs_array_respons_mu} that the learned RIS
coefficients indeed focus the beams to the corresponding user locations, but
the three users get different focusing strengths. Interestingly, because the BS
beamformers and the RIS reflective coefficients are designed jointly, the user
corresponding to the weakest RIS focusing is compensated by a
stronger BS beamforming gain as seen in Fig.~\ref{fig_ris:bs_array_respons_mu}.
Thus, the combined channel strengths are equalized across the three users.
Overall, these results show that the GNN indeed is able to learn
interpretable solutions, based on much fewer pilots than the conventional strategies.
\section{Distributed Source Coding for Channel Estimation and Feedback in FDD Massive MIMO}
\label{sec:CSI_feedback}
The channel estimation problem is more challenging in the frequency-division
duplex (FDD) system, which cannot rely on channel reciprocity. In this case,
as shown in Fig.~\ref{fig_fdd:system}, the pilots are sent by the BS in the
downlink and are observed by the users. The users need to estimate their
channels, then send quantized versions of the channels through rate-limited feedback links to the BS, so that the BS can design a
precoding strategy to serve all the users. The conventional approach to this
problem relies on model-based channel estimation followed by independent
codebook-based quantization and feedback \cite{Love2008,Gao2019,Rao2014}. This is far from optimal. We show
here that machine learning techniques can be used to train a set of optimized
distributed source encoders together with a centralized decoder in an
end-to-end fashion in order to maximize a system-level objective. Such an
approach can significantly reduce the length of pilots needed to achieve the
maximum throughput in an FDD massive MIMO system.
The channel estimation and feedback design for a multiuser FDD massive MIMO
system can be thought of as a distributed source coding problem.
Distributed source coding is a long-standing information
theoretical problem in which distributed encoders compress their observations
for centralized reconstruction at the decoder. Here, the users are the
distributed source encoders who observe then quantize a noisy version of the
sources. The BS is the centralized source decoder, which aims to compute a
function of the sources.
The optimal design of distributed source encoders and decoder is highly nontrivial. Information theoretic optimal coding strategies involve concepts such as binning, which can be thought of as a multiuser codebook. While it is unlikely for a neural network to learn structured binning, it can help design good codebook-based quantization and feedback strategies that reap the benefit of distributed source coding. This is an example in which a data-driven approach can play an important role in designing short block-length quantization codes under rate constraints.
\begin{figure}[t]
\centering
\includegraphics[width=3.3in]{fig/Fig6.pdf}
\caption{An FDD massive MIMO system, in which the BS transmits the pilots, the users estimate their channels then feedback a quantized version of the channels to BS, and the BS designs the precoders based on the feedback from all the users. \cite{9347820}}\label{fig_fdd:system}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[]
{ \includegraphics[width=7.8cm]{fig/Fig7a.pdf}\label{fig_fdd:fdd_a}}\hspace{0.2cm}
\subfigure[]
{\includegraphics[width=9.2cm]{fig/Fig7b.pdf}\label{fig_fdd:fdd_b}}
\caption{Comparison between end-to-end design and conventional scheme in FDD downlink precoding problem.
(a) The FDD downlink precoding design problem can be viewed as a distributed source coding problem in which the downlink pilots and the feedback schemes adopted at the users can be thought of as the source encoders and the precoding scheme adopted at the BS can be thought of as the decoder;
(b) The conventional channel feedback scheme can be regarded as a separate source coding strategy of independent quantization of each user's channel.
In the machine learning approach, the feedback scheme at the user side and the precoding scheme at the BS side are replaced by DNNs that can be trained in an end-to-end fashion. \cite{9347820} }\label{fig_fdd:fig_fdd}
\end{figure*}
\subsection{System Model and Problem Formulation}
Consider an FDD multiuser MIMO system in which a BS equipped with $M$ antennas serves $K$ single-antenna users. Analogously to the previous section, we consider the downlink scenario in which the BS aims to communicate the data symbol $s_k \in \mathbb{C}$ with $\mathbb{E}[|s_k|^2]=1$ to user $k$ using a precoding vector $\mathbf w_k\in\mathbb{C}^M$, which satisfies a total power constraint $\sum_{k=1}^K\|\mathbf w_k\|_2^2\le P_d$. Assuming a narrowband block-fading channel model, the received signal at the $k$-th user in the data transmission phase can be written as:
\begin{equation}\label{eq_rx_sig}
r_k = \mathbf{h}_k^\top \mathbf{w}_k s_k + \sum_{i\not=k} \mathbf{h}_k^\top \mathbf{w}_ i s_i + z_k,
\end{equation}
where $\mathbf{h}_k \in \mathbb{C}^M$ is the channel between the BS and user $k$ and $z_k \sim \mathcal{CN}(0,\sigma_0^2)$ is the additive white Gaussian noise.
The achievable rate of user $k$ is given by:
\begin{equation}\label{eq:ratek}
R_k = \log_2\left(1 + \frac{\lvert \mathbf{h}_k^\top \mathbf{w}_k \rvert^2}{ \sum_{i\not=k} \lvert \mathbf{h}_k^\top \mathbf{w}_i \rvert^2+\sigma_0^2 } \right).
\end{equation}
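This rate expression can also be written directly as a differentiable loss for
end-to-end training; the tensor conventions in the short PyTorch sketch below
are our own.
\begin{verbatim}
# Negative sum rate as a differentiable training loss.
import torch

def neg_sum_rate(H, W, sigma0_sq):
    # H: (K, M) complex channels; W: (M, K) complex precoders.
    G = torch.abs(H @ W) ** 2        # G[k, j] = |h_k^T w_j|^2
    sig = torch.diagonal(G)
    interference = G.sum(dim=1) - sig
    rates = torch.log2(1.0 + sig / (interference + sigma0_sq))
    return -rates.sum()
\end{verbatim}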
The aim is to maximize a network utility function $\mathcal{U}(R_1,\ldots,R_K)$, which is a function of the precoding vectors $\{\mathbf{w}_k\}_{k=1}^{K}$. To design the optimal precoding vectors, the BS must first acquire the instantaneous CSI.
We consider a pilot phase for the FDD system in which the BS sends pilots ${\mathbf{X}}\in \mathbb{C}^{M\times L}$ of length $L$, and the $k$-th user receives ${\mathbf{y}}_k\in\mathbb{C}^{1\times L}$ as
\begin{equation}\label{eq_rx_pilot}
{\mathbf{y}}_k = \mathbf{h}_k^\top {\mathbf{X}} + {\mathbf{n}}_k,
\end{equation}
where the pilots in the $\ell$-th transmission satisfy the power constraint, i.e., $\|\mathbf{x}_\ell\|_2^2\leq P_d$ with $\mathbf{x}_\ell$ being the $\ell$-th column of $\mathbf{X}$, and ${\mathbf{n}}_k \sim \mathcal{CN}(\mathbf{0},\sigma_0^2\mathbf{I})$ is the additive white Gaussian noise at user $k$. Subsequently, the $k$-th user abstracts the useful information in the received pilots $\mathbf{y}_k$ for the purpose of multiuser downlink precoding, and feeds back that information to
the BS under a feedback constraint of $B$ bits, i.e.,
\begin{equation}\label{eq_feedback}
\mathbf{q}_k = \mathcal{F}_k\left( {\mathbf{y}}_k\right),
\end{equation}
where the function $\mathcal{F}_k: \mathbb{C}^{1\times L} \rightarrow \{\pm 1\}^B $ is the $k$-th user's feedback scheme. Finally, the BS designs the multiuser precoding matrix $\mathbf W=[\mathbf w_1,\cdots,\mathbf w_K]$ based on the feedback bits received from all $K$ users (i.e., $\mathbf{q} = [\mathbf{q}_1^\top,\mathbf{q}_2^\top,\ldots,\mathbf{q}_K^\top]^\top$), i.e.,
\begin{equation}\label{eq_decoding}
\mathbf{W} = \mathcal{P} \left( \mathbf{q} \right),
\end{equation}
where the function $\mathcal{P}: \{\pm 1\}^{KB} \rightarrow \mathbb{C}^{M\times K} $ denotes the multiuser downlink precoding scheme.
The overall problem formulation is therefore
\begin{subequations}
\begin{align}
\displaystyle{\Maximize_{{\mathbf{X}},\hspace{2pt}\{\mathcal{F}_k(\cdot)\}_{\forall k},\hspace{2pt}\mathcal{P}(\cdot)}} ~~ & \mathcal{U}(R_1,\ldots,R_K) \\
\text{subject to} \quad~~ & \mathbf{W} = \mathcal{P}\left(\left[\mathbf{q}_1^\top,\ldots, \mathbf{q}_K^\top \right]^\top \right),\\
~&\mathbf{q}_k = \mathcal{F}_k(\mathbf{h}_k^\top {\mathbf{X}} + {\mathbf{n}}_k), ~~\forall k, \\
~& \displaystyle{\sum}_k \|\mathbf w_k\|_2^2\le P_d,\\
~&\|\mathbf{x}_\ell\|^2_2\leq P_d, ~~\forall \ell,
\end{align}
\label{main_problem}
\end{subequations}
in which the training pilots ${\mathbf{X}}$, all $K$ users' feedback schemes $\{\mathcal{F}_k(\cdot)\}_{k=1}^{K}$, and the multiuser precoding scheme $\mathcal{P}(\cdot)$ can be designed to optimize the overall utility function of the system.
This problem can be viewed as a distributed source coding problem with the network
utility as the ``distortion'' metric,
because channel estimation and quantization are performed across $K$
distributed users, and the feedback bits from all $K$ users are centrally
processed at the BS for the purpose of designing the multiuser precoder,
as illustrated in Fig.~\ref{fig_fdd:fdd_a}.
Obtaining the optimal distributed source coding strategy by directly solving the optimization problem
\eqref{main_problem} is challenging. As shown in
Fig.~\ref{fig_fdd:fdd_b}, the conventional design of FDD massive
MIMO system is based on independent quantization and
feedback of the channel vector (or channel parameters) at each user. However,
such independent quantization and feedback approach is
quite suboptimal, especially in the short pilot regime.
In this section, we show that a deep learning approach can be used to design
a more efficient distributed source coding codebook
for the FDD massive MIMO systems.
\subsection{Learning Distributed Channel Estimation and Feedback}
The idea is to use DNNs to model the feedback scheme $\{\mathcal{F}_k(\cdot)\}_{k=1}^{K}$
and the multiuser precoding scheme $\mathcal{P}(\cdot)$ in Fig.~\ref{fig_fdd:fdd_a}.
The rest of this subsection briefly explains how we solve the overall optimization
problem \eqref{main_problem} by employing such a deep learning framework.
As the first step of the downlink training phase, the BS sends $L$ training pilots and the $k$-th user observes the pilots through its channel as ${\mathbf{y}}_k = \mathbf{h}_k^\top {\mathbf{X}} + {\mathbf{n}}_k$. Since the received signal $\mathbf{y}_k$ is a linear function of the channel $\mathbf{h}_k$, we can simply model it as the output of a single-layer neural network with linear activation function in which the input is the channel $\mathbf{h}_k$. In this single-layer neural network, the weight matrix is the pilot ${\mathbf{X}}$ and the bias vector is the noise vector $\mathbf{n}_k$.
To enforce the total power constraint on each pilot transmission, we adopt a weight constraint under which each column of ${\mathbf{X}}$ satisfies $\|\mathbf{x}_\ell\|^2_2 \le P_d$. It is worth mentioning that such a weight constraint is often used in the machine learning literature for regularization in order to reduce overfitting, e.g., \cite{hinton2012improving}. Here, we use the weight constraint to capture the physical constraint on the downlink power level of transmit antennas of a cellular BS.
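A minimal sketch of such a trainable pilot layer is shown below; representing
the complex pilot matrix by its real and imaginary parts and projecting each
column onto the power constraint are our own implementation choices, not
necessarily those of \cite{9347820}.
\begin{verbatim}
# Trainable pilot layer: y_k = h_k^T X + n_k, with each pilot
# column renormalized to satisfy ||x_l||^2 <= P_d.
import torch
import torch.nn as nn

class PilotLayer(nn.Module):
    def __init__(self, M, L, P_d):
        super().__init__()
        self.X_re = nn.Parameter(torch.randn(M, L))
        self.X_im = nn.Parameter(torch.randn(M, L))
        self.P_d = P_d

    def forward(self, H, sigma0_sq):
        # H: (K, M) complex channels of the K users.
        X = torch.complex(self.X_re, self.X_im)
        col_norm = torch.linalg.norm(X, dim=0)
        X = X / torch.clamp(col_norm / self.P_d ** 0.5, min=1.0)
        noise = (sigma0_sq / 2) ** 0.5 * torch.complex(
            torch.randn(H.shape[0], X.shape[1]),
            torch.randn(H.shape[0], X.shape[1]))
        return H @ X + noise    # (K, L) received pilots
\end{verbatim}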
At the user side, upon receiving ${\mathbf{y}}_k$, the user seeks to summarize the useful information in $\mathbf{y}_k$ and to feed back that information to the BS in a form of $B$ information bits. We can simply model this process by a DNN which maps $\mathbf{y}_k$ to feedback bits $\mathbf{q}_k$. To make sure that the final output of the DNN is in the form of binary bits, we use the sign activation function at the last layer of the user-side DNNs.
Finally, assuming an error-free feedback channel between each user and the BS,
the BS designs precoding vectors as a function of the received feedback bits from all $K$ users. We propose to use another DNN to map the received feedback bits $\mathbf{q}$ to the design of the multiuser precoding matrix $\mathbf{W}$. To ensure that the precoding matrix designed by the DNN satisfies the total power constraint, we employ a normalization layer at the last layer of the BS-side DNN.
The overall distributed source coding strategy is designed by training the
end-to-end deep learning framework to maximize the network utility using
stochastic gradient descent.
But care must be taken because the derivative of the sign activation
function is zero almost everywhere, so the conventional back-propagation method cannot be
directly used to train the overall network. It is possible to circumvent
this difficulty by adopting the straight-through approximation
in which the sign activation function is approximated by another smooth
differentiable function for the back-propagation step \cite{chung2016}.
By gradually tightening the approximation, we eventually arrive at a beamforming
codebook that maps the noisy version of the channels from all the users to an optimized
set of downlink beamformers.
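The following PyTorch sketch shows one common way of implementing the
binarization with a straight-through gradient, together with a BS-side output
layer that enforces the total power constraint. Note that the approach
described above anneals a smooth surrogate of the sign function rather than
using the plain identity-gradient trick below, and the layer sizes here are
arbitrary.
\begin{verbatim}
# Sign quantization with a straight-through gradient, and a
# BS-side precoding network with a power-normalization layer.
import torch
import torch.nn as nn

def binarize_ste(x):
    q = torch.sign(x)
    return x + (q - x).detach()  # forward: sign; backward: identity

class BSPrecoder(nn.Module):
    def __init__(self, K, B, M, P_d, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(K * B, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * M * K))
        self.M, self.K, self.P_d = M, K, P_d

    def forward(self, q):
        # q: (batch, K*B) feedback bits in {-1, +1}.
        out = self.net(q).view(-1, 2, self.M, self.K)
        W = torch.complex(out[:, 0], out[:, 1])
        norm = torch.linalg.norm(W.flatten(1), dim=1)
        return (self.P_d ** 0.5) * W / norm.view(-1, 1, 1)
\end{verbatim}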
\begin{figure}[t]
\centering
\includegraphics[width=9.5cm]{fig/Fig8}
\caption{Sum rate versus feedback rate constraint in a 2-user FDD massive MIMO system with $64$ antennas, sparse channels with 2 dominant paths, pilot length of $8$ and SNR of 10dB. \cite{9347820} }
\label{fig:L8}
\end{figure}
\subsection{Numerical Results}
We now present the performance evaluation of the end-to-end deep learning framework in a scenario where a BS with $M=64$ antennas serves $K = 2$ users in a mmWave propagation environment with $2$ dominant paths as reported in \cite{9347820}\footnote{The code for this simulation is available at \url{https://github.com/foadsohrabi/DL-DSC-FDD-Massive-MIMO}}. The fading coefficient of each path is modelled by a Gaussian random variable and the corresponding angle of departure is modelled by a uniform random variable in the range of $[-30^\circ,30^\circ]$. The signal-to-noise ratio (SNR) $P_d/\sigma_0^2$ is set to
$10$dB and the pilot length $L=8$.
Fig.~\ref{fig:L8} plots the average sum rate versus per-user feedback rate constraint $B$. It can be seen that the end-to-end deep learning framework with relatively low rate feedback links (i.e., about $15$ bits per user) can already outperform the maximum-ratio transmission (MRT) precoding baseline with full CSI. The MRT precoding design does not take the inter-user interference into account. This shows that the trained DNN has actually learned a precoding mechanism capable of alleviating inter-user interference in a multiuser FDD massive MIMO system.
Furthermore, we compare the performance of the end-to-end deep learning framework with that of the conventional design methodology based on channel estimation followed by linear precoding schemes such as zero forcing (ZF). For the channel estimation part of the conventional approach, two different methods are used: (i) a compressed sensing algorithm called orthogonal matching pursuit (OMP) and (ii) deep learning-based channel estimation method.
Fig.~\ref{fig:L8} shows that the end-to-end deep learning framework can achieve a significantly better performance as compared to the conventional channel estimation based design methodology (either when the channel estimation is implemented by OMP or by deep learning). This confirms the intuition that in practical massive MIMO systems in which the pilot length is much smaller than the number of antennas, the conventional approach of first estimating then quantizing the sparse channel parameters is quite suboptimal.
The end-to-end deep learning framework can achieve much better performance, because it is able to better explore the channel sparsity. It implicitly estimates the channel and designs the quantization codebooks jointly across the multiple users
in order to maximize an overall true system objective, i.e., the sum rate in this case.
\section{Active Sensing for mmWave Channel \\ Initial Alignment}
\label{sec:active_sensing}
Machine learning also has an important role to play in solving high-dimensional nonconvex optimization problems in sensing applications. To illustrate this point, we consider the mmWave initial alignment problem for a BS equipped with a
hybrid massive MIMO architecture, consisting of an analog beamformer and a
low-dimensional digital beamformer. The user transmits a sequence of pilot signals;
the BS makes a corresponding sequence of observations, via the analog beamformers, which it can
design, but the observations reside only in the low-dimensional digital domain.
The question is in which analog directions the BS should choose to observe, in a sequential manner,
in order to obtain the most accurate channel information for a communication or sensing task of interest.
Because the sensing direction in each stage can be designed as
a function of the previous observations, this is an active sensing problem for which
the analytic solution is highly nontrivial and the conventional codebook-based approach is
highly suboptimal \cite{alkhateeb2014channel,Tara2019Active}. Specifically, \cite{alkhateeb2014channel} proposes a bisection search algorithm to gradually narrow down the angle-of-arrival (AoA) range. However, the performance of the bisection algorithm is very sensitive to noise power,
so it is suitable for the high SNR scenario only.
To address this issue, \cite{Tara2019Active} proposes to select the next sensing vector from a predefined codebook based on the posterior distribution of the AoA. Further, \cite{9448070} eliminates the codebook constraint by directly mapping the posterior distribution to the next sensing vector using a DNN. However, as the computation of the posterior distribution is applicable only to the single-path channel model, the generalization of these ideas to the multipath channel is challenging.
Instead, we show that an excellent solution can be obtained by
training a DNN to learn the sensing direction in an end-to-end manner without needing
to compute the posterior.
Further, we explore the active nature of the problem and show that by using a
long short-term memory (LSTM) based architecture \cite{lstm}, the state representation
in each observation stage can be learned and be used to design the sensing
direction in the next stage. The results show that machine learning can offer
a significant advantage over the current state-of-the-art.
\subsection{System Model and Problem Formulation}
Consider a TDD mmWave system in which a BS equipped with $M$ antennas and a single RF chain serves a single-antenna user. The user transmits a sequence of pilots to the BS, and the BS seeks to estimate the channel or to design a subsequent downlink beamformer to maximize the beamforming gain, based on the received pilots. Due to the limited RF chain, the BS can only sense the channel through an analog beamformer (or combiner), but it can design the analog beamformers sequentially to sense different directions over time. Specifically, in time frame $t\in\{1,\ldots, T\}$, let $\mathbf w_t\in\mathbb{C}^M$ denote the sensing (i.e., combining) vector with $\|\mathbf w_t\|_2^2=1$ and let $x_t=\sqrt{P_u}$ be the pilot symbol, then the received pilot at the BS is given by:
\begin{align}
y_t = \mathbf{w}_t^{\top} \mathbf{h}x_t+n_t,
\end{align}
where $n_t\sim\mathcal{CN}(0,\sigma_1^2)$ is the effective noise, and $\mathbf{h}\in\mathbb{C}^{M}$ is the channel from the user to the BS. In a mmWave environment, the channel $\mathbf{h}$ is often sparse, and can typically be modelled in the form of a multipath channel as follows:
\begin{equation}
\mathbf{h} = \sum_{i=1}^{L_{\rm p}}\alpha_i \mathbf{a}({\phi}_i),
\end{equation}
where $L_{\rm p}$ is the number of paths, $\alpha_i\sim\mathcal{CN}(0,1)$ is the fading coefficient of the $i$-th path, $\phi_i\in[\phi_{\min},\phi_{\max}]$ is the AoA of the $i$-th path, and
$\mathbf{a}(\phi) = \left [ 1, e^{j {\pi } \sin{\phi} },..., e^{j(M-1){\pi } \sin{\phi}} \right]^\top$ is the array response vector.
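A realization of this multipath channel can be drawn with a few lines of
Python; the function below is a direct transcription of the model above, with
the number of paths and the AoA range left as free parameters.
\begin{verbatim}
# Draw one realization of the sparse multipath mmWave channel.
import numpy as np

def array_response(M, phi):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(phi))

def draw_channel(M, L_p, phi_min, phi_max, rng):
    alphas = (rng.standard_normal(L_p)
              + 1j * rng.standard_normal(L_p)) / np.sqrt(2)
    phis = rng.uniform(phi_min, phi_max, L_p)
    h = sum(a * array_response(M, p)
            for a, p in zip(alphas, phis))
    return h, alphas, phis
\end{verbatim}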
\begin{figure}[t]
\centering
{\includegraphics[width=\linewidth]{fig/Fig9}}
\caption{Active sensing for mmWave initial alignment at a BS with a single RF chain. The goal is to design the analog sensing beamformers $\mathbf{w}_t$ adaptively as a function of the previous observations over multiple sensing stages $t=1,\cdots,T$ for the purpose of maximizing a utility function, e.g., the eventual downlink transmission beamforming gain $|\mathbf{h}^\top \mathbf v|^2$ after the sensing stage. \cite{9724252} }
\label{Fig_sys_model_active}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.86\textwidth]{fig/Fig10}
\caption{An LSTM-based sequential learning architecture for solving an active sensing problem in mmWave initial alignment, in which the LSTM cells aim to summarize the system state based on the observations made so far, and a DNN is used to produce the analog combiner to be used in the next sensing stage. \cite{9724252} }
\label{fig:active_learning_entire}
\end{figure*}
Assuming a fixed total number of pilot stages $T$, the objective in the active sensing problem is to sequentially design the sensing beamformers $\{\mathbf{w}_t\}_{t=1}^T$ to maximize some utility function, i.e., $\mathcal{J}(\boldsymbol{\theta},\mathbf{v})$, where $\boldsymbol{\theta}=\{\alpha_i,\phi_i\}_{i=1}^{L_{\rm p}}$ contains all the channel parameters and $\mathbf{v}\in\mathbb{C}^V$ is the
parameter to be designed or estimated after receiving all the pilots.
For example, as illustrated in Fig.~\ref{Fig_sys_model_active}, $\mathbf{v}\in\mathbb{C}^M$ can be the subsequent downlink data transmission beamformer and the goal can be to maximize the beamforming gain, i.e., $\mathcal{J}(\boldsymbol{\theta},\mathbf{v}) := |\mathbf{h}^\top \mathbf v|^2$. In other applications, e.g., AoA-based localization, we might be interested in estimating the AoAs of the multipath channel, i.e., $\mathcal{J}(\boldsymbol{\theta},\mathbf{v}) := -\sum_{i=1}^{L_{\rm p}}(\hat{\phi}_i-\phi_i)^2$, where $\mathbf{v} := [\hat{\phi}_1,\cdots,\hat{\phi}_{L_{\rm p}}]^\top$. The key characteristic of such problems is that the sensing vector $\mathbf{w}_{t+1}$ can be designed based on the historical observations at stage $t$. Accordingly, the overall problem can be formulated as:
\begin{subequations}
\label{eq:problem_formulation_unsup}
\begin{align}
\Maximize_{\left\{\mathcal{G}_t(\cdot,\cdot)\right\}_{t=0}^{T-1},\hspace{1pt} \mathcal{F}(\cdot,\cdot) }& \mathbb{E}\left[ \mathcal{J}(\boldsymbol{\theta} ,\mathbf{v}) \right]\\
\text{subject to}\hspace{14pt} &\mathbf{w}_{t+1} = \mathcal{G}_t\left(\mathbf{y}_{1:t},\mathbf{w}_{1:t}\right),~ t=0,\ldots,T-1,\\
& \mathbf{v} = \mathcal{F}\left(\mathbf{y}_{1:T},\mathbf{w}_{1:T}\right),
\end{align}
\end{subequations}
where $\mathcal{G}_t: \mathbb{R}^{t} \times \mathbb{R}^{tM} \rightarrow \mathbb{R}^M$ is the adaptive sensing strategy adopted by the BS in time frame $t$ and $\mathcal{F}: \mathbb{R}^{T} \times \mathbb{R}^{TM} \rightarrow \mathbb{R}^V$ is the function for designing the vector $\mathbf{v}$.
The active sensing problem \eqref{eq:problem_formulation_unsup} is challenging
to solve, because both the active sensing strategy
$\left\{\mathcal{G}_t(\cdot,\cdot)\right\}_{t=0}^{T-1}$ and the mapping
$\mathcal{F}(\cdot,\cdot)$ are functions in high-dimensional spaces.
Moreover, the input dimension of the function
$\mathcal{G}_t(\cdot,\cdot)$ increases as the number of sensing stages
increases, making the sensing strategy particularly difficult to design when
$T$ is large. The conventional strategies are codebook based. For example,
a hierarchical beamforming codebook \cite{alkhateeb2014channel} can be designed
based on the principle of bisection as mentioned before. A posterior matching
based approach for sequentially selecting the appropriate analog combiners from
the hierarchical codebook is proposed in \cite{Tara2019Active}. But these approaches
are by no means optimal and are restricted to single-path channels. For the
multipath channel, nonadaptive sensing strategies which exploit the channel
sparsity are usually adopted \cite{alkhateeb2014channel}.
In this section, we show that instead of using a model-based approach, a
codebook-free data-driven approach can be used to design the analog
combiners to sense a multipath channel. Specifically, the sequential nature of
the problem suggests that a recurrent neural network (RNN) is an appropriate
network architecture. We show that a deep active sensing framework based on
the LSTM network, which is a variation of RNN, can be used to efficiently solve
the active sensing problem~\eqref{eq:problem_formulation_unsup}.
\subsection{Learning Active Sensing Strategy}
The proposed active sensing framework is as shown in Fig.~\ref{fig:active_learning_entire}.
It consists of $T$ deep active learning units, corresponding to $T$
different sensing stages. Each active sensing stage is designed based on an
LSTM cell and a fully connected DNN.
Specifically, in the $t$-th active sensing stage, the LSTM cell takes the previous cell state vector
$\mathbf{c}_{t-1}$, the previous hidden state vector $\mathbf{s}_{t-1}$, and
the current measurement ${y}_t$ as input, and outputs the next cell state
vector $\mathbf{c}_{t}$ and hidden state vector $\mathbf{s}_{t}$.
The LSTM cell is capable of automatically summarizing the previous
observations into state vectors. At each stage, a fully connected DNN then maps
the hidden state vector $\mathbf{s}_{t}$ to the sensing vector $\mathbf{w}_{t+1}$ to be used in the next sensing stage.
After receiving the last pilot symbol $y_T$, the LSTM cell updates its cell
state to $\mathbf{c}_T$, which is then mapped to the desired parameter
$\mathbf{v}$ using another DNN. This active sensing framework is trained
end-to-end to maximize the objective function in
\eqref{eq:problem_formulation_unsup}, with neural network weights tied together
across the sensing stages. Such an end-to-end training approach enables the
learning of an active sensing policy that accounts for the ultimate design or
estimation objective after the $T$ sensing stages.
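As a concrete illustration of this architecture, the following simplified PyTorch sketch implements one possible version of the active sensing loop of Fig.~\ref{fig:active_learning_entire}. It is our own illustration rather than the implementation of~\cite{9724252}: the layer sizes, the unit-norm normalization of the sensing vectors, the measurement model $y_t = \mathbf{w}_t^{\sf H}\mathbf{h} + n_t$, and the use of the beamforming gain $|\mathbf{h}^\top\mathbf{v}|^2$ as the training objective are all simplifying assumptions.
\begin{verbatim}
# Simplified sketch of the LSTM-based active sensing unit (our illustration;
# layer sizes, normalization and the measurement model are assumptions).
import torch
import torch.nn as nn

class ActiveSensingNet(nn.Module):
    def __init__(self, M=64, hidden=512, T=8):
        super().__init__()
        self.M, self.T, self.hidden = M, T, hidden
        # one LSTM cell shared across the T stages summarizes (y_1,...,y_t)
        self.cell = nn.LSTMCell(input_size=2, hidden_size=hidden)
        # DNN mapping the hidden state s_t to the next sensing vector w_{t+1}
        self.w_dnn = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2 * M))
        # DNN mapping the final cell state c_T to the design parameter v
        self.v_dnn = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2 * M))

    def to_unit_complex(self, x):
        re, im = x.chunk(2, dim=-1)
        w = torch.complex(re, im)
        return w / w.abs().pow(2).sum(-1, keepdim=True).sqrt()

    def forward(self, h, noise_std=1.0):
        B = h.shape[0]
        s = torch.zeros(B, self.hidden, device=h.device)
        c = torch.zeros(B, self.hidden, device=h.device)
        w = self.to_unit_complex(self.w_dnn(s))        # initial sensing vector
        for _ in range(self.T):
            y = (w.conj() * h).sum(-1)                 # y_t = w_t^H h (+ noise)
            y = y + noise_std * torch.randn_like(y)
            s, c = self.cell(torch.stack([y.real, y.imag], dim=-1), (s, c))
            w = self.to_unit_complex(self.w_dnn(s))    # w_{t+1}
        v = self.to_unit_complex(self.v_dnn(c))
        return (h * v).sum(-1).abs() ** 2              # beamforming gain |h^T v|^2

# end-to-end training: maximize the average gain over random channels
model = ActiveSensingNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    h = torch.randn(256, 64, dtype=torch.cfloat)       # stand-in channel batch
    loss = -model(h).mean()
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}
Training such a model end-to-end amounts to learning the mappings $\{\mathcal{G}_t\}_{t=0}^{T-1}$ and $\mathcal{F}$ in~\eqref{eq:problem_formulation_unsup} directly, with the LSTM state acting as a fixed-dimensional summary of the growing observation history.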
\subsection{Numerical Results}
\begin{figure}[t]
\centering
\includegraphics[width=9.3cm]{fig/Fig11}
\caption{Average beamforming gain in dB versus the number of sensing stages $T$ for different methods after beam alignment in a TDD mmWave system with
$M = 64$, $\operatorname{SNR} = 0$\,dB, $L_{\rm p}=3$, and $\phi_1,\phi_2,\phi_3 \in [-60^{\circ},60^{\circ}]$. \cite{9724252} }
\label{fig:sim_multi_AoA}
\end{figure}
To illustrate the performance of the active sensing framework, we now present
the simulation results\footnote{The code for this simulation is available at \url{https://github.com/foadsohrabi/DL-ActiveSensing}}
in \cite{9724252} for a downlink beamforming gain maximization
problem in a setting with $M=64$, $L_{\rm p}=3$, and $\text{SNR}=0$\,dB. The AoAs are randomly generated from the range $[-60^\circ,60^\circ]$. We compare the proposed active sensing method with a channel estimation based approach, as well as with designs that use a DNN to map the received pilots to the beamforming vector but keep the sensing beamformers fixed, either chosen at random or learned from the statistics of the channel. In Fig.~\ref{fig:sim_multi_AoA}, we see that the deep learning methods outperform the channel estimation based method with OMP, which shows the benefit of bypassing explicit channel estimation. The active sensing method achieves better performance than deep learning with fixed sensing vectors, which shows the benefit of adaptive sensing and the ability of the LSTM network to optimize the sensing vectors.
To see where the performance gain comes from, we examine the output of the LSTM
framework for an AoA estimation problem in a single-path channel, and plot the
posterior distribution of the AoA at each stage $t$ as well as the array
response of the sensing vectors designed by the LSTM and the DNN.
As can be seen from Fig.~\ref{fig:pos_lstm_tau12}, the posterior distribution
gradually converges to a distribution concentrated at the true AoA
$\phi=25.82^\circ$. Meanwhile, the array response of the sensing
vectors designed by the active sensing framework is relatively flat
across the angles at the beginning, indicating that the framework is exploring all
directions in searching for the AoA, but it gradually narrows down to the
direction of the true AoA as the sensing operation progresses. This shows that
the active sensing framework indeed learns a meaningful sensing
strategy that quickly converges to the true AoA. It is
remarkable that, although finding the truly optimal sensing vectors is extremely difficult
computationally, the LSTM framework is able to learn an excellent sensing
strategy by training over millions of channel instances.
\begin{figure}[t]
\centering
\includegraphics[width=0.226\textwidth]{fig/Fig12a.pdf}
\hspace{0.1cm}
\includegraphics[width=0.242\textwidth]{fig/Fig12b.pdf}
\caption{Posterior distributions of the AoA (left) and the beamforming patterns of the sensing vectors (right) learned from the proposed active sensing framework over $8$ stages in a mmWave alignment problem for a single-path channel where $\operatorname{SNR}=0$\,dB, $M = 64$, and $T = 12$. \cite{9724252} }
\label{fig:pos_lstm_tau12}
\end{figure}
\section{Standardization Impact}
While the experimental results reported in this article are generated using widely accepted wireless channel propagation models, so that the proposed framework should be regarded as a proof of concept rather than as field tested, the wireless communications standardization bodies have recognized the potential of machine learning
techniques in future cellular networks and have taken steps toward standardizing the communication protocols between the BS and the users in order to enable learning-based system-level optimization.
Specifically, the 3rd Generation Partnership Project (3GPP) has recognized channel estimation and feedback, mmWave beam management, and positioning as the three initial areas where machine learning can have a significant impact \cite{3GPP_RP-213599}.
One of the target scenarios related to CSI feedback enhancement that 3GPP aims to study is CSI compression and feedback for FDD massive MIMO systems, where a wireless device has already obtained the entire high-dimensional channel matrix and needs to compress this CSI and feed it back to the BS. Such a CSI acquisition process can be modelled by an autoencoder consisting of a DNN encoder at the device and a DNN decoder at the BS. In particular, the DNN encoder first maps the high-dimensional channel to a low-dimensional quantized signal, the compressed signal is then sent to the BS via the uplink feedback channel, and finally the BS reconstructs the channel using the DNN decoder.
The goal of the DNNs here is to capture the spatial-domain and frequency-domain correlations in the channel matrix, so convolutional neural networks (CNNs) are an excellent candidate for the autoencoder architecture. Preliminary results reported by different companies suggest that machine learning can outperform the existing 5G codebook-based CSI compression methods, e.g., \cite{3GPP_R1-2204238}.
This use-case is closely related to the CSI estimation and feedback problem studied in Section \ref{sec:CSI_feedback}.
The second use-case is about beam management procedure (e.g., alignment) to find the best transmit-receive beam pair. The conventional practical beam management is based on exhaustive beam sweeping. While such linear beam search strategies lead to excellent performance, they suffer from significant time delay and power consumption issues. To address these concerns, sparse beam sweeping has been introduced in 3GPP \cite{3GPP_R1-2203142} in which a beam pair is selected by employing multiple-stage beam narrowing strategies. However, the existing algorithms developed for sparse beam sweeping are quite suboptimal, especially in higher frequency bands (FR2) and with high-mobility users. Data-driven methods, on the other hand, can utilize the training data sets to construct a mapping from sparse beam measurements to the best beam pair. This use-case is closely related to the initial beam alignment problem addressed in Section \ref{sec:active_sensing} using a deep active sensing approach. The use-case can actually be thought of as a non-active version of the problem.
Accurate positioning is a crucial component in several 5G industrial internet of things (IoT) applications such as smart factories and is another promising area for data-driven designs. The traditional model-based positioning relies on explicit mappings from timing/angle measurements to the position of the user.
These mappings are effective when there are multiple line-of-sight (LoS) paths between the target user and different reception points of the BS. But, practical industrial applications usually have to deal with non-line-of-sight (NLoS) conditions.
In these scenarios, the traditional model-based approach is not always feasible. Learning-based methods are promising solutions for these difficult positioning tasks since they can easily learn a good mapping from the radio measurements to the position by discerning patterns in the available training data sets. Preliminary results already show significant positioning accuracy enhancement over the conventional methods, e.g., \cite{3GPP_R1-2203901}.
While this article has not addressed the localization problem specifically, the techniques presented are quite applicable to localization \cite{david_GC}.
\section{Conclusion}
In conclusion, the modern machine learning approach is opening new opportunities in the optimization of physical-layer wireless communication systems. It challenges the conventional wisdom of always first modelling the channel, then optimizing wireless system design given the estimated channel. This article shows that much can be gained by bypassing explicit channel modelling, by designing the overall system in an end-to-end manner, and by formulating and solving optimization problems in a data-driven fashion.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:Introduction}
Top-quark measurements have entered a~high-precision era at the Large Hadron Collider (LHC) where the cross-sections for single top-quark and top-quark pair (\ttbar{}) production at a~center-of-mass energy $\sqrt{s}=7\,{\rm TeV}$ are factors of 40 and 20 higher than at the Tevatron. The large number of $t\bar{t}$ events makes it possible to measure precisely the $\ttbar{}$ production cross-sections differentially, providing precision tests of current predictions based on perturbative Quantum Chromodynamics (QCD). The top quark plays an important role in many theories beyond the Standard Model (SM)~\cite{MochUwer2008} and differential measurements have been proposed to be sensitive to new-physics effects~\cite{Frederix:2009}.
The inclusive cross-section for \ttbar{} production (\ensuremath{\sigma_{\ttbar}}\xspace) in proton--proton ($pp$) collisions at a~center-of-mass energy $\sqrt{s}=7\,{\rm TeV}$ has been measured by both the ATLAS and CMS experiments with increasing precision in a~variety of channels~\cite{atlasXsec1,atlasXsec2,atlasXsec3,cmsXsec1,cmsXsec2,cmsXsec3,cmsXsec4}. The CMS Collaboration has published~\cite{cmsDiff} differential cross-sections using the full dataset collected in 2011 at $\sqrt{s}=7$ \TeV{} and corresponding to an integrated luminosity of 5.0 fb$^{-1}$. The ATLAS Collaboration has published~\cite{atlasDiff} the differential cross-sections as a~function of the mass ($\ensuremath{m_{\ttbar}}$), the transverse momentum ($\ensuremath{\pt^{\ttbar}}$), and the rapidity ($\ensuremath{y_{\ttbar}}$) of the $\ttbar$ system with a~subset of the data collected in 2011 at $\sqrt{s}=7$ \TeV{} corresponding to an integrated luminosity of 2.05 fb$^{-1}$. The measurements shown here improve the statistical precision of the previous ATLAS results by including the full 2011 dataset (\mbox{4.6\,fb$^{-1}$}). Furthermore, improved reconstruction algorithms and calibrations are used, thereby significantly reducing the systematic uncertainties affecting the measurements. The rapidity distribution is symmetrized and presented as $\ensuremath{\left|y_{\ttbar}\right|}\xspace$ and in addition to the variables previously shown, this paper also presents a~measurement of the cross-section as a~function of the top-quark transverse momentum ($\ensuremath{\pt^t}$).
In the SM, the top quark decays almost exclusively into a~\Wboson{} boson and a~$b$-quark. The signature of a~\ttbar{} decay is therefore determined by the \Wboson{} boson decay modes. This analysis makes use of the lepton$+$jets decay mode, where one \Wboson{} boson decays into an electron or muon and a~neutrino and the other \Wboson{} boson decays into a~pair of quarks, with the two decay modes referred to as the $e$+jets and $\mu$+jets channel, respectively. Events in which the \Wboson{} boson decays to an electron or muon through a~$\tau$~decay are also included.
Kinematic reconstruction of the \ttbar{} system is performed using a~likelihood fit. The results are unfolded to the parton level after QCD radiation, and the normalized differential cross-section measurements are compared to the predictions of Monte Carlo (MC)
generators and next-to-leading-order (NLO) QCD calculations. The \ensuremath{\pt^t}{}, \ensuremath{m_{\ttbar}}{} and \ensuremath{\pt^{\ttbar}}{} spectra are also compared to NLO QCD calculations including next-to-next-to-leading-logarithmic (NNLL) effects, namely Ref.~\cite{NNLO_calc} for \ensuremath{\pt^t}, Ref.~\cite{nnloMtt} for \ensuremath{m_{\ttbar}}{} and Ref.~\cite{PhysRevLett.110.082001,PhysRevD.88.074004} for \ensuremath{\pt^{\ttbar}}{}.
The paper is organized as follows. Section~\ref{sec:Detector} briefly describes the ATLAS detector, while Secs.~\ref{sec:DataSamples} and~\ref{sec:Simulation} describe the data and simulation samples used in the measurements. The reconstruction of physics objects, the event selection and the kinematic reconstruction of the events are explained in Sec.~\ref{sec:EventReco}. Section~\ref{sec:BackgroundDetermination} discusses the background processes affecting these measurements. Event yields for both the signal and background samples, as well as distributions of measured quantities before unfolding, are shown in Sec.~\ref{sec:YieldsAndPlots}. The measurements of the cross-sections, including the unfolding and combination procedures, are described in Sec.~\ref{sec:XSDetermination}. Statistical and systematic uncertainties are discussed in Sec.~\ref{sec:Uncertainties}. The results are presented in Sec.~\ref{sec:Results} and the comparison with theoretical predictions is discussed in Sec.~\ref{sec:Interpretation}.
\section{The ATLAS Detector}\label{sec:Detector}
The ATLAS detector~\cite{atlasDetector3} is cylindrically symmetric and has a~barrel and two endcaps
~\footnote{ATLAS uses a~right-handed coordinate system with its origin at the nominal interaction point (IP) in the center of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the center of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the beam pipe. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta=-\ln\tan(\theta/2)$. The distance in $\eta$--$\phi$ coordinates is $\Delta R=\sqrt{(\Delta\eta)^2+(\Delta\phi)^2}$, also used to define cone radii.}.
The inner detector (ID) is nearest to the interaction point and contains three subsystems providing high-precision track reconstruction: a~silicon pixel detector (innermost), a~silicon microstrip detector, and a~transition radiation tracker (outermost), which also helps to discriminate electrons from hadrons. The ID covers a~range of $|\eta|<2.5$. It is surrounded by a~superconducting solenoid, which produces a~2\,T axial field within the ID. Liquid argon (LAr) sampling electromagnetic (EM) calorimeters cover $|\eta|<4.9$, while the hadronic calorimeter uses scintillator tiles within $|\eta|<1.7$ and LAr within $1.7<|\eta|<4.9$. The outermost detector is the muon spectrometer, which employs three sets of air-core toroidal magnets with eight coils each and is composed of three layers of chambers for triggering ($|\eta| <$ 2.4) and precision track measurements ($|\eta| <$ 2.7).
The trigger is divided into three levels referred to as Level 1 (L1), Level 2 (L2), and Event Filter (EF). The L1 trigger uses custom-made hardware and low-granularity detector data. The L2 and EF triggers are implemented as software algorithms. The L2 trigger has access to the full detector granularity, but only retrieves data for regions of the detector identified by L1 as containing interesting objects, while the EF system utilizes the full detector readout to reconstruct an event.
\section{Data Sample} \label{sec:DataSamples}
The dataset used in this analysis was recorded during $pp$ collisions at $\sqrt{s}=7\,{\rm TeV}$ in 2011. It only includes data recorded with stable beam conditions and with all relevant subdetector systems operational. The number of $pp$ collisions per bunch crossing significantly increased during the data taking, reaching mean values up to 20 in the last part of the 2011 LHC run.
Single-muon and single-electron triggers were used to select the data. The single-muon trigger required at least one muon with transverse momentum (\pt{}) of at least $18\,$GeV and the single-electron trigger required at least one electron with a~\pt{} threshold of either $20$ or $22\,$GeV. The \pt{} threshold increased during data taking to cope with increased luminosity. With these requirements the total integrated luminosity of the dataset is \mbox{4.6\,fb$^{-1}$}{} with an uncertainty of 1.8\%~\cite{lumi2011}.
\section{Simulation} \label{sec:Simulation}
Simulated \ttbar{} events with up to five additional light partons were generated using {\sc Alpgen}\xspace{}~\cite{ALPGEN} (v2.13) with the leading-order (LO) CTEQ6L1~\cite{cteq6l1} parton distribution functions (PDF). {\sc Herwig}\xspace{}~\cite{HERWIG} (v6.520) was used for parton showering and hadronization and {\sc Jimmy}\xspace{}~\cite{JIMMY} (v4.31) was used for the modeling of multiple parton interactions. The ATLAS AUET2 tune~\cite{tunesAUET2} was used for the simulation
of the underlying event.
The {\sc Alpgen}\xspace{} generator uses tree-level matrix elements with a~fixed number of partons in the final state, with the MLM matching scheme~\cite{Mangano:2001xp} to avoid double counting between partons created in the hard process or in the subsequent parton shower.
Two other generators, which make use of NLO QCD matrix elements with the NLO CT10 PDF~\cite{CT10}, are used for comparisons with the final measured results, namely {\sc MC{@}NLO}\xspace{}~\cite{MCATNLO} (v4.01) and {\sc Powheg}\xspace{}~\cite{POWHEGBOX} ({\sc Powheg}\xspace{}-hvq, patch4). Both are interfaced to {\sc Herwig}\xspace{} and {\sc Jimmy}\xspace{} with the ATLAS AUET2 tune. The {\sc MC{@}NLO}\xspace{} generator is also used for the evaluation of systematic uncertainties along with additional generators and simulation samples discussed in Sec.~\ref{sec:signalmodeling}.
As an additional comparison the {\sc Powheg}\xspace{} generator is also interfaced to {\sc Pythia}\xspace{}6~\cite{Sjostrand:2006za},
with the Perugia 2011C tune~\cite{perugia}.
All of the simulation samples were generated assuming a~top-quark mass, $m_t$, equal to $172.5\,$GeV. The \ttbar{} samples are normalized to
a~cross-section of $\ensuremath{\sigma_{\ttbar}}\xspace = 167^{+17}_{-18}$~pb, obtained from approximate NNLO QCD calculations~\cite{HATOR} for $pp$
collisions at $\sqrt{s} = 7 \tev$, again using $m_t=172.5\,$GeV. During the completion of this analysis, a~calculation of the inclusive cross-section to full NNLO precision with additional NNLL corrections was published~\cite{Czakon:2013goa} and gives a~cross-section of $\sigma_{t\bar{t}} = 177.3^{+11.5}_{-12.0}$~pb at $\sqrt{s} = 7 \tev$ for the same top-quark mass.
This change would only affect the results presented here by increasing the normalization of the dilepton \ttbar{} background. The corresponding effect on the final results would be at the sub-percent level and is covered by the assigned systematic uncertainties.
Single top-quark events produced via electroweak interactions were simulated using the {\sc AcerMC}\xspace{} generator~\cite{ACERMC} (v3.8) interfaced to {\sc Pythia}\xspace{}6 with the MRSTMCal PDF~\cite{MRST2007LO} for the $t$-channel process and {\sc MC{@}NLO}\xspace{} for the $s$-channel and $\Wboson t$-channel processes. The production of \Wboson{}/\Zboson{} bosons in association with jets (\Wboson+jets or \Zboson+jets) was simulated using {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{}. \Wboson{}+jets events containing heavy-flavor quarks ($Wbb$+jets, $Wcc$+jets, and $Wc$+jets) were generated separately using leading-order matrix elements with massive $b$- and $c$-quarks. An overlap-removal procedure was used to avoid double counting of heavy-flavor quarks between the matrix element and the parton shower evolution. Diboson events (\Wboson{}\Wboson{}, \Wboson{}\Zboson{}, \Zboson{}\Zboson{}) were generated using {\sc Herwig}\xspace{} with the MRSTMCal PDF.
All the simulation samples account for multiple $pp$ interactions per bunch crossing (pile-up), including both the in-time (additional collisions within the same bunch crossing) and out-of-time (collisions from neighboring bunch crossings) contributions, using {\sc Pythia}\xspace{}6 and the ATLAS AMBT2B CTEQ6L1 tune~\cite{atlastune2} to simulate minimum bias events. The events were reweighted so that the distribution of the average number of interactions per bunch crossing matches that observed in the data. The samples were processed through the GEANT4~\cite{GEANT4} simulation of the ATLAS detector~\cite{ATLASsim} and the standard ATLAS reconstruction software. Simulated events were corrected so that the trigger efficiency and physics object identification efficiencies, energy scales and energy resolutions match those determined in data control samples, with the exception of the electrons and jets, the energies of which were scaled in data to match the simulation.
\section{Event Reconstruction} \label{sec:EventReco}
The lepton+jets \ttbar{} decay mode is characterized by a~high-\pt{} lepton, two jets originating from $b$-quarks, two jets from the hadronic \Wboson{} boson decay, and missing transverse momentum due to the neutrino.
\subsection{Object Reconstruction and Identification}\label{sec:ObjectDef}
Primary vertices in the event are formed from reconstructed tracks such that they are spatially compatible with the luminous interaction region. The hard-scatter primary vertex is chosen to be the vertex with the highest $\sum \pt^2$ where the sum extends over all associated tracks with $\pt > 0.4\,$GeV.
The same electron definition as was used in the \ttbar{} cross-section measurement with 2010 data~\cite{atlas_ttxsec_2010} is adopted in this analysis, but optimized for the higher pile-up conditions of the 2011 data~\cite{Aad:2014fxa}. Strict quality requirements are applied to the shape of the energy deposition in the EM calorimeters and to the electron track variables~\cite{atlasElecPerf}.
The resulting electron candidates are required to have transverse energy $\ET>25 \,$GeV and $|\eta_{\rm cluster}|< 2.47$, where $|\eta_{\rm cluster}|$ is the pseudorapidity of the EM cluster associated with the electron. In order to ensure high-quality reconstruction, candidates in the transition region between the barrel and endcap calorimeters, $1.37 < |\eta_{\rm cluster}| < 1.52$, and candidates matching the criteria for converted photons are rejected.
Muon candidates are reconstructed by combining track segments in different layers of the muon chambers~\cite{muonReso,Aad:2014zya}.
Such segments are assembled starting from the outermost layer, with a~procedure that takes material effects into account, and are then matched with tracks found in the ID. The candidates are then re-fitted using all hits from both the muon spectrometer and the ID, and are required to have $\pt >25\,$GeV and $|\eta|<2.5$.
Electron and muon candidates are required to be isolated in order to reduce the backgrounds from hadrons mimicking lepton signatures and leptons from heavy-flavor decays.
For electrons, the isolation requirements are similar to the ones tuned for 2010 data~\cite{PUB2011006:elecperf} but optimized for the 2011 running conditions. The total transverse energy deposited in the calorimeter, in a~cone of size $\Delta R = 0.2$ around the electron candidate, is considered.
The energy associated with the electron is subtracted, and corrections are made to account for the energy deposited by pile-up interactions. An analogous isolation requirement is applied using the sum of track \pt{} (excluding the electron track) in a~cone of $\Delta R = 0.3$ around the electron direction.
Isolation requirements on both the transverse energy and momentum are tuned as a~function of $\eta_{\rm cluster}$ and $\ET$ in order to ensure a~uniform $90\%$ efficiency for electrons from $Z\to ee$ decays satisfying the electron definition described above.
For muon candidates, after subtracting the contributions from the muon itself, the total energy deposited in the calorimeter in a~cone of size $\Delta R = 0.2$ around the muon direction is required to be below $4\,$GeV and the sum of track transverse momenta for tracks with $\pt>1\,$GeV in a~cone of $\Delta R = 0.3$ around the muon direction is required to be below $2.5\,$GeV. The above set of cuts has an efficiency of $88\%$ for simulated \ttbar{} signal events in the {$\mu+$jets}\xspace{} channel with a~negligible dependence on the pile-up conditions.
Jets are reconstructed from topological clusters~\cite{Lampl:2008zz} of energy depositions using the anti-$k_{t}$ algorithm~\cite{akt1} with a~radius parameter of $R=0.4$. The jet energy is first corrected for pile-up effects and then to the hadronic scale corresponding to the particle-level jets using energy and $\eta$-dependent correction factors derived from simulation~\cite{jer_2}. The energies of jets in data are further corrected, using in situ measurements, to match simulation~\cite{jes:2013}. Only jets with $\pt >25\,$GeV and $|\eta|<2.5$ are considered in the analysis. To suppress jets from in-time pile-up, the jet vertex fraction, defined as the sum of the \pt{} of tracks associated with the jet and originating from the primary vertex divided by the sum of the \pt{} from all tracks associated with the jet, is required to be greater than~0.75.
The missing transverse momentum vector, {\bf \ensuremath{\ET^{\rm miss}}{}}, is derived from the vector sum of calorimeter cell energies within $|\eta| < 4.9$ and corrected on the basis of the dedicated calibrations of the associated physics objects~\cite{atlasEtmisPerf}, including muons.
Calorimeter cells containing energy depositions above noise and not associated with high-\pt{} physics objects (referred to as the unassociated-cell term) are also included.
The identification of \ttbar{} events is improved by tagging jets originating from $b$-quarks using a~combination of three $b$-tagging algorithms~\cite{btag}.
The results of the three taggers are combined using a~neural network resulting in a~single discriminating variable. The combined tagger operating point chosen for this analysis corresponds to a~tagging efficiency of 70\% for $b$-jets in simulated \ttbar{} events, while $c$-jets are suppressed by a~factor of five and light-flavor- and gluon-initiated jets are suppressed by a~factor of about 100.
\subsection{Event Selection}\label{sec:EventSelection}
Events are first required to pass either a~single-electron or single-muon trigger and the hard-scatter primary vertex is required to be constructed from at least five tracks with $p_{\rm T} > 0.4\,$GeV.
Leptons and jets are required to be well separated from each other to minimize ambiguities, background and systematic uncertainties.
First, jets within $\Delta R = 0.2$ of an electron satisfying the requirements described in Sec.~\ref{sec:ObjectDef},
but with the $p_{\rm T}$ threshold lowered to $15\,$GeV, are removed. If another jet is found within $\Delta R = 0.4$ of the electron, the electron is discarded. Finally, muons within $\Delta R = 0.4$ of the axis of a~jet are removed.
Events are required to contain exactly one isolated lepton and this lepton is required to have fired the trigger. Four or more jets where at least one jet is $b$-tagged are also required. In addition, events must satisfy $\ensuremath{\ET^{\rm miss}} > 30\,$GeV and $\ensuremath{m_{\mathrm{T}}^W} > 35\,$GeV, where $\ensuremath{\ET^{\rm miss}}$ is the magnitude of the missing transverse momentum vector {\bf \ensuremath{\ET^{\rm miss}}{}} and the \Wboson{} boson transverse mass, $\ensuremath{m_{\mathrm{T}}^W}$, is defined as
\begin{equation}
\ensuremath{m_{\mathrm{T}}^W} = \sqrt{2p_{\rm T}^{\ell}p_{\rm T}^{\nu}(1-\cos(\phi^{\ell}-\phi^{\nu}))}\,,
\end{equation}
where $p_{\rm T}^{\ell}$ and $\phi^{\ell}$ are, respectively, the transverse momentum and the azimuthal angle of the lepton, $p_{\rm T}^{\nu}$ is identified at the reconstruction level with \ensuremath{\ET^{\rm miss}}{} and $\phi^{\nu}$ is the azimuthal angle of {\bf \ensuremath{\ET^{\rm miss}}{}}.
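As a simple illustration (ours, not part of the ATLAS analysis software), the \Wboson{} transverse mass and the corresponding selection requirements can be computed as follows; the numbers in the example are arbitrary.
\begin{verbatim}
# Illustrative only: the W transverse mass defined above and the
# E_T^miss > 30 GeV, m_T^W > 35 GeV requirements of the event selection.
import math

def w_transverse_mass(pt_lep, phi_lep, met, phi_met):
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(phi_lep - phi_met)))

def passes_met_mtw_cuts(pt_lep, phi_lep, met, phi_met):
    return met > 30.0 and w_transverse_mass(pt_lep, phi_lep, met, phi_met) > 35.0

# a lepton with pT = 40 GeV back-to-back with E_T^miss = 35 GeV gives
# m_T^W = sqrt(2*40*35*2) ~ 74.8 GeV, so both requirements are satisfied
print(passes_met_mtw_cuts(40.0, 0.0, 35.0, math.pi))
\end{verbatim}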
\subsection{Kinematic Reconstruction of the \ttbar{} System} \label{sec:TopSystemReconstruction}
A~kinematic likelihood fit~\cite{KLFit:2013} is used to fully reconstruct the $\ttbar$ kinematics. The algorithm relates the measured kinematics of the reconstructed objects (lepton, jets and {\bf \ensuremath{\ET^{\rm miss}}{}}) to the leading-order representation of the $\ttbar$ system decay. The event likelihood ($\mathscr{L}$) is constructed as the product of Breit--Wigner (BW) distributions and transfer functions (TF)
\begin{equation}
\begin{split}
\mathscr{L} \equiv {\rm TF}(\tilde{E}^{\ell},E^{\ell}) &\cdot \left( \prod_{i=1}^4 {\rm TF}(\tilde{E}_{{\rm jet}~i}, E_{{\rm quark}~i}) \right) \\
&\cdot\, {\rm TF}(E_x^{\rm miss}|p_x^{\nu}) \cdot \, {\rm TF}(E_y^{\rm miss}|p_y^{\nu}) \\
&\cdot \, {\rm BW}(m_{jj}|m_W) \cdot \, {\rm BW}(m_{\ell \nu}|m_W) \\
&\cdot \, {\rm BW}(m_{jjj}|m_{t}) \cdot {\rm BW}(m_{\ell \nu j}|m_{t})\,,
\end{split}
\end{equation}
where the Breit--Wigner distributions associate the {\bf \ensuremath{\ET^{\rm miss}}{}}, lepton, and jets with \Wboson{} bosons and top quarks, making use of their known widths and masses. The top-quark mass used is $172.5\,$GeV. The transfer functions, derived from the {\sc MC{@}NLO}\xspace{}+{\sc Herwig}\xspace{} simulation of the \ttbar{} signal, represent the experimental resolutions in terms of the probability that the observed energy at reconstruction level ($\tilde{E}$) is produced by a~parton-level object with a~certain energy $E$. Transverse energy is used to parameterize the muon momentum resolution while lepton energy is used in the electron channel.
The missing transverse momentum is used as a~starting value for the neutrino \pt{}, with its longitudinal component ($p_z^{\nu}$) as a~free parameter in the kinematic likelihood fit. Its starting value is computed from the \Wboson{} mass constraint. If there are no real solutions for $p_z^{\nu}$ then zero is used as a~starting value. Otherwise, if there are two real solutions, the one giving the larger likelihood is used. The five highest-$\pt$ jets (or four if there are only four jets in the event) are used as input to the likelihood fit and the best four-jet combination is selected.
The likelihood is maximized as a~function of the energies of the $b$-quarks, the quarks from the hadronic \Wboson{}~boson decay, the charged lepton, and the components of the neutrino three-momentum. The maximization is performed by testing all possible permutations, assigning jets to partons. The likelihood is combined with the probability for a~jet to be $b$-tagged, given the parton from the top-quark decay it is associated with, to construct an event probability. The $b$-tagging efficiencies and rejection factors are used to promote permutations for which a~$b$-tagged jet is assigned to a~$b$-quark and penalize those where a~$b$-tagged jet is assigned to a~light quark. The permutation of jets with the highest event probability is retained.
The event likelihood must satisfy $\log{(\mathscr{L})} > -50$. This requirement provides a~good separation between properly and poorly-reconstructed events.
Distributions of $\log{(\mathscr{L})}$ for data and simulation events are shown in Fig.~\ref{fig:lhood_tagged} separately for the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels.
The data-to-MC ratio of the efficiency of the likelihood requirement is found to be 0.98 and the simulation is corrected for this difference.
The full event selection, including this final requirement on the likelihood, is summarized in Table~\ref{tab:selectionList}.
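To make the permutation search concrete, the following deliberately simplified sketch (ours, not the actual kinematic-fit code) scores jet-to-parton assignments for the hadronic side only, using Breit--Wigner terms for the reconstructed \Wboson{}~boson and top-quark masses; the full fit additionally includes the leptonic side, the transfer functions, the neutrino kinematics and the $b$-tagging weights, and maximizes the likelihood over the parton energies.
\begin{verbatim}
# Simplified stand-in for the permutation search of the kinematic fit:
# hadronic side only, no transfer functions or neutrino fit (illustrative).
import itertools, math
import numpy as np

M_W, G_W, M_T, G_T = 80.4, 2.1, 172.5, 1.3   # indicative masses/widths in GeV

def inv_mass(*four_vectors):                 # four-vectors are (E, px, py, pz)
    E, px, py, pz = np.sum(four_vectors, axis=0)
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def log_bw(m, m0, gamma):                    # log Breit-Wigner, up to a constant
    return -math.log((m - m0)**2 + (gamma / 2.0)**2)

def best_hadronic_permutation(jets):
    best_logL, best_assignment = -math.inf, None
    for q1, q2, b in itertools.permutations(range(len(jets)), 3):
        if q1 > q2:                          # the light-quark jets are interchangeable
            continue
        logL = (log_bw(inv_mass(jets[q1], jets[q2]), M_W, G_W)
                + log_bw(inv_mass(jets[q1], jets[q2], jets[b]), M_T, G_T))
        if logL > best_logL:
            best_logL, best_assignment = logL, (q1, q2, b)
    return best_logL, best_assignment
\end{verbatim}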
\begin{figure*}[!htbp]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Likelihood_tagged_ejets}\label{lhood_el}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Likelihood_tagged_mujets}\label{lhood_mu}}
\caption{(Color online) Distribution of the logarithm of the likelihood ($\log(\mathscr{L})$) obtained from the kinematic fit in the \subref{lhood_el}~{$e+$jets}\xspace{} and \subref{lhood_mu}~{$\mu+$jets}\xspace{} channels. Data distributions are compared to predictions, using {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} as the $t\bar{t}$ signal model. The hashed area indicates the combined statistical and systematic uncertainties in the prediction, excluding systematic uncertainties related to the modeling of the $\ttbar$ system. Signal and background processes are shown in different colors, with ``Other" including the small backgrounds from diboson and $Z$+jets production. The lower parts of the figures show the ratios of data to the predictions.}
\label{fig:lhood_tagged}
\end{figure*}
Once the best likelihood is found, the four-momenta of both top quarks in the event are formed from their decay products as determined by the kinematic likelihood fit. One top quark is reconstructed from the fitted charged lepton, neutrino and one of the $b$-partons. This is referred to as the leptonically decaying top quark. The other, referred to as the hadronically decaying top quark, is reconstructed from the other three partons.
The hadronically decaying top quark is selected to represent the top-quark \pt{} because the final result for this variable has smaller systematic uncertainties than the one obtained from the leptonically decaying top quark. The two spectra were compared and found to be compatible.
The \ttbar{} system is the combination of the leptonically and hadronically decaying top quarks.
\begin{table}[!ht]
\begin{center}
\begin{tabular}{ l | l}
Event selection \\
\hline
\hline
Trigger & {\ }Single lepton \\
Primary vertex & {\ }$\ge5$ tracks with \pt{} $ >0.4\,$GeV \\
Exactly one & {\ }Muons: \pt{} $> 25\,$GeV, $|\eta| < 2.5$ \\
isolated lepton & {\ }Electrons: \pt{} $>$ $25\,$GeV \\
& {\ }$|\eta|< 2.47$, excluding $1.37 < |\eta| < 1.52$ \\
$\ge4$ jets & {\ }\pt{} $> 25\,$GeV, $|\eta| < 2.5$ \\
$b$-tagging & {\ }$\geq 1$ $b$-tagged jet at $\epsilon_b=70\%$ \\
\ensuremath{\ET^{\rm miss}}{} & {\ }\ensuremath{\ET^{\rm miss}}{} $>$ $30\,$GeV \\
\ensuremath{m_{\mathrm{T}}^W} & {\ }\ensuremath{m_{\mathrm{T}}^W}{} $>$ $35\,$GeV \\
Kinematic fit & {\ } $\log (\mathscr{L}) > -50$ \\
\hline
\end{tabular}
\end{center}
\caption{Summary of all requirements included in the event selection.}
\label{tab:selectionList}
\end{table}
\section{Background Determination} \label{sec:BackgroundDetermination}
After the event selection is applied, the largest background process is \Wboson{}+jets. Other backgrounds are due to multijet production, single top-quark electroweak production, diboson production, $Z$+jets production and the other decay channels associated with \ttbar{} production: the dilepton channel, which gives a~significant contribution, and the all-hadronic channel, which is found to be negligible. The \Wboson{}+jets and multijet backgrounds are determined using a~combination of simulation and data-driven techniques. The other backgrounds are determined from simulation and normalized to higher-order theoretical predictions.
\subsection{Simulated Background Contributions} \label{sec:SimulatedBackgrounds}
The single top-quark, dilepton \ttbar, \Zboson+jets, and diboson contributions are estimated from simulations and normalized to theoretical calculations of the inclusive cross-sections as follows. The single top-quark cross-section is normalized to the NLO+NNLL prediction: the $t$-channel to $64.6{}^{+2.6}_{-1.7}\,{\rm pb}$~\cite{Kidonakis:2011wy}, the $s$-channel to $4.6{}\pm 0.2\,{\rm pb}$~\cite{Kidonakis:2010tc}, and the $Wt$-channel to $15.7{} \pm 1.2 \,{\rm pb}$~\cite{Kidonakis:2010ux}. The dilepton \ttbar{} background is normalized to the same inclusive cross-section given in Sec.~\ref{sec:Simulation} for the signal $t\bar{t}\rightarrow \ell+$jets sample. The \Zboson+jets background is normalized to the NNLO QCD calculation for inclusive \Zboson{}~production~\cite{Anastasiou:2003ds}
and the diboson background is normalized to the NLO QCD cross-section prediction~\cite{Campbell:2011bn}.
\subsection{\Wboson+jets Background}\label{sec:WjetsBackground}
At the LHC the rate of $W^+$+jets events is larger than that of $W^-$+jets as
the up-quark density in the proton is larger than the down-quark one.
Exploiting the fact that the ratio of $W^+$+jets to $W^-$+jets cross-sections
is predicted more precisely than the total $W$+jets cross-section \cite{Kom:2010mv},
the charge asymmetry in $W$+jets production can be used to estimate the total
$W$+jets background from the data. Considering that processes other than
$W$+jets give, to a~good approximation, equal numbers of positively and
negatively charged leptons, the total number of $W$+$n$-jets events before
requiring a~$b$-tagged jet (pretag sample) can be estimated as
\begin{equation}
\begin{split}
N_{n_{\rm jets}}^{\rm W, pretag} &= N^{W^+}_{n_{\rm jets}} + N^{W^-}_{n_{\rm jets}} \\
&= \left( \frac{r^{\rm MC}_{n_{\rm jets}}+1}{r^{\rm MC}_{n_{\rm jets}}-1} \right) (D_{n_{\rm jets}}^+ - D_{n_{\rm jets}}^-)\,,
\end{split}
\label{eq:wnorm}
\end{equation}
where $n_{\rm jets}$ is the number of jets, $D_{n_{\rm jets}}^+$ ($D_{n_{\rm jets}}^-$) the total numbers of events with positively (negatively) charged leptons in data
meeting the selection criteria described in Sec.~\ref{sec:EventSelection} with the appropriate $n_{\rm jets}$ requirement and
without the $b$-tagging requirement, and
$r^{\rm MC}_{n_{\rm jets}}$ is the ratio of $\sigma(pp \rightarrow W^+ + n$-jets$)$ to
$\sigma(pp \rightarrow W^- + n$-jets$)$ estimated from simulation.
Small additional sources of charge asymmetry in data, mainly due to the single top-quark
contribution, are estimated from the simulation and subtracted from data.
The largest uncertainties in the ratio come from the PDFs and the
heavy-flavor fractions in $W$+jets events.
The jet flavor composition of the pretag sample is the other important element
needed to estimate the number of events after the requirement of at least one
$b$-tagged jet. It is evaluated using a~combination of data- and simulation-driven approaches starting
from the estimation of the flavor fractions from data for the two-jet sample:
\begin{equation}
\begin{split}
N^{W,{\rm tag}}_2 &= N^{W,{\rm pretag}}_2 (F_{bb,2}P_{bb,2}+F_{cc,2}P_{cc,2}\\
&+F_{c,2}P_{c,2}+F_{\rm light,2}P_{\rm light,2})\,,
\end{split}
\label{eq:flavfrac}
\end{equation}
where $N^{W,{\rm tag}}_2$ is the number of $W$+jets events after
the $b$-tagging requirement in the two-jet sample, evaluated
from data after subtracting all non-$W$ events (including the multijet background, estimated using the
data-driven method described in Sec.~\ref{sec:FakeLeptonBackground}, the \ttbar{} signal and the other backgrounds, estimated from simulation);
$N^{W,{\rm pretag}}_2$ is the number of events before the $b$-tagging
requirement estimated from data using Eq.(\ref{eq:wnorm})
for the background-dominated two-jet sample.
The quantities $F_{x,2}$ (with $x=bb/cc/c/{\rm light}$, where {\rm light} refers to $u/d/s$-quark- and gluon-initiated jets) represent the flavor fractions in the
two-jet sample and the $P_{x,2}$ the respective $b$-tagging probabilities taken from the simulation.
The flavor fractions add up to unity for each jet multiplicity
\begin{equation}
F_{bb,2}+k_{cc \rightarrow bb} \cdot F_{bb,2}+F_{c,2}+F_{\rm light,2}=1
\label{eq:scnorm}
\end{equation}
with $F_{cc,2}$ constrained by $F_{bb,2}$ using the ratio $k_{cc \rightarrow bb}$ between the two
fractions taken from simulation.
The $Wc$+jets events have a~different charge asymmetry with respect to
$Wbb$/$Wcc$/$W+{\rm light}$-jets events. This is because, at leading order, the former is dominated by gluon-$s$ and gluon-$\bar{s}$ scattering, which involve symmetric $s$- and $\bar{s}$-quark PDF, while the latter are dominated by $u$-$\bar{d}$ and $d$-$\bar{u}$ scattering, which are asymmetric because they involve the $u$- and $d$-valence-quark PDF.
The flavor fractions can therefore be determined by applying Eq.(\ref{eq:flavfrac})
and Eq.(\ref{eq:scnorm}) separately for events with positive and negative leptons. These flavor fractions are
used to re-determine the overall normalization and the procedure is iterated until no significant changes
are observed. They are then used to correct the flavor fractions in the simulation.
Finally, the number of events after $b$-tagging in the $\ge 4$-jet sample
is estimated using the number of pretag events, $N^{W,{\rm pretag}}_{\ge 4}$, measured from the
charge asymmetry method of Eq.(\ref{eq:wnorm}), as
\begin{equation}
N^{W,{\rm tag}}_{\ge 4}=N^{W,{\rm pretag}}_{\ge 4} \cdot f^{\rm tag}_2 \cdot f_{2\rightarrow \ge 4}^{\rm tag}\,,
\end{equation}
where $f^{\rm tag}_2$ is the fraction of events in the two-jet sample that are $b$-tagged
and $f_{2\rightarrow \ge 4}^{\rm tag}$ the ratio between the $b$-tagged event fractions in the $\ge 4$-jet
and two-jet samples evaluated using simulated $W$+jets events with corrected flavor fractions.
The correction factors for a~selection
requiring $\geq 4$ jets are obtained from the ones of the two-jet
sample by applying an overall normalization factor in order to preserve the requirement
that the flavor fractions add up to unity.
This method has the advantage
that $f^{\rm tag}_2$ is evaluated from the data in a~sample dominated by the $W$+jets background
and that it relies on the ratio between the tagging fractions in the two-jet and $\ge 4$-jet
samples, strongly reducing the systematic uncertainties due to the $b$-tagging efficiencies and
the heavy-flavor components of the $W$+jets background.
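The following toy numerical sketch (all inputs are invented placeholders, chosen only to illustrate the arithmetic of the charge-asymmetry method described above) shows how the pretag and tagged $W$+jets yields are obtained.
\begin{verbatim}
# Toy illustration of the charge-asymmetry estimate (all numbers invented).
def w_pretag_from_charge_asymmetry(d_plus, d_minus, r_mc):
    # N_W^pretag = (r_MC + 1)/(r_MC - 1) * (D+ - D-)
    return (r_mc + 1.0) / (r_mc - 1.0) * (d_plus - d_minus)

def w_tagged_ge4(n_pretag_ge4, f_tag_2, f_2_to_ge4):
    # N_W^tag(>=4 jets) = N_W^pretag(>=4 jets) * f_2^tag * f_{2->(>=4)}^tag
    return n_pretag_ge4 * f_tag_2 * f_2_to_ge4

# e.g. r_MC = 1.5 with D+ = 12000 and D- = 10000 in the >=4-jet pretag sample
# gives N_W^pretag = 5 * 2000 = 10000 events; with f_2^tag = 0.06 and
# f_{2->(>=4)}^tag = 2.0 this corresponds to ~1200 tagged W+jets events.
n_pretag = w_pretag_from_charge_asymmetry(12000.0, 10000.0, 1.5)
print(n_pretag, w_tagged_ge4(n_pretag, 0.06, 2.0))
\end{verbatim}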
\subsection{Multijet Background} \label{sec:FakeLeptonBackground}
The multijet background is characterized by jets that are misidentified as isolated prompt leptons, or non-prompt leptons that are misidentified as isolated leptons. These are referred to as ``fake leptons".
The rate of identifying such a~fake lepton as a~real one
is calculated from data by defining two control samples. The first sample uses the lepton definition described in Sec.~\ref{sec:ObjectDef}, which is referred to as the tight selection. To define the second sample, a~loose selection is used, for which the identification criteria are relaxed and the isolation requirements are removed. Using these samples, the number of fake leptons passing the tight selection is given by
\begin{equation}
\label{eq:multijet_estimation}
N^{\rm tight}_{\rm fake} = \frac{\epsilon_{\rm fake}}{\epsilon_{\rm real}-\epsilon_{\rm fake}} (N^{\rm loose}\epsilon_{\rm real}-N^{\rm tight})\,,
\end{equation}
where $N^{\rm tight}$ and $N^{\rm loose}$ are the numbers of events with a~tight or loose lepton, respectively, and $\epsilon_{\rm real}$ and $\epsilon_{\rm fake}$ are the fractions of real and fake loose leptons that pass the tight selection. Decays of the \Zboson{}~boson to two leptons are used to measure the $\epsilon_{\rm real}$, while the $\epsilon_{\rm fake}$ are measured in control regions which are dominated by contributions from fake leptons. These control regions are defined by requiring low \ensuremath{\ET^{\rm miss}}{}, low $\ensuremath{m_{\mathrm{T}}^W}$, or by selecting leptons with high track impact parameter. Contributions from \Wboson+jets and \Zboson+jets production are subtracted in the control regions using simulation~\cite{atlasXsec3}. The resulting multijet background is larger for the {$e+$jets}\xspace{} channel than it is for the {$\mu+$jets}\xspace{} channel.
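As a minimal illustration of the matrix method described above (the efficiencies and yields below are invented and are not the values measured in this analysis):
\begin{verbatim}
# Toy illustration of the matrix-method estimate of the fake-lepton yield.
def fake_tight_yield(n_loose, n_tight, eps_real, eps_fake):
    return eps_fake / (eps_real - eps_fake) * (n_loose * eps_real - n_tight)

# e.g. eps_real = 0.9, eps_fake = 0.2 with 50000 loose and 30000 tight events
# give 0.2/0.7 * (45000 - 30000) ~ 4300 fake-lepton events in the tight sample
print(fake_tight_yield(50000.0, 30000.0, 0.9, 0.2))
\end{verbatim}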
\section{Reconstructed Event Variables}\label{sec:YieldsAndPlots}
The event yields after the selection described in Sec.~\ref{sec:EventReco} are displayed in Table~\ref{tab:yields}, separately for the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels, for the data, the simulated {$\ell+$jets}\xspace signal from $t\bar{t}$ production, and for the various backgrounds discussed in Sec.~\ref{sec:BackgroundDetermination}.
A~comparison of the data with the $\ttbar$ signal and background distributions, after all selection criteria are applied, is shown in Fig.~\ref{fig:controls_tagged} as functions of the $W$~boson transverse mass, the missing transverse momentum and the \pt{} of the highest-\pt{} (leading) $b$-tagged jet. Within the uncertainties shown, which cover the experimental and background systematic uncertainties
but not the $t\bar{t}$ modeling uncertainties (discussed in Sec.~\ref{sec:signalmodeling}), the data and predictions are in agreement.
\input{table_Yields}
\begin{figure*}[htbp]
\centering
\subfigure[]{ \includegraphics[width=0.38\textwidth]{InclusiveJetBinmwt_lhood_tagged_ejets}\label{mwt_el}}
\subfigure[]{ \includegraphics[width=0.38\textwidth]{InclusiveJetBinmwt_lhood_tagged_mujets}\label{mwt_mu}}
\subfigure[]{ \includegraphics[width=0.38\textwidth]{InclusiveJetBinMET_lhood_tagged_ejets_log}\label{met_el}}
\subfigure[]{ \includegraphics[width=0.38\textwidth]{InclusiveJetBinMET_lhood_tagged_mujets_log}\label{met_mu}}
\subfigure[]{ \includegraphics[width=0.38\textwidth]{InclusiveJetBinbJet1Pt_lhood_tagged_ejets}\label{bjet_el}}
\subfigure[]{ \includegraphics[width=0.38\textwidth]{InclusiveJetBinbJet1Pt_lhood_tagged_mujets}\label{bjet_mu}}
\caption{(Color online) Observables at the reconstruction level: $W$ transverse mass (\ensuremath{m_{\mathrm{T}}^W}{}) in the ~\subref{mwt_el}~{$e+$jets}\xspace{} and \subref{mwt_mu}~{$\mu+$jets}\xspace{} channels, missing transverse momentum (\ensuremath{\ET^{\rm miss}}{}) in the \subref{met_el}~{$e+$jets}\xspace{} and \subref{met_mu}~{$\mu+$jets}\xspace{} channels, and leading $b$-tagged jet \pt{} in the \subref{bjet_el}~{$e+$jets}\xspace{} and \subref{bjet_mu}~{$\mu+$jets}\xspace{} channels. Data distributions are compared to predictions, using {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} as the \ttbar{} signal model. The hashed area indicates the combined statistical and systematic uncertainties in the total prediction, excluding systematic uncertainties related to the modeling of the $\ttbar$ system. Signal and background processes are shown in different colors, with ``Other" including the small backgrounds from diboson and $Z+$jets production. Events beyond the range of the horizontal axis are included in the last bin.
The lower parts of the figures show the ratios of data to the predictions.}
\label{fig:controls_tagged}
\end{figure*}
The kinematic spectra corresponding to individual top quarks as well as to the reconstructed $\ttbar$ system are shown in Figs.~\ref{fig:recoTop_tagged} and~\ref{fig:recottbar_tagged}.
Data and predictions agree within uncertainties with the exception of the high-\pt{} tails of the \ensuremath{\pt^t}{} and \ensuremath{\pt^{\ttbar}}{} distributions where data fall below the prediction.
\begin{figure*}[htbp]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Top2Pt_fine_lhood_tagged_ejets_log}\label{reco_had_top_el}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Top2Pt_fine_lhood_tagged_mujets_log}\label{reco_had_top_mu}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{SystemMass_fine_lhood_tagged_ejets_log}\label{reco_mass_el}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{SystemMass_fine_lhood_tagged_mujets_log}\label{reco_mass_mu}}
\caption{(Color online) Reconstructed distributions for the transverse momentum of the hadronically decaying top quark ($\ensuremath{\pt^t}$) in the \subref{reco_had_top_el}~{$e+$jets}\xspace{} and \subref{reco_had_top_mu}~{$\mu+$jets}\xspace{} channels and for the mass of the \ttbar{} system (\ensuremath{m_{\ttbar}}{}) in the \subref{reco_mass_el}~{$e+$jets}\xspace{} and \subref{reco_mass_mu}~{$\mu+$jets}\xspace{} channels. Data distributions are compared to predictions, using {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} as the \ttbar{} signal model.
The hashed area indicates the combined statistical and systematic uncertainties in the total prediction, excluding systematic uncertainties related to the modeling of the $\ttbar$ system. Signal and background processes are shown in different colors, with ``Other" including the small backgrounds from diboson and $Z+$jets production. Events beyond the axis range are included in the last bin.
The lower parts of the figures show the ratios of data to the predictions.
}
\label{fig:recoTop_tagged}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{SystemPt_fine_lhood_tagged_ejets_log}\label{reco_pt_el}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{SystemPt_fine_lhood_tagged_mujets_log}\label{reco_pt_mu}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{SystemRapidity_fine_lhood_tagged_ejets}\label{reco_rap_el}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{SystemRapidity_fine_lhood_tagged_mujets}\label{reco_rap_mu}}
\caption{(Color online) Reconstructed distributions for the transverse momentum of the \ttbar{} system (\ensuremath{\pt^{\ttbar}}{}) in the \subref{reco_pt_el}~{$e+$jets}\xspace{} and \subref{reco_pt_mu}~{$\mu+$jets}\xspace{} channels and for the rapidity of the \ttbar{} system (\ensuremath{y_{\ttbar}}{}) in the \subref{reco_rap_el}~{$e+$jets}\xspace{} and \subref{reco_rap_mu}~{$\mu+$jets}\xspace{} channels. Data distributions are compared to predictions, using {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} as the \ttbar{} signal model. The hashed area indicates the combined statistical and systematic uncertainties in the total prediction, excluding systematic uncertainties related to the modeling of the $\ttbar$ system. Signal and background processes are shown in different colors, with ``Other" including the small backgrounds from diboson and $Z+$jets production. Events beyond the axis range are included in the last bin, or in the case of the \ensuremath{y_{\ttbar}}{} spectrum the first and last bin.
The lower parts of the figures show the ratios of data to the predictions.}
\label{fig:recottbar_tagged}
\end{figure*}
\section{Differential Cross-section Determination}\label{sec:XSDetermination}
The estimated background contributions are subtracted from the measured distributions, which are then corrected for the efficiency to pass the event selection, for the detector resolution, and for the branching ratio of the $\ttbar\rightarrow \ell$+jets channel. To facilitate the comparison to theoretical predictions, the cross-section measurements are defined with respect to the top quarks before the decay (parton level) and after QCD radiation~\footnote{Technically, the parton level, used both for the unfolding and for the predictions of the MC generators, is defined as status-code 155 for {\sc Herwig}\xspace{} and 3 for {\sc Pythia}\xspace{}.}.
The efficiency ($\epsilon_j$) to satisfy the selection criteria in bin $j$ for each variable is evaluated as the ratio of the parton-level spectra before and after implementing the event selection at the reconstruction level. The efficiencies are displayed in Fig.~\ref{fig:effs_tagged} and are typically in the 3--5\% range. The decrease in the efficiencies at high values of \ensuremath{\pt^t}{}, \ensuremath{m_{\ttbar}}{}, and \ensuremath{\pt^{\ttbar}}{} is primarily due to the increasingly large fraction of non-isolated leptons and angularly close or merged jets in events with high top-quark \pt{}. There is also a~decrease in the efficiency at high \ensuremath{\left|y_{\ttbar}\right|}\xspace{} due to jets and leptons falling outside of the pseudorapidity range required for the reconstructed lepton and jets.
The absolute variation of the efficiency with the assumed value of the top-quark mass is found to be $+0.025 \%/$GeV, independent of the kinematic variable and bin.
\begin{figure*}[htbp]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{EffGen_HadTopPt_tagged_lhood}\label{eff_had_top}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{EffGen_Mass_tagged_lhood}\label{eff_mass}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{EffGen_Pt_tagged_lhood}\label{eff_pt}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{EffGen_AbsRap_tagged_lhood}\label{eff_rap}}
\caption{(Color online) The selection efficiencies binned in the~\subref{eff_had_top} transverse momentum of the top quark ($\ensuremath{\pt^t}$), and the~\subref{eff_mass}~mass ($\ensuremath{m_{\ttbar}}$), \subref{eff_pt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$) and the~\subref{eff_rap} absolute value of the rapidity ($\ensuremath{\left|y_{\ttbar}\right|}\xspace$) of the $\ttbar$ system obtained from the {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} simulation of the \ttbar{} signal. The horizontal axes refer to parton-level variables.}
\label{fig:effs_tagged}
\end{figure*}
The influence of detector resolution is corrected by unfolding. The measured distributions in the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels are unfolded separately by a~regularized inversion of the migration matrix (symbolized by $\mathcal{M}^{-1}$) described in Sec.~\ref{sec:Unfolding} and then the channels are combined as described in Sec.~\ref{sec:Combination}. The formula used to extract the cross-section in each bin is
\begin{equation}
\frac{\ensuremath{{ d}\sigma}}{\ensuremath{{ d}X}_j} \equiv \frac{1}{\Delta X_j} \cdot \frac{\sum\limits_{i} \mathcal{M}_{ji}^{-1}[D_i - B_i]}{{\rm BR} \,\cdot\, \mathcal{L} \,\cdot\, \epsilon_j},
\end{equation}
\noindent where $\Delta X_j$ is the bin width, $D_i$ ($B_i$) are the data (expected background) yields in each bin $i$ of the reconstructed variable, $\mathcal{L}$ is the integrated luminosity of the data sample, $\epsilon_j$ is the event selection efficiency, and ${\rm BR}=0.438$ is the branching ratio of $\ttbar\rightarrow \ell$+jets~\cite{PDG}.
The normalized cross-section
$1/\sigma \, \ensuremath{{ d}\sigma}/\ensuremath{{ d}X}_j$
is computed by dividing by the measured total cross-section, evaluated by integrating over all bins.
The normalized distributions have substantially reduced systematic uncertainties since most of the relevant sources of uncertainty (luminosity, jet energy scale, $b$-tagging, and absolute normalization of the data-driven background estimate) have large bin-to-bin correlations.
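The bin-by-bin arithmetic of this extraction can be illustrated with the following toy sketch (ours; the two-bin inputs, the stand-in unfolding matrix and the efficiencies are invented for illustration, and the regularized unfolding of Sec.~\ref{sec:Unfolding} is replaced by a plain matrix product):
\begin{verbatim}
# Toy illustration of the bin-by-bin cross-section extraction and normalization.
import numpy as np

def differential_xsec(data, bkg, M_inv, eff, bin_widths, lumi, br=0.438):
    unfolded = M_inv @ (data - bkg)            # background-subtracted, unfolded yields
    dsigma_dX = unfolded / (br * lumi * eff * bin_widths)
    sigma = np.sum(dsigma_dX * bin_widths)     # total cross-section from all bins
    return dsigma_dX, dsigma_dX / sigma        # absolute and normalized spectra

data = np.array([5000.0, 2000.0]); bkg = np.array([1000.0, 500.0])
M_inv = np.array([[1.1, -0.1], [-0.1, 1.1]])   # stand-in for the unfolding matrix
eff = np.array([0.04, 0.03]); widths = np.array([50.0, 100.0])        # GeV
print(differential_xsec(data, bkg, M_inv, eff, widths, lumi=4600.0))  # L in pb^-1
\end{verbatim}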
\subsection{Unfolding Procedure} \label{sec:Unfolding}
The binning for each of the distributions is determined by the experimental resolution of the kinematic variables, and poorly populated bins are combined with neighboring bins to reduce the uncertainty on the final result. Typical values of the fractional resolution for \ensuremath{\pt^t}{} and \ensuremath{m_{\ttbar}}{} are 25\% and 15\%, respectively, while the fractional resolution for \ensuremath{\pt^{\ttbar}}{} improves as a~function of \ensuremath{\pt^{\ttbar}}{} and is 40\% at $100\,$GeV. For \ensuremath{\left|y_{\ttbar}\right|}\xspace{}, the resolution varies from 0.25 to 0.35, from central to forward rapidities.
The effect of detector resolution is taken into account by constructing the migration matrices, relating the variables of interest at the reconstructed and parton levels, using the \ttbar{} signal simulation.
In Figs.~\ref{fig:migrations_tagged_1} and \ref{fig:migrations_tagged_2}, normalized versions of the migration matrices are presented, where each column is normalized by the number of parton-level events in that bin. The probability for parton-level events to remain in the same bin is therefore shown on the diagonal, and the off-diagonal elements represent the fraction of parton-level events that migrate into other bins. The fraction of events in the diagonal bins is always greater than 50\%, but significant migrations are present in several bins.
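As a minimal illustration of this normalization, the sketch below column-normalizes a small toy matrix of simulated counts; the numbers are placeholders and are not taken from the \ttbar{} simulation.
\begin{verbatim}
import numpy as np

# counts[i, j]: toy simulated events reconstructed in bin i, generated in bin j
counts = np.array([[900., 150.,  20.],
                   [200., 800., 180.],
                   [ 30., 160., 700.]])

# Normalize each column to unity, so entry (i, j) is the probability for a
# parton-level event in bin j to be reconstructed in bin i
col_norm = counts / counts.sum(axis=0, keepdims=True)

print(col_norm.round(2))     # diagonal: stay-in-bin probability
print(np.diag(col_norm))     # off-diagonal entries are the bin migrations
\end{verbatim}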
\begin{figure*}[htbp]
\centering
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_Top2_lhood_el}\label{migra_top2_el}}
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_Top2_lhood_mu}\label{migra_top2_mu}}
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_SystemMass_el}\label{migra_mass_el}}
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_SystemMass_mu}\label{migra_mass_mu}}
\caption{(Color online) The migration matrices obtained from the {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} simulation, relating the parton and reconstructed levels for the transverse momentum of the hadronically decaying top quark ($\ensuremath{\pt^t}$) in the \subref{migra_top2_el}~{$e+$jets}\xspace{} and \subref{migra_top2_mu}~{$\mu+$jets}\xspace{} channels, and the mass of the $\ttbar$ system ($\ensuremath{m_{\ttbar}}$) in the \subref{migra_mass_el}~{$e+$jets}\xspace{} and \subref{migra_mass_mu}~{$\mu+$jets}\xspace{} channels. The linear correlation coefficient is given below each plot and all columns are normalized to unity (before rounding-off).}
\label{fig:migrations_tagged_1}
\end{figure*}
\begin{figure*}[htbp]
\centering
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_SystemPt_el}\label{migra_pt_el}}
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_SystemPt_mu}\label{migra_pt_mu}}
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_SystemAbsRap_el}\label{migra_rap_el}}
\subfigure[]{ \includegraphics[width=0.49\textwidth]{mig_Tag1_SystemAbsRap_mu}\label{migra_rap_mu}}
\caption{(Color online) The migration matrices obtained from the {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} simulation, relating the parton and reconstructed levels for the transverse momentum of the $\ttbar$ system ($\ensuremath{\pt^{\ttbar}}$) in the \subref{migra_pt_el}~{$e+$jets}\xspace{} and \subref{migra_pt_mu}~{$\mu+$jets}\xspace{} channels, and the absolute value of the rapidity of the $\ttbar$ system ($\ensuremath{\left|y_{\ttbar}\right|}\xspace$) in the \subref{migra_rap_el}~{$e+$jets}\xspace{} and \subref{migra_rap_mu}~{$\mu+$jets}\xspace{} channels. The linear correlation coefficient is given below each plot and all columns are normalized to unity (before rounding-off).}
\label{fig:migrations_tagged_2}
\end{figure*}
The regularized Singular Value Decomposition~\cite{SVD} method is used for the unfolding procedure. A~regularized unfolding technique is chosen in order to prevent large statistical fluctuations that can be introduced when directly inverting the migration matrix.
To ensure that the results are not biased by the MC generator used for unfolding, the parton-level spectra in simulation are altered by changing the slopes of the \ensuremath{\pt^t}{} and \ensuremath{\pt^{\ttbar}}{} distributions by a~factor of two, while for the \ensuremath{m_{\ttbar}}{} distribution the content of one bin ($550$--$700\,$GeV) is increased by a~factor of two to simulate the presence of a~resonance. The shape of the rapidity of the $\ttbar$ system is changed by a~symmetric Gaussian distribution that results in a~reweighting factor of approximately $1.15$ at high $\ensuremath{\left|y_{\ttbar}\right|}\xspace$.
The studies confirm that these altered shapes are indeed recovered within statistical uncertainties by the unfolding based on the nominal migration matrices.
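A minimal sketch of the idea of a regularized inversion is given below; it uses a simple Tikhonov-style curvature penalty rather than the exact SVD prescription of Ref.~\cite{SVD}, and the migration matrix and spectrum are placeholder values.
\begin{verbatim}
import numpy as np

def regularized_unfold(M, d, tau):
    """Minimize |M x - d|^2 + tau |C x|^2, where C is a discrete-curvature
    operator damping unphysical bin-to-bin oscillations."""
    n = M.shape[1]
    C = np.zeros((n, n))
    for k in range(1, n - 1):             # second-difference operator
        C[k, k - 1], C[k, k], C[k, k + 1] = 1., -2., 1.
    A = M.T @ M + tau * (C.T @ C)
    return np.linalg.solve(A, M.T @ d)

M = np.array([[0.7, 0.2, 0.0],            # toy migration matrix (parton -> reco)
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
d = np.array([950., 1400., 620.])         # background-subtracted reco spectrum
print(regularized_unfold(M, d, tau=1.0))
\end{verbatim}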
\clearpage
\subsection{Combination of Decay Channels}\label{sec:Combination}
The individual {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels give consistent results: the differences observed in the corresponding bins for all variables of interest are below two standard deviations, taking into account the correlated uncertainties between the two channels.
The Asymmetric BLUE method~\cite{BLUE} is used to combine the cross-sections measured in the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels, where BLUE refers to the best linear unbiased estimator~\cite{BLUE_paper1}.
The covariance matrix between the two channels is constructed in each kinematic bin by assuming zero or full correlation for channel-specific or common systematic uncertainty sources, respectively.
The cross-sections are normalized to unity after the combination.
The combined results are compared and found to be in good agreement with the results of unfolding a~merged dataset of both the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels.
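A per-bin BLUE combination of two channels can be sketched as follows; the measured values and covariance terms are illustrative placeholders, built from a channel-specific (uncorrelated) and a common (fully correlated) contribution as described above.
\begin{verbatim}
import numpy as np

def blue_combine(x, cov):
    """Best linear unbiased estimate of the measurements x with covariance cov;
    returns the combined value and the BLUE weights."""
    ones = np.ones(len(x))
    cov_inv = np.linalg.inv(cov)
    w = cov_inv @ ones / (ones @ cov_inv @ ones)
    return w @ x, w

x = np.array([0.52, 0.47])                 # e+jets and mu+jets values in one bin
stat = np.diag([0.04**2, 0.05**2])         # channel-specific: uncorrelated
syst = 0.03**2 * np.ones((2, 2))           # common sources: fully correlated
value, weights = blue_combine(x, stat + syst)
print(value, weights)
\end{verbatim}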
\section{Uncertainties} \label{sec:Uncertainties}
The statistical uncertainty on the data is evaluated with pseudo-experiments by assuming Poisson fluctuations in the data event counts.
The systematic uncertainties are evaluated by varying each source of uncertainty by one standard deviation, propagating this effect through the event selection, unfolding and efficiency corrections, and then considering, for each channel, variable and bin, the variation with respect to the nominal result. This is done separately for the upward and downward variations. For one-sided uncertainties, as in the case of the comparison of two different models, the resulting variation is assumed to be of the same size in both directions and is therefore symmetrized.
The combined systematic uncertainties are obtained by using the nominal BLUE weights, assigned to each channel in each bin, to linearly combine the systematic uncertainties in the individual channels, and normalizing after the combination.
The total systematic uncertainty in each kinematic bin is computed as the sum in quadrature of individual systematic variations.
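The symmetrization of one-sided variations and the quadrature sum can be illustrated with the short sketch below; the nominal spectrum and the variations are placeholder numbers.
\begin{verbatim}
import numpy as np

nominal = np.array([0.30, 0.45, 0.20, 0.05])          # toy normalized spectrum

# Two-sided source: upward and downward variations kept separately
up   = np.array([0.31, 0.44, 0.21, 0.049]) - nominal
down = np.array([0.29, 0.46, 0.19, 0.051]) - nominal

# One-sided source (e.g. comparison of two models): symmetrized
one_sided = np.array([0.32, 0.43, 0.20, 0.05]) - nominal
up_sym, down_sym = np.abs(one_sided), -np.abs(one_sided)

# Total per-bin systematic uncertainty: sum in quadrature over sources
total_up   = np.sqrt(up**2   + up_sym**2)
total_down = np.sqrt(down**2 + down_sym**2)
print(total_up / nominal, total_down / nominal)       # relative uncertainties
\end{verbatim}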
The systematic uncertainties and how they affect each of the variables studied are given, grouped into categories, in Tables~\ref{tab:CombSyst_1} and~\ref{tab:CombSyst_2}. The individual systematic uncertainties are listed for completeness in Appendix~\ref{Sec:Appendix:Syst}. The precision of the measurement is dominated by systematic uncertainties. They can be classified into three categories: systematic uncertainties affecting the detector modeling, signal modeling, and background modeling.
\begin{table*} [!htbp]
\footnotesize
\centering
\input{relativeUncertaintiesInCombination_Tag1_Top2_lhood_NormFinal_grouped}
\vspace{.1 cm}
\input{relativeUncertaintiesInCombination_Tag1_SystemMass_NormFinal_grouped}
\caption{The individual systematic uncertainties in the normalized differential cross-sections after combining the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels for \ensuremath{\pt^t}{} and \ensuremath{m_{\ttbar}}{}, grouped into broad categories, and calculated as a~percentage of the cross-section in each bin. ``Other backgrounds'' includes the systematic uncertainties in the single top-quark, dilepton, \Zboson{}+jets and QCD multijet backgrounds, and IFSR refers to initial- and final-state radiation. Dashes are used when the estimated relative systematic uncertainty for that bin is below 0.1\%.}
\label{tab:CombSyst_1}
\end{table*}
\begin{table*} [!htbp]
\footnotesize
\centering
\input{relativeUncertaintiesInCombination_Tag1_SystemPt_NormFinal_grouped}
\vspace{.1 cm}
\input{relativeUncertaintiesInCombination_Tag1_SystemAbsRap_NormFinal_grouped}
\caption{The individual systematic uncertainties in the normalized differential cross-sections after combining the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels for \ensuremath{\pt^{\ttbar}}{} and \ensuremath{\left|y_{\ttbar}\right|}\xspace{}, grouped into broad categories, and calculated as a~percentage of the cross-section in each bin. ``Other backgrounds'' includes the systematic uncertainties in the single top-quark, dilepton, \Zboson{}+jets and QCD multijet backgrounds, and IFSR refers to initial- and final-state radiation. Dashes are used when the estimated relative systematic uncertainty for that bin is below 0.1\%.}
\label{tab:CombSyst_2}
\end{table*}
\subsection{Detector Modeling} \label{sec:detectormodeling}
The systematic uncertainties related to the detector modeling induce effects on the reconstruction of the physics objects (leptons, jets and \ensuremath{\ET^{\rm miss}}{}) used in the selection and in the reconstruction of the kinematic variables under study.
The jet energy scale (JES) systematic uncertainty on the signal, acting on both the efficiency and bin migrations, is evaluated using 21 separate components~\cite{jes:2013}, which allow proper treatment of correlations across the kinematic bins.
The impact of the JES uncertainty on the background is evaluated using the overall JES variation defined as the sum in quadrature of the individual components, and is added to the signal JES systematic uncertainty linearly to account for the correlation between them. The simplified treatment of the JES uncertainty for the background has a~negligible effect on the results.
The uncertainty on the jet energy resolution is modeled by varying the jet energies according to the systematic uncertainties of the resolution measurement performed on data~\cite{jer:2013}. The contribution from this uncertainty is generally small except for the \ensuremath{\pt^{\ttbar}}{} distribution.
The uncertainty on the jet reconstruction efficiency is accounted for by
randomly removing jets, in the simulation, according to the uncertainty
on the jet reconstruction efficiency measured in data~\cite{jer_2}.
The effect of this uncertainty is negligible for all the spectra.
The corrections accounting for differences in $b$-tagging efficiencies and mistag rates for $c$-quarks and light-quarks, between data and simulation, are derived from data and parameterized as a~function of $\pt$ and $\eta$~\cite{btag1:2011,btag2:2011}. The uncertainties in these corrections are propagated through the analysis.
Electron and muon trigger, reconstruction, and selection efficiencies are measured in data using $W$ and $Z$~boson decays and are incorporated as appropriate correction factors into the simulation. A~similar procedure is used for the lepton energy and momentum scales and resolutions.
The impact of the uncertainties in all these corrections is at the sub-percent level.
The uncertainties in the energy scale and resolution corrections for jets and high-$\pt$ leptons are propagated to the uncertainty on \ensuremath{\ET^{\rm miss}}{}. Other minor systematic uncertainty contributions on the modeling of \ensuremath{\ET^{\rm miss}}{} arise from effects due to the pile-up modeling and the uncertainties in the unassociated-cell term~\cite{atlasEtmisPerf}. These contributions are generally at the sub-percent level except for the \ensuremath{\pt^{\ttbar}}{} distribution.
The efficiency of the likelihood cut discussed in Sec.~\ref{sec:TopSystemReconstruction} is observed to be $2\pm1\%$ smaller in data than in simulation, but this discrepancy has no kinematic dependence and hence no effect on the unfolded normalized distributions.
\subsection{Signal Modeling} \label{sec:signalmodeling}
The sources of uncertainty for the signal modeling come from the choice of generator used for the simulation of the \ttbar{} process, the parton shower and hadronization model, the model for initial- and final-state QCD radiation (IFSR), and the choice of PDF.
The uncertainties due to the generator choice are evaluated using {\sc MC{@}NLO}\xspace{}+{\sc Herwig}\xspace{} to unfold the data, instead of the nominal {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{}. These uncertainties are larger than those that would result from using {\sc Powheg}\xspace{}+{\sc Herwig}\xspace{} as an alternative model for unfolding.
The differences between the fully corrected data distributions obtained in this way and the nominal ones are symmetrized and taken as systematic uncertainties.
The parton shower and hadronization systematic uncertainties (referred to as fragmentation) are evaluated by comparing the distributions obtained using {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} and {\sc Alpgen}\xspace{}+{\sc Pythia}\xspace{} to unfold the data. The {\sc Alpgen}\xspace{}+{\sc Pythia}\xspace{} sample is generated using {\sc Alpgen}\xspace{} (v2.14) and uses the CTEQ5L PDF~\cite{Lai:1999wy} for the hard process and parton shower.
The effect of IFSR modeling is determined by using two different {\sc Alpgen}\xspace{}+{\sc Pythia}\xspace{} samples with varied radiation settings. The distribution of the number of additional partons is changed by varying the renormalization scale associated with $\alpha_{\rm S}$ consistently in the hard matrix element as well as in the parton shower. The parameters controlling the level of radiation via parton showering~\cite{IFSRparameters} were adjusted to encompass the ATLAS measurement of additional jet activity in $\ttbar$ events~\cite{gap_fraction}. These samples are generated with dedicated Perugia 2011 tunes and used to fully correct the data through the unfolding. The IFSR uncertainty is assumed to be half the difference between the two unfolded distributions.
The PDF systematic uncertainty is evaluated by studying the effect on the signal efficiency of using different PDF sets to reweight simulated events at the hard-process level. The PDF sets used are CT10~\cite{CT10}, MSTW2008NLO~\cite{MSTW}, and NNPDF2.3~\cite{NNPDF}. Both the uncertainties within a~given PDF set and the variations between the different PDF sets are taken into account~\cite{PDF4LHC}.
The systematic uncertainties due to the finite size of the simulated samples are evaluated by varying the content of the migration matrix within statistical uncertainties and evaluating the standard deviation of the ensemble of results unfolded with the varied matrices. Simultaneously, the efficiency is re-derived using the parton spectrum projected from the varied migration matrix and therefore accounts for the same statistical fluctuations.
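A minimal sketch of this procedure is shown below, using Poisson fluctuations of toy simulation counts and a plain matrix inversion in place of the regularized unfolding; all numerical inputs are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
counts = np.array([[900., 150.,  20.],          # toy simulation: reco bin (rows)
                   [200., 800., 180.],          # vs parton bin (columns)
                   [ 30., 160., 700.]])
gen_total = np.array([30000., 28000., 22000.])  # generated events per parton bin
data_minus_bkg = np.array([950., 1400., 620.])

spectra = []
for _ in range(500):                        # ensemble of varied matrices
    fluct = rng.poisson(counts)             # vary contents within MC statistics
    M = fluct / fluct.sum(axis=0, keepdims=True)
    eff = fluct.sum(axis=0) / gen_total     # efficiency re-derived consistently
    unfolded = np.linalg.inv(M) @ data_minus_bkg / eff
    spectra.append(unfolded / unfolded.sum())
print(np.std(spectra, axis=0))              # MC-statistics uncertainty per bin
\end{verbatim}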
\subsection{Background Modeling} \label{sec:bkgmodeling}
The normalization of the \Wboson+jets background is varied within the uncertainty of the data-driven method, which amounts to 15\% and 13\% for the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels, respectively. An additional uncertainty of 18\% ({$e+$jets}\xspace{}) and 21\% ({$\mu+$jets}\xspace{}) comes from determining the flavor composition of the sample. This includes the uncertainty on the extrapolation of the flavor composition to jet multiplicities beyond two (the $f^{\rm tag}_{2\rightarrow \geq 4}$ term described in Sec.~\ref{sec:WjetsBackground}).
The multijet background uncertainties are estimated by comparing alternative estimates and their agreement with data in control regions. The resulting normalization uncertainties are 50\% and 20\% for the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels respectively.
The statistical uncertainty on the background simulation samples is taken into account by fluctuating the background sum with a~Gaussian distribution in each bin within the uncertainties and propagating the effect to the unfolded distributions.
The uncertainty on the \Zboson$+$jets background normalization is taken to be 50\%
in the four-jet bin and the uncertainty on the diboson normalization is taken to be 40\% in the same jet multiplicity bin. The effect of these uncertainties in the final results is negligible. Effects of the uncertainties in the normalizations of the single top and dilepton \ttbar{} backgrounds are also negligible.
\subsection{Main Sources of Systematic Uncertainties}
For \ensuremath{\pt^t}{} and \ensuremath{m_{\ttbar}}{} the largest systematic uncertainties come from JES, signal generator choice, and $b$-quark tagging efficiency. For \ensuremath{\pt^{\ttbar}}{} the uncertainty from IFSR is the largest, followed by signal generator choice, fragmentation and jet energy resolution. Finally, for \ensuremath{y_{\ttbar}}{} the main uncertainties come from the signal generator choice and fragmentation.
\section{Results}
\label{sec:Results}
The unfolded and combined normalized differential cross-sections are shown in Table~\ref{tab:XsectionTable}. The absolute cross-sections, calculated by integrating the spectra before normalization ($160\,{\rm pb}$ for the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels combined, with a~relative uncertainty of $15\%$), agree with the theoretical calculations within uncertainties. The total uncertainty is dominated by systematic sources as discussed in Sec.~\ref{sec:Uncertainties}.
\input{table_Xsection}
The unfolded distributions are also shown compared to different MC generators in Fig.~\ref{fig:combined_results_with_MC}. {\sc Alpgen}\xspace{} and {\sc MC{@}NLO}\xspace{} use {\sc Herwig}\xspace{} for parton shower and hadronization, while the PDFs are different as mentioned in Sec.~\ref{sec:Simulation}, and {\sc Powheg}\xspace{} is shown interfaced with both {\sc Herwig}\xspace{} and {\sc Pythia}\xspace{}.
The covariance matrices for the normalized unfolded spectra due to the statistical and systematic uncertainties are displayed in~Table~\ref{tab:Cov}. They are obtained by
evaluating the covariance between the kinematic bins using pseudo-experiments simultaneously in both the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{}
channels and combining them as described in Sec.~\ref{sec:Combination}.
The correlations due to statistical fluctuations are shown in Appendix~\ref{Sec:Appendix:CorrVars}.
They are evaluated by varying the data event counts independently in every bin before unfolding, propagating the statistical uncertainties through the unfolding
separately for the {$e+$jets}\xspace{} and {$\mu+$jets}\xspace{} channels, and then performing the
combination of the two channels.
Large off-diagonal correlations come from
the normalization constraint for the spectra and the regularization in
the unfolding procedure.
The statistical correlations between bins of different variables have also been
evaluated and are presented in Appendix~\ref{Sec:Appendix:CorrVars}.
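The construction of a statistical covariance and correlation matrix from pseudo-experiments can be sketched as follows; a plain matrix inversion stands in for the regularized unfolding and the channel combination, and the yields are placeholders. The normalization constraint is what induces the negative off-diagonal terms.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M_inv = np.linalg.inv(np.array([[0.7, 0.2, 0.0],
                                [0.3, 0.6, 0.3],
                                [0.0, 0.2, 0.7]]))
data = np.array([1200., 1700., 800.])       # toy reconstructed yields
bkg  = np.array([ 250.,  300., 180.])       # toy expected background

spectra = []
for _ in range(2000):                       # pseudo-experiments
    pseudo = rng.poisson(data)              # fluctuate data yields per bin
    unfolded = M_inv @ (pseudo - bkg)
    spectra.append(unfolded / unfolded.sum())   # normalization constraint
spectra = np.array(spectra)

cov = np.cov(spectra, rowvar=False)         # statistical covariance between bins
corr = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))
print(corr.round(2))                        # anti-correlations from normalization
\end{verbatim}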
\input{covarianceMatrices}
\section{Interpretation}
\label{sec:Interpretation}
The level of agreement between the measured distributions, simulations with different MC generators and
theoretical predictions was quantified
by calculating $\chi^2$ values, employing the full covariance matrices, evaluated as
described in Sec.~\ref{sec:Results}, and
inferring $p$-values (probabilities that the $\chi^2$ is larger than or equal to the observed value)
from the $\chi^2$ and the number of degrees of freedom (NDF).
The normalization constraint used to derive the normalized differential cross-sections
lowers by one unit the NDF and the rank of the $N_{\rm b} \times N_{\rm b}$ covariance matrix, where $N_{\rm b}$
is the number of bins of the spectrum under consideration. In order to evaluate the
$\chi^2$ the following relation was used:
\begin{equation}
\chi^2 = V_{N_{\rm b}-1}^{\rm T} \cdot {\rm Cov}_{N_{\rm b}-1}^{-1} \cdot V_{N_{\rm b}-1}
\end{equation} where $V_{N_{\rm b}-1}$ is the vector of differences between data and predictions obtained
discarding one of the $N_{\rm b}$ elements and ${\rm Cov}_{N_{\rm b}-1}$ is the $(N_{\rm b}-1) \times (N_{\rm b}-1)$ sub-matrix
derived from the full covariance matrix discarding the corresponding row and column.
The sub-matrix obtained in this way is invertible and allows the $\chi^2$ to be computed.
The $\chi^2$ value does not depend on the choice of the element discarded for the vector $V_{N_{\rm b}-1}$ and
the corresponding sub-matrix ${\rm Cov}_{N_{\rm b}-1}$.
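The sketch below illustrates this computation on a toy normalized spectrum, whose covariance is built with the sum-of-bins constraint, and verifies numerically that the resulting $\chi^2$ does not depend on which element is discarded; all inputs are placeholders.
\begin{verbatim}
import numpy as np

def chi2_normalized(delta, cov, drop=0):
    """Chi-square for a normalized spectrum: discard one element of the
    difference vector and the corresponding row/column of the covariance."""
    keep = [i for i in range(len(delta)) if i != drop]
    sub = cov[np.ix_(keep, keep)]
    return delta[keep] @ np.linalg.inv(sub) @ delta[keep]

# Toy rank-deficient covariance, as obtained for a normalized spectrum
rng = np.random.default_rng(3)
samples = rng.normal(size=(5000, 4)) * [0.006, 0.005, 0.004, 0.003]
samples -= samples.mean(axis=1, keepdims=True)     # sum of bins is fixed
cov = np.cov(samples, rowvar=False)                # rank N_b - 1, not invertible

delta = np.array([0.010, -0.004, -0.004, -0.002])  # data minus prediction
print([round(chi2_normalized(delta, cov, k), 3) for k in range(4)])
# the chi-square is identical whichever element is discarded
\end{verbatim}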
The predictions from MC generators do not include theoretical uncertainties and were evaluated using a~specific set of tuned parameters.
The $p$-values comparing the measured spectra to the predictions of MC generators shown in Fig.~\ref{fig:combined_results_with_MC} are listed in Table~\ref{tab:pvalues_combined_norm}. No single generator performs best for all the kinematic variables; however, the difference in $\chi^2$ between generators demonstrates that the data have sufficient precision to probe the predictions. For \ensuremath{\pt^t}{} the agreement with {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} and {\sc Powheg}\xspace{}+{\sc Pythia}\xspace{} is particularly bad due to a~significant discrepancy in the tail of the distribution. {\sc MC{@}NLO}\xspace{}+{\sc Herwig}\xspace{} and {\sc Powheg}\xspace{}+{\sc Herwig}\xspace{} predict shapes closer to the measured distribution. As can be seen in Fig.~\ref{fig:combined_results_with_MC}, there is a~general trend of data being softer in \ensuremath{\pt^t}{} above $200\,$GeV compared to all generators. The shape of the \ensuremath{m_{\ttbar}}{} distribution is best described by {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} and {\sc Powheg}\xspace{}+{\sc Herwig}\xspace{}. The \ensuremath{\pt^{\ttbar}}{} shape is described best by {\sc MC{@}NLO}\xspace{}+{\sc Herwig}\xspace{} and particularly badly by {\sc Powheg}\xspace{}+{\sc Pythia}\xspace{} while the \ensuremath{y_{\ttbar}}{} shape is described best by {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{}.
\input{pvaluesTableCombinedNorm}
The distributions are also shown compared to QCD calculations at NLO (based on {\sc MCFM}\xspace~\cite{MCFM} version 6.5 with the CT10 PDF) in
Fig.~\ref{fig:combined_results_with_NLO} and to NLO+NNLL calculations for \ensuremath{\pt^t}{}~\cite{NNLO_calc}, \ensuremath{m_{\ttbar}}{}~\cite{nnloMtt} and \ensuremath{\pt^{\ttbar}}{}~\cite{PhysRevLett.110.082001,PhysRevD.88.074004}, all using the MSTW2008NNLO~\cite{MSTW} PDF,
in Fig.~\ref{fig:combined_results_with_NNLL}. The $p$-values for these comparisons are shown in
Table~\ref{tab:pvalues_combined_norm}.
The uncertainties in the NLO predictions due to the parton
distribution functions were evaluated at the 68$\%$ confidence level (CL) using the CT10 PDF error-sets.
Another source of uncertainty considered is the one related to the factorization and renormalization scales.
The nominal value was assumed to be $\mu = \ensuremath{m_{\mathrm{t}}}$ for both scales, and both scales were varied simultaneously
up to $2 m_t$ and down to $m_t/2$.
The full covariance matrix, including the bin-wise correlations induced by the uncertainties in the scale and in
the different PDF components, was used for the $\chi^2$ evaluation.
For the NLO+NNLL predictions of \ensuremath{m_{\ttbar}}{} and \ensuremath{\pt^{\ttbar}}{} spectra, the calculation is performed using the mass of the \ttbar{} system as the dynamic scale of the process. The uncertainties come from doubling and halving this scale and
from the PDF uncertainty evaluated at the 68$\%$ CL using the MSTW2008NNLO PDF error-sets.
For the NLO+NNLL prediction of the \ensuremath{\pt^t}{} spectrum, besides the fixed scale uncertainty, the contribution of the
alternative dynamic scale $\mu = \sqrt{m_t^2 + {\ensuremath{\pt^t}}^2}$ is also included; in this case the PDF uncertainty is not provided.
For both the above theoretical calculations the bin-wise correlations were taken into account in evaluating the $\chi^2$s and $p$-values, which are shown in Table~\ref{tab:pvalues_combined_norm}.
The data are softer than both the NLO and NLO+NNLL QCD calculations in the
tail of the \ensuremath{\pt^t}{} distribution.
The measured \ensuremath{m_{\ttbar}}{} spectrum also falls more quickly than either the
NLO or NLO+NNLL predictions. The \ensuremath{\pt^{\ttbar}}{} spectrum agrees poorly with both the NLO and NLO+NNLL predictions.
No electroweak corrections are included in these predictions; these corrections were
shown in Refs.~\cite{Manohar:2012rs,Bernreuther:2008aw,epj.c51.37,kuhn:ttp13.015} to have non-negligible effects on the \ensuremath{\pt^t}{} and \ensuremath{m_{\ttbar}}{} distributions.
The predictions of various NLO PDF sets are evaluated using {\sc MCFM}\xspace, interfaced to four different PDF sets: CT10~\cite{CT10},
MSTW2008NLO~\cite{MSTW}, NNPDF2.3~\cite{NNPDF} and HERAPDF1.5~\cite{HERA}. The uncertainties in the predictions
include the PDF uncertainties~\footnote{For HERAPDF1.5, only the 21 member PDFs accounting for experimental uncertainties are taken into account.} and the fixed scale uncertainties already described.
The comparisons between data and the
different predictions are presented in Fig.~\ref{fig:pdf_ratios} for the normalized differential cross-sections and
the $p$-values for these comparisons are shown in Table~\ref{tab:PvalsPDF}.
The significant changes in $\chi^2$ between the different PDF sets for the \ensuremath{\pt^t}{}, \ensuremath{m_{\ttbar}}{} and \ensuremath{y_{\ttbar}}{} distributions indicate that the data can be used to improve the precision of future PDF fits.
\input{pvaluesPDFTableCombinedNorm}
As can be seen in Fig.~\ref{fig:pdf_ratios}, a~certain tension between data and all predictions is observed in the case of the top-quark $\pt$ distribution at high $\pt$ values. For the \ensuremath{m_{\ttbar}}{} distribution, the agreement with HERAPDF1.5 is better than that with the other PDF predictions. For the \ensuremath{\pt^{\ttbar}}{} distribution, one should note that {\sc MCFM}\xspace{} is effectively only a~leading-order calculation and resummation effects are expected to play an important role at low \ensuremath{\pt^{\ttbar}}{}. Finally, for the \ensuremath{\left|y_{\ttbar}\right|}\xspace{} distribution, the NNPDF2.3 and especially HERAPDF1.5 sets are in better agreement with the data.
\begin{figure*}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_Top2_lhood_pubMC_normFinal_log}\label{unf_top2_MC}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemMass_pubMC_normFinal_log}\label{unf_mass_MC}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemPt_pubMC_normFinal_log}\label{unf_pt_MC}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemAbsRap_pubMC_normFinal_log}\label{unf_rap_MC}}
\caption{(Color online) Normalized differential cross-sections for the \subref{unf_top2_MC}~transverse momentum of the hadronically decaying top quark ($\ensuremath{\pt^t}$), and the \subref{unf_mass_MC} mass ($\ensuremath{m_{\ttbar}}$), \subref{unf_pt_MC}~transverse momentum ($\ensuremath{\pt^{\ttbar}}$) and the \subref{unf_rap_MC}~absolute value of the rapidity ($\ensuremath{\left|y_{\ttbar}\right|}\xspace$) of the \ttbar{} system. Generator predictions are shown as markers for {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} (circles), {\sc MC{@}NLO}\xspace{}+{\sc Herwig}\xspace{} (squares), {\sc Powheg}\xspace{}+{\sc Herwig}\xspace{} (triangles) and {\sc Powheg}\xspace{}+{\sc Pythia}\xspace{} (inverted triangles). The markers are offset within each bin to allow for better visibility.
The gray bands indicate the total uncertainty on the data in each bin. The lower part of each figure shows the ratio of the generator predictions to data. For \ensuremath{\pt^{\ttbar}}{} the {\sc Powheg}\xspace{}+{\sc Pythia}\xspace{} marker cannot be seen in the last bin of the ratio plot because it falls beyond the axis range.
The cross-section in each bin is given as the integral of the differential cross-section over the bin width, divided by the bin width.
The calculation of the cross-sections in the last bins includes events falling outside of the bin edges, and the normalization is done within the quoted bin width.
The bin ranges along the horizontal axis (and not the position of the markers) can be associated with the normalized differential cross-section values along the vertical axis.
}
\label{fig:combined_results_with_MC}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_Top2_lhood_pubNLO_normFinal_log}\label{unf_top2_NLO}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemMass_pubNLO_normFinal_log}\label{unf_mass_NLO}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemPt_pubNLO_normFinal_log}\label{unf_pt_NLO}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemAbsRap_pubNLO_normFinal_log}\label{unf_rap_NLO}}
\caption{(Color online) Normalized differential cross-sections for the \subref{unf_top2_NLO}~transverse momentum of the hadronically decaying top-quark ($\ensuremath{\pt^t}$), and the \subref{unf_mass_NLO} mass ($\ensuremath{m_{\ttbar}}$), \subref{unf_pt_NLO}~transverse momentum ($\ensuremath{\pt^{\ttbar}}$) and the \subref{unf_rap_NLO}~absolute value of the rapidity ($\ensuremath{\left|y_{\ttbar}\right|}\xspace$) of the \ttbar{} system. The distributions are compared to NLO QCD predictions (based on {\sc MCFM}\xspace~\cite{MCFM} with the CT10 PDF). The bin ranges along the horizontal axis (and not the position of the markers) can be associated with the normalized differential cross-section values along the vertical axis. The error bars correspond to the PDF and fixed scale uncertainties in the theoretical prediction. The gray bands indicate the total uncertainty on the data in each bin. The lower part of each figure shows the ratio of the NLO QCD predictions to data.
The cross-section in each bin is given as the integral of the differential cross-section over the bin width, divided by the bin width.
The calculation of the cross-sections in the last bins includes events falling outside of the bin edges, and the normalization is done within the quoted bin width. }
\label{fig:combined_results_with_NLO}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_Top2_lhood_pubNNLL_normFinal_log}\label{unf_top2_NNLL}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemMass_pubNNLL_normFinal_log}\label{unf_mass_NNLL}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{Tag1_SystemPt_pubNNLL_normFinal_log}\label{unf_pTtt_NNLL}}
\caption{(Color online) Normalized differential cross-sections for the \subref{unf_top2_NNLL} transverse momentum of the hadronically decaying top-quark ($\ensuremath{\pt^t}$), the \subref{unf_mass_NNLL}~mass of the $\ttbar$ system ($\ensuremath{m_{\ttbar}}$), and the \subref{unf_pTtt_NNLL}~transverse momentum of the $\ttbar$ system ($\ensuremath{\pt^{\ttbar}}$) .
The distributions are compared to the predictions from
NLO+NNLL calculations for $\ensuremath{\pt^t}$~\cite{NNLO_calc}, $\ensuremath{m_{\ttbar}}$~\cite{nnloMtt} and $\ensuremath{\pt^{\ttbar}}$~\cite{PhysRevLett.110.082001,PhysRevD.88.074004},
all using the MSTW2008NNLO PDF. The bin ranges along the horizontal axis (and not the position of the markers) can be associated with the normalized differential cross-section values along the vertical axis. The error bars correspond to the fixed (and dynamic in the case of \ensuremath{\pt^t}{}) scale uncertainties in the theoretical prediction. The gray bands indicate the total uncertainty on the data in each bin. The lower part of each figure shows the ratio of the NLO+NNLL calculations to data.
The cross-section in each bin is given as the integral of the differential cross-section over the bin width, divided by the bin width.
The calculation of the cross-sections in the last bins includes events falling outside of the bin edges, and the normalization is done within the quoted bin width. }
\label{fig:combined_results_with_NNLL}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfigure[]{ \includegraphics[width=0.45\textwidth]{pdf-ratio_pt_top_hadronic_Relative} \label{pdf_pt}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{pdf-ratio_m_tt_Relative} \label{pdf_mass} }
\subfigure[]{ \includegraphics[width=0.45\textwidth]{pdf-ratio_pt_tt_Relative} \label{pdf_pttt}}
\subfigure[]{ \includegraphics[width=0.45\textwidth]{pdf-ratio_ay_tt_Relative} \label{pdf_aytt}}
\caption{(Color online) Ratios of the NLO QCD predictions~\cite{MCFM} to the measured normalized differential cross-sections for different PDF sets (CT10~\cite{CT10}, MSTW2008NLO~\cite{MSTW}, NNPDF2.3~\cite{NNPDF} and HERAPDF1.5~\cite{HERA}) (markers) for the \subref{pdf_pt}~transverse momentum of the hadronically decaying top-quark ($\ensuremath{\pt^t}$), and the \subref{pdf_mass} mass ($\ensuremath{m_{\ttbar}}$), the \subref{pdf_pttt} transverse momentum ($\ensuremath{\pt^{\ttbar}}$), and the \subref{pdf_aytt}~absolute value of the rapidity ($\ensuremath{\left|y_{\ttbar}\right|}\xspace$) of the $\ttbar$ system. The markers are offset in each bin and the bins are of equal size to allow for better visibility. The gray bands indicate the total uncertainty on the data in each bin, while the error bars denote the uncertainties in the predictions, which include the internal PDF set variations and also fixed scale uncertainties.}
\label{fig:pdf_ratios}
\end{figure*}
\clearpage
\section{Conclusion}
\label{sec:Conclusion}
Kinematic distributions of the top quarks in \ttbar{} events, selected in the {$\ell+$jets}\xspace channel, were measured using data from $7\,{\rm TeV}$ proton--proton collisions collected by the ATLAS detector at the CERN Large Hadron Collider. This dataset corresponds to an integrated luminosity of \mbox{4.6\,fb$^{-1}$}. Normalized differential cross-sections have been measured as a~function of the top-quark transverse momentum and as a~function of the mass, transverse momentum, and rapidity of the \ttbar{} system. These results agree with the previous ATLAS measurements and supersede them with a~larger dataset, smaller uncertainties, and an additional variable.
In general the Monte Carlo predictions and the QCD calculations agree with data in a~wide kinematic region. However, data are softer than all predictions in the tail of the \ensuremath{\pt^t}{} spectrum, particularly in the case of the {\sc Alpgen}\xspace{}+{\sc Herwig}\xspace{} and {\sc Powheg}\xspace{}+{\sc Pythia}\xspace{} generators.
The same trend is observed for the NLO+NNLL predictions of the \ensuremath{m_{\ttbar}}{} and \ensuremath{\pt^t}{}
spectra, which tend to be above the data in the tail of the
distributions. Nevertheless, the overall agreement is still found to be
reasonable for these two variables, while it is worst for \ensuremath{\pt^{\ttbar}}{}.
The distributions show some preference for HERAPDF1.5 when used in conjunction with a~fixed-order NLO QCD calculation. More precise conclusions about PDFs will be possible from the comparison of these measurements to future calculations at NNLO+NNLL in QCD and after including electroweak effects.
\section*{Acknowledgments}
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC,
Australia; BMWF and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP,
Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC,
China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic;
DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET, ERC and NSRF, European Union;
IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH
Foundation, Germany; GSRT and NSRF, Greece; ISF, MINERVA, GIF, DIP and Benoziyo Center,
Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO,
Netherlands; BRF and RCN, Norway; MNiSW, Poland; GRICES and FCT, Portugal; MERYS
(MECTS), Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD,
Serbia; MSSR, Slovakia; ARRS and MIZ\v{S}, Slovenia; DST/NRF, South Africa;
MICINN, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of
Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal
Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of
America.
The crucial computing support from all WLCG partners is acknowledged
gratefully, in particular from CERN and the ATLAS Tier-1 facilities at
TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France),
KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain),
ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities
worldwide.
\clearpage
\onecolumngrid
\newpage
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{V2V+V2I Network Model}\label{sec:model}
\input{model.tex}
\section{V2V Cluster Characterization}
\label{sec:cluster}
\input{clusters.tex}
\section{Performance analysis} \label{sec-perf_analysis}
\label{sec:perf_analysis}
\input{perf_analysis.tex}
\section{Extension to Multilane Highways}
\label{sec:multilane}
\input{multilane_ext.tex}
\section{Multilane Performance Evaluation}
\label{sec:multilane_perf_eval}
\input{perf_eval.tex}
\section{Revisiting the Poisson Assumption}
\label{sec:poisson}
\input{poisson.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion.tex}
\bibliographystyle{IEEEtran}
\subsection{Typical Vehicle Coverage Probability}
In this section, we shall refer to the coverage probability as the probability that a typical vehicle is connected to one or more RSUs, either directly or through V2V relaying.
Clearly the benefit of the V2V+V2I network is that it allows vehicles to relay messages from RSUs, increasing the coverage probability. We let $\pi_v$ denote the probability that a typical vehicle is connected (possibly through relaying) to the infrastructure.
Specifically, note that the typical vehicle coverage probability for the benchmark V2I network is independent of the traffic intensity. By contrast, in the V2V+V2I network, higher traffic intensities lead to longer and bigger clusters, increasing the typical vehicle coverage probability. The following result addresses the coverage probability for both networks assuming $2d \leq \lambda_r^{-1}$.
\begin{lemma}(\textbf{Coverage probability})\label{lem:coverage}
The coverage probability of a \textbf{typical vehicle} in the V2V+V2I network is given by:
\begin{equation}
\pi_v=\varphi^2 \cdot\sum\limits_{n=1}^{\infty} n \cdot(1-\varphi)^{n-1}\cdot {\color{black}F^c_{M\mid N}\left(0\mid n\right),}
\label{eq:coverage_V2V}
\end{equation}
where $F^c_{M\mid N}\left(0\mid n\right)$ is the probability that a cluster is connected to at least 1 RSU given $N=n$, given in Lemma~\ref{lem:mdi}.
The coverage probability of a typical vehicle in a V2I network is independent of $\lambda_v$ and given by:
\begin{equation}
\pi_v^{*}=\frac{2d}{\lambda_r^{-1}},\quad \mbox{ for } ~ ~ ~ 2d\le \lambda_r^{-1}.
\label{eq:coverage_noV2V}
\end{equation}
\end{lemma}
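The series in \eqref{eq:coverage_V2V} converges quickly and can be evaluated numerically. The sketch below (Python) treats the parameter $\varphi$ and the conditional probability $F^c_{M\mid N}(0\mid n)$ of Lemma~\ref{lem:mdi} as externally supplied inputs; the function used for the latter is a placeholder, not the expression of the lemma.
\begin{verbatim}
import numpy as np

def coverage_v2v(phi, Fc_M_given_N, n_max=500):
    """pi_v = phi^2 * sum_n n (1-phi)^(n-1) Fc(0|n), truncated at n_max.
    Fc_M_given_N(n) must return P(M >= 1 | N = n) from Lemma mdi."""
    n = np.arange(1, n_max + 1)
    tail = np.array([Fc_M_given_N(k) for k in n])
    return phi**2 * np.sum(n * (1 - phi)**(n - 1) * tail)

def coverage_v2i(d, rsu_spacing):
    """Coverage without relaying, valid for 2d <= inter-RSU distance."""
    return 2 * d / rsu_spacing

toy_Fc = lambda n: 1 - np.exp(-0.05 * n)    # placeholder, for illustration only
print(coverage_v2v(phi=0.1, Fc_M_given_N=toy_Fc), coverage_v2i(150., 1000.))
\end{verbatim}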
Numerical evaluations of \eqref{eq:coverage_V2V} and \eqref{eq:coverage_noV2V} are displayed in Figure~\ref{fig:results1} (left). As expected, the coverage probability is always greater for V2V+V2I and increases rapidly to 1 with the traffic load intensity $\lambda_v$ on the road. Figure~\ref{fig:results_gamma} exhibits the coverage probability for V2V+V2I for different values of the market penetration $\gamma$; it shows that the sensitivity of the coverage to the traffic intensity is higher at higher $\gamma$, e.g., for $\gamma=0.9$ the coverage probability attains a maximum at $\lambda_v\approx 25$ vehicles/km and varies notably with $\lambda_v$. Indeed, increasing $\lambda_v$ increases the effect of the blocking vehicles, eventually reaching a regime where long clusters are no longer possible and where $\pi_v$ is independent of $\lambda_v$, consistent with \eqref{eq:coverage_noV2V}. Therefore, if $\gamma <1$, $\pi_v$ eventually decreases and converges back to the value given in \eqref{eq:coverage_noV2V}.
\vspace{-0.2cm}
\begin{figure}[h]
\centering \includegraphics[width=\columnwidth]{Figures/v2v_inversion_gamma_var.pdf}
\vspace{-0.2cm}
\caption{Impact of the load in the coverage probability for different market penetrations $\gamma$.}
\label{fig:results_gamma}
\end{figure}
\vspace{-0.3cm}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/results_ecdfv2}
\vspace{0.03cm}
\caption{Empirical CDF of the typical shared rate for V2I vs V2V+V2I and different inter-RSU distances ($\gamma=\rho=1$).}
\label{fig:results_ecdf}
\end{figure}
\subsection{Typical Vehicle Shared Rate}
The shared rate seen by a typical vehicle is defined as its allocation of the multihomed RSU capacity of its cluster under max-min fair sharing, and is denoted by the random variable $R_v$. The shared rate, for both networks, i.e., V2V+V2I and V2I, thus depends on
$\lambda_v,\gamma, d, \rho^{\text{RSU}}$ and $\lambda_r^{-1}.$
\begin{theorem}(\textbf{Expected shared rate})~\label{thm:throughput}
The mean shared rates of a typical vehicle in the V2V+V2I and the V2I networks are equal, i.e., $\mathbb{E}[R_v]=\mathbb{E}[R_v^*]$, and are given by:
\begin{equation}
\mathbb{E}[R_v] =\frac{\rho^{RSU}}{\gamma\lambda_v \lambda_r^{-1}}\left(1-e^{-2\gamma\lambda_v d}\right)\le\rho^{RSU} \frac{\mathbb{E}[M]}{\mathbb{E}[N]},
\label{eq:throughput}
\end{equation}
\end{theorem}
\noindent where $\mathbb{E}[M]$ and $\mathbb{E}[N]$ can be computed using Lemma~\ref{lem:mdi}.
{\color{black} Note that the mean rates for both architectures are equal because the number of busy RSUs is the same, independently of the underlying V2V connectivity. Assuming all vehicles are infinitely backlogged, the overall downlink rate is the same, and thus so is the mean rate per vehicle.}
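Eq.~\eqref{eq:throughput} is straightforward to evaluate numerically; the short sketch below does so for an illustrative parameter set (the chosen values are examples, not those used in the figures).
\begin{verbatim}
import numpy as np

def mean_shared_rate(lam_v, gamma, d, rho_rsu, rsu_spacing):
    """Expected per-vehicle shared rate, identical for V2V+V2I and V2I."""
    return (rho_rsu / (gamma * lam_v * rsu_spacing)
            * (1 - np.exp(-2 * gamma * lam_v * d)))

# 20 vehicles/km, full penetration, d = 150 m, 1 Gbps RSUs, 1 km RSU spacing
print(mean_shared_rate(lam_v=20e-3, gamma=1.0, d=150.,
                       rho_rsu=1e9, rsu_spacing=1e3))
\end{verbatim}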
Although V2V relaying collaboration does not alter the \textit{mean} shared rate seen by vehicles (see Figure~\ref{fig:results1}, center), it significantly impacts the coverage probability and the shared rate \textit{distribution}.
\begin{theorem}(\textbf{Shared rate distribution}) \label{thm:cdf_rate}
The CDF of the shared rate in a V2V+V2I network $R_v$ satisfies:
\begin{align}
F_{R_v}(r)\ge 1- \varphi^2\sum\limits_{n=1}^{\infty} n ~ (1-\varphi)^{n-1} F^c_{M\mid N}&\left(\Bigl\lceil\frac{r n}{\rho^{\text{RSU}}}\Bigr\rceil\mid n\right), \label{eq:cdfR}
\end{align}
and $P(R_v=0)=1-\pi_v$ while that in the V2I network is given by
\begin{equation}
F_{R_v^{*}}(r)=1-\frac{2d}{\lambda_r^{-1}}\cdot Q\left( \frac{\rho^{RSU}}{r}-1 , ~ ~ 2~\gamma~\lambda_v~d \right),
\label{eq:cdfR_noV2V}
\end{equation}
where $Q$ is the regularized gamma function and $P(R_v^{*}=0)=1-\pi_v^{*}$.
Furthermore, ${R_v^{*}}\ge^{\text{icx}}{R_v}$, where \textit{icx} dominance\footnote{The definition for icx dominance is found in Definition \ref{def:dominance} in the appendix} implies:
\begin{equation}
\mbox{Var}({R_v^{*}})\ge\mbox{Var}({R_v}).
\label{eq:variance_rate}
\end{equation}
\end{theorem}
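Eq.~\eqref{eq:cdfR_noV2V} can be evaluated directly once $Q$ is identified with the regularized upper incomplete gamma function; the sketch below does so for illustrative parameters (assumed values, not those of the figures).
\begin{verbatim}
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a,x)

def cdf_rate_v2i(r, lam_v, gamma, d, rho_rsu, rsu_spacing):
    """CDF of the shared rate of a typical vehicle without V2V relaying,
    for 0 < r < rho_rsu."""
    a = rho_rsu / r - 1
    return 1 - (2 * d / rsu_spacing) * gammaincc(a, 2 * gamma * lam_v * d)

rates = np.array([10e6, 25e6, 50e6, 100e6])          # 10 to 100 Mbps
print(cdf_rate_v2i(rates, lam_v=20e-3, gamma=1.0, d=150.,
                   rho_rsu=1e9, rsu_spacing=1e3))
\end{verbatim}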
Numerical evaluations of \eqref{eq:cdfR} and \eqref{eq:cdfR_noV2V} are shown in Figure~\ref{fig:results_ecdf} and the resulting variability in Figure~\ref{fig:results_var}. These demonstrate the superiority of the V2V+V2I network architecture in terms of providing not only improved connectivity but also a substantial decrease in the shared rate variability of a typical user.
Note that in Figure~\ref{fig:results_var} we have plotted the dispersion of the per-user shared rate, defined as $\sigma/\mu$, i.e., the standard deviation over the mean of the per-user shared rate. {\color{black}In addition, we have displayed the lower bound on the dispersion for the non-V2V scenario, given by the dispersion as $\lambda_v\to\infty$.}
{\color{black}It can be observed} that the rate dispersion converges to 0 for the V2V+V2I network. By contrast, in the V2I network the dispersion of the shared rate is bounded below. These results show that the V2V+V2I network at reasonably high vehicle density will provide vehicles with an increasingly stable and almost deterministic shared rate.
\subsection{Multihoming Redundancy}
RSU multihoming provides connection redundancy to a cluster. This redundancy in principle improves the reliability of vehicle connectivity in presence of unreliable/obstructed V2I links. The following result follows immediately from \eqref{eq:throughput} in Theorem~\ref{thm:throughput}.
\begin{corollary} (\textbf{Multihoming / redundancy})
The \textbf{expected number of RSUs} $\mathbb{E}[M]$ per cluster is bounded by:
\begin{equation}
\mathbb{E}[M]\ge \frac{ 1- e^{-2 \gamma \lambda_v d}}{\gamma\lambda_v \lambda_r^{-1} (1-\gamma+\gamma \cdot e^{-\gamma \lambda_v d})},
\end{equation}
which for full market penetration corresponds to
\begin{equation}
\mathbb{E}[M]\ge \frac{ e^{\lambda_v d}- e^{-\lambda_v d}}{\lambda_v \lambda_r^{-1}}=\frac{2\sinh(\lambda_v d)}{\lambda_v \lambda_r^{-1}}.
\end{equation}
\end{corollary}
As can be observed from this equation, $\mathbb{E}[M]$, i.e., the expected number of RSUs that the cluster of a typical vehicle is connected to, grows rapidly with the traffic intensity $\lambda_v$ and the vehicle communication range $d$.
A similar trend is observed in Figure~\ref{fig:results1} (right), where we have plotted $\mathbb{E}[M_v]$, the mean number of RSUs a typical vehicle would see its cluster connected to. We see a rapid increase in the expected number of RSUs as $\lambda_v$ increases. These results confirm an exponential growth of redundancy, suggesting possibly substantial improvements in the reliability of multihomed systems.
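A quick numerical evaluation of the bound, which reduces to $2\sinh(\lambda_v d)/(\lambda_v \lambda_r^{-1})$ at full market penetration, is sketched below for illustrative parameter values.
\begin{verbatim}
import numpy as np

def mean_rsus_lower_bound(lam_v, gamma, d, rsu_spacing):
    """Lower bound on E[M], the expected number of RSUs per cluster."""
    num = 1 - np.exp(-2 * gamma * lam_v * d)
    den = gamma * lam_v * rsu_spacing * (1 - gamma
                                         + gamma * np.exp(-gamma * lam_v * d))
    return num / den

for lam in [5e-3, 10e-3, 20e-3, 40e-3]:      # 5 to 40 vehicles/km, d = 150 m
    print(lam * 1e3, mean_rsus_lower_bound(lam, gamma=1.0,
                                           d=150., rsu_spacing=1e3))
\end{verbatim}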
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Figures/dispersion_bb.pdf}
\caption{Dispersion (standard deviation over the mean) of the vehicle shared rates under the {\color{black}V2V+V2I and V2I-only} scenarios, and different inter-RSU distances ($d=150$m, $\gamma=1$).}
\label{fig:results_var}
\vspace{-0.15cm}
\end{figure}
The benefit of the redundancy is also reflected in Figure \ref{fig:results_ge2} which exhibits the probability that a typical vehicle benefits from multihoming as the vehicle intensity increases. This probability reaches values very close to 1 under heavy and congested traffic conditions, for the given values of $\lambda_r^{-1}$, providing evidence of the potential for higher reliability through multihoming.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/PMV_ge2_bb}
\caption{Redundancy: Probability for a typical vehicle cluster to be connected to 2 or more RSUs.}
\label{fig:results_ge2}
\end{figure}
\subsection{Homogeneous Multilane Highways}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/DoF}
\caption{Typical vehicle's coverage probability as the number of driving ``degrees of freedom'' $\eta$ increases, for different penetration rates.}
\label{fig:DoF}
\vspace{-0.4cm}
\end{figure}
\begin{figure*}[h]
\centering
\begin{subfigure}[h]{0.5\textwidth}
\centering
{{\includegraphics[width=0.97\textwidth]{Figures/simplex}~\\~\\ }}%
\caption{Coverage probability for $\eta=3$ lanes.}
\end{subfigure}%
\hfill
\begin{subfigure}[h]{0.45\textwidth}
\centering
{{\includegraphics[width=1\textwidth]{Figures/mergedFigure3}~\\~\\ }}%
\caption{Coverage probability for different configurations.}
\end{subfigure}%
\caption{Multilane configuration coverage probability analysis for $\gamma=0.8, d=150$m, $\lambda_r^{-1}=1$Km.}%
\label{fig:configurations}%
\vspace{-0.5cm}
\end{figure*}
Figure~\ref{fig:DoF} illustrates the variation in the coverage probability $\pi_v$ as $\eta$ increases, while the overall traffic intensity on the highway ($\lambda_v=20$ vehicles/km) remains unaltered. This can be interpreted as the effect of increasing the vehicles' ``degrees of freedom'' to overcome blocking by legacy vehicles.
A first observation is that the marginal gain in performance is most considerable when increasing the number of lanes from 1 to 2, while further increments in the number of lanes result in smaller relative gains. An explanation of this effect is that vehicles in the V2V+V2I network will see on average half as many blockers when passing from $\eta=1$ to $2$, while the relative decrease in the average number of blockers is smaller for higher values of $\eta$. Note that increasing the ``degrees of freedom'' does not affect the performance of the system under full market penetration, as the same clusters will be formed for any value of $\eta$. From this result, one can infer that, as long as it is greater than or equal to 2, the number of lanes of a highway will not substantially affect the connectivity probability.
\subsection{Heterogeneous Multilane Highways}
Next, we further explore the impact of heterogeneous traffic intensity across lanes on the coverage probability $\pi_v$. Note that such heterogeneity is typical in highways nowadays in a free-flow regime, since for instance a greater density of slower vehicles is seen in the right hand lanes.
Figure~\ref{fig:configurations}(a) exhibits the effect of the vehicle distribution on a three-lane highway.
In this figure, each coordinate represents the proportion of vehicles driving on each lane; therefore, all possible configurations lie on the simplex. We observe that the homogeneous configuration has the best performance, as it offers the best balance in minimizing the effect of blockers both within and across lanes. The results show that performance deteriorates slowly when moving away from the homogeneous configuration, only experiencing notable decreases for extreme distributions, e.g., when all users are concentrated on one lane. In order to extrapolate these results to greater values of $\eta$, we define five different types of heterogeneous lane intensity distributions:
\begin{itemize}
\item Homogeneous: all lanes have equal vehicle intensities, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}]$.
\item V: traffic is symmetrically and gradually concentrated around the leftmost and rightmost lanes of the highway, such that the intensity is minimized in the middle and maximized in the first and last lanes, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{1}{3}, \frac{2}{15}, \frac{1}{15}, \frac{2}{15}, \frac{1}{3}]$.
\item C: traffic is restricted to two lanes with identical intensities while $\eta-2$ lanes are empty, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{1}{2}, 0, 0, 0, \frac{1}{2}]$.
\item I: traffic is restricted to one lane with $\eta-1$ lanes empty, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [1, 0, 0, 0, 0]$.
\item L: $90\%$ of traffic is in the first lane while the other $10\%$ is evenly distributed across the $\eta-1$ remaining lanes, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{9}{10}, \frac{1}{40}, \frac{1}{40}, \frac{1}{40}, \frac{1}{40}]$.
\end{itemize}
Figure~\ref{fig:configurations}(b) confirms the trends exhibited in Figure~\ref{fig:configurations}(a) as the number of lanes of the highway increases. The homogeneous distribution remains best as compared to the V, C, L and I configurations.
We note that unlike in Figure~\ref{fig:DoF}, the total number of vehicles increases with $\eta$ in the highway system.
An interesting insight which can be inferred from these results is the idea that congested highways (large $\lambda_v$) may have a better connectivity performance than free-flowing systems, as the intensity distribution is typically uniform across all the lanes in such cases.
\subsection{V2V Segregation Impact}
While manufacturers progressively release new vehicle models equipped with the V2V+V2I technology, we envision a transition period during which the roads will be shared among the new V2V-enabled and older legacy vehicles.
In order to accelerate the integration and the spread of new automotive technologies, policies restricting specific lanes to driverless and V2V-enabled vehicles only might be put into place. This is akin to the current concept of high-occupancy vehicle lane.
We analyze the effect on the coverage probability of reserving the first lane for V2V-enabled vehicles. We define $\alpha$ as the fraction of V2V-enabled vehicles driving on this lane, i.e., the first lane has a vehicle intensity of $\alpha\gamma\lambda_v$ with only V2V-enabled vehicles, while the others are mixed and uniformly distributed. Figure~\ref{fig:segregation} shows the effect of $\alpha$ on the network performance.
\begin{figure}[!t]
\vspace{0cm}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/segregation_truncated}
\caption{Connectivity of $\alpha$-segregated scenario for different penetration rates.} \label{fig:segregation}
\vspace{-0.3cm}
\end{figure}
We observe that for $\alpha$ large enough, segregation does indeed improve coverage, particularly at low market penetration levels, implying that such a policy would lead to improved connectivity in the early stages of the deployment of V2V-capable vehicles.
\section{Introduction}
\label{sec:introduction}
\input{introduction.tex}
\section{V2V+V2I Network Model}\label{sec:model}
\input{model.tex}
\section{V2V Cluster Characterization}
\label{sec:cluster}
\input{clusters.tex}
\section{Performance analysis} \label{sec-perf_analysis}
\label{sec:perf_analysis}
\input{perf_analysis.tex}
\section{Extension to Multilane Highways}
\label{sec:multilane}
\input{multilane_ext.tex}
\section{Multilane Performance Evaluation}
\label{sec:multilane_perf_eval}
\input{perf_eval.tex}
\section{Revisiting the Poisson Assumption}
\label{sec:poisson}
\input{poisson.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion.tex}
\bibliographystyle{IEEEtran}
\subsection{Typical Vehicle Coverage Probability}
In this section, we shall refer to the coverage probability as the probability that a typical vehicle is connected to one or more RSUs, either directly or through V2V relaying.
Clearly the benefit of the V2V+V2I network is that it allows vehicles to relay messages from RSUs, increasing the coverage probability. We let $\pi_v$ denote the probability that a typical vehicle is connected (possible through relaying) to the infrastructure.
Specifically, note that the typical vehicle coverage probability for the benchmark V2I network is independent of the traffic intensity. By contrast, in the V2V+V2I network, higher traffic intensities lead to longer and bigger clusters, increasing the typical vehicle coverage probability. The following result addresses the coverage probability for both networks assuming $2d \leq \lambda_r^{-1}$.
\begin{lemma}(\textbf{Coverage probability})\label{lem:coverage}
The coverage probability of a \textbf{typical vehicle} in the V2V+V2I network is given by:
\begin{equation}
\pi_v=\varphi^2 \cdot\sum\limits_{n=1}^{\infty} n \cdot(1-\varphi)^{n-1}\cdot {\color{black}F^c_{M\mid N}\left(0\mid n\right),}
\label{eq:coverage_V2V}
\end{equation}
where $F^c_{M\mid N}\left(0\mid n\right)$ is the probability that a cluster is connected to at least 1 RSU given $N=n$, given in Lemma~\ref{lem:mdi}.
The coverage probability of a typical vehicle in a V2I network is independent of $\lambda_v$ and given by:
\begin{equation}
\pi_v^{*}=\frac{2d}{\lambda_r^{-1}},\quad \mbox{ for } ~ ~ ~ 2d\le \lambda_r^{-1}.
\label{eq:coverage_noV2V}
\end{equation}
\end{lemma}
Numerical evaluations of \eqref{eq:coverage_V2V} and \eqref{eq:coverage_noV2V} are displayed in Figure \ref{fig:results1} (left). As expected, the coverage probability is always greater for V2V+V2I and increases rapidly to 1 with the traffic load intensity $\lambda_v$ on the road. Figure~\ref{fig:results_gamma} exhibits the coverage probability for V2V+V2I as a function of the penetration $\gamma$; it shows that the sensitivity of the coverage to the traffic intensity is higher at higher $\gamma$, e.g., for $\gamma=0.9$ where the coverage probability attains a maximum for $\lambda_v\approx 25$ vehicles/km and varies notably with $\lambda_v$. Indeed, increasing $\lambda_v$ increases the effect of the blocking vehicles, reaching a regime where long clusters are not possible and where $\pi_v$ is independent of $\lambda_v$, consistently with \eqref{eq:coverage_noV2V}. Therefore, if $\gamma <1$, $\pi_v$ eventually decreases and converges back to the value presented in \eqref{eq:coverage_noV2V}.
\vspace{-0.2cm}
\begin{figure}[h]
\centering \includegraphics[width=\columnwidth]{Figures/v2v_inversion_gamma_var.pdf}
\vspace{-0.2cm}
\caption{Impact of the load in the coverage probability for different market penetrations $\gamma$.}
\label{fig:results_gamma}
\end{figure}
\vspace{-0.3cm}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/results_ecdfv2}
\vspace{0.03cm}
\caption{Empirical CDF of the typical shared rate for V2I vs V2V+V2I and different inter-RSU distances ( $\gamma=\rho=1$).}
\label{fig:results_ecdf}
\end{figure}
\subsection{Typical Vehicle Shared Rate}
The shared rate seen by a typical vehicle is defined as its allocations of the multihomed RSU capacity of its cluster under max-min fair sharing and denoted by the random variable $R_v$. The shared rate, for both networks, i.e., V2V+V2I and V2I, thus depends on
$\lambda_v,\gamma, d, \rho^{\text{RSU}}$ and $\lambda_r^{-1}.$
\begin{theorem}(\textbf{Expected shared rate})~\label{thm:throughput}
The mean shared rates of a typical vehicle in the V2V+V2I and the V2I networks are equal, i.e., $\mathbb{E}[R_v]=\mathbb{E}[R_v^*]$, and are given by:
\begin{equation}
\mathbb{E}[R_v] =\frac{\rho^{RSU}}{\gamma\lambda_v \lambda_r^{-1}}\left(1-e^{-2\gamma\lambda_v d}\right)\le\rho^{RSU} \frac{\mathbb{E}[M]}{\mathbb{E}[N]},
\label{eq:throughput}
\end{equation}
\end{theorem}
\noindent where $\mathbb{E}[M]$ and $\mathbb{E}[N]$ can be computed using Lemma~\ref{lem:mdi}.
Note that the mean rates for both architectures are equal because the number of busy RSUs is the same, independently of the underlying V2V connectivity. Assuming all vehicles are infinitely backlogged, the overall downlink rate is the same and thus so is the mean rate per vehicle.
Although V2V relaying collaboration does not alter the \textit{mean} shared rate seen by vehicles (see Figure \ref{fig:results1}, center), it significantly impacts the coverage probability and the shared rate \textit{distribution}.
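For concreteness, a short Python sketch of how \eqref{eq:throughput} can be evaluated is given below; the numerical values (RSU capacity, ranges, intensities) are illustrative placeholders rather than values taken from the paper.
\begin{verbatim}
import numpy as np

def mean_shared_rate(rho_rsu, gamma, lam_v, d, lam_r):
    # E[R_v] = rho_RSU / (gamma * lam_v * lam_r^{-1}) * (1 - exp(-2 gamma lam_v d)),
    # identical for the V2V+V2I and V2I architectures.
    return rho_rsu * lam_r / (gamma * lam_v) * (1.0 - np.exp(-2.0 * gamma * lam_v * d))

# Example: 100 Mbps RSUs every km, d = 150 m, 20 vehicles/km, full penetration.
print(mean_shared_rate(rho_rsu=100.0, gamma=1.0, lam_v=20.0, d=0.15, lam_r=1.0))
\end{verbatim}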
\begin{theorem}(\textbf{Shared rate distribution}) \label{thm:cdf_rate}
The CDF of the shared rate in a V2V+V2I network $R_v$ satisfies:
\begin{align}
F_{R_v}(r)\ge 1- \varphi^2\sum\limits_{n=1}^{\infty} n ~ (1-\varphi)^{n-1} F^c_{M\mid N}&\left(\Bigl\lceil\frac{r n}{\rho^{\text{RSU}}}\Bigr\rceil\mid n\right), \label{eq:cdfR}
\end{align}
and $P(R_v=0)=1-\pi_v$ while that in the V2I network is given by
\begin{equation}
F_{R_v^{*}}(r)=1-\frac{2d}{\lambda_r^{-1}}\cdot Q\left( \frac{\rho^{RSU}}{r}-1 ,\; 2\gamma\lambda_v d \right),
\label{eq:cdfR_noV2V}
\end{equation}
where $Q$ is the regularized upper incomplete gamma function and $P(R_v^{*}=0)=1-\pi_v^{*}$.
Furthermore, ${R_v^{*}}\ge^{\text{icx}}{R_v}$, where \textit{icx} dominance\footnote{The definition of icx dominance is given in Definition \ref{def:dominance} in the appendix.} implies:
\begin{equation}
\mbox{Var}({R_v^{*}})\ge\mbox{Var}({R_v}).
\label{eq:variance_rate}
\end{equation}
\end{theorem}
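A sketch of how \eqref{eq:cdfR_noV2V} could be evaluated numerically is shown below, interpreting $Q$ as the regularized upper incomplete gamma function (as implemented by \texttt{scipy.special.gammaincc}); all parameter values are placeholders.
\begin{verbatim}
import numpy as np
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a, x)

def cdf_rate_v2i(r, rho_rsu, gamma, lam_v, d, lam_r):
    # F_{R_v*}(r) = 1 - (2 d / lam_r^{-1}) * Q(rho_RSU / r - 1, 2 gamma lam_v d),
    # valid for 0 < r < rho_rsu so that the first argument of Q is positive.
    a = rho_rsu / r - 1.0
    x = 2.0 * gamma * lam_v * d
    return 1.0 - 2.0 * d * lam_r * gammaincc(a, x)

r_grid = np.linspace(1.0, 99.0, 25)   # Mbps, illustrative
F = [cdf_rate_v2i(r, rho_rsu=100.0, gamma=1.0, lam_v=20.0, d=0.15, lam_r=1.0)
     for r in r_grid]
\end{verbatim}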
Numerical evaluations of \eqref{eq:cdfR} and \eqref{eq:cdfR_noV2V} are shown in Figure~\ref{fig:results_ecdf} and the resulting variability in Figure~\ref{fig:results_var}. These demonstrate the superiority of the V2V+V2I network architecture in providing not only improved connectivity but also a substantial decrease in the shared rate variability seen by a typical user.
Note that in Figure~\ref{fig:results_var} we have plotted the dispersion of the per-user shared rate, defined as $\sigma/\mu$, i.e., the ratio of the standard deviation to the mean of the per-user shared rate. In addition, we have displayed the lower bound on the dispersion for the non-V2V scenario, given by the dispersion as $\lambda_v\to\infty$.
It can be observed that the rate dispersion converges to 0 for the V2V+V2I network. By contrast, in the V2I network the dispersion of the shared rate is bounded below. These results show that, at reasonably high vehicle densities, the V2V+V2I network provides vehicles with an increasingly stable and almost deterministic shared rate.
\subsection{Multihoming Redundancy}
RSU multihoming provides connection redundancy to a cluster. This redundancy in principle improves the reliability of vehicle connectivity in the presence of unreliable or obstructed V2I links. The following result follows immediately from \eqref{eq:throughput} in Theorem~\ref{thm:throughput}.
\begin{corollary} (\textbf{Multihoming / redundancy})
The \textbf{expected number of RSUs} $\mathbb{E}[M]$ per cluster is bounded by:
\begin{equation}
\mathbb{E}[M]\ge \frac{ 1- e^{-2 \gamma \lambda_v d}}{\gamma\lambda_v \lambda_r^{-1} (1-\gamma+\gamma \cdot e^{-\gamma \lambda_v d})},
\end{equation}
which for full market penetration corresponds to
\begin{equation}
\mathbb{E}[M]\ge \frac{ e^{\lambda_v d}- e^{-\lambda_v d}}{\lambda_v \lambda_r^{-1}}=\frac{2\sinh(\lambda_v d)}{\lambda_v \lambda_r^{-1}}.
\end{equation}
\end{corollary}
As can be observed from this equation, $\mathbb{E}[M]$, i.e., the expected number of RSUs to which the cluster of a typical vehicle is connected, grows rapidly with the traffic intensity $\lambda_v$ and the vehicle communication range $d$.
A similar trend is observed in Figure \ref{fig:results1} (right), where we have plotted $\mathbb{E}[M_v]$, the mean number of RSUs a typical vehicle would see its cluster connected to. We see a rapid increase in the expected number of RSUs as $\lambda_v$ increases. These results confirm an exponential growth of redundancy, suggesting potentially substantial improvements in the reliability of multihomed systems.
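The corollary can also be checked numerically; the sketch below evaluates the general lower bound on $\mathbb{E}[M]$ and verifies that it reduces to the hyperbolic-sine expression at full market penetration (parameter values are illustrative).
\begin{verbatim}
import numpy as np

def mean_rsus_lower_bound(gamma, lam_v, d, lam_r):
    # E[M] >= (1 - exp(-2 gamma lam_v d)) /
    #         (gamma lam_v lam_r^{-1} (1 - gamma + gamma exp(-gamma lam_v d)))
    num = 1.0 - np.exp(-2.0 * gamma * lam_v * d)
    den = (gamma * lam_v / lam_r) * (1.0 - gamma + gamma * np.exp(-gamma * lam_v * d))
    return num / den

lam_v, d, lam_r = 20.0, 0.15, 1.0
full = mean_rsus_lower_bound(1.0, lam_v, d, lam_r)
assert np.isclose(full, 2.0 * np.sinh(lam_v * d) * lam_r / lam_v)
\end{verbatim}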
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{Figures/dispersion_bb.pdf}
\caption{Dispersion (standard deviation over the mean) of the vehicle shared rates under the V2V+V2I and V2I-only scenarios, for different inter-RSU distances ($d=150$m, $\gamma=1$).}
\label{fig:results_var}
\vspace{-0.15cm}
\end{figure}
The benefit of the redundancy is also reflected in Figure \ref{fig:results_ge2} which exhibits the probability that a typical vehicle benefits from multihoming as the vehicle intensity increases. This probability reaches values very close to 1 under heavy and congested traffic conditions, for the given values of $\lambda_r^{-1}$, providing evidence of the potential for higher reliability through multihoming.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/PMV_ge2_bb}
\caption{Redundancy: Probability for a typical vehicle cluster to be connected to 2 or more RSUs.}
\label{fig:results_ge2}
\end{figure}
\subsection{Homogeneous Multilane Highways}
\begin{figure}[h]
\centering
\includegraphics[width=\columnwidth]{Figures/DoF}
\caption{Typical vehicle coverage probability as the number of driving ``degrees of freedom'' $\eta$ increases, for different penetration rates.}
\label{fig:DoF}
\vspace{-0.4cm}
\end{figure}
\begin{figure*}[h]
\centering
\begin{subfigure}[h]{0.5\textwidth}
\centering
{{\includegraphics[width=0.97\textwidth]{Figures/simplex}~\\~\\ }}%
\caption{Coverage probability for $\eta=3$ lanes.}
\end{subfigure}%
\hfill
\begin{subfigure}[h]{0.45\textwidth}
\centering
{{\includegraphics[width=1\textwidth]{Figures/mergedFigure3}~\\~\\ }}%
\caption{Coverage probability for different configurations.}
\end{subfigure}%
\caption{Multilane configuration coverage probability analysis for $\gamma=0.8, d=150$m, $\lambda_r^{-1}=1$Km.}%
\label{fig:configurations}%
\vspace{-0.5cm}
\end{figure*}
Figure~\ref{fig:DoF} illustrates the variation in the coverage probability $\pi_v$ as $\eta$ increases while the overall traffic intensity on the highway ($\lambda_v=20$ vehicles/km) remains unaltered. This can be interpreted as the effect of increasing the vehicles' ``degrees of freedom'' to overcome blocking by legacy vehicles.
A first observation is that the marginal gain in performance is largest when increasing the number of lanes from 1 to 2, while further increments in the number of lanes result in smaller relative gains. An explanation of this effect is that vehicles in the V2V+V2I network will on average see half as many blockers when passing from $\eta=1$ to $2$, while the relative decrease in the average number of blockers is smaller for higher values of $\eta$. Note that increasing the ``degrees of freedom'' does not affect the performance of the system under full market penetration, as the same clusters will be formed for any value of $\eta$. From this result, one can infer that the number of lanes of a highway, as long as it is at least 2, will not substantially affect the connectivity probability.
\subsection{Heterogeneous Multilane Highways}
Next, we further explore the impact of heterogeneous traffic intensity across lanes on the coverage probability $\pi_v$. Note that such heterogeneity is typical on today's highways in a free-flow regime since, for instance, a greater density of slower vehicles is seen in the right-hand lanes.
Figure~\ref{fig:configurations}(a) exhibits the effect of the vehicle distribution on a three-lane highway.
In this figure, each coordinate represents the proportion of vehicles driving on each lane; therefore, all possible configurations lie on the simplex. We observe that the homogeneous configuration has the best performance, as it offers the best balance in minimizing the effect of blockers both within and across lanes. The results show that performance deteriorates slowly when moving away from the homogeneous configuration, only experiencing notable decreases for extreme distributions, e.g., when all users are concentrated on one lane. In order to extrapolate these results to greater values of $\eta$, we define five different types of heterogeneous lane intensity distributions:
\begin{itemize}
\item Homogeneous: all lanes have equal vehicle intensities, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}]$.
\item V: traffic is symmetrically and gradually concentrated around the leftmost and rightmost lanes of the highway, such that the intensity is minimized in the middle and maximized in the first and last lanes, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{1}{3}, \frac{2}{15}, \frac{1}{15}, \frac{2}{15}, \frac{1}{3}]$.
\item C: traffic is restricted to two lanes with identical intensities while $\eta-2$ lanes are empty, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{1}{2}, 0, 0, 0, \frac{1}{2}]$.
\item I: traffic is restricted to one lane with $\eta-1$ lanes empty, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [1, 0, 0, 0, 0]$.
\item L: $90\%$ of traffic is in the first lane while the other $10\%$ is evenly distributed across the $\eta-1$ remaining lanes, e.g. for $\eta=5$, $\boldsymbol{\lambda} = \lambda_v \eta \cdot [\frac{9}{10}, \frac{1}{40}, \frac{1}{40}, \frac{1}{40}, \frac{1}{40}]$.
\end{itemize}
Figure~\ref{fig:configurations}(b) confirms the trends exhibited in Figure~\ref{fig:configurations}(a) as the number of lanes of the highway increases. The homogeneous distribution remains best as compared to the V, C, L and I configurations.
We note that unlike in Figure~\ref{fig:DoF}, the total number of vehicles increases with $\eta$ in the highway system.
An interesting insight from these results is that congested highways (large $\lambda_v$) may exhibit better connectivity performance than free-flowing systems, since in such cases the intensity distribution is typically uniform across all lanes.
\subsection{V2V Segregation Impact}
As manufacturers progressively release new vehicle models equipped with V2V+V2I technology, we envision a transition period during which the roads will be shared by new V2V-enabled vehicles and older legacy vehicles.
In order to accelerate the integration and spread of new automotive technologies, policies restricting specific lanes to driverless and V2V-enabled vehicles only might be put into place. This is akin to the current concept of high-occupancy vehicle lanes.
We analyze the effect on the coverage probability of reserving the first lane for V2V-enabled vehicles, and we define $\alpha$ as the percentage of V2V-enabled vehicles driving on this lane, i.e., the first lane has a vehicle intensity of $\alpha\gamma\lambda_v$ with only V2V-enabled vehicles, while the other lanes are mixed and uniformly distributed. Figure~\ref{fig:segregation} shows the effect of $\alpha$ on the network performance.
\begin{figure}[!t]
\vspace{0cm}
\centering
\includegraphics[width=0.95\columnwidth]{Figures/segregation_truncated}
\caption{Connectivity of $\alpha$-segregated scenario for different penetration rates.} \label{fig:segregation}
\vspace{-0.3cm}
\end{figure}
We observe that for $\alpha$ large enough, segregation does indeed improve coverage, particularly at low market penetration levels, implying that such a policy would lead to improved connectivity in the early stages of V2V-capable vehicle deployment.
\subsection{Validity of the Poisson Assumption}
We first explored the degree to which the PPP assumption might hold for a detailed simulation of vehicles on the road. The system-level simulator used is an enhanced version of the open-source automotive Intersection Management (AIM4) simulator \cite{Sto18}, which captures several features of real traffic patterns such as vehicle dimensions, vehicle types, and vehicle velocities, as well as realistic car-following and car-overtaking models.
Figure~\ref{fig:Poisson_confirmation} shows the distribution of the inter-vehicle distances obtained in the simulator. The simulated traffic indeed leads to a configuration where the inter-vehicular distance is exponentially distributed, characterizing a PPP. The cumulative distribution function of the uniform distribution is also shown for comparison. This property holds for $\lambda_v = 14$ vehicles/km/lane, but it generalizes to any $\lambda_v$ small enough to remain in a free-flow regime, as well as to any other number of lanes on the highway. Note that the results shown in Figure~\ref{fig:Poisson_confirmation} correspond to the inter-arrival distances of the projection of cars in the three lanes onto a common axis; hence, although vehicles on a given lane in the simulator cannot be closer than their dimensions permit, the projections of the vehicles' centers from the three lanes can be arbitrarily close.
\begin{figure}[h]
\vspace{0.0cm}
\centering
\includegraphics[trim={1.5cm 0 1.5cm 0},clip, width=0.9\columnwidth]{Figures/Poisson_confirmation}
\caption{Comparison of the simulated vehicle inter-arrival distance CDF with exponential and uniform random variables, on a collapsed 3 lanes highway system ($\eta = 3$).} \label{fig:Poisson_confirmation}
\end{figure}
Therefore, we expect the observations and conclusions drawn from Figures~\ref{fig:results1}-\ref{fig:results_ge2} in the single-lane scenario to apply in the multilane configuration as well. Moreover, our analysis in Sections~\ref{sec:multilane} and \ref{sec:multilane_perf_eval} predicts improved performance compared to the single-lane case. For instance, we expect a higher probability of connectivity, better redundancy, and improved per-user shared rate, due to the fact that clusters can be larger in size and that blocking vehicles have a less severe impact on the others.
\subsection{Insight on Alternative Distributions}
Although the PPP assumption will be a good fit in certain regimes, it will still fail for others that may arise in the future, e.g., where cars may intentionally form platoons to increase highway throughput. To better understand how such patterns might affect connectivity, in this section we ask the question ``What is the best possible configuration of cars, i.e., resulting in the best connectivity metrics?''. We shall focus on two performance metrics: coverage $\pi_v$ and mean rate per user.
Two regimes can be distinguished. The first one corresponds to situations where $\lambda_v \geq 1/d$, i.e., where the vehicle density is large enough that vehicles can be spaced at most $d$ apart. In such a scenario, vehicles would form a single infinite cluster, leading to $\pi_v = 1$ and the maximum mean rate per user since all the RSUs are in use. The other regime of interest is where $\lambda_v < 1/d$. Consider first a configuration where all the clusters in the network are of the same size. Then spacing the vehicles by $d$ within the cluster would ensure maximal cluster length, and hence maximal $\pi_v$ and $\mathbb{E}[R_v]$, as this would maximize the ``space covered'' by clusters and thus the RSU busy time. Similarly, spacing vehicles in adjacent clusters by $2d$ would also maximize $\mathbb{E}[R_v]$, without affecting the coverage.
Following these two rules, we derive expressions for $\pi_v$ and $u$, the average RSU utilization capturing the same information as $\mathbb{E}[R_v]$. For a fixed cluster size $n$:
\begin{equation}
\pi_v(n) = \min\left[(n+1)\cdot d \cdot \lambda_r,\, 1\right],
\end{equation}
\begin{equation}
u(n) = \min\left[\frac{n+1}{n}\cdot d \cdot \lambda_v,\, 1\right].
\end{equation}
Clearly, as $n$ increases, $\pi_v(n)$ increases while $u(n)$ decreases. We exhibit that trend through a tradeoff curve between coverage and throughput as a function of $n$ in Figure \ref{fig:tradeoff_curves}:
\begin{figure}[h]
\vspace{0.0cm}
\centering
\includegraphics[width=1\columnwidth]{Figures/tradeoff4}
\caption{Tradeoff curve between connectivity $\pi_v$ and RSU utilization $u$, for different $\lambda_v$ (in vehicles/km), and the achievable performance by mixing cluster sizes.} \label{fig:tradeoff_curves}
\end{figure}
Figure~\ref{fig:tradeoff_curves} exhibits the tradeoff between connectivity and throughput. In a low density regime, vehicles form longer clusters but cover less area as the cluster size $n$ increases, improving the connectivity but reducing the average RSU utilization, and hence the mean rate per user. We note that when the vehicle density $\lambda_v$ is large enough, the tradeoff does not occur as vehicles can get full connectivity and maximum mean rate per user.
In scenarios where mixing of cluster sizes is allowed, clusters can achieve even better performance, represented by a straight line between any two points on the tradeoff curves. We note that the best possible mixing is a combination of clusters of size 1, i.e., isolated vehicles, and clusters of size $n = \left\lfloor \frac{1}{d\lambda_r}+1 \right\rfloor$. The tradeoff curves associated with such mixings are drawn as dashed lines in Figure~\ref{fig:tradeoff_curves}. Intuitively, clusters of size 1 help to maximize the total area covered by the clusters, while the largest clusters increase the connectivity probability of a typical vehicle. Different combinations of these two cluster sizes can be formed to reach any specific connectivity or throughput target.
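The tradeoff described above is easy to reproduce numerically; the following sketch simply evaluates $\pi_v(n)$ and $u(n)$ over a range of cluster sizes (the values of $d$, $\lambda_r$ and $\lambda_v$ are placeholders).
\begin{verbatim}
import numpy as np

def tradeoff(n, d, lam_r, lam_v):
    # Coverage pi_v(n) and RSU utilization u(n) for equally sized clusters of n
    # vehicles spaced d apart, with adjacent clusters separated by 2d.
    pi_v = np.minimum((n + 1) * d * lam_r, 1.0)
    u = np.minimum((n + 1) / n * d * lam_v, 1.0)
    return pi_v, u

n = np.arange(1, 51)
pi_v, u = tradeoff(n, d=0.15, lam_r=1.0, lam_v=5.0)   # lam_v < 1/d regime
\end{verbatim}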
|
1,116,691,499,838 | arxiv | \section{Introduction}
In \citet*[hereafter Paper I]{paper1}, we
introduced a model of blazar variability
designed to study rapid flares resulting from particle
acceleration at a shock front.
The case considered involved a collision between
relativistic shocks,
but the effects incorporated, light-travel delays and
energy/frequency
stratification, are of general importance.
The primary goal of that study was to establish the
conditions
for relative delays between synchrotron and
synchrotron self-Compton (SSC) flares at different
frequencies.
We found that the SSC emission can be strongly
affected
by light travel time of the synchrotron seed photons,
which can result in a considerable delay of the SSC
flares in the X-ray
band with respect to synchrotron variability at lower
frequencies.
In the present paper, we consider external sources of
the seed photons
that are scattered to high energies by the
relativistic electrons
heated at the front.
The current unified scheme of active galactic nuclei \citep{ant93}
includes a source of obscuration that shields the
emission
from the region within $\sim$ 0.1-10 pc of the central
black hole
when observed at large viewing angles.
This obscuration is usually assumed to be provided by
a dusty
molecular torus that surrounds the innermost nuclear
region.
The dust in the inner portions of the torus facing the
central
continuum source is heated to temperatures
$T\sim1000\,\mbox{K}$
and emits infrared radiation roughly as a blackbody.
In addition, reverberation studies of broad
emission lines
\citep{pet93} suggest
the presence of a clumpy broad emission line region
(BLR)
that consists of ionized clouds concentrated within
$\sim0.1L_{{{}} uv,42}^{1/2}\,\mbox{pc}$ of the black
hole,
where $L_{{{}} uv,42}$ is the UV spectral luminosity
normalized by
$10^{42}\,\mbox{erg}\,\mbox{s}^{-1}\,\mbox{\AA}^{-1}$.
The combined BLR emission and blackbody
radiation from the torus
permeate the jet on parsec scales, providing a
potentially
significant source of seed photons,
alongside the synchrotron seed photons produced
by the jet plasma itself \citep[e.g., see][]{bla00}.
In the rest frame of the emitting plasma,
radiation from the torus and BLR depends
on
location of the emission region in the jet and
its
bulk Lorentz factor $\Gamma'_p$.
Radiation from the torus is in general expected
to be Doppler boosted in the plasma rest frame. This
results in
stronger dependence of the ERC emission
on $\Gamma'_p$ compared with the SSC emission, whose
seed
radiation is produced in the rest frame of the plasma.
However,
this behavior can be modified by the effects of
electron energy losses
from ERC radiation which, if dominant, limit the
spatial
extent of the emitting plasma in an energy-dependent
manner, leading
to frequency stratification of the emission.
In this paper, we investigate the broadband features
of the
ERC emission during flares under different assumptions
about 1) properties of the molecular torus,
2) location in the jet where the flare occurs, and
3) the value of $\Gamma'_p$ of the emitting plasma.
As in Paper~I, we concentrate on the study of
relative
delays between flares at different frequencies and the
features
that can be used to distinguish among different
emission mechanisms.
An additional goal of this study is to define the
properties of the molecular torus and the BLR under
which
the results obtained in Paper~I are valid.
There
we assumed that the ERC emission provides a negligible
contribution
compared with the SSC radiation at the frequencies of
interest,
and that the energy losses of electrons are dominated by
synchrotron
emission.
\section{External Compton Model}
The inclusion of ERC radiation in our model of
variability
introduces a new component of high-energy emission.
It can also change the overall structure of the
emission region
and, hence, affect synchrotron and SSC radiation if
the energy losses of electrons are dominated by
scattering of
external photons.
In this case the decay time of electrons will be
reduced
compared with the case of pure synchrotron losses,
which leads to changes in flux levels and values
of critical frequencies, such as the break frequency.
The details depend on the structure and properties of
the sources
of external emission, which we describe in this
section.
As in Paper~I, here we concentrate on the study
of rapid
variability on time scales $\sim1\,\mbox{day}$.
We adopt the assumptions about geometry and excitation
structure of the emitting volume made in Paper~I.
We ignore the expansion of the emission region, which
is assumed to be a cylinder oriented along the jet.
We assume that the size of the emitting volume is small
compared to the sizes of and distances to the external
sources of emission.
Then we can assume that the external radiation is
homogeneous
throughout the emitting plasma, albeit highly anisotropic.
The structure and location of the molecular torus and
the BLR,
as well as the location of the emitting plasma in the
jet,
determine the angular dependence of external emission.
This anisotropy is further amplified by relativistic
aberration
and Doppler boosting or de-boosting in the rest frame
of the plasma.
For simplicity we assume that external radiation is static during the flare.
The properties of the putative dusty torus in
the nucleus of an active galaxy are poorly known.
In particular, despite what its name implies, the
geometrical shape and
size of this structure in a quasar or BL~Lac object
are poorly
constrained. According to one model \citep{elv00}, the
obscuration
is provided by a conical outflow of the material from
the accretion
disk. Recent interferometric observations of the
nucleus of NGC~1068
\citep{jaf04} reveal the presence of warm dust at
temperature
$T\sim300\,\mbox{K}$ in a structure
$\sim2.1\,\mbox{pc}$ in size,
surrounding a smaller, warmer ($T>800\,\mbox{K}$)
structure
of size $\sim0.7\,\mbox{pc}$.
The mass of the black hole in NGC~1068 is $1.4\times 10^7\mbox{M}_\sun$
according to VLBI measurements of water maser emission \citep{gri97}.
For quasars and BL~Lac objects harboring more massive black holes, one should
expect the size of the torus to scale accordingly.
Fig.~\ref{ext} illustrates the geometry and size
of the molecular torus in relation to
the position of the emitting blob of plasma in the jet
for a representative set of assumptions about
the external sources of seed emission and the
location of the radiating plasma.
The torus is characterized by semi-opening angle
$\theta_{{{}} op}$
and radius $r_{{{}} tor}$.
We assume that the emission from the torus is
dominated by
dust that radiates as a black body at temperature $T$,
so that the intensity of emission from the torus is
\begin{equation}
I'_{\nu'}(\theta')={\cal B}_{\nu'}(T),\quad\mbox{for}\quad\theta'_{min}<\theta'<\theta'_{max},
\end{equation}
where ${\cal B}_{\nu'}(T)$ is the Planck function. Here and below, the
primed quantities
associated with the emitting plasma are given in the
rest frame of the
host galaxy, whereas unprimed quantities are reserved
for use in the
plasma rest frame (this convention follows the one
adopted in Paper~I).
We only take into account the emission from the
portion of the torus
that faces the central continuum source.
However, the details of this approach are not crucial
to
the final results.
The only essential parameters are the angle $\theta'_{{{}} min}$,
which determines the maximum Doppler boosting,
and the dust temperature $T$.
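To make the assumed seed photon field concrete, a minimal Python sketch of the blackbody dust emission is given below; it only encodes the Planck function and the characteristic dust frequency $\nu_{tor}\sim3kT/h$ used later in the paper, with CGS constants hard-coded and the temperature chosen for illustration.
\begin{verbatim}
import numpy as np

H = 6.626e-27    # Planck constant [erg s]
K_B = 1.381e-16  # Boltzmann constant [erg/K]
C = 2.998e10     # speed of light [cm/s]

def planck_intensity(nu, T):
    # B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K_B * T))

T = 1000.0                      # hot-dust temperature [K]
nu_tor = 3.0 * K_B * T / H      # characteristic seed frequency, ~6e13 Hz
print(planck_intensity(nu_tor, T))
\end{verbatim}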
The BLR can be represented by a uniform spherical source
of
emission at a fiducial frequency $\nu'_{{{}}
blr}$. The integrated
intensity of the incident emission from the BLR is
given by
\begin{equation}
I'(\theta')=\frac{L_{blr}/(4\pi)}{\frac{4}{3}\pi{}r_{blr}^3}\Delta{r}(\theta'),\quad\mbox{for}\quad
\theta'_{blr}<\theta'<180^{\circ},
\end{equation}
where $r_{blr}$ is the radius of the BLR,
$\Delta{r}(\theta')$ is the geometric thickness of the
BLR in a given
direction, $\theta'_{blr}=\pi-\arcsin(r_{blr}/z_{p})$,
and $L_{blr}$ is the BLR luminosity, which can be
estimated from
observations if the distance to the blazar is
known.
Again, as in the case of the molecular torus,
the angle $\theta'_{{{}} blr}$ and the BLR luminosity $L_{{{}}
blr}$ are the only
essential parameters.
The UV spectral luminosity of 3C~273 can be estimated as $L_{uv,42}\approx2$ \citep{von97},
which gives the size of the BLR region as $\sim0.14\,\mbox{pc}$.
Although the size of the torus in 3C~273 is unknown, it is reasonable to expect
that it is at least $10\,\mbox{pc}$.
For the calculations reported in this paper, we assume that the location of the emitting plasma
$z_{{{}} p}\approx r_{{{}} tor}$.
Under this condition the contribution of the BLR to the seed photon field
is negligible, since the BLR emission is significantly de-boosted by relativistic Doppler
effects.
We therefore neglect the contribution from the BLR in the reported results.
However, the role of the BLR would increase sharply
if the event that causes the flare were to occur
closer to the central
engine and, hence, closer to or even within the BLR.
The plasma that produces variable emission during a
flare
moves down the jet at a relativistic speed
$v'_p =c\beta'_p$, with corresponding
Lorentz factor $\Gamma'_p$.
The Doppler effect and relativistic aberration cause the intensity of external radiation to be
highly anisotropic in the plasma rest frame.
The direction of incoming radiation is modified
by relativistic aberration according to
\begin{equation}
\mu=(\mu'-\beta'_p)/(1-\beta'_p\mu'),
\end{equation}
where $\mu'=-\cos{\theta'}$ and $\theta'$ is the angle
of propagation of the incident photons relative to the
line of sight; $\mu$ is the
corresponding quantity in the plasma rest frame.
The spectral intensity of incident emission from the
molecular torus is
transformed according to $I_{\nu}(\mu)=\delta^3I'_{\nu'}(\mu')$,
where the Doppler factor $\delta=\Gamma'_p(1-\beta'_{p}\mu')$ and $\nu=\delta\nu'$,
while the expression for transforming the incident
integrated intensity is $I(\mu)=\delta^4I'(\mu')$.
Thus, for the parameters used in Fig.~\ref{ext},
the maximum Doppler factor for the emission from the
torus is
$\delta_{max}\approx\Gamma'_p$.
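A compact sketch of the aberration and Doppler transformations quoted above, written here in Python purely for illustration, is the following.
\begin{verbatim}
import numpy as np

def to_plasma_frame(mu_prime, nu_prime, I_nu_prime, Gamma):
    # mu' = -cos(theta'); delta = Gamma*(1 - beta*mu'); nu = delta*nu';
    # I_nu(mu) = delta^3 * I'_nu'(mu'), and I(mu) = delta^4 * I'(mu') for the
    # integrated intensity.
    beta = np.sqrt(1.0 - 1.0 / Gamma**2)
    delta = Gamma * (1.0 - beta * mu_prime)
    mu = (mu_prime - beta) / (1.0 - beta * mu_prime)
    return mu, delta * nu_prime, delta**3 * I_nu_prime

# Radiation arriving at theta' = 90 deg (mu' = 0) for Gamma'_p = 10 is boosted
# by delta ~ Gamma, consistent with delta_max ~ Gamma'_p quoted above.
print(to_plasma_frame(mu_prime=0.0, nu_prime=6.0e13, I_nu_prime=1.0, Gamma=10.0))
\end{verbatim}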
Once the intensity as a function of direction is
known,
one can determine the value of the external radiation
energy density in the rest frame of the emitting
plasma
by integrating over all allowed incident directions
and over the black-body spectrum.
The external radiation energy density from three different models
of the molecular torus is presented in Fig.~\ref{dust.all}
as a function of position of the emitting plasma in
the jet.
The position of the emitting plasma is a free parameter
of the model. It does not change during the calculations,
but adopting a different value may affect the results of the simulations considerably.
The energy density of the magnetic field in the plasma
rest frame is $u_B=B^2/(8\pi)$,
where the magnetic field strength as a function of position $z_p$
along the jet is given by $B(z_p)=B_0z_0/z_p$.
It can be seen that the energy density of
emission from the
torus exceeds that of the magnetic field for
$z_p\lesssim40\,\mbox{pc}$ when
the magnetic field $B_0$ is within half an order of
magnitude of our
adopted value of 0.4~G.
The expression for the energy density of blackbody
radiation in the rest
frame of the plasma can be integrated to produce the
following
approximate expression:
\begin{equation}
\label{uradapprox}
u_{rad}\approx\frac{2\pi}{c}\sigma{}T^4\frac{\delta^3_{max}}{3\Gamma'_p},
\end{equation}
where $\sigma$ is the Stefan-Boltzmann constant.
When the location of the emitting plasma is $z_p\approx{}r_{tor}$, one has $\theta'_{min}\approx90^{\circ}$
and the corresponding $\mu'\approx0$.
Under these conditions the ratio of the external radiation and magnetic energy densities
has a
simple dependence on the physical parameters:
\begin{equation}
u_{rad}/u_B\approx 30 \left(\frac{T}{1200\,\mbox{K}}\right)^4
\left(\frac{\Gamma'_p}{10}\right)^2
\left(\frac{B}{0.4\,\mbox{G}}\right)^{-2}.
\end{equation}
It can be seen that moderate changes
in the parameters can lead to a situation in which the magnetic energy density
dominates and, therefore, the energy losses of electrons are primarily due to synchrotron (or SSC) emission.
For $T=800\,\mbox{K}$ and $B=1\,\mbox{G}$, $u_{rad}\approx{}u_B$ if $\theta'_{min}\approx90^{\circ}$.
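The scaling relation above can be evaluated directly; the following snippet simply codes the quoted dependence (it is not a re-derivation of $u_{rad}$).
\begin{verbatim}
def energy_density_ratio(T, Gamma, B):
    # u_rad/u_B ~ 30 (T/1200 K)^4 (Gamma'_p/10)^2 (B/0.4 G)^-2, for z_p ~ r_tor.
    return 30.0 * (T / 1200.0)**4 * (Gamma / 10.0)**2 * (B / 0.4)**(-2)

print(energy_density_ratio(T=1200.0, Gamma=10.0, B=0.4))  # ~30: ERC losses dominate
print(energy_density_ratio(T=800.0,  Gamma=10.0, B=1.0))  # ~1:  u_rad ~ u_B
\end{verbatim}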
Unfortunately, no simple approximation is available for the dependence of $u_{rad}$ on
the location of the emitting plasma $z_p$ when it is
different from $z_p\approx{}r_{tor}$.
If the size of the torus is considerably larger than $z_p$,
the angle $\theta'_{min}\approx\theta_{op}$, which generally results
in a somewhat larger value of $\delta_{max}$ and, hence,
even more pronounced dominance of $u_{rad}$ over $u_B$.
On the other hand, if $z_p>r_{tor}$, the contribution from the torus
is diminished rapidly with increasing $z_p$.
Since $u_{{{}} rad}$ can exceed $u_B$ when $z_p \lesssim{}r_{tor}$,
it must in general be taken into account when
considering
electron energy losses, which can be expressed as
$\dot\gamma=-\gamma^2/t_{{{}} u}$,
where
\begin{equation}
t_{u}=\frac{7.73\times10^8\,\mbox{s}}{8\pi(u_{B}+u_{rad})}.
\end{equation}
The formalism developed in Paper~I
can be recovered completely by substituting $t_1$
defined there
with $t_{{{}} u}$.
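For reference, the loss equation $\dot\gamma=-\gamma^2/t_{u}$ integrates to $\gamma(t)=\gamma_0/(1+\gamma_0 t/t_{u})$ for an electron injected with Lorentz factor $\gamma_0$; a minimal numerical sketch (taking $u_{rad}\approx30u_B$ as adopted later, and $B=0.4$~G, purely as example values) is given below.
\begin{verbatim}
import numpy as np

def cooling_time_constant(u_B, u_rad):
    # t_u = 7.73e8 s / (8 pi (u_B + u_rad)), energy densities in erg cm^-3.
    return 7.73e8 / (8.0 * np.pi * (u_B + u_rad))

def lorentz_factor(gamma0, t, t_u):
    # Solution of dgamma/dt = -gamma^2 / t_u for injection at gamma0.
    return gamma0 / (1.0 + gamma0 * t / t_u)

u_B = 0.4**2 / (8.0 * np.pi)                  # B = 0.4 G
t_u = cooling_time_constant(u_B, 30.0 * u_B)  # ERC-dominated case
print(lorentz_factor(1.0e4, 1.0e4, t_u))      # gamma after ~3 hours (plasma frame)
\end{verbatim}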
The ERC flux $F^E_{{{}} \nu}(t_{{{}} obs})$ as a
function of time
$t_{{{}} obs}$ and frequency $\nu$ of observation can
then be calculated
by utilizing the same procedures that are employed in the SSC
calculations.
\section{Calculated Spectra and Light Curves}
In this section, we present the results of simulations
of ERC
emission variability. We study both the broadband
spectral variability
(Figs.~\ref{mesp.0} and~\ref{mesp.90}) and
the light curves at several representative frequencies
(Figs.~\ref{melc.0} through~\ref{melc.90a})
for two viewing angles: $\theta_{{{}} obs}=0^{\circ}$
corresponding to
the case when the line of sight coincides with the jet
axis, and
$\theta_{{{}} obs}=90^{\circ}$
in the rest frame of the plasma, which maximizes
superluminal motion
and reduces to the small angle $\theta'_{{{}}
obs}\sim1/\Gamma'_p$
in the frame of the host galaxy.
To characterize external emission we specify the bulk
Lorentz
factor of the emitting plasma $\Gamma'_p=10$.
All the results presented in this section are given in
the rest frame
of the emitting plasma.
We use the same input parameters for the emitting
plasma as the ones
employed in the companion SSC calculations. The
parameters describing
external sources of emission, as well as the bulk speed
of the emitting
plasma, are chosen such that the ERC flux dominates
over the SSC radiation at the frequencies of interest.
This corresponds to the external radiation energy
density in the plasma
rest frame exceeding the energy density of the
magnetic field.
This affects the decay time of electrons and,
therefore,
the synchrotron and SSC emission variability.
The comparison between the SSC and ERC spectral energy distribution (SED)
at high frequencies is shown in Fig.~\ref{mesp3.0}.
Scattered photons from the molecular torus
dominate at frequencies above $10^{16}\,\mbox{Hz}$ in
the calculations.
Scattered BLR radiation is not shown, since it provides a negligible
fraction of the external photons because of severe
Doppler de-boosting.
\subsection{Spectral Evolution}
Figs.~\ref{mesp.0} and~\ref{mesp.90} show a sequence
of SEDs from the forward-shock
region
(see Paper~I) at different times normalized by
the apparent crossing
time $t_{ac}$.
The latter quantity
is defined in terms of the size of the
excitation zone,
which extends over a span $2R$ across and $H$ along
the jet, the speed
of the shock in the plasma rest frame $v$,
and the viewing angle: $t_{{{}} ac}=[(c/v)-1]H/c$ for
$\theta_{{{}} obs}=0^{\circ}$
and $t_{{{}} ac}=2R/c+H/v$ for $\theta_{{{}}
obs}=90^{\circ}$ (for more details see Paper~I).
The spectral features of the ERC emission depend
on the
characteristic frequency of the infrared photons from
the torus
at temperature $T$: $\nu_{{{}} tor}\sim 3kT/h$.
The ERC spectrum can be characterized by three
critical frequencies.
(1) The spectrum drops off exponentially above
frequency
$\nu_{{{}} e,max}(t=0)\approx 4\gamma^2_{{{}}
max}(t=0)\delta_{{{}} max}\nu_{{{}} tor}$
until $t=t_{{{}} ac}$, after which the drop-off
frequency begins to fall
quickly due to the sharp decrease in the maximum value
of the Lorentz
factor of electrons in the plasma owing to radiative
cooling.
(2) The turn-over frequency of the SED,
$\nu_{{{}} e,t}\approx4\gamma^2_{{{}} min}(t_{{{}} obs})
\delta_{{{}} max}\nu_{{{}} tor}$, decreases during the
flare as $\gamma_{{{}} min}$ declines
from the
initial value, $\gamma_{{{}} min}(t=0)$.
The value of the turn-over frequency at the
crossing time
$t_{{{}} ac}$ depends on the energy loss rate of
electrons, which
in turn depends on the bulk speed of the plasma
if ERC losses are dominant.
(3) A break frequency $\nu_{{{}} e,b}$ is well defined
in the case of
the ERC spectrum since the seed photon SED is nearly
monochromatic.
It is found by solving the equation
$t^E_{\nu}=\min\{t_{{{}} obs},t_{{{}} ac}\}$ for $\nu$,
where the decay time at frequency $\nu$ is defined as
\begin{equation}
t^E_{\nu}=t_{u}\sqrt{\frac{4\delta_{max}\nu_{tor}}{\nu}}.
\end{equation}
Above the break frequency the decay of the emitting electrons
is rapid enough that the actual volume of plasma that
contributes to the
observed flux is smaller than that defined by the
extent of the excitation zone through the parameters
$R$ and $H$.
Because of this, the slope of the ERC spectrum
steepens by $1/2$ above
the break frequency due to the relations $t^E_{\nu}\propto\nu^{-1/2}$
and $F^E_{{{}} \nu}\propto j^E_{{{}} \nu}t^E_{{{}} \nu}$
\citep[see][]{mar85}.
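Collecting the above, a rough numerical sketch of the three ERC critical frequencies in the plasma frame, taking $\delta_{max}\approx\Gamma'_p$ as quoted earlier and treating $\gamma_{max}$, $\gamma_{min}$, $t_{u}$ and $t_{ac}$ as inputs (the call at the end uses placeholder values only), is:
\begin{verbatim}
import numpy as np

K_B, H = 1.381e-16, 6.626e-27   # CGS

def erc_critical_frequencies(T, Gamma, gamma_max, gamma_min, t_u, t_ac, t_obs):
    # nu_tor ~ 3kT/h; cut-off nu_e,max = 4 gamma_max^2 delta_max nu_tor;
    # turn-over nu_e,t = 4 gamma_min^2 delta_max nu_tor; break frequency from
    # inverting t_E(nu) = t_u sqrt(4 delta_max nu_tor / nu) = min(t_obs, t_ac).
    nu_tor = 3.0 * K_B * T / H
    delta_max = Gamma
    nu_cut = 4.0 * gamma_max**2 * delta_max * nu_tor
    nu_turn = 4.0 * gamma_min**2 * delta_max * nu_tor
    nu_break = 4.0 * delta_max * nu_tor * (t_u / min(t_obs, t_ac))**2
    return nu_cut, nu_turn, nu_break

print(erc_critical_frequencies(T=1000.0, Gamma=10.0, gamma_max=1.0e5,
                               gamma_min=100.0, t_u=1.0e5, t_ac=1.0e5,
                               t_obs=2.0e5))
\end{verbatim}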
\subsection{Time delays}
As far as the time delays are concerned,
the ERC flares at different frequencies
are affected by the geometry of the excitation
region and by electron energy stratification
in the same manner as synchrotron and SSC
emission.
However, certain aspects of the ERC emission variability
are markedly different.
Below, we describe the general features of the ERC flares
and the unique characteristics that distinguish them
from synchrotron and SSC flares.
The ERC light curves for the viewing angle $\theta_{obs}=0^\circ$ are
presented in Fig.~\ref{melc.0} for $\gamma_{min}=100$ and Fig.~\ref{melc.0a} for $\gamma_{min}=10$.
The frequencies were chosen around the break frequency
at the crossing
time so that the resulting profiles are roughly
symmetric.
At higher frequencies the profile of a flare is expected to have a flat top,
which is evident in the light curve at frequency $\nu=2.5\times10^{18}\,\mbox{Hz}$
(dot-dashed curves).
The decay time of scattering electrons that provide the dominant contribution
to the observed ERC flux is smaller than the apparent crossing time.
The light curves at lower frequency, $\nu=4\times10^{17}\,\mbox{Hz}$, are symmetric because the decay time
of electrons matches the apparent crossing time (dashed curves).
All light curves peak at the crossing time in the case of $\gamma_{min}=10$.
This is to be expected since the seed emission from the torus is constant during the flare,
unlike the seed emission in the SSC model.
However, in the case of $\gamma_{min}=100$ the light curve at $\nu=4\times10^{17}\,\mbox{Hz}$
peaks after the crossing time.
It is also evident that the spectral index of the ERC emission is
positive at the beginning of the flare when $\gamma_{min}=100$.
Positive values of the spectral index indicate that the light curves
are observed at frequencies below the turn-over frequency, which depends on $\gamma_{min}$.
This unusual time delay of the ERC light curve can be
understood as follows.
The frequencies at which this delay can be observed
are below the
turn-over frequency at the crossing time.
This means that the optimum Lorentz factors of
scattering electrons
that could provide the dominant contribution to the
observed flux
at these frequencies are below the minimum Lorentz
factor of the injected
electrons.
As the flare progresses, the minimum Lorentz factor of
the evolving electrons
will eventually drop to the optimum values for the
emission at frequencies
below the turn-over frequency.
However, if the initial $\gamma_{{{}} min}$ is high enough, the
optimum value
might only be reached after the crossing time.
In this case, the ERC emission at the frequency that
corresponds
to this optimum value will continue to grow even after
time $t_{{{}} ac}$
when the shock front exits the excitation region and
the acceleration of electrons stops.
This phenomenon should not affect the SSC flares in
the same fashion
since
seed photons from a broad range of frequencies
contribute equally to the SSC emission at a given
frequency of observation.
The frequencies
at which electrons with Lorentz factor $\gamma_{min}$
emit synchrotron radiation are generally
lower than the synchrotron
self-absorption frequency for realistic parameters.
Therefore, synchrotron flares should not be expected to exhibit this effect, either.
The ERC light curves for the viewing angle $\theta_{obs}=90^\circ$ are
presented in Fig.~\ref{melc.90} for $\gamma_{min}=100$ and Fig.~\ref{melc.90a} for $\gamma_{min}=10$.
At a viewing angle of $90^{\circ}$
the light curves at higher frequencies, defined by $t^E_{\nu}\lesssim{}t_{ac}$, peak at $\sim{}t_{{{}}
ac}/2$ as a result of 1) rapid decay of electrons that dominate the observed emission at higher frequencies
and 2) the circular geometry of the source along the line of sight.
At lower frequencies defined by $t^E_{\nu}\gtrsim{}t_{ac}$,
the maximum is closer to $t_{ac}$ since this is when the emission
fills the entire volume of the source.
The mechanism that causes extra delay of the ERC emission, which was described above,
affects ERC light curves at any viewing angle, including $\theta_{obs}=90^\circ$.
However, when the viewing angle is $\sim90^{\circ}$,
the effect is not as obvious since the delays
due to the geometrical shape of the source
and energy stratification are equally important.
In the calculations discussed here
the parameters have been selected such that
inverse Compton energy losses
dominate over synchrotron losses, $u_{rad}\approx30u_B$.
This, in particular, means that the decay time of synchrotron emission
is shorter than that calculated from synchrotron losses alone by a factor $\sim{}u_{rad}/u_B$.
In Paper~I we neglected external emission;
the break frequency of the synchrotron spectrum was at $\sim10^{12}\,\mbox{Hz}$
while the synchrotron self-absorption frequency $\sim10^{10}\,\mbox{Hz}$.
The dominance of ERC losses in the present calculations
shifts the break frequency of the synchrotron spectrum to $\sim10^9\,\mbox{Hz}$,
which is less than the self-absorption frequency.
Therefore, the synchrotron light curves
at all frequencies of interest originate from a volume of
the source that is limited by frequency stratification.
These light curves are expected to peak at $\sim{}t_{ac}/2$
when $\theta_{obs}\sim90^{\circ}$.
In contrast, when $\theta_{obs}=0^{\circ}$
the synchrotron flares at high
frequencies (at which $t_\nu < t_{\rm ac}$) are characterized by a quick rise,
flat top, and equally rapid decay after the crossing time.
Comparing this behavior with that of the ERC light curves,
one can assert that there must be a delay of about half
the crossing time between the synchrotron flares and
the ERC emission in the soft X-ray band (frequencies such that $t^E_{\nu}>t_{ac}$)
for viewing angle $\theta_{obs}\sim90^{\circ}$.
The synchrotron flare should
cease sharply before the peak of the ERC emission is reached
if the viewing angle $\theta_{obs}=0^{\circ}$.
This implies that the parameters of the emission region and
the external seed photon field are such that the break
frequency of the synchrotron spectrum is below the
frequency of observation (which can be verified by observing a rather
steep synchrotron spectrum, with slope between $-1$ and $-1.5$).
If this is not the case, smaller delays must be expected.
It should be noted that the presented results are independent of the size
of the torus as long as the location of the emitting plasma in the jet
$z_p$ is adjusted by the same factor as the size of the torus $r_{tor}$.
This ensures that the angles ${\theta'}_{min}$ and ${\theta'}_{max}$
are the same, which results in the same field of seed photons if
the dust temperature is the same.
On the other hand, if one keeps $z_p$ constant then
adoption of a larger torus
results in amplification of the external seed photons
and, consequently, more rapid decay of scattering electrons.
\section{Discussion}
The study that we have conducted allows one to
distinguish
between different emission mechanisms by means of
the constraints placed
on the magnitude of the
time delays
between flares and on the shapes of the light curves at different frequencies.
At viewing angle $\theta_{{{}} obs}=90^{\circ}$
in the rest frame of the emitting plasma,
synchrotron and inverse Compton (both SSC and ERC)
flares
exhibit similar behavior in terms of time delays.
The maximum time delay cannot exceed half the crossing
time $t_{{{}} ac}$,
which can crudely be equated with the duration of the
flare
when the light curves are symmetric.
For this and other viewing angles,
the flares must be asymmetric,
with a quick rising part and long quasi-exponential decay
if observed at frequencies significantly below
the break frequency $\nu_b$, whose value is
different for each emission mechanism.
The flares observed at or above the break frequency
reflect the geometry of the source along the line
of sight and are symmetric in our simulations because of
the cylindrical geometry of the excitation volume.
Observing flares at zero viewing angle
allows one to
distinguish
more clearly
between emission mechanisms.
The higher frequency synchrotron light curves should peak at exactly
the
crossing time, whereas the SSC and ERC flares in the
X-ray band
have an extra time delay and, thus, peak after the
crossing time.
For this viewing angle, the flares are symmetric if
observed at or near the break frequency,
whereas the flares will have a flat top at higher frequencies.
In the case of the SSC emission, the delay is due to
the
light travel time of the synchrotron seed photons.
The ERC light curve might show similar delay
due to the decay of the minimum Lorentz factor of
electrons
if the initial value is large enough
($\gamma_{{{}} min}=100$ at $t=0$ proved sufficient
in our simulations).
If the initial $\gamma_{{{}} min}\ll100$, no extra time
delay of the ERC
emission is possible.
In addition, if the ERC emission exhibits an extra
delay
compared to the synchrotron variability at lower
frequencies,
the spectral index of the ERC emission must be
positive
for a significant fraction of the duration of the
flare.
This is because this extra delay is only possible
for the ERC light curves at frequencies below the
initial turn-over
frequency of the SED.
On the other hand, the spectral index of the SSC
emission
will be negative even at the beginning of the SSC
flare
because the turn-over frequency of the SSC spectrum
is determined by the synchrotron self-absorption
frequency,
which is typically much lower than the characteristic
frequency of dust emission, $\nu_{{{}} tor}$.
Similar behavior of the ERC flares has been found
by \citet{sik01}.
In their model the flare is produced by plasma in a geometrically thin shell
energized by forward/reverse shocks produced following the collision
between portions of the jet propagating with different relativistic speeds.
In contrast with our model, these authors assume that the injection of
relativistic electrons is uniform throughout the shell,
and they ascribe the time delays between ERC and synchrotron flares to
a gradient in the magnetic field along the jet,
which leads to a decrease in the critical frequency of
synchrotron emission.
Their modeling is thus suitable for more prolonged flares than the ones
considered in this paper.
The results reported in Paper~I are based on
the assumption
that the energy losses of electrons are dominated by
synchrotron
emission and that ERC emission is negligible
compared to
the SSC flux levels at the frequencies of interest.
These assumptions are valid when
the properties
of the
sources of external emission as well as the
location of the emitting plasma in the jet
are as in Fig.~\ref{mesp3.0.less}.
Decreasing the size of the torus and/or placing the
emitting plasma farther down the jet
substantially decreases the Doppler boosting
even if $\Gamma'_p$ remains the same.
For the parameters used in Fig.~\ref{mesp3.0.less}
the ERC losses of electrons are only a fraction of the
synchrotron losses.
Also, the SSC flux exceeds the ERC flux by a factor of
at least a few at
all frequencies
and by several orders of magnitude near
$10^{16}\,\mbox{Hz}$ (in the plasma rest frame),
at which the SSC flare has been shown in Paper~I
to have an extra delay due to the travel time of the seed photons.
The results of simulations in the plasma rest
frame can be transformed to the observer's frame
according to the
following expressions: $F^{*}_{\nu_{*}}(t^{*}_{obs})=\delta_{obs}^3(D_0/D^{*})^2F^E_{\nu}(t_{obs})$,
$\nu_{*}=\delta_{obs}\nu/(1+z)$, and
$t^{*}_{obs}=(1+z)t_{obs}/\delta_{obs}$,
where the asterisk denotes quantities in the
observer's frame, $\delta_{{{}} obs}$ is the Doppler
factor determined by
$\Gamma'_p$ and the viewing angle, and
$z$ and $D^{*}$ are the redshift and distance to
the blazar
($D_0$ is the reference distance used in the
simulations).
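These transformations are straightforward to apply in post-processing; a minimal sketch (with purely illustrative values for the redshift, Doppler factor, and distance ratio) is:
\begin{verbatim}
def to_observer_frame(nu, t_obs, F_nu, delta_obs, z, D0_over_Dstar):
    # nu* = delta_obs nu / (1+z);  t*_obs = (1+z) t_obs / delta_obs;
    # F*_{nu*}(t*_obs) = delta_obs^3 (D_0/D*)^2 F_nu(t_obs).
    nu_star = delta_obs * nu / (1.0 + z)
    t_star = (1.0 + z) * t_obs / delta_obs
    F_star = delta_obs**3 * D0_over_Dstar**2 * F_nu
    return nu_star, t_star, F_star

# For theta'_obs ~ 1/Gamma'_p the Doppler factor delta_obs ~ Gamma'_p (= 10 here).
print(to_observer_frame(nu=1.0e17, t_obs=8.64e4, F_nu=1.0,
                        delta_obs=10.0, z=0.5, D0_over_Dstar=1.0))
\end{verbatim}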
This implies a particular dependence of the observed
quantities
on the bulk Lorentz factor of the emitting plasma.
However, in the case where the energy losses of
electrons are dominated
by ERC emission, this dependence will be different
due to energy stratification of the emitting volume.
Indeed, if the frequency of observation is above the
break frequency,
the size of the actual emitting volume that
contributes to the observed
flux is smaller than the size of the excitation region
caused by the
shock collision. In this case the thickness of the
emitting volume is determined by the decay time of the
ERC emission:
\begin{equation}
t^{E}_{\nu}\propto\frac{\sqrt{\delta_{max}}}{u_{rad}}\propto
{\Gamma'_p}{\delta^{-2.5}_{max}}.
\end{equation}
This expression follows from the definition of the
decay time plus Eq.~(\ref{uradapprox}).
By applying the Doppler boosting formulas to the
external emission,
one can show that the ERC emission coefficient
$j^{E}_{\nu}\propto\delta^{3+(s-1)/2}_{max}{\Gamma'}_{p}^{-1}$,
where $-s$ is the slope of the injected power-law
distribution of electron
energies. By combining these two formulas, one finds
for the ERC flux in
the plasma rest frame
$F^E_{\nu}\propto{}j^{E}_{\nu}t^{E}_{\nu}\propto\delta^{(s-2)/2}_{max}$.
For $s=2$ the ERC flux in the plasma rest frame does not depend on the bulk
speed of the
emitting plasma despite relativistic boosting of the incident emission, as one
can see in
Fig.~\ref{mesp3.0.g}. Similar reasoning applied to
the SSC decay time
gives $t^{C}_{\nu}\propto\Gamma'_p\delta^{-3}_{max}$.
The synchrotron emission that provides the seed
photons for the SSC
radiation is affected in a similar way, which results
in $F^C_{\nu}\propto{}j^C_{\nu}t^C_{\nu}\propto{\Gamma'}^2_{p}\delta^{-6}_{max}$
for the SSC flux above the break frequency, in the rest frame of the plasma.
These modifications are due to the internal structure of the emitting medium
and, thus, cannot be reproduced by homogeneous models such as those of \citet{der97,sik01}.
The results reported in \citet{der97}, including the boosting formulas
that give the dependence of observed flux on the Doppler factor of the emitting medium,
should only apply to stationary emission or
in a limited way to the flares observed at or below the break frequency.
The break in the electron energy distribution used in these studies can only occur
in a homogeneous medium with uniform and continuous injection of electrons,
and is not applicable to flares generated by acceleration of
electrons at shocks
or any other type of front.
\section{Conclusions}
To expand the realm of the calculations of Paper~I, we have
conducted
simulations of the external Compton emission
generated by collisions of shocks in a relativistic
jet in a blazar.
We have considered the physical conditions under which
the emission at high energies is dominated by either SSC
or ERC emission. If the emitting region in the jet
lies beyond the distance of the torus from the
central engine, then SSC will dominate and the results
of Paper I should be used to calculate the evolution
of the high-energy spectrum. Otherwise, ERC will be
more important because of substantial relativistic boosting
of the external seed emission and the results of this paper are
relevant.
For the case when ERC emission dominates, we have investigated the multifrequency light curves
and determined that ERC flares at lower frequencies can incur
an extra time delay
due to the minimum Lorentz factor cut-off in the
injected distribution
of electrons.
The simulations indicate that the spectral index of
the ERC emission
must be positive for at least half of the duration of
the flare for such a delay to occur,
which distinguishes it from a time delay of the SSC
emission, which is characterized by
a negative spectral index at all times.
We have also found that if the energy losses of electrons
are dominated
by ERC emission,
the dependence of the observed flux (synchrotron, SSC,
and ERC)
on the bulk speed of the emitting plasma
is different from that expected in homogeneous
models.
In particular, there is no double boosting of the ERC
emission,
while the SSC flux might even decrease
at higher values of the bulk Lorentz factor
of the emitting plasma.
When the blazar is observed along the jet axis,
flat tops are expected in light curves
observed at frequencies sufficiently above
the break frequency.
If the flares are symmetric, it is possible for both SSC and ERC emission
to peak in the soft X-ray band after the maximum
of the synchrotron light curve.
The value of the X-ray spectral index during the flare
distinguishes between the SSC and ERC emission mechanism.
The latter is characterized by a shallower or even positive spectral index.
The delayed ERC emission indicates that the minimum
Lorentz factor of the injected electrons is high enough
so that the frequency of observation is at or below the
turn-over frequency of the ERC spectrum.
Otherwise, ERC emission should peak at the same time as the synchrotron flare.
When superluminal apparent motion is observed
in VLBI images, the viewing angle cannot be zero.
In this case the behavior of the light curves is different;
it is closer to that found for $\theta_{obs}=90^{\circ}$.
Synchrotron flares that peak at IR or optical
frequencies precede the SSC and ERC flares in soft X-rays
by $\Delta{t}\sim t_{ac}/2$ owing to frequency stratification.
\acknowledgments
This material is based on work supported by NASA
grants NAG5-13074 and NNG04GO85G, as well as National
Science Foundation grant AST-0406865.
|
1,116,691,499,839 | arxiv |
\section{Conclusion}
In this paper, we propose a novel representation learning framework, CtrlFormer\xspace, that learns a transferable state representation for visual control tasks via a sample-efficient vision transformer. CtrlFormer\xspace explicitly learns the attention among the current task, the tasks it learned before, and the observations. Furthermore, each task is co-trained with contrastive learning as an auxiliary task to improve sample efficiency when learning from scratch. Extensive experiments show that CtrlFormer\xspace outperforms previous work in terms of transferability and has great potential to be extended to multiple sequential tasks.
We hope our work can inspire rethinking the transferability of state representation learning for visual control and exploring the next generation of visual RL frameworks.
\noindent \textbf{Acknowledgement.} Ping Luo is supported by the General Research Fund of HK No.27208720 and 17212120.
\section{Experiments}
In this section, we evaluate our proposed CtrlFormer\xspace on multiple domains of the DMControl benchmark~\cite{tassa2018deepmind}. We test transferability among tasks within the same domain and across different domains. Throughout these experiments, the encoder, actor, and critic networks are trained with the AdamW optimizer~\cite{loshchilov2017decoupled} using a learning rate $lr=10^{-4}$ and a mini-batch size of 512. The soft target update rate $\tau$ of the critic is 0.01, and target network updates are made every 2 critic updates (the same as in DrQ~\cite{kostrikov2020image}). The full set of parameters is in Appendix~\ref{app:Detailed_Training}. The PyTorch-like pseudo-code is provided in Appendix~\ref{app:Pseudo-code}.
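As a minimal sketch of this optimization setup (the modules below are simple stand-ins rather than the actual CtrlFormer\xspace networks), the optimizer and the soft target update can be written as:
\begin{verbatim}
import torch
import torch.nn as nn

encoder, critic = nn.Linear(64, 50), nn.Linear(51, 1)   # stand-in modules
critic_target = nn.Linear(51, 1)
critic_target.load_state_dict(critic.state_dict())

optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(critic.parameters()), lr=1e-4)

def soft_update(target, online, tau=0.01):
    # Polyak averaging of the target critic, applied every 2 critic updates.
    with torch.no_grad():
        for p_t, p in zip(target.parameters(), online.parameters()):
            p_t.mul_(1.0 - tau).add_(tau * p)
\end{verbatim}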
\begin{table}[h]
\vspace{-1mm}
\begin{center}
\footnotesize
\setlength{\tabcolsep}{8pt}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{l|c c}
\Xhline{1pt}
Domain & Task 1 & Task 2 \\
\Xhline{1pt}
\textbf{\walkercolor{Walker}} & \texttt{stand} & \texttt{walk} \\
\textbf{\cartpolecolor{Cartpole}} & \texttt{swingup} & \texttt{swingup-sparse} \\
\textbf{\reachercolor{Reacher}} & \texttt{easy} & \texttt{hard} \\
\textbf{\fingercolor{Finger}} & \texttt{turn-easy} & \texttt{turn-hard} \\
\Xhline{1pt}
\end{tabular}
\end{center}
\vspace{-3mm}
\caption{Domains and the corresponding tasks.}\label{tab:task-domain}
\vspace{-3mm}
\end{table}
\vspace{-10pt}
\input{src/table1}
\input{src/table2}
\subsection{Benchmark}
The DeepMind control suite is a set of continuous control tasks and has been widely used as a benchmark for visual control~\cite{tassa2018deepmind}. We mainly test the performance of CtrlFormer\xspace on 4 typical domains from the DeepMind control suite.
The dimensions of the action space are the same for tasks within the same domain and differ across domains.
In order to evaluate the performance of CtrlFormer\xspace in same-domain and cross-domain scenarios, we conduct extensive experiments on multiple domains and tasks, as shown in Table~\ref{tab:task-domain}.
The detailed introduction of the domains and tasks we use is in Appendix \ref{app:dmc_description}.
\subsection{Baselines}
We mainly compare CtrlFormer\xspace with Dreamer~\cite{hafner2019dream}, DrQ~\cite{kostrikov2020image}, and SAC~\cite{haarnoja2018sac} whose representation is encoded by a pre-trained ResNet~\cite{he2016deep}. Dreamer is a representative method with a latent variable model for state representation transferring, achieving state-of-the-art performance in model-based reinforcement learning. To extract useful information from historical observations, it encodes the representation with a recurrent state-space model~(RSSM)~\cite{hafner2019learning}. DrQ is the state-of-the-art model-free algorithm on DMControl tasks, which surpasses other model-free methods with task-specific representations~(\emph{i.e.}, methods that update the encoder with the actor-critic gradients), such as CURL~\cite{laskin2020curl} and SAC-AutoEncoder~\cite{yarats2019improving}.
To test the transferability of DrQ, every task is assigned a specific head for the Q-network and the policy network, and all tasks share a CNN network to extract state representations. For the SAC baseline, we utilize a ResNet-50~\cite{he2016deep} pre-trained on ImageNet~\cite{deng2009imagenet} as the encoder to test the performance of a task-agnostic representation for reinforcement learning.
\subsection{Transferability}
\noindent\textbf{Settings.} In all transferability experiments, the agent first learns a previous task; it then adds a new policy token, initialized from the previous policy token, and starts learning the new task. Finally, after learning the new task, we retest the score (average episode return) of the old task using the latest encoder together with the actor and critic networks corresponding to the old task.
\textbf{Transferring in the same domain.} In the same domain, the dynamics of the agents from different tasks are similar, while the sizes of the target points and the characteristics of the reward (sparse or dense) might differ.
We design an experimental pipeline to let the agent first learn an easier task and then transfer the obtained knowledge to a harder one in the same domain. The results are summarized in Table~\ref{table:Transferring_tasks_in_same_domain}. Compared to baselines, the sample efficiency of CtrlFormer\xspace in the new task is improved significantly after transferring the state representation from the previous task.
Taking Table~\ref{table:Transferring_tasks_in_same_domain-6} and Figure~\ref{fig:trans} as examples, CtrlFormer\xspace achieves an average episode return of \texttt{$769_{\pm{34}}$} with representation transferring using 100$k$ samples, while learning from scratch fails completely with the same number of samples.
Furthermore, when retesting on the previous task after transferring to the new task, as shown in Figure~\ref{fig:maintain}, CtrlFormer\xspace does not show an obvious decrease. However, the previous task's performance of DrQ is damaged significantly after learning the new task.
Moreover, when learning from scratch, the sample efficiency of CtrlFormer\xspace is also comparable to that of DrQ with multiple heads. Although Dreamer shows promising transferability in domains like \walkercolor{Walker}, where different tasks share the same dynamics, it transfers poorly in domains like \reachercolor{Reacher}, where the dynamics differ among tasks.
The representation encoded by a pre-trained ResNet-50 shows the same performance across tasks but lower sample efficiency than CtrlFormer\xspace and DrQ.
\textbf{Transferring across domains.} Compared to the tasks in the same domain whose learning difficulty is readily defined in the DMControl benchmark, it is not easy to distinguish which task is easier to learn for tasks from different domains.
Thus, we test the bi-directional transferability in different domains, \emph{i.e.}, transferring from \fingercolor{Finger} domain to \reachercolor{Reacher} domain and transferring by an inverse order.
As shown in Table~\ref{table:cross_domain}, CtrlFormer\xspace has the best transferability and does not show a significant decrease after fine-tuning on the new task. In contrast, the performance of DrQ decreases significantly after learning a new task, and it suffers more severe performance damage than in the same-domain setting because of the larger gap across domains.
\textbf{Sequential transferring among more tasks.} In order to test whether the improvement from state representation transfer becomes more significant as the number of learned tasks increases, we test the performance of CtrlFormer\xspace on four tasks, ordered from easy to difficult and learned sequentially, and further compare transfer learning via CtrlFormer\xspace to a method that learns the four tasks simultaneously.
As shown in Table~\ref{tab:4tasks}, sequentially learning the four tasks via CtrlFormer\xspace achieves the best sample efficiency. Furthermore, we find that transferring state representations from more tasks performs better than transferring from only one task. Under this setting, CtrlFormer\xspace using only 100$k$ samples can surpass the performance of learning from scratch with 500$k$ samples.
\begin{table}[ht]
\begin{center}
\footnotesize
\setlength{\tabcolsep}{5pt}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{l | c cc c}
\Xhline{1.0pt}
Method & \multicolumn{4}{c}{Task 0 $\rightarrow$ Task 1 $\rightarrow$ Task 2 $\rightarrow$ Task 3} \\
\Xhline{1.0pt}
Scratch (100$k$) & \onepm{967}{27}& \onepm{869}{61} & \onepm{759}{48} & 0 \\
Train together (100$k$) & \onepm{433}{23}&\onepm{143}{34} & \onepm{310}{41} & $0$ \\
\cellcolor{codegray}CtrlFormer\xspace (100$k$) & \cellcolor{codegray}\onebfpm{967}{27}&\cellcolor{codegray}\onebfpm{981}{29} &\cellcolor{codegray}\onebfpm{988}{36} & \cellcolor{codegray}\onebfpm{853}{69} \\
\hline
Scratch (500$k$) & \onepm{995}{18}& \onepm{949}{44} & \onepm{846}{25} & \onepm{671}{81} \\
Train together (500$k$) & \onepm{947}{32}&\onepm{942}{53} & \onepm{632}{44} & \onepm{40}{15} \\
\cellcolor{codegray}CtrlFormer\xspace (500$k$) & \cellcolor{codegray}\onebfpm{995}{18}&\cellcolor{codegray}\onebfpm{1000}{0} &\cellcolor{codegray}\onebfpm{992}{26} & \cellcolor{codegray}\onebfpm{878}{64} \\
\Xhline{1.0pt}
\end{tabular}
\end{center}
\vspace{-10pt}
\caption{\textbf{Performance comparison on a series of tasks.} Tasks 0-3 are \mono{balance}, \mono{balance\_sparse}, \mono{swingup} and \mono{swingup\_sparse}, from the \cartpolecolor{Cartpole} domain.}
\label{tab:4tasks}
\end{table}
As shown in Figure~\ref{fig:mani-env}, we also conduct an experiment on the DeepMind manipulation benchmark~\cite{tunyasuvunakool2020dm_control}, which provides a Kinova robotic arm and a list of objects for building manipulation tasks, to further show the advantage of CtrlFormer\xspace. We test the performance of CtrlFormer\xspace on 2 tasks: (1) Reach the ball: push the small red object near the white ball with the robot arm; (2) Reach the chess piece: push the small red object near the white chess piece with the robot arm. As shown in Table~\ref{tab:manipulation}, CtrlFormer\xspace surpasses DrQ in terms of both transferability and sample efficiency when learning from scratch.
\begin{figure}[t]
\vspace{-10pt}
\begin{center}
\includegraphics[width=0.8\textwidth]{imgs/mani_env.pdf}
\end{center}
\vspace{-15pt}
\caption{Tasks in the DeepMind manipulation benchmark.}
\label{fig:mani-env}
\end{figure}
\begin{table}[t]
\footnotesize
\begin{tabular}{l|c|c}
\hline & DrQ & CtrlFormer \\
\hline Scratch Task1(500k) & $154_{\pm 41}$ & $\textbf{175}_{\pm 63}$ \\
\hline Retest Task1 & $87_{\pm 33}$ & $\textbf{162}_{\pm 75}$ \\
\hline Scratch Task2(500k) & $141_{\pm 47}$ & $\textbf{164}_{\pm 33}$ \\
\hline Transfer T1 to T2 (100k) & $73_{\pm 48}$ & $\textbf{116}_{\pm 34}$ \\
\hline
\end{tabular}
\vspace{-13pt}
\caption{Performance comparison on the DeepMind manipulation benchmark.}
\label{tab:manipulation}
\end{table}
\begin{figure*}[ht]
\centering
\label{fig:Visualization}
\begin{minipage}{0.31\linewidth}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{attention_vis/vis_of_res.pdf}
\caption{Visualization of Pre-trained ResNet}
\label{fig:visual_res}
\end{subfigure}
\end{minipage}~~
\begin{minipage}{0.31\linewidth}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{attention_vis/vis_of_control.pdf}
\caption{Visualization of CtrlFormer\xspace}
\label{fig:visual_control}
\end{subfigure}
\end{minipage}~~
\begin{minipage}{0.295\linewidth}
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=\linewidth]{imgs/chord.pdf}
\caption{Visualized similarity between tasks}
\label{fig:cos_similarity_dmc}
\end{subfigure}%
\end{minipage}
\vspace{-15pt}
\caption{Visualization of the attention on the input image and the similarity of different tasks in DMControl benchmark.}
\label{fig:visualization}
\end{figure*}
\subsection{Visualization}
\textbf{Visualization of the similarity of different tasks.} We visualize the cosine similarity of policy tokens between tasks in Figure~\ref{fig:cos_similarity_dmc}, which reflects the similarity of the visual representations between different tasks. The width and color of the bands in Figure~\ref{fig:cos_similarity_dmc} represent the similarity of the representations between two tasks: the thicker the band and the darker the color, the higher the similarity between the two tasks.
As shown in Figure~\ref{fig:cos_similarity_dmc}, the similarities between \walkercolor{Walker}~(\mono{stand}) and \walkercolor{Walker}~(\mono{walk}) and between \cartpolecolor{Cartpole}~(\mono{swingup}) and \cartpolecolor{Cartpole}~(\mono{swingupsparse}), both within the same domain, are the two strongest, indicating better feature-transfer potential. This is consistent with our test results: CtrlFormer\xspace significantly improves the sample efficiency after transferring the state representation.
In the cross-domain setting, the similarity between tasks varies considerably; for example, the representation similarity between \fingercolor{finger}~(\mono{turn-easy}) and \reachercolor{reacher}~(\mono{easy}) is significantly higher than the similarity between \fingercolor{finger}~(\mono{turn-easy}) and \reachercolor{reacher}~(\mono{hard}). This is because, in the \reachercolor{reacher}~(\mono{hard}) task, the target point is quite small, so the model pays much attention to the target point and relatively little attention to the rod, while the finger task focuses more on controlling the rod.
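As a sketch (with random stand-in tokens), the pairwise similarities behind this chart can be computed as the cosine similarity between the learned policy tokens:
\begin{verbatim}
import torch
import torch.nn.functional as F

policy_tokens = torch.randn(8, 192)            # one 192-d token per task (stand-in)
normed = F.normalize(policy_tokens, dim=-1)
similarity = normed @ normed.t()               # (8, 8) pairwise cosine similarities
\end{verbatim}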
\begin{figure}
\begin{center}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{imgs/CNN_BE_AF.pdf}
\caption{DrQ}\label{fig:cam-drq}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{imgs/CF_BE_AF.pdf}
\caption{CtrlFormer\xspace}\label{fig:cam-ctrlf}
\end{subfigure}
\end{center}
\vspace{-18pt}
\caption{
\textbf{Comparison of the attention map change before and after the transferring.}
Figure~\ref{fig:cam-drq} shows that DrQ fails to pay attention to some key areas after transferring.
In contrast, our CtrlFormer\xspace~(Figure~\ref{fig:cam-ctrlf}) has consistent attention areas before and after transferring, demonstrating its superior property of transferring without catastrophic forgetting.}
\vspace{-6pt}
\label{fig:BE_AF}
\end{figure}
\begin{table}[t]
\centering
\setlength{\tabcolsep}{2pt}
\renewcommand{\arraystretch}{1.0}
\begin{tabular}{l | c c|c c}
\Xhline{1.0pt}
\multirow{2}{*}{ CtrlFormer\xspace } & \multicolumn{2}{c|}{w/o contrastive} & \multicolumn{2}{c}{w/ contrastive } \\
& $100 k$ & $500 k$ & $100 k$ & $500 k$ \\
\Xhline{1.0pt}
Cartpole(swingup) & \onepm{391}{42} & \onepm{835}{29} & \onebfpm{759}{48} & \onebfpm{846}{25} \\
Cartpole(swingup-sp) & 0 & \onepm{137}{54} & 0 & \onebfpm{671}{81}\\
Reacher(easy) & \onepm{622}{38} & \onepm{917}{64}& \onebfpm{642}{42}& \onebfpm{973}{53}\\
Reacher(hard) & \onepm{49}{15}&\onepm{234}{67} &\onebfpm{104}{48} &\onebfpm{548}{131} \\
Walker(stand) & \onepm{865}{33} &\onepm{930}{44} & \onebfpm{877}{42} & \onebfpm{954}{38} \\
Walker(walk) & \onepm{406}{73}& \onepm{708}{45} & \onebfpm{593}{52} &\onebfpm{903}{43} \\
Finger(turn-easy) & \onepm{15}{7} & \onepm{344}{38} & \onebfpm{281}{67} & \onebfpm{493}{35} \\
Finger(turn-hard) & \onepm{99}{44} & \onepm{136}{38} &\onebfpm{197}{78} & \onebfpm{344}{47}\\
\Xhline{1.0pt}
\end{tabular}
\caption{Ablation study on the effectiveness of co-training with contrastive learning. }
\label{tab:ablation_study}
\vspace{-12pt}
\end{table}
\textbf{Visualization of the attention on the input image.} We visualize the attention of CtrlFormer\xspace and a pre-trained ResNet-50 on the input image by Grad-CAM~\cite{selvaraju2016grad}, a typical visualization method; more details are introduced in Appendix~\ref{app:Visual}. As shown in Figure~\ref{fig:visual_res}, the attention of ResNet-50 is disturbed by many things unrelated to the task, which is the key reason why it has low sample efficiency in both Table~\ref{table:Transferring_tasks_in_same_domain} and Table~\ref{table:cross_domain}. The attention of CtrlFormer\xspace is highly correlated with the task, and different policy tokens learn different attention over the input image. The attention learned from similar tasks is similar, but each has its own emphasis. Moreover, we compare the attention maps on the old task before and after transferring in Figure~\ref{fig:BE_AF}: the attention map of CtrlFormer\xspace barely changes, while that of the CNN-based model used in DrQ changes noticeably, which provides a reasonable explanation for its poor retesting performance.
\subsection{Ablation Study}
To illustrate the effect of the co-training method with contrastive learning, we compare the sample efficiency of CtrlFormer\xspace with and without co-training. As shown in Table~\ref{tab:ablation_study}, CtrlFormer\xspace with co-training shows higher sample efficiency, demonstrating the effectiveness of the proposed co-training for improving the sample efficiency of the transformer-based model.
In conclusion, CtrlFormer\xspace surpasses the baselines and shows great transferability for visual control tasks.
The experiments on the DMControl benchmark illustrate that CtrlFormer\xspace has great potential to model the correlation and irrelevance between different tasks, which improves the sample efficiency significantly and avoids catastrophic forgetting.
\section{Related Works}\label{sec:related_works}
\textbf{Learning Transferable State Representation.} For pretraining objectives such as classification and contrastive learning, representations learned on large-scale offline datasets~(ImageNet~\cite{deng2009imagenet}, COCO~\cite{lin2014microsoft}, etc.) have high generalization ability~\cite{he2020momentum, yen2020learning}. However, downstream reinforcement learning methods based on such task-agnostic representations empirically show low sample efficiency, since the representations contain a lot of task-irrelevant, interfering information.
Progressive neural networks~\cite{rusu2016progressive, rusu2017sim,gideon2017progressive} are a representative structure for transferring state representations; they are composed of multiple columns, where each column is a policy network for a specific task, with lateral connections added between columns.
The number of parameters to learn grows proportionally with the number of incoming tasks, which hinders scalability.
PathNet~\cite{fernando2017pathnet} tries to alleviate this issue by using a size-fixed network. It contains multiple pathways, which are subsets of neurons whose weights contain the knowledge of previous tasks and are frozen during training on new tasks. The pathways that determine which parts of the network can be re-used for new tasks are discovered by a tournament-selection genetic algorithm. PathNet fixes the parameters along paths learned on previous tasks and re-evolves a new population of paths for a new task to accelerate behavior learning. However, the pathway-discovery process with the genetic algorithm has a high computational cost.
Latent variable models offer a flexible way to represent key information of the observations by optimizing a variational lower bound~\cite{krishnan2015deep, karl2016deep, doerr2018probabilistic, buesing2018learning,ha2018world, hafner2019learning, hafner2019dream, tirinzoni2020sequential,mu2021modelbased,chen2022flow}. \citet{2018Recurrent} propose the world model algorithm to learn representations with a variational autoencoder~(VAE). PlaNet~\cite{hafner2019learning} utilizes a recurrent state-space model~(RSSM) to learn the representation and the latent dynamics jointly. The transition probability is modeled in the latent space instead of the original state space.
Dreamer~\cite{hafner2019dream} utilizes the RSSM to make the long-term imagination. Although Dreamer is promising to transfer the knowledge across tasks that share the same dynamics, it is still challenging to transfer among tasks across domains.
\textbf{Vision Transformer.} With the great successes of Transformers~\cite{vaswani2017attention} in NLP~\cite{devlin2018bert, radford2015unsupervised}, they have been applied to computer vision problems. ViT~\cite{dosovitskiy2020image} is the first pure Transformer model introduced into the vision community and surpasses CNNs with large-scale pretraining on the private JFT dataset~\cite{riquelme2021scaling}. DeiT~\cite{touvron2021training} trains ViT from scratch on ImageNet-1K~\cite{deng2009imagenet} and achieves better performance than CNN counterparts. Pyramid ViT (PVT)~\cite{wang2021pyramid} is the first hierarchical design for ViT, proposing a progressive shrinking pyramid and spatial-reduction attention.
Swin Transformer~\cite{liu2021swin} computes attention within local windows and adopts shifted windows for communication aggregation. More recently, efficient transfer learning has also been explored for vision Transformers~\cite{bahng-2022-vp, jia-2022-vpt, chen2022adaptformer}. In this paper, we take the original ViT~\cite{dosovitskiy2020image} as the visual backbone with simple pooling layers, which are used to reduce the computational burden; more advanced structures may bring further gains.
\section{Introduction}
\label{submission}
\begin{figure}[ht]
\footnotesize
\begin{center}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.99\textwidth]{imgs/teaser_old.pdf}
\caption{Maintainability}\label{fig:maintain}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.99\textwidth]{imgs/teaser_new.pdf}
\caption{Transferability}\label{fig:trans}
\end{subfigure}
\end{center}
\vspace{-10pt}
\caption{\textbf{Effect of CtrlFormer\xspace.} The agent first learns state representations in one task, and then transfers them to a new task, \emph{e.g.}, from finger~(turn-easy) to finger~(turn-hard), and from cartpole~(swingup) to cartpole~(swingup-sparse). Figure~\textbf{\ref{fig:maintain}} shows the \textit{maintainability} by comparing the performance in the old task before~(scratch) and after~(retest) transferring to a new task. CtrlFormer\xspace doesn't have catastrophic forgetting after transferring to the new task, and the performance is basically the same as learning from scratch. However, the performance of DrQ~\cite{kostrikov2020image} drops significantly after transferring; Figure~\textbf{\ref{fig:trans}} shows the \textit{transferability} by comparing the performance of learning from scratch and from transferring.
Transferring previous learned knowledge benefits much for CtrlFormer\xspace~(labeled as CtrlF).}\label{fig:Effect}\vspace{-5pt}
\end{figure}
Visual control is important for various real-world applications such as playing Atari games~\cite{mnih2015human}, Go games~\cite{silver2017mastering}, robotic control~\cite{kober2013reinforcement,yuan2022don}, and autonomous driving~\cite{wang2018deep}.
Although remarkable successes have been made, a long-standing goal of visual control, transferring the learned knowledge to new tasks without catastrophic forgetting, remains challenging and unsolved.
Unlike a machine, a human can quickly identify the critical information to learn a new task with only a few actions and observations, since a human can discover the relevancy/irrelevancy between the current task and the previous tasks he/she has learned, and decide which information to keep or transfer.
As a result, a new task can be learned quickly by a human without forgetting what has been learned before. Moreover, the ``state representation'' can be strengthened for better generalization to future tasks.
Modern machine learning methods for transferable representation learning across tasks can generally be categorized into three streams, each with certain limitations.
In the first stream, many representative works such as~\cite{shah2021rrl} utilize features pretrained on the ImageNet~\cite{deng2009imagenet} and COCO~\cite{lin2014microsoft} datasets.
The domain gap between the pretraining datasets and the target datasets hinders their performance and sample efficiency.
In the second stream, \citet{rusu2016progressive, fernando2017pathnet} trained a super network
to accommodate a new task. These approaches often allocate different parts of the supernet to learn different tasks. However, the network parameters and computations increase proportionally as the number of tasks grows.
In the third stream, latent variable models~\cite{ha2018world,hafner2019learning,hafner2019dream} learn representations by optimizing a variational bound. These approaches struggle when the tasks come from different domains.
Can we learn sample-efficient transferable state representation across different control tasks in a single Transformer network?
The self-attention mechanism in the Transformer mimics the perceived attention of synaptic connections in the human brain, as argued in \cite{oby2019new}.
Transformers could be powerful in modeling the relevancy/irrelevancy between different control tasks, alleviating the weaknesses of previous works.
However, simply porting the Transformer to this problem cannot resolve the above limitations, because the Transformer is extremely sample-inefficient (data-hungry), as demonstrated in NLP~\cite{vaswani2017attention, devlin2018bert} and computer vision~\cite{dosovitskiy2020image}.
This paper proposes a novel Transformer for representation learning in visual control, named CtrlFormer\xspace, which has two benefits compared to previous works.
Firstly, the self-attention mechanism in CtrlFormer\xspace learns both visual tokens and policy tokens for multiple different control tasks simultaneously, fully capturing relevant/irrelevant features between tasks. This enables the knowledge learned from previous tasks to be transferred to a new task, while maintaining the learned representations of previous tasks.
Secondly, CtrlFormer\xspace reduces the required number of training samples by using contrastive reinforcement learning, where the gradients of the policy loss and the self-supervised contrastive loss are propagated jointly for representation learning.
For example,
as shown in Figure~\ref{fig:Overview}, the input image is divided into several patches, and each patch corresponds to a token. CtrlFormer\xspace learns self-attention between image tokens, as well as policy tokens of different control tasks such as ``standing'' and ``walking''.
In this way, CtrlFormer not only decouples multiple control policies, but also decouples the features for behaviour learning and self-supervised visual representation learning, improving transferability among different tasks.
Our contributions are three-fold. \textbf{(1)} A novel control Transformer (CtrlFormer\xspace) is proposed for learning transferable state representation in visual control tasks. CtrlFormer\xspace models the relevancy/irrelevancy between distinct tasks by self-attention mechanism across visual data and control policies. It makes the knowledge learned from previous tasks transferable to a new task, while maintaining accuracy and high sample efficiency in previous tasks.
\textbf{(2)} CtrlFormer\xspace can improve sample efficiency by combining reinforcement learning with self-supervised contrastive visual representation learning~\cite{he2020momentum, grill2020bootstrap} and can reduce the number of parameters and computations via a pyramid Transformer structure.
\textbf{(3)} Extensive experiments show that CtrlFormer\xspace outperforms previous works in terms of both transferability and sample efficiency. As shown in Figure~\ref{fig:Effect},
transferring previously learned state representation significantly improves the sample efficiency of learning new tasks. Furthermore, CtrlFormer\xspace does not have catastrophic forgetting after transferring to the new task, and the performance is basically the same as learning from scratch.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{imgs/figure_1_new.pdf}
\vspace{-5pt}
\caption{\footnotesize
\textbf{Overview of CtrlFormer\xspace for visual control.}
The input image is split into several patch tokens. Each task is assigned a specific policy token, which is a randomly-initialized and learnable variable. Tasks can come from the same domain, such as the Finger-turn-easy and Finger-turn-hard, or cross-domain, such as Reacher-easy and Cartpole-swingup.
CtrlFormer\xspace learns the self-attention between observed image tokens, as well as policy tokens of
different control tasks by vision transformer, which helps the agent leverage the similarities with previous tasks and reuse the representations from previously learned tasks to promote the behaviour learning of the current task. The output of CtrlFormer\xspace is used as the input of the downstream policy networks.
}\vspace{-5pt}
\label{fig:Overview}
\end{figure}
\section{Preliminaries}
\textbf{Overview of Vision Transformer.}
The original Transformer~\cite{vaswani2017attention} takes as input a 1D sequence of token embeddings. To handle 2D images, ViT~\cite{dosovitskiy2020image} splits an input image $\mathbf{x} \in \mathbb{R}^{H\times W\times C}$ into a sequence of flattened 2D patches $\mathbf{x}_p \in \mathbb{R}^{N\times(P^2 \cdot C)}$, where $H$ and $W$ denote the height and width of the image, $C$ the number of channels, $(P, P)$ the resolution of each image patch, and $N = \frac{HW}{P^2}$ the number of flattened patches. After obtaining $\mathbf{x}_p$, ViT maps it to $D$ dimensions with a trainable linear projection and uses this constant latent vector size $D$ through all of its layers. The output of this projection is called the patch embeddings.
\newcommand{\mbf}[1]{\mathbf{#1}}
\begin{equation}
\mathbf{z}_0 = [ \mbf{x}_\text{class}; \, \mbf{x}^1_p \mbf{E}; \, \mbf{x}^2_p \mbf{E}; \cdots; \, \mbf{x}^{N}_p \mbf{E} ]
\end{equation}
where $\mbf{E} \in \mathbb{R}^{(P^2 \cdot C) \times D}$ is the projection matrix. $\mbf{x}_\text{class}$ denotes the class token.
The Transformer block~\cite{vaswani2017attention} includes alternating layers of multi-headed self-attention~(MHSA) and MLP blocks. LayerNorm~(LN) is applied before every block, and residual connections are added after every block. The MLP uses two layers with a GELU~\cite{hendrycks2016gaussian} non-linearity. This process can be formulated as:
\newcommand{\op}[1]{\text{#1}}
\begin{equation}
\footnotesize
\begin{split}
\mbf{z^\prime}_\ell &= \operatorname{MHSA}(\op{LN}(\mbf{z}_{\ell-1})) + \mbf{z}_{\ell-1}, \text{\quad} \ell=1\ldots L \\
\mbf{z}_\ell &= \operatorname{MLP}(\op{LN}(\mbf{z^\prime}_{\ell})) + \mbf{z^\prime}_{\ell}, \text{\quad} \ell=1\ldots L\\
\mbf{y} &= \operatorname{LN}(\mbf{z}_L^0)
\end{split}
\end{equation}
where $\mbf{y}$ denotes the image representation, which is encoded by the output of the class token at the last block.
To build up a Transformer block, an MLP~\cite{popescu2009multilayer} block with two linear transformations and a GELU~\cite{hendrycks2016gaussian} activation is usually adopted to provide nonlinearity. Note that the dimensions of the parameter matrices do not change when the number of tokens increases, which is a key advantage of the Transformer for handling variable-length inputs.
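For illustration, a minimal pre-LN Transformer block matching the formulation above can be sketched as follows (dimensions are illustrative; dropout and other details are omitted):
\begin{verbatim}
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim=192, heads=3, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, z):                       # z: (batch, tokens, dim)
        h = self.norm1(z)
        z = z + self.attn(h, h, h, need_weights=False)[0]   # MHSA + residual
        z = z + self.mlp(self.norm2(z))                      # MLP + residual
        return z
\end{verbatim}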
\textbf{Reinforcement Learning for Visual Control.} Reinforcement learning for visual control aims to learn the optimal policy given the observed images, and can be formulated as an infinite-horizon partially observable Markov decision process (POMDP) \cite{bellman1957mdp,kaelbling1998planning}. A POMDP can be denoted by $\mathcal{M}=\langle\mathcal{O}, \mathcal{A}, \mathcal{P}, p_0, r, \gamma\rangle$, where $\mathcal{O}$ is the high-dimensional observation space (\emph{i.e.}, image pixels), $\mathcal{A}$ is the action space, $\mathcal{P} = Pr(o_{t+1}|o_{\leq t},a_{t})$ represents the probability distribution over the next observation $o_{t+1}$ given the history of previous observations $o_{\leq t}$ and the current action $a_{t}$, and $p_0$ is the distribution of the initial state. $r: \mathcal{O} \times \mathcal{A} \rightarrow \mathbb{R}$ is the reward function that maps the current observation and action to a scalar reward, $r_t = r(o_{\leq t}, a_t)$. The overall objective is to find the optimal policy $\pi^*$ that maximizes the cumulative discounted return $E_{\pi}[\sum_{t=0}^{\infty} \gamma^t r_t | a_t \sim \pi(\cdot|s_{\leq t}), s_{t+1} \sim p(\cdot|s_{\leq t}, a_t), s_0 \sim p_0(\cdot)]$, where $\gamma \in [0, 1)$ is the discount factor, which weights recent rewards more heavily than future ones and is usually set to 0.99 in practice.
By stacking several consecutive image observations into a state, $s_t = \{o_t, o_{t-1}, o_{t-2}, \ldots\}$, the POMDP can be converted into a Markov Decision Process (MDP)~\cite{bellman1957mdp}, where the information at the next time step is determined by the information at the current step, independent of the earlier history. Thus, for an MDP, the transition dynamics can be written as $p = Pr(s'_{t}|s_t,a_{t})$, representing the distribution of the next state $s'_{t}$ given the current state $s_{t}$ and action $a_{t}$, and the reward function is similarly written as $r_t = r(s_t, a_t)$.
In practice, three consecutive images are stacked into a state $s_{t}$, and the objective becomes finding the optimal policy $\pi^{*}(a_t|s_t)$ that maximizes the expected return.
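A minimal sketch of this frame-stacking step (the wrapper below is illustrative, not the benchmark's own implementation) is:
\begin{verbatim}
from collections import deque
import numpy as np

class FrameStack:
    """Stack the last k image observations along the channel axis."""
    def __init__(self, k=3):
        self.frames = deque(maxlen=k)

    def reset(self, obs):                        # obs: (H, W, C) array
        self.frames.extend([obs] * self.frames.maxlen)
        return np.concatenate(list(self.frames), axis=-1)

    def step(self, obs):
        self.frames.append(obs)
        return np.concatenate(list(self.frames), axis=-1)   # (H, W, kC)
\end{verbatim}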
\section{Method}
In this section, we introduce our CtrlFormer\xspace for visual control in detail. As shown in Figure~\ref{fig:Overview}, the observation is first split into $N$ patches and mapped to $N$ tokens $[\mathbf{x}_p^1;\cdots;\mathbf{x}_p^N]$. CtrlFormer\xspace then takes as input the image patches together with a contrastive token $\mathbf{x}_\text{con}$, used to improve sample efficiency, and $K$ policy tokens $[\mathbf{x}_{\pi}^1;\cdots;\mathbf{x}_{\pi}^K]$, and interactively encodes them with the self-attention mechanism, leading to representations $[\mathbf{z}_\text{con};\mathbf{z}_{\pi}^1;\cdots;\mathbf{z}_{\pi}^K;\mathbf{z}_{p}^1;\cdots;\mathbf{z}_{p}^N]$. Each task is assigned a policy token $x^{i}_{\pi}$, which is a randomly initialized but learnable variable similar to the class token in conventional vision transformers.
In training, the policy token learns to abstract the characteristic of the corresponding task and the correlations across tasks via gradient back-propagation.
In inference, it serves as the query to progressively gather useful information from visual inputs and previously learned tasks through self-attention layers.
To train the CtrlFormer\xspace encoder, the representation of each policy token, $\mathbf{z}_{\pi}^{i}$, is used as the input of the task-specific policy network and Q-network in the downstream reinforcement learning algorithm, and the goal is to maximize the expected return for each task. The data-regularized method is used to reduce the variance of Q-learning and to improve robustness to changes in the representation. Besides, a contrastive objective is applied to $\mathbf{z}_{\text{con}}$ to aid the training process, which significantly improves sample efficiency. The total loss is the sum of the reinforcement learning part and the contrastive learning part in equal proportion.
In Section \ref{subsec: architecture}, we first introduce the details in CtrlFormer, and then a discussion is presented on how to transfer the learned representations to a new task in Section \ref{subsec: transfer}. The two objectives for training CtrlFormer, i.e., the policy learning problem and contrastive learning, are discussed in detail respectively in Section \ref{subsec: RL} and \ref{subsec: contrastive}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.92\textwidth]{imgs/overall_ab.pdf}
\end{center}
\vspace{-10pt}
\caption{\textbf{The structure of CtrlFormer\xspace.}
CtrlFormer\xspace has a pyramid structure consisting of 3 stages, and each stage has 3 blocks. The input image is split into several patches, and these patches are mapped to a sequence of \textit{patch} tokens. The number of patch tokens is reduced by half by a pooling operation between two stages. Each task has an independent \textit{policy} token, and the number of policy tokens is maintained through the 3 stages. All tasks share an additional \textit{contrastive} token. The representations are learned by the self-attention mechanism. The output of a policy token is used for downstream reinforcement learning, and the output of the contrastive token is used for contrastive learning.
}\label{fig:overall-pipeline}
\vspace{-3pt}
\end{figure}
\subsection{The Architecture of CtrlFormer\xspace }
\label{subsec: architecture}
In CtrlFormer\xspace, as shown in Figure~\ref{fig:overall-pipeline}, each task has an independent \verb|[policy]| token $\mathbf{x}_{\pi}^{i}$, which is similar to the \verb|[class]| token in ViT~\cite{dosovitskiy2020image}. Three consecutive frames are stacked into a 9-channel input tensor $\mathbf{x}_{p} \in \mathbb{R}^{H \times W \times 9}$.
We split the input tensor into $N=9HW/P^{2}$ patches with patch size $P \times P$ and then map them to a sequence of vectors $\left\{\mathbf{x}_{p}^{1},\mathbf{x}_{p}^{2},\ldots,\mathbf{x}_{p}^{N}\right\}$.
Contrastive learning is trained together with reinforcement learning as an auxiliary task to improve sample efficiency and is assigned a \verb|[contrastive]| token $\mathbf{x}_{\text{con}}$. Position embeddings $\mathbf{E}_{\text {pos}} \in \mathbb{R}^{N \times D}$, the same as those used in~\cite{dosovitskiy2020image}, are added to the patch embeddings $\left\{\mathbf{x}_{p}^{i}\right\}_{i=1}^{N}$ to retain positional information.
Thus, the input of the transformer is
\begin{equation}
\vspace{-10pt}
\mathbf{z}_{\ell_0}=\left[\mathbf{x}_{\text{con}};\mathbf{x}_{\pi}^{1};\ldots;\mathbf{x}_{\pi}^{K}; \mathbf{x}_{p}^{1};\cdots; \mathbf{x}_{p}^{N} \right]+\mathbf{E}_{\text{pos}}
\end{equation}
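A sketch of how this input sequence could be assembled (tensor shapes follow the text; the variable names and the convolutional patch embedding are illustrative assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

B, D, P, K = 4, 192, 8, 2                       # batch, embed dim, patch size, tasks
x = torch.randn(B, 9, 84, 84)                   # three stacked frames (9 channels)
patch_embed = nn.Conv2d(9, D, kernel_size=P, stride=P)
patches = patch_embed(x).flatten(2).transpose(1, 2)         # (B, N, D)
N = patches.shape[1]

con_token = torch.zeros(B, 1, D)       # learnable [contrastive] token in practice
policy_tokens = torch.zeros(B, K, D)   # one learnable [policy] token per task
pos_embed = torch.zeros(1, 1 + K + N, D)
z0 = torch.cat([con_token, policy_tokens, patches], dim=1) + pos_embed
\end{verbatim}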
The blocks with multi-head self-attention~(MHSA) ~\cite{vaswani2017attention} and layer normalization~(LN) ~\cite{ba2016layer} could be formulated as
\begin{equation}
\footnotesize
\begin{split}
\mathbf{z}_{\ell_{j}}^{\prime}
&=\operatorname{MHSA}\left(\operatorname{LN}\left(\mathbf{z}_{\ell_{j-1}}\right)\right)+\mathbf{z}_{\ell_{j-1}} \\
\mathbf{z}_{\ell_{j}} &=\operatorname{MLP}\left(\operatorname{LN}\left(\mathbf{z}_{\ell_{j}}^{\prime}\right)\right)+\mathbf{z}_{\ell_{j}}^{\prime} \\
\end{split}
\end{equation}
where $\mathbf{z}_{\ell_{0}}$ is the input of the transformer and $\mathbf{z}_{\ell_{j}}$ is the output of the $j$-th block.
To reduce the number of parameters that need to be learned, we utilize a pyramidal structure. There are three stages in the pyramidal vision transformer, and the number of tokens decreases across stages via a pooling layer. In the pooling layer, the token sequence of length $N$ is reshaped into $(H/P) \times (W/P)$ and pooled by a 2$\times$2 filter with stride $(2,1)$ in the first stage and stride $(1,2)$ in the second stage. After pooling, the tensor is flattened back into a token sequence to serve as the input of the next stage. A more detailed structure of CtrlFormer\xspace is shown in Figure~\ref{fig:detailed-structure} in Appendix~\ref{app: detailed-structure}.
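The token pooling between stages can be sketched as follows (only the patch tokens are pooled; the pooling type, grid size, and shapes here are illustrative assumptions):
\begin{verbatim}
import torch
import torch.nn.functional as F

B, D, h, w = 4, 192, 10, 10
patch_tokens = torch.randn(B, h * w, D)

grid = patch_tokens.transpose(1, 2).reshape(B, D, h, w)   # tokens -> 2D grid
grid = F.avg_pool2d(grid, kernel_size=2, stride=(2, 1))   # stage-1 pooling
pooled_tokens = grid.flatten(2).transpose(1, 2)           # back to (B, N', D)
\end{verbatim}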
\subsection{Representation Transferring in New Task}
\label{subsec: transfer}
After learning the $K$-th task with policy token $\mathbf{z}_{\pi}^{K}$, we pad a new policy token $\mathbf{z}_{\pi}^{K+1}$, and the new task is learned with the policy token $\mathbf{z}_{\pi}^{K+1}$, the policy network $\pi^{K+1}(\cdot)$, and the Q-networks $Q^{K+1}_{1}(\cdot,\cdot)$ and $Q^{K+1}_{2}(\cdot,\cdot)$. CtrlFormer\xspace inherits the merit of the transformer model in handling variable-length input sequences. The dimensions of CtrlFormer\xspace's model parameters remain unchanged when the number of policy tokens changes, \textit{i.e.}, the weight dimensions in self-attention are only related to the dimension of the tokens rather than their number.
The parameter matrices $\mathbf{W}^{q} \in \mathbb{R}^{D \times D}$, $\mathbf{W}^{k} \in \mathbb{R}^{D \times D}$, and $\mathbf{W}^{v} \in \mathbb{R}^{D \times D}$ all retain their original dimensions. When transferring to the ($K$+1)-th task, the number of input tokens increases from $N$+$K$+1 to $N$+$K$+2, and hence there are $N$+$K$+2 output representations. Thus there is always a one-to-one correspondence between policy tokens and output representations, and both old and new tasks can be tackled within CtrlFormer\xspace.
The policy query $\mathbf{q}_{\pi} \in \mathbb{R}^{(K+1) \times D}$ and the key $\mathbf{k} \in \mathbb{R}^{(N+K+2) \times D}$ are calculated by
\begin{equation}
\begin{aligned}
&\mathbf{q}_{\pi}=\mathbf{z}_{\pi} \mathbf{W}^{q}, \mathbf{k}=\mathbf{z} \mathbf{W}^{k} \\
&\mathbf{z}=\left[\mathbf{z}_{con} ; \mathbf{z}_{\pi}^{1} ; \ldots ; \mathbf{z}_{\pi}^{K+1} ; \mathbf{z}_{p}^{1}; \ldots ; \mathbf{z}_{p}^{N}\right]\\
&\mathbf{z}_{\pi}=\left[\mathbf{z}_{\pi}^{1} ; \ldots ; \mathbf{z}_{\pi}^{K+1}\right]
\end{aligned}
\end{equation}
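A minimal sketch of this padding step (with the new token optionally initialized from the previous task's token, as in our transfer setting) is:
\begin{verbatim}
import torch
import torch.nn as nn

D, K = 192, 3
policy_tokens = nn.Parameter(torch.randn(1, K, D))        # tokens of learned tasks

new_token = policy_tokens[:, -1:, :].detach().clone()      # init from previous task
policy_tokens = nn.Parameter(torch.cat([policy_tokens.data, new_token], dim=1))
# policy_tokens now holds K+1 tokens; W^q, W^k, W^v (D x D) keep their shapes.
\end{verbatim}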
\subsection{Downstream Visual Reinforcement Learning Task}
\begin{figure}
\begin{center}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=\textwidth]{imgs/branch_rl.pdf}
\caption{\footnotesize{Reinforcement learning.}}\label{RL_Branch}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\includegraphics[width=0.98\textwidth]{imgs/branch_constra.pdf}
\caption{Contrastive learning.}\label{contrastive_branch }
\end{subfigure}
\end{center}
\vspace{-10pt}
\footnotesize
\caption{\textbf{Downstream reinforcement learning with contrastive learning co-training.} Both reinforcement learning and contrastive learning are based on a momentum-updated learning framework, including an online encoder $f_{\theta}$, Q-networks $Q(\cdot, \cdot)$, and an online projector $g_{\theta}$, together with a target encoder $f_{\xi}$, a target Q-network $Q^{*}(\cdot,\cdot)$, and a target projector $g_{\xi}$, which are updated by the EMA method.
In the RL part, the observation is first encoded by the online encoder as the input of $Q(\cdot, \cdot)$. Different augmented views of the observation are encoded by the target encoder $f_{\xi}$ as the input of $Q^{*}(\cdot, \cdot)$. The target Q-value is the mean of the values calculated from the different views. The encoder and Q-networks are updated by the critic loss computed from the estimated and target Q-values. The objective of contrastive learning is to predict the projected representation of one augmented view of the observation from the projected representation of another augmented view. }\label{fig:Downstream}
\end{figure}
\label{subsec: RL}
With the output $\mathbf{z}=[\mathbf{z}_{\text{con}};\mathbf{z}_{\pi}^{1};\ldots;\mathbf{z}_{\pi}^{K};\mathbf{z}_{p}^{1};\ldots;\mathbf{z}_{p}^{N}]$ of the CtrlFormer\xspace, we take $\mathbf{z}_{\pi}^{i}$ as the representation for the $i$-th task, which contains the task-relevant information gathered from all image tokens and policy tokens through the self-attention mechanism.
We utilize SAC~\cite{haarnoja2018soft} as the downstream reinforcement learning algorithm to solve for the optimal policy with the representation $\mathbf{z}_{\pi}^{i}$ for the $i$-th task. SAC is an off-policy RL algorithm that optimizes a stochastic policy to maximize the expected trajectory return; it learns a stochastic policy $\pi_\psi$ and critics $Q_{\phi_1}$ and $Q_{\phi_2}$ to optimize a $\gamma$-discounted maximum-entropy objective~\cite{ziebart2008maxent}. We use the same structure for the downstream Q-network and policy network as DrQ~\cite{kostrikov2020image}. More details about SAC are introduced in Appendix~\ref{section:extended_background}, and the detailed structures of the Q-network and policy network are introduced in Appendix~\ref{app: detailed-structure}. The parameters $\phi_j$ are learned by minimizing the Bellman error:
\begin{equation}\footnotesize\label{eq:qmsbe}
\mathcal L (\phi_{j},\mathcal{B}) = \mathbb{E}_{t \sim \mathcal B} \left [\left ( Q_{\phi_j}(\mathbf{z}_{\pi}^{i},a) - \left( r+\gamma(1-d)\mathcal T \right )\right )^2 \right]
\end{equation}
where $(\mathbf{z}_{\pi}^{i},a,{\mathbf{z}^{\prime}}_{\pi}^{i},r,d)$ is a tuple with the current latent state $\mathbf{z}_{\pi}^{i}$, action $a$, next latent state ${\mathbf{z}^{\prime}}_{\pi}^{i}$, reward $r$, and done signal $d$; $\mathcal{B}$ is the replay buffer, and $\mathcal T$ is the target, defined as:
\begin{equation}\footnotesize\label{eq:target}
\mathcal{T} = \left (\min_{j=1,2} Q^{*}_{\phi_j} ({\mathbf{z}^{\prime}}_{\pi}^{i},a') - \alpha \log \pi_\psi(a'|{\mathbf{z}^{\prime}}_{\pi}^{i})\right )
\end{equation}
In the target \eqref{eq:target}, $Q^{*}_{\phi_j}$ denotes the target Q-network whose parameters are an exponential moving average (EMA)~\cite{holt2004forecasting} of the parameters of $Q_{\phi_j}$. Using the EMA has empirically been shown to improve training stability in off-policy RL algorithms \citep{mnih2015human}. The parameter $\alpha$ is a positive entropy coefficient that determines the priority of entropy maximization during policy optimization.
As shown in Figure \ref{RL_Branch},
to improve the robustness of the policy network and Q-network and further reduce the variance of Q-learning,
we regularize the Q-target function $Q_{\phi_{j}}^{*}(\cdot,\cdot)$ by data augmentation as shown in \eqref{eq:DRQ}
\vspace{-10pt}
\begin{equation}
\footnotesize
Q_{\phi_{j}}^{*}\left(\mathbf{z}_{\pi}^{i}, a\right) = \frac{1}{K} \sum_{m=1}^{K} Q_{\phi_{j}}^{*}\left(\mathbf{z}_{\pi}^{i,m}, a_{m}^{\prime}\right)
\vspace{-5pt}
\label{eq:DRQ}
\end{equation}
where $\mathbf{z}_{\pi}^{i,m}$ is encoded from a randomly augmented image input and $a'_{m} \sim \pi\left(\cdot \mid \mathbf{z}_{\pi}^{i,m}\right)$.
Such a Q-regularization method allows the generation of several surrogate observations with the same $Q$-values, thus providing a mechanism to reduce the variance of $Q$-function estimation.
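A sketch of this regularized target computation (the encoder, policy, target critics, and augmentation below are assumed callables, not the exact implementation):
\begin{verbatim}
import torch

def regularized_target(next_obs, reward, done, M, aug, encode, policy,
                       q1_t, q2_t, alpha, gamma=0.99):
    targets = []
    for _ in range(M):
        z = encode(aug(next_obs))                 # z'_pi from an augmented view
        a, logp = policy(z)                       # a' ~ pi(.|z') with log-prob
        q = torch.min(q1_t(z, a), q2_t(z, a)) - alpha * logp
        targets.append(q)
    target_q = torch.stack(targets, 0).mean(0)    # average over augmented views
    return reward + gamma * (1.0 - done) * target_q
\end{verbatim}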
While the critic is given by $Q_{\phi_{j}}$, the actor samples actions from policy $\pi_\psi$ and is trained by maximizing the expected return of its actions as in:
\begin{equation}\label{eq:actorloss}
\mathcal L (\psi) = \mathbb{E}_{a \sim \pi} \left [ Q^\pi (\mathbf{z}_{\pi}^{i},a) - \alpha \log \pi_\psi (a|\mathbf{z}_{\pi}^{i}) \right ]
\end{equation}
\subsection{Contrastive Learning for Efficient Downstream RL}
\label{subsec: contrastive}
\label{sec:Contrastive}
Transformers have empirically proven hard to train well on insufficient amounts of data. However, reinforcement learning aims to learn behaviour with as little interaction data as possible, since such data are expensive to collect.
DeiT~\cite{touvron2021training} introduces a teacher-student strategy for data-efficient transformer training, which relies on an auxiliary distillation token that ensures the student learns from the teacher through attention. Inspired by this idea, we propose a sample-efficient co-training method with contrastive learning for the transformer in reinforcement learning.
The goal of the contrastive task is to learn a representation $\mathbf{z}_{\text{con}}$ that can then be used for downstream tasks. As shown in Figure~\ref{contrastive_branch }, we use a momentum learning framework, which contains an online network and a target network updated by the exponential moving average~(EMA) method. The online network is defined by a set of weights ${\theta}$ and comprises three stages: an encoder $f_{\theta}$, a projector~$g_{\theta}$, and a predictor~$q_{\theta}$.
The target network, defined by a set of weights $\xi$, provides the regression targets to train the online network. As shown in Figure~\ref{contrastive_branch }, the observation $o$ is randomly transformed by two different image augmentations $t$ and $t^{\prime}$, drawn from two augmentation distributions $\mathcal{T}$ and $\mathcal{T}'$.
The two augmented views $v \triangleq t(o)$ and $v' \triangleq t'(o)$ are produced from $o$ by applying the augmentations $t\sim \mathcal{T}$ and $t'\sim \mathcal{T}'$, respectively. The details of the image augmentation implementation are introduced in Appendix~\ref{augmentaion}.
From the first augmented view~$v$, the online network outputs a representation $\mathbf{z}_{\text{con}} \triangleq f_{\theta}(v)$ by the vision transformer and a projection $y_{\theta} \triangleq g_{\theta}(\mathbf{z}_{\text{con}})$. The target network outputs $z^{\prime}_{\text{con}_\xi} \triangleq f_\xi(v')$ and the target projection $y^{\prime}_\xi \triangleq g_\xi (z^{\prime}_{\text{con}_\xi})$ from the second augmented view~$v'$, which are all stop-gradient.
We then output a prediction $q_{\theta}(y_{\theta})$ of $y'_\xi$ and $\ell_2$-normalize both $q_{\theta}(y_{\theta})$ and $y'_\xi$ to $\overline{q_{\theta}}\left(y_{\theta}\right) \triangleq q_{\theta}\left(y_{\theta}\right) /\left\|q_{\theta}\left(y_{\theta}\right)\right\|_{2}$ and $\bar{y}_{\xi}^{\prime} \triangleq y_{\xi}^{\prime} /\left\|y_{\xi}^{\prime}\right\|_{2}$.
The contrastive objective is defined by the mean squared error between the normalized predictions and target projections,
\begin{equation}
\footnotesize
\mathcal{L}_{\theta, \xi} \triangleq\left\|\overline{q_{\theta}}\left(y_{\theta}\right)-\bar{y}_{\xi}^{\prime}\right\|_{2}^{2}=2-2 \cdot \frac{\left\langle q_{\theta}\left(y_{\theta}\right), y_{\xi}^{\prime}\right\rangle}{\left\|q_{\theta}\left(y_{\theta}\right)\right\|_{2} \cdot\left\|y_{\xi}^{\prime}\right\|_{2}}.
\end{equation}
The updates of online network and target network are summarized as
\begin{equation}
\begin{aligned}
&\theta \leftarrow \operatorname{optimizer}\left(\theta, \nabla_{\theta} \mathcal{L}_{\theta, \xi}, \eta\right), \\
&\xi \leftarrow \tau \xi+(1-\tau) \theta,
\end{aligned}
\end{equation}
where $\mathrm{optimizer}$ is an optimizer and $\eta$ is a learning rate.
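A minimal sketch of the contrastive loss and the EMA update described above (function names are illustrative):
\begin{verbatim}
import torch
import torch.nn.functional as F

def contrastive_loss(q_online, y_target):
    # Normalized MSE between the online prediction and the stop-gradient target.
    q = F.normalize(q_online, dim=-1)
    y = F.normalize(y_target.detach(), dim=-1)
    return (2.0 - 2.0 * (q * y).sum(dim=-1)).mean()

def ema_update(target_net, online_net, tau):
    # xi <- tau * xi + (1 - tau) * theta
    with torch.no_grad():
        for p_t, p in zip(target_net.parameters(), online_net.parameters()):
            p_t.mul_(tau).add_((1.0 - tau) * p)
\end{verbatim}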
\section{Extended Background}
\label{section:extended_background}
\paragraph{Soft Actor-Critic}
The Soft Actor-Critic (SAC)~\cite{haarnoja2018sac} learns a state-action value function $Q_\theta$, a stochastic policy $\pi_\theta$ and a temperature $\alpha$ to find an optimal policy for an MDP $(S, A, p, r, \gamma)$ by optimizing a $\gamma$-discounted maximum-entropy objective~\cite{ziebart2008maxent}.
$\theta$ is used generically to denote the parameters updated through training in each part of the model. The actor policy $\pi_{\theta}(a_t|s_t)$ is a parametric $\text{tanh}$-Gaussian that given $s_t$ samples $a_t = \text{tanh}(\mu_\theta(s_t)+ \sigma_\theta(s_t) \epsilon)$, where $\epsilon \sim N(0, 1)$ and $\mu_\theta$ and $\sigma_\theta$ are parametric mean and standard deviation.
The policy evaluation step learns the critic $Q_\theta(s_t, a_t)$ network by optimizing a single step of the soft Bellman residual
\begin{equation}
\begin{split}
J_Q(\mathcal{D}) &= E_{\substack{( s_t,a_t, s'_t) \sim \mathcal{D} \\ a_t' \sim \pi(\cdot|s_t')}}[(Q_\theta(s_t, a_t) - y_t)^2] \\
y_t &= r(s_t, a_t) + \gamma [Q_{\theta'}(s'_t, a'_t) - \alpha \log \pi_\theta(a'_t|s'_t)] ,
\end{split}
\end{equation}
where $\mathcal{D}$ is a replay buffer of transitions, $\theta'$ is an exponential moving average of the weights. SAC uses clipped double-Q learning~\cite{hasselt2015doubledqn,fujimoto2018td3}, which we omit from our notation for simplicity but employ in practice.
The policy improvement step then fits the actor policy $\pi_\theta(a_t|s_t)$ network by optimizing the objective
\begin{align*}
J_\pi(\mathcal{D}) &= E_{s_t \sim \mathcal{D}}[ D_{KL}(\pi_\theta(\cdot|s_t) || \exp\{\frac{1}{\alpha}Q_\theta(s_t, \cdot)\})].
\end{align*}
Finally, the temperature $\alpha$ is learned with the loss
\begin{align*}
J_\alpha(\mathcal{D}) &= E_{\substack{s_t \sim \mathcal{D} \\ a_t \sim \pi_\theta(\cdot|s_t)}}[-\alpha \log \pi_\theta(a_t|s_t) - \alpha \mathcal{H}],
\end{align*}
where $\mathcal{H}$ is the target entropy hyper-parameter that the policy tries to match.
\paragraph{Deep Q-learning} DQN~\cite{mnih2013dqn} also learns a convolutional neural network to approximate the Q-function over states and actions. The main difference is that DQN operates on discrete action spaces, so the policy can be directly inferred from the Q-values. The parameters of DQN are updated by optimizing the squared residual error
\begin{align*}
J_Q(\mathcal{D}) &= E_{( s_t,a_t, s'_t) \sim \mathcal{D}}[(Q_\theta(s_t, a_t) - y_t)^2] \\
y_t &= r(s_t, a_t) + \gamma \max_{a'} Q_{\theta'}(s'_t, a') . \\
\end{align*}
In practice, the standard version of DQN is frequently combined with a set of tricks that improve performance and training stability, widely known as Rainbow~\cite{hasselt2015doubledqn}.
\section{Supplementary Materials for the Experiments}\label{sec:settings}
\label{section:hyperparams}
Our PyTorch code is implemented based on Timm~\cite{rw2019timm}, a PyTorch version of SAC~\cite{haarnoja2018sac}, and the official code of DrQ-v1~\cite{kostrikov2020image}. All the experiments are run on a GeForce RTX 3090 with 5 seeds.
\subsection{Detailed Structure of Vision Transformer in CtrlFormer\xspace}
We utilize a simple pyramidal vision transformer as the encoder of CtrlFormer\xspace, which has 3 stages, each containing 3 blocks. On top of the 192-dimensional output of the vision transformer, we add a fully-connected layer that maps the feature dimension to 50 and apply a \text{tanh} nonlinearity to the 50-dimensional output as the final output of the encoder.
The detailed structure of CtrlFormer\xspace is shown in Figure~\ref{fig:detailed-structure}.
The hyper-parameters are listed in Table~\ref{tab:Hyper-parameter-former}. The implementation is based on Timm~\cite{rw2019timm}, a collection of SOTA computer vision models with the ability to reproduce ImageNet training results.
We train the transformer model using AdamW~\citep{kingma2014adam} as the optimizer with $\beta_1=0.9$, $\beta_2=0.999$, a batch size of 512, and a high weight decay of $0.1$.
\begin{table}[b]
\centering
\begin{tabular}{l|c}
\hline
Parameters& Value\\
\hline
Image size & $84 \times 84$ \\
Patch size & 8 \\
Num patches & 196 \\
Input channels & 9 \\
Embedding dim & 192 \\
Depth & 9 \\
Num stage & 3 \\
Num blocks per stage & 3 \\
num heads & 3 \\
\hline
\end{tabular}
\caption{Hyper-parameters of CtrlFormer\xspace}
\label{tab:Hyper-parameter-former}
\end{table}
\subsection{Actor and Critic Networks}
\label{app: detailed-structure}
The structures of the actor and critic networks are the same in CtrlFormer\xspace and DrQ (CNN + multiple heads). Clipped double Q-learning~\cite{hasselt2015doubledqn, fujimoto2018td3} is used for the critic, where each $Q$-function is parametrized as a 3-layer MLP with ReLU activations after each layer except for the last. The actor is also a 3-layer MLP with ReLUs that outputs the mean and covariance of the diagonal Gaussian representing the policy. The hidden dimension is set to $1024$ for both the critic and the actor.
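A minimal sketch of these networks is given below (the action dimension shown is a placeholder; the actual value depends on the task):
\begin{verbatim}
import torch.nn as nn

def mlp(in_dim, out_dim, hidden_dim=1024):
    # 3-layer MLP, ReLU after every layer except the last.
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),
    )

feature_dim, action_dim = 50, 6        # action_dim is task-dependent
q1 = mlp(feature_dim + action_dim, 1)  # one of the two clipped Q-heads
q2 = mlp(feature_dim + action_dim, 1)
actor = mlp(feature_dim, 2 * action_dim)  # mean and log-std per action
\end{verbatim}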
\begin{figure}[htbp]
\centering
\includegraphics[width=0.85\linewidth]{imgs/moreformer_pic.pdf}
\caption{The detailed structure of CtrlFormer\xspace}
\label{fig:detailed-structure}
\vspace{-10pt}
\end{figure}
\subsection{CNN-based Encoder Network used in DrQ}
The CNN-based model~\cite{kostrikov2020image} employs an encoder consisting of four convolutional layers with $3\times 3$ kernels and $32$ channels, the same as in DrQ~\cite{kostrikov2020image}. A ReLU activation is applied after each convolutional layer. We use a stride of $1$ everywhere, except for the first convolutional layer, which has stride $2$. The output of the convolutional network is fed into a single fully-connected layer normalized by LayerNorm~\cite{ba2016layer}. Finally, we apply a \text{tanh} non-linearity to the $50$-dimensional output of the fully-connected layer. We initialize the weight matrices of the fully-connected and convolutional layers with orthogonal initialization~\cite{saxe2013ortho}.
The actor and critic networks share the weights of the convolutional layers of the encoder. Furthermore, only the critic optimizer is allowed to update these weights (i.e. the gradients from the actor are stopped before they propagate to the shared CNN layers).
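A minimal sketch of this encoder and of the stop-gradient on the actor path is given below (the \texttt{detach} flag is one possible way to realize the stopped gradients; layer sizes follow the description above):
\begin{verbatim}
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Sketch of the DrQ-style convolutional encoder."""
    def __init__(self, in_channels=9, feature_dim=50):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1), nn.ReLU(),
        )
        # 84x84 input -> 35x35 feature map with 32 channels.
        self.fc = nn.Linear(32 * 35 * 35, feature_dim)
        self.ln = nn.LayerNorm(feature_dim)
        for m in self.modules():  # orthogonal initialization
            if isinstance(m, (nn.Conv2d, nn.Linear)):
                nn.init.orthogonal_(m.weight)

    def forward(self, obs, detach=False):
        h = self.convs(obs).flatten(1)
        if detach:  # actor path: stop gradients into shared convs
            h = h.detach()
        return torch.tanh(self.ln(self.fc(h)))
\end{verbatim}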
\subsection{Detailed Training and Evaluation Setup for Visual Control Tasks}
\label{app:Detailed_Training}
The agent first collects $1000$ seed observations using a random policy. Further training observations are collected by sampling actions from the current policy. The agent performs one training update every time it receives a new observation.
The action repeat parameters are the same as in DrQ~\cite{kostrikov2020image} and are listed in Table~\ref{table:action_repeat}; the number of training observations is only a fraction of the environment steps (e.g. a $1000$-step episode at action repeat $8$ results in only $125$ training observations).
To avoid damaging CtrlFormer\xspace with the low-quality policy gradients produced by the still inaccurate policy and Q-networks at the beginning of behavior learning on a new task,
the learning rate of CtrlFormer\xspace is scaled by a factor $\alpha_{\ell}$ relative to that of the policy and Q-networks. In this way, the agent is guided to quickly learn the policy of the current task based on the previously learned representation and to fine-tune the model according to the relationship between the current task, the previous tasks, and the input. The training curves for different $\alpha_{\ell}$ are shown in Figure~\ref{fig:Training curves}, indicating that $\alpha_{\ell}$ in the range of $0.04$--$0.08$ is appropriate.
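One possible way to realize this scaled learning rate is sketched below (the modules are placeholders, and the split into separate AdamW optimizers is an assumption for illustration):
\begin{verbatim}
import torch
import torch.nn as nn

# Placeholders standing in for the encoder, actor and critic.
encoder = nn.Linear(50, 50)
actor   = nn.Linear(50, 12)
critic  = nn.Linear(56, 1)

base_lr = 1e-4   # learning rate of the actor and critic
alpha_l = 0.06   # scaling factor in the 0.04-0.08 range above

encoder_opt = torch.optim.AdamW(encoder.parameters(),
                                lr=alpha_l * base_lr)
actor_opt   = torch.optim.AdamW(actor.parameters(),  lr=base_lr)
critic_opt  = torch.optim.AdamW(critic.parameters(), lr=base_lr)
\end{verbatim}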
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{imgs/lr.pdf}
\vspace{-20pt}
\caption{Training curves with different $\alpha_{\ell}$}
\label{fig:Training curves}
\end{figure}
We evaluate our agent every $10000$ environment steps by computing the average episode return over $10$ evaluation episodes, following DrQ~\cite{kostrikov2020image}. During evaluation, we take the mean of the policy output instead of sampling stochastically. In Table~\ref{table:hyper_params} we provide a comprehensive overview of all the other hyper-parameters.
\begin{table}[ht!]
\centering
\begin{tabular}{l|c}
\hline
Task name & Action repeat \\
\hline
Cartpole Swingup & $8$ \\
Cartpole Swingup sparse & $8$ \\
Cartpole balance & $8$ \\
Cartpole balance sparse & $8$ \\
Reacher Easy & $4$ \\
Reacher Hard & $4$ \\
Finger Turn easy & $2$ \\
Finger Turn Hard & $2$ \\
Walker Walk & $2$ \\
Walker stand & $2$ \\
\hline
\end{tabular}
\caption{\label{table:action_repeat} The action repeat hyper-parameter used for each task.}
\end{table}
\subsection{Image Preprocessing and Augmentation}
\label{augmentaion}
We construct the observational input as a stack of $3$ consecutive frames, as in DQN~\cite{mnih2013dqn} and DrQ~\cite{kostrikov2020image}, where each frame is an RGB rendering of size $84 \times 84$ from the camera. We then divide each pixel value by $255$ to scale it to the $[0, 1]$ range before feeding it to the encoder.
The images from the DeepMind control suite are $84\times 84$.
The image augmentation used in \ref{subsec: RL} and \ref{sec:Contrastive} is implemented by random crop, which is also used in DrQ~\cite{kostrikov2020image}. We pad each side by $4$ pixels (repeating the boundary pixels) and then select a random $84\times 84$ crop, yielding the original image shifted by $\pm 4$ pixels. This procedure is repeated every time an image is sampled from the replay buffer.
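A minimal sketch of this preprocessing and of the $\pm 4$ pixel random shift is given below (a per-sample loop is used for clarity; the actual implementation may use a vectorized crop):
\begin{verbatim}
import torch
import torch.nn.functional as F

def preprocess(frames):
    """Stack of 3 RGB frames (9x84x84, uint8) scaled to [0, 1]."""
    return frames.float() / 255.0

def random_shift(imgs, pad=4):
    """Replicate-pad each side by 4, then take a random 84x84 crop."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
\end{verbatim}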
\begin{table}[hb!]
\centering
\begin{tabular}{l|c}
\hline
Parameter & Setting \\
\hline
Replay buffer capacity & $100000$ \\
Seed steps & $1000$ \\
Batch size & $512$ \\
Discount $\gamma$ & $0.99$ \\
Optimizer & AdamW \\
Learning rate & $10^{-4}$ \\
Critic target update frequency & $2$ \\
Critic Q-function soft-update rate $\tau$ & $0.01$ \\
Actor update frequency & $2$ \\
Actor log stddev bounds & $[-10, 2]$ \\
Init temperature & $0.1$ \\
\hline
\end{tabular}\\
\caption{\label{table:hyper_params} Hyper-parameters of downstream reinforcement learning.}
\end{table}
\subsection{Implementation Details of Contrastive Learning}
\label{sec:impl_detail_contrastive}
We implemented the contrastive objective following BYOL~\cite{grill2020bootstrap}, which is a typical approach to self-supervised image representation learning.
BYOL relies on two neural networks, referred to as \textit{online} and \textit{target} networks, that interact and learn from each other and do not rely on negative pairs.
From an augmented view of an image, we train the online network $f_{\theta}$ to predict the target network $f_\xi$ representation of the same image under a different augmented view.
At the same time, we update the target network with a slow-moving average of the online network.
The output of the contrastive token in CtrlFormer\xspace is a 192-dimensional vector. We project it to a 96-dimensional vector by a multi-layer perceptron (MLP), and similarly for the target projection.
This MLP consists of a linear layer with output size 384 followed by batch normalization~\cite{ioffe2015batch}, rectified linear units (ReLU)~\cite{agarap2018deep}, and a final linear layer with output dimension 96. The predictor $q_{{\theta}}$ uses the same architecture as the projector.
For the target network, the exponential moving average parameter $\tau$ starts from $\tau_\text{base} = 0.996$ and is increased to one during training.
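A minimal sketch of the projector, predictor, BYOL loss, and EMA update is given below (the predictor input and hidden sizes are assumptions where the text leaves them implicit):
\begin{verbatim}
import copy
import torch.nn as nn
import torch.nn.functional as F

def byol_mlp(in_dim=192, hidden_dim=384, out_dim=96):
    # Linear -> BatchNorm -> ReLU -> Linear.
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )

projector = byol_mlp()                       # online projector
predictor = byol_mlp(96, 384, 96)            # predictor q_theta
target_projector = copy.deepcopy(projector)  # EMA target

def byol_loss(online_token, target_token):
    # Online prediction vs. target projection of the other view.
    p = F.normalize(predictor(projector(online_token)), dim=-1)
    z = F.normalize(target_projector(target_token).detach(), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

def ema_update(online, target, tau=0.996):
    # tau is annealed from 0.996 towards 1 during training.
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.data.mul_(tau).add_((1 - tau) * po.data)
\end{verbatim}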
\subsection{Additional Results}
\textbf{Ablation on Q-regularization}.
To illustrate the effect of the Q-regularization technique, we compare the transferability of CtrlFormer\xspace, CtrlFormer\xspace without Q-regularization, and DrQ without Q-regularization.
Table \ref{tab:rq_ab} shows that the Q-regularization technique helps to improve the performance of CtrlFormer\xspace, and that CtrlFormer\xspace still outperforms DrQ in transferability even without Q-regularization.
\begin{table}[h]
\caption{Ablation on Q-regularization by transferring from Reacher(easy) to Reacher(hard).
}
\scriptsize
\label{tab:rq_ab}
\setlength{\tabcolsep}{0.5pt}
\begin{tabular}{l|c|c|c|c|c}
\hline & Scratch Task1& Retest & Scratch Task2 & Transfer& Benefit \\
\hline Our w/ rQ & $973_{\pm 53}$ & $906_{\pm 31}$ & $548_{\pm 124}$ & $657_{\pm 68}$ &{${+16.5 \%}$} \\
\hline Our w/o rQ & $774_{\pm 32}$ & $738_{\pm 54}$ & $474_{\pm 56}$ & $551_{\pm 57}$ & {${+13.8 \%}$ }\\
\hline DrQ w/o rQ & $756_{\pm 47}$ & $329_{\pm 58}$ & $481_{\pm 198}$ & $410_{\pm 47}$ & {${-14.76 \%}$} \\
\hline
\end{tabular}
\vspace{-10pt}
\end{table}
{\textbf{Contrastive co-training on other baselines.}}
We provide an additional baseline that applies contrastive co-training to the CNN-based model and compare it with CtrlFormer. The results show that DrQ with contrastive co-training (DrQ-C) achieves better transfer results from Walker-Stand (T1) to Walker-Walk (T2) than the original DrQ algorithm, but it still suffers from catastrophic forgetting.
\begin{table}[h]
\footnotesize
\setlength{\tabcolsep}{1pt}
\begin{tabular}{l|c|c}
\hline & Retest(T1 500k) & Transfer T2 (100k) \\
\hline DrQ&$698_{\pm 57}$ & $321_{\pm 54}$ \\
\hline DrQ-C& $707_{\pm 68}$ & $472_{\pm 68}$ \\
\hline Our & $\textbf{950}_{\pm 42}$ & $\textbf{857}_{\pm 47}$ \\
\hline
\end{tabular}
\vspace{-10pt}
\caption{Ablation on Contrastive co-training}
\end{table}
\subsection{PyTorch-style Pseudo-code}
\label{app:Pseudo-code}
We provide detailed PyTorch-style pseudo-code for the method used to visualize the policy attention shown in Figure~\ref{fig:visualization}, and for the updates of the encoder, the actor, and the critic. The pseudo-code of Grad-CAM is shown in Listing 1. The pseudo-code of the actor and the entropy temperature is shown in Listing 2. The pseudo-code of the encoder and the critic is shown in Listing 3.
\subsection{DMControl Benchmark}
\label{app:dmc_description}
The DeepMind Control Suite (DMControl)~\cite{tassa2018deepmind} is a set of stable, well-tested continuous control tasks that are easy to use and modify. DMControl contains many well-designed tasks, written in \href{https://www.python.org/}{Python} with physical models defined using \href{http://mujoco.org/book/modeling.html}{MJCF}. It is currently one of the most recognized standard test environments for visual control.
The domain in DMControl refers to a physical model, while a task refers to an instance of that model with a particular MDP structure. For example, the difference between the \mono{swingup} and \mono{balance} tasks of the \mono{cartpole} domain is whether the pole is initialized pointing downwards or upwards, respectively.
We list the detailed descriptions of the domains used in this paper below, names are followed by three integers specifying the dimensions of the state, control and observation spaces i.e.\ $\Bigl(\dim(\mathcal{S}),\dim(\mathcal{A}),\dim(\mathcal{O})\Bigr)$.
\myfigure{
\textbf{\walkercolor{Walker} (18, 6, 24):}
The \textbf{\walkercolor{Walker}} domain contains a series of control tasks for two-legged robots.
In the \mono{stand} task, the reward is a combination of terms encouraging an upright torso and some minimal torso height. The \mono{walk} and \mono{run} tasks include a component encouraging forward velocity.
}{imgs/walker.png}
\vspace{-10pt}
\myfigure{
\textbf{\fingercolor{Finger}(6, 2, 12):} \textbf{ \fingercolor{Finger}} domain aims to rotate a body on an unactuated hinge. In the \mono{turn\_easy} and \mono{turn\_hard} tasks, the tip of the free body must overlap with a target (the target is smaller for the \mono{turn\_hard} task). In the \mono{spin} task, the body must be continually rotated.
}{imgs/finger.png}
\vspace{-10pt}
\myfigure{
\textbf{\cartpolecolor{Cartpole}(4, 1, 5):}
\textbf{\cartpolecolor{Cartpole}} domain aims to control the pole attached by an un-actuated joint to a cart.
In both the \mono{swingup} task and the \mono{swingup-sparse} task,
the pole starts pointing down and the goal is to swing it up and keep it upright by applying forces to the cart, while in \mono{balance} and \mono{balance\_sparse} the pole starts near the upright position. }{./imgs/cart-pole.png}
\vspace{-10pt}
\myfigure{
\textbf{\reachercolor{Reacher} (4, 2, 7):} The \textbf{\reachercolor{Reacher}} domain aims to control a two-link planar arm to reach a randomised target location. The reward is one when the end effector penetrates the target sphere. In the \mono{easy} task the target sphere is bigger than in the \mono{hard} task (shown on the left).
}{imgs/reacher.png}
\section{Visualization}
\label{app:Visual}
We use the Grad-CAM~\cite{selvaraju2016grad} method to visualize the encoder's attention on the input image, which is a typical technique for visualizing the regions of the input that are ``important''. Grad-CAM uses the gradient information flowing through the network to produce a coarse localization map of the important regions in the image.
From Figure~\ref{fig:att_vis_ours} and Figure~\ref{fig:att_vis_resnet} we can observe that our attention map is more focused on objects and task-relevant body parts, while the attention of the pre-trained ResNet-50 (the same network used in the experiments) is disturbed by irrelevant information and not focused. In this way, our CtrlFormer\xspace learns better policies from representations that are highly relevant to the task.
The attention learned from similar tasks has similarities, but each has its own emphasis.
\clearpage
\onecolumn
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/cartpole_swingup.pdf}
\caption{Cartpole-swingup}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/cartpole_swingup_sparse.pdf}
\caption{cartpole-swingup-sparse}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/finger_easy.pdf}
\caption{Finger-turn-easy}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/finger_hard.pdf}
\caption{Finger-turn-hard}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/reacher_easy.pdf}
\caption{Reacher-easy}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/reacher_hard.pdf}
\caption{Reacher-hard}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/walker_stand.pdf}
\caption{Walker-stand}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{ctrl_vis/walker_walk.pdf}
\caption{Walker-walk}
\end{subfigure}%
\hfill
\caption{Visualization of the CtrlFormer\xspace's attention on the input image}
\label{fig:att_vis_ours}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/cartpole_swingup.pdf}
\caption{Cartpole-swingup}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/cartpole_swingup_sparse.pdf}
\caption{cartpole-swingup-sparse}
\end{subfigure}
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/finger_turn_easy.pdf}
\caption{Finger-turn-easy}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/finger_turn_hard.pdf}
\caption{Finger-turn-hard}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/reacher_easy.pdf}
\caption{Reacher-easy}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/reacher_hard.pdf}
\caption{Reacher-hard}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/walker_stand.pdf}
\caption{Walker-stand}
\end{subfigure}%
\hfill
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{red_vis/walker_walk.pdf}
\caption{Walker-walk}
\end{subfigure}%
\hfill
\caption{Visualization of the ResNet's attention on the input image}
\label{fig:att_vis_resnet}
\end{figure*}
\clearpage
\input{src/alg}
\section{Introduction}
\label{intro}
Studying the spread of infectious disease has been of great interest for a long time and
has motivated a lot of mathematical models.
Susceptible-Infected-Recovered (SIR) type models are among those that are
commonly studied.
A SIR model is a compartmental model, in which
a population of individuals is divided into three distinct
groups (compartments).
The first compartment consists of individuals that are
susceptible to the disease, but are not yet infected.
The second compartment is represented by infected individuals.
Finally, the remaining third compartment is a group of individuals, who have been infected and recovered
from the disease. A recovered individual is immune.
In this paper we study a continuous time discrete stochastic SIR model
and its continuous limit (to be explained).
In a discrete model SIR a population is modeled by vertices of a graph, and
infection is transmitted from infected vertices to susceptible ones via edges of the graph.
Such a model is usually studied under additional assumptions on
the graph, infection rates and recovery times
(e.g., see~\cite{Andersson}, \cite{BrittonLN},
\cite{Fabricius}, \cite{Montagnon}, \cite{moreno}, \cite{Schutz}, \cite{zhang}, and
references therein).
For example, if the underlying graph is complete,
infection rates are constant and the recovery times are exponentially
distributed, then the model is an immediate discrete version of
the classic SIR model of A.G. McKendrick and W.O. Kermack (\cite{SIR-paper}).
We consider the discrete SIR model
for the spread of infectious disease in
an infinite closed homogeneous population, in which
all individuals have the same number of social contacts.
Such a population is modeled by a homogeneous (regular) tree. The latter is an infinite connected constant
vertex degree graph without cycles. The constant vertex degree means that
each vertex has the same number of adjacent vertices (neighbors).
We assume that an infected vertex emits germs towards a
susceptible adjacent vertex (a susceptible neighbor)
according to a Poisson process with a time-varying rate.
A susceptible vertex can be also
infected by itself, according to another Poisson process.
An infected vertex recovers in a certain period of time (the recovery time)
given by a random variable.
The recovery times are assumed to be
independent and identically distributed.
All Poisson processes under consideration are independent of each other, and
they are also independent of the recovery times.
Our main result for the discrete model
concerns the distribution of the time it takes for a susceptible
vertex of a homogeneous tree to get infected (the time to infection).
Under general assumptions for infection rates and recovery times
we obtain a simple analytical expression for this distribution
in terms of a solution of a non-linear integral equation.
In some special cases, the integral equation
is equivalent to the Bernoulli type differential equation, which can be solved analytically.
The structure of a homogeneous tree plays the essential role
in our analysis of the discrete model. The key observation is that a susceptible vertex
splits the homogeneous tree into a finite number of
identical subgraphs, and infection processes on these subgraphs are independent and identically distributed.
It should be noted that despite the fact that discrete SIR models on graphs
have received considerable attention over the years, the SIR model on trees has been overlooked
(at least, to the best of our knowledge).
In the second part of the paper we study the discrete model in the limit, as the
vertex degree of the tree tends to infinity, and the infection rates decrease
proportionally. We show that in this limit our results for the discrete model
imply an equation for the susceptible compartment.
We call it the master equation, as both the
infectious and the recovered compartments can be explicitly expressed
in terms of its solution. In other words, the master equation
implies a continuous SIR model for the joint
time evolution of the three population compartments.
In a sense, this generalizes the result of~\cite{Harko} for the Kermack-McKendrick
model to the case of SIR models with time-dependent
infection rates and general recovery times.
Briefly, in~\cite{Harko} they derived a non-linear differential equation
equivalent to the system of equations of the classic SIR model
(see Remarks~\ref{remark-SIR} and~\ref{remark-Harko} below for more detail),
which allowed them to obtain an analytical solution for the model equations
in an exact parametric form.
It should be noted that in~\cite{Harko} the
master equation was obtained analytically from equations of the SIR model.
In contrast, the master equation in the present paper
is obtained from the discrete stochastic model. This is
in the spirit of~\cite{Angstmann}, where
a master equation for the infectious compartment
was obtained from an underlying stochastic process.
In fact, our master equation implies a family of continuous SIR models.
A specific type of the SIR model depends on the structure and interpretation of the parameters of the
master equation. This provides a flexible technical framework for modeling memory effects and arbitrarily distributed recovery times.
The rest of the paper is organized as follows.
In Section~\ref{disc-model} we consider the discrete SIR model.
The model is formally defined in Section~\ref{disc-def}.
The main result concerning the distribution of the time to infection is stated and proved
in Section~\ref{disc-main}.
In Section~\ref{master-sec} we use this result
to derive the master
equation for the susceptible compartment in the limit,
as the tree vertex degree tends to infinity.
Continuous SIR models implied by the master equation
are considered in Section~\ref{special-continuous}.
We briefly comment on the relationship of our model to fractional
SIR models in Section~\ref{fractional}.
Finally, in Section~\ref{examples-SIR}
we consider some special cases of the discrete model,
which might be of interest in their own right.
\section{The discrete SIR model}
\label{disc-model}
\subsection{The model definition}
\label{disc-def}
In this section we formally define the discrete SIR model on a general graph, state and prove the
main result in the case when the underlying graph is given by a homogeneous tree.
We start by defining the discrete continuous-time SIR model on a general graph (since the particular
structure of the graph is not important for the definition).
Let $\Tau$ be a connected (and possibly infinite) graph.
With some abuse of notation, we will associate a graph with the set of its vertices.
Given vertices $x, y\in \Tau$ we write $x\sim y$, if these vertices are connected
by an edge, in which case we call them neighbors.
Given a vertex, its degree is defined as the number of its neighbors.
A vertex can be either susceptible, or infected, or recovered (and immune).
Initially, i.e. at time $t=0$, each vertex is either susceptible, or
infected (in which case we assume that it gets infected at time $t=0$).
When a vertex becomes infected, it starts emitting infectious germs
towards a given neighbor and continues to do so until the moment of its recovery from the disease.
Germs are emitted according to a Poisson process with a time dependent rate.
Namely, if a vertex $y\in \Tau$ becomes infected at time $t_y$ and recovers in time $H_y$,
then, at time $t\in [t_y, t_y+H_y]$ it
infects a susceptible neighbor with the rate $\varepsilon_{t-t_y}$,
where $(\varepsilon_t,\, t\geq 0)$ is a non-negative deterministic function.
In the general case the
recovery times $\{H_y,\, y\in \Tau\}$
are given by i.i.d. random variables with an arbitrary common distribution.
A susceptible vertex can be also infected by itself, according to a Poisson
process with the time-dependent rate $\lambda_{t}$, where
$(\lambda_{t},\, t\geq 0)$ is a non-negative deterministic function, so that
at time $t$ a susceptible vertex $x$ is infected
with the total infection rate
\begin{equation}
\label{r-ran0}
\lambda_{t} + \sum_{y: y\sim x}\varepsilon_{t-t_y}\cdot
{\bf 1}_{\{t_y\leq t\leq t_y+H_y\}}.
\end{equation}
We assume that all Poisson processes are independent of each other, and they are also independent of
the recovery times.
Proposition~\ref{prop1} below follows from the model
definition.
\begin{proposition}
\label{prop1}
Let $\varphi_t$ be the conditional probability that a susceptible vertex
is not infected until time $t$ by its given infectious neighbor infected
at time $t=0$,
and let $f_t$ be the probability that a susceptible vertex
is not infected until time $t$ by itself. Then
\begin{equation}
\label{phi-ran-f}
\varphi_t=
\begin{cases}
1,& t<0,\\
\mathsf{E}\left(e^{-\int_0^{t\wedge H}\varepsilon_udu}\right), & t\geq 0,
\end{cases}
\quad\text{and}\quad
f_t=
\begin{cases}
1,& t<0,\\
e^{-\int _0^t\lambda_u du},& t\geq 0,
\end{cases}
\end{equation}
where $H$ is a random variable, which has the same distribution as the recovery times.
\end{proposition}
\begin{remark}
{\rm
If $\varepsilon_t\equiv const$, the recovery time is exponentially distributed,
the underlying graph is complete (i.e. any two vertices are neighbors)
and $\lambda_{t}\equiv 0$,
then the corresponding discrete SIR model is a discrete version of the classic
continuous SIR model.
}
\end{remark}
\begin{remark}
{\rm
Note that the case, when the recovery time is deterministic and equal to a given constant $H$,
can be modeled by assuming that
$\varepsilon_t=0$ for $t\geq H$.
Setting formally $H=\infty$ gives the model, in which
an infected individual never recovers and stays infectious forever, which, of course, is not entirely realistic.
However, by considering a function $\varepsilon$, which decays to zero sufficiently fast,
one can model a situation, when an infected individual becomes less
contagious to others over time.
}
\end{remark}
\begin{remark}
{\rm
Our main result (Theorem~\ref{T1} below)
concerns the discrete SIR model on
a homogeneous tree
with the vertex degree $N$, i.e. where each vertex has $N$ neighbors.
However, we also consider the model on a rooted homogeneous tree with the vertex degree $N\geq 2$
(in the proof of Theorem 1 below).
A rooted tree is a tree, where one of the vertices, called the root,
has $N-1$ neighbors, while any other vertex has $N$ neighbors.
}
\end{remark}
\begin{remark}
{\rm We assume throughout
that all random variables are realized on a certain probability space $(\Omega, {\cal F}, \P)$, and the expectation with respect to the probability $\P$ is denoted by $\mathsf{E}$.
We also assume that all integrals and derivatives under consideration exist.
}
\end{remark}
\subsection{Distribution of time to infection}
\label{disc-main}
In this section we state and prove the main result (Theorem~\ref{T1} below)
for the discrete SIR model on a homogeneous tree.
In this theorem we obtain the distribution of the time it takes for a susceptible
vertex to get infected in terms of a solution of an integral equation.
\begin{theorem}
\label{T1}
Let $\Tau$ be a homogeneous tree with the vertex degree $n+1$, where $n\geq 1$.
Assume that at time $t=0$ a vertex $x\in \Tau$ is infected with probability $p$
and is susceptible with probability $1-p$ independently of other vertices.
Let $\tau$ be the time it takes for a susceptible vertex to get infected.
Then
\begin{equation}
\label{eqt1}
\P\left(\tau>t\right)=(1-p)f_{t}[s_{t}]^{n+1}\quad\text{for}\quad t\geq 0,
\end{equation}
where the function $s_{t}$
satisfies the integral equation
\begin{equation}
\label{int_eq}
s_{t}=\varphi_{t}-(1-p)\int _0^{t} f_u s^n_u\varphi'_{t-u}du,
\end{equation}
where functions $\varphi$ and $f$ are defined in~\eqref{phi-ran-f}
and $\varphi'$ is the derivative of $\varphi$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{T1}]
Consider a susceptible vertex $x\in\Tau$ and define the probability
\begin{equation}
\label{s-def0}
s_{t}=\P(x\text{ is not infected by a given neighbor before time } t),
\end{equation}
which does not depend on a neighbor due to homogeneity of both the tree
and the initial condition.
Recall that infectious neighbors infect the vertex $x$ independently of each other,
and also independently on self-infection.
Therefore, we have that
\begin{equation}
\label{eqt11}
\P(\tau>t)=(1-p)f_{t}s_{t}^{n+1}\quad\text{for}\quad t\geq 0,
\end{equation}
where the factor $f_{t}$ is defined in~\eqref{phi-ran-f}.
Next, consider the probability $s_{t}$ in more detail.
First, observe that
removing the vertex $x$ and all edges connecting $x$ to its neighbors generates
$n+1$ subgraphs given by rooted trees with roots $y_1,...,y_{n+1}$,
that are neighbors of $x$.
Further, given a neighbor $y\sim x$ consider an auxiliary SIR model
on the rooted tree $\Tau_y$ with the root $y$.
Assume that the auxiliary SIR model is specified by the same parameters as the original SIR model on
the tree $\Tau$. In addition, assume that at time $t=0$ any vertex in the auxiliary model is either infected with probability $p$ or susceptible with probability $1-p$, independently of other vertices.
Then it follows from the law of total probability and the definition of the function $\varphi$ that
\begin{equation}
\label{s-def}
s_{t}=p\varphi_{t}+(1-p)\int _0^{\infty}\varphi_{t-u}d\nu(u)
\quad\text{for}\quad t\geq 0,
\end{equation}
where $\nu$ is the distribution of the time to infection of the root vertex $y$ in the auxiliary SIR model
on the graph $\Tau_y$,
and $\varphi$ is the function defined in~\eqref{phi-ran-f}.
Using similarity of the rooted trees and independence, we get, analogously to~\eqref{eqt11}, the following equation
\begin{equation*}
\nu((t, \infty))=\P(\text{the root } y \text{ is not infected before time } t)=(1-p)f_{t}s^n_t\quad\text{for}\quad t\geq 0.
\end{equation*}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.32]{Tree.pdf}
\caption{\small A finite fragment of a homogeneous tree, where $n=4$ (i.e. each vertex has $5$ neighbors).
}
\label{tree}
\end{figure}
Integrating by parts in~\eqref{s-def}
and using that $d\nu(t)=-(1-p)\left(f_{t}s^n_t\right)'dt$,
we obtain that
\begin{equation}
\label{e13}
\begin{split}
s_{t}&
=p\varphi_{t}-(1-p)\left[\varphi_{t-u}\left(f_us^n_u\right)\right]_0^{\infty }+(1-p)
\int _0^{\infty }f_us^n_u
\varphi'_{t-u}du.
\end{split}
\end{equation}
Recalling that
$\varphi_{t-u}=1$ for $u>t$, $s_u\to 0$, as $u\to \infty$,
and $f_0=s_0=1$, we get that
$\left[\varphi_{t-u}\left(f_us^n_u\right)\right]_0^{\infty }=-\varphi_{t}$, and, hence,
the integral equation~\eqref{int_eq}, as claimed.
\end{proof}
\begin{remark}
\label{Rem1}
{\rm
Consider the SIR model on a homogeneous tree, in which
recovery times are given by i.i.d. random variables
(including the degenerate case of deterministic recovery time).
Recall the function $(\varphi_{t},\, t\geq 0)$ defined in~\eqref{phi-ran-f} and
define
\begin{equation}
\label{tilde-eps}
\tilde{\varepsilon}_t=-\frac{d}{dt}\left(\log(\varphi_{t})\right)\quad\text{for}\quad t\geq 0.
\end{equation}
It is easy to see that, as long as one is interested in the distribution of the time to infection,
they can consider an equivalent model with just susceptible and infected compartments,
in which the recovery mechanism is somehow embedded into the new infection rate
given by the function $\tilde{\varepsilon}_t$.
For example, consider a SIR model with the infection rate $(\varepsilon_t,\, t\geq 0)$ such that
$\varepsilon_t>0$ for all $t\geq 0$, and the deterministic recovery time
given by a constant $H>0$, then
\begin{equation}
\label{tilde-eps-1}
\tilde{\varepsilon}_t=\begin{cases}
\varepsilon_t,&\text{ for } t\leq H,\\
0,&\text{ for } t>H.
\end{cases}
\end{equation}
Trivially, if $H=\infty$, then $\tilde{\varepsilon}_t=\varepsilon_t$ for all $t\geq 0$.
However, the function $\tilde{\varepsilon}$ can differ significantly from the
original function $\varepsilon$ in the case of the random recovery time
(e.g., see Corollary~\ref{C3} in Section~\ref{examples-SIR}).
}
\end{remark}
\section{Continuous limit of the discrete model}
\label{cont}
\subsection{The master equation}
\label{master-sec}
In this section we analyze integral equation~\eqref{int_eq} in the limit, as
the tree vertex degree goes to infinity. Specifically, we show that in this limit
the integral equation implies an equation for the susceptible population proportion.
\begin{theorem}
\label{T2}
Consider the discrete SIR model on the homogeneous tree $\Tau$
with the vertex degree $n+1$. Suppose that an infected vertex infects
a susceptible neighbor with the rate $\frac{1}{n+1}\varepsilon_t$ after time $t$ of being infected, where
$(\varepsilon_t,\, t\geq 0)$ is a non-negative function. In addition, suppose that the
other model parameters (i.e. the recovery times and the rate of self-infection) do not depend on $n$.
Let $S_{t,n}$ be the expected susceptible population in this SIR model. Then,
any limit point $S_{t}$ of the sequence of functions $(S_{t,n},\, t\geq 0)$,\, $n\geq 1$,
must satisfy the following equation
\begin{equation}
\label{master0}
\log\left(\frac{S_{t}}{S_0}\right)=-\int_0^t\lambda_udu-\int_0^t(1-S_u)\gamma_{t-u}du,
\end{equation}
where
\begin{equation}
\label{gamma}
\gamma_t:=\frac{d}{d t}\mathsf{E}\left(\int\limits_0^{t\wedge H}\varepsilon_udu\right) \quad\text{for}\quad t\geq 0.
\end{equation}
\end{theorem}
\begin{proof}
Note first, that the corresponding $\varphi$-function (see equation~\eqref{phi-ran-f}) is given by
$$ \varphi_{t,n}=
\begin{cases}
1,& t<0,\\
\mathsf{E}\left(e^{-\frac{1}{n+1}\int_0^{t\wedge H}\varepsilon_udu}\right), & t\geq 0.
\end{cases}
$$
By Theorem~\ref{T1},
\begin{equation}
\label{S}
S_{t,n}=S_{0}f_{t}[s_{t,n}]^{n+1}\quad\text{for}\quad t\geq 0,
\end{equation}
where $S_0=S_{0,n}=1-p$,
and the function
$s_{t,n}$ satisfies the equation
\begin{equation}
\label{int_eq2}
s_{t,n}=\varphi_{t,n}-(1-p)\int _0^{t}f_u s_{u,n}^n\varphi'_{t-u,n}du.
\end{equation}
Observe that
$$s_{t,n}=\left(\frac{S_{t,n}}{S_{0}f_{t}}\right)^{\frac{1}{n+1}}\sim 1+
\frac{1}{n+1}\left(\log\left(\frac{S_{t,n}}{S_{0}}\right)-\log(f_{t})\right)$$
for sufficiently large $n$. Therefore,
\begin{equation}
\label{int_eq3}
\begin{split}
\log\left(\frac{S_{t,n}}{S_{0}}\right)&-\log(f_{t})\\
&=(n+1)(\varphi_{t,n}-1)-
S_0\int\limits_{0}^t
\left(\frac{S_{u,n}}{S_{0}f_u}\right)^{\frac{n}{n+1}} f_u (n+1)\varphi'_{t-u,n}du.
\end{split}
\end{equation}
It is easy to see that
$$\varphi_{t,n}-1\sim -\frac{1}{n+1}\mathsf{E}\left(\int_0^{t\wedge H}\varepsilon_udu\right)$$
for sufficiently large $n$, which implies that
\begin{equation}
\label{-alpha}
\lim_{n\to \infty}(n+1)(\varphi_{t,n}-1)
=-\mathsf{E}\left(\int\limits_0^{t\wedge H}\varepsilon_udu\right)=-\int_0^t\gamma_udu\quad\text{for}\quad t\geq 0,
\end{equation}
where the function $\gamma$ is defined in~\eqref{gamma}.
In addition, note that $\left(S_{0}f_u\right)^{\frac{1}{n+1}}\sim 1$
and $\left(S_{u,n}\right)^{\frac{n}{n+1}}\sim S_{u,n}$, as $n\to \infty$.
Therefore, we can rewrite~\eqref{int_eq3} as follows
\begin{equation}
\label{int_eq31}
\log\left(\frac{S_{t,n}}{S_{0}}\right)-\log(f_{t})\sim -\int_0^t\gamma_udu+
\int\limits_{0}^tS_{u,n}\gamma_{t-u}du=
-\int\limits_{0}^t(1-S_{u,n})\gamma_{t-u}du.
\end{equation}
Recalling that
$\log(f_t)=-\int_0^t\lambda_udu$,
we obtain that
\begin{equation}
\label{int_eq32}
\log\left(\frac{S_{t,n}}{S_{0}}\right)\sim -\int_0^t\lambda_udu
-\int\limits_{0}^t (1-S_{u,n})\gamma_{t-u}du,
\end{equation}
which implies equation~\eqref{master0} for any limit point $S_t$, as claimed.
\end{proof}
Differentiating~\eqref{master0} gives the master equation in the differential form
\begin{equation}
\label{master-initial}
\frac{S_t'}{S_t}=-\lambda_{t}-(1-S_t)\gamma_{0}-\int_0^t(1-S_u)\gamma'_{t-u}du,
\end{equation}
which, integrating by parts, can be rewritten as follows
\begin{equation}
\label{master}
\frac{S_t'}{S_t}=-\lambda_{t}-(1-S_0)\gamma_{t}+\int_0^tS'_u\gamma_{t-u}du.
\end{equation}
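For illustration, the master equation~\eqref{master0} can be solved numerically by discretizing the convolution on a uniform time grid; a minimal Python sketch is given below (the particular rate functions are placeholders chosen only to demonstrate the scheme):
\begin{verbatim}
import numpy as np

def solve_master(S0, lam, gamma, T=20.0, dt=0.01):
    """Iterate log(S_t/S_0) = -int_0^t lam(u) du
                              -int_0^t (1 - S_u) gamma(t - u) du."""
    t = np.arange(0.0, T, dt)
    S = np.full_like(t, S0)
    lam_cum = np.cumsum(lam(t)) * dt
    for k in range(1, len(t)):
        conv = np.sum((1.0 - S[:k]) * gamma(t[k] - t[:k])) * dt
        S[k] = S0 * np.exp(-lam_cum[k] - conv)
    return t, S

# Placeholder rates: no self-infection, constant infection rate and
# exponential recovery, i.e. gamma_t = eps * exp(-mu t).
eps, mu = 0.5, 0.2
t, S = solve_master(S0=0.99, lam=lambda u: 0.0 * u,
                    gamma=lambda u: eps * np.exp(-mu * u))
\end{verbatim}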
\begin{remark}
{\rm
The existence and the uniqueness of solution of equation~\eqref{master0} follows from general results
for integral equations with delay (\cite{Burton}).
It can be shown that, under mild assumptions,
the sequence of functions $(S_{t,n},\, t\geq 0)$,\, $n\geq 1$, is equicontinuous. Therefore, there exists
a subsequence that uniformly converges to the solution of~\eqref{master0}. We skip the
technical details.
}
\end{remark}
\begin{remark}
\label{RemStability}
{\rm
It follows from equation~\eqref{master0} that the stationary
value $S_\infty$ satisfies the following equation
\begin{equation}
\label{S_inf}
\log\left(\frac{S_{\infty}}{S_0}\right)= -\int_0^\infty\lambda_u du- (1-S_\infty) \int_0^\infty\gamma_{u}du.
\end{equation}
}
\end{remark}
\begin{remark}
{\rm
Note that equation~\eqref{master0} (or its differential equivalent~\eqref{master})
is a standalone equation for
the susceptible population $S_t$; namely, it involves
neither the infected nor the recovered
populations.
}
\end{remark}
\begin{remark}
\label{rem-on-gamma(t)}
{\rm
It should be noted that all the information concerning the infection rates and recovery times
of the original discrete SIR model is included in~\eqref{master} via the function
$\gamma$.
For example, if the recovery time in the discrete SIR model is given by a deterministic constant $H$, then
$$
\gamma_{t}=\begin{cases}
\varepsilon_t,& \text{for } t<H,\\
0, &\text{for } t\geq H,
\end{cases}
$$
where $\varepsilon_t$ is the rate of infection in the discrete model.
In particular, if $H=\infty$, then $\gamma_{t}=\varepsilon_t$.
In general, these two functions can be significantly different
(see Example~\ref{example-SIR} below).
}
\end{remark}
\subsection{Continuous SIR models implied by the master equation}
\label{special-continuous}
In this section we show that the master equation~\eqref{master0}
for the susceptible population implies equations for
other population compartments (which explains
the term master equation).
A SIR model implied by the master equation~\eqref{master0}
depends on the structure and interpretation
of the function $\gamma$. To clarify what is meant by ``interpretation''
consider the case when $\gamma_t=0$ for all $t>H$ for some $H>0$.
This can be interpreted as an infected individual recovering after time $H$ since the moment
of being infected (as in Remark~\ref{rem-on-gamma(t)}).
On the other hand, this can be interpreted as if an infected individual
never recovers, but is no longer contagious to others after time
$H$ since the moment of being infected.
In this case one can operate with just two compartments, namely susceptible and infected ones.
Below we consider examples, where this argument is reinforced.
Note that for simplicity of exposition and without loss of generality
we assume throughout this section that
\begin{equation}
\label{lambda=0}
\lambda_{t}\equiv 0,
\end{equation}
i.e. there is no self-infection.
\subsubsection{The model with no recovery}
\label{no-recovery}
The basic continuous SIR model implied by the master
equation is a two-compartmental model, in which
the population is divided into two compartments, namely,
the compartment of susceptible individuals, described
by the variable $S_t$, and the compartment of infected ones, described by
the variable $I_t$, so that
\begin{equation}
\label{balance}
1=S_t+I_t\quad\text{for}\quad t\geq 0.
\end{equation}
Then $S'_t=-I'_t$, which allows us to rewrite
equation~\eqref{master} as follows
\begin{equation}
\label{equations-no-recovery}
S'_t=-S_t\left(I_0\gamma_{t}
+\int_0^tI'_u\gamma_{t-u}du\right)
\end{equation}
Equation~\eqref{equations-no-recovery} describes the model, in which
an individual infected at time $u\geq 0$ infects
any susceptible individual with the rate $\gamma_{t-u}$ at time $t>u$.
This model can be interpreted as the model without recovery, as without further assumptions
on the function $\gamma$, we do not have any information about the recovery mechanism.
\begin{example}[The model with latent period]
\label{latent}
{\rm
Suppose that an infected individual is latent
for a non-random period of time of length $L>0$.
In addition, suppose that the rate of infection is constant.
Then
$\gamma_t=\varepsilon{\bf 1}_{\{t\geq L\}}$, where $\varepsilon$ is the rate of infection,
and equation~\eqref{equations-no-recovery} becomes
as follows
$$
S_t'
=-\varepsilon S_tI_{t-L}\quad\text{and}\quad I_t'=-S'_{t}.
$$
}
\end{example}
\subsubsection{Model with a constant rate of infection and random recovery}
\label{constant-eps}
Suppose that the function $\gamma_t$ is
of the following form
\begin{equation}
\label{gamma1}
\gamma_{t}=\varepsilon\beta_{t}\quad\text{for}\quad t\geq 0,
\end{equation}
where $\varepsilon>0$ is a given constant and
$\beta_{t}$ is a non-increasing positive function, such that $\beta_{0}=1$ and
$\beta_t\to 0$, as $t\to \infty$.
Then, the master equation implies the continuous SIR model with the three standard
compartments, in which
an infected individual
recovers in a time given by a random variable $\xi$ with the tail distribution
$\P(\xi>t)=\beta_{t}$, and during its infectious
period it infects any susceptible one with the constant rate $\varepsilon$.
Indeed, under these assumptions, equation~\eqref{master}
is as follows (recall that~\eqref{lambda=0})
\begin{equation}
\label{master-beta}
S_t'=-\varepsilon S_t\left(I_0\beta_{t}-\int\limits_0^tS'_u\beta_{t-u}du\right).
\end{equation}
Define
\begin{align}
\label{I_t}
I_t&=I_0\beta_{t}-\int\limits_0^tS'_u\beta_{t-u}du\quad\text{for}\quad t>0\quad\text{and}\quad
I_0=1-S_0
\end{align}
and
\begin{align}
\label{R}
R_t&=I_0(1-\beta_{t})-\int\limits_0^tS'_u(1-\beta_{t-u})du
\quad\text{for}\quad t>0\quad\text{and}\quad
R_0=0.
\end{align}
It is easy to see that
\begin{equation}
\label{balance2}
1=S_t+I_t+R_t\quad\text{for}\quad t\geq 0.
\end{equation}
Moreover, one can show that both $I_t\geq 0$ and $R_t\geq 0$ (we skip the details).
Therefore, variables $I_t$ and $R_t$ can be interpreted
as the population proportions of infected and recovered
individuals respectively in the continuous SIR model with the constant rate of infection $\varepsilon$
and the random recovery time with the tail distribution given by the function $\beta$.
Indeed, in this model the infected compartment at time $t$ consists of
1) those who were infected at time $0$ and did not recover before time $t$
(which gives the first term
$I_0\beta_{t}$ in~\eqref{I_t}), and
2) those, who were infected at time $u\in (0, t]$ and did not recover before
time $t$ (integrating over time gives the integral term in~\eqref{I_t}).
A similar argument gives~\eqref{R}, which also follows from~\eqref{I_t} and~\eqref{balance2}
combined with the initial condition $S_0+I_0=1$.
Differentiating both~\eqref{I_t} and~\eqref{R}, and combining them with~\eqref{master-beta},
we get the following system of equations
\begin{align}
\label{S-eq}
S'_t&=-\varepsilon S_t I_t\\
\label{I-eq}
I'_t&=\varepsilon S_tI_t+I_0\beta'_{t}+\varepsilon\int\limits_0^tS_uI_u\beta'_{t-u}du\\
\label{R-eq}
R'_t&=-I_0\beta'_{t}-\varepsilon\int\limits_0^tS_u I_u\beta'_{t-u}du.
\end{align}
\begin{remark}
\label{Anna}
{\rm
The system of equations~\eqref{S-eq}-\eqref{R-eq} is similar to the
system of equations of the delay model proposed
in~\cite{DellAnna} for modeling the spread of Covid-19 in Italy.
}
\end{remark}
\begin{example}
[The classic SIR model]
\label{example-SIR}
{\rm
Consider the discrete SIR model on a homogeneous tree, in which
an infected individual infects its susceptible neighbor with the constant rate $\varepsilon$
and recovers with the constant rate $\mu$.
In addition, assume that there is no self-infection, i.e.
$\lambda_{t}=0$ for $t\geq 0$.
The corresponding cumulative rate function $\alpha_{t}=\int_0^t\gamma_udu$ (cf.~\eqref{gamma}) is given by
$$\alpha_{t}=\varepsilon\mathsf{E}\left(\int_0^{t\wedge H}{\bf 1}_{\{u\geq 0\}}du\right)
=\varepsilon\mathsf{E}(\min(t, H)),$$
where $H$ is a random variable exponentially distributed with parameter $\mu$.
A direct computation gives that
$$\alpha_{t}=\frac{\varepsilon}{\mu}\left(1-e^{-\mu t}\right),\quad
\gamma_{t}=\varepsilon e^{-\mu t}\quad\text{and}\quad \beta_{t}=
\frac{\gamma_{t}}{\varepsilon}=e^{-\mu t}
\quad\text{for}\quad t\geq 0.$$
Since $\lambda_{t}\equiv 0$ and $\beta'_{t}=-\mu e^{-\mu t}$ for $t\geq 0$,
the system of equations~\eqref{S-eq}-\eqref{R-eq} becomes
as follows
\begin{align*}
S_t'&=-\varepsilon S_t I_t,\\
I_t'&=\varepsilon S_t I_t-\mu I_t,\\
R_t'&=\mu I_t,
\end{align*}
which is the system of equations of the classic SIR model (with the infection rate $\varepsilon$ and the recovery rate $\mu$),
where $S_t$, $I_t$ and $R_t$ stand for population proportions of susceptible, infected and recovered individuals respectively.
}
\end{example}
\begin{remark}
\label{remark-SIR}
{\rm
Note that in Example~\ref{example-SIR}
it is probably more transparent to start with
equation~\eqref{master0}, which in this case is
\begin{equation}
\label{int_exp1}
\log\left(\frac{S_t}{S_0}\right)=-\varepsilon\int_0^t(1-S_u)e^{-\mu(t-u)}du.
\end{equation}
Then, differentiating~\eqref{int_exp1} gives that
\begin{equation}
\label{basic1}
S_t'=-\varepsilon S_t\left(1-S_t-\mu\int_0^t(1-S_u)e^{-\mu(t-u)}du\right).
\end{equation}
Combining~\eqref{basic1} with~\eqref{int_exp1} we obtain
the following differential form of the master equation in the classic case
\begin{equation}
\label{basic2}
S_t'=-\varepsilon S_t\left(1-S_t+\frac{\mu}{\varepsilon}\log\left(\frac{S_t}{S_0}\right)\right).
\end{equation}
Then, setting $I_t=1-S_t+\frac{\mu }{\varepsilon}\log\left(\frac{S_t}{S_0}\right)$, one can proceed as
in Example~\ref{example-SIR}.
}
\end{remark}
\begin{remark}
\label{remark-Harko}
{\rm
It should be noted that our equation~\eqref{basic2} is equivalent to equation (26) in~\cite{Harko}.
In~\cite{Harko} they analytically obtained the equation from
equations of the classic SIR model
and used it to derive an exact analytical solution of the SIR model
in a parametric form.
}
\end{remark}
\begin{remark}
{\rm
Note that equating the time derivative to zero in equation~\eqref{basic2}, i.e.
$S'_t=0$, gives the known equation
\begin{equation}
\label{stat}
1-S+\frac{\mu }{\varepsilon} \log\left(\frac{S}{S_0}\right)=0
\end{equation}
for the stationary population proportion of susceptible individuals $S:=S_{\infty}$ in the SIR model
(e.g. see equation (7) in~\cite{Barlow} and references therein).
In particular, this equation shows that the limit value $S$ depends only on the ratio
$\displaystyle{\mu/\varepsilon=1/R_0}$, where $R_0$ is the basic reproduction number.
Note also that equation~\eqref{stat} is just a special case of more general equation~\eqref{S_inf}
for the stationary susceptible state.
}
\end{remark}
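For illustration, equation~\eqref{stat} can be solved numerically for the final susceptible fraction; a minimal Python sketch (using SciPy's root finder, with arbitrary values of $R_0$ and $S_0$) is given below:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def stationary_S(R0, S0=0.99):
    """Solve 1 - S + (1/R0) * log(S / S0) = 0 on (0, S0)."""
    f = lambda S: 1.0 - S + np.log(S / S0) / R0
    return brentq(f, 1e-12, S0)

print(stationary_S(R0=2.5))  # roughly 0.11: the fraction never infected
\end{verbatim}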
\begin{example}[Constant rate of infection and deterministic recovery]
{\rm
Consider a model, in which
the infection rate is given by a constant $\varepsilon>0$, and the recovery time is given
by a deterministic constant $H>0$.
This model can be obtained by setting
$\gamma_t=\varepsilon$ for $t\in [0,H]$ and $\gamma_t=0$ for $t>H$.
This gives the following model
equations
\begin{align*}
S_t'&=-\varepsilon S_tI_t,\\
I_t'&=\varepsilon S_tI_t-\varepsilon S_{t-H}I_{t-H},\\
R_t'&=\varepsilon S_{t-H}I_{t-H},
\end{align*}
where $S_t=I_t=0$ for $t<0$.
}
\end{example}
\subsubsection{The general case: time-varying infection rate and random recovery}
\label{general-SIR}
In this section we generalize SIR
models considered in Sections~\ref{no-recovery} and~\ref{constant-eps}.
Suppose that the function $\gamma$ is of the following form
\begin{equation}
\label{gamma-general}
\gamma_{t}=w_t\beta_{t}\quad\text{for}\quad t\geq 0,
\end{equation}
where $(w_t,\, t\geq 0)$ is a non-negative function
and the function $(\beta_t,\, t\geq 0)$ is the tail distribution
of some positive random variable (i.e. similarly to what we assumed
in Section~\ref{constant-eps}).
Arguing as in Section~\ref{constant-eps}, we obtain
the continuous SIR model
described by the following equations
\begin{align}
\label{master03}
\log\left(\frac{S_{t}}{S_0}\right)&=-\int_0^t(1-S_u)\gamma_{t-u}du\\
\label{I_t-30}
I_t&=I_0\beta_{t}-\int\limits_0^tS'_u\beta_{t-u}du\\
\label{R1}
R_t&=I_0(1-\beta_{t})-\int\limits_0^tS'_u(1-\beta_{t-u})du,
\end{align}
where, $S_t$, $I_t$ and $R_t$ are population proportions
of susceptible, infected and recovered individuals respectively, so that
$1=S_t+I_t+R_t$ for $t\geq 0$. As before, we assumed that $R_0=0$.
In this model an infected individual recovers
in a random time given by a random variable $\xi$ with the tail distribution
$\P(\xi>t)=\beta_{t}$ and, if it is infected at time $u$, then it
infects any susceptible one with the rate $w_t$ at the time $u+t$.
Recall, that we also assume~\eqref{lambda=0}.
In the differential form the model equations are as follows
\begin{align}
\label{master3}
S_t'&=S_t\left(-I_0\gamma_{t}+\int_0^tS'_u\gamma_{t-u}du\right)\\
\label{I3}
I_t'&=-S_t'+I_0\beta'_{t}-\int_0^tS'_{u}\beta'_{t-u}du\\
\label{R3}
R_t'&=-I_0\beta'_{t}+\int_0^tS'_{u}\beta'_{t-u}du.
\end{align}
\begin{remark}
{\rm
By choosing appropriate functions $(w_t,\, t\geq 0)$ and $(\beta_t,\, t\geq 0)$ one can model various
infection rates and recovery distributions.
For example, using the power law functions allows
to model memory effects observed in real data (e.g. see~\cite{Angstmann}
and references therein).
}
\end{remark}
In the rest of this section we use the idea from~\cite{Angstmann} in order
to rewrite equations~\eqref{master3}-\eqref{R3}
in terms of a certain kernel. The idea is based on the fact
that these equations contain convolutions, which makes it possible to apply the Laplace transform.
Let $(\L\{g\}_t,\, t\in {\mathbb{R}}_{+})$ be the Laplace transform of a function $(g_t,\, t\in {\mathbb{R}}_{+})$.
It follows from~\eqref{I3} that
$$\L\{I\}_t=I_0\L\{\beta\}_t-\L\{S'\}_t\L\{\beta\}_t,$$
and, hence,
$$\L\{S'\}_t=I_0-\frac{\L\{I\}_t}{\L\{\beta\}_t}.$$
Thus, for any appropriate function $(g_t,\,t\in{\mathbb{R}}_{+})$ we have that
\begin{equation}
\label{fact0}
\begin{split}
\int_0^tS'_{u}g_{t-u}du&=\L^{-1}\left[\L\{S'\}_t\L\{g\}_t\right]=
\L^{-1}\left[\left(I_0-\frac{\L\{I\}_t}{\L\{\beta\}_t}\right)\L\{g\}_t\right]\\
&=I_0g_t -\L^{-1}\left(\L\{I\}_t\frac{\L\{g\}_t}{\L\{\beta\}_t}\right).
\end{split}
\end{equation}
Since $\L^{-1}(\L\{a\}\L\{b\})$ is equal to the convolution $a\ast b$, we can rewrite~\eqref{fact0} as follows
\begin{equation}
\label{fact}
\begin{split}
\int_0^tS'_{u}g_{t-u}du&=I_0g_t-\int_0^tI_{u}\mathcal{K}(g)_{t-u}du,
\end{split}
\end{equation}
where $\mathcal{K}$ is a kernel defined by
\begin{equation}
\label{K}
\mathcal{K}(g)_t:=\L^{-1}\left(\frac{\L\{g\}_t}{\L\{\beta\}_t}\right).
\end{equation}
Finally, using~\eqref{fact} with $g_t=-\gamma_t$ in~\eqref{master3},
and with $g_t=-\beta'_t$ in~\eqref{I3} and~\eqref{R3}
gives the system of the model equations in the kernel form
\begin{align}
\label{master31}
S_t'&=-S_t\int_0^tI_{u}\mathcal{K}(\gamma)_{t-u}du\\
\label{I31}
I_t'&=S_t\int_0^tI_{u}\mathcal{K}(\gamma)_{t-u}du-\int_0^tI_{u}\mathcal{K}(\beta')_{t-u}du\\
\label{R31}
R_t'&=\int_0^tI_{u}\mathcal{K}(\beta')_{t-u}du.
\end{align}
\begin{remark}
{\rm
Note that equation~\eqref{K}
is an analogue of equation (16) in~\cite{Angstmann}.
}
\end{remark}
\section{Remark on fractional SIR models}
\label{fractional}
One of the recognized drawbacks of the classic SIR model is that
both the infection rate and the recovery rate do not depend on the history of the system, i.e.
the model is memoryless.
A popular approach to modeling memory effects consists in using fractional SIR models
(e.g., see ~\cite{Chen} and references therein).
Some of these models are obtained
by formal replacement of ordinary derivatives by fractional derivatives of a certain type.
This gives a system of fractional differential equations
that is equivalent to a system of integro-differential equations
with a power-law kernel.
For example, replacing ordinary derivatives in the classic SIR model by Caputo fractional derivatives
gives a system of fractional differential equations, which are
equivalent to the following system of integro-differential equations
\begin{align}
\label{formal}
S'_t&=-\varepsilon \int _0^tI_{u}S_{u}K_{t-u}du\\
\label{I-frac}
I'_t&=\int _0^t\left(\varepsilon I_{u}S_{u}-\mu I_{u}\right)K_{t-u}du\\
\label{R-frac}
R'_t&=\mu \int _0^tI_{u}K_{t-u}du
\end{align}
with the kernel $K_y=\frac{y^{\alpha -2}}{\Gamma (\alpha -1)}$, where
$\alpha\in (0,1]$ and
$\Gamma$ is the Gamma-function.
However, it is not quite clear what physical/biological process
is described by equations~\eqref{formal}-\eqref{R-frac}.
In contrast, equations~\eqref{master3}-\eqref{R3} and their equivalents
in the kernel form, i.e. equations~\eqref{master31}-\eqref{R31},
can be naturally interpreted in terms of the interaction between compartments.
For example, in the absence of any external factors and self-infection,
the flux into the infected compartment is equal to the flux out of the
susceptible compartment. The susceptible compartment decreases at the rate
proportional to its current value $S_t$.
The value $\mathcal{K}(\gamma)_{t-u}$ (in~\eqref{master31}-\eqref{R31})
describes the impact made on the susceptible compartment at time $t$
by those individuals who were infected earlier and are still infectious.
The coefficient of proportionality, i.e. the integral term
$\int_0^tI_{u}\mathcal{K}(\gamma)_{t-u}du$, measures
the total impact of the infected compartment on the susceptible one over the time period
$[0,t]$. This generalizes the interaction between susceptible and infected
compartments in the classic SIR model, where only the current value $I_t$ is
taken into account.
It should be also noted that the continuous SIR model in the present paper
is obtained by passing to the limit in the discrete stochastic SIR model.
This is in line with SIR models in the kernel form that are derived from stochastic processes based on
natural biological assumptions (e.g., see~\cite{Angstmann},~\cite{DellAnna} and references therein).
\section{Appendix. Special cases of the discrete SIR model}
\label{examples-SIR}
In this section, we consider some special cases of the discrete SIR model
on the homogeneous tree with the vertex degree $n+1$.
In these cases the integral equation~\eqref{int_eq}
can be rewritten in an equivalent differential form, which is of interest in its own right.
For simplicity of notations we assume that all vertices are initially susceptible
(i.e. $p=0$ in Theorem~\ref{T1}).
\begin{corollary}
\label{C1}
Suppose that there is no recovery, i.e. $H=\infty$,
$\varepsilon_t=\varepsilon{\bf 1}_{\{t\geq 0\}}$ and $\lambda_{t}=\lambda{\bf 1}_{\{t\geq 0\}}$, where $\varepsilon>0$ and
$\lambda>0$ are given constants.
Then
\begin{equation}
\begin{split}
\label{exact}
s_{t}&=e^{-\frac{2\varepsilon}{\lambda}\left(e^{-\lambda t}-1+\lambda t\right)},
\quad\text{if}\quad n=1;\\
s_{t}&=\left(\frac{\varepsilon(n-1)+\lambda}{\varepsilon(n-1)
e^{-\lambda t}+\lambda e^{\varepsilon(n-1)t}}\right)^{\frac{1}{n-1}},\quad\text{if}\quad n\geq 2,
\end{split}
\end{equation}
so that
\begin{align}
\label{n1}
\P(\tau>t)&=e^{-\lambda t}e^{-\frac{2\varepsilon}{\lambda}\left(e^{-\lambda t}-1+\lambda t\right)},
\quad\text{if}\quad n=1;\\
\P(\tau>t)&
=e^{-\lambda t}\left(\frac{\varepsilon(n-1)+\lambda}{\varepsilon(n-1)e^{-\lambda t}+\lambda e^{\varepsilon(n-1)t}}\right)^{\frac{n+1}{n-1}},\quad\text{if}\quad n\geq 2.\label{nn}
\end{align}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{C1}]
Since $\varepsilon_t=\varepsilon{\bf 1}_{\{t\geq 0\}}$ and
$\lambda_{t}=\lambda{\bf 1}_{\{t\geq 0\}}$, we have that
\begin{equation}
\label{C1P1}
\varphi_{t}=
\begin{cases}
1, & t<0,\\
e^{-\varepsilon t},& t\geq 0,
\end{cases}
\quad\text{and}\quad
f_{t}=\begin{cases}
1, & t<0,\\
e^{-\lambda t},& t\geq 0,
\end{cases}
\end{equation}
and
$\varphi'_t=-\varepsilon e^{-\varepsilon t}$ for $t\geq 0$. The
integral equation~\eqref{int_eq} in this case is as follows
\begin{equation}
\label{int_eq0}
s_{t}=e^{-\varepsilon t}+\varepsilon \int _0^te^{-\lambda u}s^n_ue^{-\varepsilon(t-u)}du.
\end{equation}
Differentiating~\eqref{int_eq0} and simplifying gives the following differential equation
\begin{equation}
\label{ber1}
\begin{split}
s'_t
=-\varepsilon s_{t}+\varepsilon e^{-\lambda t}s^n_t.
\end{split}
\end{equation}
It is easy to verify that the function defined in~\eqref{exact} is a solution of equation~\eqref{ber1}.
\end{proof}
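As a quick sanity check, the following Python snippet numerically verifies that the closed-form expression in~\eqref{exact} for $n\geq 2$ satisfies the Bernoulli equation~\eqref{ber1} (the parameter values are arbitrary):
\begin{verbatim}
import numpy as np

eps, lam, n = 0.3, 0.1, 4   # arbitrary parameter values

def s_exact(t):
    # Closed-form solution for n >= 2 from the corollary above.
    return ((eps * (n - 1) + lam)
            / (eps * (n - 1) * np.exp(-lam * t)
               + lam * np.exp(eps * (n - 1) * t))) ** (1.0 / (n - 1))

# Residual of s' = -eps*s + eps*exp(-lam*t)*s^n, with the derivative
# approximated by a central finite difference.
t, h = np.linspace(0.1, 10.0, 200), 1e-5
ds = (s_exact(t + h) - s_exact(t - h)) / (2 * h)
res = ds + eps * s_exact(t) - eps * np.exp(-lam * t) * s_exact(t) ** n
print(np.max(np.abs(res)))  # of the order of the finite-difference error
\end{verbatim}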
\begin{remark}
{\rm
Equation~\eqref{ber1} is the well-known Bernoulli equation (e.g., see~\cite{Parker}).
Under assumptions of Corollary~\ref{C1} the model was originally considered in~\cite{Gairat}.
}
\end{remark}
\begin{remark}
\label{R-logistic}
{\rm
It follows from equation~\eqref{nn}
that \begin{align*}
\P(\tau>t)&=e^{-\lambda t}\left(\frac{\varepsilon(n-1)+\lambda}{\varepsilon(n-1)e^{-\lambda t}+\lambda e^{\varepsilon(n-1)t}}\right)^{\frac{n+1}{n-1}}
\sim e^{-\lambda t}\left(\frac{\varepsilon(n-1)+\lambda}{\varepsilon(n-1)e^{-\lambda t}+\lambda e^{\varepsilon(n-1)t}}\right)\\
&=
\frac{1+\frac{\lambda}{\varepsilon(n-1)}}{1+\frac{\lambda}{\varepsilon(n-1)}e^{(\varepsilon(n-1)+\lambda)t}}
\end{align*}
for sufficiently large $n$.
Further, if $\varepsilon=\varepsilon_n$, where
$\varepsilon_nn\to c>0$, as $n\to\infty$,
then
\begin{equation}
\label{Eq-logistic}
1-\P(\tau>t)=\P(\tau\leq t) \to
\frac{\frac{\lambda}{c} e^{(c+\lambda)t}-\frac{\lambda}{c}}{1+\frac{\lambda}{c} e^{(c+\lambda)t}}
=\left(1+\frac{\lambda}{c}\right)\frac{1}{1+\frac{c}{\lambda} e^{-(c+\lambda)t}}-\frac{\lambda}{c},
\quad\text{as}\quad n\to \infty.
\end{equation}
In other words, the probability $\P(\tau\leq t)$ converges
to a linear transformation of the
logistic curve $\displaystyle{\frac{1}{1+\frac{c}{\lambda}e^{-(c+\lambda)t}}}$.
}
\end{remark}
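The convergence described in Remark~\ref{R-logistic} is easy to illustrate numerically; the sketch below (illustrative parameter values only) evaluates~\eqref{nn} for a large $n$ with $\varepsilon=c/n$ and compares $1-\P(\tau>t)$ with the limiting expression in~\eqref{Eq-logistic}:
\begin{verbatim}
# Illustrative check of the logistic limit in Remark R-logistic.
import numpy as np

c, lam, n = 1.0, 0.5, 500
eps = c / n
t = np.linspace(0.0, 10.0, 101)

p_tail = (np.exp(-lam * t) *
          ((eps * (n - 1) + lam) /
           (eps * (n - 1) * np.exp(-lam * t)
            + lam * np.exp(eps * (n - 1) * t)))**((n + 1) / (n - 1)))

limit = ((lam / c) * np.exp((c + lam) * t) - lam / c) / \
        (1.0 + (lam / c) * np.exp((c + lam) * t))

print(np.max(np.abs((1.0 - p_tail) - limit)))   # small for large n
\end{verbatim}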
\begin{corollary}
\label{C2}
Suppose that the recovery time is given by a deterministic constant $H>0$,
$\lambda_{t}=\lambda{\bf 1}_{\{t\geq 0\}}$
and $\varepsilon_{t}=\varepsilon{\bf 1}_{\{0\leq t\leq H\}}$,
where $\varepsilon>0$ and $\lambda>0$ are given constants.
Then integral equation~\eqref{int_eq} is equivalent to the following differential equation
\begin{equation}
\label{ber2}
\begin{split}
s'_t&=-\varepsilon s_{t}+\varepsilon e^{-\lambda t}s^n_t\quad\text{for}\quad t\leq H,\\
s'_t&
=-\varepsilon s_{t}+\varepsilon e^{-\lambda t}s^n_t+\varepsilon e^{-\varepsilon H}-\varepsilon e^{-\lambda(t-H)-\varepsilon H}s^n_{t-H}
\quad\text{for}\quad t>H.
\end{split}
\end{equation}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{C2}]
In this case we have that
$f_{t}=e^{-\lambda t}$ and $\varphi_{t}=e^{-\varepsilon\min(t, H)}$ for $t\geq 0$,
and equation~\eqref{int_eq} becomes as follows
\begin{equation}
\label{sH}
s_{t}=\begin{cases}
\varphi_{t}+\varepsilon\int_0^te^{-\lambda u}s^n_u\varphi_{t-u}du&\text{for}\quad t<H,\\
\varphi_{t}+\varepsilon \int_{t-H}^te^{-\lambda u}s^n_u\varphi_{t-u}du&\text{for}\quad t\geq H.
\end{cases}
\end{equation}
A direct computation gives that
$$
s'_t=-\varepsilon\varphi_t+\varepsilon e^{-\lambda t}s^n_t-\varepsilon ^2\int _{0}^te^{-\lambda u}s^n_u\varphi_{t-u}du
=-\varepsilon s_{t}+\varepsilon e^{-\lambda t}s^n_t\quad\text{for}\quad t<H,
$$
which is the first equation in~\eqref{ber2}.
Further, if $t>H$, then $\varphi_{t}=\varphi_H=e^{-\varepsilon H}$,
and, hence,
\begin{align*}
s'_t&=\varepsilon e^{-\lambda t}s^n_t-\varepsilon e^{-\lambda(t-H)-\varepsilon H}s^n_{t-H}
-\varepsilon ^2\int _{t-H}^te^{-\lambda u}s^n_u\varphi_{t-u}du\\
&=\varepsilon e^{-\lambda t}s^n_t-\varepsilon e^{-\lambda(t-H)-\varepsilon H}s^n_{t-H}-\varepsilon(s_{t}-\varphi_H)\\
&=-\varepsilon s_{t}+\varepsilon e^{-\lambda t}s^n_t+\varepsilon e^{-\varepsilon H}-\varepsilon e^{-\lambda (t-H)-\varepsilon H}s^n_{t-H}
\end{align*}
which is the second equation in~\eqref{ber2}, as claimed.
\end{proof}
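Equation~\eqref{ber2} is a delay differential equation, and a closed-form solution is not written out here; it can, however, be integrated numerically once the history of $s$ is stored. The following schematic explicit Euler scheme (illustrative parameter values; a dedicated DDE solver would be preferable in practice) is one way to do this:
\begin{verbatim}
# Schematic fixed-step integration of the delay equation (ber2)
# with a deterministic recovery time H; parameters are illustrative.
import numpy as np

eps, lam, n, H = 0.8, 1.0, 3, 2.0
dt = 1e-3
t = np.arange(0.0, 10.0 + dt, dt)
s = np.empty_like(t)
s[0] = 1.0
lag = int(round(H / dt))

for k in range(len(t) - 1):
    tk, sk = t[k], s[k]
    ds = -eps * sk + eps * np.exp(-lam * tk) * sk**n
    if tk > H:
        s_lag = s[k - lag]            # stored value approximating s_{t-H}
        ds += eps * np.exp(-eps * H) \
              - eps * np.exp(-lam * (tk - H) - eps * H) * s_lag**n
    s[k + 1] = sk + dt * ds

print(s[::1000])                      # s_t on a coarse grid
\end{verbatim}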
Finally, we consider an example of the model with a random recovery time.
\begin{corollary}[Exponential recovery time]
\label{C3}
Suppose that the recovery time is exponentially distributed with the parameter $\mu$
and functions $(\varepsilon_{t},\, t\in{\mathbb{R}}_+)$ and
$(\lambda_{t},\, t\in{\mathbb{R}}_+)$
are as in Corollaries~\ref{C1} and~\ref{C2}.
Then integral equation~(\ref{int_eq}) is equivalent to the following differential equation
\begin{equation}
\label{free-term}
s'_t=-(\mu+\varepsilon)s_{t}+\varepsilon e^{-\lambda t}s^n_t+\mu.
\end{equation}
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{C3}]
Start with computing the corresponding function $\varphi$
\begin{equation}
\label{expon}
\begin{split}
\varphi_{t}=\mathsf{E}\left(e^{-\varepsilon \min(t, H)}\right)
&=
\mu\int_0^te^{-\varepsilon u}e^{-\mu u}du+\mu e^{-\varepsilon t}\int_t^{\infty } e^{-\mu u}du\\
&
=\frac{\mu}{\mu+\varepsilon}+\frac{\varepsilon}{\mu+\varepsilon}e^{-(\mu+\varepsilon)t}
\quad\text{for}\quad t\geq 0.
\end{split}
\end{equation}
Recall Remark~\ref{Rem1} and define
\begin{equation}
\label{d=ran}
\tilde{\varepsilon}_{t}:=-\left(\log(\varphi_t)\right)'=
\frac{\varepsilon (\mu +\varepsilon )}{\mu e^{(\mu +\varepsilon )t}+\varepsilon }{\bf 1}_{\{t\geq 0\}}\quad\text{for}\quad t\geq 0,
\end{equation}
so that
$\varphi'_t=-\tilde{\varepsilon}_{t}\varphi_{t}$ for $t\geq 0$.
A direct computation gives that
$$\tilde{\varepsilon}'_t-\tilde{\varepsilon}^2_t
=-(\mu +\varepsilon )\tilde{\varepsilon}_{t}\quad\text{for}\quad t>0.$$
Using the equation in the preceding display and
differentiating equation~\eqref{int_eq} gives that
\begin{align*}
s'_t
&=-\tilde{\varepsilon}_{t}\varphi_{t}+\varepsilon f_{t}s^n_t+\int _0^{t}f_{u}s^n_u\varphi_{t-u}
\left(\tilde\varepsilon_{t-u}'-\tilde\varepsilon^2_{t-u}\right)du
\\
&=-\tilde{\varepsilon}_{t}\varphi_t+\varepsilon f_{t}s^n_t-
(\mu +\varepsilon)\int _0^{t}f_us^n_u\varphi_{t-u}\tilde{\varepsilon}_{t-u}du
\\
&=-\tilde{\varepsilon}_{t}\varphi_{t}+\varepsilon f_{t}s^n_t-(\mu +\varepsilon)(s_{t}-\varphi_t)\\
&=-(\mu +\varepsilon)s_{t}+\varepsilon f_{t}s^n_t+(\mu +\varepsilon -\tilde{\varepsilon}_{t})\varphi_t.
\end{align*}
Noting that
$$
(\mu +\varepsilon -\tilde{\varepsilon}_{t})\varphi_t=\left(\mu +\varepsilon-\frac{\varepsilon(\mu+\varepsilon)}{
\mu e^{(\mu+\varepsilon)t}+\varepsilon}\right)\left(\frac{\mu}{\mu+\varepsilon}+\frac{\varepsilon}{\mu+\varepsilon}e^{-(\mu+\varepsilon)t}\right)=\mu,
$$
gives the equation $s'_t=-(\mu +\varepsilon)s_{t}+\varepsilon f_{t}s^n_t+\mu$, i.e. equation~\eqref{free-term},
which is the Bernoulli equation with the additional constant term $\mu$,
as claimed.
\end{proof}
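As a sanity check on the computations above, the expression~\eqref{expon} for $\varphi_t$ can be verified by Monte Carlo simulation of the exponential recovery time, and equation~\eqref{free-term} can be solved numerically; a minimal sketch with illustrative parameter values is given below:
\begin{verbatim}
# Monte Carlo check of (expon) and numerical solution of (free-term);
# parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
mu, eps, lam, n = 0.5, 0.8, 1.0, 3

# E[exp(-eps*min(t,H))] with H ~ Exp(mu), versus the closed form (expon)
t0 = 1.7
H = rng.exponential(1.0 / mu, size=10**6)
mc = np.mean(np.exp(-eps * np.minimum(t0, H)))
closed = mu / (mu + eps) + eps / (mu + eps) * np.exp(-(mu + eps) * t0)
print(mc, closed)   # agree to within Monte Carlo noise (~1e-3)

# s'_t = -(mu+eps)*s_t + eps*exp(-lam*t)*s_t^n + mu, with s_0 = 1
sol = solve_ivp(lambda t, s: -(mu + eps) * s
                             + eps * np.exp(-lam * t) * s**n + mu,
                (0.0, 10.0), [1.0], dense_output=True, rtol=1e-8)
print(sol.sol(np.array([1.0, 5.0, 10.0]))[0])
\end{verbatim}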
\section{Introduction}
Dark matter haloes have spin. This net angular momentum is acquired by tidal torquing in the early universe \citep{peebles69, dorosh70, white84}, and is later modified and shaped by the merging and accretion of substructures (e.g. \citealt{frenk85, catelan96, bullock01, gardner01, vitvitska02, peirani04, donghia07}). The acquisition and distribution of angular momenta in haloes is intimately linked to the evolution of the galaxies at their centres. Indeed, the relationship between halo spin and disc/baryonic spin is a fundamental topic in galaxy formation, and has been studied extensively in the literature (e.g. \citealt{vandenbosch02, sharma05, zavala08, bett10, deason11b, teklu15, zavala16}).
Initially, the angular momentum of the galaxy and the dark matter halo can be very well aligned. However, material is continually accreted onto the outer parts of the halo, which can alter its net angular momentum. Hence, while the galaxy and the halo often have aligned angular momentum vectors near their centers, they can be significantly misaligned at larger radii (e.g. \citealt{bett10, deason11b, gomez17}). Furthermore, major mergers can cause drastic ``spin flips'' in both the dark matter angular momenta and the central baryonic component \citep{bett12, padilla14}.
It is clear that the net spin of haloes is critically linked to their merger histories, and thus their \textit{stellar haloes} could provide an important segue between the angular momenta of the central baryonic disc and the dark matter halo. A large fraction of the halo stars in our Galaxy are the tidal remnants of destroyed dwarfs. Hence, to first order, the spin of the Milky Way stellar halo represents the net angular momentum of all of its past (stellar) accretion events.
The search for a rotation signal in the Milky Way halo dates back to the seminal work by \cite{frenk80}. The authors used line-of-sight velocities of the Galactic globular cluster system to infer a \textit{prograde} (i.e. aligned with the disc) rotation signal of $V_{\rm rot} \sim 60$ km s$^{-1}$. A prograde signal, with $V_{\rm rot} \sim 40-60$ km s$^{-1}$, in the (halo) globular cluster system has also been seen in several later studies (e.g. \citealt{zinn85, norris86, binney17}). However, the situation for the halo stars is far less clear. While most studies agree that the \textit{overall} rotation speed of the stellar halo is probably weak and close to zero \citep{gould98, sirko04, smith09, deason11a, fermani13b, das16}, there is some evidence for a kinematic correlation between metal-rich and metal-poor populations \citep{deason11a, kafle13, hattori13} and/or different rotation signals in the inner and outer halo \citep{carollo07,carollo10}.
An apparent kinematic dichotomy in the stellar halo (either inner vs. outer, or metal-rich vs. metal-poor) could be linked to different formation mechanisms. For example, state-of-the-art hydrodynamical simulations find that a significant fraction of the stellar haloes in the inner regions of Milky Way mass galaxies likely formed \textit{in situ}, and are more akin (at least kinematically) to a puffed up disc component \citep{zolotov09,font11,pillepich15}. Thus, one would expect a stronger prograde rotation signal in the inner and/or metal-rich regions of the Milky Way stellar halo \citep{mccarthy12}, and this theoretical scenario could account for the kinematic differences seen in the observations. However, as the detailed examination by \cite{fermani13b} shows, apparent kinematic signals depending on distance and/or metallicity can be wrongly inferred due to contamination in the halo star samples and/or systematic errors in the distance estimates to halo stars. Moreover, our observational inferences and comparisons with simulations should (but often do not) take into account the type of stars used to trace the halo. For example, commonly used tracers such as blue horizontal branch (BHB) and RR Lyrae (RRL) stars are biased towards old, metal-poor stellar populations, and this can affect the halo parameters we derive (see e.g. \citealt{xue11,janesh16}).
So far, our examination of the kinematics of distant halo stars has been almost entirely based on one velocity component. For large enough samples over a wide area of sky, kinematic signatures such as rotation can be teased out using line-of-sight velocities alone. However, at larger and larger radii this line-of-sight component gives less and less information on the azimuthal velocities of the halo stars. Moreover, the presence of cold structures in line-of-sight velocity space \citep{schlaufman09} can also bias results. It is clearly more desirable to infer a direct rotation estimate from the 3D kinematics of the stars. Studies of distant halo stars with proper motion measurements are scarce \citep{deason13, koposov13, sohn15, sohn16}, but this limitation will become a distant memory as we enter the era of \textit{Gaia}.
\textit{Gaia} is an unprecedented astrometric mission that will measure proper motions for hundreds of millions of stars in our Galaxy. In this contribution, we exploit the first data release of \textit{Gaia} (DR1, \citealt{gaia16}) to measure the net rotation of the Milky Way stellar halo. Although the first \textit{Gaia} data release does not contain any proper motions, we combine the exquisite astrometry of DR1 with the Sloan Digital Sky Survey (SDSS) images taken some $\sim 10-15$ years earlier to provide a stable and robust catalog of proper motions. Halo star tracers that have previously been identified in the literature are cross-matched with this new proper motion catalog to create a sample of halo stars with 2/3D kinematics.
The paper is arranged as follows. In Section \ref{sec:pms} we introduce the SDSS-\textit{Gaia} proper motion catalogue and investigate the statistical and systematic uncertainties in these measurements using spectroscopically confirmed QSOs. Our halo star samples are described in Section \ref{sec:samples}, and we provide further validation of our proper motion measurements by comparison with models and observations of the Sagittarius stream in Section \ref{sec:sgr}. In Section \ref{sec:like}, we introduce our rotating stellar halo model and apply a likelihood analysis to RRL, BHB and K giant halo star samples. We compare our results with state-of-the-art simulations in Section \ref{sec:sims}, and re-evaluate our expectations for the stellar halo spin. Finally, we summarise our main conclusions in Section \ref{sec:conc}.
\section{SDSS-\textit{Gaia} Proper Motions}
\label{sec:pms}
The aim of this work is to infer the average rotation signal of the Galactic halo using a newly calibrated SDSS-\textit{Gaia} catalog. This catalog (described below) is robust to systematic biases, which is vital in order to measure a rotation signal. Indeed, even with large proper motion errors (of order the size of the proper motions themselves!), with large enough samples distributed over the sky, the rotation signal can still be recovered provided that the errors are largely random rather than systematic.\\
The details of the creation of the recalibrated SDSS astrometric catalogue and measurement of SDSS-{\it Gaia} proper motions will be described in a separate paper (Koposov 2017 in preparation), but here we give a brief summary of the procedure.
In the original calibration of the astrometry of SDSS sources, exposed in detail by \cite{pier03}, there are two key ingredients. The first is the mapping between pixel coordinates on the CCD $(x,y)$ and the coordinates corrected for the differential chromatic refraction and distortion of the camera $(x',y')$ (see Eqn. 5-10 in \citealt{pier03}). The second is the mapping between $(x',y')$ and the great circle coordinates on the sky $(\mu, \nu)$ aligned with the SDSS stripe (Eqn. 9, 10, 13, 14 of \citealt{pier03}). The first transformation does not change strongly with time, requires only a few free parameters and is well determined in SDSS. However, the second transformation that describes the scanning of the telescope, how non-uniform it is and how it deviates from a great circle, as well as the behaviour of anomalous refraction is much harder to measure. In fact, the anomalous refraction and its variation at small timescales is the most dominant effect limiting the quality of SDSS astrometry (see Fig. 13 of \citealt{pier03}). The reason why those systematic effects could not have been properly addressed by the SDSS project itself is that the density of astrometric standards from UCAC
\citep{zacharias13} and Tycho catalogues used for the derivation of the $(x',y')$, $(\mu,\nu)$ transformation was too low. This is where the \textit{Gaia} DR1 comes to the rescue, with its astrometric catalogue being $\sim$ 4 magnitudes deeper than UCAC. The only issue with using the \textit{Gaia} DR1 catalogue as a reference for SDSS calibration is that the epoch of the \textit{Gaia} catalogue is 2015.0 as opposed to $\sim$ 2005 for SDSS and that the proper motions are not yet available for the majority of \textit{Gaia} DR1 stars.
To address this issue, we first compute the relative proper motions between \textit{Gaia} and the original SDSS positions in bins of colour-magnitude space and pixels on the sky (HEALPix level
16, angular resolution 3.6 deg; \citealt{gorski05}), which gives us
estimates of $\langle \mu_{\alpha}(\mathrm{hpx}, g-i, i) \rangle$ and
$\langle \mu_{\delta}(\mathrm{hpx}, g-i, i) \rangle$. Those average proper motions can be used to estimate the expected positions of \textit{Gaia} stars at the epoch of each SDSS scan:
\begin{equation}
\hat\alpha_{\rm SDSS} = \alpha_{\rm Gaia} - \langle \mu_{\alpha}(\mathrm{hpx},g-i,i) \rangle \delta T,
\qquad
\hat\delta_{\rm SDSS} = \delta_{\rm Gaia} - \langle \mu_{\delta}(\mathrm{hpx},g-i,i) \rangle \delta T,
\end{equation}
where $\delta T$ is the timespan between the \textit{Gaia} and SDSS observations of a given star, hpx is the HEALPix pixel number of the star, and $g-i$ and $i$ are the colour and magnitude of the star. With those positions $(\hat{\alpha}_{\rm SDSS}, \hat{\delta}_{\rm SDSS})$ computed for all the stars with both SDSS and \textit{Gaia} measurements, we redetermine the astrometric mapping in SDSS between the $(x',y')$ pixel coordinates and the on-sky great circle $(\mu,\nu)$ coordinates using a flexible spline model. There are many more stars available in \textit{Gaia} DR1 compared to the UCAC catalog, so in the model we are able to much better describe the anomalous refraction along the SDSS scans and, therefore, noticeably reduce the systematic uncertainties of the astrometric calibration. Furthermore, as a final step of the calibration, we also utilise the galaxies observed by \textit{Gaia} and SDSS to remove any residual large scale astrometric offsets in the calibrated SDSS astrometry. With the SDSS astrometry recalibrated, the SDSS-\textit{Gaia} proper motions are then simply obtained from the difference between the \textit{Gaia} positions and the recalibrated SDSS positions, divided by the time baseline.
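Schematically, once the SDSS astrometry has been recalibrated, a proper motion is just the positional offset between the two epochs divided by the time baseline. The following Python sketch illustrates this step only (it is not the production pipeline, and the input arrays are placeholders):
\begin{verbatim}
# Schematic proper-motion computation from two-epoch positions.
# ra/dec in degrees, epochs in years; all inputs are placeholder arrays.
import numpy as np

DEG2MAS = 3.6e6   # degrees -> milliarcseconds

def sdss_gaia_pm(ra_sdss, dec_sdss, t_sdss, ra_gaia, dec_gaia, t_gaia):
    dt = t_gaia - t_sdss                               # years
    cosd = np.cos(np.radians(0.5 * (dec_sdss + dec_gaia)))
    mu_ra = (ra_gaia - ra_sdss) * cosd * DEG2MAS / dt  # mas/yr (RA*cos(dec))
    mu_dec = (dec_gaia - dec_sdss) * DEG2MAS / dt      # mas/yr
    return mu_ra, mu_dec
\end{verbatim}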
\subsection{Proper motion errors}
\label{sec:pmerr}
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=7.08cm]{qsos_mjd.pdf}
\caption[]{ \small \textit{Left panel:} The distribution of measured proper motions of SDSS DR12 spectroscopically confirmed QSOs. We find very similar distributions for $\mu_{\alpha}$ and $\mu_\delta$ (also $\mu_\ell$ and $\mu_b$), so for simplicity we use both proper motion measurements in this plot (i.e. $\mu=[\mu_\alpha, \mu_\delta]$). \textit{Top right panel:} A histogram of the time baseline between first epoch SDSS and second epoch \textit{Gaia} measurements ($\Delta T$). \textit{Middle right panel:} Median proper motion of QSOs as a function of time baseline. The median $\mu_{\alpha}$ and $\mu_\delta$ values are shown with the dashed green and blue lines respectively. The median proper motions are consistent with zero at the 0.1 mas/yr level. The grey shaded region indicates median offsets from zero of $\pm 0.1$ mas/yr. \textit{Bottom right panel:} The dispersion in QSO proper motions as a function of $\Delta T$. Here, $\sigma$ is 1.48 times the median absolute deviation. The red dashed line shows the best-fit model for $\sigma(\mu)$, where $\sigma= A+B/\Delta T$. We use this relation to assign proper motion uncertainties to stars in the SDSS-\textit{Gaia} sample as a function of $\Delta T$.}
\label{fig:qso_mjd}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=4.25cm]{qsos_mag_color.pdf}
\caption{ \small Proper motion errors estimated from SDSS DR12 QSOs as a function of $r$-band magnitude (left panel) and $g-r$ colour (right panel). The top and bottom panels show the median and standard deviation of the QSO proper motions. The dashed green and blue lines in the top panels show the median $\mu_{\alpha}$ and $\mu_\delta$ proper motions, and the grey shaded region indicates median offsets from zero of $\pm 0.1$ mas/yr. The dotted line in the bottom panels indicates the median proper motion error of 2 mas/yr. There is a slight correlation of $\sigma(\mu)$ with $r$-band magnitude, but this is very minor over the magnitude range probed in this study ($ r \lesssim 19$). Furthermore, there is no variation with colour. }
\label{fig:qso_mag_col}
\end{figure}
We quantify the uncertainties in the SDSS-\textit{Gaia} proper motion measurements using spectroscopically confirmed QSOs from SDSS DR12 \citep{paris17}. This QSO sample is cross-matched with the SDSS-\textit{Gaia} catalog by searching for the nearest neighbour within $1\arcsec$. There are $N=71,799$ QSOs in the catalog with $r < 20$, and we show the distribution of QSO proper motions in the left-hand panel of Fig. \ref{fig:qso_mjd}. The QSO proper motions are nicely centred around $\mu =0$ mas/yr, and there are no significant high proper motion tails to the distribution. Note that we find no significant differences between the QSO proper motion components $\mu_\alpha$ and $\mu_\delta$, so we group both components together (i.e. $\mu=[\mu_\alpha, \mu_\delta]$) in the figure. However, we do show the $\mu_\alpha$ and $\mu_\delta$ components separately (green and blue dashed lines in the top-right panel) when we show the median proper motions to illustrate that these components \textit{individually} have no significant systematics.
The proper motion errors should roughly scale as $\sigma (\mu) \propto 1/\Delta T$, where $\Delta T$ is the timescale between the first epoch SDSS measurements and the second epoch \textit{Gaia} data\footnote{Note we compute $\Delta T$ using the modified Julian dates (MJD) of the SDSS observations and the last date of data collection for \textit{Gaia} DR1, i.e $\Delta T = $ MJD(\textit{Gaia})-MJD(SDSS) where MJD(\textit{Gaia})=MJD(16/9/2015)}. The SDSS photometry was taken over a significant period of time, and data from later releases have shorter time baselines. Thus, this variation in astrometry timespan is an important parameter when quantifying the proper motion uncertainties in our SDSS-\textit{Gaia} catalog. The top-right panel of Fig. \ref{fig:qso_mjd} shows a (normalised) histogram of the time baselines ($\Delta T$). There is a wide range of time baselines, but most of the SDSS data were taken $\sim 10-15$ years ago. In the bottom-right panel of Fig. \ref{fig:qso_mjd} we show the dispersion in QSO proper motion measurements (defined as $\sigma = 1.48$ times the median absolute deviation) as a function of $\Delta T$, and the middle-right panel shows the median values. The median values are consistent with zero at the level of $\sim 0.1$ mas/yr, and there is no systematic dependence on $\Delta T$. As expected, there is a strong correlation between the dispersion of QSO proper motions and $\Delta T$. The dashed red line shows a model fit to the relation of the form:
\begin{equation}
\label{eqn:sig}
\sigma = A + B/\Delta T,
\end{equation}
where $A=0.157$ mas/yr and $B=22.730$ mas. It is encouraging that this simple $A+B/\Delta T$ model agrees well with the QSO data, and we find no significant systematic differences between different SDSS data releases. Note that we show in Appendix \ref{sec:appendix} that there is no significant systematic variation in the QSO proper motions with position on the sky.
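The fit in Eqn.~\ref{eqn:sig} can be reproduced schematically as follows; here \texttt{qso\_mu} and \texttt{qso\_dt} are placeholder arrays of QSO proper motions (mas/yr) and time baselines (yr), and the binning is purely illustrative:
\begin{verbatim}
# Schematic fit of sigma(Delta T) = A + B / Delta T to QSO dispersions.
import numpy as np
from scipy.optimize import curve_fit

def robust_sigma(x):
    return 1.48 * np.median(np.abs(x - np.median(x)))

bins = np.linspace(5.0, 17.0, 25)          # illustrative Delta T bins (yr)
centres = 0.5 * (bins[1:] + bins[:-1])
sig = np.array([robust_sigma(qso_mu[(qso_dt >= lo) & (qso_dt < hi)])
                for lo, hi in zip(bins[:-1], bins[1:])])

ok = np.isfinite(sig)                      # guard against empty bins
model = lambda dt, A, B: A + B / dt
(A, B), cov = curve_fit(model, centres[ok], sig[ok], p0=[0.1, 20.0])
print(A, B)   # compare with A = 0.157 mas/yr, B = 22.730 mas
\end{verbatim}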
\begin{figure*}
\centering
\includegraphics[width=16cm, height=5.33cm]{sdss_rrl.png}
\caption[]{\small The sky distribution of ($N=8590$) RRL stars with SDSS-\textit{Gaia} proper motions in Equatorial coordinates. The points are colour coded according to heliocentric distance. The solid black lines indicate the approximate tracks of the Sagittarius leading and trailing arms. Galactic latitudes of $b=\pm 10$ deg are indicated with the dotted lines.}
\label{fig:sdss_rrl}
\end{figure*}
We also use the QSO sample to check whether or not the proper motion uncertainties vary significantly with magnitude or colour. In Fig. \ref{fig:qso_mag_col} we show the dispersion in QSO proper motions as a function of $r$-band magnitude (left panel) and $g-r$ colour (right panel). The dotted lines indicate the median standard deviation in proper motion of 2 mas/yr. There is a weak dependence on r-band magnitude, whereby the QSO proper motion distributions get slightly broader at fainter magnitudes. However, most of the halo stars in this work have $ r < 19$ and there is little variation at these brighter magnitudes. Finally, we find no detectable dependence of $\sigma (\mu)$ on $g-r$ colour. It is worth remarking that the stability of these proper motion measurements to changes in magnitude and colour is a testament to the astrometric stability of the improved SDSS-\textit{Gaia} catalog.
\\
\\
\noindent
In Section \ref{sec:like} we introduce a rotating velocity ellipsoid model for the Milky Way halo stars. In order to test the effects of any systematic uncertainties in the SDSS-\textit{Gaia} proper motions, we also apply this modeling procedure to the sample of SDSS DR12 QSOs. We adopt a ``distance'' of 20 kpc, which is the mean distance to our halo star samples, and find the best fit rotation ($\langle V_{\rm \phi} \rangle$) value. This procedure gives a best fit value of $\langle V_{\rm \phi} \rangle \sim 10$ km s$^{-1}$. Note that if there were no systematics present, then there would be no rotation signal. In Fig. \ref{fig:qso_mjd} we showed that the median proper motions of the QSOs are consistent with zero at the $\sim 0.1$ mas/yr level. Indeed, at a distance of 20 kpc, a proper motion of 0.1 mas/yr corresponds to a velocity of 10 km s$^{-1}$. Thus, although the astrometry systematics in our SDSS-\textit{Gaia} proper motion catalog are small, at the typical distances of our halo stars we cannot robustly measure rotation signals weaker than 10 km s$^{-1}$. We discuss this point further in Section \ref{sec:res}.
In the remainder of this work, we use Eqn. \ref{eqn:sig} to define the proper motion uncertainties of our halo star samples (see below). Thus, we assume that the proper motion errors are random, independent and normally distributed with variance depending only on the time-baseline between SDSS and \textit{Gaia} measurements. Note that since we are trying to measure the \textit{centroid} of the proper motion distribution (i.e. the net rotation), rather than deconvolve it into components or measure their width, we are not very sensitive to knowing the proper motion errors precisely.
\section{Stellar Halo Stars}
\label{sec:samples}
\subsection{RR Lyrae}
\label{sec:rrl}
RR Lyrae (RRL) stars are pulsating horizontal branch stars found abundantly in the stellar halo of our Galaxy. These variable stars have a well-defined Period-Luminosity-Metallicity relation, and their distances can typically be measured with accuracies of less than 10 percent. Furthermore, RRL have bright absolute magnitudes ($M_V \sim 0.6$), so they can be detected out to large distances in relatively shallow surveys. These low mass, old (their ages are typically in excess of 10 Gyr) stars are ideal tracers of the Galactic halo, and, indeed, RRL have been used extensively in the literature to study the stellar halo (e.g. \citealt{vivas06, watkins09, sesar10, simion14, fiorentino15}).
In this work, we use a sample of type AB RRL stars from the Catalina Sky Survey \citep{drake13a,drake13b,torrealba15} to infer the rotation signal of the Milky Way stellar halo. This survey has amassed a large number ($N \sim 22,700$) of RRL stars over 33,000 deg$^2$ of the sky, with distances in excess of 50 kpc. The RRL sample is matched to the SDSS-\textit{Gaia} proper motion catalog by searching for the nearest neighbour within $10\arcsec$. Our resulting sample contains $N=8590$ RRL stars with measured 3D positions, photometric metallicities (derived using Eqn. 7 from \citealt{torrealba15}) and proper motions. The distribution of this sample on the sky in Equatorial coordinates is shown in Fig. \ref{fig:sdss_rrl}. When evaluating the Galactic velocity components of the RRL stars, the random proper motion errors (derived in Section \ref{sec:pms}) dominate over the distance errors (typically $\sim 7\%$; see e.g. \citealt{simion14}), so we can safely ignore the RRL distance uncertainties in our analysis. Note that we have checked, using mock stellar haloes from the Auriga simulation suite (see Section \ref{sec:sims}), that statistical distance uncertainties of $\sim 10\%$ make little difference to our results.
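A sky cross-match of this kind can be performed with standard tools; the sketch below (with placeholder variable names) matches the RRL positions to the SDSS-\textit{Gaia} catalogue within $10\arcsec$ using \texttt{astropy}:
\begin{verbatim}
# Schematic sky cross-match of the Catalina RRL table with the
# SDSS-Gaia proper-motion catalogue; input arrays are placeholders.
import astropy.units as u
from astropy.coordinates import SkyCoord

rrl = SkyCoord(ra=rrl_ra * u.deg, dec=rrl_dec * u.deg)
pmcat = SkyCoord(ra=pm_ra * u.deg, dec=pm_dec * u.deg)

idx, sep2d, _ = rrl.match_to_catalog_sky(pmcat)
good = sep2d < 10 * u.arcsec        # keep matches within 10 arcsec
matched_idx = idx[good]             # indices into the proper-motion catalogue
\end{verbatim}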
\subsection{Blue Horizontal Branch}
Blue Horizontal Branch (BHB) stars, like RRL, are an old, metal poor population used widely in the literature to study the distant halo (e.g. \citealt{xue08, deason12b}). BHBs have relatively bright absolute magnitudes ($M_g \sim 0.5$), which can be simply parametrised as a function of colour and metallicity (e.g. \citealt{deason11c, fermani13a}). However, unlike their RRL cousins, photometric samples of BHB stars are often significantly contaminated by blue straggler stars, which have similar colours but higher surface gravity. Spectroscopic samples of BHBs can circumvent this problem by using gravity sensitive indicators to separate out the contaminants (e.g. \citealt{clewley02, sirko04, xue08, deason12b}).
In this work we use the spectroscopic SEGUE sample of BHB stars compiled by \cite{xue11}. This sample was selected to be relatively ``clean'' of higher surface gravity contaminants, and has already been exploited in a number of works to study the stellar halo (e.g. \citealt{xue11, deason12a, kafle13, hattori13}). By cross-matching this sample with the SDSS-\textit{Gaia} catalog, we identify $N=4553$ BHB stars. We estimate distances to these stars using the $g-r$ colour and metallicity dependent relation derived by \cite{fermani13a}. Similarly to the RRL stars, we do not take into account the relatively small ($\sim 10\%$) distance uncertainties of the BHBs in our analysis. Our resulting BHB sample has 3D positions, 3D velocities and spectroscopic metallicity estimates.
\subsection{K Giants}
Giant stars are often a useful probe of the stellar halo, owing to their bright absolute magnitudes ($M_r \sim 1$ to $-3$), and large numbers in wide-field spectroscopic surveys (e.g. \citealt{morrison00, xue14}). Moreover, giants are one of the most common tracers of \textit{external} galaxy haloes (e.g. \citealt{gilbert06,monachesi16}). In contrast to BHB and RRL stars, giant stars populate all metallicities in old populations. Thus, they represent a less biased tracer of the stellar halo.
The drawback of using giant stars to trace the halo is that spectroscopic samples are required to limit contamination from dwarf stars, and the absolute magnitudes of giants are strongly dependent on colour and metallicity. Here, we use the spectroscopic sample of K giants compiled by \cite{xue14}, who derive distance moduli for each star using a probabilistic framework based on colour and metallicity. A distance modulus PDF is constructed for each star, and we use the mode of the distribution $DM_{\rm peak}$ and interval between the 84\% and 16\% percentiles, $\Delta DM = \left(DM_{84}-DM_{16}\right)/2$, as the 1$\sigma$ uncertainty. We find $N = 5814$ K giants cross-matched with the SDSS-\textit{Gaia} proper motion sample. Thus, our resulting K giant sample has 3D positions (with distance moduli described using a Gaussian PDF), 3D velocities and spectroscopic metallicities.
\section{Sagittarius Stream}
\label{sec:sgr}
\begin{figure*}
\centering
\includegraphics[width=16cm, height=13.71cm]{sgr_pms.png}
\caption[]{ \small Proper motions in Galactic coordinates ($\mu_l$ middle panel, $\mu_b$ bottom panel) against longitude in the Sagittarius (Sgr) coordinate system (see \citealt{majewski03}). We also show heliocentric distance against Sgr longitude in the top panel. The red and blue points show the leading and trailing arms of the
\cite{law10} model. Note that we only show material stripped within the last 3 pericentric passages of the model orbit. The black filled squares show the median proper motions for RRL stars associated with the Sgr stream, and the open green circles are the whole sample of RRL stars. Here, we show median proper motions in bins of $\Lambda_\odot$. The error bars indicate $1.48 \, \mathrm{MAD}/\sqrt{N}$, where MAD $=$ median absolute deviation and $N$ is the number of stars in each bin. The orange diamonds, cyan squares and grey triangles show proper motion measurements along the stream from \cite{koposov13}, \cite{sohn15,sohn16} and \cite{carlin12}. There is excellent agreement between the SDSS-\textit{Gaia} proper motion measurements and the model values from \cite{law10} as well as previous measurements in the literature (see Fig. \ref{fig:pm_comp}). The solid black line shows the maximum likelihood model for halo rotation computed in Section \ref{sec:like}. A model with mild prograde rotation agrees very well with the proper motion data. }
\label{fig:sgr_pms}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=8.5cm]{pm_comp_sgr.png}
\caption[]{\small Proper motions in Galactic coordinates ($\mu_l$ left panels, $\mu_b$ right panel) against longitude in the Sagittarius (Sgr) coordinate system. The symbols and colors are identical to Fig. \ref{fig:sgr_pms}. Here, we have zoomed in on the regions of the Sgr stream that have proper motion constraints in the literature. The SDSS-\textit{Gaia} RRL proper motions in the Sgr stream (solid black squares) are in excellent agreement with the literature values.}
\label{fig:pm_comp}
\end{figure}
Before introducing our model for halo rotation, we identify RRL stars in our sample that likely belong to the Sagittarius (Sgr) stream. This vast substructure is very prominent in the SDSS footprint \citep{belokurov06}, and thus it may overwhelm any halo rotation signatures associated with earlier accretion events. Furthermore, previous works have independently measured proper motions of Sgr stars \citep{carlin12, koposov13, sohn15}, and hence we can provide a further test of our SDSS-\textit{Gaia} proper motions. Note that we use RRL stars (rather than BHBs or K giants) in Sgr as these stars have the most accurate distance measurements, and thus Sgr members can be identified relatively cleanly.
We identify Sgr stars according to position on the sky $(\alpha,\delta)$ and heliocentric distance using the approximate stream coordinates used by \cite{deason12b} and \cite{belokurov14}. The top panel of Fig. \ref{fig:sgr_pms} shows that our distance selection of Sgr stars agrees well with the \cite{law10} model. Our selection procedure identifies $N=830$ candidate Sgr associations, which corresponds to roughly 10\% of our RRL sample.
In Fig. \ref{fig:sgr_pms} we show proper motions in Galactic coordinates ($\mu_\ell, \mu_b$) as a function of longitude in the Sgr coordinate system (see \citealt{majewski03}). The red and blue points show the leading and trailing arms of the \cite{law10} model of the Sgr stream. Note that we only show material stripped within the last 3 pericentric passages of the model orbit. The black filled squares show the median SDSS-\textit{Gaia} proper motions for RRL stars associated with the Sgr stream in bins of Sgr longitude, and the error bars indicate $1.48 \, \mathrm{MAD}/\sqrt{N}$, where MAD $=$ median absolute deviation and $N$ is the number of stars in each bin. It is encouraging that the Sgr stars in our RRL sample agree very well with the model predictions by \cite{law10}. Proper motion measurements of Sgr stars in the literature are also shown in Fig. \ref{fig:sgr_pms}: these are given by the orange diamonds \citep{koposov13}, cyan squares \citep{sohn15,sohn16} and grey triangles \citep{carlin12}. Our SDSS-\textit{Gaia} proper motions are in excellent agreement with these other (independent) measures (see also Fig. \ref{fig:pm_comp}). Finally, we show the proper motions for the entire sample of SDSS-\textit{Gaia} RRL stars with the open green circles. The stars associated with Sgr are clearly distinct from the overall halo in proper motion space. The solid black line shows the maximum likelihood model for halo rotation computed in Section \ref{sec:like}. A model with mild prograde rotation agrees very well with the proper motion data. Note that the variation in proper motion with $\Lambda_\odot$ in the model is largely due to the solar reflex motion. Indeed, the solar reflex motion (in proper motion space) for Sgr stars is lower because they are typically further away than the halo stars. This is the main reason for the stark difference between the proper motions of the two populations in Fig. \ref{fig:sgr_pms}.
We also show the heliocentric distances of the Sgr stars as a function of Sgr Longitude in the top panel of Fig. \ref{fig:sgr_pms}. Again, there is excellent agreement with the \cite{law10} models. This figure shows that we can probe the Sgr proper motions out to $D \sim 50$ kpc, and thus we can accurately trace halo proper motions out to these distances (see Section \ref{sec:res}).
In Fig \ref{fig:pm_comp} we zoom in on the regions along the Sgr stream where proper motions have been measured previously in the literature. Here, the agreement with the other observational data is even clearer. In particular, our Sgr leading arm proper motions at $ 240^\circ \lesssim \Lambda_\odot \lesssim 360^\circ$ are in excellent agreement with the \textit{HST} proper motions measured by \cite{sohn15}. This is a wonderful validation of two completely independent astrometric techniques! Note that the Sgr stream is not the focus of this study, but the proper motion catalog we present here is a useful probe of the stream dynamics. For example, the slight differences between the \cite{law10} model predictions and our measurements could be used to refine/improve models of the Sgr orbit. We leave this task, and other applications of the Sgr proper motions, to a future study.
\\
\noindent
We have now shown, using both spectroscopically confirmed QSOs and stars belonging to the Sgr stream, that our SDSS-\textit{Gaia} proper motions are free of any significant systematic uncertainties. In the following Section we use this exquisite sample to infer the rotation signal of the stellar halo.
\section{Halo Rotation}
\label{sec:like}
In this Section, we use the SDSS-\textit{Gaia} sample of RRL, BHB and K giant stars to measure the average rotation of the Galactic stellar halo. Below we describe our rotating halo model, and outline our likelihood analysis.
In order to convert observed heliocentric velocities into Galactocentric ones, we adopt a distance to the Galactic centre of $R_0=8.3 \pm 0.3$ kpc \citep{schonrich12, reid14}, and we marginalise over the uncertainty in this parameter in our analysis. Given $R_0$, the total solar azimuthal velocity in the Galactic rest frame is strongly constrained by the observed proper motion of Sgr A$^*$, i.e. $V_{g, \odot} = \mu (\mathrm{Sgr \, A^*}) \times R_0$. We adopt the \cite{reid04} proper motion measurement of Sgr A$^*$, which gives a solar azimuthal velocity of $V_{g, \odot} = 250 \pm 9$ km s$^{-1}$. Finally, we use the solar peculiar motions $(U_\odot, V_\odot, W_\odot)=(11.1, 12.24, 7.25)$ km s$^{-1}$ derived by \cite{schonrich10}. Thus, in our analysis, the circular speed at the position of the Sun is $V_c = 238$ km s$^{-1}$ (where $V_{g, \odot}= V_c +V_\odot$). We note that the combination of $R_0=8.5$ kpc and $V_c = 220$ km s$^{-1}$ has been used widely in the literature, so in Section \ref{sec:res} we show how our halo rotation signal is affected if we instead adopt these parameters.
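For reference, a schematic conversion from the observed quantities to the Galactocentric azimuthal velocity, with the solar parameters adopted above, can be written with \texttt{astropy}; the input arrays are placeholders, and the sign convention should be checked against the definition that positive $\langle V_\phi \rangle$ is in the direction of disc rotation:
\begin{verbatim}
# Schematic conversion to Galactocentric v_phi with the adopted solar
# parameters; ra, dec (deg), dist (kpc), pmra, pmdec (mas/yr) and
# vlos (km/s) are placeholder arrays.
import numpy as np
import astropy.units as u
from astropy.coordinates import (SkyCoord, Galactocentric,
                                 CartesianDifferential)

galcen = Galactocentric(
    galcen_distance=8.3 * u.kpc,
    galcen_v_sun=CartesianDifferential([11.1, 250.0, 7.25] * u.km / u.s))

c = SkyCoord(ra=ra * u.deg, dec=dec * u.deg, distance=dist * u.kpc,
             pm_ra_cosdec=pmra * u.mas / u.yr, pm_dec=pmdec * u.mas / u.yr,
             radial_velocity=vlos * u.km / u.s)
g = c.transform_to(galcen)

R = np.sqrt(g.x**2 + g.y**2)
# Sign chosen so that rotation in the sense of the disc is positive in the
# astropy Galactocentric convention; this should be verified explicitly.
v_phi = -(g.x * g.v_y - g.y * g.v_x) / R
\end{verbatim}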
\subsection{Model}
We define a (rotating) 3D velocity ellipsoid aligned in spherical coordinates:
\begin{eqnarray}
\label{eqn:fv}
&&P(v_r,v_\theta,v_\phi|\sigma_r,\sigma_\phi,\sigma_\theta,\langle V_\phi \rangle) = \\
&&\frac{1}{\left(2\pi\right)^{3/2}\sigma_r \sigma_\theta
\sigma_\phi} \mathrm{exp}\left[-\frac{v^2_r}{2\sigma^2_r}-\frac{v^2_\theta}{2
\sigma^2_\theta}-\frac{\left(v_\phi-\langle V_{\phi} \rangle\right)^2}{2
\sigma^2_\phi}\right] \notag
\end{eqnarray}
Here, we only allow net streaming motion in the $v_\phi$ velocity coordinate, and assume Gaussian velocity distributions. Note that positive $\langle V_{\phi} \rangle$ is in the same direction as the disc rotation. For simplicity, we assume an isotropic ellipsoid where $\sigma_r=\sigma_\theta=\sigma_\phi=\sigma_*$, but we have ensured that this assumption of isotropy does not significantly affect our rotation estimates (see also Section \ref{sec:sims}).
This velocity distribution function can be transformed to Galactic coordinates $(\mu_l, \mu_b, v_{\rm los})$ by using the Jacobian of the transformation $J=4.74047^2 D^2$, which gives $P(\mu_l, \mu_b,v_{\rm los}|\sigma_*,\langle V_\phi \rangle, D)$.
The RRL stars only have proper motion measurements, so, in this case, we marginalise the velocity distribution function along the line-of-sight to obtain $P(\mu_l, \mu_b|\sigma_*,\langle V_\phi \rangle)$. Furthermore, while we can safely ignore the distance uncertainties for the RRL and BHB stars, we do need to take the K giant absolute magnitude uncertainties into account (typically, $\Delta DM \sim 0.35$) . Thus, for the K giants we include a distance modulus PDF in the analysis. Here, we follow the prescription by \cite{xue14} and assume a Gaussian distance modulus distribution with mean, $\langle DM \rangle = DM_{\rm peak}$ and standard deviation, $\sigma_{DM} = \left(DM_{84}-DM_{16}\right)/2$. Here, $DM_{\rm peak}$ is the most probable distance modulus derived by \cite{xue14}, and $\left(DM_{84}-DM_{16}\right)/2$ is the central 68\% interval. This distance modulus PDF was derived by \cite{xue14} using empirically calibrated colour-luminosity fiducials, at the observed colour and metallicity of the K giants.
\begin{eqnarray}
&&P(\mu_l, \mu_b, v_{\rm los}|\sigma_*,\langle V_\phi \rangle) = \\
&&\int P(\mu_l, \mu_b, v_{\rm los}|\sigma_*,\langle V_\phi \rangle, DM)
\mathcal{N}(DM|DM_{\rm peak}, \sigma_{DM})\, dDM \notag
\end{eqnarray}
where $\mathcal{N}(DM|DM_{\rm peak}, \sigma_{DM})$ is the normal distribution describing the uncertainty in measuring the distance modulus to a given star.
We then use a likelihood analysis to find the best-fit $\langle V_\phi \rangle$ value. The (isotropic) dispersion, $\sigma_*$, is also a free parameter in our analysis. As we are mainly concerned with net rotation, we assume a flat prior on $\sigma_*$ in the range $\sigma_* =[50,200]$ km s$^{-1}$, and marginalise over this parameter to find the posterior distribution for $\langle V_\phi \rangle$.
When evaluating the likelihoods of individual stars under our model, we also take into account the Gaussian uncertainties on the proper motions as prescribed by Eq.~\ref{eqn:sig}. As the likelihood functions are normal distributions, this amounts to a simple convolution operation.
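For the isotropic model used here, the marginal distribution of each proper motion component is Gaussian, so the error convolution simply adds the model and measurement variances. A schematic per-star log-likelihood is sketched below; \texttt{mu\_model\_l} and \texttt{mu\_model\_b} are hypothetical helper functions (not shown) that return the mean proper motions predicted by the solar reflex motion plus the mean rotation $\langle V_\phi \rangle$:
\begin{verbatim}
# Schematic per-star log-likelihood in proper-motion space: the model
# Gaussian (dispersion sigma_* mapped to mas/yr at distance D) is
# convolved with the Gaussian measurement error of Eq. (sig).
import numpy as np

K = 4.74047   # km/s per (mas/yr * kpc)

def logL_star(mul_obs, mub_obs, sig_pm, l, b, D, vphi_mean, sigma_star):
    sig_model = sigma_star / (K * D)           # model dispersion in mas/yr
    var = sig_model**2 + sig_pm**2             # convolution with PM errors
    mul_pred = mu_model_l(l, b, D, vphi_mean)  # hypothetical helper
    mub_pred = mu_model_b(l, b, D, vphi_mean)  # hypothetical helper
    chi2 = ((mul_obs - mul_pred)**2 + (mub_obs - mub_pred)**2) / var
    return -0.5 * chi2 - np.log(2.0 * np.pi * var)
\end{verbatim}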
\subsection{Results}
\label{sec:res}
\begin{table*}
\begin{center}
\renewcommand{\tabcolsep}{0.8cm}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c c c c c}
\hline
\hline
&\multicolumn{4}{c}{\textbf{RRL}}\\
& \multicolumn{2}{c}{All} & \multicolumn{2}{c}{Exc. Sgr} \\
& N & $\langle V_\phi \rangle$ [km s$^{-1}$] & N & $\langle V_\phi \rangle$ [km s$^{-1}$]\\
\cline{2-5} \\
All $\mathrm{[Fe/H]}$ & 7456 & $12^{+2}_{-3} $ & 6663 & $9^{+3}_{-2}$ \\
$\mathrm{[Fe/H]} >-1.5$ & 4322 & $14^{+3}_{-4} $ & 3983 & $11^{+4}_{-3}$ \\
$\mathrm{[Fe/H]} <-1.5$ & 1460 & $10^{+6}_{-7} $ & 1312 & $6.0^{+7}_{-6}$ \\
\hline
\hline
&\multicolumn{4}{c}{\textbf{BHB}}\\
& \multicolumn{2}{c}{All} & \multicolumn{2}{c}{Exc. Sgr} \\
& N & $\langle V_\phi \rangle$ [km s$^{-1}$]& N & $\langle V_\phi \rangle$ [km s$^{-1}$]\\
\cline{2-5} \\
All $\mathrm{[Fe/H]}$& 3947 & $6.0^{+3}_{-3} $ & 3671 & $5.0^{+3}_{-3}$ \\
$\mathrm{[Fe/H]} >-1.5$ & 756 & $18^{+7}_{-7} $ & 715 & $21^{+7}_{-7}$ \\
$\mathrm{[Fe/H]} <-1.5$ & 3191 & $2.0^{+4}_{-3}$ & 2956 & $0.0^{+4}_{-3}$ \\
$\mathrm{[Fe/H]} >-1.5$, PM only & 756 & $15^{+8}_{-9}$ & 715 & $19^{+8}_{-9}$ \\
$\mathrm{[Fe/H]} <-1.5$, PM only & 3191 & $1.0^{+4}_{-4}$ & 2956 & $-1.0^{+4}_{-4}$ \\
\hline
\hline
&\multicolumn{4}{c}{\textbf{K giants}}\\
& \multicolumn{2}{c}{All} & \multicolumn{2}{c}{Exc. Sgr} \\
& N & $\langle V_\phi \rangle$ [km s$^{-1}$] & N & $\langle V_\phi \rangle$ [km s$^{-1}$] \\
\cline{2-5} \\
All $\mathrm{[Fe/H]}$& 5284 & $23^{+3}_{-3} $ & 4603 & $19^{+3}_{-3}$ \\
$\mathrm{[Fe/H]} >-1.5$ & 2553 & $28^{+4}_{-4}$ & 2159 & $23^{+4}_{-4}$ \\
$\mathrm{[Fe/H]} <-1.5$ & 2731 & $17^{+4}_{-4}$ & 2444 & $14^{+4}_{-4}$ \\
$\mathrm{[Fe/H]} >-1.5$, $P_{\rm RGB} > 0.8$ & 1748 & $30^{+5}_{-5}$ & 1426 & $23^{+5}_{-5}$ \\
$\mathrm{[Fe/H]} <-1.5$, $P_{\rm RGB} > 0.8$ & 1985 & $22^{+5}_{-5}$ & 1744 & $18^{+5}_{-5}$ \\
\hline
\hline
\label{tab:res}
\end{tabular}
\caption{\small A summary of best-fit $\langle V_\phi \rangle$ values and associated $1 \sigma$ uncertainties. Halo stars with $r < 50$ kpc, $|z| > 4$ kpc and $\mu < 100$ mas/yr were used to derive these values.}
\end{center}
\end{table*}
In this Section, we apply our likelihood procedure to RRL, BHB and K giant stars with SDSS-\textit{Gaia} proper motions. For all halo tracers, we only consider stars with $r < 50$ kpc and $|z| > 4$ kpc. The latter cut is imposed to avoid potential disc stars. In addition, we remove any stars with considerable proper motion ($\mu > 100$ mas/yr), although, in practice this amounts to removing only a handful ($\ll 1\%$) of stars and their exclusion does not affect our rotation estimates. The best fit values of $\langle V_\phi \rangle$ described in this section are summarised in Table \ref{tab:res}.
In Fig. \ref{fig:vphi} we show the posterior distribution for $\langle V_\phi \rangle$ for each of the halo tracers. The solid black, dashed orange and dot-dashed purple lines show the results for RRL, BHBs and K giants, respectively. All the halo tracers favour a mild prograde rotation signal, with $\langle V_\phi \rangle \sim 5-25$ km s$^{-1}$. Note that the RRL model is shown against the proper motion data in Fig. \ref{fig:sgr_pms}. In general, the K giants show the strongest rotation signal of the three halo tracers. This is likely because the K giants have a broader age and metallicity spread than the RRL and BHB stars (see Section \ref{sec:sims}). However, the K giant rotation signal is still relatively mild ($\sim 20$ km s$^{-1}$) and similar (within 10-15 km s$^{-1}$) to the RRL and BHB results. The three tracer populations have different distance distributions, so it is not immediately obvious that their rotation signals can be directly compared. However, as we show in Fig. \ref{fig:vphi_rad}, we find little variation in the rotation signal with Galactocentric radius, so a comparison between the ``average'' rotation signal of the populations is reasonable. Finally, we note that we also check that the Sgr stars in our sample make little difference to the overall rotation signal of the halo (see Table \ref{tab:res}).
For comparison, the right-hand panel of Fig. \ref{fig:vphi} shows the posterior distributions if we adopt other commonly used parameters for distance from the Galactic centre and circular velocity at the position of the Sun: $R_0=8.5$ kpc, $V_c = 220$ km s$^{-1}$. In this case, only the K giants exhibit a detectable rotation signal. It is worth emphasizing that current estimates of the solar azimuthal velocity favour the larger value of $V_c \sim 240$ km s$^{-1}$ \citep{bovy12,schonrich12,reid14} that we use, but it is important to keep in mind that the rotation signal is degenerate with the adopted solar motion. In addition, as discussed in Section \ref{sec:pms}, the systematic uncertainties of our SDSS-\textit{Gaia} proper motion catalogue are at the level of $\sim 0.1$ mas/yr. Thus, for typical distances to the halo stars of 20 kpc, we cannot robustly measure a rotation signal that is weaker than 10 km s$^{-1}$.
In Fig. \ref{fig:dm} we compare the model predictions for $\mu_l$ with the observed data. We show the Galactic longitude proper motion $\mu_l$ because this component is more sensitive than $\mu_b$ to variations in $\langle V_\phi \rangle$. The solid black line shows the difference between the maximum likelihood models and the data as a function of Galactocentric longitude. The error bars indicate the median absolute deviation of the data in each bin. For comparison, we also show with the dashed blue and dot-dashed red lines the model predictions with $\langle V_\phi \rangle \pm 20$ km s$^{-1}$. For all three tracers, the models with very mild prograde rotation agree well with the data.
Our maximum likelihood models give $\sigma_*$ values of 138, 121 and 111 km s$^{-1}$ for the RRL, BHBs and K giants, respectively. These values agree well with previous estimates in the literature \citep{battaglia05, brown10, deason12b, xue08}. Note that our models assume isotropy, but we find that both radially and tangentially biased models make little difference to our estimates of $\langle V_\phi \rangle$.
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=4.25cm]{vphi_plot.pdf}
\caption{ \small The posterior $\langle V_\phi \rangle$ distributions for RRL (solid black), BHB (dashed orange) and K giant (dot-dashed purple) tracers. The shaded grey region indicates the approximate systematic uncertainty in the proper motion measurements ($\sim 10$ km s$^{-1}$ at $D=20$ kpc). For comparison, the right-hand panel shows the posterior distributions when a different combination of position of the Sun ($R_0=8.5$ kpc), and circular velocity at the position of the Sun ($V_c = 220$ km s$^{-1}$) is used. With a lower solar azimuthal velocity, the (already mild) rotation signal disappears. Current estimates favour the larger value of $\sim 240$ km s$^{-1}$, but it is worth bearing in mind the degeneracy between the rotation signal and adopted solar motion.}
\label{fig:vphi}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=16cm, height=3.64cm]{data_model_pm.pdf}
\caption{ \small A comparison between the observed proper motions and the maximum likelihood model predictions. Here, we show the median $\mu_l$ values in bins of Galactocentric longitude. The RRL, BHB and K giant samples are shown in the left, middle and right panels, respectively. The solid black line shows the comparison with the maximum likelihood model. The error bars show the median absolute deviation of the data in each Galactic longitude bin. The dashed blue and dot-dashed red lines show models with $\langle V_\phi \rangle \pm 20$ km s$^{-1}$. Note that in this comparison we have removed stars that likely belong to the Sgr stream.}
\label{fig:dm}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm, height=4.25cm]{vphi_radius.pdf}
\caption{\small The best-fit rotation of the stellar halo, $\langle V_\phi \rangle$, in Galactocentric radial bins. Here, 10 kpc radial bins are used and the error bars indicate the 1$\sigma$ confidence levels. RRL, BHBs and K giants are shown in the left, middle and right panels, respectively. The solid black circles show all halo stars, and the open orange circles show the rotation signal when stars likely associated with the Sagittarius (Sgr) stream are removed. The apparent rotation signal at radii $ 30 < r/\mathrm{kpc} < 40$ is due to the Sgr stream. The shaded grey region indicates the approximate systematic uncertainty of the SDSS-\textit{Gaia} proper motions ($\sim 0.1$ mas/yr).}
\label{fig:vphi_rad}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=8cm, height=12cm]{vphi_plot_met.pdf}
\caption{ \small The posterior $\langle V_\phi \rangle$ distributions for metal-richer (solid red lines, [Fe/H] $>-1.5$) and metal-poorer (dashed blue lines, [Fe/H] $<-1.5$) stars. RRL, BHBs and K giants are shown in the top, middle and bottom panels, respectively. The metal-richer BHB and K giant stars are mildly biased ($\sim 1\sigma$) towards stronger prograde rotation. The thinner lines show the estimated rotation signals when stars associated with the Sgr stream are excluded. The shaded grey region indicates the approximate systematic uncertainty in the proper motion measurements ($\sim 10$ km s$^{-1}$ at $D=20$ kpc).}
\label{fig:vphi_met}
\end{figure}
We now investigate if there is a radial dependence on the rotation signal of the stellar halo. Our likelihood analysis is applied to halo stars in radial bins 10 kpc wide between Galactocentric radii $0 < r/\mathrm{kpc} < 50$. The results of this exercise are shown in Fig. \ref{fig:vphi_rad}. The solid black circles show all halo stars, and the open orange circles show the rotation signal when stars likely associated with the Sagittarius (Sgr) stream are removed. Here, the error bars indicate the 1$\sigma$ confidence levels. We find that the (prograde) rotation signal stays roughly constant at $10 \lesssim \langle V_\phi/ \mathrm{km \, s^{-1}} \rangle \lesssim 20$. We do find a stronger rotation signal in the radial bin $ 30 < r/ \mathrm{kpc} < 40$ for both RRL and K giants, but this is attributed to a significant number of Sgr stars in this radial regime. The shaded grey regions in Fig. \ref{fig:vphi_rad} indicate the approximate \textit{systematic} uncertainty in the velocity measurements in each radial bin, assuming a systematic proper motion uncertainty of 0.1 mas/yr. Thus, the prograde rotation is very mild, and we are only just able to discern a rotation signal that is not consistent with zero.
In Fig. \ref{fig:vphi_met} we explore whether or not the rotation signal of the halo stars is correlated with metallicity. The spectroscopic BHB and K giant samples have measured [Fe/H] values, and for the RRL we use photometric metallicities measured from the light curves. The metallicity distribution functions of the three halo tracers are different, and we are using both spectroscopic and photometric metallicities. Thus, we only compare ``metal-richer'' and ``metal-poorer'' stars using a metallicity boundary of [Fe/H] $=-1.5$. This boundary was chosen as the median value of the K giant sample, which is the least (metallicity) biased tracer. In Fig. \ref{fig:vphi_met} we show the posterior probability distributions for the average rotation of the metal-rich (solid red) and metal-poor (dashed blue) tracers. The thinner lines show the posteriors when stars likely associated with the Sgr stream are excluded. There is no evidence for a metallicity dependence in the RRL sample, but both the BHBs and K giants show a slight ($\sim 1\sigma$) bias towards stronger prograde rotation for metal-rich stars.
The lack of a metallicity correlation in the rotation of the RRL stars could be due to the relatively poor photometric metallicity estimates (see e.g. Fig. 10 in \citealt{watkins09}), which could wash out any apparent signal. On the other hand, the apparent metallicity correlation in the BHB and K giant samples could be caused by contamination. We explore this scenario in more detail below.
Previous work using only line-of-sight velocities have also found evidence for a metal-rich/metal-poor kinematic dichotomy in spectroscopic samples of BHB stars \citep{deason11a, kafle13, hattori13}. However, \cite{fermani13b} argue that this signal is due to (1) contamination by blue straggler stars, (2) incorrect distance estimates and, (3) potential pipeline systematics in the \cite{xue11} BHB sample. The BHB sample used in this work should not suffer from significant blue straggler (or main sequence star) contamination. Moreover, our distance calibration is robust to systematic metallicity differences \citep{fermani13a}. However, we cannot ignore the potential line-of-sight velocity systematics in the \cite{xue11} sample. \cite{fermani13b} find that a subsample of hot metal-poor BHB stars exhibit peculiar line-of-sight kinematics, which likely causes the metallicity bias in the rotation estimates. It is worth noting that the peculiar line-of-sight kinematics of the hot BHB stars could also be due to a stream-like structure in the halo, and is not necessarily a pipeline failure. In Table \ref{tab:res} we also give the rotation estimates for metal-rich/metal-poor stars computed with proper motions only. The results are only slightly changed when we do not use the BHB line-of-sight velocities, and they agree within $1 \sigma$ of the rotation estimates when 3D velocities are used.
We also investigate whether or not the apparent metallicity correlation in the K giant sample could be due to contamination. For example, if there are (metal-rich) disc stars present in the sample, this could lead to a stronger prograde signal in the metal-richer stars. Disc contamination could result from stars being misclassified as giant branch stars (e.g. dwarfs, red clump stars), such that their distances are overestimated. To this end, we use a stricter cut on the $P_{\rm RGB}$ parameter provided by \cite{xue14}, which gives the probability of being a red giant branch star. Our fiducial sample has $P_{\rm RGB} > 0.5$. We find that using $P_{\rm RGB} > 0.8$ results in little difference to the rotation signal of the metal-rich stars, and the rotation signal of the metal-poor stars becomes slightly stronger (see Table \ref{tab:res}). It does not appear that the sample is contaminated by disc stars, but the (slight) metallicity correlation in the K giant sample does lose statistical significance if a stricter cut on red giant branch classification is used. However, this is likely because the error bars are inflated due to smaller number statistics.
It is worth noting that the tests we perform above on the BHB and K giant samples do not significantly change the rotation signals of the stars (differences are less than $1 \sigma$), so, we are confident that contamination in these samples is not significantly affecting our results. Thus, we conclude that there does appear to be a mild correlation between rotation signal and metallicity in the halo star kinematics.
In summary, we find that the (old) stellar halo, as traced by RRL, BHB and K giant stars, has a very mild prograde rotation signal, and there is a weak correlation between rotation signal and metallicity. Is this the expected result for a Milky Way-mass galaxy stellar halo? Or, indeed, is this rotation signal result consistent with the predictions of the $\Lambda\mathrm{CDM}$ model? In the following Section, we exploit a suite of state-of-the-art cosmological simulations in order to address these questions.
\section{Simulated Stellar Haloes}
\label{sec:sims}
\subsection{Auriga Simulations}
\begin{figure*}
\centering
\includegraphics[width=17cm, height=5.1cm]{rot.pdf}
\caption{ \small \textit{Left panel:} The distribution of average azimuthal velocity ($\langle V_\phi \rangle $) of the 30 Auriga stellar haloes. Halo stars are spatially selected in the SDSS survey footprint with Galactocentric radius $5 < r/\mathrm{kpc} < 50$ and height above disc plane $|z| > 4$ kpc. The solid grey histogram shows all halo stars, and the line-filled green histogram shows old halo stars selected with $T_{\rm form} > 10$ Gyr. These ``old'' star particles are selected for a more direct comparison with the observed halo stars. The haloes have a range of rotation amplitudes between $0 \lesssim \langle V_{\phi} \rangle /\mathrm{km \, s^{-1}} \lesssim 120$. Old halo stars typically show a milder rotation signal with $0 \lesssim \langle V_{\phi} \rangle/\mathrm{km \, s^{-1}} \lesssim 80$. The solid red line indicates the approximate average rotation signal ($\sim 14$ km s$^{-1}$) of the old Milky Way halo populations. \textit{Middle and right panels:} The variation of rotation signal with Galactocentric radius. The solid black line and the grey filled region (middle panel) show the median and 10th/90th percentile range for the 30 Auriga haloes. The solid green line and green filled region (right panel) are the same for the old halo stars. For comparison, we show the average (inverse variance weighted) rotation signal for RRL, BHB and K giant stars in the Milky Way with the red symbols (cf. Fig. \ref{fig:vphi_rad}). The mild prograde rotation we find in the observational data is in good agreement with the old halo stars in the simulations. The dashed green line indicates the 20th percentile of the distribution, which approximately follows the observed radial trend.}
\label{fig:sim_rot}
\end{figure*}
In this Section, we use a sample of $N=30$ high-resolution Milky Way-mass haloes from the Auriga simulation suite. These simulations are described in more detail in \cite{grand17}, and we only provide a brief description here.
A low-resolution dark-matter-only simulation with box size 100 Mpc $h^{-1}$ was used to select candidate Milky Way-mass ($1 < M_{200}/10^{12}M_\odot < 2$) haloes. These candidate haloes were chosen to be relatively isolated at $z=0$. More precisely, there are no objects with masses greater than half of the parent halo closer than 1.37 Mpc. A $\Lambda$CDM cosmology consistent with the \cite{planck14} data release is adopted, with parameters $\Omega_m=0.307$, $\Omega_b=0.048$, $\Omega_\Lambda=0.693$ and $H_0=100 \, h$ km s$^{-1}$ Mpc$^{-1}$, where $h=0.6777$. Each candidate halo was re-simulated at a higher resolution using a multi-mass particle ``zoom-in'' technique.
The zoom re-simulations were performed with the state-of-the-art cosmological magneto-hydrodynamical code \textsc{arepo} \citep{springel10}. Gas was added to the initial conditions by adopting the same technique described in \cite{marinacci14a, marinacci14b}, and its evolution was followed by solving the MHD equations on a Voronoi mesh. At the resolution level used in this work (level 4), the typical mass of a dark matter particle is $3 \times 10^5 M_\odot$, and the baryonic mass resolution is $5 \times 10^4 M_\odot$. The softening length of the dark matter particles and star particles grows with time in physical space until a maximum of 369 pc is reached at $z=1.0$ (where $z$ is the redshift). The gas cells have a softening length that scales with the mean radius of the cell, and the maximum physical softening is 1.85 kpc.
The Auriga simulations employ a model for galaxy formation physics that includes critical physical processes, such as star formation, gas heating/cooling, feedback from stars, metal enrichment, magnetic fields, and the growth of supermassive black holes (see \citealt{grand17} for more details). The simulations have been successful in reproducing a number of observable disc galaxy properties, such as rotation curves, star formation rates, stellar masses, sizes and metallicities.
This work is concerned with stellar haloes of the Auriga galaxies. A future study (Monachesi et al. in preparation) will present a more general analysis of the simulated stellar halo properties. Here, we focus on the net rotation of the Auriga stellar haloes for comparison with the observational results in the preceding sections.
\subsection{Rotation of Auriga Stellar Haloes}
The definition of ``halo stars'', in both observations and simulations, is somewhat arbitrary, and often varies widely between different studies. In this work, for a more direct comparison with our observational results, we spatially select stars within the SDSS survey footprint (see Fig. \ref{fig:sdss_rrl}) with Galactocentric radius $5 < r/\mathrm{kpc} < 50$ and height above disc plane $|z| > 4$ kpc. Note that the Auriga discs generally have larger scale heights than the Milky Way disc (see \citealt{grand17}), so our spatial selection will likely include some disc star particles, particularly at small radii. Finally, for a fair comparison with the old halo tracers (i.e. RRL, BHBs, and K giants) used in this work, we also select ``old'' star particles. For this purpose, we consider halo stars that formed more than 10 Gyr ago in the simulations. Note that we align each halo with the stellar disc angular momentum vector, which we compute using all star particles within 20 kpc.
In the left-hand panel of Fig. \ref{fig:sim_rot} we show the distribution of average azimuthal velocity ($\langle V_\phi \rangle$) of halo stars in the 30 Auriga simulations. Here, halo stars are selected within the SDSS survey footprint between 5 and 50 kpc from the Galactic centre, and with height above the disc plane, $|z| > 4$ kpc. The average rotation for all halo stars in this radial range is shown with the grey histogram. Old halo stars (with $T_{\rm form} >$ 10 Gyr) are shown with the green line-filled histogram. The stellar haloes show a broad range of rotation velocities, ranging from $0 \lesssim \langle V_\phi \rangle /\mathrm{km \, s}^{-1} \lesssim 120$, but they are all generally \textit{prograde}. Similarly, the old halo stars exhibit prograde rotation, but they have much milder rotation amplitudes, with $\langle V_\phi \rangle \lesssim 80$ km s$^{-1}$. The average rotation signal of the three Milky Way halo populations we used in Section \ref{sec:res} is $14$ km s$^{-1}$. Only 3 percent of the Auriga haloes have net rotation signals $\le 14$ km s$^{-1}$; however, the fraction of ``old'' simulated haloes with similarly low rotation amplitudes is higher (20 percent).
In the middle- and right-hand panels we show the radial dependence of the rotation signal in the simulations. Here, in the middle panel, the solid black line shows the median value of the 30 Auriga haloes and the grey shaded region indicates the 10th/90th percentiles. Similarly, in the right-hand panel, the solid green line shows the median value of the old halo stars and the green shaded region indicates the 10th/90th percentiles. The rotation signal of the whole halo sample varies with radius and declines from $\langle V_\phi \rangle \sim 70$ km s$^{-1}$ at $r \sim 10$ kpc to $\langle V_\phi \rangle \sim 25$ km s$^{-1}$ at $r \sim 50$ kpc. In contrast, the old halo stars have a fairly constant rotation amplitude of 20-30 km s$^{-1}$ with Galactocentric distance. It is likely that the higher rotation amplitude for halo stars at small Galactocentric distances is due to disc contamination and/or the presence of \textit{in situ} stellar halo populations more akin to a ``thick disc'' component (e.g. \citealt{zolotov09, font11, mccarthy12, pillepich15}). However, the old halo stars will suffer much less contamination from the disc (or disc-like) populations\footnote{It is worth noting that not \textit{all} old stars will have an external origin, as there are old ($T_{\rm form} > 10$ Gyr) populations present in the disc and \textit{in situ} halo components (see e.g. \citealt{mccarthy12, pillepich15}).}, and they are dominated by stellar populations accreted from dwarf galaxies (see e.g. Figure 10 in \citealt{mccarthy12}). This is likely the reason why the rotation amplitude of the old halo stars is fairly constant with Galactocentric radius.
Finally, it is worth noting that a significant fraction of the Auriga galaxies ($\sim 1/3$) have an ``\textit{ex situ} disc'' formed from massive accreted satellites \citep{gomez17b}. Some of these \textit{ex situ} discs can extend more than 4 kpc above the disc plane, and can be the cause of significant rotation in the stellar haloes. However, this is not true for all of the \textit{ex situ} discs in the simulations: some are largely confined to small $|z|$ and will not necessarily affect the rotation signal at the larger Galactocentric radii probed in this work (see \citealt{gomez17b}).
We also show our observational results from the RRL, BHB and K giant stars in Fig. \ref{fig:sim_rot} (cf. Fig. \ref{fig:vphi_rad}). Here, we show the average (inverse variance weighted) rotation signal from the three populations. In practice, the rotation of the three populations is very similar (see Fig. \ref{fig:vphi_rad}). The observed rotation amplitude in the Galactic halo broadly agrees with the old halo population in the simulations: a mild prograde signal is consistent, and indeed \textit{typical} in the cosmological simulations. The dashed green line in the right-hand panel of Fig. \ref{fig:sim_rot} indicates the 20th percentile level, which agrees well with the observed values. Thus, while the mild prograde rotation of the old Milky Way halo stars is consistent with the simulated haloes, the observed rotation amplitude is on the low side of the distribution of Auriga haloes. We note that the Auriga haloes are randomly selected from the most isolated quartile of haloes (in the mass range $1-2 \times 10^{12}M_\odot$), and thus they are typical field disc galaxies (as opposed to those in a cluster environment). Thus, we can infer that the rotation signal of the old Milky Way halo is fairly low compared to the general field disc galaxy population.
It is not immediately obvious why the old halo stars in the simulations, even out to $r \sim 50$ kpc, have (mild) prograde orbits. If most of these stars come from destroyed dwarf galaxies, then their net spin will be related to the original angular momentum vectors of the accreted dwarfs. Previous studies using cosmological simulations have shown that subhalo accretion is anisotropic along filamentary structures, and is generally biased along the major axis of the host dark matter halo \citep{bailin05b, libeskind05, zentner05}. Indeed, \cite{lovell11} showed that the subhalo orbits in the Aquarius simulations are mainly aligned with the main halo spin. Hydrodynamic simulations predict that the angular momentum vector of disc galaxies tends to be aligned with the dark matter halo spin, at least in the inner parts of haloes (e.g. \citealt{bailin05, bett10, deason11b, shao16}). Thus, the slight preference for prograde orbits in the accreted stellar haloes is likely due to the filamentary accretion of subhaloes, which tend to align with the host halo major axis and stellar disc. Note that the non-perfect alignment between filaments, dark matter haloes and stellar discs will naturally lead to a relatively weak (but non-zero!) signal. In addition, the orbital angular momentum of massive accreted satellites can align with the host disc angular momentum \textit{after} infall. Indeed, \cite{gomez17b} show that when \textit{ex situ} discs are formed from the accretion of massive satellites the angular momentum of the dwarfs can be initially misaligned with the disc but can rapidly become aligned after infall. Furthermore, this alignment is not just due to a change in the satellite orbit, but also because of a response of the host galactic disc!
Note that, as mentioned above, some of the old stars will also belong to the \textit{in situ} halo component, and these are more likely biased towards prograde (or disc-like) orbits. Thus, it is likely that those haloes with minor net rotation are less dominated by \textit{in situ} populations. Indeed, the mild prograde rotation we see in the observational samples suggests that the \textit{in situ} component of the Milky Way is relatively minor. Moreover, as more recent, massive mergers will lead to higher net spin in the halo, the weak rotation signal in the Milky Way halo is indicative of a quiescent merger history (see e.g. \citealt{gardner01, vitvitska02}).
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=7.08cm]{rot_met.pdf}
\caption{ \small \textit{Left panels: } The distribution of $\langle V_\phi \rangle$ of halo stars in the Auriga simulations that are metal-rich ([Fe/H] $>-1$, red) and metal-poor ([Fe/H] $<-1$, blue). All halo stars with Galactocentric radius $5 < r/\mathrm{kpc} < 50$ and height above disc plane $|z| > 4$ kpc, are shown in the top panels, and old halo stars ($T_{\rm form} > 10$ Gyr) are shown in the bottom panels. \textit{Right panels:} The variation of $\langle V_\phi \rangle$ with Galactocentric radius. Solid red (metal-rich) and line-filled blue (metal-poor) regions indicate the 10th/90th percentile ranges for the 30 Auriga haloes. The median values are shown with the solid lines. Metal-richer stars tend to have a stronger (prograde) rotation signal than the metal-poorer stars. However, the old halo stars show a much milder metallicity correlation.}
\label{fig:sim_rot_met}
\end{figure}
In Fig. \ref{fig:sim_rot_met} we show how the rotation signal of the Auriga stellar haloes depends on metallicity. We define ``metal-rich'' and ``metal-poor'' populations as halo stars with metallicities above/below 1/10th of solar ([Fe/H] $= -1$). This metallicity boundary was chosen as it roughly corresponds to the median metallicity of the old halo stars in the simulations. However, as is the case in the observations, our choice of metallicity boundary is fairly arbitrary. When all halo stars are considered, there is a tendency for the metal-richer stars to have a stronger prograde rotation. This metallicity correlation is more prominent in the inner regions of the halo. It is likely that the correlation in the inner regions of the halo can be attributed, at least in part, to disc contamination and/or the presence of \textit{in situ} (disc-like) stellar halo populations. Furthermore, most of the strongly rotating \textit{ex situ} disc material in the simulations is contributed by one massive, and thus metal-rich, satellite, which could also cause a metallicity correlation in the halo stars. The old halo stars, which suffer less from disc contamination, show only a very mild ($\sim 5-10$ km s$^{-1}$) bias towards more strongly rotating metal-rich populations. Indeed, we found a weak metallicity correlation in the observed samples of old halo stars, which seems to be in good agreement with the predictions of the simulations.
\begin{figure*}
\centering
\includegraphics[width=17cm, height=4.25cm]{vphi_mock_comp.pdf}
\caption{\small A comparison between the ``true'' rotation signal in the Auriga haloes and the value estimated by applying our likelihood procedure to mock observations. Old halo stars ($T_{\rm form} > 10$ Gyr) in the simulations are identified in the range $5 < r/\mathrm{kpc} < 50$, and $N \sim 4000-8000$ are randomly selected within the SDSS footprint with $|z| > 4$ kpc. The positions and velocities of the stars are converted into Galactic coordinates, and we apply a scatter of $2$ mas/yr when converting to proper motions. The left, middle and right panels show RRL, BHB and K giant ``like'' mocks. The RRL mocks have $N \sim 8000$ stars randomly selected and the line-of-sight velocity is presumed to be unknown. All three Galactic velocity components are used for the BHB and K giant mocks, but the sample size is smaller ($N \sim 4000-5000$). Finally, for the K giant mocks, we also apply a scatter of 0.35 dex to the distance moduli of the stars. The error bars show the 1$\sigma$ confidence derived from the likelihood analysis. The true rotation signal is typically recovered to within 5 km s$^{-1}$ (median $\sim 1$ km s$^{-1}$, $\sigma = 1.48 \times \mathrm{MAD} \sim 5$ km s$^{-1}$). Note that the outliers with $|\langle V_\phi \rangle_{\rm LIKE} - \langle V_\phi \rangle_{\rm TRUE, HALO}| > 20$ km s$^{-1}$ are likely due to substructure in the Auriga haloes. }
\label{fig:mock}
\end{figure*}
\subsubsection{Tests with mock observations}
In Figure \ref{fig:sim_rot} we showed the ``true'' average rotation signal of the Auriga stellar haloes. This is computed for all halo stars within the SDSS footprint with $5 < r/\mathrm{kpc} < 50$ and $|z| > 4$ kpc directly from the simulations. Now, we generate mock observations from the simulated stellar haloes to see if we can recover this rotation signal using the likelihood method described in Section \ref{sec:like}. For the mock observations, we convert spherical coordinates ($r, \theta, \phi$) into Galactic coordinates ($D, \ell, b$), and place the ``observer'' at the position of the Sun $(x,y,z)=(-8.5, 0, 0)$ kpc. Old halo stars are identified ($T_{\rm form} > 10$ Gyr) in the coordinate ranges $5 < r/\mathrm{kpc} < 50$ and $|z| > 4$ kpc, and $N \sim 4000-8000$ are randomly selected within the SDSS footprint (see Fig. \ref{fig:sdss_rrl}). The tangential Galactic velocity components ($V_\ell, V_b$) are converted into proper motions, and we apply a scatter of 2 mas/yr, which is the typical observational uncertainty in the SDSS-\textit{Gaia} sample. After applying our modeling technique, we show the resulting best-fit $\langle V_\phi \rangle$ parameters in Fig. \ref{fig:mock}. The left, middle and right panels show RRL-, BHB- and K giant-like mocks. The RRL mocks have $N \sim 8000$ stars randomly selected and we marginalise over the line-of-sight velocity. All three Galactic velocity components are used for the BHB and K giant mocks, but the sample sizes are smaller ($N \sim 4000-5000$), and we apply a scatter of 0.35 dex to the distance moduli of the ``K giant'' stars. Note that we also use these mocks to ensure that we can safely ignore the small ($\sim 10\%$) distance uncertainties in the RRL and BHB populations. In Fig. \ref{fig:mock} we show the difference between the true and inferred $\langle V_\phi \rangle$ values as a function of the true rotation signal. The distribution of $\Delta \langle V_\phi \rangle = \langle V_\phi \rangle_{\rm LIKE} - \langle V_\phi \rangle_{\rm TRUE, HALO}$ is similar for all three mock tests, with a median offset of $\sim 1$ km s$^{-1}$ and $\sigma=1.48 \times$ MAD of $\sim5$ km s$^{-1}$ (see right-hand inset)\footnote{Note that we attribute the outliers with large $\Delta \langle V_\phi \rangle$ to significant substructures in the Auriga haloes.}. Thus, even with observational proper motion errors of order the proper motions themselves, we are able to recover the average rotation signal of the stellar halo to $< 10$ km s$^{-1}$. Note that this level of scatter in the simulations is typically less than the \textit{systematic} uncertainty in the SDSS-\textit{Gaia} proper motion catalog of 0.1 mas/yr.
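For concreteness, the proper-motion scatter step of the mock construction amounts to the short Python/NumPy fragment below. This listing is illustrative only and is not part of our analysis code; the function and array names are placeholders, and $4.74$ is the usual conversion factor between a tangential velocity in km s$^{-1}$, a heliocentric distance in kpc and a proper motion in mas yr$^{-1}$.
\begin{verbatim}
import numpy as np

KMS_PER_MASYR_KPC = 4.74047  # km/s corresponding to 1 mas/yr at 1 kpc

def mock_proper_motions(v_l, v_b, dist_kpc, scatter=2.0, seed=None):
    """Convert tangential Galactic velocities (km/s) to proper motions
    (mas/yr) and add Gaussian scatter of `scatter` mas/yr."""
    rng = np.random.default_rng(seed)
    mu_l = v_l / (KMS_PER_MASYR_KPC * dist_kpc)
    mu_b = v_b / (KMS_PER_MASYR_KPC * dist_kpc)
    mu_l = mu_l + rng.normal(0.0, scatter, size=np.shape(mu_l))
    mu_b = mu_b + rng.normal(0.0, scatter, size=np.shape(mu_b))
    return mu_l, mu_b
\end{verbatim}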
\section{Conclusions}
\label{sec:conc}
We have combined the exquisite astrometry from \textit{Gaia} DR1 and recalibrated astrometry of SDSS images taken some $\sim 10-15$ years earlier to provide a stable and robust catalog of proper motions. Using spectroscopically confirmed QSOs, we estimate typical proper motion uncertainties of $\sim 2$ mas/yr down to $r \sim 20$ mag, which are stable to variations in colour and magnitude. Furthermore, we estimate systematic errors to be of order 0.1 mas/yr, which is unrivaled by any other dataset of similar depth. We exploit this new SDSS-\textit{Gaia} proper motion catalogue to measure the net rotation of the Milky Way stellar halo using RRL, BHB and K giant halo tracers.
Our main conclusions are summarised as follows.
\begin{itemize}
\item We identify (RRL) halo stars that belong to the Sgr stream and compare the SDSS-\textit{Gaia} proper motions along the stream to the \cite{law10} model. In general, there is excellent agreement with the model predictions for the Sgr leading and trailing arms. Furthermore, previous proper motion measurements in the literature of the Sgr stream \citep{carlin12, koposov13, sohn15} agree very well with the new SDSS-\textit{Gaia} proper motions. These comparisons are a reassuring validation that these new proper motions can be used to probe the Milky Way halo.
\item We construct samples of RRL, BHB and K giant stars in the halo with measured proper motions, distances, and (for the spectroscopic samples) line-of-sight velocities. Using a likelihood procedure, we measure a weak prograde rotation of the stellar halo, with $\langle V_\phi \rangle \sim 5-25$ km s$^{-1}$. This weak rotation signal is similar for all three halo samples, and varies little with Galactocentric radius out to 50 kpc. In addition, there is tentative evidence that the rotation signal correlates with metallicity, whereby metal-richer BHB and K giant stars exhibit slightly stronger prograde rotation.
\item The state-of-the-art Auriga simulations are used to compare our results with the expectations from the $\Lambda$CDM model. The simulated stellar haloes tend to have a net prograde rotation with $0 \lesssim V_{\phi}/\mathrm{km \, s^{-1}} \lesssim 120$. However, when we compare with ``old'' ($T_{\rm form} > 10$ Gyr) halo stars in the simulations, which are more akin to the old halo tracers like BHBs and RRL, the prograde signal is weaker and typically $V_{\phi} \lesssim 80$ km s$^{-1}$, in good agreement with the observations. Metal-rich(er) halo stars in the simulations are biased towards stronger prograde rotation than metal-poor(er) halo stars. It is likely that this correlation is, in part, due to contamination by disc stars and/or halo stars formed \textit{in situ}, which are more (kinematically) akin to a disc component. However, the rotation signal of the old halo stars, which are likely dominated by accreted stars, shows only weak, if any, dependence on metallicity. Again, this is in line with the observations.
\item The weak prograde rotation of the Milky Way halo is in agreement with the simulations, but is still relatively low compared to the full Auriga suite of 30 haloes ($\sim$ 20th percentile). It is also worth remembering that the net spin of the halo disappears entirely if the circular velocity at the position of the Sun is set to the ``standard'' 220 km s$^{-1}$. Furthermore, the systematic uncertainty in the SDSS-\textit{Gaia} proper motions of $\sim 0.1$ mas/yr means that rotation signals $\lesssim 10$ km s$^{-1}$ are also consistent with zero. This mild, or zero, halo rotation suggests that above $z=4$ kpc, the Milky Way has (a) a minor, or non-existent, \textit{in situ} halo component and, (b) undergone a relatively quiescent merger history.
\item Finally, we use the simulated stellar haloes to quantify the systematic uncertainties in our modeling procedure. Using mock observations, we find that the rotation signals can typically be recovered to $< 10$ km s$^{-1}$. However, we do find that substructures in the halo can significantly bias the results. Indeed, in regions where the Sgr stream is prominent (e.g. $20 < r/\mathrm{kpc} < 30$), our measured rotation signal is increased by the Sgr members.
\end{itemize}
\section*{Acknowledgements}
We thank Carlos Frenk and Volker Springel for providing comments on an earlier version of this manuscript. We also thank the anonymous referee for providing valuable comments that improved the quality of our paper.
A.D. is supported by a Royal Society University Research Fellowship.
The research leading to these results has received funding from the
European Research Council under the European Union's Seventh Framework
Programme (FP/2007-2013) / ERC Grant Agreement n. 308024. V.B. and S.K. acknowledge financial support from the ERC. A.D. and S.K. also acknowledge the support from the STFC (grants ST/L00075X/1 and ST/N004493/1). RG acknowledges support by the DFG Research Centre SFB-881 ``The Milky Way System'' through project A1.
This work has made use of data from the European Space Agency (ESA)
mission {\it Gaia} (\url{http://www.cosmos.esa.int/gaia}), processed
by the {\it Gaia} Data Processing and Analysis Consortium (DPAC,
{\small
\url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}}). Funding
for the DPAC has been provided by national institutions, in particular
the institutions participating in the {\it Gaia} Multilateral
Agreement.
This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure.
\bibliographystyle{mnras}
The mantle of Earth extends from the bottom of the crust to the top of
the iron core, some 3000~km below. Mantle rock, composed of silicate
minerals, behaves as an elastic solid on the time scale of seismic
waves, but over geological time the mantle convects at high Rayleigh
number as a creeping, viscous fluid~\citep{Schubert:book}. This
convective flow is the hidden engine for plate tectonics, giving rise
to plate boundaries such as mid-ocean ridges (divergent) and
subduction zones (convergent). Plate boundaries host the vast majority
of terrestrial volcanism; their volcanoes are fed by magma extracted
from below, where partial melting of mantle rock occurs (typically at
depths less than $\sim$100~km).
Partially molten regions of the mantle are of interest to
geoscientists for their role in tectonic volcanism and in the chemical
evolution of the Earth. The depth of these regions makes them
inaccessible for direct observation, and hence studies of their
dynamics have typically involved numerical simulation. Simulations are
often based on a system of partial differential equations derived
by~\citet{McKenzie:1984} and since elaborated and generalised by other
authors~\citep[e.g.,][]{Bercovici:2003, Simpson:2010a,
Simpson:2010b}. The equations describe two interpenetrating fluids
of different density and vastly different viscosity: solid and molten
rock (i.e.,~mantle and magma). The grains of the rock form a viscously
deformable, permeable matrix through which magma can percolate. This
is captured in the theory by a coupling of the Stokes equations for
the mantle with Darcy's law for the magma. Although each phase is
independently incompressible, the two-phase mixture allows for
divergence or convergence of the solid matrix, locally increasing or
decreasing the volume fraction of magma. This process is modulated by
a compaction viscosity, and gives rise to much of the interesting
behaviour associated with coupled magma/mantle
dynamics~\cite{Spiegelman:1993a, Spiegelman:1993b, Katz:2006,
Takei:2013}.
The governing equations have been solved in a variety of contexts,
from idealised studies of localisation and wave behaviour
\cite[e.g.][]{Aharonov:1995, Barcilon:1986} to applied studies of
plate-tectonic boundaries, especially mid-ocean ridges
\cite[e.g.][]{Ghods:2000, Katz:2008}. These studies have employed
finite volume techniques on regular, Cartesian grids
\cite[e.g.][]{Katz:2007}. Unlike mid-ocean ridges, subduction zones
have a plate geometry that is awkward for Cartesian grids; it is,
however, conveniently meshed with triangles or tetrahedra, which can
also focus resolution where it is most needed
\cite{Keken:2008}. Finite element simulations of pure mantle
convection in subduction zones are common in the literature, but it
remains a challenge to model two-phase, three-dimensional,
magma/mantle dynamics of subduction, even though this is an area of
active research~\citep{Keller:2013, Wilson:2013b}. Such models require
highly refined computational meshes, resulting in very large systems
of algebraic equations. To solve these systems efficiently, iterative
solvers together with effective preconditioning techniques are
necessary. Although the governing equations are similar to those of
Stokes flow, there has been no prior analysis of their discretisation
and numerical solution by the finite element method.
The most computationally expensive step in modelling the partially
molten mantle is typically the solution of a Stokes-like problem for
the velocity of the solid matrix. To address this bottleneck in the
context of large, unstructured grids for finite element
discretisations, we describe, analyse, and test a preconditioner for
the algebraic system resulting from the simplified McKenzie equations.
The system of equations is similar to the Stokes problem, for which
the Silvester--Wathen preconditioner \cite{Silvester:1994} has been
proven to be optimal, i.e., the iteration count of the iterative
method is independent of the size of the algebraic system for a
variety of discretisations of the Stokes equations (see also
\cite{May:2008}). The key lies in finding a suitable approximation to
the Schur complement of the block matrix resulting from the finite
element discretisation. We follow this approach to prove and
demonstrate numerically the optimality of the preconditioner for
coupled magma/mantle dynamics problems. The analysis and numerical
examples highlight some issues specific to magma/mantle dynamics
simulations regarding the impact of model parameters on the solver
performance. To the best of our knowledge, together with the work of
\citet{Katz:2013}, we present the first three-dimensional computations
of the (simplified) McKenzie equations, and the first analysis of a
preconditioner for this problem.
In this work we incorporate analysis, subduction zone-inspired
examples, and software implementation. The analysis is confirmed by
numerical examples that range from illustrative cases to large,
representative models of subduction zones solved using parallel
computers. The computer code to reproduce all presented examples is
parallelised and is freely available under the Lesser GNU Public
License (LGPL) as part of the supporting
material~\citep{supporting-material}. The proposed preconditioning
strategies have been implemented using libraries from the FEniCS
Project~\citep{alnaes:2013,logg:2010,fenics:book,oelgaard:2010} and
PETSc~\citep{petsc-efficient,petsc-user-ref,petsc-web-page}. The
FEniCS framework provides a high degree of mathematical abstraction,
which permits the proposed methods to be implemented quickly,
compactly and efficiently, with a close correspondence between the
mathematical presentation in this paper and the computer
implementation in the supporting material.
The outline of this article is as follows. In
Section~\ref{s:magmaDynamics} we introduce the simplified McKenzie
equations for coupled magma/mantle dynamics, followed by a finite
element method for these equations in Section~\ref{s:fem}. A
preconditioner analysis is conducted in Section~\ref{s:precond_stokes}
and its construction is discussed in
Section~\ref{s:pcconstruction}. Through numerical simulations in
Section~\ref{s:numsim} we verify the analysis; conclusions are drawn
in Section~\ref{s:conclusions}.
\section{Partially molten magma dynamics}
\label{s:magmaDynamics}
Let $\Omega \subset \mathbb{R}^d$ be a bounded domain with $2 \le d
\le 3$. The \citet{McKenzie:1984} model on $\Omega$ reads
\begin{align}
\label{eq:pf_mckenzie_a}
\partial_{t} \phi - \nabla \cdot \del{(1 - \phi)\mathbf{u}} &= 0,
\\
\label{eq:pf_mckenzie_b}
-\nabla \cdot 2\eta \boldsymbol{\epsilon}(\mathbf{u}) + \nabla p_{\rm f}
&= \nabla \del{\del{\zeta - \tfrac{2}{3} \eta} \nabla \cdot \mathbf{u}}
-\bar{\rho} g \mathbf{e}_{3},
\\
\label{eq:pf_mckenzie_c}
\nabla \cdot \mathbf{u}
&= \nabla \cdot \frac{\kappa}{\mu} \nabla \del{p_{\rm f}
+ \rho_{\rm f} g z},
\end{align}
where $\phi$ is porosity, $\mathbf{u}$ is the matrix velocity,
$\boldsymbol{\epsilon}(\mathbf{u}) = (\nabla\mathbf{u} +
(\nabla\mathbf{u})^T)/2$ is the strain rate tensor, $\kappa$ is
permeability, $\mu$ is the melt viscosity, $\eta$ and $\zeta$ are the
shear and bulk viscosity of the matrix, respectively, $g$ is the
constant acceleration due to gravity, $\mathbf{e}_{3}$ is the unit
vector in the $z$-direction (i.e., $\mathbf{e}_{3} = (0, 1)$ when $d =
2$ and $\mathbf{e}_{3} = (0, 0, 1)$ when $d = 3$), $p_{\rm f}$ is the
melt pressure, $\rho_{\rm f}$ and $\rho_{\rm s}$ are the constant melt
and matrix densities, respectively, and $\bar{\rho} = \rho_{\rm f}
\phi + \rho_{\rm s}(1 - \phi)$ is the phase-averaged density. Here we
assume that $\mu$, $\eta$ and $\zeta$ are constants and that $\kappa$
is a function of $\phi$. The magma (fluid) velocity $\mathbf{u}_{\rm
f}$ can be obtained from $\mathbf{u}$, $\phi$ and $p_{\rm f}$
through:
\begin{equation}
\label{eq:pf_magma_vel}
\mathbf{u}_{\rm f} = \mathbf{u}
- \frac{\kappa}{\phi\mu} \nabla \del{p_{\rm f} + \rho_{\rm f} g z}.
\end{equation}
It will be useful to decompose the melt pressure as $p_{\rm f} = p -
\rho_{\rm s} g z$, where $p$ is the dynamic pressure and $\rho_{\rm s}
g z$ the `lithostatic' pressure. Equations~\eqref{eq:pf_mckenzie_b},
\eqref{eq:pf_mckenzie_c} and~\eqref{eq:pf_magma_vel} may then be
written as
\begin{align}
\label{eq:p_mckenzie_b}
-\nabla \cdot 2\eta \boldsymbol{\epsilon}(\mathbf{u}) + \nabla p
&= \nabla \left(\left(\zeta - \tfrac{2}{3} \eta \right)
\nabla \cdot \mathbf{u} \right) + g\Delta \rho \phi \mathbf{e}_{3},
\\
\label{eq:p_mckenzie_c}
\nabla \cdot \mathbf{u}
&= \nabla \cdot \frac{\kappa}{\mu}
\nabla \left(p - \Delta \rho g z\right),
\\
\label{eq:p_magma_vel}
\mathbf{u}_{\rm f} &= \mathbf{u} - \frac{\kappa}{\phi\mu}
\nabla \left(p - \Delta \rho g z \right),
\end{align}
where $\Delta \rho = \rho_{\rm s} - \rho_{\rm f}$. Constitutive
relations are given by
\begin{equation}
\kappa = \kappa_{0} \del{\frac{\phi}{\phi_{0}}}^{n}, \quad
\zeta = r_{\zeta} \eta,
\end{equation}
where $\phi_{0}$ is the characteristic porosity, $\kappa_{0}$ the
characteristic permeability, $n \ge 1$ is a dimensionless constant and
$r_{\zeta}$ is the ratio between matrix bulk and shear viscosity. We
non-dimensionalise \eqref{eq:pf_mckenzie_a}, \eqref{eq:p_mckenzie_b},
\eqref{eq:p_mckenzie_c} and \eqref{eq:p_magma_vel} using
\begin{equation}
\mathbf{u} = u_{0} \mathbf{u}^{\prime}, \
\mathbf{x} = H \mathbf{x}^{\prime}, \
t = (H/u_0) t^{\prime}, \
\kappa = \kappa_{0} \kappa^{\prime}, \
p = \Delta \rho gH p^{\prime},
\end{equation}
where primed variables are non-dimensional, $u_{0}$ is the velocity
scaling, given by
\begin{equation}
u_{0} = \frac{\Delta \rho g H^{2}}{2\eta},
\end{equation}
and $H$ is a length scale. Dropping the prime notation, the McKenzie
equations \eqref{eq:pf_mckenzie_a}, \eqref{eq:p_mckenzie_b} and
\eqref{eq:p_mckenzie_c} in non-dimensional form are given by
\begin{align}
\label{eq:mckenzie_nd_a}
\partial_{t}\phi - \nabla \cdot \del{(1 - \phi)\mathbf{u}} &= 0,
\\
\label{eq:mckenzie_nd_b}
-\nabla \cdot \boldsymbol{\epsilon}(\mathbf{u}) + \nabla p
&= \nabla\del{\tfrac{1}{2}\del{r_{\zeta}-\tfrac{2}{3}}
\nabla \cdot \mathbf{u}} + \phi \mathbf{e}_{3},
\\
\label{eq:mckenzie_nd_c}
\nabla\cdot\mathbf{u}
&= \frac{2R^{2}}{r_{\zeta} + 4/3}
\nabla \cdot \del{\del{\frac{\phi}{\phi_{0}}}^{n}
\del{\nabla p - \mathbf{e}_{3}}},
\end{align}
where $R = \delta/H$ with $\delta$ the compaction length defined as
\begin{equation}
\delta = \sqrt{\frac{(r_{\zeta} + 4/3)\kappa_{0} \eta}{\mu}},
\end{equation}
and \eqref{eq:p_magma_vel} becomes
\begin{equation}
\mathbf{u}_{\rm f} = \mathbf{u} - \frac{2R^{2}}{r_{\zeta} + 4/3}\frac{1}{\phi}
\del{\frac{\phi}{\phi_{0}}}^{n}
\del{\nabla p - \mathbf{e}_{3}}.
\end{equation}
When solving the McKenzie model numerically for time-dependent
simulations, \eqref{eq:mckenzie_nd_a} is usually decoupled from
\eqref{eq:mckenzie_nd_b} and \eqref{eq:mckenzie_nd_c}. Porosity is
updated with \eqref{eq:mckenzie_nd_a} after which the velocity and
pressure are determined by solving \eqref{eq:mckenzie_nd_b} and
\eqref{eq:mckenzie_nd_c}; iteration can be used to better capture the
coupling. The most expensive part of this procedure is solving
\eqref{eq:mckenzie_nd_b} and~\eqref{eq:mckenzie_nd_c}. In this work we
study an optimal solver for equations \eqref{eq:mckenzie_nd_b}
and~\eqref{eq:mckenzie_nd_c} for a given porosity field. We remark
that an alternative to decoupling \eqref{eq:mckenzie_nd_a} from
\eqref{eq:mckenzie_nd_b} and \eqref{eq:mckenzie_nd_c} is to use a
composable linear solver for the full system
\eqref{eq:mckenzie_nd_a}-\eqref{eq:mckenzie_nd_c}, see
\citet{brown:2012}. In this case, our optimal solver may be used as a
preconditioner for part of this composable linear solver.
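This decoupled update can be summarised by the following schematic Python fragment. It is a structural sketch only: \texttt{porosity\_step} and \texttt{stokes\_solve} are hypothetical user-supplied callables standing in for discretisations of \eqref{eq:mckenzie_nd_a} and of \eqref{eq:mckenzie_nd_b}--\eqref{eq:mckenzie_nd_c}, respectively.
\begin{verbatim}
def advance(phi, u, p, dt, n_steps, porosity_step, stokes_solve,
            n_picard=1):
    """Decoupled time stepping: a porosity update followed by a
    velocity/pressure solve; optional Picard iterations tighten the
    coupling. The two callables are placeholders for actual solvers."""
    for _ in range(n_steps):
        phi = porosity_step(phi, u, dt)   # porosity evolution equation
        for _ in range(n_picard):         # optional fixed-point coupling
            u, p = stokes_solve(phi)      # compaction Stokes-like system
    return phi, u, p
\end{verbatim}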
For the rest of this paper we replace $(r_{\zeta} - 2/3)/2$ by a
constant $\alpha$. Furthermore, we replace
\begin{equation}
\frac{R^{2}}{\alpha + 1}\del{\frac{\phi}{\phi_{0}}}^{n}
\end{equation}
by a spatially variable function $k(\mathbf{x})$ (independent of
$\alpha$ and $\phi$) and we obtain the problem
\begin{subequations}
\label{eq:magma}
\begin{alignat}{1}
\label{eq:magma_a}
-\nabla \cdot \boldsymbol{\epsilon}(\mathbf{u}) + \nabla p
&= \nabla(\alpha\nabla \cdot \mathbf{u}) + \phi \mathbf{e}_{3},
\\
\label{eq:magma_b}
\nabla \cdot\mathbf{u} &= \nabla \cdot (k(\nabla p - \mathbf{e}_{3})).
\end{alignat}
\end{subequations}
For coupled magma/mantle dynamics problems, $\alpha$ may range from
$-1/3$ to approximately $1000$. For this reason we will assume in this
paper that $-1/3 \le \alpha \le 1000$. We also bound $k$: $0 \le k_{*}
\le k(\mathbf{x}) \le k^{*}$ for all $\mathbf{x} \in \Omega$. In the
infinite-dimensional setting, we note that if $k(\mathbf{x}) = 0$
everywhere in $\Omega$, the compaction stress $\nabla (\alpha \nabla
\cdot \mathbf{u})$ vanishes as the velocity field is divergence free
and~\eqref{eq:magma} reduces to the Stokes equations. This will not
generally be the case for a finite element formulation, as will be
discussed in the following section.
On the boundary of the domain, $\partial \Omega$, we impose
\begin{align}
\label{eq:bc}
\mathbf{u} &= \mathbf{g},
\\
\qquad - k(\nabla p - \mathbf{e}_{3}) \cdot \mathbf{n} &= 0,
\end{align}
where $\mathbf{g} : \partial \Omega \rightarrow \mathbb{R}^d$ is given
boundary data satisfying the compatibility condition
\begin{equation}
0 = \int_{\partial \Omega} \mathbf{g} \cdot \mathbf{n} \dif s.
\end{equation}
\section{Finite element formulation}
\label{s:fem}
In this section we assume, without loss of generality, homogeneous
boundary conditions on~$\mathbf{u}$.
Let $\mathcal{T}_h$ be a triangulation of $\Omega$ with associated
finite element spaces ${\bf X}_{h} \subset
\left(H_{0}^{1}(\Omega)\right)^{d}$ and $M_h \subset H^1(\Omega) \cap
L_{0}^{2}(\Omega)$. The finite element weak formulation for
\eqref{eq:magma} and \eqref{eq:bc} is given by: find $\mathbf{u}_{h},
p_{h} \in \mathbf{X}_{h} \times M_{h}$ such that
\begin{equation}
\label{eq:fem_weak}
\mathcal{B}(\mathbf{u}_{h}; p_{h}, \mathbf{v}; q)
= \int_{\Omega} \phi \mathbf{e}_{3} \cdot \mathbf{v} \dif x
- \int_{\Omega} k \mathbf{e}_{3} \cdot \nabla q \dif x
\qquad \forall \mathbf{v}, q \in {\bf X}_{h} \times M_{h},
\end{equation}
where
\begin{equation}
\label{eq:Bform}
\mathcal{B}(\mathbf{u}; p, \mathbf{v}; q) = a(\mathbf{u}, \mathbf{v})
+ b(p,\mathbf{v}) + b(q,\mathbf{u}) - c(p,q),
\end{equation}
and
\begin{equation}
\label{eq:bilinear_parts}
\begin{split}
a(\mathbf{u}, \mathbf{v})
&= \int_{\Omega}
\boldsymbol{\epsilon}(\mathbf{u}) : \boldsymbol{\epsilon}(\mathbf{v})
+ \alpha(\nabla \cdot \mathbf{u}) (\nabla \cdot \mathbf{v}) \dif x,
\\
b(p, \mathbf{v}) &= - \int_{\Omega} p \nabla \cdot \mathbf{v} \dif x,
\\
c(p, q) &= \int_{\Omega} k \nabla p \cdot \nabla q \dif x.
\end{split}
\end{equation}
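For reference, the forms in \eqref{eq:fem_weak} and \eqref{eq:bilinear_parts} translate almost directly into UFL. The listing below is a minimal sketch in legacy DOLFIN/Python syntax; it is not taken from the supporting material, and the mesh, the constant values chosen for $\alpha$, $k$ and $\phi$, and the omission of boundary conditions and of the zero-mean pressure constraint are simplifying assumptions.
\begin{verbatim}
from dolfin import *

mesh = UnitSquareMesh(32, 32)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, P2 * P1)   # Taylor--Hood space X_h x M_h

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)

alpha = Constant(1.0)              # placeholder value of alpha
k = Constant(1.0)                  # placeholder permeability k(x)
phi = Constant(0.05)               # placeholder porosity field
e3 = Constant((0.0, 1.0))          # unit vector e_3 in two dimensions

# Bilinear forms a, b and c defined above
a = (inner(sym(grad(u)), sym(grad(v))) + alpha*div(u)*div(v))*dx
b_pv = -p*div(v)*dx
b_qu = -q*div(u)*dx
c = k*dot(grad(p), grad(q))*dx

B = a + b_pv + b_qu - c            # full form B(u, p; v, q)
L = dot(phi*e3, v)*dx - dot(k*e3, grad(q))*dx
\end{verbatim}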
\begin{proposition}
\label{prop:a-stable}
For $\alpha > -1$, there exists a $c_{\alpha} > 0$ such that
\begin{equation}
a(\mathbf{v}, \mathbf{v}) \ge c_{\alpha} \norm{\mathbf{v}}_{1}^{2}
\quad \forall \ \mathbf{v} \in \del{H^{1}_{0}(\Omega)}^{d}.
\end{equation}
\end{proposition}
\begin{proof}
The proposition follows from
%
\begin{equation}
\norm{\nabla \cdot \mathbf{v}}^2
\le \norm{\boldsymbol{\epsilon}(\mathbf{v})}^2
\le \norm{\nabla \mathbf{v}}^2
\quad \forall \ \mathbf{v} \in \del{H^{1}_{0}(\Omega)}^{d},
\label{eq:div-grad-norm-inequality}
\end{equation}
(see Ref.~\citep[Eq.~(3.4)]{Grinevich:2009}) and the application of
Korn's inequality.
\end{proof}
We will consider finite elements that are inf-sup
stable~\citep{Brezzi:book} in the degenerate limit of $k = 0$, i.e.,
$a(\mathbf{u}, \mathbf{v})$ is coercive (see
Proposition~\ref{prop:a-stable}), $c(p, p) \ge 0 \ \forall p \in
M_{h}$ and for which there exists a constant $c_{1} > 0$ independent
of $h$ such that
\begin{equation}
\label{eq:lbb}
\max_{\mathbf{v}_{h} \in \mathbf{X}_{h}}
\frac{b(q_{h}, \mathbf{v}_{h})}{\norm{\nabla \mathbf{v}_{h}}}
\ge
c_{1} \norm{q_h} \qquad \forall q_{h} \in M_{h}.
\end{equation}
In particular, we will use Taylor--Hood ($P^{2}$--$P^{1}$) finite
elements on simplices. We note that while in the infinite-dimensional
setting the Stokes equations are recovered from \eqref{eq:magma} when
$k = 0$, this is not generally the case for the discrete weak
formulation in~\eqref{eq:fem_weak} when~$\alpha \ne 0$. Obtaining the
Stokes limit in the finite element setting when $\alpha \ne 0$
requires the non-trivial property that the divergence of functions in
${\bf X}_h$ lie in the pressure space~$M_{h}$. This is not the case
for Taylor--Hood finite elements.
The discrete system \eqref{eq:fem_weak} can be written in block matrix
form as
\begin{equation}
\label{eq:matrixForm}
\begin{bmatrix}
A & B^T
\\
B & -C_{k}
\end{bmatrix}
\begin{bmatrix}
u
\\
p
\end{bmatrix}
=
\begin{bmatrix}
f
\\
g
\end{bmatrix},
\end{equation}
where $u \in \mathbb{R}^{n_{u}}$ and $p \in N^{n_{p}} = \{q\in
\mathbb{R}^{n_{p}}| q\ne 1\}$ are, respectively, the vectors of the
discrete velocity and pressure variables with respect to appropriate
bases for $\mathbf{X}_{h}$ and~$M_{h}$. The space $N^{n_{p}}$ satisfies
the zero mean pressure condition.
For later convenience, we define the negative of the `pressure'
Schur complement~$S$:
\begin{equation}
S = B A^{-1} B^{T} + C_{k},
\label{eq:pressure-schur}
\end{equation}
and the scalar pressure mass matrix $Q$ such that
\begin{equation}
\norm{q_h}^2 = \langle Qq, q \rangle,
\label{eq:pressure-mass}
\end{equation}
for $q_h \in M_h$ and where $q \in \mathbb{R}^{n_{p}}$ is the vector
of the coefficients associated with the pressure basis and
$\langle \cdot, \cdot \rangle$ denotes the standard Euclidean scalar
product.
The differences between the matrix formulation of the magma/mantle
equations~\eqref{eq:magma} and the Stokes equations lie in the
matrices $A$ and $C_{k}$. In the case of the magma/mantle dynamics,
$A$ includes the discretisation of compaction stresses: a `grad-div'
term weighted by the factor~$\alpha$. Such `grad-div' terms are known
to be problematic in the context of multigrid methods as the modes
associated with lowest eigenvalues are not well represented on a
coarse grid~\citep{arnold:1997}. There have been a number of
investigations into this issue for $H({\rm div})$ finite element
problems, e.g.~\citep{Arnold:2000,Kolev:2012}. The second matrix
which differs from the Stokes discretisation is~$C_{k}$. For
sufficiently large $k$, this term provides Laplace-type pressure
stabilisation for elements that would otherwise be unstable for the
Stokes problem.
\section{Optimal block diagonal preconditioners}
\label{s:precond_stokes}
To model three-dimensional magma/mantle dynamics of subduction,
efficient iterative solvers together with preconditioning techniques
are needed to solve the resulting algebraic systems of equations. The
goal of this section is to introduce and prove optimality of a class
of block diagonal preconditioners for~\eqref{eq:matrixForm}.
To prove optimality of a block preconditioner for the McKenzie
problem, we first present a number of supporting results.
\begin{proposition}
\label{lem:coercivity_c} The bilinear form $c$
in~\eqref{eq:bilinear_parts} satisfies
%
\begin{equation}
c(q, q) \ge k_* \norm{\nabla q}^2 \qquad \forall q \in M_h.
\label{eq:c-coercive}
\end{equation}
\end{proposition}
\begin{proof}
This follows directly from
%
\begin{equation}
c(q, q) = \norm{k^{1/2} \nabla q}^2 \ge \norm{k_*^{1/2} \nabla q}^2.
\end{equation}
\end{proof}
\begin{lemma}
\label{lem:bounds_for_S_new}
For the matrices $A$, $B$ and $C_{k}$ given in
\eqref{eq:matrixForm}, the pressure Schur complement $S$ in
\eqref{eq:pressure-schur} and the pressure mass matrix $Q$ in
\eqref{eq:pressure-mass}, for an inf-sup stable formulation
satisfying~\eqref{eq:lbb}, the following bounds hold
%
\begin{equation}
\label{eq:upperLowerBound}
0 < c_q
\le \frac{\langle Sq, q\rangle}{\langle (Q + C_{k})q, q \rangle}
\le c^q, \quad \forall q \in N^{n_{p}},
\end{equation}
where $c^{q}$ is given by
\begin{equation}
\label{eq:upper_c}
c^{q} =
\begin{cases}
1/(1 - |\alpha|) & \mathrm{if}\ -1/3 \le \alpha < 0,\\
1 & \mathrm{if}\ \alpha \ge 0,
\end{cases}
\end{equation}
and $c_{q}$ by
\begin{equation}
\label{eq:lower_c}
c_{q} = \min \del{\frac{c_{1}^{2} + c_{P} k_{*}(1
+ |\alpha|)}{(1 + |\alpha|)(1 + c_{P} k_{*})}, \ 1},
\end{equation}
where $c_{1}$ is the inf-sup constant and $c_{P}$ the Poincar\'e
constant.
\end{lemma}
\begin{proof}
Since $A$ is symmetric and positive definite, and from the
definition of $S$
%
\begin{equation}
\begin{split}
\langle Sq, q \rangle &= \langle A^{-1} B^{T} q, B^{T} q \rangle
+ \langle C_{k} q, q \rangle
\\
&= \sup_{v \in \mathbb{R}^{n_{u}}}
\frac{\langle v, B^{T} q \rangle^{2}}{\langle A v, v \rangle}
+ \langle C_{k} q, q \rangle,
\end{split}
\end{equation}
%
for all $q \in N^{n_{p}}$. From
the definition of matrices $A, B, C_{k}$ and~$Q$ it then follows
that
%
\begin{equation}
\label{eq:Sqq}
\langle Sq, q \rangle
= \sup_{\mathbf{v}_{h} \in \mathbf{X}_{h}}
\frac{(q_{h}, \nabla \cdot \mathbf{v}_{h})^{2}}
{\norm{\boldsymbol{\epsilon}(\mathbf{v}_{h})}^{2}
+ \alpha \norm{\nabla\cdot \mathbf{v}_{h}}^{2}}
+ (k \nabla q_{h}, \nabla q_{h}).
\end{equation}
Using~\eqref{eq:div-grad-norm-inequality} and the Cauchy--Schwarz
inequality,
%
\begin{equation}
(q_{h}, \nabla \cdot \mathbf{v}_h)^{2}
\le \norm{q_{h}}^{2} \norm{\boldsymbol{\epsilon}(\mathbf{v}_{h})}^{2}.
\end{equation}
%
For $-1/3 \le \alpha < 0$,
%
\begin{equation}
\begin{split}
\norm{\boldsymbol{\epsilon}(\mathbf{v}_h)}^2
&= \frac{1}{1 + \alpha} \del{\norm{\boldsymbol{\epsilon}(\mathbf{v}_h)}^2
+ \alpha\norm{\boldsymbol{\epsilon}(\mathbf{v}_h)}^2}
\\
&\le \frac{1}{1 + \alpha}\del{\norm{\boldsymbol{\epsilon}(\mathbf{v}_h)}^2
+ \alpha\norm{\nabla \cdot \mathbf{v}_h}^2},
\end{split}
\end{equation}
%
and for $\alpha \ge 0$,
\begin{equation}
\norm{\boldsymbol{\epsilon}(\mathbf{v}_{h})}^{2}
\le \norm{\boldsymbol{\epsilon}(\mathbf{v}_{h})}^{2}
+ \alpha\norm{\nabla \cdot \mathbf{v}_{h}}^{2}.
\end{equation}
%
Hence,
%
\begin{equation}
\label{eq:uboundqdivu}
(q_{h}, \nabla \cdot \mathbf{v}_{h})^{2}
\le c^q \norm{q_{h}}^{2} \left(\norm{\boldsymbol{\epsilon}(\mathbf{v}_h)}^2
+ \alpha\norm{\nabla \cdot \mathbf{v}_h}^2\right),
\end{equation}
%
where
%
\begin{equation}
c^q =
\begin{cases}
1/(1 - |\alpha|) & \mathrm{if} \ -1/3 \le \alpha < 0,\\
1 & \mathrm{if} \ \alpha \ge 0.
\end{cases}
\end{equation}
%
Combining \eqref{eq:Sqq} and~\eqref{eq:uboundqdivu},
%
\begin{equation}
\label{eq:upperbound_Sqq}
\langle Sq, q\rangle \le c^q \norm{q_h}^2 + (k \nabla q_h, \nabla q_h)
= c^q \langle Qq, q\rangle + \langle C_{k}q, q \rangle
\le c^q \langle (Q + C_{k})q, q\rangle.
\end{equation}
%
This proves the upper bound in~\eqref{eq:upperLowerBound}.
Next we determine the lower bound. Using
\eqref{eq:div-grad-norm-inequality} and the inf-sup
condition~\eqref{eq:lbb},
\begin{equation}
\label{eq:Approx_ge_zero}
\begin{split}
\max_{\mathbf{v}_{h} \in \mathbf{X}_{h}} \frac{(q_h, \nabla
\cdot
\mathbf{v}_h)^2}{\norm{\boldsymbol{\epsilon}(\mathbf{v}_h)}^2
+ \alpha\norm{\nabla \cdot \mathbf{v}_h}^2} & \ge
\max_{\mathbf{v}_{h} \in \mathbf{X}_{h}} \frac{(q_h, \nabla
\cdot \mathbf{v}_h)^{2}}{(1 + |\alpha|) \norm{\nabla
\mathbf{v}_h}^{2}} \\ &\ge \frac{c_{1}^{2}}{1+|\alpha|}
\norm{q_{h}}^{2},
\end{split}
\end{equation}
%
which leads to
%
\begin{equation}
\label{eq:Sq1}
\langle Sq, q\rangle \ge \frac{c_{1}^{2}}{ 1 +|\alpha|}
\langle Qq, q \rangle
+ \langle C_{k}q, q \rangle.
\end{equation}
Using Proposition~\ref{lem:coercivity_c} and the Poincar\'e
inequality,
\begin{equation}
\label{eq:Cq1}
\begin{split}
\langle C_{k}q, q \rangle
&= (1 - \xi) c(q_h, q_h)
+ \xi \norm{k^{1/2} \nabla q_{h}}^{2}
\\
&\ge (1-\xi) c(q_h, q_h)
+ \xi c_{P} k_{*} \norm{q_h}^2
\\
&= (1-\xi) \langle C_{k}q, q \rangle
+ \xi c_{P} k_{*} \langle Qq, q\rangle,
\end{split}
\end{equation}
for any $\xi \in [0, 1]$. Combining~\eqref{eq:Sq1}
and~\eqref{eq:Cq1},
\begin{equation}
\langle Sq, q\rangle
\ge
\del{\frac{c_{1}^2}{1 + |\alpha|}
+ \xi c_{P} k_{*}} \langle Qq, q \rangle
+ (1 - \xi) \langle C_{k}q, q \rangle,
\end{equation}
and setting $\xi = (1 - c_{1}^{2}/(1 +
|\alpha|))/(1 + c_{P}k_{*})$ in the case that $c_{1}^{2}/(1 +
|\alpha|) \le 1$, and otherwise setting $\xi = 0$,
\begin{equation}
\label{eq:Sq2}
\langle Sq, q\rangle
\ge
\min \del{\frac{c_{1}^{2} + c_{P} k_{*}(1
+ |\alpha|)}{(1 + |\alpha|)(1 + c_{P} k_{*})}, \ 1}
\langle (Q + C_{k})q, q \rangle,
\end{equation}
from which $c_q$ is deduced.
\end{proof}
For the discretisation of the Stokes equations, it was shown that the
pressure mass-matrix is spectrally equivalent to the Schur
complement~\citep{Silvester:1994}. This is recovered from
Lemma~\ref{lem:bounds_for_S_new} when $k = 0$ everywhere and~$\alpha =
0$.
\begin{lemma}
\label{lem:bounds_for_S_Q}
For the matrices $A$, $B$ and $C_{k}$ in \eqref{eq:matrixForm},
$S$ in \eqref{eq:pressure-schur} and the pressure mass matrix $Q$ in
\eqref{eq:pressure-mass}, if the inf-sup condition in \eqref{eq:lbb}
is satisfied, then
%
\begin{equation}
\label{eq:upperBound_S_Q}
\frac{\langle B^T (Q + C_{k})^{-1} B v, v\rangle}{\langle Av, v \rangle}
\le c^q \quad \forall v \in \mathbb{R}^{n_{u}},
\end{equation}
%
where $c^q$ is the constant in~\eqref{eq:upper_c}.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:bounds_for_S_new}, symmetry of $A$ and positive
semi-definiteness of~$C_{k}$,
%
\begin{equation}
\frac{q^T B A^{-1} B^T q}{q^T \del{Q + C_{k}} q}
\le \frac{q^T \del{B A^{-1} B^T + C_{k}} q}{q^T \del{Q + C_{k}} q}
\le c^q \quad \forall q \in N^{n_{p}}.
\end{equation}
%
Inserting $q \gets (Q + C_{k})^{1/2} q$,
%
\begin{equation}
\frac{q^T (Q + C_{k})^{-1/2} B A^{-1} B^T (Q + C_{k})^{-1/2} q}{q^T q}
\le c^q \quad \forall q \in N^{n_{p}}.
\end{equation}
Defining $H = (Q + C_{k})^{-1/2} B A^{-1} B^T(Q +
C_{k})^{-1/2}$ and denoting the maximum eigenvalue of $H$ by
$\lambda_{\max}$ and associated eigenvector $x$, since $H$ is
symmetric it follows that $\lambda_{\rm max} \ge v^{T} H v/(v^{T} v)
\ \forall v \in \mathbb{R}^{n}$ and $\lambda_{\rm max} = x^{T} H
x/(x^{T} x)$. Hence, $\lambda_{\max} \le c^q$, and
\begin{equation}
(Q + C_{k})^{-1/2} B A^{-1} B^T(Q + C_{k})^{-1/2} x = \lambda_{\max} x,
\end{equation}
%
and pre-multiplying both sides by $ A^{-1/2} B^T(Q + C_{k})^{-1/2}$,
%
\begin{multline}
A^{-1/2} B^T(Q + C_{k})^{-1/2}(Q
+ C_{k})^{-1/2} B A^{-1/2} A^{-1/2} B^T (Q+C_{k})^{-1/2} x
\\
= \lambda_{\max} A^{-1/2} B^T (Q + C_{k})^{-1/2} x.
\end{multline}
%
Letting $v = A^{-1/2} B^T (Q + C_{k})^{-1/2} x$, the above becomes
%
\begin{equation}
A^{-1/2} B^T (Q + C_{k})^{-1} B A^{-1/2} v = \lambda_{\max} v,
\end{equation}
%
and it follows from $\lambda_{\max} \le c^q$ that
%
\begin{equation}
\frac{v^T A^{-1/2} B^T (Q + C_{k})^{-1} B A^{-1/2} v}{v^T v} \le c^q
\quad \forall v \in \mathbb{R}^{n_{u}},
\end{equation}
%
or, taking $v \leftarrow A^{-1/2} v$,
%
\begin{equation}
\frac{v^T B^T(Q + C_{k})^{-1} B v}{v^T A v} \le c^q
\quad \forall v \in \mathbb{R}^{n_{u}},
\end{equation}
%
and the Lemma follows.
\end{proof}
We now consider diagonal block preconditioners
for~\eqref{eq:matrixForm} of the form
\begin{equation}
\label{eq:precon_ideal}
\mathcal{P}
=
\begin{bmatrix}
P & 0 \\
0 & T
\end{bmatrix},\qquad
P\in\mathbb{R}^{n_u\times n_u},\quad
T\in\mathbb{R}^{n_p\times n_p}.
\end{equation}
We assume that $P$ and $T$ are symmetric and positive-definite, and
that they satisfy
\begin{equation}
\label{eq:boundsAPQT}
\delta_{AP} \le \frac{\langle Av, v\rangle}{\langle Pv, v\rangle}
\le \delta^{AP} \ \forall v\in\mathbb{R}^{n_u},
\quad
\delta_{QT} \le
\frac{\langle(Q + C_{k})q, q \rangle}{\langle Tq, q\rangle}
\le \delta^{QT} \ \forall q\in N^{n_p},
\end{equation}
where $\delta_{AP}$, $\delta^{AP}$, $\delta_{QT}$ and $\delta^{QT}$
are independent of $h$, but may depend on model parameters.
The discrete system in~\eqref{eq:matrixForm} is indefinite, and hence
has both positive and negative eigenvalues. The speed of convergence
of the MINRES Krylov method for the preconditioned system
\begin{equation}
\label{eq:precondsys}
\begin{bmatrix}
P & 0 \\
0 & T
\end{bmatrix}^{-1}
\begin{bmatrix}
A & B^T \\
B & -C_{k}
\end{bmatrix}
\begin{bmatrix}
u \\ p
\end{bmatrix}=
\begin{bmatrix}
P & 0 \\
0 & T
\end{bmatrix}^{-1}
\begin{bmatrix}
f \\ g
\end{bmatrix},
\end{equation}
depends on how tightly the positive and negative eigenvalues of the
generalised eigenvalue problem
\begin{equation}
\label{eq:geneigenvalue}
\begin{bmatrix}
A & B^T \\
B & -C_{k}
\end{bmatrix}
\begin{bmatrix}
v \\ q
\end{bmatrix}=
\lambda
\begin{bmatrix}
P & 0 \\
0 & T
\end{bmatrix}
\begin{bmatrix}
v \\ q
\end{bmatrix},
\end{equation}
are clustered~\citep[Section~6.2]{Elman:book}. Our aim now is to
develop bounds on the eigenvalues in~\eqref{eq:geneigenvalue} that are
independent of the mesh parameter~$h$.
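For small problems this clustering can be inspected directly. The following Python fragment is an illustrative check only; it assumes the blocks are available as dense NumPy arrays, which is not how the large systems of Section~\ref{s:numsim} are treated.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def preconditioned_spectrum(A, B, C, P, T):
    """Eigenvalues of K x = lambda M x with K = [[A, B^T], [B, -C]]
    and M = blockdiag(P, T); K is symmetric and M is symmetric
    positive definite, so the generalised symmetric solver applies."""
    n_u, n_p = A.shape[0], T.shape[0]
    K = np.block([[A, B.T], [B, -C]])
    M = np.block([[P, np.zeros((n_u, n_p))],
                  [np.zeros((n_p, n_u)), T]])
    return eigh(K, M, eigvals_only=True)
\end{verbatim}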
\begin{theorem}
\label{lem:boundsPu}
Let $c_q$ and $c^q$ be the constants in
Lemma~\ref{lem:bounds_for_S_new}, and the matrices $A$, $B$ and
$C_{k}$ be those given in \eqref{eq:matrixForm}, $S$ be the pressure
Schur complement in \eqref{eq:pressure-schur} and $Q$ the pressure
mass matrix in~\eqref{eq:pressure-mass}. If $P$ and $T$ satisfy
\eqref{eq:boundsAPQT}, all eigenvalues $\lambda < 0$ of
\eqref{eq:geneigenvalue} satisfy
%
\begin{equation}
\label{eq:negEigs_diag}
- c^q \delta^{QT}
\le
\lambda
\le
\tfrac{1}{2}\del{\delta_{AP} - \sqrt{\delta_{AP}^2
+ 4 c_q \delta_{QT} \delta_{AP}}},
\end{equation}
%
and eigenvalues $\lambda > 0$ of \eqref{eq:geneigenvalue} satisfy
%
\begin{equation}
\label{eq:posEigs_diag}
\delta_{AP}
\le
\lambda
\le \delta^{AP} + c^q \delta^{QT}.
\end{equation}
\end{theorem}
\begin{proof}
Lemmas~\ref{lem:bounds_for_S_new} and~\ref{lem:bounds_for_S_Q}
provide the bounds
\begin{equation}
c_q
\le \frac{\langle Sq, q\rangle}{\langle (Q + C_{k})q, q \rangle}
\le c^q,
\quad
\frac{\langle B^T (Q + C_{k})^{-1} B v, v\rangle}{\langle Av, v \rangle}
\le c^q,
\end{equation}
for all $q \in N^{n_p}$ and $\forall v \in
\mathbb{R}^{n_{u}}$. Using these bounds together with the bounds
given in \eqref{eq:boundsAPQT}, the result follows directly by
following the proof of Theorem 6.6 in~\citet{Elman:book}, or more
generally~\citet{Pestana:2013}.
\end{proof}
The main result of this section, Theorem~\ref{lem:boundsPu}, states
that the eigenvalues of the generalised eigenvalue
problem~\eqref{eq:geneigenvalue} are independent of the problem size.
From Theorem~\ref{lem:boundsPu} we see that
\begin{equation}
\lambda \in \sbr{-c^q \delta^{QT}, \ \frac{1}{2} \del{\delta_{AP}
- \sqrt{\delta_{AP}^{2} + 4c_q \delta_{QT} \delta_{AP}}}}
\bigcup \sbr{\delta_{AP}, \ \delta^{AP} + c^q \delta^{QT}},
\label{eq:lambda-interval}
\end{equation}
in which all constants are independent of the problem size
(independent of $h$). This tells us that if we can find a $P$ and $T$
that are spectrally equivalent to $A$ and $Q + C_{k}$,
respectively, then an iterative method with preconditioner
\eqref{eq:precon_ideal} will be optimal for~\eqref{eq:matrixForm}.
The interval in~\eqref{eq:lambda-interval} shows the dependence of the
eigenvalues on~$\alpha$ and~$k$. The upper and lower bounds on the
positive eigenvalues are well behaved, as is the lower bound on the
negative eigenvalues, for all $\alpha$ and $k$. It is only when $c_{q}
\ll 1$ that the upper bound on the negative eigenvalues tends to
zero. If this is the case, the rate of convergence of the iterative
method may slow. From \eqref{eq:lower_c}, we see that $c_{q} \ll 1$
only if $\alpha \gg 1$ and, at the same time,~$k_{*} \ll 1$.
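As a brief illustration of this regime, setting $k_{*} = 0$ in \eqref{eq:lower_c} gives $c_{q} = \min\del{c_{1}^{2}/(1 + |\alpha|), 1}$, and expanding the square root in \eqref{eq:negEigs_diag} for small $c_{q}$ shows that the upper bound on the negative eigenvalues behaves like
\begin{equation}
  \tfrac{1}{2}\del{\delta_{AP} - \sqrt{\delta_{AP}^{2}
  + 4 c_{q} \delta_{QT} \delta_{AP}}}
  \approx - c_{q} \delta_{QT}
  = -\frac{c_{1}^{2} \delta_{QT}}{1 + |\alpha|},
\end{equation}
which approaches zero as $\alpha$ grows.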
\section{Preconditioner construction}
\label{s:pcconstruction}
Implementation of the proposed preconditioner requires the provision
of symmetric, positive definite matrices $P$ and $T$ that
satisfy~\eqref{eq:boundsAPQT}. Obvious candidates are $P = A$ and $T
= Q + C_k$, with a direct solver used to compute the action of
$P^{-1}$ and~$T^{-1}$. We will use this for small problems in the
following section to study the performance of the block
preconditioning; the application of a direct solver is not practical,
however, when $P$ and $T$ are large, in which case we advocate the use
of multigrid approximations of the inverse.
To provide more general guidance, we first reproduce the following
Lemma from \citet[Lemma 6.2]{Elman:book}.
\begin{lemma}
\label{lem:rho_bounds}
If $u$ is the solution to the system $A u = f$ and
\begin{equation}
u_{i + 1} = (I - P^{-1} A) u_{i} + P^{-1} f,
\end{equation}
then if the iteration error satisfies $\langle A (u - u_{i + 1}), u
- u_{i + 1} \rangle \le \rho \langle A (u - u_{i}), u - u_{i}
\rangle$, with $\rho < 1$,
\begin{equation}
1 - \rho
\le
\frac{\langle Av, v\rangle}{\langle Pv, v\rangle}
\le
1 + \rho \quad \forall v.
\label{eq:rho-convergence}
\end{equation}
\end{lemma}
\begin{proof}
See \citet[proof of Lemma 6.2]{Elman:book}.
\end{proof}
Lemma~\ref{lem:rho_bounds} implies that a solver that is optimal for
$A u = f$ will satisfy~\eqref{eq:boundsAPQT}, and is therefore a
candidate for $P$, and likewise for~$T$. The obvious candidates for
$P$ and $T$ are multigrid preconditioners applied to $A$ and $Q +
C_k$, respectively. However, as we will show by example in
Section~\ref{s:numsim}, as $\alpha$ increases, and therefore the
compaction stresses (a `grad-div' term) become more important,
multigrid for $P$ becomes less effective as a preconditioner. More
effective treatment of the large $\alpha$ case is the subject of
ongoing investigations.
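In the finite element setting, the operators behind $P$ and $T$ are conveniently expressed in UFL. The listing below is a sketch in legacy DOLFIN/Python syntax, not an excerpt from the supporting material: it returns the bilinear form whose assembled matrix is the block-diagonal operator $\mathrm{diag}(A, Q + C_{k})$, to which multigrid can then be applied block-wise.
\begin{verbatim}
from dolfin import *

def preconditioner_form(W, alpha, k):
    """Bilinear form on the mixed Taylor--Hood space W whose assembled
    matrix is block diagonal: the velocity block is A and the pressure
    block is Q + C_k (mass matrix plus k-weighted Laplacian)."""
    (u, p) = TrialFunctions(W)
    (v, q) = TestFunctions(W)
    velocity_block = (inner(sym(grad(u)), sym(grad(v)))
                      + alpha*div(u)*div(v))*dx
    pressure_block = (p*q + k*dot(grad(p), grad(q)))*dx
    return velocity_block + pressure_block
\end{verbatim}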
\section{Numerical simulations}
\label{s:numsim}
In this section we verify the analysis results through numerical
examples. In all test cases we use $P^{2}$--$P^{1}$ Taylor--Hood
finite elements on simplices. The numerical examples deliberately
address points of practical interest such as spatial variations in the
parameter $k$, a wide range of values for $\alpha$ and large problem
sizes on unstructured grids of subduction zone-like geometries.
We consider two preconditioners. For the first, we take $P = A$ and $T
= Q + C_{k}$ in~\eqref{eq:precon_ideal} and apply a direct solver to
compute the action of the inverses. This preconditioner will be
referred to as the `LU' preconditioner. For the second, we use
$P^{-1} = A^{\rm AMG}$ and $T^{-1} =(Q + C_{k})^{\rm AMG}$, where we use
$(\cdot)^{\rm AMG}$ to denote the use of algebraic multigrid to
approximate the inverse of~$(\cdot)$. This preconditioner will be
referred to as the `AMG' preconditioner. The LU preconditioner is
introduced as a reference preconditioner to which the AMG
preconditioner can be compared. The LU preconditioner is not suitable
for large-scale problems. Note that we never explicitly construct the inverse of
$P$ or $T$; we only apply the action of the inverse.
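For concreteness, the following Python sketch (not the code used for the reported results) shows one way to realise the `LU' variant of the block-diagonal preconditioner with SciPy, applying the action of $\mathrm{diag}(P, T)^{-1}$ with $P = A$ and $T = Q + C_{k}$ through sparse LU solves inside MINRES; the matrix names are placeholders for the assembled finite element blocks.
\begin{verbatim}
# Schematic sketch of the 'LU' block-diagonal preconditioner using SciPy.
# A_block and T_block stand for the assembled blocks A and Q + C_k.
import numpy as np
import scipy.sparse.linalg as spla

def block_diag_preconditioner(A_block, T_block):
    """LinearOperator applying diag(A, T)^{-1} via sparse LU solves."""
    n_u, n_p = A_block.shape[0], T_block.shape[0]
    lu_A = spla.splu(A_block.tocsc())
    lu_T = spla.splu(T_block.tocsc())

    def apply(r):
        y = np.empty_like(r)
        y[:n_u] = lu_A.solve(r[:n_u])   # velocity block
        y[n_u:] = lu_T.solve(r[n_u:])   # pressure block
        return y

    return spla.LinearOperator((n_u + n_p, n_u + n_p), matvec=apply)

# Usage sketch (K: full saddle-point matrix, b: right-hand side):
#   M = block_diag_preconditioner(A_block, T_block)
#   x, info = spla.minres(K, b, M=M, tol=1e-8)
# (the tolerance keyword is 'rtol' in recent SciPy releases)
\end{verbatim}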
All tests use the MINRES method, and the solver is terminated once a
relative true residual of $10^{-8}$ is reached. For multigrid
approximations of $P^{-1}$, smoothed aggregation algebraic multigrid
is used via the library ML~\citep{gee:2006}. For multigrid
approximations of $T^{-1}$, classical algebraic multigrid is used via
the library BoomerAMG~\citep{henson:2002}. Unless otherwise stated,
we use multigrid V-cycles, with two applications of Chebyshev with
Jacobi smoothing on each level (pre and post) in the case of smoothed
aggregation, and symmetric Gauss--Seidel for the classical algebraic
multigrid. The computer code is developed using the finite element
library DOLFIN~\citep{logg:2010}, with block preconditioner support
from PETSc~\citep{brown:2012} to construct the preconditioners. The
computer code to reproduce all examples is freely available in the
supporting material~\citep{supporting-material}.
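As a rough indication of how such a block-diagonal solver might be requested through the PETSc options system (via petsc4py), a sketch is given below; the option names are standard PETSc ones, but their availability depends on how PETSc was configured (ML, hypre), and this is not necessarily the configuration used for the published runs.
\begin{verbatim}
# Sketch of a PETSc options configuration for a block-diagonal (additive
# fieldsplit) preconditioner with MINRES; illustrative only.
from petsc4py import PETSc

opts = PETSc.Options()
opts["ksp_type"] = "minres"
opts["ksp_rtol"] = 1e-8
opts["pc_type"] = "fieldsplit"
opts["pc_fieldsplit_type"] = "additive"       # block-diagonal preconditioner
opts["fieldsplit_0_ksp_type"] = "preonly"
opts["fieldsplit_0_pc_type"] = "ml"           # smoothed aggregation for P ~ A
opts["fieldsplit_1_ksp_type"] = "preonly"
opts["fieldsplit_1_pc_type"] = "hypre"        # BoomerAMG for T ~ Q + C_k
opts["fieldsplit_1_pc_hypre_type"] = "boomeramg"
\end{verbatim}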
\subsection{Verification of optimality}
\label{ss:tc1}
In this test case we verify optimality of the block preconditioned
MINRES scheme by observing the convergence of the solver for varying
$h$, $\alpha$, $k^{*}$ and $k_{*}$. We solve \eqref{eq:magma} and
\eqref{eq:bc} on the unit square domain $\Omega = (0, 1)^{2}$ using a
regular mesh of triangular cells. For the permeability, we consider
\begin{multline}
k =
\frac{k^{*} - k_{*}}{4\tanh(5)}
\left(\tanh(10 x - 5)
+ \tanh(10z - 5) \right.
\\
\left.
+ \frac{2(k^{*} - k_{*})
- 2 \tanh(5)(k_{*}
+ k^{*})}{k_{*} - k^{*}}
+ 2\right).
\end{multline}
We ignore body forces but add a source term $\mathbf{f}$ to the right
hand side of~\eqref{eq:magma_a}. The Dirichlet boundary condition
$\mathbf{g}$ and the source term $\mathbf{f}$ are constructed such
that the exact solution pressure $p$ and velocity $\mathbf{u}$ are:
\begin{align}
p &= -\cos(4 \pi x)\cos(2 \pi z),
\\
u_{x} &= k \partial_{x} p + \sin(\pi x)\sin(2\pi z) + 2,
\\
u_{z} &= k \partial_{z} p + \frac{1}{2}\cos(\pi x)\cos(2\pi z) + 2.
\end{align}
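A small Python sketch evaluating this prescribed permeability and exact solution is given below; it is purely illustrative (e.g., for interpolating the boundary data $\mathbf{g}$ or computing discretisation errors) and is not the code used to generate the results.
\begin{verbatim}
# Illustrative evaluation of the prescribed permeability and exact solution.
import numpy as np

def permeability(x, z, k_lo, k_hi):
    """k(x, z) as defined above; equals k_* at (0,0) and k^* at (1,1)."""
    t5 = np.tanh(5.0)
    const = (2.0 * (k_hi - k_lo) - 2.0 * t5 * (k_lo + k_hi)) / (k_lo - k_hi)
    return (k_hi - k_lo) / (4.0 * t5) * (np.tanh(10.0 * x - 5.0)
                                         + np.tanh(10.0 * z - 5.0)
                                         + const + 2.0)

def exact_solution(x, z, k_lo, k_hi):
    k = permeability(x, z, k_lo, k_hi)
    p = -np.cos(4 * np.pi * x) * np.cos(2 * np.pi * z)
    dpdx = 4 * np.pi * np.sin(4 * np.pi * x) * np.cos(2 * np.pi * z)
    dpdz = 2 * np.pi * np.cos(4 * np.pi * x) * np.sin(2 * np.pi * z)
    u_x = k * dpdx + np.sin(np.pi * x) * np.sin(2 * np.pi * z) + 2.0
    u_z = k * dpdz + 0.5 * np.cos(np.pi * x) * np.cos(2 * np.pi * z) + 2.0
    return p, u_x, u_z

# Sanity check: k spans [k_*, k^*] = [0.5, 1.5] over the unit square.
xx, zz = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
k = permeability(xx, zz, 0.5, 1.5)
print(k.min(), k.max())   # approximately 0.5 and 1.5
\end{verbatim}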
Table~\ref{tab:tc1_minres_mu} shows the number of iterations the
MINRES method required to converge using the LU and AMG
preconditioners with $k_{*} = 0.5$ and $k^{*} = 1.5$, when varying
$\alpha$ from $-1/3$ to $1000$. We clearly see that the LU
preconditioner is optimal (the iteration count is independent of the
problem size), as predicted by the analysis (see
Theorem~\ref{lem:boundsPu}). Using the AMG preconditioner, there is a
very slight dependence on the problem size. The results in
Table~\ref{tab:tc1_minres_mu} indicate that the LU preconditioner is
uniform with respect to~$\alpha$. Theorem~\ref{lem:boundsPu} indicates
a possible dependence on~$\alpha$ through the constant~$c_q$. However,
for $\alpha$ sufficiently small or sufficiently large, the dependence
of $c_{q}$ on $\alpha$ becomes negligible, and $\alpha$ has only a
small impact on the iteration count. The AMG preconditioner, on the
other hand, shows a strong dependence on~$\alpha$. The issue with the
`grad-div' for multigrid solvers was discussed in Section~\ref{s:fem},
and is manifest in Table~\ref{tab:tc1_minres_mu}. It has been
observed in tests that the effectiveness of a multigrid preconditioned
solver for the operator $A$ deteriorates with increasing~$\alpha$.
This is reflected in an increasing $\rho$ in~\eqref{eq:rho-convergence}
for increasing~$\alpha$.
\begin{table}
\caption{Number of iterations for the LU and AMG preconditioned
MINRES for the unit square test with different levels of mesh
refinement and for different values of~$\alpha$. The number of
degrees of freedom is denoted by~$N$. For the $\alpha = 1000$
case, four applications of a Chebyshev smoother, with one
symmetric Gauss--Seidel iteration for each application, were used.}
\begin{center}
\begin{tabular}{c|cc|cc|cc|cc|cc}
& \multicolumn{2}{c|}{$\alpha=-\tfrac{1}{3}$} & \multicolumn{2}{c|}{$\alpha=0$} & \multicolumn{2}{c|}{$\alpha=1$} & \multicolumn{2}{c|}{$\alpha=10$} & \multicolumn{2}{c}{$\alpha=1000$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG & LU & AMG$^{*}$ \\
\hline
9,539 & 9 & 29 & 9 & 30 & 9 & 35 & 8 & 67 & 7 & 202 \\
37,507 & 9 & 33 & 9 & 36 & 9 & 40 & 8 & 80 & 6 & 283 \\
148,739 & 8 & 39 & 8 & 40 & 9 & 47 & 7 & 96 & 6 & 366 \\
592,387 & 8 & 42 & 8 & 44 & 7 & 52 & 7 & 106 & 6 & 432
\end{tabular}
\label{tab:tc1_minres_mu}
\end{center}
\end{table}
Results for the case of large spatial variations in permeability $k$
are presented in Tables~\ref{tab:tc1_minres_k_a}
and~\ref{tab:tc1_minres_k_b} for the cases~$\alpha = 1$ and~$\alpha =
100$, respectively. A dependence of the iteration count on the
permeability is observed. The smaller $k^{*}$, the larger the
iteration counts for both the AMG and the LU preconditioners. We also
observe that for a given $k^{*}$ there is little influence of $k_{*}$
on the iteration count. Comparing the results in
Tables~\ref{tab:tc1_minres_k_a} and~\ref{tab:tc1_minres_k_b} we see
that the LU preconditioner shows no dependence on $\alpha$. For the AMG
preconditioner the iteration count increases as $\alpha$ increases
from 1 to 100.
\begin{table}
\caption{Number of iterations to reach a relative tolerance of
$10^{-8}$ using preconditioned MINRES for the unit square test
with varying levels of mesh refinement and varying $(k_{*},
k^{*})$ pairs for $\alpha=1$. The number of degrees of freedom is
denoted by~$N$.}
\begin{center}
\begin{tabular}{c|cc|cc|cc|cc}
$k^*=10^{-4}$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 10^{-8}$}
& \multicolumn{2}{c}{$k_{*} = 10^{-6}$}
& \multicolumn{2}{c}{$k_{*} = 5\cdot 10^{-5}$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG \\
\hline
9,539 & 32 & 88 & 32 & 88 & 32 & 88 & 32 & 80\\
37,507 & 35 & 108 & 35 & 108 & 35 & 108 & 35 & 97\\
148,739 & 38 & 130 & 37 & 130 & 38 & 127 & 33 & 111\\
592,387 & 36 & 143 & 36 & 143 & 35 & 135 & 33 & 122\\
\multicolumn{9}{c}{}\\
$k^*=1$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 0.1$}
& \multicolumn{2}{c}{$k_{*} = 0.5$}
& \multicolumn{2}{c}{$k_{*} = 0.9$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG\\
\hline
9,539 & 27 & 67 & 10 & 37 & 9 & 36 & 9 & 36\\
37,507 & 28 & 78 & 10 & 44 & 9 & 42 & 9 & 42\\
148,739 & 28 & 93 & 10 & 50 & 9 & 48 & 7 & 47\\
592,387 & 27 & 101 & 10 & 54 & 9 & 52 & 7 & 52\\
\multicolumn{9}{c}{}\\
$k^*=1000$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 1$}
& \multicolumn{2}{c}{$k_{*} = 10$}
& \multicolumn{2}{c}{$k_{*} = 100$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG\\
\hline
9,539 & 3 & 24 & 3 & 26 & 3 & 24 & 3 & 24\\
37,507 & 3 & 27 & 3 & 27 & 3 & 27 & 3 & 30\\
148,739 & 3 & 34 & 3 & 33 & 3 & 34 & 3 & 33\\
592,387 & 3 & 37 & 3 & 37 & 3 & 37 & 3 & 40\\
\multicolumn{9}{c}{}\\
$k^*=10^{8}$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 1$}
& \multicolumn{2}{c}{$k_{*} = 10^{3}$}
& \multicolumn{2}{c}{$k_{*} = 10^{6}$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG\\
\hline
9,539 & 1 & 15 & 1 & 15 & 1 & 15 & 1 & 15\\
37,507 & 2 & 18 & 2 & 18 & 2 & 18 & 2 & 18\\
148,739 & 2 & 21 & 2 & 21 & 2 & 21 & 2 & 21\\
592,387 & 2 & 21 & 2 & 21 & 2 & 21 & 2 & 21
\end{tabular}
\label{tab:tc1_minres_k_a}
\end{center}
\end{table}
\begin{table}
\caption{Number of iterations to reach a relative tolerance of
$10^{-8}$ using preconditioned MINRES for the unit square test
with varying levels of mesh refinement and varying $(k_{*},
k^{*})$ pairs for $\alpha=100$. The number of degrees of freedom is
denoted by~$N$.}
\begin{center}
\begin{tabular}{c|cc|cc|cc|cc}
$k^*=10^{-4}$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 10^{-8}$}
& \multicolumn{2}{c}{$k_{*} = 10^{-6}$}
& \multicolumn{2}{c}{$k_{*} = 5\cdot 10^{-5}$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG \\
\hline
9,539 & 67 & 1605 & 67 & 1598 & 66 & 1557 & 58 & 1385\\
37,507 & 75 & 1922 & 75 & 1922 & 71 & 1909 & 62 & 1730\\
148,739 & 76 & 2179 & 76 & 2177 & 72 & 2146 & 59 & 1972\\
592,387 & 73 & 2356 & 73 & 2356 & 68 & 2311 & 59 & 2156\\
\multicolumn{9}{c}{}\\
$k^*=1$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 0.1$}
& \multicolumn{2}{c}{$k_{*} = 0.5$}
& \multicolumn{2}{c}{$k_{*} = 0.9$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG\\
\hline
9,539 & 28 & 350 & 9 & 179 & 8 & 171 & 7 & 169\\
37,507 & 28 & 445 & 9 & 212 & 8 & 205 & 8 & 202\\
148,739 & 28 & 545 & 9 & 247 & 8 & 236 & 8 & 234\\
592,387 & 28 & 597 & 9 & 271 & 8 & 265 & 8 & 265\\
\multicolumn{9}{c}{}\\
$k^*=1000$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 1$}
& \multicolumn{2}{c}{$k_{*} = 10$}
& \multicolumn{2}{c}{$k_{*} = 100$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG\\
\hline
9,539 & 3 & 75 & 3 & 75 & 3 & 75 & 3 & 75\\
37,507 & 3 & 94 & 3 & 94 & 3 & 94 & 3 & 94\\
148,739 & 3 & 116 & 3 & 116 & 3 & 116 & 3 & 116\\
592,387 & 3 & 139 & 3 & 139 & 3 & 139 & 3 & 139\\
\multicolumn{9}{c}{}\\
$k^*=10^{8}$ & \multicolumn{2}{c|}{$k_{*} = 0$}
& \multicolumn{2}{c|}{$k_{*} = 1$}
& \multicolumn{2}{c}{$k_{*} = 10^{3}$}
& \multicolumn{2}{c}{$k_{*} = 10^{6}$}\\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG\\
\hline
9,539 & 1 & 11 & 1 & 11 & 1 & 11 & 1 & 11\\
37,507 & 1 & 13 & 1 & 13 & 1 & 13 & 1 & 13\\
148,739 & 1 & 20 & 1 & 20 & 1 & 20 & 1 & 20\\
592,387 & 1 & 23 & 1 & 23 & 1 & 23 & 1 & 23
\end{tabular}
\label{tab:tc1_minres_k_b}
\end{center}
\end{table}
\subsection{A magma dynamics problem in two dimensions}
\label{ss:tc2}
In this test case we solve \eqref{eq:magma} and \eqref{eq:bc} on a
domain $\Omega$, depicted in Figure~\ref{fig:2Dsubduction}, using
unstructured meshes with triangular cells. We take $L_{x}^{t} = 1.5$,
$L_{x}^{b} = 0.5$ and $L_{z} = 1$. We set the permeability as
$k = 0.9(1 + \tanh(-2r))$ with $r = \sqrt{x^{2} + z^{2}}$
and the porosity~$\phi = 0.01$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{wedge_geometry_2d}
\caption{Description of the wedge geometry for a two-dimensional
subduction zone.}
\label{fig:2Dsubduction}
\end{figure}
We consider two test cases for this geometry. The first test problem
we denote as the \emph{analytical corner flow} test problem and the
second as the \emph{traction-free} test problem. In both problems we
prescribe the following conditions: $\mathbf{u} =
\mathbf{u}_{\text{slab}} = (1, -1)/\sqrt{2}$ on $\Gamma_1$,
$\mathbf{u} = \mathbf{0}$ on $\Gamma_2$ and $-k\del{\nabla p -
\mathbf{e}_{3}} \cdot \mathbf{n} = 0$ on $\partial \Omega$.
\subsubsection{Analytic corner flow}
\label{ss:corner}
For the analytical corner flow problem we prescribe $\mathbf{u} =
\mathbf{u}_{\rm corner} = (u_{x}, u_{z})$ on $\Gamma_{3}$, which is
the analytic expression for
corner-flow~\citep[Section~4.8]{Batchelor:book}. The corner-flow
velocity components $u_{x}$ and $u_{z}$ are given by
\begin{equation}
u_{x} = \cos(\theta) u_{r} + \sin(\theta)u_{\theta},
\quad
u_{z} = -\sin(\theta) u_{r} + \cos(\theta)u_{\theta},
\end{equation}
where $\theta = -\arctan(\tilde{z}/x)$, $\tilde{z} = z - 1$ and
\begin{equation}
u_r = C \theta \sin(\theta) + D(\sin(\theta) + \theta\cos(\theta)),
\quad
u_{\theta} = C(\sin(\theta) - \theta\cos(\theta)) + D\theta\sin(\theta),
\end{equation}
with
\begin{equation}
C = \frac{\beta\sin(\beta)}{\beta^{2} - \sin^{2}(\beta)},
\quad
D = \frac{\beta\cos(\beta) - \sin(\beta)}{\beta^{2} - \sin^{2}(\beta)}.
\end{equation}
Here $\beta = \pi/4$ is the angle between $\Gamma_{1}$
and~$\Gamma_{2}$. In Figure~\ref{fig:2Dsimulation} we show the
computed streamlines of the magma and matrix velocity fields for this
problem.
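The corner-flow boundary data can be evaluated directly from these expressions; the Python sketch below is an illustrative implementation (not the code used in our computations) for boundary points with $x > 0$.
\begin{verbatim}
# Illustrative evaluation of the analytic corner-flow velocity (x > 0).
import numpy as np

def corner_flow_velocity(x, z, beta=np.pi / 4.0):
    C = beta * np.sin(beta) / (beta**2 - np.sin(beta)**2)
    D = (beta * np.cos(beta) - np.sin(beta)) / (beta**2 - np.sin(beta)**2)
    theta = -np.arctan((z - 1.0) / x)
    u_r = C * theta * np.sin(theta) + D * (np.sin(theta)
                                           + theta * np.cos(theta))
    u_t = (C * (np.sin(theta) - theta * np.cos(theta))
           + D * theta * np.sin(theta))
    u_x = np.cos(theta) * u_r + np.sin(theta) * u_t
    u_z = -np.sin(theta) * u_r + np.cos(theta) * u_t
    return u_x, u_z

# Example: evaluate the Dirichlet data at a few sample points with x > 0.
x = np.linspace(0.5, 1.5, 5)
print(corner_flow_velocity(x, 0.25 * np.ones_like(x)))
\end{verbatim}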
\begin{figure}
\centering
\subfloat[S][$\alpha=1$]{
\includegraphics[width=0.4\textwidth]{wedge_2d_corner_contours_alpha_1}
\label{fig:2Dsimulationa}}
\
\subfloat[S][$\alpha=1000$]{
\includegraphics[width=0.4\textwidth]{wedge_2d_corner_contours_alpha_1000}
\label{fig:2Dsimulationb}}
\caption{Streamlines of the magma (light) and matrix (dark) velocity
fields in the wedge of a two-dimensional subduction zone using the
corner flow boundary condition on $\Gamma_{3}$. The solution was
computed on a mesh with 116,176 elements.}
\label{fig:2Dsimulation}
\end{figure}
Table~\ref{tab:corner-2d} presents the number of solver iterations for
the LU and AMG preconditioners for different values of~$\alpha$. We
observe very similar behaviour as we saw for the test in
Section~\ref{ss:tc1}. The LU preconditioner is optimal and
uniform. The AMG preconditioner again shows slight dependence on the
problem size, and as $\alpha$ is increased the iteration count grows.
\begin{table}
\caption{Number of iterations required for the corner flow problem
using LU and AMG preconditioned MINRES for different levels of
mesh refinement and varying~$\alpha$. For the $\alpha = 1000$
case, four applications of a Chebyshev smoother, with one
symmetric Gauss--Seidel iteration for each application, were used.}
\begin{center}
\begin{tabular}{c|cc|cc|cc|cc}
& \multicolumn{2}{c|}{$\alpha=1$} & \multicolumn{2}{c|}{$\alpha=10$}
& \multicolumn{2}{c|}{$\alpha=100$} & \multicolumn{2}{c}{$\alpha=1000$} \\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG$^{*}$ \\
\hline
34,138 & 26 & 69 & 30 & 140 & 30 & 367 & 28 & 572 \\
133,777 & 26 & 75 & 29 & 151 & 27 & 390 & 27 & 669 \\
526,719 & 24 & 81 & 29 & 171 & 26 & 446 & 27 & 758 \\
\end{tabular}
\label{tab:corner-2d}
\end{center}
\end{table}
\subsubsection{Traction-free problem}
For the traction-free problem, instead of prescribing $\mathbf{u}_{\rm
corner}$, we prescribe the zero-traction boundary condition,
$\big(\boldsymbol{\epsilon}(\mathbf{u}) - p \mathbf{I} + \alpha \nabla
\cdot \mathbf{u} \, \mathbf{I}\big) \cdot \mathbf{n} = 0$ on $\Gamma_3$.
Figure~\ref{fig:2Dsimulation_nostress} shows the computed streamlines
of the magma and matrix velocity fields for this problem.
\begin{figure}
\centering
\subfloat[S][$\alpha=1$]{
\includegraphics[width=0.4\textwidth]{wedge_2d_nostress_contours_alpha_1}
\label{fig:2Dsimulationa_nostress}}
\
\subfloat[S][$\alpha=1000$]{
\includegraphics[width=0.4\textwidth]{wedge_2d_nostress_contours_alpha_1000}
\label{fig:2Dsimulationb_nostress}}
\caption{Streamlines of the magma (light) and matrix (dark) velocity
fields in the wedge of a two-dimensional subduction zone using the traction-free boundary
condition on~$\Gamma_{3}$. The solution was computed on a mesh with
116,176 elements.}
\label{fig:2Dsimulation_nostress}
\end{figure}
The solver iteration counts for this problem with different levels of
mesh refinement and for different values of $\alpha$ are presented in
Table~\ref{tab:no-stress-2d}. As for the analytic corner flow problem
of Section~\ref{ss:corner}, the LU-based preconditioner is optimal and
uniform. As expected, using the AMG-based preconditioner, the solver
is not uniform with respect to~$\alpha$.
\begin{table}
\caption{Number of iterations to reach a relative tolerance of
$10^{-8}$ using LU and AMG preconditioned MINRES for different
values of $\alpha$ for the traction-free test. For the $\alpha =
1000$ case, four applications of a Chebyshev smoother, with one
symmetric Gauss--Seidel iteration for each application, were used.}
\begin{center}
\begin{tabular}{c|cc|cc|cc|cc}
& \multicolumn{2}{c|}{$\alpha=1$} & \multicolumn{2}{c|}{$\alpha=10$} & \multicolumn{2}{c|}{$\alpha=100$} & \multicolumn{2}{c}{$\alpha=1000$} \\
$N$ & LU & AMG & LU & AMG & LU & AMG & LU & AMG$^{*}$ \\
\hline
34,138 & 24 & 65 & 29 & 143 & 27 & 375 & 25 & 626 \\
133,777 & 23 & 73 & 27 & 159 & 27 & 424 & 24 & 718 \\
526,719 & 23 & 80 & 26 & 175 & 27 & 475 & 24 & 798
\end{tabular}
\label{tab:no-stress-2d}
\end{center}
\end{table}
\subsection{Magma dynamics problem in three dimensions}
\label{ss:tc3}
In the final case we test the solver for a three-dimensional problem
that is geometrically representative of a subduction zone. We solve
\eqref{eq:magma} and \eqref{eq:bc} on the domain $\Omega$ depicted in
Figure~\ref{fig:3Dsubduction}. We set $L_{x}^{t} = 1.5$, $L_{x}^{b} =
0.5$, $L_{y} = 1$ and~$L_{z} = 1$, and use unstructured meshes of
tetrahedral cells. Again we set the permeability as $k = 0.9(1 +
\tanh(-2r))$, with $r = \sqrt{x^{2} + z^{2}}$, and the porosity~$\phi
= 0.01$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{wedge_geometry_3d}
\caption{Description of the wedge in a three-dimensional subduction
zone.}
\label{fig:3Dsubduction}
\end{figure}
As boundary conditions, we prescribe $\mathbf{u} = \mathbf{u}_{\rm
slab} = (1, 0.1, -1)/\sqrt{2}$ on $\Gamma_1$, $\mathbf{u} =
\mathbf{0}$ on $\Gamma_{2}$, $\big(\boldsymbol{\epsilon}(\mathbf{u}) -
p \mathbf{I} + \alpha \nabla \cdot \mathbf{u} \mathbf{I} \big) \cdot
\mathbf{n} = 0$ on $\Gamma_3$ and $-k \del{\nabla p - \mathbf{e}_{3}}
\cdot \mathbf{n} = 0$ on $\partial \Omega$. In
Figure~\ref{fig:3Dsimulation} we show computed vector plots of the
matrix and magma velocities for $\alpha = 1$ and~$\alpha = 1000$.
\begin{figure}
\centering
\subfloat[S][Matrix velocity, $\alpha = 1$.]{
\includegraphics[width=0.4\textwidth]{wedge_3d_nostress_matrix_velocity_alpha_1}
\label{fig:3Dsimulationa}}
\
\subfloat[S][Magma velocity, $\alpha = 1$.]{
\includegraphics[width=0.4\textwidth]{wedge_3d_nostress_magma_velocity_alpha_1}
\label{fig:3Dsimulationb}}
\\
\subfloat[S][Matrix velocity, $\alpha = 1000$.]{
\includegraphics[width=0.4\textwidth]{wedge_3d_nostress_matrix_velocity_alpha_1000}
\label{fig:3Dsimulationc}}
\
\subfloat[S][Magma velocity, $\alpha = 1000$.]{
\includegraphics[width=0.4\textwidth]{wedge_3d_nostress_magma_velocity_alpha_1000}
\label{fig:3Dsimulationd}}
\caption{Vector plots of the magma and matrix velocities in the wedge
of a three-dimensional subduction zone for $\alpha = 1$ and $\alpha
= 1000$ using the stress-free boundary conditions on~$\Gamma_{3}$.}
\label{fig:3Dsimulation}
\end{figure}
Table~\ref{tab:tc3} shows the number of iterations needed for the AMG
preconditioned MINRES method for the three-dimensional wedge problem.
The LU preconditioned solver is not practical for this problem when
using reasonable mesh resolutions. All cases have been computed in
parallel using 16 processes. The computed examples span a range of
problem sizes, and only relatively small changes in the iteration
count are observed for changes in the number of degrees of
freedom. Again, as $\alpha$ becomes larger, so too does the iteration
count.
\begin{table}
\caption{Number of iterations required for AMG preconditioned MINRES
for the three-dimensional subduction model for different levels of
mesh refinement and different values of~$\alpha$. The number
of degrees of freedom is denoted by~$N$. For the $\alpha = 1000$
case, four applications of a Chebyshev smoother, with one
symmetric Gauss--Seidel iteration for each application, were used.
All tests were run using 16 MPI processes.}
\begin{center}
\begin{tabular}{c|c|c|c|c}
$N$ & $\alpha=1$ & $\alpha = 10$ & $\alpha = 100$ & $\alpha=1000$ \\
\hline
88,500 & 42 & 127 & 363 & 654 \\
400,690 & 44 & 122 & 355 & 692 \\
1,821,991 & 43 & 122 & 367 & 732 \\
8,124,691 & 41 & 120 & 355 & 775
\end{tabular}
\label{tab:tc3}
\end{center}
\end{table}
\section{Conclusions}
\label{s:conclusions}
In this work we introduced and analysed an optimal preconditioner for
a finite element discretisation of the simplified McKenzie equations
for magma/mantle dynamics. Analysis of the preconditioner showed that
the Schur complement of the block matrix arising from the finite
element discretisation of the simplified McKenzie equations may be
approximated by a pressure mass matrix plus a permeability matrix. The
analysis was verified through numerical simulations on a unit square
and two- and three-dimensional wedge flow problems inspired by
subduction zones. For all computations we used $P^{2}$--$P^{1}$
Taylor--Hood finite elements as they are inf-sup stable in the
degenerate limit of vanishing permeability. Numerical tests
demonstrated optimality of the solver. We observed that the multigrid
version of the preconditioner was not uniform with respect to the
bulk-to-shear-viscosity ratio~$\alpha$. As $\alpha$ is increased, the
iteration count for the solver increases. A similar behaviour is
observed as $k^{*}$ decreases.
The analysis and testing of an optimal block preconditioning method
for magma/mantle dynamics presented in this work lays a basis for
creating efficient and optimal simulation tools that will ultimately
be put to use to study the genesis and transport of magma in
plate-tectonic subduction zones. Optimality has been demonstrated, but
some open questions remain regarding uniformity with respect to some
model parameters.
\section*{Acknowledgements}
We thank L.~Alisic and J.~F.~Rudge for the many discussions held
related to this paper. We also thank the reviewers M.~Knepley,
M.~Spiegelman, C.~Wilson and one that remained anonymous, whose
comments helped improve this paper. The authors acknowledge the
support of the Natural Environment Research Council under grants
NE/I026995/1 and NE/I023929/1. Katz is furthermore grateful for the
support of the Leverhulme Trust.
\bibliographystyle{abbrvnat}
\section{Introduction}
\label{sec:intro}
Galactic globular clusters (GGCs) represent a key ingredient for the understanding of the evolution of the Galaxy, both from a chemical
and a stellar populations perspective. In particular, they represent the oldest systems in the Milky Way, and their relative ages
accordingly play an important role in constraining the Galaxy's formation timescale (e.g., Salaris \& Weiss 2002; Mar\'in-Franch et al.
2009; Dotter et al. 2010). A major advantage encountered in the study of GCs is the frequent presence of large RR Lyrae populations,
which are of great interest due to several properties, including a relatively narrow range in magnitude (which makes them excellent standard
candles to determine distances) and also the presence of correlations between their pulsation properties and the properties of the GCs
to which they belong (see, e.g., Zorotovic et al. 2010 and Cohen et al. 2011, for some recent examples).
Another remarkable characteristic of the RR Lyrae stars in GCs is the Oosterhoff dichotomy (Oosterhoff 1939, 1944), which consists in
the sharp division of the GGC system into two groups, according to the average period of their ab-type (i.e., fundamental-mode) RR Lyrae
population: Oosterhoff type I (OoI), with $\langle P_{\rm ab} \rangle \approx 0.55$~d, and Oosterhoff type II (OoII), with
$\langle P_{\rm ab} \rangle \approx 0.65$~d. This dichotomous behavior applies exclusively to Galactic globulars, since nearby extragalactic
systems somehow preferentially occupy a period range that is intermediate between groups OoI and OoII (Catelan 2009; Catelan et al.
2012). In this sense, the Oosterhoff dichotomy provides important information that must be taken into account, when studying the early
formation history of the Milky Way and its dwarf satellite galaxy system. In this context, obtaining a complete census of variability
in GGCs is especially important, for it may help identify outliers in the Oosterhoff dichotomy context, and hence potentially point
to GCs of extragalactic origin.
Perhaps surprisingly, after more than a century of GGC variability studies, there is still a large number of GCs which lack
time-series analyses, particularly using modern techniques. In this sense, the advent of CCD detectors, combined with sophisticated
crowded-field photometry and image subtraction techniques (e.g., Stetson 1987, 1994; Alard \& Lupton 1998; Alard 2000; Bramich 2008)
has proven of key importance for unveiling the presence of large numbers of previously unknown variable stars in GCs, particularly in
their crowded cores.
\begin{figure*}[t]
\centering
\includegraphics[width=10cm]{fig1.pdf}
\caption{Acceptance ``box'' in the $B-I$, $V$ CMD ({\em red dots}). Stars within this ``box'' are considered to be cluster candidates
(plus field contaminants). The number of cluster stars outside the box ({\em black dots}) is assumed to be small.}\label{accbox}
\end{figure*}
According to the Dec. 2010 version of the Harris (1996) catalog, the position of \objectname{NGC~6981} is $\rm \alpha = 20^h53^m27^s.9$,
$\delta = -12\degr 32\arcmin 13 \arcsec$ (J2000), or $\ell = 35\fdg16$, $b = -32\fdg68$, with a low interstellar extinction of $E(\bv) = 0.05$.
Very recently, Bramich et al. (2011) published the results of time-series imaging of this cluster, performed with difference imaging, and
updated the cluster's variable star census. In the framework of our long-term project to complete the census of variable stars in GGCs
(Catelan et al. 2006), and with the advantage of a longer time-baseline based on archival and new images, we have revisited the variability
content of \objectname{NGC~6981}. Importantly, with the large number of images used and a large field of view (FOV), we were able to obtain a
much deeper, spatially resolved, high-precision CMD, which allowed us to study several properties of the cluster, including its surface
brightness profile and the presence of extratidal stars.
This paper is organized as follows. In Section~\ref{sec:data} our data and data analysis techniques are described. In Section~\ref{sec:extratidal}
we analyze the structural parameters of the cluster. Section~\ref{sec:cmd} provides our analysis of the different branches of the CMD.
In Section~\ref{sec:age}, we compare our results with another well-studied GC of similar metallicity, namely M3 (NGC~5272), in order to
position M72 in a relative age scale, compared with other GGCs. In Section~\ref{sec:variables}, we discuss our findings regarding the
cluster's variable star population. We close in Section~\ref{sec:conclu} with a summary of our results.
\begin{table*}[t]
\begin{center}
\footnotesize
\caption{\footnotesize{NGC 6981 Data Used in This Study.}}
\begin{tabular}{lccccccccccc}
\tableline
\tableline
Note & Run ID & Dates & Telescope & Camera/Detector & $U$ & $B$ & $V$ & $R$ & $I$ & DDO51 \\
\tableline
1 & cmr & 1986 Oct 28 & INT 2.5m & GEC & - & 3 & 3 & 2 & 3 & - \\
2 & full2 & 1989 Aug 01-02 & CTIO 0.9m & TI1 & - & 6 & 8 & 2 & - & - \\
3 & ct94jun & 1994 Jun 28-Jul 01 & CTIO 0.9m & Tek2K$_4$ & - & 12 & 12 & - & 12 & - \\
4 & ct94nov & 1994 Nov 25-28 & CTIO 0.9m & Tek2K$_4$ & - & 2 & 2 & - & 2 & - \\
5 & ct94sep & 1994 Sep 22-26 & CTIO 0.9m & Tek2K$_3$ & - & 25 & 25 & - & 24 & - \\
6 & ct95jun & 1995 Jun 21-25 & CTIO 0.9m & Tek2K$_3$ & - & 15 & 15 & - & 15 & - \\
7 & bond8 & 1996 Sep 25 & KPNO 0.9m & t2ka & 1 & 1 & 1 & - & 1 & - \\
8 & dmd & 1998 Jun 25 & JKT 1.0m & TEK4 & - & - & 2 & - & 2 & - \\
9 & int98 & 1998 Aug 20 & INT 2.5m & WFC & - & 2 & 2 & - & 2 & - & $\times 4$ \\
10 & fors9906 & 1999 Jun 17-21 & ESO VLT 8.0m & FORS1 & - & - & 5 & - & 24 & - \\
11 & fors9908 & 1999 Aug 02 & ESO VLT 8.0m & FORS1 & - & - & - & - & 2 & - \\
12 & fors9909 & 1999 Sep 02 & ESO VLT 8.0m & FORS1 & - & - & 19 & - & - & - \\
13 & wfi5 & 2002 Jun 20 & ESO/MPI 2.2m & WFI & - & 4 & 4 & - & - & - & $\times 8$ \\
14 & m72 & 2007 Apr 15-Aug 03 & CTIO 1.3m & ANDICAM Fairchild 447 & - & 41 & 42 & - & 41 & - \\
15 & emmi8 & 2007 Jul 16 & ESO NTT 3.6m & EMMI MIT/LL mosaic & - & 4 & 3 & 7 & - & - & $\times 2$ \\
16 & jul08 & 2008 Jul 27-29 & CTIO 4.0m & Mosaic2 & - & 2 & 17 & - & 17 & - & $\times 8$ \\
17 & aug08 & 2008 Aug 26-27 & CTIO 4.0m & Mosaic2 & 2 & 6 & 14 & - & 16 & 3 & $\times 8$ \\
18 & int11 & 2011 Aug 22-Sep 24 & INT 2.5m & WFC & 3 & 2 & 2 & - & 2 & - & $\times 4$ \\
\tableline
\end{tabular}
\label{tab:data}
\par \smallskip \scriptsize \textit{Notes}.
1. Observer ``CMR''
2. Observer L.~Fullton
3. Observer A.~R.~Walker
4. Observer A.~R.~Walker
5. Observer A.~R.~Walker
6. Observer A.~R.~Walker
7. Observer H.~E.~Bond
8. Observer ``DMD''
9. Observer unknown
10. Program ID 63.L-0342(B); ESO internal PI-COI ID 403; observer unknown
11. Program ID 63.L-0342(B); ESO internal PI-COI ID 403; observer unknown
12. Program ID 63.L-0342(B); ESO internal PI-COI ID 403; observer unknown
13. Program ID 69.D-0582(A); ESO internal PI-COI ID 361; observer unknown
14. PI unknown; observers C.~Aguilera, J.~Espinoza, A.~Pasten
15. Program ID 59.A-9002(A); ESO internal PI-COI ID 50002; observer unknown
16. Proposal ID 2008A-289; observer A.~R.~Walker
17. Proposal ID 2008B-155; observer A.~R.~Walker
\end{center}
\end{table*}
\section{Observations}\label{sec:data}
The observational material for this study consists of 1,139
individual CCD images from 18 observing runs. These images are
contained within a data archive maintained by PBS, and include~--
among others~-- images obtained specifically for this project.
Notable among these is one series of exposures dating from 2007
April 7 through 2007 August 3: observations were taken on 42 of
these 110 nights, with no gap greater than eight nights. Summary
details of the 18 observing runs are given in Table 1.
Considering all these images together, the median seeing for our
observations was 1.4 arcseconds; the 25th and 75th percentiles
were 1.2 and 1.6 arcseconds; the 10th and 90th percentiles were
1.0 and 1.8 arcseconds.
The photometric reductions were all carried out by PBS using
standard DAOPHOT/ALLFRAME procedures (Stetson 1987, 1994) to
perform profile-fitting photometry, which was then referred to a
system of synthetic-aperture photometry by the method of growth-curve
analysis (Stetson 1990). Calibration of these instrumental data
to the photometric system of Landolt (1992; see also Landolt 1973,
1983) was carried out as described by Stetson (2000, 2005). If we
define a ``dataset'' as the accumulated observations from one CCD
chip on one night with photometric observing conditions, or one
chip on one or more nights during a single observing run with
non-photometric conditions, the data for M72 were contained within
83 different datasets, each of which was individually calibrated
to the Landolt system. Of these 83 datasets, 60 were obtained and
calibrated as ``photometric,'' meaning that photometric zero
points, color transformations, and extinction corrections were
derived from all standard fields observed during the course of
each night, and were applied to the M72 observations. The other 23
datasets were reduced as ``non-photometric''; in this case, color
transformations were derived from all the standard fields
observed during the course of the observing run, but the
photometric zero point of each individual CCD image was derived
from local M72 photometric standards contained within the image
itself.
The different cameras employed projected to different areas on the
sky, and of course the telescope pointings differed among the
various exposures. The WFI, WFC, Mosaic2, and EMMI imagers, in particular,
consist of mosaics of non-overlapping CCD detectors. Therefore,
although we have 1,139 images, clearly no individual star appears
in all those images. In fact, no star appeared in more than 169
$B$-band images, 200 $V$ images, or 190 $I$ images. Considering the
entire area surveyed, the {\it median\/} number of observations
for one of our stars is 13 in $B$, 34 in $V$, and 35 in $I$. In the
immediate vicinity of the cluster (within a radius of order 7
arcminutes), however, most stars appeared in at least 100 images
in each of $B$, $V$, and $I$.
There were insufficient $U$- and $R$-band images to define local
standards in the M72 field, so we have not calibrated the $U$ or
$R$ data and make no use of them here. In addition, during the
2008 August observing run on the CTIO Blanco 4m telescope, three
exposures (24 individual CCD images) were obtained in a DDO51
filter. This bandpass measures the strength of a molecular
band of MgH and the ``b'' triplet lines of neutral magnesium, and
can be used as a gravity/metallicity discriminator in very cool
stars to distinguish likely member giants from foreground field
dwarfs. While we do not employ the DDO51 data in the current
investigation, we may revisit these data in a future study.
Still, even though we do not make use of the $U$, $R$, and DDO51
data in our photometric analysis, these images were included in
the ALLFRAME reductions for the additional information they give
on the completeness of the star list and the precision of the
astrometry.
\section{Extratidal Stars in NGC~6981}\label{sec:extratidal}
The extended FOV available in our study motivates a new derivation of the tidal radius for \objectname{NGC~6981}. Several recent papers
in which the surface brightness distribution of GGCs was studied have revealed evidence of the presence of extratidal stars (e.g.,
Walker et al. 2011, hereafter W11; Correnti et al. 2011; Carballo-Bello et al. 2012; Salinas et al. 2012). This result is not entirely
surprising, for two main reasons. First, the number of wide-field CCD observations of GGCs has increased dramatically since the classical
studies dealing with the structural parameters of these objects were published (e.g., Grillmair et al. 1995). Second, clusters are expected
to undergo tidal disruption in the course of their lives, due to their interactions with the Galaxy as they orbit in the halo
\citep[e.g.,][]{fz01,bm03}, and direct evidence for this is being increasingly found in many GGCs \cite[e.g.,][]{moea01,moea03,gj06,jg10,ebea11}.
Indeed, there is growing evidence that many GGCs may indeed have been much more massive in the past, some possibly being the remains of disrupted
dwarf galaxies acting as building blocks in the hierarchical scenarios for galaxy formation \citep[e.g.,][and references therein]{tb08,vc11,smea12}.
To study the structural parameters of \objectname{NGC~6981} and to find the level of contamination by field stars, we followed the method
described in W11, which
is based on a similar dataset with a large FOV that extends more than 30\arcmin\ from the cluster center.
First, we identify an {\em acceptance box} in the $B-I$, $V$ plane, as shown in Figure~\ref{accbox}. We used the $B-I$, $V$ plane, as suggested
by W11, because of its stronger sensitivity to temperature, allowing a more robust determination of the ridgeline (more details on the ridgeline
construction are given in Sect.~\ref{sec:cmd}). Also, the usage of all three filters available helps avoid spurious detections.
\begin{figure*}[t]
\epsscale{1.0}
\centering
\includegraphics[width=10cm]{fig2.pdf}
\caption{{\em Top}: logarithmic surface density as a function of the inverse of the radial distance, for stars outside the acceptance box.
The mean value corresponds to the surface density of field stars outside the box of $\sim 5.8 \, {\rm stars/arcmin^2}$. {\em Middle}:
ratio between the number of stars inside the box and those outside the box, extending the annuli in the entire FOV.
{\em Bottom}: Surface density profile of NGC~6981, before ({\em crosses}) and after ({\em filled circles})
subtraction of field stars. The error bars are Poissonian.}
\label{surfdensfield}
\end{figure*}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=18cm]{fig3.pdf}
\caption{M72 CMD at different radial annuli. MS stars are distinguishable out to at least $r \approx 14\arcmin$.}
\label{radii}
\end{figure*}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{fig4.pdf}
\caption{$V$ magnitude vs. radius for stars with $\sigma(B-I) \leq 0.1$ ({\em top}) and $0.1 \leq \sigma(B-I) \leq 0.3$ ({\em bottom}).
More details in text.}
\label{Vradius}
\end{figure}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=18cm]{fig5.pdf}
\caption{$BVI$ photometry for NGC~6981, in the $V$, $\bv$ plane ({\em left}), $V$, $B-I$ plane ({\em middle}), and $V$, $V-I$ plane
({\em right}). Selection criteria according to Figure~\ref{Vradius} was applied (see text for details).}
\label{BVI}
\end{figure*}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{fig6.pdf}
\caption{M72 CMD zoomed in around evolved sequences of the CMD. The solid black line corresponds to a Victoria-Regina model ZAHB
(VandenBerg et al. 2006) with $\rm [Fe/H] = -1.41$ and $\rm [\alpha/Fe] = +0.3$. The dashed line is the inferred ZAHB level,
at $V=$ 16.89 $\pm$ 0.02. The dashed horizontal lines denote the levels 2.0 and 2.5~mag above the HB in $V$.
The solid vertical red line is defined by the color of the RGB at the HB level, $(\bv)_{g}$.
The solid vertical blue lines intersecting the ridgeline at the upper RGB indicate the
$\rm \Delta V$ measurements at $(\bv)_0 = 1.0$, 1.2, and 1.4 respectively. See Section~\ref{sec:metal} for more details}
\label{ZAHB}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[angle=90,width=0.5\textwidth]{fig7.pdf}
\caption{{\em Top}: Selection box of the RGB stars in the CMD. {\em Middle}: average-shifted histogram (differential luminosity function)
of the stars selected in the panel above. {\em Bottom}: cumulative luminosity function, showing the change
in slope brought about by the RGB bump. The dot-dashed line in the three panels marks the position of the bump, at $V_{\rm bump} =16.6$. }
\label{historgb}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{fig8.pdf}
\caption{Parameter $\Delta V^{\rm bump}_{\rm HB}$ as a function of $\rm [Fe/H]_{CG97}$ ({\em top}) and $\rm [M/H]_{CG97}$ ({\em bottom}).
The diamonds correspond to data taken from F99, and the solid circle corresponds to the position of NGC~6981 measured in this study.}
\label{bumpFeH}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[angle=90,width=0.5\textwidth]{fig9.pdf}
\caption{Parameter $R_{\rm bump}$ as a function of global metallicity. Data taken from Bono et al. (2001) ({\em empty diamonds}).
The {\em filled circle} indicates the value derived for M72 in our study.}
\label{Rbump}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=18cm]{fig10.pdf}
\caption{{\em Left}: Selection box for BSS ({\em red}) and RGB ({\em green}) populations, within $r \leq 7.1\arcmin$. {\em Right}: Stars
with $r \geq 7.1\arcmin$, with the same selection boxes as in the other panel overplotted. }
\label{BSScmd}
\end{figure*}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{fig11.pdf}
\caption{Radial cumulative distribution for both RGB and BSS populations in NGC~6981.}
\label{BSScum}
\end{figure}
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.5\textwidth]{fig12.pdf}
\caption{Comparison between our photometry for M3 ({\em points}) and the ridgeline of NGC~6981
({\em solid line}).}
\label{compM3}
\end{figure}
The selected box is assumed to contain most true cluster members, and so the number of bona-fide members falling outside this box is assumed
to be small.
The contamination from field stars inside the box is small but not negligible. We did not include the blue straggler
star (BSS) population in the acceptance box, since these stars have a very small effect on the overall number counts in the cluster and,
owing to their strong central concentration, do not affect the outer radii. More details on the BSS population are given in Section~\ref{sec:BSS}.
Once the acceptance box has been defined,
the field is divided into concentric annuli, each 50\arcsec\ wide, in order to study the radial density
profile inside and outside the acceptance box. Since the cluster is not centered in the FOV, the limiting radius out to which the coverage is complete
is set by the largest circle fully contained within the FOV, namely $r \approx 14.1 \arcmin$, which corresponds to the distance from the cluster center to the nearest edge of the field.
Figure~\ref{surfdensfield} ({\em top panel}) shows the logarithmic surface density of all stars outside the acceptance box, as
a function of the inverse of the radial distance. Based on the
assumption that in the outermost annuli the presence of true
cluster members is negligible, it follows that the mean
density value of the last points provides a reasonable estimate of the surface density of field stars
outside the acceptance box. This leads to an estimated field contribution of $5.8 \, {\rm stars/arcmin^2}$.
We then calculated the ratio between the number of stars inside and outside the box. This ratio is not affected by
incompleteness of the sample at radii greater than $r \approx 14.1\arcmin$, so we extended the annuli to the complete FOV. This ratio is shown
in Figure~\ref{surfdensfield} ({\em middle panel}). The asymptotic value (at larger distances) of the logarithm of this ratio approaches
$-0.69 \pm 0.05$. Combining this number with the surface density of field stars outside the box, we obtain a total surface density of field
stars of $7 \, {\rm stars/arcmin^2}$, comprising $5.8 \, {\rm stars/arcmin^2}$ outside the box and $1.2 \, {\rm stars/arcmin^2}$ inside the
box. The density profile of the cluster is shown in the bottom panel of Figure~\ref{surfdensfield}. The crosses correspond to the total density
profile of the cluster, and the filled circles are the surface density profile with the contamination of field stars subtracted.
The error bars correspond to the Poissonian error in the number counts. In fact, the profile does not seem to break at $r=14.1\arcmin$,
nor show any indication of flattening at large distances.
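Schematically, the field-subtracted profile can be computed as in the following Python sketch (not the actual reduction code used here); \texttt{r\_in} and \texttt{r\_out} stand for the radial distances, in arcmin, of stars inside and outside the acceptance box.
\begin{verbatim}
# Schematic outline of the annulus-based density profile with field
# subtraction; r_in, r_out are placeholder arrays of radial distances (arcmin).
import numpy as np

def surface_density(r, edges):
    counts, _ = np.histogram(r, bins=edges)
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)   # arcmin^2 per annulus
    return counts / area

edges = np.arange(0.0, 14.1 + 50.0 / 60.0, 50.0 / 60.0)  # 50-arcsec annuli
# density_in  = surface_density(r_in,  edges)
# density_out = surface_density(r_out, edges)
# field_out = density_out[-5:].mean()        # outermost annuli ~ field level
# field_in  = field_out * 10**(-0.69)        # asymptotic inside/outside ratio
# cluster_profile = density_in - field_in    # field-subtracted profile
\end{verbatim}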
An analysis of the CMD at different radial distances is shown in Figure~\ref{radii}. Here, no selection criteria were used for the stars.
The selected annuli correspond to different values of the tidal radius of M72 found in the literature: $r_t \approx 7.4\arcmin$ is taken from
the Dec. 2010 version of the Harris (1996) catalog; this value was calculated based on the listed value of the central concentration
$c=\log(r_t/r_c) = 1.21$, with $r_c = 0.46\arcmin$. The value $r_t \approx 9.2\arcmin$ is taken from Trager et al. (1995), and $14.1\arcmin$
is the limit of our study. Figure~\ref{radii} shows that there is still a trace of (mostly main sequence, MS) stars belonging to the cluster
at radial distances further than $r_t \approx 7.4\arcmin$ and even further than $r_t \approx 9.2\arcmin$.
The existence of an extratidal component in \objectname{NGC~6981} was previously claimed in Grillmair et al. (1995), along with several other
GGCs. They conclude that these stars are probably unbound due to ongoing stripping episodes; furthermore, they speculate that GCs
may not have observable limiting radii. This is also observed in $N$-body simulations (e.g., Combes et al. 1999; Capuzzo Dolcetta et al.
2005; K\"upper et al. 2010, 2012). In our study, the derived density profile does not show flattening at larger distances.
To put stronger constraints on the origin of the extratidal component of \objectname{NGC~6981}, covering more extended fields around
the cluster would prove of great interest.
\section{Color-Magnitude Diagram}\label{sec:cmd}
In Figure~\ref{accbox} we present the results from our photometry in the $V$, $B-I$ plane for all stars ($\approx 33,\!000$)
detected in the three filters, throughout the field, with no selection criteria applied. The presence of field stars and background galaxies
is clear, especially in the zone occupied by red giant branch (RGB) stars and the redder and fainter part of the CMD. To produce a tighter CMD,
we applied standard selection criteria used before in similar studies based on ALLFRAME photometry (see Stetson et al. 2003).
The initial cut was to consider stars with $\sigma(B-I) \leq 0.1$~mag. Stars with $\sigma(B-I)$ larger than 0.1~mag are mostly very faint
stars ($V \gtrsim 23$). We then used the ALLFRAME index \texttt{sharp}, which gives an estimate of the residuals from the PSF fit.
We selected stars with $-1 \leq \texttt{sharp} \leq 1$, which removed the contamination due mostly to background galaxies
and very poorly measured stars. In order to obtain an accurate ridgeline and to study the more evolved populations, we applied one
more cut, based on the radial distance of the stars from the cluster center (e.g., Stetson et al. 2005). Figure~\ref{Vradius}
shows the distribution in magnitude of stars as a function of radial distance. The upper plot shows stars with $\sigma (B-I) \leq 0.1$,
whereas the lower panel shows those with $0.1 \leq \sigma (B-I) \leq 0.3$. For the sake of clarity, we have marked in the upper plot
the limits at $V = 20$, $r =100\arcsec$ and $r=316\arcsec$. This figure suggests that the detection limit
for stars with good photometry within the magnitude range $18 \leq V \leq 20$ is at a radius of $\approx 30\arcsec$. Moreover, we can consider
that the detection limit for stars with $V \geq 20$ is constant from radii greater than $100\arcsec$ until $\approx 316\arcsec$, where
there appears to be a break before field contamination increases at fainter magnitudes. We summarize all these considerations by applying
the following radial selection criteria:
\begin{eqnarray}
\begin{array}{rrl}
V \leq 18 & \,\,\, {\rm and} \,\,\, & r \leq 316\arcsec, \\
18 \leq V \leq 20 & \,\,\, {\rm and} \,\,\, & 30\arcsec \leq r \leq 316\arcsec, \\
20 \leq V & \,\,\, {\rm and} \,\,\, & 100\arcsec \leq r \leq 316\arcsec. \\
\end{array}
\end{eqnarray}
\noindent We remark that, as stated in the previous section, the limit of $316\arcsec$ does not correspond to the cluster's tidal limit,
and it is used only for the purpose of obtaining a cleaner CMD.
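In compact form, these cuts amount to the following selection mask, sketched here in Python with placeholder column names for the ALLFRAME catalogue:
\begin{verbatim}
# Condensed sketch of the photometric and radial selection criteria;
# V, sigma_BI, sharp and r_arcsec are placeholder catalogue columns.
import numpy as np

def cmd_selection(V, sigma_BI, sharp, r_arcsec):
    good = (sigma_BI <= 0.1) & (np.abs(sharp) <= 1.0)
    radial = np.where(
        V <= 18, r_arcsec <= 316,
        np.where(V <= 20,
                 (r_arcsec >= 30) & (r_arcsec <= 316),
                 (r_arcsec >= 100) & (r_arcsec <= 316)))
    return good & radial
\end{verbatim}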
In Figure~\ref{BVI}, we plot the cleaner CMD in all available filters, obtained after applying the described selection criteria.
The CMD reveals an overall morphology in good agreement with previous studies (Dickens 1972; Piotto et al. 2002; Bramich et al. 2011),
but this study reveals a more extended MS, reaching down to $V \approx 24$, and with errors less than $\sim 0.1$~mag at this faint level.
Also, we note the presence of a well-defined RGB with moderate steepness, a horizontal branch (HB) with both red and blue populations,
a relatively populated asymptotic giant branch sequence, and the presence of a BSS population.
\begin{table}[t]
\begin{center}
\footnotesize
\caption{\footnotesize{Mean Fiducial Points for NGC~6981: $V$, $\bv$ plane.}}
\begin{tabular}{lc}
\tableline\tableline
(\bv) & \emph{V} \\
\tableline
\tableline
\multicolumn{2}{c}{MS + RGB}\\
\tableline
$0.79$ & $23.10$ \\
$0.61$ & $22.01$ \\
$0.52$ & $21.36$ \\
$0.48$ & $20.74$ \\
$0.47$ & $20.30$ \\
$0.48$ & $20.01$ \\
$0.51$ & $19.85$ \\
$0.56$ & $19.74$ \\
$0.59$ & $19.70$ \\
$0.64$ & $19.60$ \\
$0.65$ & $19.54$ \\
$0.68$ & $19.30$ \\
$0.72$ & $18.70$ \\
$0.75$ & $18.00$ \\
$0.78$ & $17.40$ \\
$0.84$ & $16.80$ \\
$0.91$ & $16.20$ \\
$1.03$ & $15.50$ \\
$1.20$ & $14.70$ \\
$1.30$ & $14.30$ \\
$1.42$ & $14.00$ \\
\tableline
\end{tabular}
\label{tab:fidpoints1}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\footnotesize
\caption{\footnotesize{Mean Fiducial Points for NGC~6981: $V$, $B-I$ plane.}}
\begin{tabular}{lc}
\tableline\tableline
$(B-I)$ & \emph{V} \\
\tableline
\tableline
\multicolumn{2}{c}{MS + RGB}\\
\tableline
$1.90$ & $23.55$ \\
$1.78$ & $23.20$ \\
$1.64$ & $22.80$ \\
$1.55$ & $22.53$ \\
$1.45$ & $22.22$ \\
$1.36$ & $21.92$ \\
$1.30$ & $21.70$ \\
$1.23$ & $21.41$ \\
$1.17$ & $21.05$ \\
$1.13$ & $20.80$ \\
$1.12$ & $20.60$ \\
$1.11$ & $20.40$ \\
$1.12$ & $20.21$ \\
$1.14$ & $20.03$ \\
$1.17$ & $19.92$ \\
$1.23$ & $19.80$ \\
$1.30$ & $19.72$ \\
$1.37$ & $19.66$ \\
$1.45$ & $19.59$ \\
$1.50$ & $19.44$ \\
$1.56$ & $19.04$ \\
$1.62$ & $18.57$ \\
$1.68$ & $17.90$ \\
$1.75$ & $17.37$ \\
$1.82$ & $16.88$ \\
$1.91$ & $16.46$ \\
$2.00$ & $16.09$ \\
$2.11$ & $15.68$ \\
$2.22$ & $15.35$ \\
$2.36$ & $14.98$ \\
$2.47$ & $14.73$ \\
$2.60$ & $14.50$ \\
$2.75$ & $14.25$ \\
$2.87$ & $14.13$ \\
\tableline
\end{tabular}
\label{tab:fidpoints2}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\footnotesize
\caption{\footnotesize{Mean Fiducial Points for NGC~6981: $V$, $V-I$ plane.}}
\begin{tabular}{lc}
\tableline\tableline
$(V-I)$ & \emph{V} \\
\tableline
\tableline
\multicolumn{2}{c}{MS + RGB}\\
\tableline
$0.96$ & $23.10$ \\
$0.86$ & $22.51$ \\
$0.78$ & $22.03$ \\
$0.70$ & $21.30$ \\
$0.64$ & $20.66$ \\
$0.64$ & $20.28$ \\
$0.65$ & $20.04$ \\
$0.68$ & $19.86$ \\
$0.72$ & $19.75$ \\
$0.76$ & $19.70$ \\
$0.79$ & $19.62$ \\
$0.83$ & $19.52$ \\
$0.85$ & $19.38$ \\
$0.87$ & $19.05$ \\
$0.89$ & $18.63$ \\
$0.91$ & $18.12$ \\
$0.94$ & $17.62$ \\
$0.99$ & $16.90$ \\
$1.04$ & $16.34$ \\
$1.11$ & $15.73$ \\
$1.18$ & $15.23$ \\
$1.26$ & $14.82$ \\
$1.36$ & $14.43$ \\
$1.42$ & $14.21$ \\
$1.51$ & $14.00$ \\
\tableline
\end{tabular}
\label{tab:fidpoints3}
\end{center}
\end{table}
As in many previous studies, including for instance Stetson et al. (2005) and Zorotovic et al. (2009),
the mean ridgelines for the \bv, $V-I$, $B-I$ versus $V$ planes were determined by eye in large-scale plots.
Then the normal points were overplotted in the CMD to perform some minor adjustments to obtain a smooth ridgeline.
The fiducial points are presented in Tables~\ref{tab:fidpoints1} to \ref{tab:fidpoints3}, and the resulting
ridgeline, in the $B-I$ versus $V$ plane, is overplotted on the data in Figure~\ref{radii}.
We note that the apparent asymmetry of the MS around the fiducial line is probably due to the cluster's binary sequence. Sollima et al. (2007), based
on HST/ACS photometry, estimated a binary fraction for \objectname{NGC~6981} of $\xi \sim 10\%$.
The zero-age horizontal branch (ZAHB) level was determined in the same way as in Ferraro et al.\ (1999, hereafter F99; see their
Figure~3), by fitting the Victoria-Regina ZAHB models (VandenBerg et al.\ 2006) at $\log T_{\rm eff} = 3.85$, assuming $[\rm Fe/H] = -1.41$
and $[\alpha/ \rm Fe] = +0.3$ (recall that the cluster has $[{\rm Fe/H}] = -1.42$, according to the December 2010 version of the Harris 1996
catalog, and that halo stars typically have a similar such level of $\alpha$-elements enhancement; see, e.g., Pritzl et al.\ 2005).
We matched the ZAHB in the dereddened CMD (using a value of $E(\bv) = 0.05$; see Section~\ref{sec:metal} for details) by allowing vertical
shifts. The result is shown in Figure~\ref{ZAHB}. In this way we found $V_{\rm ZAHB} = 16.89 \pm 0.02$~mag, in good agreement with the value
listed in F99 for M72. The turn-off point (TO) was determined by fitting a parabola to the data points in a small area of the MS around the TO.
We obtain $V_{\rm TO} = 20.31 \pm 0.1$~mag at a color $(B-I)_{\rm TO}= 0.47$~mag.
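As a minimal sketch (with placeholder variable names, not our reduction code), the turn-off fit amounts to the following:
\begin{verbatim}
# Minimal sketch of the TO determination: fit a parabola to colour vs.
# magnitude for MS stars near the turn-off and take its vertex.
# V and colour are placeholder arrays for the selected stars.
import numpy as np

def turnoff(V, colour):
    a, b, c = np.polyfit(V, colour, 2)     # colour ~ a*V**2 + b*V + c
    V_to = -b / (2.0 * a)                  # vertex = bluest point of the MS
    colour_to = np.polyval([a, b, c], V_to)
    return V_to, colour_to
\end{verbatim}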
\subsection{Reddening and Metallicity}\label{sec:metal}
The Dec. 2010 version of the Harris (1996) catalog lists the reddening for \objectname{NGC~6981} to be $E(\bv) = 0.05$,
based on previous photometric studies. For comparison, the Schlegel et al. (1998) dust maps give $E(\bv) = 0.06$.
We adopted the Harris value for the remainder of our analysis.
F99 reviewed observational indicators that are derived from the evolved sequences in the CMD and can be used to
estimate photometric metallicities (see their Table~4). We used the dereddened \bv, $V$ plane to measure the parameters
$(\bv)_{0,g}$ (intrinsic color of the RGB at the HB level), $S_{2.0}$ and $S_{2.5}$ (slopes of the line connecting the intersection of the RGB
with the HB level and the points 2.0 and 2.5 magnitudes brighter than the HB, respectively), $\Delta V_{1.1}$, and $\Delta V_{1.2}$ (height of
the RGB above the HB at colors $(\bv)_0 = 1.1$ and 1.2, respectively), as shown in Figure~\ref{ZAHB}.
The very red star close to the RGB tip level, at a color $\bv \approx 1.7$,
corresponds to variable star V42 (Figure~\ref{CMDvar}). This star is found to vary, but the
type of variability is not identified and it may not be a member of the
cluster (see Section~\ref{sec:variables} for details). For this reason, we avoid extrapolating the ridgeline
to reach this star, and the parameter $\Delta V_{1.4}$ was not used in this work.
The values of the parameters obtained in our study as well as the resulting values for $\rm [Fe/H]$
(in the Carretta \& Gratton 1997 [CG97] metallicity scale) and $\rm [M/H]$ are summarized in Table~\ref{tab:met}. Taking the mean, the final
adopted values are $\rm [M/H] = -1.19 \pm 0.05$ and $\rm [Fe/H] = -1.32 \pm 0.17$.
It should be noted that recently, Carretta et al. (2009) presented a recalibration of the CG97 metallicity scale,
based on a homogeneous analysis of a large sample of spectra taken with UVES. The relation between both scales is
\begin{equation}
{\rm [Fe/H]_{UVES}} = 1.137 \, (\pm 0.060) \, {\rm [Fe/H]_{CG97}} - 0.003,
\end{equation}
\noindent as provided in their study.
The CG97 values were transformed to this scale, and are also summarized in Table~\ref{tab:met}. The final metallicity value
adopted in this paper, in the UVES scale is $\rm [Fe/H] = -1.50 \pm 0.19$.
Similarly, we transformed the values of $\rm [M/H]$ in Table~\ref{tab:met} to the UVES metallicity scale by using the theoretical relation
of Salaris, Chieffi \& Straniero (1993), also used in F99, this time with the new UVES $\rm [Fe/H]$ values
\begin{equation}
{\rm [M/H]_{UVES}} = {\rm [Fe/H]_{UVES}} + \log (0.638 \, f_{\alpha} + 0.362),
\end{equation}
\noindent where $f_{\alpha}$ is the enhancement factor of the $\alpha$-elements; in this case we use $f_{\alpha} = 10^{0.28}$, as suggested in
F99. The results are shown in column 5 of Table~\ref{tab:met}.
Since the slopes $S_{2.0}$ and $S_{2.5}$ do not depend on the adopted reddening, we use those values and the obtained value of $\rm [Fe/H]$
to find, for the reddened color of the RGB at the HB level, a value $(\bv)_{g} = 0.82$. This leads to a reddening value of $E(\bv) = 0.039$,
in good agreement with our adopted value.
Other methods to determine the reddening are based on the minimum-light colors of ab-type RR Lyrae stars (a description of the RR Lyrae population
and other variables is given in Section~\ref{sec:variables}). Indeed, over the phase range $0.5 < \phi < 0.8$, the intrinsic colors of RRab's
are fairly uniform (e.g., Preston \& Spinrad 1959; Preston 1964). Therefore, we also estimated the reddening using Sturch's method
(Sturch 1966; Walker 1990), a relation between color at minimum light, period, and (more weakly) metallicity, as described by Walker (1998),
where the reddening is calculated with the following relation:
\begin{equation}
E(\bv) = (\bv)_{\rm min} - 0.24 \, P - 0.056 \, {\rm [Fe/H]} - 0.036,
\end{equation}
\noindent where $(\bv)_{\rm min}$ is the color of the RRab star at minimum light, $P$ is the fundamental period
of the variable in days, and $\rm [Fe/H]$ is the metallicity in the Zinn \& West (1984; ZW84) scale. We used our result listed in
Table~\ref{tab:met} in the UVES scale, transformed to the ZW84 scale using the second-order polynomial relation provided by
Carretta et al. (2009), which gives a value of $\rm [Fe/H] = -1.57$.
By only considering stars with $\sigma(\bv)_{\rm min} < 0.06$~mag, we obtain a reddening value of $E(\bv) = 0.08$~mag, which is
slightly higher than our adopted value.
As shown by Guldenschuh et al. (2005), the $(V-I)_{\rm 0, min}$ colors of ab-type RR Lyrae stars are also remarkably uniform, with
$(V-I)_{\rm 0,min} = 0.58 \pm 0.02$~mag, irrespective of period, amplitude, and metallicity (Kunder et al. 2013). We calculated the
$V-I$ color at minimum light from our data, and on this basis we found a mean value of $E(V-I) = 0.07$~mag, or $E(\bv) = 0.06$~mag,
assuming $E(V-I) = 1.27 \, E(\bv)$~-- as obtained from eq.~1 in Dean et al. (1978), for $(\bv)_0 \approx 0.3$~mag (as appropriate
for RR Lyrae stars) and $E(\bv) \approx 0.05$~mag. This result is in excellent agreement with our adopted value.
It is worth noting that higher reddening values for \objectname{M72} have on occasion also been favored in the literature,
including for instance $E(\bv) = 0.07$~mag (de Santis \& Cassisi 1999) and $E(\bv)= 0.11$~mag (Rodgers \& Harding 1990).
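For reference, the arithmetic behind the scale conversions and Sturch's reddening relation above can be sketched as follows (illustrative only, with the values quoted in the text as inputs):
\begin{verbatim}
# Illustrative sketch of the metallicity-scale and reddening relations above.
import numpy as np

def feh_uves(feh_cg97):
    # Carretta et al. (2009) recalibration of the CG97 scale
    return 1.137 * feh_cg97 - 0.003

def mh(feh, f_alpha=10**0.28):
    # Salaris, Chieffi & Straniero (1993) global-metallicity relation
    return feh + np.log10(0.638 * f_alpha + 0.362)

def sturch_reddening(bv_min, period, feh_zw84):
    # Sturch's method as given above (period in days, ZW84 metallicity)
    return bv_min - 0.24 * period - 0.056 * feh_zw84 - 0.036

print(feh_uves(-1.32))        # ~ -1.50, the adopted UVES-scale [Fe/H]
print(mh(feh_uves(-1.32)))    # ~ -1.31, the adopted global metallicity
\end{verbatim}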
\begin{table*}[t]
\begin{center}
\footnotesize
\caption{\footnotesize{RGB parameters for NGC~6981}}
\begin{tabular}{lcccc}
\tableline\tableline
Parameter & ${\rm [Fe/H]_{CG97}}$ & ${\rm [M/H]}_{\rm CG97}$ & ${\rm [Fe/H]_{UVES}}$ & ${\rm [M/H]}_{\rm UVES}$ \\
\tableline
$(\bv)_{0,g} = 0.781$ & $-1.42$ & $-1.21$ & $-1.61$ & $-1.42$\\
$\Delta V_{1.1} = 1.96$ & $-1.29$ & $-1.10$ & $-1.47$ & $-1.28$\\
$\Delta V_{1.2} = 2.39$ & $-1.38$ & $-1.18$ & $-1.57$ & $-1.38$\\
$S_{2.0} = 6.07$ & $-1.03$ & $-1.23$ & $-1.18$ & $-0.98$\\
$S_{2.5} = 5.59$ & $-1.48$ & $-1.23$ & $-1.69$ & $-1.49$\\
Mean & $-1.32 $ & $-1.19 $ & $-1.50$ & $-1.31$\\
\tableline
\end{tabular}
\label{tab:met}
\end{center}
\end{table*}
\begin{center}
\begin{deluxetable*}{lcccccccccl}
\tablewidth{0pc}
\tabletypesize{\scriptsize}
\tablecaption{Total census of variable star population in NGC~6981}
\tablehead{\colhead{ID} & \colhead{RA (J2000)} & \colhead{DEC (J2000)} & \colhead{$P$} & \colhead{$A_B$} & \colhead{$A_V$} & \colhead{$A_I$} & \colhead{$\langle B \rangle$} & \colhead{$\langle V \rangle$} & \colhead{$\langle I \rangle$} & \colhead{Comments} \\
\colhead{} & \colhead{(h:m:s)} & \colhead{(deg:m:s)} & \colhead{(days)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{} }
\startdata
V1 & $20:53:31.11$& $-12:33:11.6$ & $0.619784$ & $0.81$ & $0.62$ & $0.37$ & $17.28$ & $16.87$ & $16.28$ &RR{\em ab}\ \\
V2 & $20:53:34.56$& $-12:29:02.0$ & $0.465256$ & $1.58$ & $1.20$ & $0.75$ & $17.18$ & $16.87$ & $16.45$ & RR{\em ab}\ \\
V3 & $20:53:24.57$& $-12:33:17.0$ & $0.497602$ & $1.18$ & $0.88$ & $0.58$ & $17.11$ & $16.79$ & $16.30$ & RR{\em ab}\ \\
V4 & $20:53:20.80$& $-12:31:42.4$ & $0.552487$ & $1.14$ & $0.91$ & $0.59$ & $17.27$ & $16.89$ & $16.33$ &RR{\em ab}\ \\
V5 & $20:53:25.58$& $-12:32:41.4$ & $0.507262$ & $1.32$ & $0.96$ & $0.59$ & $17.09$ & $16.74$ & $16.26$ &RR{\em ab}\ \\
V7 & $20:53:27.77$& $-12:31:20.9$ & $0.524683$ & $1.16$ & $0.87$ & $0.55$ & $17.19$ & $16.84$ & $16.35$ &RR{\em ab}\ \\
V8 & $20:53:27.58$& $-12:30:47.7$ & $0.568377$ & $1.05$ & $0.81$ & $0.54$ & $17.27$ & $16.88$ & $16.34$ &RR{\em ab}\ \\
V9 & $20:53:28.66$& $-12:31:27.8$ & $0.602929$ & $0.88$ & $0.64$ & $0.40$ & $17.29$ & $16.88$ & $16.30$ &RR{\em ab}\ \\
V10 & $20:53:24.85$& $-12:33:30.8$ & $0.558182$ & $1.10$ & $0.90$ & $0.58$ & $17.25$ & $16.89$ & $16.34$ &RR{\em ab}\ \\
V11 & $20:53:32.05$& $-12:32:51.1$ & $0.520638$ & $1.12$ & $0.82$ & $0.46$ & $17.23$ & $16.87$ & $16.35$ &RR{\em ab}, Bl \\
V12 & $20:53:28.60$& $-12:32:38.4$ & $0.287861$ & $0.61$ & $0.47$ & $0.28$ & $17.00$ & $16.72$ & $16.32$ &RR{\em c}\ \\
V13 & $20:53:28.91$& $-12:32:01.9$ & $0.542034$ & $1.09$ & $0.86$ & $0.53$ & $16.94$ & $16.56$ & $16.03$ &RR{\em ab}\ \\
V14 & $20:53:27.17$& $-12:31:42.6$ & $0.607213$ & $0.86$ & $0.65$ & $0.39$ & $17.17$ & $16.76$ & $16.18$ &RR{\em ab}, Bl \\
V15 & $20:53:23.76$& $-12:32:38.6$ & $0.540480$ & $1.19$ & $0.85$ & $0.56$ & $17.21$ & $16.87$ & $16.33$ &RR{\em ab}, Bl \\
V16 & $20:53:27.86$& $-12:32:36.6$ & $0.575212$ & $1.01$ & $0.77$ & $0.49$ & $17.18$ & $16.79$ & $16.23$ &RR{\em ab}\ \\
V17 & $20:53:28.23$& $-12:32:59.7$ & $0.573540$ & $1.07$ & $0.84$ & $0.53$ & $17.24$ & $16.86$ & $16.32$ &RR{\em ab}\ \\
V18 & $20:53:26.22$& $-12:32:54.4$ & $0.535576$ & $1.01$ & $0.83$ & $0.56$ & $16.91$ & $16.62$ & $16.18$ &RR{\em ab}\ \\
V20 & $20:53:24.22$& $-12:32:03.7$ & $0.595047$ & $0.95$ & $0.71$ & $0.44$ & $17.28$ & $16.88$ & $16.29$ &RR{\em ab}\ \\
V21 & $20:53:22.40$& $-12:32:05.5$ & $0.531161$ & $1.28$ & $1.07$ & $0.70$ & $17.27$ & $16.92$ & $16.39$ &RR{\em ab}\ \\
V23 & $20:53:21.15$& $-12:30:21.9$ & $0.585121$ & $0.85$ & $0.67$ & $0.46$ & $17.30$ & $16.91$ & $16.33$ &RR{\em ab}, Bl \\
V24 & $20:53:27.18$& $-12:32:41.9$ & $0.327127$ & $0.43$ & $0.29$ & $0.14$ & $16.83$ & $16.42$ & $15.87$ &RR{\em c}\ \\
V25 & $20:53:18.90$& $-12:31:13.6$ & $0.353350$ & $0.54$ & $0.42$ & $0.27$ & $17.11$ & $16.80$ & $16.35$ &RR{\em c}\ \\
V27 & $20:53:42.60$& $-12:36:06.4$ & $0.673871$ & $1.37$ & $1.11$ & $0.66$ & $16.95$ & $16.62$ & $16.10$ &RR{\em ab}\ \\
V28 & $20:53:32.26$& $-12:30:55.7$ & $0.567251$ & $0.82$ & $0.69$ & $0.48$ & $17.27$ & $16.88$ & $16.34$ &RR{\em ab}, Bl \\
V29 & $20:53:25.77$& $-12:33:11.2$ & $0.597448$ & $1.04$ & $0.87$ & $0.54$ & $17.23$ & $16.85$ & $16.29$ &RR{\em ab}\ \\
V31 & $20:53:28.20$& $-12:31:42.7$ & $0.542349$ & $0.74$ & $0.69$ & $0.51$ & $17.20$ & $16.80$ & $16.27$ &RR{\em ab}, Bl \\
V32 & $20:53:18.84$& $-12:33:01.2$ & $0.528315$ & $1.26$ & $0.98$ & $0.62$ & $17.25$ & $16.91$ & $16.39$ &RR{\em ab}, Bl \\
V35 & $20:53:43.56$& $-12:31:52.1$ & $0.543749$ & $1.05$ & $0.84$ & $0.52$ & $17.25$ & $16.89$ & $16.37$ &RR{\em ab}, Bl \\
V36 & $20:53:26.90$& $-12:32:17.6$ & $0.582612$ & $1.02$ & $0.70$ & $0.46$ & $17.09$ & $16.70$ & $16.12$ &RR{\em ab}, Bl \\
V39 & $20:53:41.06$& $-12:28:15.7$ & $0.426785$ & $0.68$ & $0.46$ & $0.29$ & $17.73$ & $17.30$ & $16.70$ &RR{\em ab}, f?, Bl \\
V42 & $20:53:28.86$& $-12:32:16.5$ & $1.01109 $ & $0.47$ & $0.36$ & $0.17$ & $15.53$ & $13.85$ & $12.14$ &?? \\
V43 & $20:53:27.35$& $-12:32:22.1$ & $0.283497$ & $0.56$ & $0.49$ & $0.30$ & $16.97$ & $16.74$ & $16.42$ &RR{\em c}\ \\
V44 & $20:53:28.02$& $-12:32:29.4$ & $0.557440$ & $1.16$ & $0.89$ & $0.60$ & $17.25$ & $16.87$ & $16.32$ &RR{\em ab}\ \\
V45 & $20:53:28.66$& $-12:32:20.0$ & $0.364981$ & $0.64$ & $0.49$ & $0.28$ & $17.04$ & $16.77$ & $16.38$ &RR{\em c}\ \\
V46 & $20:53:28.96$& $-12:32:26.2$ & $0.286682$ & $0.37$ & $0.26$ & $0.15$ & $16.95$ & $16.67$ & $16.27$ &RR{\em c}\ \\
V47 & $20:53:29.73$& $-12:32:26.0$ & $0.649075$ & $0.32$ & $0.25$ & $0.18$ & $17.30$ & $16.86$ & $16.24$ &RR{\em ab}\ \\
V48 & $20:53:26.46$& $-12:32:27.0$ & $0.639765$ & $0.59$ & $0.46$ & $0.28$ & $17.27$ & $16.84$ & $16.24$ &RR{\em ab}\ \\
V49 & $20:53:28.27$& $-12:32:10.7$ & $0.578272$ & $1.00$ & $0.79$ & $0.50$ & $17.14$ & $16.73$ & $16.15$ &RR{\em ab}, Bl \\
V50 & $20:53:28.25$& $-12:31:58.2$ & $0.488881$ & $1.29$ & $0.93$ & $0.61$ & $17.00$ & $16.61$ & $16.04$ &RR{\em ab}\ \\
V51 & $20:53:28.42$& $-12:32:31.9$ & $0.548599$ & $0.90$ & $0.63$ & $0.31$ & $17.04$ & $16.58$ & $15.93$ &RR{\em ab}, Bl \\
V52 & $20:53:27.97$& $-12:32:02.0$ & $0.698690$ & $0.74$ & $0.61$ & $0.36$ & $17.17$ & $16.72$ & $16.09$ &RR{\em ab}\ \\
V53 & $20:53:27.00$& $-12:32:16.3$ & $0.652118$ & $0.57$ & $0.45$ & $0.25$ & $17.31$ & $16.87$ & $16.23$ &RR{\em ab}, Bl \\
V54 & $20:53:28.64$& $-12:32:02.1$ & $0.077343$ & $0.38$ & $0.22$ & $0.16$ & $18.33$ & $17.95$ & $17.49$ & SX Phe \\
V55 & $20:53:24.46$& $-12:31:27.1$ & $0.047033$ & $0.22$ & $0.17$ & $0.09$ & $19.65$ & $19.34$ & $18.95$ & SX Phe\\
V56 & $20:53:28.92$& $-12:33:05.8$ & $0.048154$ & $....$ & $....$ & $....$ & $.....$ & $.....$ & $.....$ & SX Phe \\
V57 & $20:53:27.36$& $-12:32:13.1$ & $0.335054$ & $0.44$ & $0.32$ & $0.17$ & $17.02$ & $16.63$ & $16.06$ &RR{\em c}\ \\
V59 & $20:53:48.89$& $-12:36:44.8$ & $0.603277$ & $0.91$ & $0.66$ & $0.44$ & $17.24$ & $16.86$ & $16.29$ &RR{\em ab}\ \\
V60 & $20:53:46.68$& $-12:27:33.0$ & $0.482069$ & $1.73$ & $1.25$ & $0.77$ & $17.23$ & $16.94$ & $16.48$ &RR{\em ab}\ \\
\enddata
\label{tab:variables}
\end{deluxetable*}
\end{center}
\subsection{RGB Bump}\label{sec:RGB}
An important feature of the RGB is the presence of a well-defined peak in its luminosity function (LF). This peak is known as the RGB
bump and was first predicted by Thomas (1967) and Iben (1968) as a consequence of the encounter of the H-burning shell and the chemical
composition discontinuity left behind after the convective envelope reaches the maximum depth during the first dredge-up. This produces an
increase in the H-abundance, which causes a sudden drop in the mean molecular weight $\mu$. Since the efficiency of the H-burning
shell is proportional to a high power of $\mu$, the luminosity decreases (Cassisi et al. 2011). Evolution of stars in this stage slows down
while they try to adjust to the new conditions, resulting in an overdensity in the LF. The RGB bump, first detected by King et al. (1985),
is known to be located at higher luminosities for more metal-poor GCs (e.g., Fusi Pecci et al. 1990; Recio-Blanco \& Laverny 2007).
The position of the RGB bump is a key quantity to study the evolutionary properties of RGB stars (e.g., Catelan 2007, and references therein).
To determine the position of the bump, we followed a method similar to the one described in Zoccali et al. (2001).
We select the RGB stars in the CMD, restricting ourselves to those stars within $r \leq 316\arcsec$, to avoid field
contamination (Figure~\ref{historgb}, {\em top}). Although it is possible that some field stars are present in the selected RGB sample,
these should be sufficiently few that our final result is not affected.
The luminosity function constructed for the selected stars is shown in Figure~\ref{historgb} ({\em middle}). We constructed an average-shifted
histogram (ASH, Scott 1985) by combining 8 different histograms, each calculated in an interval in magnitude of $\Delta V = 4$ and with
a bin width of 0.25 but with different origin values $x_0 = 14, 14.2, 14.4, 14.6, 14.8$. This method of smoothing a dataset allows us to
determine a significant peak in the data which does not depend on the choice of bin width and origin. Figure~\ref{historgb} ({\em bottom})
displays the cumulative distribution. The change in the slope corresponds to the position of the RGB bump. Based on these plots,
the location of the bump was measured at $V_{\rm bump} = 16.6$.
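To make the smoothing procedure concrete, the following minimal Python sketch illustrates the average-shifted-histogram idea used above;
the routine name, the magnitude limits, and the reliance on NumPy are our own illustrative assumptions and not part of the original reduction.
\begin{verbatim}
import numpy as np

def average_shifted_histogram(v_mags, bin_width=0.25, n_shifts=8,
                              v_min=14.0, v_max=18.0):
    # Average several ordinary histograms that share the same bin width
    # but have origins offset by bin_width / n_shifts (Scott 1985); the
    # result is evaluated on a fine grid so the peak is easy to locate.
    step = bin_width / n_shifts
    grid = np.arange(v_min, v_max, step)
    ash = np.zeros_like(grid)
    for k in range(n_shifts):
        origin = v_min - bin_width + k * step
        edges = np.arange(origin, v_max + bin_width, bin_width)
        counts, _ = np.histogram(v_mags, bins=edges)
        idx = np.clip(np.digitize(grid, edges) - 1, 0, len(counts) - 1)
        ash += counts[idx]
    return grid, ash / n_shifts
\end{verbatim}
A peak in the returned array then marks the bump location, analogous to the $V_{\rm bump} = 16.6$ measured above.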
F99 studied the relation between $\Delta V^{\rm bump}_{\rm HB}$ (difference in magnitude between the RGB bump and the ZAHB level)
and the metallicity of the cluster, confirming a strong correlation between them. The relation is shown in Figure~\ref{bumpFeH}, for
$\rm [Fe/H]$ and global metallicity $\rm [M/H]$ in the CG97 scale. The data were taken from Table~5 in F99. The solid circle
represents the value for \objectname{NGC~6981} found in this study, with $\Delta V^{\rm bump}_{\rm HB} = -0.29$. Our result is
clearly consistent with the correlation found in F99.
Another useful parameter involving the RGB bump is $R_{\rm bump}$, defined in Bono et al. (2001).
This is the ratio between the number of RGB stars in the bump region (i.e., at $V_{\rm bump} \pm 0.4$) and the number of RGB stars
in the interval $V_{\rm bump} + 0.5 < V < V_{\rm bump} + 1.5$.
This parameter is an indicator of the occurrence of deep mixing before the
H-shell encounters the chemical discontinuity left over by the first dredge-up, which would have an effect on the time spent in the bump region.
In Figure~\ref{Rbump} we plot the data taken from Bono et al. for $R_{\rm bump}$ as a function of global metallicity.
For \objectname{NGC~6981}, we obtained a value of $R_{\rm bump} = 0.61 \pm 0.10$ (solid circle in Figure~\ref{Rbump}). Bono et al. found
a large value of $R_{\rm bump}$ for \objectname{NGC~6981}, mostly due to a small number of bump stars ($N_{\rm bump}= 40$, see their Table~1).
We found instead $N_{\rm bump}= 58$, which leads to a slightly smaller $R_{\rm bump}$ value, more consistent~-- but still not completely so~--
with other clusters with similar metallicity. As found by Bono et al., this plot shows that the bulk of metal-poor clusters do not experience
deep mixing before stars reach the bump.
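For clarity, with $N_{\rm bump} = 58$ the quoted ratio $R_{\rm bump} = 0.61$ implies roughly $58/0.61 \approx 95$ RGB stars
in the normalization interval $V_{\rm bump} + 0.5 < V < V_{\rm bump} + 1.5$.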
\subsection{HB Morphology}\label{sec:HB}
The HB of NGC~6981 (Figure~\ref{ZAHB}) does not show any prominent feature, such as an extended HB or gaps in the blue part, so it can be
well described by the HB index $\mathcal{L} = (\mathcal{B} - \mathcal{R})/(\mathcal{B} + \mathcal{V} + \mathcal{R})$ (Lee et al. 1994), where $\mathcal{B}$, $\mathcal{R}$, and
$\mathcal{V}$ correspond to the numbers of blue, red, and variable (RR Lyrae-type)
stars on the HB. To obtain the counts in each case, we selected stars within a radius of
100\arcsec, to avoid the contamination of field stars, particularly in the red HB; however, the inclusion of a couple of stars from the RGB,
scattered by photometric errors, might be possible. This yields a value of $\mathcal{L} = +0.02$, slightly less than the value
of $\mathcal{L} = +0.14$ listed by Mackey \& van den Bergh (2005).
Another significant parameter is $\mathcal{P}_{\rm HB} = (\mathcal{B}{\rm 2} - \mathcal{R})/(\mathcal{B} + \mathcal{V} + \mathcal{R})$ (Buonanno 1993), where $\mathcal{B}{\rm 2}$
corresponds to the number of stars in the HB with dereddened colors in the interval $-0.02 < (\bv)_0 < 0.18$. This parameter
was introduced in order to overcome the saturation of the $\mathcal{L}$ parameter for extreme blue HB morphologies. With the same constraints
as before, we found $\mathcal{P}_{\rm HB} = -0.27$ for M72.
\subsection{BSS Distribution}\label{sec:BSS}
BSS are a subpopulation present mostly in the cores of GCs. They appear as a brighter and hotter extension of the MS in the CMD,
beyond the cluster's TO point, suggesting the presence of stars more massive than normal MS stars.
There are currently two main scenarios for their origin: mass transfer
between primordial binaries and direct collisions of stars in dense environments (see Ferraro \& Lanzoni 2009 for a recent review). These two
scenarios are non-exclusive, and can even coexist (Ferraro et al. 2009).
Because of their high masses compared to normal MS stars, BSS are expected to be highly concentrated in the cluster centers.
In fact, the radial distribution of BSS is typically bimodal, which is explained in terms of mass segregation and the nature
of their progenitors (Mapelli et al. 2006; recent examples are found in Carraro \& Seleznev 2011 and Salinas et al. 2012),
but in some cases, the radial distribution of BSS might be flat (see Beccari et al. 2011 and Contreras Ramos et al. 2012 for
examples), indicating the presence of younger, not-yet-segregated populations.
The BSS population of \objectname{NGC~6981} is clearly identified in Figure~\ref{BSScmd}, framed by the selection box applied. We also selected
a subsample from the RGB in order to compare both populations. The boxes were chosen in order to avoid the regions of the CMD with spurious
blends. For the determination of the cumulative radial distribution of BSS, we chose as a limit the tidal radius at $r_t =7.4\arcmin$,
from Harris (2010; see Section~\ref{sec:extratidal}). The extratidal component found in Section~\ref{sec:extratidal} contains mostly MS stars
and it is too small to study the BSS. Within the tidal radius, we found 39 BSS and 330 RGB stars. The cumulative radial distribution
for both populations is displayed in Figure~\ref{BSScum}. This provides direct evidence of the more centrally concentrated distribution
of the BSS, supporting mass segregation. A Kolmogorov-Smirnov test reveals that the probability of the two populations being drawn
from the same parent distribution is less than 0.1\%.
\section{Relative Age Determination: Comparison with M3}\label{sec:age}
To obtain a relative age for \objectname{NGC~6981},
we compared this study with unpublished photometry for M3. This cluster has a similar
metallicity (according to the Dec. 2010 version of the Harris 1996 catalog, $\rm [Fe/H]_{\rm M3} = -1.50$) and its CMD morphology resembles
that of \objectname{NGC~6981}. The ridgeline and the TO point of M3 were determined using the same method described in Section~\ref{sec:cmd}
for \objectname{NGC~6981}. The $V$ magnitude of the HB was obtained using the Victoria-Regina ZAHB models, in the same way as with M72,
assuming $[\rm Fe/H] = -1.41$ and $[\alpha/ \rm Fe] = +0.3$. For M3, we determined the TO magnitude level and the HB level to be
at $V_{\rm TO} = 19.08 \pm 0.1$~mag and $V_{\rm HB} = 15.66 \pm 0.02$~mag, respectively.
In the case of \objectname{NGC~6981}, we recall from Section~\ref{sec:cmd} the values $V_{\rm TO} = 20.31 \pm 0.1$~mag and
$V_{\rm ZAHB} = 16.89 \pm 0.02$~mag.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{fig13.pdf}
\caption{Positions of the detected variables in the CMD of NGC~6981.
RRab stars are shown as red circles, c-type RR Lyrae as blue circles. V54 and V55, two SX Phe stars, are shown as plus signs.
The green plus sign close to the RGB tip corresponds to the candidate variable V42 (see Sect.~\ref{sec:commentvar}
for further details).}
\label{CMDvar}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.45\textwidth]{fig14a.pdf}
\includegraphics[width=.45\textwidth]{fig14b.pdf}
\includegraphics[width=.45\textwidth]{fig14c.pdf}
\includegraphics[width=.45\textwidth]{fig14d.pdf}
\caption{From top to bottom the light curves of variable stars
V27, V35, V39, and V44 are shown. For each individual star, from
top to bottom, the $B$, $V$, and $I$ light curves are displayed.}
\label{figvar1}
\end{center}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{fig15.pdf}
\caption{As in Figure~\ref{figvar1}, but for V42.
The period used is 1.011089~d.}
\label{V42}
\end{figure}
\begin{figure}
\includegraphics[width=.45\textwidth]{fig16a.pdf}
\includegraphics[width=.45\textwidth]{fig16b.pdf}
\includegraphics[width=.45\textwidth]{fig16c.pdf}
\includegraphics[width=.45\textwidth]{fig16d.pdf}
\caption{As in Figure~\ref{figvar1}, but for V45, V51, V52, and V53.}
\label{figvar2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{fig17a.pdf}
\includegraphics[width=.45\textwidth]{fig17b.pdf}
\caption{As in Figure~\ref{figvar1}, but for the SX Phe stars V54-V55.}
\label{figSXPHE}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{fig18a.pdf}
\includegraphics[width=.45\textwidth]{fig18b.pdf}
\includegraphics[width=.45\textwidth]{fig18c.pdf}
\caption{As in Figure~\ref{figvar1}, but for the newly discovered RR Lyrae stars (from top) V57, V59, and V60.}
\label{fignewvar}
\end{figure}
The difference in magnitude between the HB and TO levels is related to the age according to the following relation:
\begin{equation}
\Delta \log t_9 = (0.44 + 0.04\,{\rm [Fe/H]})\,\Delta
\end{equation}
\noindent \citep{buoea93}, where
$\Delta = \left(\Delta V_{\rm TO}^{\rm HB}\right)_{\rm GC1} - \left(\Delta V_{\rm TO}^{\rm HB}\right)_{\rm GC2}$,
and $t_9$ is the age in Gyr. For M3 we obtained
$\Delta V_{\rm TO}^{\rm HB} = 3.44$, whereas for \objectname{NGC~6981} we found $\Delta V_{\rm TO}^{\rm HB} = 3.42$. This leads to an age
difference of $\Delta {\log} \, t_9 \sim 0.008$, which translates to about 0.2~Gyr, around 12~Gyr
(using $\Delta t_9 = t_9 \, \ln (10) \, \Delta {\log} \, t_9$),
in the sense that M3 is formally older~-- but essentially implying that M3 and \objectname{NGC~6981}
are coeval, to within the errors.
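Numerically, with $\rm [Fe/H] \simeq -1.50$ the slope of the above relation is $0.44 + 0.04 \times (-1.50) = 0.38$, so that
$\Delta = 3.44 - 3.42 = 0.02$~mag gives $\Delta \log t_9 \approx 0.38 \times 0.02 \approx 0.008$; for $t_9 \approx 12$~Gyr this
corresponds to $\Delta t_9 \approx 12 \times \ln(10) \times 0.008 \approx 0.2$~Gyr, as quoted above.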
Another way to approach this issue is to compare directly the M3 and M72 CMDs. This is done in Figure~\ref{compM3}, where the ridgeline obtained for
\objectname{NGC~6981} is overplotted on the M3 photometry.
Both the data and the ridgeline have been dereddened, using the $E(\bv)$ values from the Dec. 2010 version of the Harris (1996)
catalog. The ridgeline of \objectname{NGC~6981} has been shifted 1.3~mag brighter, to account for the difference in distance
modulus with respect to M3. The RGBs of the two clusters match reasonably well, confirming that they have similar
metallicities. Also, the match in luminosity of their subgiant branches confirms that the ages of the two clusters
are basically indistinguishable.
\section{Variable Stars Revisited}\label{sec:variables}
Bramich et al. (2011, hereafter B11) published a revised catalog of variable stars in \objectname{NGC~6981}, based on $V$-, $R$-,
and $I$-band photometry. This was the first variability study for this cluster in $\sim 40$ years, using CCD imaging and differential
image analysis techniques. It allowed the discovery of 14 new variable stars, leading to a total census of 43 variable stars,
including 40 RR Lyrae and 3 SX Phoenicis (see their Table~2).
With the advantage of having a longer time baseline, we have revisited the variability in \objectname{NGC~6981}. We discovered 3 new RR~Lyrae
stars, including two fundamental-mode (RRab) and one first-overtone (RRc) pulsator. Following the B11 notation, we catalog these stars as
V57, V59, and V60 (V58 is discussed in the text below). The remainder of the variables found in B11 were recovered (including the SX Phe stars),
and in some cases, we have improved the phased light curve and the period determination. Concerning the Blazhko (1907) effect,
which is characterized by a long-term modulation of the light curve amplitude and shape,
we confirmed the presence of the effect, claimed by B11, in V11, V14, V15, V23, V28, V31, V32, V36, V49; the effect was also present
in V35, V39, V51, and V53. Although V10 was also claimed to present the Blazhko effect, we found no evidence of it in the phased light curve.
The coordinates, periods and variability types are provided in Table~\ref{tab:variables}, and Figure~\ref{CMDvar} displays the positions
of the variables in the CMD. In some cases, B11 could not study stars with previous claims of variability, due to technical limitations.
We provide light curves for these stars in Figures~\ref{figvar1} through \ref{fignewvar}, with comments on a few individual stars given
in the next section.
After this paper had been submitted, Skottfelt et al. (2013) announced the discovery of a new variable star in M72, namely V57.
The classification and period of the variable matches our findings, but there is a discrepancy in the star's coordinates (they provide
$\rm \alpha=20^h53^m27\fs12$, $\rm \delta=-12\degr32\arcmin13.9\arcsec$). Moreover, they report another (unclassified) variable, V58, with a
period of 0.285~d, in a position near V57 in our catalog ($\rm \alpha=20^h53^m27\fs38$, $\rm \delta=-12\degr32\arcmin13.3\arcsec$). The reason
for this discrepancy is unknown; however, we note that the coordinates provided in our work match reasonably well those provided by B11.
To avoid confusion regarding the variable census, we kept their catalog identification for this variable.
\subsection{Comments on Individual Stars}\label{sec:commentvar}
\textbf{V27 and V35:} These variables were detected by Dickens \& Flinn (1972), being originally classified as ab-type RR Lyrae; however,
in the study of B11 they appeared out of their FOV, so even though they include them in their final catalog, no new periods could be
determined. We confirmed the RRab type for V27 and V35, with periods of 0.673871~d and 0.543749~d, respectively (Fig.~\ref{figvar1}),
consistent with the periods listed by Dickens \& Flinn. We also note that V35 displays the Blazhko (1907) effect.
\textbf{V38 and V42:} According to B11, Sawyer (1953) and Dickens \& Flinn
(1972) discovered that these stars were variable, but they were
unable to provide a light curve because V38 is in the proximity of a saturated star, and V42 was itself
saturated. In our analysis, V38 shows no sign of variability. In the case of V42, the light curve was phased with a period of 1.01109~d, as
shown in Figure~\ref{V42}. In the CMD, the star is located at the red extension of the bright RGB, with a position consistent with that
expected for the RGB tip, if the star is a bona-fide cluster member.
Such short periods for bright RGB stars are quite unusual. The proximity of the period to 1~d renders the derived
period suspicious. On the other hand, barring saturation effects, the variability amplitude, of order 0.5~mag in
$B$, 0.3~mag in $V$, and 0.2~mag in $I$, seems quite significant.
\textbf{V39:} This star shows no variability in B11, mostly due to outlier photometric measurements. In this study,
we found evidence of variability, and its light curve was phased with a period of 0.426785~d. The morphology of the light curve
suggests an RRab classification. The Blazhko (1907) effect might also be present. However, we note that the star is fainter than the bulk of
the cluster's RR Lyrae population, with $V = 17.95$ (see Fig.~\ref{CMDvar}). Thus, V39 may be a field variable, and we have accordingly not included
this star in our calculations of reddening and HB morphology parameters.
\textbf{V44, V45, V52 and V53:} B11 do not provide any period for these stars, since each of them suffered from poor phase coverage,
due to the closeness of a saturated star affecting the light curve. We present here the improved light curves in Figures~\ref{figvar1} and
\ref{figvar2}, and their derived periods are given in Table~\ref{tab:variables}.
\textbf{V51:} B11 listed a period of 0.357335~d, although they noted that the period is not reliable due to poor phase coverage
and blending. In fact, the light curve they provided does not show the maximum. Here we provide a more reliable period
of 0.548599~d, and our well-sampled light curve is shown in Figure~\ref{figvar2}.
\subsection{SX Phoenicis Stars}\label{sec:SXP}
The number of SX Phe stars in GCs has increased markedly in recent years. These variables are of particular interest because they are
located in the BSS zone in the CMD. Recently, Cohen \& Sarajedini (2012) compiled an updated catalog of SX Phe stars in GCs,
establishing a period-luminosity relation for the sample. In the future, it is hoped that the pulsation properties of SX Phe
stars will help understand BSS formation and evolution.
In our study, we have recovered all 3 SX Phe stars found in B11. Using Period04 (Lenz \& Breger 2005) to analyze the Fourier
diagrams, the periods found correspond to the frequency of the largest amplitude oscillation, and the pulsation mode could not be identified
in two cases, V55 and V56, with $V$ amplitudes of 0.22 and 0.08~mag, respectively. Since there are no non-radial pulsators with $A_V > 0.15$
(Cohen \& Sarajedini 2012), V55 is likely a radial pulsator. However, the argument does not go in the other direction: double radial-mode
pulsators may have $A_V < 0.15$, and even some fundamental-mode SX Phe may have small amplitudes.
In the case of V54, we detected two significant frequencies, with a ratio of $f_1/f_2 \approx 0.78$, which are likely to correspond to the
fundamental mode ($f_2$) and the first radial overtone ($f_1$; Olech et al. 2005). This result is consistent with what B11 found for these
stars.
The periods of V54 and V56 are slightly different from those of B11, while for V55 we obtained good agreement (see Figure~\ref{figSXPHE} for the
phased light curves). It should be noted that in the case of V56 we could not phase the light curve convincingly enough,
and the period may not be reliable. The amplitude found by Period04, $A_V \approx 0.08$~mag, is comparable to the error in the photometry.
To close, we note that V54 appears to be slightly brighter than the limits of the BSS region, as defined by B11 (based on Harris 1993).
\section{Summary}\label{sec:conclu}
We present the most comprehensive study of the CMD and variable star content of the GGC \objectname{M72} to date, based on new and
archival $BVI$ CCD images. Our CMD reaches almost four magnitudes below the turn-off level, which allowed us to conclude that M72
has the same age as M3 (which has a similar metallicity and HB morphology),
to within the errors. Based on the measured RGB color and slope, we infer that the cluster has a metallicity
${\rm [Fe/H]} \simeq -1.50$ in the new UVES scale. We firmly detect the cluster's blue straggler population, which is found to be
more centrally concentrated than the RGB component. We also find evidence of extratidal cluster stars being present out to
$r \approx 14.1\arcmin$, or about twice M72's tidal radius, and speculate that tidal tails associated with the
cluster may exist. Finally, we revisit the variable star content of the cluster, recovering all previous known variables, including
three SX Phe stars, and discovering three previously unknown RR Lyrae (1 c-type and 2 ab-type).
\acknowledgments
We thank Alistair Walker for his help with the database
and the anonymous referee for some comments that helped improve this manuscript.
This project is supported by the Chilean Ministry for the
Economy, Development, and Tourism's Programa Iniciativa Cient\'{i}fica
Milenio through grant P07-021-F, awarded to The Milky Way Millennium
Nucleus; by the BASAL Center for Astrophysics and Associated Technologies
(PFB-06); by Proyecto Fondecyt Regular \#1110326; and by Proyecto Anillo
ACT-86. PA acknowledges the support by ALMA-CONICYT project \#31110002.
MZ acknowledges support by Fondecyt Regular 1110393 and by the John
Simon Guggenheim Memorial Foundation Fellowship. HAS thanks the U.S. National
Science Foundation for support under grants AST 0607249 and 0707756.
\section{Introduction}
\emph{``Everyone is an artist."} --- Joseph Beuys.
Being an artist means a fundamental ability to create and be creative, with productive imaginations, specialized experiences and fantastic inspirations. Rome wasn't built in a day, but your childhood dreams can be!
With the exponential evolution of generative models\cite{sty3, GLIDE, DDIM, Glow, BigGAN, Palette, DiffGans, UNet, ScoreDiff}, the focus of research on text-to-image synthesis has gradually shifted from GANs to diffusion models~\cite{GLIDE, RiFeGAN, ZeroShotGen, ctrlGAN, MirrorGAN, DFGAN, TIGAN, VQGAN-CLIP, PPGAN},
which brings us closer to the artist's dream of producing creative, attractive, and fantastic image creations.
More inspiringly, given only text together with classifier~\cite{guid_cls} or classifier-free~\cite{guid_cls_free} guidance, large-scale text-to-image models~\cite{Dalle2, CogView, ImgGen, Parti, ERNIE}, \eg, the Stable Diffusion model~\cite{LDM}, make it possible to synthesize high-resolution images with rich details and various characteristics, meeting our diverse personalized requirements.
Despite yielding impressive images, those models have been trained on large-scale image collections at prohibitive computational cost, and often still need many words to depict a desirable image. In particular,
they can be overwhelmed by words describing new concepts, styles or object entities that constantly emerge; unfortunately, re-training those large-scale models is far from trivial.
\textbf{Related work.}
To alleviate this problem, there are only two recent attempts Textual Inversion (TI)~\cite{TI} and DreamBooth~\cite{DreamBooth} (in \cref{fig:exp_cmp}).
They try to teach the pre-trained large-scale text-to-image models a new concept as new words from 3-5 images with text guidance.
DreamBooth~\cite{DreamBooth} employs the fine-tuning strategy on a pre-trained text-to-image model and
learns to bind a unique class-specific pseudo-word with that new concept.
TI~\cite{TI} learns an embedding as pseudo-words $S_*$ to represent the concepts in the input images by prompt-tuning.
Even though they present a compelling potential for image generation, they also suffer from some typical limitations (\cref{tab:cmp}). The fine-tuning strategy in DreamBooth not only requires tremendous computing (over 40GB VRAM), but also severely over-fits the training set with catastrophic forgetting. Thus, it suffers from context-appearance entanglement and the generated images are monotonous (in \cref{fig:exp_cmp}). Alternatively, the prompt-tuning in TI is energy-saving, but its generated images suffer from heavy artifacts and distortions, with low diversity.
In the training phase, they require that any input noise guided by the pseudo-word should generate images as similar as possible to the training images. This makes the pseudo-word attract too much attention, so that it obscures almost all additional control signals, which leads to poor controllability and low diversity.
Besides, both of them have to collect 3-5 input images as a reference, which usually share similar features and depict the same object, often as multi-view images of that object. Preparing such a set is not always convenient, and if the selected images are too diverse they invite unexpected challenges to the models.
\textbf{Our work.}
In this paper, we propose the task of one-shot text-to-image generation, which uses only one image to teach the model to represent its characteristics with a pseudo-word, and this word should drive the generation of diverse and highly controllable images, just like the model's original words.
It effectively produces high-quality and diverse images with one given reference image (not a reference set) and embraces various new renditions in different contexts.
DreamArtist employs a learning strategy of contrastive prompt-tuning (CPT).
Rather than describing the entire image with just positive embeddings in conventional prompt-tuning,
our CPT jointly trains a paired positive and negative embedding ($S^p_*$ and $S^n_*$) with pre-trained fixed text encoder $\mathcal{B}$ and denoising u-net $\epsilon_\theta$.
$S^p_*$ aggressively learns the characteristics of the reference image to drive diversified generation, while $S^n_*$ introspects in a self-supervised manner and rectifies the inadequacies of the positive prompt in reverse. Due to the introduction of $S^n_*$, $S^p_*$ does not need to be forcibly aligned with the input image.
Thus $S^p_*$ does not need to attract as much attention as TI
to align the generated and training images,
which brings diversity and high controllability.
Without excessive attention to $S^p_*$, the characteristics included in the additional descriptions will not be obscured and will be clearly rendered in the synthesized images. Moreover, due to the rectification from $S^n_*$, these additional characteristics will be rendered as harmonious as those from $S^p_*$.
\begin{table}[t]
\footnotesize
\renewcommand\arraystretch{1}
\setlength\tabcolsep{0.6pt}
\begin{tabular}{|c||c|c|c|}
\hline
Method & TI~\cite{TI} & DreamBooth~\cite{DreamBooth} & Ours \\ \hline
Given image number & 3-5 & 3-5 & 1 \\ \hline
Parameters & 2K & 983M & 5K \\ \hline
Image quality & artifacts, mosaic & over-smooth, artifacts & vivid \\ \hline
Diversity & poor & very poor & highly diverse \\ \hline
Controllability & poor & poor & high \\ \hline
\end{tabular}
\vspace{-7pt}
\caption{Comparison with current state-of-the-art methods.}
\label{tab:cmp}
\vspace{-2.5ex}
\end{table}
Extensive experiments on generative models trained on the natural dataset LAION-5B~\cite{LAION} and the anime dataset Danbooru~\cite{danbooru2021} demonstrate
that our method substantially outperforms existing methods.
Various experiments have indicated that our method outperforms existing methods in several aspects, including the quality and diversity of the generated images, the style similarity to the reference image, and the controllability subject to additional descriptions.
In summary, our main contributions are as follows:
\begin{figure*}[!t]
\centering
\includegraphics[width=1\linewidth]{imgs/struct.pdf}
\vspace{-20pt}
\caption{The framework of our DreamArtist.
Only the embeddings corresponding to the positive and negative pseudo-words ($S^p_*$ and $S^n_*$) are learnable, and the rest of the parameters are fixed; $f_m$ is the fusion function of $z^p$ and $z^n$ in \cref{equ:neg_prompt}.
}
\vspace{-2ex}
\label{fig:struct}
\end{figure*}
1. Empirically, we have introduced the one-shot text-to-image generation task, which requires the model to learn controllable characteristics (form, content, context, style, semantics, etc.) from a single image, without forgetting the model's original generative abilities.
2. Technically, we propose a DreamArtist method, which introduces contrastive prompt tuning to train positive and negative pseudo-words jointly, allowing the model to learn high quality, diverse, and highly controllable characteristics from a single image.
3. Experimentally, extensive qualitative and quantitative experiments on both natural data and anime data domains have demonstrated that our method substantially outperforms existing methods in content quality, style similarity, and detail quality. Our approach can render highly harmonious and realistic images even combined with complex additional descriptions. The generated images are difficult to distinguish from those created by real human beings.
\section{Methodology}
To overcome the limitations of existing methods aforementioned, enabling the model to synthesize highly realistic and diversity images with high controllability
through just one user-given image, our DreamArtist is proposed, shown in \cref{fig:struct}.
DreamArtist introduces both positive and negative embeddings and jointly trains them with contrastive prompt-tuning through introspection in a self-supervised manner. Allowing embeddings to describe not only the characteristics we need, but also those that need to be excluded.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/exp_cmp_.pdf}
\vspace{-8pt}
\caption{Comparison of our method with recent methods for one-shot text-to-image generation.}
\vspace{-5pt}
\label{fig:exp_cmp}
\end{figure*}
\subsection{Latent Diffusion Model}
With its remarkable capacity for image generation, the Latent Diffusion Model (LDM)~\cite{LDM} is utilized as the base model. Unlike the conventional DDPM~\cite{DDPM, DDIM}, which performs denoising operations in the image space, LDM conducts them in the latent feature space, so that the diffusion operations are applied to compact encoded features rather than pixels.
Formally, an input image $x$ is first encoded into the feature space by an AutoEncoder $z=\mathcal{E}(x), \hat{x}=\mathcal{D}(z)$ (with an encoder $\mathcal{E}$ and a decoder $\mathcal{D}$) pre-trained on a large number of images, and then a denoising U-Net $\epsilon_{\theta}$, which incorporates Transformer blocks, is used to perform denoising on the feature map, $z_{t-1}=\epsilon_{\theta}(z_t, t)$. Text-guided conditional image generation with text $S$ and encoded text feature $y=\mathcal{B}(S)$ is implemented by the cross-attention mechanism, using the transformed image features as query $W_Q^{(i)} \cdot \varphi_i\left(z_t\right)$ and the transformed text features as key and value, $W_K^{(i)} \cdot \tau_\theta(y)$ and $W_V^{(i)} \cdot \tau_\theta(y)$; the training loss can then be expressed as:
\begin{equation}
\mathcal {L}_{L D M}=\mathbb{E}_{\mathcal{E}(x), y, \epsilon \sim \mathcal{N}(0,1), t}\left[\left\|\epsilon-\epsilon_\theta\left(z_t, t, \tau_\theta(y)\right)\right\|_2^2\right],
\end{equation}
where $t$ represents the time step, $z_t$ is the diffused feature map of $z$ at step $t$, and $\epsilon$ is the unscaled noise. In this training phase, the AutoEncoder is fixed and only $\epsilon_{\theta}$ is learnable.
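For readers who prefer code, the following PyTorch-style sketch summarizes this training objective; the module names, the handling of the noise schedule, and the function signature are illustrative assumptions rather than the exact LDM implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def ldm_loss(eps_model, encoder, text_encoder, x, caption, alphas_cumprod):
    # Epsilon-prediction objective of a latent diffusion model (sketch).
    z = encoder(x)                             # latent code z = E(x)
    y = text_encoder(caption)                  # text condition tau_theta(y)
    t = torch.randint(0, len(alphas_cumprod), (z.shape[0],), device=z.device)
    eps = torch.randn_like(z)                  # unscaled Gaussian noise
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a_bar.sqrt() * z + (1.0 - a_bar).sqrt() * eps  # diffuse z to step t
    eps_hat = eps_model(z_t, t, y)             # denoising U-Net prediction
    return F.mse_loss(eps_hat, eps)            # || eps - eps_theta(z_t,t,y) ||^2
\end{verbatim}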
\subsection{Contrastive Prompt-Tuning}
In essence, conventional prompt-tuning~\cite{PTuning2, PromptZeroshot, PromptCloze, CoPo, CoCoPo, PTVision, PromptTuning, PrefixTuning} optimistically considers only a positive prompt. Namely, it simply aligns it with the downstream task and constructs the mapping from the prompt to the training set.
However, this easily leads to collapse and over-fitting, when there are few samples given in the training stage.
Especially, for one-shot text-to-image generation,
it generates images with obvious artifacts and very low diversity.
Accordingly, we propose contrastive prompt-tuning to avoid these problems. That is, it enables the model not only aggressively to learn the characteristics in the image, but also to rectify its mistakes following an introspection mechanism.
This is analogous to how human beings learn to paint through introspection in a self-supervised manner:
keep trying, analyze the differences from the masterpiece, and record and reflect on the shortcomings so as to avoid repeating them in the future.
After thousands of repetitions and refinements, a new master emerges.
If one simply places the Mona Lisa there and imitates it without any introspection, the likely outcome is that this person can only draw some clumsy copy of the Mona Lisa, limiting the imagination for creation.
Our contrastive prompt-tuning also learns to paint through introspection in a self-supervised manner.
Given two identical noise maps $z_t$, they can be guided separately by positive texts and negative texts (e.g. ``a photo of a dog" and ``a photo of a cat") and yield two different feature maps $z^p$ and $z^n$. $z^p$ contains the characteristics we need, while $z^n$ contains the characteristics
we prefer to be excluded in the generated image.
According to the conclusion stated in \cite{GLIDE}, we can make the model extrapolate in the direction of $z^p$ and away from $z^n$, with the following guiding strategy:
\begin{equation}
\hat{z} = f_m(z^p, z^n) = z^n + \gamma (z^p-z^n)
\label{equ:neg_prompt}
\end{equation}
We add learnable pseudo-words ($S^p_*$ and $S^n_*$) to the positive and negative texts, respectively, and fix the pre-trained text encoder and diffusion model; this realizes contrastive prompt-tuning. We jointly learn two different embeddings, representing characteristics that are relevant to the training image and characteristics that should stay away from the training image or are found, through introspection, to be avoided. The model thus learns not only to extrapolate in the desired direction, but also to avoid inappropriate directions. The contrastive prompt-tuning loss is:
\begin{equation}
\begin{aligned}
z^p = \epsilon_\theta(z_t, t, &\tau_\theta(\mathcal{B}(S^p_*))),
z^n = \epsilon_\theta(z_t, t, \tau_\theta(\mathcal{B}(S^n_*))) \\
\mathcal{L}_{cpt} &= \| f_m(z^p,z^n) - \mathcal{E}(x) \|^2_2
\end{aligned}
\end{equation}
where $x$ is the image for training and $\| \cdot \|^2_2$ is the $\ell 2$ loss.
$S^p_*$ portrays the primary forms of objects and contexts, while $S^n_*$ rectifies the inadequacies of $S^p_*$ in reverse.
With the involvement of $S^n_*$, $S^p_*$ no longer needs to attract excessive attention to force every $z_T$ to be guided toward the given image $x$.
This allows $S^p_*$ to drive diversified image generation without attracting so much attention that it overwhelms additional control signals, which would lead to poor controllability.
Even when combined with some complex descriptions, DreamArtist is able to render high-quality and diverse images with the characteristics of these descriptions in the learned style harmoniously, which benefits from the rectifying effect from $S^n_*$.
\subsection{Reconstruction Constraint for Detail Enhancement}
Constraining only the feature space would make the generated images overly smooth, with some deficiencies in details and colors. Thus, we add an additional pixel-level reconstruction constraint to enhance the embedding's ability to describe details and colors. After $\hat{z}$ is computed via \cref{equ:neg_prompt}, it is decoded by the decoder $\mathcal{D}$ and transformed into the image space, $\hat{x}=\mathcal{D}(\hat{z})$. We make the decoded image $\hat{x}$ as consistent as possible with the training image $x$ at each pixel, thus enhancing the learning of details. Accordingly, the reconstruction loss can be written as:
\begin{equation}
\mathcal{L}_{rec} = \| \mathcal{D}(f_m(z^p, z^n)) - x \|
\end{equation}
where $\| \cdot \|$ is the $\ell 1$ loss.
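Putting the two objectives together, a minimal PyTorch-style sketch of one contrastive prompt-tuning step is given below; the function names and the relative weight \verb|w_rec| of the reconstruction term are illustrative assumptions, and only the positive and negative embeddings receive gradients while all other modules stay frozen.
\begin{verbatim}
def dreamartist_step(eps_model, encoder, decoder, text_encoder,
                     pos_embed, neg_embed, x, z_t, t, gamma=5.0, w_rec=1.0):
    # Only pos_embed / neg_embed (the pseudo-words S*_p and S*_n) are
    # trainable; encoder, decoder, text encoder and U-Net are frozen.
    z_p = eps_model(z_t, t, text_encoder(pos_embed))  # guided by S*_p
    z_n = eps_model(z_t, t, text_encoder(neg_embed))  # guided by S*_n
    z_hat = z_n + gamma * (z_p - z_n)                 # fusion f_m(z^p, z^n)
    loss_cpt = ((z_hat - encoder(x)) ** 2).mean()     # feature-space L2 term
    loss_rec = (decoder(z_hat) - x).abs().mean()      # pixel-space L1 term
    return loss_cpt + w_rec * loss_rec
\end{verbatim}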
\section{Experiments}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/exp_raw2_.pdf}
\vspace{-8pt}
\caption{One-shot text-to-image generation with only learned pseudo-words for DreamArtist. It can learn content and context from a single image without additional text descriptions, generating diverse and high-quality images in both natural and anime scenes.}
\vspace{-12pt}
\label{fig:exp_raw}
\end{figure*}
\subsection{Experimental Settings}
\noindent
\textbf{Dataset.} Similar to TI and DreamBooth, the LAION-5B dataset~\cite{LAION} is used for natural image generation. Additionally, an anime dataset, Danbooru~\cite{danbooru2021}, is added given its popularity in many applications, \eg, games and anime.
\noindent
\textbf{Implementation details.}
The experiments on all domains were trained using one image, with a learning rate of 0.0025 and $\gamma = 5$. The training was performed on an RTX2080ti using a batch size of 1 with about 2k-8k iterations.
We use an embedding occupying 6 words for TI, while we use 3 words for both positive and negative embeddings for our method.
DreamBooth then follows the settings in the paper and employs prior-preservation loss to reduce over-fitting.
All methods are trained with the one-shot setting.
\noindent
\textbf{Metrics.}
For quantitative analysis, we use LPIPS~\cite{LPIPS} and style loss~\cite{styloss} to measure feature similarity and style similarity with the training image, respectively. Another two evaluation metrics, the CLIP detail score (CDS) and CSTD, are defined based on the CLIP~\cite{CLIP} model. CDS uses CLIP to get the probability of an image belonging to ``detailed" within the set [``little detail", ``detailed"]. CSTD calculates the standard deviation of the feature map of the image encoded by the image encoder of CLIP.
For controllability evaluation, we define a CLIP feature score (CFS). The description of a feature is input to CLIP together with multiple descriptions that belong to a similar semantic category but differ from it. For each feature, evaluated over multiple images, we compute the average probability that CLIP assigns the image to the semantic category we describe.
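As an illustration of how CDS can be computed, the sketch below uses the Hugging Face CLIP interface; the choice of the ViT-B/32 checkpoint is our own assumption, since the exact backbone is not specified here.
\begin{verbatim}
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def clip_detail_score(image_path, name="openai/clip-vit-base-patch32"):
    # CDS: probability CLIP assigns to "detailed" vs. "little detail".
    model = CLIPModel.from_pretrained(name)
    processor = CLIPProcessor.from_pretrained(name)
    inputs = processor(text=["little detail", "detailed"],
                       images=Image.open(image_path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return probs[0, 1].item()  # index 1 corresponds to "detailed"
\end{verbatim}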
\subsection{One-Shot Text-guided Image Synthesis}
We compare our DreamArtist with two existing works, TI~\cite{TI} and DreamBooth~\cite{DreamBooth}, for one-shot text-to-image generation. All methods are trained with only one image given as a reference for a fair comparison. Next, we elaborate on the comparison results in terms of image quality, diversity, and characteristic and style similarity.
\begin{table}[]
\renewcommand\arraystretch{1}
\setlength\tabcolsep{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|cccc|}
\hline
\multicolumn{1}{|c|}{Method} & LPIPS↓ & Style loss↓ & CDS↑ & CSTD↑ \\
\hline
\rowcolor{gray!20}
\multicolumn{5}{|c|}{Natural Image Generation} \\ \hline
\multicolumn{1}{|c|}{TI} & 0.71 & 24.47 & 0.73 & \textbf{1.79} \\
\multicolumn{1}{|c|}{DreamBooth} & \textbf{0.33} & \textbf{5.12} & 0.63 & 0.69 \\
\multicolumn{1}{|c|}{Ours} & 0.62 & 9.84 & \textbf{0.74} & 1.53 \\
\hline \rowcolor{gray!20}
\multicolumn{5}{|c|}{Anime Image Generation} \\ \hline
TI & 0.63 & 7.47 & 0.41 & 0.87 \\
DreamBooth & \textbf{0.49} & 1.16 & 0.33 & 0.72 \\
Ours & 0.60 & \textbf{0.69} & \textbf{0.60} & \textbf{1.28} \\
\hline
\end{tabular}
}
\vspace{-7pt}
\caption{Quantitative comparison of our DreamArtist with existing methods for one-shot text-to-image generation.
}
\vspace{-3ex}
\label{tab:exp_base}
\end{table}
\noindent
\textbf{Image Quality and Diversity.}
From the qualitative analysis, it is shown in \cref{fig:exp_cmp} that the images generated by TI have serious artifacts and distortions. The diversity is also low for the generated anime images, and few meaningful details are presented, while most are artifacts.
DreamBooth generates images with few artifacts, but the diversity is incredibly low on both natural and anime scenes.
It generates images overly similar to the reference image,
which evidences an over-fitting issue. From \cref{fig:exp_cmp} and \ref{fig:exp_raw}, our DreamArtist can alleviate these problems and not only generates highly realistic images with remarkable light, shadow and detail, but also keeps the generated images highly diverse.
Quantitative analysis in \cref{tab:exp_base} can also reach the same conclusion. According to the results of style similarity and qualitative analysis,
the high CSTD value of TI is an illusion, possibly caused by the generated artifacts.
The generated images by DreamBooth usually have extremely low diversity and low image quality. Our method, instead, performs well in both natural and anime scenes in terms of image details, quality, and diversity.
\begin{table}[]
\centering
\footnotesize
\renewcommand\arraystretch{1.1}
\setlength\tabcolsep{1pt}
\resizebox{1.05\columnwidth}{!}{
\hspace{-4ex}
\begin{tabular}{|c|cccccccc|}
\hline
\multirow{2}{*}{Method} & \multicolumn{4}{c|}{Natural Image Generation} & \multicolumn{4}{c|}{Anime Image Generation} \\ \cline{2-9}
& CFS↑ & CSTD↑ & Style loss↓ & \multicolumn{1}{c|}{CDS↑} & CFS↑ & CSTD↑ & Style loss↓ & CDS↑ \\ \hline
TI & 0.37 & 1.46 & 17.26 & \multicolumn{1}{c|}{0.40} & 0.23 & 0.98 & 5.52 & 0.46 \\
DreamBooth & 0.24 & 1.19 & \textbf{1.36} & \multicolumn{1}{c|}{\textbf{0.69}} & 0.28 & 0.81 & \textbf{1.28} & 0.31 \\
Ours & \textbf{0.89} & \textbf{1.55} & 8.03 & \multicolumn{1}{c|}{0.57} & \textbf{0.63} & \textbf{1.15} & 2.71 & \textbf{0.58} \\ \hline
\end{tabular}
}
\vspace{-7pt}
\caption{Quantitative analysis of our DreamArtist compared with existing methods on feature controllability. The feature controllability of DreamArtist substantially exceeds existing methods.}
\vspace{-3ex}
\label{tab:exp_ctrl}
\end{table}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/exp_text_real_.pdf} \\
\vspace{-8pt}
\includegraphics[width=0.98\textwidth]{imgs/exp_text_anime_.pdf}
\vspace{-7pt}
\caption{The generated images of DreamArtist with the guidance of additional complex texts. DreamArtist exhibits a superior capability of controllable generation: even with few words in the text guidance, diverse and faithful images are generated; with more words, vivid images with rich details are generated. More importantly,
DreamArtist can successfully render almost all the given words.
}
\vspace{-12pt}
\label{fig:exp_text}
\end{figure*}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/exp_style_.pdf}
\vspace{-10pt}
\caption{Style cloning via DreamArtist, for example, styles of wash painting, paper-cut art, Cyberpunk, comics of caricaturists, and road maps in a game (from left to right).
}
\vspace{-12pt}
\label{fig:exp_style}
\end{figure*}
\noindent
\textbf{Generating New Concepts.}
From \cref{fig:exp_cmp}, it is observed that, for TI, the style in the generated image differs greatly from the input reference image in natural scenes; in anime scenes, not only does the style differ greatly, but the coat in the input image is also learned as a dress and the eye color is incorrect. This indicates that TI is limited in learning the characteristics of the input image and cannot capture the content and style effectively. Although DreamBooth can learn the characteristics in the input image well, the over-fitting is too serious because it tends to simply remember the whole image. Our method generates images that are highly stylized and consistent with the input image; the form, content, and context are also well learned with aesthetics.
The quantitative results in \cref{tab:exp_base} also support the above observations. Style loss of TI is extremely high and LPIPS is also not low, indicating that it really cannot learn the features of the input image effectively. Our method, on the other hand, is able to show a high style and content similarity while maintaining considerable performance in all other aspects.
\noindent
\textbf{Style Cloning.}
Beyond abstract styles such as Vincent van Gogh's ``The Starry Night", users more often need practical styles, for example, the style of a movie or game scene, the style of Cyberpunk or Steampunk, the style of a Chinese Brush Painting or paper-cut, or the painting style of a favorite artist.
\textls[-7]{Existing methods can only learn some highly abstract styles, and their generated images struggle to show much meaningful content, looking more like textures. As can be seen in \cref{fig:exp_style}, our method can learn some practical and highly refined styles pretty well. The generated images are identical to the training images in terms of colors and texture features, cloning the style of the training images remarkably. Thus, our model substantially outperforms existing methods.}
In anime scenes, different artists have different painting styles of different brushwork, composition, light processing, color processing, scenery, and many other details. The different painting styles are not as diverse as the different styles in natural scenes, but they will give the reader a completely different impression. Our method can learn a painting style fairly well.
It is even possible to create images that are highly similar to other works by the same artist based on the text description, which is difficult for existing methods.
\subsection{Method Evaluation and Analysis}
\begin{figure*}[t!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/exp_mix_.pdf}
\vspace{-10pt}
\caption{Results of concept compositions via DreamArtist. It presents a promising generation potential via text guidance from arbitrary combinations of the learnt pseudo-words.
}
\vspace{-5pt}
\label{fig:exp_mix}
\end{figure*}
\noindent
\textbf{Controllability and Compatibility Analysis.}
A successful one-shot text-guided image synthesis method should not only
be able to learn the characteristics in the reference image, but also should enable these characteristics to be controlled by additional descriptions.
From the \cref{fig:exp_cmp}, we observe that the pseudo-words learned by the TI method are difficult to combine with some additional features.
For instance, in the natural scene ``on the moon with sakura behind it", only ``on the moon" works, while ``made of bronze" does not work at all.
DreamBooth has slightly better compatibility with the additional descriptions, but the generated images look almost the same due to severe overfitting, and some of the described features are still not rendered.
Our method, as an alternative, can effectively solve these problems. As can be seen in \cref{fig:exp_cmp} and \ref{fig:exp_text} and the CFS in \cref{tab:exp_ctrl} that DreamArtist can not only be easily compatible with additional complex descriptions, but also generate highly harmonious and diverse images with those descriptions and learned features.
For instance, in the second row, the learned embedding of a mask can render richly diverse and highly realistic images when controlled by various complex descriptions.
Besides, the described features can be rendered, even if there is a conflict between the additional description and the learned features.
For example, for the last one in the fourth row of \cref{fig:exp_text}, the training image has pink hair with a pure background, while our method can satisfy a description requiring a character with light green hair and a city in the background.
DreamArtist can really follow the user's description to create and be creative, with productive imaginations, specialized experiences and fantastic inspirations.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.98\textwidth]{imgs/exp4_edit_.pdf}
\vspace{-10pt}
\caption{Text-guided image editing via DreamArtist.
}
\vspace{-8pt}
\label{fig:exp_edit}
\end{figure*}
\begin{figure}[!t]
\centering
\includegraphics[width=0.46\textwidth]{imgs/exp_ab_.pdf}
\vspace{-10pt}
\caption{\textls[-7]{Ablation study on $S^p_*$ and $S^n_*$ in DreamArtist. It shows the generated images with the guidance of $S^p_*$, $S^n_*$ and both, respectively.}}
\vspace{-14pt}
\label{fig:exp_ab}
\end{figure}
\noindent
\textbf{Model Ablative Analysis.}
To verify the roles of $S^p_*$ and $S^n_*$, we visualize the generated images guided by $S^p_*$ and $S^n_*$, respectively, as shown in \cref{fig:exp_ab}. While $S^p_*$ portrays the basic layout and form, it is lacking in features, style, and details. $S^n_*$ shows some distortions and unreasonable styles, which serve to rectify the inadequacies of $S^p_*$.
For example, the second row of $S^p_*$ generates a rough painting with the wrong style. $S^n_*$ points out and rectifies these defects very well.
Combining them can generate images that are not only rich in details with reasonable characteristics, but also highly consistent in style with the input image.
\subsection{Human Evaluation}
To demonstrate that our method can synthesize high-quality realistic images,
we have conducted a user study following the rules of the Turing test, with 700 subjects for TI and our method, respectively.
There are 12 synthetic and 8 real images in the test.
With the TI method, subjects show a failure rate of 26.6\% in the test,
while our method pushes the subjects' failure rate to 34.5\%, which significantly exceeds the Turing test requirement of 30\%.
This shows that the images generated by our method are fairly realistic and difficult to be distinguished from the real images.
Besides, in the study of judging which creation has higher quality, 52.31\% and 83.13\% of the subjects, drawn from various walks of life and even including some professional anime artists, chose the DreamArtist-synthesized creation in the natural and anime scenes, respectively.
\subsection{Extended Task 1: Concept Compositions}
Our method can easily combine multiple learned pseudo-words, not only combining an object with a style, but also combining multiple objects or multiple styles, and generating reasonable images. When combining these pseudo-words, it is necessary to add both parts of the embeddings to the positive and negative prompts, respectively. As illustrated in \cref{fig:exp_mix}, combining multiple pseudo-words trained with our method shows excellent results in both natural and anime scenes.
Each component of the pseudo-words can be rendered in the generated image, even combining two radically different objects or styles.
For example, we can have a robot painted in the style of an ancient painting, or make a dog have a robot body.
These are difficult to realize for existing methods. For example, in the work of TI, it mentions that TI is struggling to combine multiple pseudo-words~\cite{TI}.
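As a schematic illustration of how the learned pairs are used when composing concepts, the sketch below combines two pseudo-word pairs at sampling time. The \texttt{encode} and \texttt{predict\_noise} functions and the guidance scale are hypothetical placeholders rather than the actual implementation; the only point is the guidance arithmetic, which follows the standard classifier-free guidance recipe with the negative pseudo-words placed in the negative prompt.
\begin{verbatim}
import numpy as np

# Hypothetical stand-ins for a text encoder and a denoiser; the networks
# themselves are irrelevant here, only the guidance arithmetic matters.
def encode(prompt, dim=768):
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.normal(size=dim)              # placeholder text embedding

def predict_noise(x_t, t, cond):              # dummy denoiser
    return 0.1 * x_t + 0.01 * cond[: x_t.size].reshape(x_t.shape)

# Positive pseudo-words go into the positive prompt, negative pseudo-words
# into the negative prompt.
c_pos = encode("a photo of <S1-p> in the style of <S2-p>, city background")
c_neg = encode("<S1-n> <S2-n>")

def guided_noise(x_t, t, w=5.0):
    # The negative branch replaces the unconditional branch, steering
    # samples away from the defects captured by S^n while following S^p.
    eps_pos = predict_noise(x_t, t, c_pos)
    eps_neg = predict_noise(x_t, t, c_neg)
    return eps_neg + w * (eps_pos - eps_neg)

x_t = np.zeros((4, 4))                        # toy latent
print(guided_noise(x_t, t=10).shape)          # (4, 4)
\end{verbatim}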
\subsection{Extended Task 2: Prompt-Guided Image Editing}
\textls[-7]{As seen from \cref{fig:exp_edit}, the pseudo-words learned with our method work well for text-guided image editing that follows the paradigm of LDM~\cite{LDM}.
The modified areas not only show the learned form and content, but also integrate harmoniously into the surrounding environment. Image editing with the features learned by our method is as effective as editing with the original features of LDM.}
\vspace{-4pt}
\section{Conclusions}
We introduce a one-shot text-to-image generation task, in which only one reference image is used to teach a text-to-image model new characteristics expressed as a pseudo-word.
To address the shortcomings of existing methods, we propose DreamArtist.
DreamArtist employs a self-supervised contrastive prompt-tuning learning strategy, enabling the model
to learn from introspection and no longer forcing the positive pseudo-word alone to align with the reference image.
With contrastive prompt-tuning, the pseudo-words not only enable the model to generate high-quality and diverse images, but can also be easily controlled by additional descriptions.
DreamArtist learns not only the concept in an image, but also its form, content and context.
Extensive qualitative and quantitative experimental analyses have demonstrated that our method substantially outperforms existing methods in various aspects.
Moreover, our DreamArtist method is highly controllable and can be combined with complex descriptions without losing any components, offering promising flexibility for deployment with other models.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Do gamma-ray bursts (GRBs) accelerate cosmic rays (CRs)? There are arguments in favor of such an idea \citep{DA04}. It is supposed that CRs are accelerated via the Fermi mechanism, in which a particle crosses the shock many times and gradually gains energy. For an ultra-relativistic shock, a steady-state universal power-law energy distribution of particles shall form \citep{Kirk+00,Acht+01}. The shock shall accelerate all particle species, but the electrons, being much lighter than protons and ions, will also lose their energy via synchrotron cooling. This radiation is thought to be observed as the delayed afterglow emission of GRB sources. A problem immediately arises here from a simple estimate: if the X-ray afterglow observed on a day timescale after the prompt burst is indeed the synchrotron radiation from the shock-accelerated electrons, then the pre-shock medium has to be highly magnetized, with fields of milligauss strength \citep{LW06}. Thus, either the magnetic field is somehow generated in the shock upstream, or the conventional paradigm of the GRB afterglow needs revision.
In this paper, we present a self-similar model of the large region in front of a relativistic shock -- the foreshock. This region is populated with the shock-accelerated particles, which stream away from the shock into the collisionless ambient medium and generate magnetic fields via a streaming (Weibel-type) instability. The model predicts the generation of strong, sub-gauss magnetic fields in the entire foreshock whose thickness is $\sim R/(2\Gamma_{\rm sh}^2)$ and is comparable to the shock radius, $\sim10^{17}-10^{18}\ {\rm cm}$, in the afterglow phase a day or more after the explosion, when the shock is weakly relativistic or non-relativistic. The fields are sustained against dissipation by the anisotropy of newly accelerated particles. Moreover, these fields are relatively large-scale, with the coherence length being as large as a fraction of the foreshock size, $\sim10^{16}\ {\rm cm}$, which makes them effectively decoupled from dissipation. We speculate that these mesoscale magnetic fields, being ultimately advected into the shock downstream, can significantly increase the radiative efficiency of GRB afterglows and, perhaps, explain the origin of the magnetic field in an external shock of a GRB. We remark here, however, that our study is analytical and cannot account for a number of nonlinear feedback effects of the generated fields and pre-conditioned external medium onto the shock structure and particle acceleration. Kinetic and hybrid computer modeling is essential for a better and more accurate understanding of the foreshock structure.
\section{The model}
Overall, our model is as follows. A shock is a source of CRs which move away from it, thus forming a stream of particles through the ambient medium, say, the interstellar medium (ISM). If the ISM magnetic fields are negligible, i.e., their energy density is small compared to that of CRs, the streaming instability (either the pure magnetostatic Weibel or the mixed-mode electromagnetic oblique Weibel-type instability, depending on conditions) is excited and stronger magnetic fields are quickly generated. These fields further isotropize (thermalize) the CR stream. Since less energetic particles, having a greater number density and carrying more energy overall, are thermalized closer to the shock, the generated B-field will be stronger closer to the shock and fall off away from it, whereas its correlation length will increase with the increasing distance from the shock. More energetic particles keep streaming because of their larger Larmor radii and produce the magnetic field further away from the shock. This process stops at distances where either the CR flux starts to decrease (because of the finite distance the CR particles can get away from a relativistic shock or because of the shock curvature causing CR density to decrease as $\propto r^{-2}$ if the shock is sub- or non-relativistic) or where the generated magnetic fields become comparable to the ISM field and the instability ceases. Thus, a large upstream region --- the foreshock --- is populated with magnetic fields. We now derive its self-similar structure. We work in the shock co-moving frame unless stated otherwise.
Let us consider a relativistic shock moving along the $x$-direction with the bulk Lorentz factor $\Gamma_{\rm sh}$; the shock is plane-parallel and lies in the $yz$-plane, and $x=0$ denotes the shock position. The shock continuously accelerates cosmic rays, which then propagate away from it into the upstream region. We conventionally assume that the CR distribution over the particle Lorentz factor is described by a power-law:
\begin{equation}
n_{\rm CR}=n_0(\gamma/\gamma_0)^{-s}
\label{nCR}
\end{equation}
for $\gamma>\gamma_0$ and zero otherwise. Here the index $s=p-1$ is approximately equal to 1.2 for ultrarelativistic shocks and $n_0$ is the normalization.\footnote{Conventionally the distribution is given as $dn/d\gamma\propto\gamma^{-p}$ with $p$ being $\sim2.2-2.3$ for relativistic shocks; hence the density of particles of energy $\sim\gamma$ is $n(\gamma)\propto\gamma^{-p}\delta\gamma\propto\gamma^{-p+1}$.}
We assume that the above energy distribution is the same everywhere in the upstream, that is, we neglect the nonlinear feedback of magnetic fields onto the particle distribution. The CR momentum distribution exhibits strong anisotropy: the parallel ($x$) components of CR momenta are much greater than their thermal spread in the perpendicular ($yz$) plane. Indeed, for a particle to move away from the shock, it should have the $x$-component of the velocity exceeding the shock velocity. Since both the shock and the particle move nearly at the speed of light, this puts a constraint on their relative angle of propagation to be less than $1/\Gamma_{\rm sh}$ in the lab (observer) frame. Hence, the transverse spread of the CR particles' momenta is $p_\perp\lesssim p_\|/ \Gamma_{\rm sh} \ll p_\|$. This is also seen in numerical simulations \citep{Spit08}.
The CR particles propagate through the self-generated foreshock fields and scatter off them. Lower energy particles are deflected in the fields more strongly and, therefore, isotropize faster than the higher energy ones, which have larger Larmor radii. At a position $x>0$ the CR distribution can roughly be divided into an isotropic (thermalized) component with $\gamma<\gamma_r(x)$ and a streaming component with $\gamma>\gamma_r(x)$, where $\gamma_r(x)$ is the minimum Lorentz factor of the streaming particles at a location $x$; it is also the maximum Lorentz factor of the randomized component at this location. The streaming component is Weibel-unstable with a very short $e$-folding time $\tau\sim\omega_{p,{\rm rel}}^{-1}$, where $\omega_{p,{\rm rel}}=\left(4\pi e^2 \tilde n(\gamma)/m_p \gamma\right)^{1/2}$ is the relativistic plasma frequency, $\tilde n(\gamma)$ is the density of streaming particles of the Lorentz factor $\gamma$ (tilde denotes streaming particles). Note that the Weibel instability growth rate depends on $n$ of the lower density component -- cosmic rays, in our case -- measured in the center of mass frame of the streaming plasmas. For the lower-energy part of the CR distribution, the center of mass frame is approximately the shock co-moving frame, hence we evaluate the instability in the shock frame. This approximation is less accurate for the high-energy CR tail; however, the growth rate and the scale length are weak functions of the shock Lorentz factor ($\propto\Gamma_{\rm sh}^{\mp 1/2}$), so the result will be accurate within an order of magnitude for all reasonable values of $\Gamma_{\rm sh}$ for GRB afterglows. Here we use the proton plasma frequency because the CR electron Lorentz factors are about $m_p/m_e$ times larger, so they behave almost like protons \citep{Spit08}. The instability is very fast: it rapidly saturates (the fields cease to grow) in a few tens of $e$-folding times $\tau$, that is, in a few tens of inertial lengths (also referred to as the ion skin length) $c/\omega_{p,{\rm rel}}$ in front of the shock. Thereafter the particles keep streaming in current filaments and the field around them amounts to $\xi_B\sim0.01-0.001$ or so of the kinetic energy of this group of particles:
\begin{equation}
B^2(\gamma)/8\pi\sim\xi_Bm_pc^2\gamma \tilde n(\gamma),
\label{B-gamma}
\end{equation}
where $\xi_B$ is the efficiency factor obtained from particle-in-cell (PIC) simulations; $\xi_B$ has the same meaning as the conventional $\epsilon_B$ parameter reserved here for the ratio of the total magnetic energy to the total kinetic energy of the shock and which, as is seen in PIC simulations, is larger than $\xi_B$ near the shock because of the nonlinear evolution and filament mergers. The correlation length of the field is of the order of the ion inertial length
\begin{equation}
\lambda(\gamma)\sim c/\omega_{p,{\rm rel}}=\left(m_pc^2 \gamma/4\pi e^2 \tilde n(\gamma)\right)^{1/2}.
\label{lambda-gamma}
\end{equation}
These random fields deflect CR particles and ultimately lead to their isotropization. The deflection angle of the particle on a field coherence length scale in the self-generated field is
\begin{equation}
\theta\sim\delta p_\perp/p\sim(\lambda/c)\omega_B,
\label{theta-gamma}
\end{equation}
where $\omega_B=eB/\gamma m_p c$, $p$ is the particle momentum and $\delta p_\perp$ is its transverse change. Using Eqs. (\ref{B-gamma}), (\ref{lambda-gamma}), we obtain:
\begin{equation}
\theta\sim eB(\gamma)\lambda(\gamma)/(\gamma m_pc^2)\sim\sqrt{2\xi_B}.
\end{equation}
Note that the deflection angle is independent of the particle's energy, as long as the field is produced by the particles of the same energy $\gamma$. The particles diffuse in the field and their rms deflection angle after transiting through a distance $x$ is $\Theta\sim\theta\sqrt{x/\lambda}$. The group of particles thermalizes when $\Theta\sim1$, i.e., at the distance from the shock:
\begin{equation}
x_r\sim\lambda/\theta^2\sim\lambda(\gamma)/(2\xi_B).
\label{x-gamma}
\end{equation}
At this point, $x=x_r$, one has $\gamma=\gamma_r$ by definition; no field of the strength $B(\gamma_r)$ and the scale $\lambda(\gamma_r)$ can be produced at $x>x_r$. Similarly, one can estimate the randomization of the higher energy particles with $\gamma\gg\gamma_r$: $\theta(\gamma)\sim\sqrt{2\xi_B}(\gamma_r/\gamma)\ll\theta(\gamma_r)$, which means that these particles keep streaming through much larger distances $x\gg x_r$ and will produce the magnetic field further away from the shock. This field will be weaker and larger scale because of the lower density of the streaming particles $\tilde n(\gamma)\ll\tilde n(\gamma_r)$, according to Eqs. (\ref{nCR})--(\ref{lambda-gamma}).
Finally, the number density of streaming CR particles at $\gamma_r$ is $\tilde n(\gamma_r)=n_0(\gamma_r/\gamma_0)^{-s}$. Therefore,
\begin{equation}
\lambda(\gamma_r)
\sim\left(m_pc^2 \gamma_0/4\pi e^2 n_0\right)^{1/2}(\gamma_r/\gamma_0)^{(1+s)/2}
\equiv\lambda_0(\gamma_r/\gamma_0)^{(1+s)/2},
\end{equation}
where $\lambda_0$ is the inertial length of the lowest energy CR ``plasma''. Inverting this expression yields:
\begin{equation}
\gamma_r\sim\gamma_0[\lambda(\gamma_r)/\lambda_0]^{2/(1+s)}
\sim\gamma_0(2\xi_Bx_r/\lambda_0)^{2/(1+s)}.
\label{gamma_r}
\end{equation}
Hereafter, the subscript ``$r$'' can be omitted without loss of clarity.
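As a quick illustration of Eq.~(\ref{gamma_r}), the short sketch below evaluates the Lorentz factor of particles that thermalize at a few distances from the shock; the values of $\xi_B$ and $s$ are illustrative assumptions.
\begin{verbatim}
# Illustration of the gamma_r(x) scaling: particles randomized at larger
# distances are much more energetic.  xi_B and s are example values.
xi_B, s = 0.01, 1.2
for x_over_lam0 in (1e2, 1e4, 1e6):       # distance in units of lambda_0
    gr = (2 * xi_B * x_over_lam0) ** (2 / (1 + s))
    print(f"x = {x_over_lam0:.0e} lambda_0 -> gamma_r/gamma_0 ~ {gr:.0f}")
\end{verbatim}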
In a steady state, this field is continuously advected toward the shock (in the shock co-moving frame since the center of mass frame of the foreshock plasma differs from the shock frame) and may affect the onset and the saturation level of the Weibel instability. In addition, the current filaments producing the fields merge with time, so that $B$ and $\lambda$ change while being advected. These nonlinear feed-back effects are difficult to properly account for in a theoretical model; hence they are omitted in the current study. PIC simulations can help us to quantify the effects as well as to confirm or disprove our assumption that the shock and the foreshock do form a self-sustained, steady state structure.
\section{The self-similar foreshock}
The self-similar structure of the foreshock immediately follows from Eqs. (\ref{nCR}), (\ref{B-gamma}), (\ref{x-gamma}) and (\ref{gamma_r}). The magnetic field correlation length is proportional to the upstream distance from the shock,
\begin{equation}
\lambda(x)\sim x(2\xi_B),
\label{lambda-x}
\end{equation}
and its strength decreases with the distance as
\begin{equation}
B(x)\sim B_0 \left({x}/{x_0}\right)^{-\frac{s-1}{s+1}},
\label{B-x}
\end{equation}
where $B_0=\left(8\pi\xi_B m_pc^2n_0\gamma_0\right)^{1/2}$ and $x_0=\lambda_0/(2\xi_B)=\left(m_pc^2 \gamma_0/4\pi e^2 n_0\right)^{1/2}/(2\xi_B)$. In this estimate we neglected the advected fields $B(\gamma)$ as sub-dominant compared to $B(\gamma_r)$ for $\gamma>\gamma_r$. The $\epsilon_B$ parameter expresses the field energy normalized to the shock kinetic energy. The energy of cosmic rays is $U_{\rm CR}=\int n(\gamma/\gamma_0)(m_pc^2\gamma)\ d(\gamma/\gamma_0)\sim m_pc^2\gamma_0n_0$ and constitutes a fraction $\xi_{\rm CR}$ of the total shock energy, $U_{\rm sh}$. The efficiency of cosmic ray acceleration, $\xi_{\rm CR}$, can be as high as several tens percent, perhaps, up to $\xi_{\rm CR}\sim0.5$, as follows from the nonlinear shock modeling \citep{V+06,E+07}. The scaling of $\epsilon_B$ is:
\begin{equation}
\epsilon_B\sim\xi_{\rm CR}\xi_B\ \left({x}/{x_0}\right)^{-2\frac{s-1}{s+1}}.
\label{epsilonB-x}
\end{equation}
These scalings hold while the shock can be treated as planar and while the ISM magnetic fields are negligible compared to the Weibel-generated fields. If the shock is relativistic, CR particles can occupy only a narrow region in front of it. Assuming CRs propagate nearly at the speed of light, their front is ahead of the shock at the distance $\Delta r=c\, t_{\rm rel}=c(R/v_{\rm sh}-R/c)\simeq R/[1-1/(2\Gamma^2_{\rm sh})]-R\simeq R/(2\Gamma^2_{\rm sh})$ measured in the lab (observer) frame, that is at the distance $\sim\Delta r\,\Gamma_{\rm sh} \sim R/(2\Gamma_{\rm sh})$ in the shock frame. Also, when the radial distance in the lab frame $\Delta r=x/\Gamma_{\rm sh}$ becomes comparable to the shock radius, $\Delta r\sim R$, the curvature of the shock can no longer be neglected: the density of CR particles, which was assumed to be constant in our model, starts to fall as $\propto r^{-2}$. This leads to a steeper decline of $B$ with distance. Obviously, the first constraint is more stringent for a relativistic shock, whereas both are very similar (within a factor of two) for a non-relativistic shock. Hence we use the first constraint hereafter. Meanwhile, at some distance $X$, the Weibel-generated fields can become comparable to the ambient magnetic field, $B(X)\sim B_{\rm amb}$, and the Weibel instability ceases; here we used that the ambient field in the shock frame is $B_{\rm amb}\sim B_{\rm ISM,\perp}\Gamma_{\rm sh}\sim B_{\rm ISM}\Gamma_{\rm sh}$. PIC simulations \citep{Spit05} indicate that for low magnetizations $\sigma<0.01$, i.e., $B(X)/B_{\rm amb}>0.1$, the shock behaves as unmagnetized and the Weibel instability dominates. Although there is no sharp threshold, one expects the Weibel instability to be suppressed for lower values of $B(X)/B_{\rm amb}$. To order of magnitude, we set $B_{\rm ISM}\Gamma_{\rm sh}\sim B(X)\sim B_0(X/x_0)^{-(s-1)/(s+1)}$, therefore $X\sim x_0\left[B_0/(B_{\rm ISM}\Gamma_{\rm sh})\right]^{(s+1)/(s-1)}$. To conclude, the scalings, Eqs. (\ref{lambda-x})--(\ref{epsilonB-x}), hold at $x\lesssim x_{\rm max}$, where
\begin{equation}
x_{\rm max}={\rm Min}\left[R/(2\Gamma_{\rm sh}),\ X\right]={\rm Min}\left[R/(2\Gamma_{\rm sh}),\ x_0\left({B_0}/{B_{\rm ISM}\Gamma_{\rm sh}}\right)^{\frac{s+1}{s-1}}\right].
\label{xmax}
\end{equation}
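To make these scalings concrete, the following minimal sketch evaluates Eqs. (\ref{lambda-x})--(\ref{xmax}) in cgs units for illustrative parameters ($\xi_B=0.01$, $\Gamma_{\rm sh}=10$, $n_{\rm ISM}=1\ {\rm cm^{-3}}$ and $R=10^{17}$ cm are assumptions made for this example only); for microgauss ambient fields the limit $X$ in Eq. (\ref{xmax}) is much larger than $R/(2\Gamma_{\rm sh})$ and is ignored here.
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the self-similar foreshock scalings (cgs).
# The parameter values below are assumptions for this example only.
m_p, c, e_ch = 1.6726e-24, 2.9979e10, 4.8032e-10
xi_B, s = 0.01, 1.2
Gamma, n_ism, R = 10.0, 1.0, 1.0e17
n0, gamma0 = n_ism * Gamma, Gamma        # co-moving CR density, minimum gamma

lam0 = np.sqrt(m_p * c**2 * gamma0 / (4 * np.pi * e_ch**2 * n0))  # skin length
x0 = lam0 / (2 * xi_B)
B0 = np.sqrt(8 * np.pi * xi_B * m_p * c**2 * n0 * gamma0)

x_max = R / (2 * Gamma)                  # foreshock thickness, shock frame
lam = 2 * xi_B * x_max                   # coherence length at x_max
B = B0 * (x_max / x0) ** (-(s - 1) / (s + 1))

print(f"x0 ~ {x0:.1e} cm,  B0 ~ {B0:.2f} G")
print(f"x_max ~ {x_max:.1e} cm,  lambda(x_max) ~ {lam:.1e} cm")
print(f"B(x_max) ~ {B:.3f} G")
\end{verbatim}
With these numbers the foreshock field is indeed of sub-gauss strength and its coherence length is a small fraction of the foreshock thickness.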
The region filled with the magnetic field in front of the shock is large, and so is the region from which the CR electrons radiate. The power emitted by a relativistic electron in a magnetic field is $P_{B}=(4/3)\sigma_T c \gamma_e^2(B^2/8\pi)$, where $\sigma_T$ is the Thomson cross-section and $\gamma_e$ is the Lorentz factor of the emitting electron. This expression is accurate for both synchrotron and jitter radiation \citep{M00}. For the distribution of electrons (\ref{nCR}) homogeneously populating the foreshock, the power is dominated by the lowest energy particles with $\gamma_e\sim \epsilon_e(m_p/m_e)\gamma_p\sim \epsilon_e(m_p/m_e)\gamma_0\sim\epsilon_e(m_p/m_e)\Gamma_{\rm sh}$ (here $\epsilon_e$ is the efficiency of the electron heating) and the density $n_e\sim n_p\sim n_0\sim n_{\rm ISM}\Gamma_{\rm sh}$. Then $P_{\rm tot}=\int P_B(x)n_0dV$, where the volume element is $dV=4\pi R^2 dx$, so
\begin{equation}
P_{\rm tot}\propto\int_0^{x_{\rm max}}(x/x_0)^{-2\frac{s-1}{s+1}}dx \sim x_0 \left(x_{\rm max}/x_0\right)^{\frac{3-s}{1+s}}.
\end{equation}
Note that the co-moving radiated power {\em increases} with the foreshock thickness, $P_{\rm tot}\propto x_{\rm max}^{(3-s)/(1+s)}$, thus emission is not localized in a thin layer near the shock and is, in fact, dominated by large distances:
\begin{equation}
P_{\rm tot}\sim L_{\rm CR}\xi_B \gamma_e^2(2/3)\sigma_T n_0x_0 \left({x_{\rm max}}/{x_0}\right)^{\frac{3-s}{1+s}},
\label{Ptot}
\end{equation}
where $L_{\rm CR}=4\pi R^2(m_pc^2n_0\gamma_0)c$ is the kinetic luminosity of cosmic rays, which is a fraction $\xi_{\rm CR}<1$ of the total kinetic luminosity of a GRB, $L_{\rm CR}=\xi_{\rm CR} L_{\rm GRB}$, $R$ is the shock radius and the co-moving CR density is of order the density of the incoming ISM plasma, $n_0\sim n_{\rm ISM}\Gamma_{\rm sh}$.
The foreshock electrons are radiating in the synchrotron regime: the jitter parameter $\delta$ \citep{M00,M+07}, which is the average deflection angle of an electron in the foreshock fields, $\theta_e\sim(eB(x)/\gamma_e m_ec)(\lambda(x)/c)$ over the radiation beaming angle, $\sim1/\gamma_e$, is always much larger than unity:
\begin{equation}
\delta(x)\sim eB(x)\lambda(x)/(m_ec^2)\sim (m_p/m_e)\gamma_0\sqrt{2\xi_B} (x/x_0)^{2/(s+1)} \gg1.
\end{equation}
Although consideration of the post-shock fields is beyond the scope of the present paper, we can estimate the magnetic field spectrum at and after the shock jump as long as dissipation is not playing a role. The magnetic field of different correlation scales created in the foreshock is advected toward the shock, so a broad spectrum is accumulated:
\begin{equation}
B_\lambda\propto \lambda^{-\frac{s-1}{s+1}}\sim\lambda^{-0.091},
\label{Bspec}
\end{equation}
where Eqs. (\ref{lambda-x}) and (\ref{B-x}) were used and $s=p-1\sim1.2$ was assumed.
\section{The afterglow foreshock}
The relation between the shock radius $R$ and its Lorentz factor $\Gamma_{\rm sh}$ follows from a simple energy argument: the energy of an explosion is $E\sim(4\pi/3)R^3m_pc^2n_{\rm ISM}\Gamma_{\rm sh}^2$, therefore
\begin{equation}
R\sim(10^{18}\ {\rm cm})\ E_{52}^{1/3}n_{\rm ISM}^{-1/3}\Gamma_{\rm sh}^{-2/3}
\end{equation}
or $\Gamma_{\rm sh}\sim E_{52}^{1/2} n_{\rm ISM}^{-1/2} R_{18}^{-3/2}$, where $E_{52}=E/10^{52}\ {\rm erg}$ and similarly for other quantities.
The observed time of photons emitted at radius $R$ is the delay with which they arrive relative to the very first photons (i.e., those emitted at $R\sim0$), $t_{\rm obs}=R/v_{\rm sh}-R/c\simeq R/[c(1-1/(2\Gamma^2_{\rm sh}))]-R/c$, that is $t_{\rm obs}\sim R/(2\Gamma_{\rm sh}^2c)$. Using the equation for $R$, one gets
\begin{eqnarray}
\Gamma_{\rm sh}&\sim&3.7\ E_{52}^{1/8} n_{\rm ISM}^{-1/8} t_{\rm day}^{-3/8},\\
R&\sim&(4.2\times10^{17}\ {\rm cm})\ E_{52}^{1/4} n_{\rm ISM}^{-1/4} t_{\rm day}^{1/4}.
\label{Gamma-t}
\end{eqnarray}
Here we assumed a local GRB with $z=0$; to include the redshift time dilation is trivial.
The co-moving density is $n_0\sim n_{\rm ISM}\Gamma_{\rm sh}$ (assuming the CR efficiency $\xi_{\rm CR}\simeq0.5\sim1$) and the minimum Lorentz factor of CR protons is $\gamma_0\sim\Gamma_{\rm sh}$. Hence, the length scale $x_0\sim\lambda_0/(2\xi_B)$ ($\lambda_0$ is the skin length) and the field $B_0$ in the shock co-moving frame become
\begin{eqnarray}
x_0&\sim& (2\times 10^7\ {\rm cm})\ n_{\rm ISM}^{-1/2}/(2\xi_B)\sim(10^9\ {\rm cm})\ n_{\rm ISM}^{-1/2}, \\
B_0&\sim& (0.2\ {\rm gauss})\ \xi_B^{1/2}n_{\rm ISM}^{1/2}\Gamma_{\rm sh}\sim (1~{\rm gauss})\ E_{52}^{1/2}R_{17}^{-3/2},
\label{x0B0}
\end{eqnarray}
where we assumed $\xi_B\sim0.01$. Assuming $p=2.2$ and the interstellar fields, $B_{\rm ISM}$, to be of order a microgauss, we estimate $X$ as
\begin{equation}
X\sim x_0\left[ 0.2 (\xi_B n_{\rm ISM})^{1/2} B_{\rm ISM}^{-1}\right]^{\frac{s+1}{s-1}}
\sim 2\times10^{47}\ x_0\ n_{\rm ISM}^{5.5} B_{\rm ISM, -6}^{-11},
\end{equation}
independent of $\Gamma_{\rm sh}$. On the other hand,
\begin{equation}
R/(2\Gamma_{\rm sh})\sim (5\times10^{17}\ {\rm cm})\ E_{52}^{1/3}n_{\rm ISM}^{-1/3}\Gamma_{\rm sh}^{-5/3}\ll X,
\end{equation}
indicating that the ambient field is relatively unimportant, even for very steep energy spectra $p\sim3.5$ rarely observed in prompt GRBs. The foreshock thickness is
\begin{equation}
x_{\rm max}\sim R/(2\Gamma_{\rm sh}) \sim 5\times10^8\ x_0\ E_{52}^{1/3}n_{\rm ISM}^{-1/3}\Gamma_{\rm sh}^{-5/3}.
\end{equation}
Therefore, the typical field in the foreshock is of sub-gauss strength:
\begin{equation}
B(x_{\rm max})\sim(0.2\ {\rm gauss})\ E_{52}^{0.45}n_{\rm ISM}^{0.09}R_{18}^{-1.3}.
\end{equation}
This field is relatively large-scale, as its co-moving correlation scale is
\begin{equation}
\lambda(x_{\rm max})\sim 2\xi_B\, x_{\rm max}\sim (10^{16}\ {\rm cm})\ E_{52}^{-1/2} n_{\rm ISM}^{1/2} R_{18}^{5/2}.
\label{lambda-xmax}
\end{equation}
The power emitted by CR electrons from the foreshock amounts to
\begin{equation}
P_{\rm tot}^{\rm obs}\sim (2\times10^{39}\ {\rm erg~s}^{-1})\ E_{52}^{1.6}L_{\rm CR, 45} n_{\rm ISM}^{-0.68}R_{18}^{-3.0}
\label{P-obs}
\end{equation}
in the observer's frame and is emitted at a peak (synchrotron) frequency
\begin{equation}
\nu_m^{\rm obs}\sim(10^{11}\ {\rm Hz})\ E_{52}^{2.0} n_{\rm ISM}^{-1.4} R_{18}^{-5.8},
\label{nu-obs}
\end{equation}
which corresponds to the IR band at about one day after the explosion, where $R(t)$ is given in Eq. (\ref{Gamma-t}), so that $\nu_m\propto t_{\rm day}^{-\frac{7s+17}{8(s+1)}}\propto t_{\rm day}^{-1.4}$.
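As a quick worked example, the sketch below simply plugs $t_{\rm day}=1$ and fiducial values $E_{52}=n_{\rm ISM}=L_{\rm CR,45}=1$ (illustrative assumptions) into the scaling relations quoted in this section.
\begin{verbatim}
# Evaluate the fitting formulas quoted in this section at t_day = 1 for
# fiducial parameters; no physics beyond the stated scalings is used.
E52, n, t_day, LCR45 = 1.0, 1.0, 1.0, 1.0

Gamma = 3.7 * E52**0.125 * n**-0.125 * t_day**-0.375
R = 4.2e17 * E52**0.25 * n**-0.25 * t_day**0.25          # cm
R18 = R / 1e18

B_fs  = 0.2 * E52**0.45 * n**0.09 * R18**-1.3            # gauss, B(x_max)
lam   = 1e16 * E52**-0.5 * n**0.5 * R18**2.5             # cm, lambda(x_max)
P_obs = 2e39 * E52**1.6 * LCR45 * n**-0.68 * R18**-3.0   # erg/s
nu_m  = 1e11 * E52**2.0 * n**-1.4 * R18**-5.8            # Hz

print(f"Gamma_sh ~ {Gamma:.1f},  R ~ {R:.1e} cm")
print(f"B(x_max) ~ {B_fs:.2f} G,  lambda(x_max) ~ {lam:.1e} cm")
print(f"P_tot ~ {P_obs:.1e} erg/s,  nu_m ~ {nu_m:.1e} Hz (infrared)")
\end{verbatim}
The output, $\nu_m$ of order $10^{13}$ Hz at one day, corresponds to the infrared band quoted above.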
\section{Discussion}
Here we presented a model of a self-similar foreshock produced by protons scattered by a relativistic shock into the unmagnetized or weakly magnetized external medium. It is immediately applicable to the external shock producing a GRB afterglow. The model predicts that a large region in front of the shock, of thickness of order the shock radius, shall be filled with relatively strong and large-scale magnetic fields. The upstream magnetic field strength and correlation length depend on the distance from the shock and the power-law index of accelerated protons (cosmic rays) and are given by Eqs. (\ref{lambda-x}), (\ref{B-x}), (\ref{x0B0}) in the shock co-moving frame. The overall energetics of the field is dominated by large distances; hence the average foreshock B-field is of sub-gauss strength and increases toward the shock, while its typical coherence length is of the order of a few percent of the shock radius and decreases toward the shock.
The result is interesting, especially in light of observational constraints on particle acceleration in GRB afterglows. It has been shown that magnetic fields of a few milligauss are needed in front of the shock in order to efficiently Fermi-accelerate the electrons to the energies required to produce the observed X-ray emission \citep{LW06}. Our model provides a possible and rather natural mechanism for the generation of such fields in the far upstream medium. We also predict that the shock-accelerated electrons will be radiating in the foreshock fields at the characteristic synchrotron frequency given by Eq. (\ref{nu-obs}). For reference, $\nu_m\sim 10\ {\rm THz}$ at about a day after the burst. It is possible that nonlinear effects omitted in this analysis (see discussion below) may limit the field strength to lower values than estimated here, so the synchrotron peak can move to the cm/mm-wave band. We can speculate that the emission from the foreshock can form an emission region separate from the afterglow shock and show up in the X-ray/optical band in the early afterglow phase and in the radio at very late times.
The presented results can have interesting implications for the radiative efficiency of external shocks. Unlike internal shocks, where electron cooling is extremely fast and a thin shock layer of thickness of a hundred ion inertial lengths may be enough to produce the observed prompt emission \citep{MS09}, the Weibel shock model \citep{ML99} seems to face the efficiency problem when it is applied to an external shock. In such a shock, magnetic fields shall occupy a much larger region, perhaps the entire downstream region, in order for the shock to produce enough photons that will be observed as a delayed afterglow emission. This is not very likely (though not proved to be impossible, yet) provided that the small-scale fields generated {\em at} the shock can be subject to rapid dissipation. However, dissipation shall be of much lesser importance for the foreshock fields, which have a (much) larger coherence length, Eq. (\ref{lambda-x}). In the steady state, the fields generated in the upstream are advected to the shock and their strength is maintained against dissipation by the anisotropy of the continuously ``refreshed'' CR distribution. Hence, one can expect that the magnetic field near the shock will have the spectrum given in Eq. (\ref{Bspec}). Once these fields pass through the shock into the downstream, they are enhanced by the shock compression and begin to decay. Commonly, dissipation is proportional to the inverse gradient scale squared $\propto\partial_x^2\propto\lambda^{-2}$, so that the skin-length-scale fields may eventually disappear. However, the fields above a certain coherence length, $\lambda_{\rm diss}$, shall survive and fill up the post-shock medium. The mechanism of dissipation is not yet understood in detail, so it is premature to make any quantitative conclusions about $\lambda_{\rm diss}$, but it will certainly be much smaller than $\lambda(x_{\rm max})\lesssim R$, see Eq. (\ref{lambda-xmax}). Since $B_\lambda$ is a weak function of $\lambda$, one can speculate that relatively strong fields, perhaps of order tens or hundreds of milligauss, occupy the post-shock medium.
We want to note that a number of simplifying assumptions have been made in our analysis. In particular, {\em nonlinear feedback} effects of the upstream magnetic field on the particle distribution, on the shock structure and on Fermi acceleration were omitted. The inclusion of these effects is hardly possible in any analytical model. We also assumed that a steady state exists for the shock-foreshock system at hand. In fact, it is not at all clear whether a steady state is possible or whether the system exhibits an intermittent behavior. One can envision a scenario in which the CRs overproduce upstream magnetic fields leading to enhanced particle scattering and the overall preheating of the ambient medium, which, in turn, can cause the shock to weaken, disappear and then re-appear in a different place further upstream. Presently available 2D PIC simulations of an electron-positron shock do show the upstream field amplification and no steady state has been achieved: both upstream and downstream fields continue to grow for the duration of the simulations \citep{KKSW08}. We argue that extensive PIC and/or hybrid simulations of a shock are imperative for further study.
Finally, we mention that our model complements other models of magnetic field generation. It is reasonable to expect that the field can be amplified by vortical motions produced by the Richtmyer-Meshkov instability, if the ambient medium is clumpy or if the shock velocity is not perfectly uniform \citep{GM07,SG07,Milos+08}. On the other hand, if the ambient magnetic fields are strong enough, the fields can be generated via non-resonant \citep{BellLucek01,Bell04,PLM08} or resonant \citep{DM06,DM07,Z03,V+06,E+07} mechanisms, or both \citep{Kirk+08}.
\ack
The authors thank colleagues at IKI and RRC ``KI'' for discussions. This work has been supported by NSF grant AST-0708213, NASA ATFP grant NNX-08AL39G, Swift Guest Investigator grant NNX-07AJ50G and DOE grant DE-FG02-07ER54940.
\section*{References}
\begin{harvard}
\bibitem[Achterberg et al.(2001)]{Acht+01} Achterberg, A., Gallant, Y.~A., Kirk, J.~G., \& Guthmann, A.~W.\ 2001, {\it Mon. Not. R. Astron. Soc.}, 328, 393
\bibitem[Bell \& Lucek(2001)]{BellLucek01} Bell, A. R., \& Lucek, S. G. 2001, {\it Mon. Not. R. Astron. Soc.}, 321, 433
\bibitem[Bell(2004)]{Bell04} Bell, A.~R.\ 2004, {\it Mon. Not. R. Astron. Soc.}, 353, 550
\bibitem[Bret et al.(2005)]{Bret+05a} Bret, A., Firpo, M.-C.,
\& Deutsch, C.\ 2005, {\it Phys. Rev. Lett.}, 94, 115002
\bibitem[Bret et al.(2005)]{Bret+05b} Bret, A., Firpo, M.-C.,
\& Deutsch, C.\ 2005, Laser and Particle Beams, 23, 375
\bibitem[Dermer \& Atoyan(2004)]{DA04} Dermer, C.~D., \& Atoyan, A.\ 2004, New Astronomy Review, 48, 453
\bibitem[Diamond \& Malkov(2006)]{DM06} Diamond, P.~H., \& Malkov, M.~A.\ 2006, KITP Conference: Supernova and Gamma-Ray Burst Remnants, 18
\bibitem[Diamond \& Malkov(2007)]{DM07} Diamond, P.~H., \& Malkov, M.~A.\ 2007, {\it Astrophys. J.}, 654, 252
\bibitem[Ellison et al.(2007)]{E+07} Ellison, D.~C.,
Patnaude, D.~J., Slane, P., Blasi, P., \& Gabici, S.\ 2007, {\it Astrophys. J.}, 661, 879
\bibitem[Goodman \& MacFadyen(2007)]{GM07} Goodman, J., \& MacFadyen, A.~I.\ 2007, ArXiv e-prints, 706, arXiv:0706.1818
\bibitem[Keshet et al.(2008)]{KKSW08} Keshet, U., Katz, B., Spitkovsky, A., \& Waxman, E.\ 2008, ArXiv e-prints, 802, arXiv:0802.3217
\bibitem[Kirk et al.(2000)]{Kirk+00} Kirk, J.~G., Guthmann, A.~W., Gallant, Y.~A., \& Achterberg, A.\ 2000, {\it Astrophys. J.}, 542, 235
\bibitem[Li \& Waxman(2006)]{LW06} Li, Z., \& Waxman, E. 2006, {\it Astrophys. J.}, 651, 328
\bibitem[Medvedev \& Loeb(1999)]{ML99} Medvedev, M. V., \& Loeb,
A. 1999, {\it Astrophys. J.}, 526, 697
\bibitem[Medvedev(2000)]{M00} Medvedev, M. V. 2000, {\it Astrophys. J.}, 540, 704
\bibitem[Medvedev et al.(2007)]{M+07} Medvedev, M.~V.,
Lazzati, D., Morsony, B.~C., \& Workman, J.~C.\ 2007, {\it Astrophys. J.}, 666, 339
\bibitem[Medvedev \& Spitkovsky(2008)]{MS09} Medvedev, M.V. \& Spitkovsky, A.\ 2008, {\it Astrophys. J.}, submitted.
\bibitem[Milosavljevic et al.(2007)]{Milos+08} Milosavljevic, M., Nakar, E., \& Zhang, F.\ 2007, ArXiv e-prints, 708, arXiv:0708.1588
\bibitem[Pelletier et al.(2008)]{PLM08} Pelletier, G., Lemoine, M., \& Marcowith, A.\ 2008, arXiv:0807.3459
\bibitem[Reville et al.(2008)]{Kirk+08} Reville, B., O'Sullivan, S., Duffy, P., \& Kirk, J.~G.\ 2008, {\it Mon. Not. R. Astron. Soc.}, 386, 509
\bibitem[Sironi \& Goodman(2007)]{SG07} Sironi, L., \& Goodman, J.\ 2007, {\it Astrophys. J.}, 671, 1858
\bibitem[Spitkovsky(2005)]{Spit05} Spitkovsky, A.\ 2005, Astrophysical Sources of High Energy Particles and Radiation, AIP Conf. Proc. 801, 345
\bibitem[Spitkovsky(2008)]{Spit08} Spitkovsky, A.\ 2008, {\it Astrophys. J. Lett.}, 673, L39
\bibitem[Vladimirov et al.(2006)]{V+06} Vladimirov, A.,
Ellison, D.~C., \& Bykov, A.\ 2006, {\it Astrophys. J.}, 652, 1246
\bibitem[Zweibel(2003)]{Z03} Zweibel, E.~G.\ 2003, {\it Astrophys. J.}, 587, 625
\end{harvard}
\begin{figure}
\includegraphics[width=5.5in]{figforeshock.pdf}
\caption{A schematic representation of the foreshock magnetic fields: the coherence length is increasing with the upstream distance. Below are schematic graphs showing variation of the spectrum of the streaming part of cosmic rays and the corresponding self-generated fields (highlighted).
\label{f1}}
\end{figure}
\end{document}
\section{Introduction}
A graph structure with its Laplacian matrix provides a mathematical tool to analyze the similarities between data points: those points with large enough similarities are connected by an edge. One can also assign edge weights to quantify such similarities. In many applications, it has been observed that the representation of the data set can be vastly improved by additionally endowing the edges of the graph with linear transformations \cite{Harary53,SingerWu,BSS13}. For example, when the graph represents a social network, one attaches to each edge an element from the one dimensional orthogonal group $O(1)=\{\pm 1\}$ to indicate two opposite kinds of relationships between members of the network (vertices). When the graph represents a higher dimensional data set, e.g., $2$-dimensional photos of a $3$-dimensional object from different views, one would like to assign to each edge an element of the orthogonal group $O(2)$ that optimally rotationally aligns the photos when comparing their similarity (see, e.g., \cite{SingerWu,BSS13}). In theoretical research, assigning linear transformations to the edges of a graph also provides mathematical structures that have been found very useful in various topics, e.g., the study of the Heawood map-coloring problem \cite{Gross74,GrossTucker74}, the construction of Ramanujan graphs \cite{BL06,MSS}, and the study of a discrete analogue of magnetic operators \cite{Sunada93,Shubin94}. The corresponding Laplacian of a graph with such an additional structure is called \emph{the connection Laplacian}, defined by Singer and Wu \cite{SingerWu}.
In fact, the connection Laplacian of a graph yields a very elegant and general mathematical framework for the analysis of massive data sets, which includes several extensively studied graph operators as particular cases, e.g., the classical Laplacian, the signless Laplacian \cite{DS09}, the Laplacian for Harary's signed graphs \cite{ZaslavskyMatrices,AtayLiu14}, and the discrete magnetic Laplacian \cite{Sunada93,Shubin94,LLPP15}.
In this paper, we study the spectra of the graph connection Laplacian, which are closely related to the geometric structure of the underlying graph with those transformations attached to its edges. We describe this geometric structure by introducing two types of quantities, Cheeger type constants and a discrete Ricci curvature. Our main theorem is concerned with higher order Buser type inequalities, showing the close relations between eigenvalues of the connection Laplacian and the Cheeger constants, assuming nonnegativity of the discrete Ricci curvature. We also obtain a lower bound estimate of the first nonzero eigenvalue of the connection Laplacian in terms of the lower Ricci curvature bound, i.e., we show a Lichnerowicz type eigenvalue estimate. In this process, the properties of the Cheeger constants and the discrete Ricci curvature are explored systematically. In particular, our eigenvalue estimates help us to deepen the understanding of these two geometric quantities.
\subsection{Higher order Buser inequalities}
We now aim to state our main theorem (Theorem \ref{thm:introMain} below) more explicitly. We first introduce relevant notation. Let $G=(V,E)$ be an undirected simple finite graph with vertex set $V$ and edge set $E$. For simplicity, we restrict ourselves to unweighted $D$-regular graphs in this Introduction. Let $H$ be a group. For each edge $\{x,y\}\in E$, we assign an element $\sigma_{xy}\in H$ to it, such that
\begin{equation}\label{intro:signature}
\sigma_{yx}=\sigma_{xy}^{-1}.
\end{equation}
Actually, we are defining a map $\sigma:E^{or}\to H$, where $E^{or}:=\{(x,y), (y,x)\mid \{x,y\}\in E\}$ is the set of all oriented edges. We call $\sigma$ a \emph{signature} of the graph $G$. In this paper, we restrict the group $H$ to be the $d$ dimensional orthogonal group $O(d)$ or unitary group $U(d)$.
Then the \emph{(normalized) connection Laplacian} $\Delta^\sigma$, as a matrix, is given by
\begin{equation}\label{eq:introConnction Lap}
\Delta^\sigma:=\frac{1}{D}A^\sigma-\mathrm{I}_{Nd},
\end{equation}
where $D$ is the (constant) vertex degree and $\mathrm{I}_{Nd}$ is a $(Nd)\times (Nd)$-identity matrix, $N$ the size of vertex set $V$, and $A^\sigma$ is the $(Nd)\times (Nd)$-matrix, blockwisely defined as
\begin{equation}
(A^\sigma)_{xy}=\left\{
\begin{array}{ll}
0, & \hbox{if $\{x,y\}\not\in E$;} \\
\sigma_{xy}, & \hbox{if $(x,y)\in E^{or}$.}
\end{array}
\right.
\end{equation}
Due to (\ref{intro:signature}), $\Delta^\sigma$ is Hermitian. Hence all eigenvalues of the matrix $\Delta^{\sigma}$ are real. Note that the connection Laplacian $\Delta^\sigma$ in (\ref{eq:introConnction Lap}) is defined as a negative semidefinite matrix for our later purpose of defining the discrete curvature, due to a convention originating from Riemannian geometry. However, we still want to deal with nonnegative eigenvalues. Hence, when we speak of eigenvalues of the connection Laplacian $\Delta^{\sigma}$, we mean the eigenvalues of the matrix $-\Delta^\sigma$. They can be listed (counting multiplicity) as
\begin{equation}
0\leq \lambda_1^{\sigma}\leq \lambda_2^{\sigma}\leq \cdots\leq\lambda^{\sigma}_{d}\leq \cdots\leq\lambda^{\sigma}_{(N-1)d+1}\leq\lambda^{\sigma}_{(N-1)d+2}\leq \cdots\leq \lambda^{\sigma}_{Nd}\leq 2.
\end{equation}
Observe that two different signatures do not necessarily lead to different spectra. Given a function $\tau:V\to H$ and a signature $\sigma$, we consider the new signature $\sigma^\tau$ defined by
\begin{equation}
\sigma_{xy}^\tau:=\tau(x)^{-1}\sigma_{xy}\tau(y),\,\,\forall\,(x,y)\in E^{or}.
\end{equation}
Then the corresponding connection Laplacians $\Delta^\sigma$ and $\Delta^{\sigma^\tau}$ are unitarily equivalent and hence share the same spectra. Indeed, it is easy to check that
\begin{equation}\label{intro:unitary}
\Delta^{\sigma^\tau}=(M_\tau)^{-1}\Delta^\sigma M_\tau,
\end{equation}
where $M_\tau$ stands for the matrix given blockwisely by $(M_\tau)_{xx}:=\tau(x)$. We call the function $\tau$ a \emph{switching function}. Two signatures $\sigma$ and $\sigma'$ are said to be \emph{switching equivalent}, if there exists a switching function $\tau$ such that $\sigma'=\sigma^\tau$. It follows from (\ref{intro:unitary}) that the eigenvalues of the connection Laplacian $\Delta^\sigma$ are switching invariant.
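For concreteness, the short numerical sketch below (with arbitrarily chosen angles) builds $\Delta^\sigma$ for the $4$-cycle with a $U(1)$ signature, i.e.\ $d=1$, checks that it is Hermitian, and verifies numerically that its spectrum is invariant under switching.
\begin{verbatim}
import numpy as np

# Connection Laplacian of the 4-cycle with a U(1) signature (d = 1) and a
# numerical check of switching invariance; the angles are arbitrary.
N, D = 4, 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
angles = {(0, 1): 0.7, (1, 2): -1.3, (2, 3): 0.2, (3, 0): 2.1}
sigma = {e: np.exp(1j * t) for e, t in angles.items()}

def connection_laplacian(sig):
    A = np.zeros((N, N), dtype=complex)
    for (x, y) in edges:
        A[x, y] = sig[(x, y)]
        A[y, x] = np.conj(sig[(x, y)])   # sigma_yx = sigma_xy^{-1}
    return A / D - np.eye(N)             # Delta^sigma = A^sigma / D - I

L = connection_laplacian(sigma)
assert np.allclose(L, L.conj().T)        # Hermitian
eig = np.linalg.eigvalsh(-L)             # eigenvalues of -Delta^sigma
print("spectrum:", np.round(eig, 4))     # all lie in [0, 2]

# Switching by tau: V -> U(1) leaves the spectrum unchanged.
tau = np.exp(1j * np.array([0.3, -0.9, 1.7, 0.05]))
sigma_tau = {(x, y): np.conj(tau[x]) * sigma[(x, y)] * tau[y]
             for (x, y) in edges}
eig_tau = np.linalg.eigvalsh(-connection_laplacian(sigma_tau))
print("switching invariant:", np.allclose(np.sort(eig), np.sort(eig_tau)))
\end{verbatim}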
The Cheeger type constants $\{h_k^\sigma, k=1,2,\ldots, N\}$ and the discrete Ricci curvature $K_\infty(\sigma)$ that we are going to introduce are also switching invariant. A signature $\sigma$ is said to be \emph{balanced} if it is switching equivalent to the trivial signature $\sigma_{\mathrm{triv}}:E^{or}\to id\in H$. In fact, the constants $\{h_k^\sigma, k=1,2,\ldots, N\}$ are quantifying the connectivity of the graph and the unbalancedness of the signature $\sigma$. The latter is described by the \emph{frustration index} $\iota^\sigma(S)$ of the signature $\sigma$ restricted to the induced subgraph of $S\subseteq V$, with the property that
$$\iota^\sigma(S)=0\,\,\Leftrightarrow\,\,\sigma \,\,\text{restricted on $S$ is balanced}.$$
By abuse of notation, we will also use $S$ to denote its induced subgraph. Denote by $|E(S, V\setminus S)|$ the number of edges connecting $S$ and its complement $V\setminus S$. We then define
\begin{equation}
\phi^\sigma(S):=\frac{\iota^\sigma(S)+|E(S,V\setminus S)|}{D\cdot |S|},
\end{equation}
where $|S|$ is the cardinality of the set $S$. Then the Cheeger constant $h_k^\sigma$ is defined as
\begin{equation}\label{intro:Cheeger}
h_k^\sigma:=\min_{\{S_i\}_{i=1}^k}\max_{1\leq i\leq k}\phi^\sigma(S_i),
\end{equation}
where the minimum is taken over all nonempty, pairwise disjoint subsets $\{S_i\}_{i=1}^k$ of the vertex set $V$. The above definition of Cheeger constants is a natural extension of the constants in \cite{AtayLiu14} and \cite{LLPP15}, where $H=O(1)$ and $U(1)$, respectively, and is closely related to the $O(d)$ frustration $\ell^1$ constant in \cite{BSS13} (see Remark \ref{remark:BSS} for a detailed explanation).
The nonnegativity of the discrete Ricci curvature $K_\infty(\sigma)$, or the \emph{curvature dimension inequality with a signature}, $CD^\sigma(0,\infty)$, is an extension of the classical curvature dimension inequality \`{a} la Bakry and \'{E}mery \cite{BaEm,Bakry} on graphs, which has been studied extensively in recent years, see, e.g., \cite{Schmuckenschlager98,LinYau,JostLiu14,ChungLinYau14,KKRT15,LiuPeyerimhoff14,HuaLin15}. For related notions of curvature dimension inequalities and their strong implications in establishing various Li-Yau inequalities for heat semigroups on graphs, we refer to \cite{BHLLMY13, Horn14,FM14,FM15}. The definition of $CD^\sigma(0,\infty)$ uses both the connection Laplacian $\Delta^\sigma$ and the graph Laplacian $\Delta$, capturing the structure of the graph (especially, its cycles) and the signature (especially, the signature of cycles) locally around each vertex (see Proposition \ref{pro:gamma2Matrix}). The curvature condition $CD^\sigma(0,\infty)$ can be characterized by properties of the classical heat semigroup $P_t:=e^{t\Delta}$ for functions and the heat semigroup $P_t^\sigma:=e^{t\Delta^\sigma}$ for vector fields (vector valued functions) of the underlying graph (see Theorem \ref{thm:curvature-characterization}). Another appealing feature of this curvature notion is that it can be calculated very efficiently. Indeed, calculating this curvature is equivalent to solving semidefinite programming problems.
Our main theorem is the following result.
\begin{thm}[Higher order Buser inequalities]\label{thm:introMain}
Assume that a graph $G$ with a signature $\sigma$ satisfies $CD^\sigma(0,\infty)$. Then for each natural number $1\leq k\leq N$, we have
\begin{equation}
\lambda_{kd}^\sigma\leq 16D(kd)^2\log(2kd)(h_k^\sigma)^2.
\end{equation}
\end{thm}
Note that $\lambda_{kd}^\sigma$ should be considered as the maximal value of the group of eigenvalues $\{\lambda_{(k-1)d+1}^\sigma, \ldots, \lambda_{kd}^\sigma\}$. There are $N$ different groups of eigenvalues and $N$ Cheeger constants, correspondingly.
In 1982, Peter Buser \cite{Buser82} showed that the first nonzero eigenvalue of the Laplace-Beltrami operator on a closed Riemannian manifold is bounded from above by the square of the Cheeger constant, up to a constant involving Ricci curvature. The authors of \cite{KKRT15} extend an argument of Ledoux \cite{Ledoux04} to establish analogous Buser type estimates on graphs satisfying the classical curvature dimension inequality $CD(0,\infty)$. In fact, Theorem \ref{thm:introMain} reduces to their result (see (\ref{intro:KK}) below) up to a constant, when $k=2$, $d=1$, and the signature $\sigma$
is balanced.
Higher order Buser inequalities were first proved by Funano \cite{Funano2013} on Riemannian manifolds, and later improved in \cite{Liu14} on manifolds, and in \cite{LiuPeyerimhoff14} for graph Laplacians, via showing an eigenvalue ratio estimate. However, the method in \cite{Liu14,LiuPeyerimhoff14} does not extend to the connection Laplacian for a general signature $\sigma: E^{or}\to H=O(d)$ or $U(d)$, except for the very special case $O(1)$ (see Example \ref{example:section 7}). We discuss extensions of the methods in \cite{Liu14,LiuPeyerimhoff14} for $H=O(1)$ signatures in Section \ref{section:O(1)}. For general signatures, our proof neatly extends Ledoux's \cite{Ledoux04} argument for Buser's inequality and provides new ideas for establishing higher order Buser inequalities.
In the following sections, we explain the ingredients of Theorem \ref{thm:introMain} in more details.
\subsection{Motivation and a dual Buser inequality}
In this subsection, we briefly recall some known results about Cheeger and dual Cheeger constants of a graph $G$ and the eigenvalues of the graph Laplacian $\Delta$, which can be listed as below,
$$0=\lambda_1\leq \lambda_2\leq\cdots\leq \lambda_N\leq 2.$$
This explains one motivation of Theorem \ref{thm:introMain} from the spectral theory of the graph Laplacian $\Delta$. Recall that $\Delta:=\frac{1}{D}A-\mathrm{I}_N$, where $A$ is the adjacency matrix of $G$, i.e., $\Delta$ can be viewed as the connection Laplacian with the trivial signature $\sigma_{\mathrm{triv}}:E^{or} \to 1\in O(1)$. The above $\lambda_i$'s are eigenvalues of the matrix $-\Delta$.
By the results in \cite{AtayLiu14}, we know that if we assign to $G$ the trivial $O(1)$ signature $\sigma_{\mathrm{triv}}:E^{or}\to 1\in O(1)$, then the constant $h_2^{\sigma_{\mathrm{triv}}}$ coincides with the classical Cheeger constant of $G$. If, instead, we assign to $G$ the signature $-\sigma_{\mathrm{triv}}: E^{or}\to -1\in O(1)$, then the constant $h_1^{-\sigma_{\mathrm{triv}}}$ reduces to the bipartiteness ratio of Trevisan \cite{Trevisan2012}, or to one minus the dual Cheeger constant of Bauer and Jost \cite{BJ}. For details, we refer to \cite{AtayLiu14}. In fact, we have the following relations between eigenvalues, Cheeger constants and structural properties of the underlying graph:
\begin{align*}
\lambda_2=0\,\,&\Leftrightarrow\,\,h_2^{\sigma_{\mathrm{triv}}}=0\,\,\Leftrightarrow\,\, G\,\,\text{has at least two connected components};\\
2-\lambda_N=0\,\,&\Leftrightarrow\,\,h_1^{-\sigma_{\mathrm{triv}}}=0\,\,\Leftrightarrow\,\,G\,\,\text{has a bipartite connected component}.
\end{align*}
The Cheeger \cite{Dodziuk1984,AM1985,Alon1986} and dual Cheeger inequalities \cite{Trevisan2012,BJ} assert that
\begin{equation*}
\frac{(h_2^{\sigma_{\mathrm{triv}}})^2}{2}\leq \lambda_2\leq 2h_2^{\sigma_{\mathrm{triv}}} \,\,\,\,\,\text{and}\,\,\,\,\,\frac{(h_1^{-\sigma_{\mathrm{triv}}})^2}{2}\leq 2-\lambda_N\leq 2h_1^{-\sigma_{\mathrm{triv}}}.
\end{equation*}
For many purposes, it is very useful to have further relations between $\lambda_2$ ($2-\lambda_N$, resp.) and $h_2^{\sigma_{\mathrm{triv}}}$ ($h_1^{-\sigma_{\mathrm{triv}}}$, resp.). The authors of \cite{KKRT15} prove the following \emph{Buser inequality}: If $G$ satisfies the curvature dimension inequality $CD(0,\infty)$, then
\begin{equation}\label{intro:KK}
\lambda_2\leq 16D (h_2^{\sigma_{\mathrm{triv}}})^2.
\end{equation}
The inequality $CD(0,\infty)$ is defined solely by the graph Laplacian $\Delta$: For any two functions $f,g:V\to \mathbb{R}$, we define two operators $\Gamma$ and $\Gamma_2$ as follows:
\begin{align}
&2\Gamma(f,g):=\Delta(fg)-f\Delta g-(\Delta f)g,\label{intr:Gamma}\\
&2\Gamma_2(f,g):=\Delta(\Gamma(f,g))-\Gamma(f,\Delta g)-\Gamma(\Delta f,g).\label{intr:Gamma2}
\end{align}
The graph $G$ \emph{satisfies $CD(0,\infty)$} if we have for any function $f:V\to \mathbb{R}$,
\begin{equation}
\Gamma_2(f,f)\geq 0.
\end{equation}
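This condition can be verified mechanically: for each vertex $x$, the map $(f,g)\mapsto\Gamma_2(f,g)(x)$ is a symmetric bilinear form, so $CD(0,\infty)$ amounts to positive semidefiniteness of one $N\times N$ matrix per vertex. A minimal numerical sketch for the cycle $\mathcal{C}_5$:
\begin{verbatim}
import numpy as np

# Check CD(0, infinity) for the cycle C_5 with the normalized Laplacian.
N, D = 5, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
Delta = A / D - np.eye(N)

def Gamma(f, g):     # 2 Gamma(f,g) = Delta(fg) - f Delta g - (Delta f) g
    return 0.5 * (Delta @ (f * g) - f * (Delta @ g) - (Delta @ f) * g)

def Gamma2(f, g):    # as in the definition of Gamma_2 above
    return 0.5 * (Delta @ Gamma(f, g) - Gamma(f, Delta @ g)
                  - Gamma(Delta @ f, g))

e = np.eye(N)
for x in range(N):
    # matrix of the quadratic form f -> Gamma_2(f, f)(x)
    M = np.array([[Gamma2(e[i], e[j])[x] for j in range(N)] for i in range(N)])
    print("vertex", x, "min eigenvalue:", round(np.linalg.eigvalsh(M).min(), 12))
# all minimal eigenvalues are nonnegative up to round-off, i.e. CD(0, infinity)
\end{verbatim}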
In particular, every cycle graph $\mathcal{C}_N$ with $N$ vertices satisfies $CD(0,\infty)$. Moreover, we have for the graph $\mathcal{C}_N$ (see, e.g., \cite[Proposition 7.4]{Liu13}),
\begin{equation}
(h_2^{\sigma_{\mathrm{triv}}})^2\leq \lambda_2(\mathcal{C}_N)\leq 5(h_2^{\sigma_{\mathrm{triv}}})^2,
\end{equation}
which is in line with the Cheeger inequality and Buser inequality, and also
\begin{equation}\label{intr:cycleDual}
0.3(h_1^{-\sigma_{\mathrm{triv}}})^2\leq2-\lambda_N(\mathcal{C}_N)\leq 5(h_1^{-\sigma_{\mathrm{triv}}})^2.
\end{equation}
A natural question then arises: Is there any similar generalization of the right hand side of (\ref{intr:cycleDual})? That is, we are asking for a possible \emph{dual Buser inequality} for the graph Laplacian $\Delta$.
Observe that the first eigenvalue of the connection Laplacian $\Delta^{-\sigma_{\mathrm{triv}}}$, also known as the signless Laplacian \cite{DS09},
is equal to $2-\lambda_N$. Indeed, one checks that
$$-\Delta^{-\sigma_{\mathrm{triv}}}=2I_N-(-\Delta)=I_N+\frac{1}{D}A.$$
Therefore, Theorem \ref{thm:introMain} implies the following dual Buser inequality for $\Delta$.
\begin{coro}[Dual Buser inequality]
Assume that $G$ satisfies $CD^{-\sigma_{\mathrm{triv}}}(0,\infty)$. Then we have
\begin{equation}
2-\lambda_N\leq 16(\log 2) D (h_1^{-\sigma_{\mathrm{triv}}})^2.
\end{equation}
\end{coro}
This provides a ``dual'' version of the Buser inequality in (\ref{intro:KK}). We would like to mention that every cycle graph $\mathcal{C}_N$ also fulfills the inequality $CD^{-\sigma_{\mathrm{triv}}}(0,\infty)$.
One may guess that the inequality $CD^{-\sigma_{\mathrm{triv}}}(0,\infty)$ is defined by replacing the Laplacian $\Delta$ in (\ref{intr:Gamma}) and (\ref{intr:Gamma2}) by $\Delta^{-\sigma_{\mathrm{triv}}}$. However, this does not work. The reason is that the corresponding heat semigroup $P_t^{-\sigma_{\mathrm{triv}}}:=e^{t\Delta^{-\sigma_{\mathrm{triv}}}}$ does not possess a probability kernel (the operator $P_t^{-\sigma_{\mathrm{triv}}}$ is not even nonnegative), a property which is essential for the proofs in \cite{Ledoux04,KKRT15}. In fact, our definition of $CD^{-\sigma_{\mathrm{triv}}}(0,\infty)$ involves both matrices $\Delta$ and $\Delta^{-\sigma_{\mathrm{triv}}}$, which will be explained in the next section.
\subsection{Curvature dimension inequalities with signatures}
It actually looks more natural to define the curvature dimension inequality with a signature using both matrices $\Delta$ and $\Delta^\sigma$ when we come back to the general setting: For a $d$-dimensional signature $\sigma$, the connection Laplacian $\Delta^\sigma$, as an operator, acts on vector fields, i.e. functions $f: V\to \mathbb{K}^d$, where $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$.
\begin{definition}\label{def:introGamma}
For any two functions $f,g: V\rightarrow \mathbb{K}^d$, we define
\begin{equation}\label{eq:introGamma}
2\Gamma^\sigma(f,g):=\Delta(f^T\overline{g})-f^T(\overline{\Delta^\sigma g}) - (\Delta^\sigma f)^T\overline{g},
\end{equation}
and
\begin{equation}\label{eq:introGammatwo}
2\Gamma^{\sigma}_2(f,g):=\Delta\Gamma^\sigma(f,g)-\Gamma^\sigma(f, \Delta^\sigma g)-
\Gamma^\sigma(\Delta^\sigma f, g).
\end{equation}
\end{definition}
Note that $\Gamma^\sigma(f,g)$ and $\Gamma^{\sigma}_2(f,g)$ are $\mathbb{K}$-valued functions on $V$.
We also write $\Gamma^{\sigma}(f):=\Gamma^{\sigma}(f,f)$ and
$\Gamma^\sigma_2(f):=\Gamma^\sigma_2(f,f)$, for short.
In (\ref{eq:introGamma}) and (\ref{eq:introGammatwo}), we use the graph Laplacian whenever we deal with a $\mathbb{K}$-valued function, and we use the graph connection Laplacian whenever we deal with a $\mathbb{K}$-vector valued function.
\begin{definition}[$CD^\sigma(K,\infty)$ inequality]\label{def:introCDineq}
Let $K\in \mathbb{R}$. We say the graph $G$ with a signature $\sigma$ satisfies the curvature dimension inequality $CD^\sigma(K,\infty)$ if we have for any vector field $f:V\to \mathbb{K}^d$ and any vertex $x\in V$,
\begin{equation}\label{eq:introCD}
\Gamma_2^\sigma(f)(x)\geq K\Gamma^\sigma(f)(x).
\end{equation}
The \emph{precise $\infty$-dimensional Ricci curvature lower bound} $K_\infty(\sigma)$ is defined as the largest constant $K$ such that (\ref{eq:introCD}) holds.
\end{definition}
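As a small worked example (cf.\ Section \ref{section:LMI} for the general semidefinite programming formulation), the following sketch computes $K_\infty(\sigma)$ for the triangle with exactly one negative edge, i.e.\ an $O(1)$ signature with $d=1$, by bisection: $CD^\sigma(K,\infty)$ holds if and only if the matrix of the form $\Gamma_2^\sigma-K\Gamma^\sigma$ is positive semidefinite at every vertex, and feasibility is monotone in $K$ because $\Gamma^\sigma$ is nonnegative. The tolerance and the search interval are arbitrary choices.
\begin{verbatim}
import numpy as np

# Precise curvature lower bound K_infinity(sigma) of the signed triangle
# (one negative edge), computed by bisection on pointwise matrix
# inequalities; d = 1, so vector fields are ordinary real functions.
N, D = 3, 2
A   = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
S   = np.array([[0., 1., -1.], [1., 0., 1.], [-1., 1., 0.]])  # signed adjacency
Dl  = A / D - np.eye(N)          # graph Laplacian Delta
DlS = S / D - np.eye(N)          # connection Laplacian Delta^sigma

def GammaS(f, g):
    return 0.5 * (Dl @ (f * g) - f * (DlS @ g) - (DlS @ f) * g)

def Gamma2S(f, g):
    return 0.5 * (Dl @ GammaS(f, g) - GammaS(f, DlS @ g) - GammaS(DlS @ f, g))

def form(bilinear, x):           # matrix of the bilinear form at vertex x
    e = np.eye(N)
    return np.array([[bilinear(e[i], e[j])[x] for j in range(N)]
                     for i in range(N)])

def cd_holds(K, tol=1e-9):
    return all(np.linalg.eigvalsh(form(Gamma2S, x) - K * form(GammaS, x)).min()
               >= -tol for x in range(N))

lo, hi = -2.0, 2.0               # search interval (feasibility is monotone in K)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if cd_holds(mid) else (lo, mid)
print("K_infinity(sigma) of the signed triangle:", round(lo, 6))
\end{verbatim}
For larger graphs one would solve the corresponding semidefinite programs directly, as discussed in Section \ref{section:LMI}.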
In Section \ref{section:HeatChar}, we show that the above curvature condition $CD^\sigma(0,\infty)$ can be characterized in terms of the corresponding heat semigroups $P_t:=e^{t\Delta}$ and $P_t^\sigma=e^{t\Delta^\sigma}$ as follows:
\begin{equation*}
CD^\sigma(K,\infty)\,\,\Leftrightarrow\,\,\Gamma^\sigma(P_t^\sigma f)\leq e^{-2Kt}P_t(\Gamma^\sigma(f)), \,\,\forall\,f:V\to\mathbb{K}^d,\,\,\forall\,t\geq 0.
\end{equation*}
This is very useful for the proof of Theorem \ref{thm:introMain}.
It turns out that every graph $G$ with a signature $\sigma$ satisfies $CD^\sigma(\frac{2}{D}-1,\infty)$ (see Corollary \ref{cor:lower curvature bound}). This is shown by considering the switching invariance of $CD^\sigma(K,\infty)$ and $CD^\sigma$ inequalities of covering graphs (see Sections \ref{section:switching inv} and \ref{section:covering}). In particular, every (unweighted) cycle graph with a signature $\sigma: E^{or}\to O(d)$~or~$U(d)$ satisfies $CD^\sigma(0,\infty)$.
Given a graph $G$ and a signature $\sigma$, the curvature $K_\infty(\sigma)$ can be computed very efficiently by reformulating the $CD^\sigma(K,\infty)$ inequality as linear matrix inequalities at local neighborhoods of all vertices (see Section \ref{section:LMI}). Computing the precise Ricci curvature lower bound $K_\infty(\sigma)$ is then equivalent to solving semidefinite programming problems, for which efficient solvers exist. In particular, we derive the precise formula of $K_\infty(\sigma)$ for a triangle ($3$-cycle) graph with $\sigma: E^{or}\to U(1)$ in Section \ref{subsection:triangle}.
Moreover, the class of graphs with signatures satisfying $CD^\sigma(0, \infty)$ inequalities is rich since this curvature property is preserved by taking Cartesian products: Given two graphs $G_i=(V_i, E_i)$, $i=1,2$ with signatures $\sigma_i:E_i^{or}\to H_i=O(d_i)$ or $U(d_i)$, $i=1,2$. Denote their Cartesian product graph by $G_1\times G_2=(V_1\times V_2, E_{12})$. Let us assign a signature $\widehat{\sigma}_{12}:E_{12}^{or}\to H_1\otimes H_2$ to $G_1\times G_2$ as follows:
\begin{align}
\widehat{\sigma}_{12, (x_1,y)(x_2,y)}&:=\sigma_{1,x_1x_2}\otimes \mathrm{I}_{d_2}, \,\,\text{ for any }\,\,(x_1,x_2)\in E_1^{or}, y\in V_2;\notag\\
\widehat{\sigma}_{12, (x,y_1)(x,y_2)}&:=\mathrm{I}_{d_1}\otimes\sigma_{2,y_1y_2}, \,\,\text{ for any }\,\,(y_1,y_2)\in E_2^{or}, x\in V_1.\notag
\end{align}
Then we have the following theorem (see Theorem \ref{thm:CartesianII} and Remark \ref{remark:vertex measure}).
\begin{thm}
Let $G_i,i=1,2$ with signatures $\sigma_i,i=1,2$ satisfy $CD^{\sigma_1}(K_1,\infty)$ and $CD^{\sigma_2}(K_2,\infty)$, respectively. Then the Cartesian product graph $G_1\times G_2$ with the signature $\widehat{\sigma}_{12}$ satisfies $CD^{\widehat{\sigma}_{12}}(\frac{1}{2}\min\{K_1, K_2\},\infty).$
\end{thm}
In Appendix \ref{section:appendixCurCheeCar}, we discuss similar behavior of the curvature dimension inequality on the Cartesian product $G_1\times G_2$ when we assign to it various choices of edge weights, vertex measures and signatures.
\subsection{Frustration index via spanning trees}
The frustration index $\iota^\sigma(S)$, measuring the discrepancy of the signature $\sigma$ from being balanced when restricted to $S$, is an important ingredient for the definition of Cheeger type constants (\ref{intro:Cheeger}). In particular, for an $U(1)$ signature $\sigma:E^{or}\to U(1)$, it is defined as
\begin{equation}
\iota^\sigma(S):=\min_{\tau:S\to U(1)}\sum_{\{x,y\}\in E_S}|\sigma_{xy}\tau(y)-\tau(x)|,
\end{equation}
where $E_S$ stands for the edges of the induced subgraph of $S$, and the minimum is taken over all switching functions on $S$. For higher dimensional signatures, we need to choose a matrix norm to define $\iota^\sigma(S)$, see Section \ref{section:CheegerSig}.
For $U(1)$ signatures, we show that there is an easier way to calculate its frustration index: We can calculate $\iota^\sigma(S)$ by taking the minimum only over a finite set of switching functions. Let the induced subgraph of $S$ be connected. Given a spanning tree $T$ of $S$, we pick a switching function $\tau_T$ that switches the signature $\sigma$, restricted to $T$, to the trivial signature. Then we have
\begin{equation}\label{eq:introfrust}
\iota^\sigma(S)=\min_{T\in \mathbb{T}_S}\sum_{\{x,y\}\in E_S}|\sigma_{xy}\tau_T(y)-\tau_T(x)|,
\end{equation}
where $\mathbb{T}_S$ is the set of all spanning trees of $S$, which is a finite set (see Section \ref{subsection:spanning tree}). This provides combinatorial expressions for $\iota^\sigma(S)$ and hence the Cheeger constants.
Surprisingly, such a simplified expression (\ref{eq:introfrust}) of $\iota^\sigma(S)$ fails for signatures of dimension $\geq 2$. We present a counterexample in Appendix \ref{section:appendixSpanningTree}.
Frustration indices and hence Cheeger constants behave well under taking Cartesian products as in the case of curvature dimension inequalities. This is discussed in Appendix \ref{section:appendixCurCheeCar}.
\subsection{Lichnerowicz inequality and the jump of the curvature}
Let $\lambda^\sigma$ be the first nonzero eigenvalue of the connection Laplacian $\Delta^\sigma$. Suppose the graph $G$ is connected. We observe that when $\sigma$ is unbalanced, we have $\lambda_1^\sigma\neq 0$, and hence $\lambda^\sigma=\lambda^\sigma_1$. Moreover, when $\iota^\sigma(V)$ becomes very small, i.e., when $\sigma$ is very close to being balanced, $\lambda^\sigma=\lambda^\sigma_1$ becomes very close to $0$. Once $\sigma$ becomes balanced, $\lambda_1^\sigma=0$, and $\lambda^\sigma=\lambda_2^\sigma>0$. We say that the quantity $\lambda^\sigma$ \emph{jumps} when $\sigma$ becomes balanced.
We show the following Lichnerowicz type eigenvalue estimate in Section \ref{section:Lichnerowicz}.
\begin{thm}[Lichnerowicz inequality]\label{thm:introLic} Assume that the graph $G$ with a signature $\sigma$ satisfies $CD^\sigma(K,\infty)$. Then the first nonzero eigenvalue $\lambda^\sigma$ satisfies
\begin{equation}
\lambda^\sigma\geq K.
\end{equation}
\end{thm}
For another Lichnerowicz type eigenvalue estimate for the eigenvalues $\lambda_2$ and $2-\lambda_N$ of the graph Laplacian $\Delta$ in terms of the coarse Ricci curvature bound due to Ollivier \cite{Ollivier09}, we refer to \cite{BJL12}.
An interesting application of Theorem \ref{thm:introLic} is the following: The jump phenomenon of the quantity $\lambda^\sigma$ imposes a similar jump phenomenon on the curvature.
Figure \ref{FtriangleEigenJump} illustrates the jumps of the first nonzero eigenvalue $\lambda^\sigma$ and the curvature $K_{\infty}(\sigma)$ of the particular example of a triangle graph $\mathcal{C}_3$ with $\sigma: E^{or}\to U(1)$, when $\sigma$ becomes balanced. In Figure \ref{FtriangleEigenJump}, the complex variable $s=Sgn(\mathcal{C}_3)\in U(1)$ is the signature of the triangle (see (\ref{eq:sigC}) for the definition). The signature $\sigma$ is balanced if and only if $\mathrm{Re(s)}=1$. See Section \ref{subsection:triangle} for details.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{EigenCurvTriangle.pdf}
\caption{Curvature and eigenvalues of a signed triangle\label{FtriangleEigenJump}}
\end{figure}
Moreover, Theorem \ref{thm:introLic} also establishes direct relations between Cheeger constants and the discrete Ricci curvature, see Section \ref{section:Lichnerowicz}.
\subsection{Organization of the paper}
In Section \ref{section:ConnLap}, we set up our general setting of a graph with edge weights, a general vertex measure and a signature, and discuss the associated connection Laplacian. In Section \ref{section:CurDimIneSig}, we discuss various basic properties of the curvature dimension inequalities with signatures and also their equivalent definitions. In Section \ref{section:MultCheeSig}, we introduce multi-way Cheeger constants with signatures and discuss some of their fundamental properties. Section \ref{section:BusIne} is devoted to the proof of our main result, that is, higher order Buser inequalities. In Section \ref{section:Lichnerowicz}, we prove a Lichnerowicz type eigenvalue estimate and discuss its applications. The special case of graphs with $O(1)$ signatures is treated in Section \ref{section:O(1)}, where an eigenvalue ratio estimate is obtained. In Appendix \ref{section:appendixCurCheeCar}, we provide a detailed discussion of the behavior of the two concepts, curvature dimension inequalities and Cheeger constants with signatures, on Cartesian product graphs. Appendix \ref{section:appendixSpanningTree} contains a counterexample showing that the combinatorial expression of the frustration index via spanning trees, which we establish in Section \ref{section:MultCheeSig} for graphs with $U(1)$ signatures, no longer holds for $U(d)$ signatures with $d>1$.
\section{The Connection Laplacian}\label{section:ConnLap}
In this section, we introduce the basic setting of a graph with edge weights, a vertex measure and a signature, and the corresponding connection Laplacian.
\subsection{Basic setting} Throughout the paper, $G=(V, E, w)$ denotes an undirected weighted simple finite graph with vertex set $V$ and edge set $E$. If two vertices $x,y\in V$
are connected by an edge, we write $x\sim y$ and denote this edge by $\{x,y\}$. To each edge $\{x,y\}\in E$, we associate a positive symmetric weight $w_{xy}=w_{yx}$. Let $$d_x:=\sum_{y,y\sim x}w_{xy}$$ be the (weighted) vertex degree of $x\in V$.
For the vertex set $V$, we assign a finite positive measure $\mu: V\to \mathbb{R}_{>0}$. The following two
quantities $D_G^{non}$ and $D_G^{nor}$ will appear naturally in our
arguments:
\begin{equation}\label{eq:degrees}
D_G^{non}:=\max_{x\in V}\frac{d_x}{\mu(x)}, \,\,\text{ and }\,\, D_G^{nor}:=\max_{x\in V}\max_{y,y\sim x}\frac{\mu(x)}{w_{xy}}.
\end{equation}
Typically, one chooses $\mu(x)=1$ for all $x\in V$ ($\mu=\mathbf{1}_V$ for short), or $\mu(x)=d_x$ for all $x\in V$ ($\mu=\mathbf{d}_V$ for short). The superscripts in (\ref{eq:degrees}) are abbreviations for ``nonnormalized'' and ``normalized'', respectively. Observe that $D_G^{non}=\max_{x\in V}d_x$ for the measure $\mu=\mathbf{1}_V$, while $D_G^{nor}=\max_{x\in V}d_x$ for the measure $\mu=\mathbf{d}_V$ and $w_{xy} = 1$ for all $\{x,y\} \in E$.
We write $(G, \mu, \sigma)$ to denote a graph $G=(V,E,w)$ with the vertex measure $\mu$ and the signature $\sigma:E^{or}\to H$ where $H$ is a group (recall (\ref{intro:signature})).
Recall from the Introduction that $\sigma$ is balanced if it is switching equivalent to the trivial signature $\sigma_{\mathrm{triv}}: E^{or}\to id\in H$. Actually, the original definition of balancedness of a signature, due to Harary \cite{Harary53}, is given via the signatures of cycles of the underlying graph. Let $\mathcal{C}$ be a cycle of $G$, i.e., a subgraph composed of a sequence $(x_1,x_2), (x_2,x_3), \cdots, (x_{\ell-1},x_\ell),(x_\ell,x_1)$ of distinct edges. Then the signature $Sgn(\mathcal{C})$ of $\mathcal{C}$ is defined as the conjugacy class of the element
\begin{equation}\label{eq:sigC}
\sigma_{x_1x_2}\sigma_{x_2x_3}\cdots\sigma_{x_{\ell-1}x_\ell}\sigma_{x_\ell x_1}\in H.
\end{equation}
Note that the signature of any cycle is switching invariant.
Harary \cite{Harary53} (see also \cite{Zaslavsky82}) defines a signature $\sigma:E^{or}\to H$ to be balanced if the
signature of every cycle of $G$ is (the conjugacy class of the) identity
element $id\in H$.
In fact, the above two definitions of balancedness of a signature are equivalent, see \cite[Corollary 3.3 and Section 9]{Zaslavsky82}.
For more historical background about signatures of graphs, we refer the reader to~\cite[Section 3]{LPV14}.
\subsection{Connection Laplacian}\label{subsection:Connection Laplacian}
Let $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$.
Throughout this paper, we restrict the group $H$ to be the orthogonal group $O(d)$ or the unitary group $U(d)$, of dimension $d$, $d\in \mathbb{Z}_{>0}$, when $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$, respectively. For every edge $(x,y)\in E^{or}$, $\sigma_{xy}$ is a $(d\times d)$-orthogonal or unitary matrix and we have $\sigma_{yx}=\sigma_{xy}^{-1}=\overline{\sigma_{xy}^T}$.
For any vector-valued functions $f:V\rightarrow \mathbb{K}^d$ and any vertex $x\in V$,
the graph connection Laplacian $\Delta^\sigma$ is defined via
\begin{equation*}
\Delta^\sigma f(x):=\frac{1}{\mu(x)}\sum_{y,y\sim x}w_{xy}(\sigma_{xy}f(y)-f(x))\in \mathbb{K}^d.
\end{equation*}
Note that a function $f:V\rightarrow \mathbb{K}^d$ can also be considered as an $(Nd)$-dimensional column vector, which we denote by $\overrightarrow{f}\in \mathbb{K}^{Nd}$. This vector is well defined once we enumerate the vertices in $V$. The Laplacian can then be written as
\begin{equation}
\Delta^\sigma=(\mathrm{diag}_\mu)^{-1}(A^\sigma-\mathrm{diag}_D),
\end{equation}
where $\mathrm{diag}_\mu$ and $\mathrm{diag}_D$ are $(Nd)\times (Nd)$-diagonal matrices with the diagonal blocks $(\mathrm{diag}_\mu)_{xx}=\mu(x)I_d$ and $(\mathrm{diag}_D)_{xx}=d_x\mathrm{I}_d$ for $x\in V$, respectively. Here we use $\mathrm{I}_d$ for a $(d\times d)$-identity matrix. The matrix $A^\sigma$ is defined blockwise as follows. For $x,y\in V$, the $(d\times d)$-block of it is given by
\begin{equation}
(A^\sigma)_{xy}=\left\{
\begin{array}{ll}
0, & \hbox{if $\{x,y\}\not\in E$;} \\
w_{xy}\sigma_{xy}, & \hbox{if $\{x,y\}\in E$.}
\end{array}
\right.
\end{equation}
Then we have $\overrightarrow{\Delta^\sigma f}=(\mathrm{diag}_\mu)^{-1}(A^\sigma-\mathrm{diag}_D)\overrightarrow{f}$.
If every edge has the trivial signature $1\in O(1)$, $\Delta^\sigma$ reduces to the graph Laplacian $\Delta$.
When $H=U(1)$, $\Delta^\sigma$ coincides with the discrete magnetic Laplacian \cite{Sunada93,Shubin94,LLPP15}.
Given two functions $f,g: V\to \mathbb{K}^d$, locally at a vertex $x$ the Hermitian inner product of $f(x)$ and $g(x)$ is given by $f(x)^T\overline{g(x)}$. The corresponding norm of $f(x)$ is denoted by $|f(x)|:=\sqrt{f^T(x)\overline{f(x)}}$. Globally, we have the following inner product between $f$ and $g$:
\begin{equation}\label{eq:inner product}
\langle f, g\rangle_{\mu}:=\sum_{x\in V}\mu(x)f(x)^T\overline{g(x)}.
\end{equation}
We denote by $\ell^2(V, \mathbb{K}^d; \mu)$ the corresponding Hilbert space of functions. The $\ell^2$ norm corresponding to (\ref{eq:inner product}) is denoted by $\Vert\cdot\Vert_{2,\mu}$. Note that $\Delta^{\sigma}$ is a self-adjoint operator on
$\ell^2(V, \mathbb{K}^d; \mu)$, i.e.,
\begin{equation}\label{eq:self-adjoint}
\langle \Delta^{\sigma}f, g\rangle_{\mu}=\langle f, \Delta^{\sigma}g\rangle_{\mu}.
\end{equation}
We call $\lambda^{\sigma}\in \mathbb{R}$ an eigenvalue of $\Delta^{\sigma}$ if there exists a non-zero function $f: V\to \mathbb{K}^d$ such that $\Delta^{\sigma}f=-\lambda^{\sigma}f$. In fact, all
$Nd$ eigenvalues of $\Delta^\sigma$ lie in the interval $[0,2D_G^{non}]$.
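For illustration, the following small \texttt{Python} sketch (not part of the mathematical development; the graph, measure and signature values are concrete choices made only for this example) assembles $\Delta^\sigma$ blockwise for the signed triangle studied later in Section \ref{subsection:triangle}, with vertex measure $\mu\equiv 2$, unit edge weights and $U(1)$ signature $\sigma_{xy}=\sigma_{xz}=1$, $\sigma_{yz}=s$, and computes the spectrum of $-\Delta^\sigma$ numerically.
\begin{verbatim}
import numpy as np

def connection_laplacian(s, mu=2.0):
    # blockwise matrix A^sigma (here d = 1, so the blocks are scalars)
    A_sigma = np.array([[0, 1, 1],
                        [1, 0, s],
                        [1, np.conj(s), 0]], dtype=complex)
    D = np.diag([2.0, 2.0, 2.0])   # weighted vertex degrees d_x
    return (A_sigma - D) / mu      # Delta^sigma = diag_mu^{-1}(A^sigma - diag_D)

for s in [1.0, -1.0, np.exp(1j * np.pi / 3)]:
    L_sigma = connection_laplacian(s)
    # eigenvalues lambda^sigma, defined via Delta^sigma f = -lambda^sigma f
    print(s, np.round(np.linalg.eigvalsh(-L_sigma), 4))
\end{verbatim}
For $s=1$ the printed spectrum is approximately $\{0, 1.5, 1.5\}$, while for $s=-1$ it is approximately $\{0.5, 0.5, 2\}$; in both cases all eigenvalues lie in $[0,2D_G^{non}]=[0,2]$, and the smallest eigenvalue vanishes exactly in the balanced case $s=1$.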
Let $\Sigma$ be the group generated by the elements of $\{\sigma_{xy}\mid (x,y)\in E^{or}\}$. We call $\Sigma$ the \emph{signature group} of the graph $(G,\sigma)$. If the action of $\Sigma$ on $\mathbb{K}^d$ is reducible, we have an orthogonal decomposition of $\mathbb{K}^d$, i.e.,
$$\mathbb{K}^d=U_1\oplus U_2\oplus\cdots\oplus U_r,\,\,\text{for some}\,\, r,$$
where the $U_i$'s are pairwise orthogonal w.r.t. the Hermitian inner product of $\mathbb{K}^d$ and each $U_i$ is a $\Sigma$-invariant subspace of $\mathbb{K}^d$ of dimension $d_i$ such that $\sum_{i=1}^rd_i=d$. Then there exist signatures $\sigma_i: E^{or}\to O(d_i)$ or $U(d_i),\,\,i=1,2,\ldots,r$, such that we can write
\begin{equation*}
\Delta^{\sigma}=\Delta^{\sigma_1}\oplus\Delta^{\sigma_2}\oplus\cdots\oplus \Delta^{\sigma_r},
\end{equation*}
by identifying each $U_i$ with the vector space $\mathbb{K}^{d_i}$.
\section{Curvature dimension inequalities with signatures}\label{section:CurDimIneSig}
In this section, we introduce the $CD^\sigma(K,n)$ inequality for $K\in \mathbb{R}$ and $n\in \mathbb{R}_+$ and discuss its basic properties. We will characterize the $CD^\sigma$ inequality in terms of linear matrix inequalities, and also in terms of heat semigroups for functions and vector fields.
\subsection{Definitions}
We start by discussing several basic properties of the operators $\Gamma^\sigma$ and $\Gamma^\sigma_2$ defined in Definition \ref{def:introGamma} (of course, we are using
the Laplacians in our current general setting). First, observe that they have the following Hermitian properties:
\begin{equation}\label{eq:gamma symmetric}
\Gamma^{\sigma}(f,g)=\overline{\Gamma^{\sigma}(g,f)}, \,\,\, \Gamma_2^{\sigma}(f,g)=\overline{\Gamma_2^{\sigma}(g,f)}, \,\,\,\forall f, g: V\to \mathbb{K}^d.
\end{equation}
Since the graph Laplacian $\Delta$ satisfies
\begin{equation}\label{eq:laplacian-integration}
\sum_{x\in V}\mu(x)\Delta(f^T\overline{g})(x)=0,
\end{equation}
the definition (\ref{eq:introGamma}) of $\Gamma^{\sigma}$ and the self-adjointness (\ref{eq:self-adjoint}) of $\Delta^{\sigma}$ lead to the following summation by part formula,
\begin{equation}\label{eq:summaBypart}
\sum_{x\in V}\mu(x)\Gamma^\sigma(f,g)(x)=-\langle f, \Delta^{\sigma}g\rangle_{\mu}=-\langle \Delta^{\sigma}f,g\rangle_{\mu}.
\end{equation}
Moreover, we have the following properties.
\begin{pro}\label{pro:Gammasigma}
For any two functions $f,g: V\rightarrow \mathbb{K}^d$ and any $x\in V$, we have
\begin{enumerate}[(i)]
\item $$\Gamma^\sigma(f,g)(x)=\frac{1}{2\mu(x)}\sum_{y,y\sim x}w_{xy}(\sigma_{xy}f(y)-f(x))^T(\overline{\sigma_{xy}g(y)-g(x)});$$
\item $$\left|\Gamma^\sigma(f,g)(x)\right|\leq\sqrt{\Gamma^\sigma(f)(x)}\sqrt{\Gamma^\sigma(g)(x)}.$$
\end{enumerate}
\end{pro}
\begin{proof}
The formula (i) follows from a direct calculation. (ii) is a consequence of (i) by applying the Cauchy-Schwarz inequality.
\end{proof}
\begin{definition}[$CD^\sigma$ inequality]\label{defn:CDinequality}
Let $K \in \mathbb {R}$ and $n \in {\mathbb R}_+$. We say
that $(G,\mu, \sigma)$ satisfies the $CD^\sigma$ inequality $CD^\sigma(K,n)$ if we have for any
vector field $f: V\to \mathbb{K}^d$ and any vertex $x\in V$,
\begin{equation}\label{eq:CDineq}
\Gamma_2^\sigma(f)(x)\geq \frac{1}{n}\left| \Delta^\sigma f(x) \right|^2+K\Gamma^\sigma(f)(x).
\end{equation}
We call $K$ a lower curvature bound of $(G,\mu, \sigma)$ and $n$ a dimension parameter. We define the \emph{$n$-dimensional Ricci curvature} $K_n(G,\mu,\sigma;x)$ of $(G,\mu, \sigma)$ at the vertex $x\in V$ to be the largest $K$ that the inequality (\ref{eq:CDineq}) holds for a given dimension parameter $n$. We further define the \emph{precise $n$-dimensional Ricci curvature lower bound} $K_n(G,\mu,\sigma)$ of $(G,\mu, \sigma)$ as
\begin{equation}
K_n(G,\mu,\sigma):=\min_{x\in V} K_n(G,\mu,\sigma;x).
\end{equation}
We also simply write $K_n(\sigma;x)$ and $K_n(\sigma)$ when the setting $(G,\mu)$ is clear.
\end{definition}
Note that for given $K\in \mathbb{R}$, and $n_1, n_2\in{\mathbb R}_+$ with $n_1\leq n_2$, the inequality $CD^\sigma(K,n_1)$ implies $CD^\sigma(K,n_2)$. In other words, $CD^\sigma(K,n)$ provides a lower curvature bound $K$ and an upper dimension bound $n$ of the graph.
We also remark that rescaling the measure $\mu$ by a constant $c>0$ leads to
\begin{equation}
K_n(G,c\mu,\sigma;x)=\frac{1}{c}K_n(G,\mu,\sigma;x).
\end{equation}
We will be particularly interested in graphs satisfying $CD^\sigma(K,\infty)$ in this paper.
The classical curvature-dimension inequality $CD(K,n)$ \`{a} la Bakry and \'{E}mery \cite{BaEm} on graphs is defined as follows: For any real-valued function $f:V\to \mathbb{R}$ and any vertex $x$, we have
\begin{equation}\label{eq:CDineqOriginal}
\Gamma_2(f)(x)\geq \frac{1}{n}\left| \Delta f(x) \right|^2+K\Gamma(f)(x).
\end{equation}
Recall the definitions of $\Gamma$ and $\Gamma_2$ from (\ref{intr:Gamma}) and (\ref{intr:Gamma2}).
When $\sigma=\sigma_{\mathrm{triv}}: E^{or}\to id\in U(d)$ is the trivial signature, the graph $(G,\mu,\sigma)$ satisfies the inequality $CD^\sigma(K,n)$ if and only if $(G,\mu)$ satisfies the inequality $CD(K,n)$. In fact, this follows immediately from the following general result.
\begin{pro}\label{pro:CDdecomposition}
Assume that the action of the signature group $\Sigma$ of the graph $(G,\mu,\sigma)$ is decomposable, i.e., we have
\begin{equation*}
\Delta^{\sigma}=\Delta^{\sigma_1}\oplus\Delta^{\sigma_2}\oplus\cdots\oplus \Delta^{\sigma_r},
\end{equation*}
where $\sigma_i:E^{or}\to U(d_i)\,\,\text{or}\,\,O(d_i), i=1,2,\ldots, r$.
Then the graph $(G, \mu, \sigma)$ satisfies the inequality $CD^{\sigma}(K,n)$ if and only if $(G, \mu, \sigma_i)$ satisfies $CD^{\sigma_i}(K,n)$ for each $i=1,2,\ldots,r$.
\end{pro}
\begin{proof}
By assumption, for any function $f: V\to \mathbb{K}^d$, there exist functions $f_i: V\to U_i\cong\mathbb{K}^{d_i}$, $i=1,2,\ldots,r$ such that $$f^T\overline{f}=f_1^T\overline{f_1}+f_2^T\overline{f_2}+\cdots+f_r^T\overline{f_r},$$ and,
$$ \Delta^{\sigma}f=\Delta^{\sigma_1}f_1\oplus\Delta^{\sigma_2}f_2\oplus\cdots\oplus \Delta^{\sigma_r}f_r.$$
Hence, for any $x\in V$, we obtain by Definition \ref{def:introGamma},
\begin{align*}
\Gamma^{\sigma}(f)(x)=\sum_{i=1}^r\Gamma^{\sigma_i}(f_i)(x),\,\,\text{ and }\,\,\Gamma_2^{\sigma}(f)(x)=\sum_{i=1}^r\Gamma_2^{\sigma_i}(f_i)(x).
\end{align*}
We also have $$\left| \Delta^{\sigma} f(x) \right|^2=\sum_{i=1}^r\left| \Delta^{\sigma_i} f_i(x) \right|^2.$$
Therefore, the inequality
$$\Gamma_2^\sigma(f)(x)\geq \frac{1}{n}\left| \Delta^\sigma f(x) \right|^2+K\Gamma^\sigma(f)(x),\,\,\forall x\in V,$$
is equivalent to the following inequality,
\begin{equation*}
\sum_{i=1}^r\Gamma_2^{\sigma_i}(f_i)(x)\geq \sum_{i=1}^r\left(\frac{1}{n}\left| \Delta^{\sigma_i} f_i(x) \right|^2+K\Gamma^{\sigma_i}(f_i)(x)\right),
\end{equation*}
and the proposition follows immediately: if each $(G,\mu,\sigma_i)$ satisfies $CD^{\sigma_i}(K,n)$, the latter inequality holds term by term; conversely, choosing $f$ with $f_j=0$ for all $j\neq i$ yields $CD^{\sigma_i}(K,n)$ for each $i$.
\end{proof}
Given a graph $(G,\mu,\sigma)$, where $\sigma:E^{or}\to O(d_1)$ or $U(d_1)$, we have a natural new signature
\begin{align*}
\sigma\otimes \mathrm{I}_{d_2}: E^{or}&\to O(d_1d_2) \,\,\text{ or }\,\,U(d_1d_2),\\
(x,y)&\mapsto \sigma_{xy}\otimes \mathrm{I}_{d_2},
\end{align*}
where $\mathrm{I}_{d_2}$ stands for the identity matrix of size $d_2\times d_2$. The following observation will be useful in our later discussion about the $CD^\sigma$ inequalities on Cartesian products of graphs in Appendix \ref{section:appendixCurCheeCar}.
\begin{coro}\label{cor:tensor}
A graph $(G, \mu, \sigma)$ satisfies $CD^{\sigma}(K, n)$ if and only if $(G, \mu, \sigma\otimes \mathrm{I}_{d_2})$ satisfies $CD^{\sigma\otimes \mathrm{I}_{d_2}}(K,n)$.
\end{coro}
\begin{proof}
We observe that the action of the signature group of $(G,\sigma\otimes \mathrm{I}_{d_2})$ on $\mathbb{K}^{d_1d_2}$ admits an orthogonal decomposition and, therefore, we have
$$\Delta^{\sigma\otimes \mathrm{I}_{d_2}}=\underbrace{\Delta^{\sigma}\oplus\cdots\oplus\Delta^{\sigma}}_{d_2\,\, \text{times}}.$$
Corollary \ref{cor:tensor} is then a direct consequence of Proposition \ref{pro:CDdecomposition}.
\end{proof}
\subsection{Switching invariance}\label{section:switching inv}
The $CD^\sigma$ inequality is switching invariant.
\begin{pro}\label{pro:curvature-switching}
If $(G, \mu, \sigma)$ satisfies $CD^{\sigma}(K,n)$, then $(G, \mu, \sigma^{\tau})$ satisfies $CD^{\sigma^\tau}(K,n)$ for any switching function $\tau: V\to H$.
\end{pro}
\begin{proof}
Recalling (\ref{intro:unitary}), we check that we have for any $\tau: V\to H$ and $f, g: V\to \mathbb{K}^d$,
\begin{equation}\label{eq:gamma-switching}
\Gamma^{\sigma^{\tau}}(f,g)=\Gamma^{\sigma}(\tau^{-1}f, \tau^{-1}g) \,\,\text{ and }\,\, \Gamma_2^{\sigma^{\tau}}(f,g)=\Gamma_2^{\sigma}(\tau^{-1}f, \tau^{-1}g),
\end{equation}
using $\overline{\tau(x)^T}=\tau^{-1}(x)$.
The proposition then follows immediately from (\ref{intro:unitary}) and (\ref{eq:gamma-switching}).
\end{proof}
The arguments in the above proof show also that $K_n(G,\mu,\sigma;x)$, introduced in Definition \ref{defn:CDinequality}, is switching invariant for any given $n$.
We denote by $\mathrm{dist}$ the canonical graph distance and define the ball of radius $r$ centered at $x\in V$ by
\begin{equation*}
B_r(x):=\{y\in V\mid \mathrm{dist}(x,y)\leq r\}.
\end{equation*}
\begin{pro}\label{pro:shortcycles}
Let $(G,\mu,\sigma)$ be given. If the signature of every cycle of length $3$ or $4$ is equal to (the conjugacy class of) $id\in H$, then $(G,\mu,\sigma)$ satisfies $CD^{\sigma}(K,n)$ if and only if $(G,\mu)$ satisfies $CD(K,n)$.
\end{pro}
\begin{proof} Let $x\in V$ be a vertex.
Since all cycles of length $3$ or $4$ have trivial signature, we can switch all the signatures of edges in the subgraph induced by the ball $B_2(x)$ to be trivial. Note that the inequality (\ref{eq:CDineq}) only involves the vertices in the ball $B_2(x)$. Then the proposition follows from Propositions \ref{pro:curvature-switching} and \ref{pro:CDdecomposition}.
\end{proof}
\subsection{Coverings and a general lower curvature bound}\label{section:covering}
Let $(\tilde{G}, \tilde{\mu}, \tilde{\sigma})$ and $(G, \mu, \sigma)$ be two graphs. Let $\pi: (\tilde{G}, \tilde{\mu}, \tilde{\sigma})\to (G, \mu, \sigma)$ be a graph homomorphism, namely, $\pi: \tilde{V}\to V$ is surjective, and if $\{\tilde{x}, \tilde{y}\}\in \tilde{E}$, then $\{\pi(\tilde{x}), \pi(\tilde{y})\}\in E$. Moreover, we require
\begin{equation}
\tilde{\sigma}_{\tilde{x}\tilde{y}}=\sigma_{\pi(\tilde{x})\pi(\tilde{y})}, \,\,\, \tilde{w}_{\tilde{x}\tilde{y}}=w_{\pi(\tilde{x}) \pi(\tilde{y})}\,\,\,\text{ and }\,\,\,\tilde{\mu}(\tilde{x})=\mu(\pi(\tilde{x})).
\end{equation}
Such a map $\pi$ is called a \emph{covering map} if, furthermore, the subgraph of $\tilde{G}$ induced by the ball $B_1(\tilde{x})$ centered at each vertex $\tilde{x}\in \tilde{V}$ is mapped bijectively to the subgraph of $G$ induced by the ball $B_1(x)$. If a covering map $\pi: (\tilde{G}, \tilde{\mu}, \tilde{\sigma})\to (G, \mu, \sigma)$ exists, we call the graph $(\tilde{G}, \tilde{\mu}, \tilde{\sigma})$ a \emph{covering graph} of $(G, \mu, \sigma)$.
\begin{thm}\label{thm:covering}
Let $(\tilde{G}, \tilde{\mu}, \tilde{\sigma})$ be a covering graph of $(G, \mu, \sigma)$. If $(\tilde{G}, \tilde{\mu}, \tilde{\sigma})$ satisfies $CD^{\tilde{\sigma}}(K, n)$, then $(G, \mu, \sigma)$ satisfies $CD^{\sigma}(K,n)$.
\end{thm}
\begin{proof}
For any function $f: V\to \mathbb{K}^d$, we can find a corresponding function $\tilde{f}: \tilde{V}\to \mathbb{K}^d$ such that
\begin{equation}
\tilde{f}(\tilde{x}):=f(\pi(\tilde{x}))\,\,\,\forall\,\, \tilde{x}\in \tilde{V},
\end{equation}
where $\pi$ is the covering map from $(\tilde{G}, \tilde{\mu}, \tilde{\sigma})$ to $(G, \mu, \sigma)$.
For any $x\in V$, and any $\tilde{x}\in \pi^{-1}(x)$, we can check by definition of a covering map that
\begin{equation}\label{eq:covering}
|\Delta^{\tilde{\sigma}}\tilde{f}(\tilde{x})|^2=\left|\Delta^{\sigma}f(x)\right|^2,\,\, \Gamma^{\tilde{\sigma}}(\tilde{f})(\tilde{x})=\Gamma^\sigma(f)(x),\,\, \Gamma_2^{\tilde{\sigma}}(\tilde{f})(\tilde{x})=\Gamma_2^\sigma(f)(x).
\end{equation}
Since $(\tilde{G}, \tilde{\mu}, \tilde{\sigma})$ satisfies $CD^{\tilde{\sigma}}(K,n)$, we obtain that for any $f: V\to \mathbb{K}^d$, and any vertex $x\in V$,
\begin{equation}
\Gamma_2^{\tilde{\sigma}}(\tilde{f})(\tilde{x})\geq \frac{1}{n}|\Delta^{\tilde{\sigma}}\tilde{f}(\tilde{x})|^2+K\Gamma^{\tilde{\sigma}}(\tilde{f})(\tilde{x}).
\end{equation}
Combining this with (\ref{eq:covering}) completes the proof.
\end{proof}
\begin{coro}\label{cor:lower curvature bound}
Any graph $(G, \mu, \sigma)$ satisfies the inequality $$CD^{\sigma}\left(\frac{2}{D^{nor}_G}-D_G^{non},2\right).$$ In particular, any unweighted cycle graph with constant vertex measure $\mu\equiv\nu_0\cdot \mathbf{1}_V$ and any signature $\sigma: E^{or}\to H$ satisfies $$CD^{\sigma}(0,2).$$
\end{coro}
\begin{proof}
Let $(T_G, \tilde{\mu}, \tilde{\sigma})$ be the universal covering of $(G, \mu, \sigma)$, i.e., $T_G$ is a tree. It is shown in \cite[Theorem 1.2]{LinYau} (see also \cite[Theorem 8]{JostLiu14}) that $(T_G, \tilde{\mu})$ satisfies the $CD$ inequality
$$CD\left(\frac{2}{D^{nor}_{T_G}}-D^{non}_{T_G},2\right).$$
Due to Proposition \ref{pro:shortcycles}, we know that $(T_G, \tilde{\mu}, \tilde{\sigma})$ satisfies
$$CD^{\tilde{\sigma}}\left(\frac{2}{D^{nor}_{T_G}}-D^{non}_{T_G},2\right),$$ since a tree has no cycles.
By the definition of a covering graph, we have $D_G^{nor}=D_{T_G}^{nor}$ and $D_G^{non}=D_{T_G}^{non}$. Then the corollary follows directly from Theorem \ref{thm:covering}.
\end{proof}
\subsection{$CD^\sigma$ inequality as linear matrix inequalities}\label{section:LMI}
In this subsection, we present an equivalent formulation of the $CD^\sigma$ inequality via linear matrix inequalities. As a consequence, the problem of calculating the Ricci curvature of a graph is reduced to solving semidefinite programming problems. In this process, we explore the geometrical information captured by the $CD^\sigma$ inequality of a graph.
By Definition \ref{def:introGamma}, the operators $\Gamma^\sigma$ and $\Gamma^\sigma_2$ can be considered as two symmetric sesquilinear forms. Hence they can be represented by Hermitian matrices. For our purpose, we are interested in considering the two symmetric sesquilinear forms locally at every vertex $x\in V$. There exist two $(Nd)\times (Nd)$-Hermitian matrices $\Gamma^\sigma(x)$ and $\Gamma^\sigma_2(x)$ such that for any two functions $f,g:V\to \mathbb{K}^d$,
\begin{equation}
\Gamma^\sigma(f,g)(x)=\overrightarrow{f}^T\Gamma^\sigma(x)\overline{\overrightarrow{g}}, \,\,\text{ and }\,\, \Gamma_2^\sigma(f,g)(x)=\overrightarrow{f}^T\Gamma_2^\sigma(x)\overline{\overrightarrow{g}}.
\end{equation}
Denote by $|B_r(x)|$ the cardinality of the set $B_r(x)$. Observe that, counted in $(d\times d)$-blocks, the matrix $\Gamma^\sigma(x)$ only has a nontrivial block of size $|B_1(x)| \times |B_1(x)|$, while the matrix $\Gamma_2^\sigma(x)$ only has a nontrivial block of size $|B_2(x)| \times |B_2(x)|$.
We denote by $\Delta^\sigma(x)$ the $(d\times Nd)$-matrix such that $\Delta^\sigma f(x)=\Delta^\sigma(x)\overrightarrow{f}$ for all functions $f: V\to \mathbb{K}^d$.
Given two Hermitian matrices $M_1$ and $M_2$, the inequality $M_1\geq M_2$ means that the matrix $M_1-M_2$ is positive semidefinite. Then we have the following equivalent definition of $CD^\sigma$ inequality.
\begin{definition}[$CD^\sigma$ inequality as linear matrix inequalities]
Let $K\in \mathbb{R}$ and $n\in \mathbb{R}_+$. A graph $(G,\mu,\sigma)$ satisfies the $CD^\sigma$ inequality $CD^\sigma(K,n)$ if and only if, for any vertex $x\in V$, the following linear matrix inequality holds,
\begin{equation}
\Gamma_2^\sigma(x)\geq \frac{1}{n}\Delta^\sigma(x)^T\overline{\Delta^\sigma(x)}+K\Gamma^\sigma(x).
\end{equation}
\end{definition}
A direct consequence is the following proposition.
\begin{pro}[Semidefinite Programming]\label{pro:sdp}
The $n$-dimensional Ricci curvature $K_n(G,\mu,\sigma;x)$ of the graph $(G,\mu,\sigma)$ at the vertex $x\in V$ is the solution of the following semidefinite program,
\begin{align*}
&\text{maximize}\,\,\, K\\
&\text{subject to}\,\,\,\Gamma_2^\sigma(x)-\frac{1}{n}\Delta^\sigma(x)^T\overline{\Delta^\sigma(x)}\geq K\Gamma^\sigma(x).
\end{align*}
\end{pro}
Semidefinite programs can be solved efficiently, and several software packages are available for this purpose.
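To illustrate Proposition \ref{pro:sdp}, the following minimal \texttt{Python} sketch computes the $\infty$-dimensional curvature $K_\infty(\sigma;x)$ at a vertex $x$ by maximizing $K$ subject to the linear matrix inequality $\Gamma_2^\sigma(x)-K\Gamma^\sigma(x)\geq 0$. It is not part of the development: the modelling package \texttt{cvxpy} is merely one possible choice, and the matrices are taken from the signed triangle example of Section \ref{subsection:triangle} with the real signature value $s=-1$.
\begin{verbatim}
import numpy as np
import cvxpy as cp

s = -1.0   # signature of the triangle; real, so all matrices are real symmetric
Gamma = 0.25 * np.array([[ 2., -1., -1.],
                         [-1.,  1.,  0.],
                         [-1.,  0.,  1.]])
Gamma2 = (1. / 16.) * np.array([[10.,      -6. + s,     -6. + s],
                                [-6. + s,   7.,          2. - 4. * s],
                                [-6. + s,   2. - 4. * s, 7.]])

K = cp.Variable()
prob = cp.Problem(cp.Maximize(K), [Gamma2 - K * Gamma >> 0])  # LMI, linear in K
prob.solve()
print(K.value)   # approx 0.25 = (5 - sqrt(17 + 8 Re(s))) / 8 for s = -1
\end{verbatim}
The printed value agrees with formula (\ref{eq:triangle Curvature}) derived in Section \ref{subsection:triangle}.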
In the following, we describe the explicit structure of the matrices $\Delta^\sigma(x), \Gamma^\sigma(x)$ and $\Gamma_2^\sigma(x)$. For simplicity, we restrict to the setting
\begin{equation}\label{setting:1}
\mu=\mathbf{1}_V,\,\,\text{i.e.},\,\,\mu(x)=1 \,\,\forall\,\,x\in V,
\end{equation}
and
\begin{equation}\label{setting:2}
w_{xy}=1\,\,\forall\,\,\{x,y\}\in E.
\end{equation}
Given a vertex $x\in V$, let us denote its neighbors by $y_1,y_2,\ldots,y_{d_x}$. By abuse of notation, we still write $\Delta^\sigma(x)$ and $\Gamma^\sigma(x)$ for their nontrivial blocks corresponding to the vertices $x,y_1,\ldots,y_{d_x}$. Then it is easy to see that
\begin{equation}
\Delta^\sigma(x)=\left(
\begin{array}{cccc}
-d_xI_d & \sigma_{xy_1} & \cdots & \sigma_{xy_{d_x}} \\
\end{array}
\right),
\end{equation}
and
\begin{equation}\label{eq:GammaMatrix}
2\Gamma^\sigma(x)=\left(
\begin{array}{cccc}
d_xI_d & -\overline{\sigma_{xy_1}} & \cdots & -\overline{\sigma_{xy_{d_x}}} \\
-\sigma_{xy_1}^T & I_d & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
-\sigma_{xy_{d_x}}^T & 0 & \cdots & I_d \\
\end{array}
\right).
\end{equation}
For the matrix $\Gamma_2^\sigma(x)$, the structure of the subgraph induced by $B_2(x)$ is of relevance. We denote the sphere of radius $r$ centered at a vertex $x\in V$ by
$$\mathrm{S}_r(x):=\{y\in V\mid \mathrm{dist}(x,y)=r\}.$$
Then, the ball $B_2(x)$ has the decomposition $B_2(x)=\{x\}\sqcup \mathrm{S}_1(x)\sqcup\mathrm{S}_2(x)$.
We first introduce some natural geometric quantities before we present the entries of the matrix $\Gamma_2^\sigma(x)$. For any vertex $y\in \mathrm{S}_1(x)$, we have
\begin{equation}\label{eq:farneighbors}
|\mathrm{S}_1(y)\cap\mathrm{S}_2(x)|:=\sum_{z,z\sim y,z\not\sim x,z\neq x}1,
\end{equation}
and
\begin{equation}\label{eq:triangles}
\sharp_{\bigtriangleup}(x,y):=|\mathrm{S}_1(y)\cap\mathrm{S}_1(x)|:=\sum_{z,z\sim y,z\sim x}1.
\end{equation}
Note that (\ref{eq:triangles}) is the number of triangles (i.e., $3$-cycles) which contain the two neighbors $x$ and $y$. This justifies the notation $\sharp_{\bigtriangleup}(x,y)$.
For any vertex $z\in \mathrm{S}_2(x)$, we have
\begin{equation}\label{eq:nearneighbors}
|\mathrm{S}_1(z)\cap\mathrm{S}_1(x)|:=\sum_{y,y\sim x,y\sim z}1.
\end{equation}
Note that (\ref{eq:nearneighbors}) is related to the number of $4$-cycles which contain the two vertices $x$ and $z$.
\begin{remark}
The above three geometric quantities are all closely related to the growth rate of the cardinality of $B_r(x)$ (in other words, the volume of $B_r(x)$ w.r.t. the measure $\mu=\mathbf{1}_V$) when the radius $r$ increases.
\end{remark}
The quantity $\sharp_{\bigtriangleup}(x,y)$ counts the number of $3$-cycles regardless of their signatures. A signed version of this quantity is also important, and we define the following quantity describing the unbalancedness of the triangles containing the two neighbors $x$ and $y$:
\begin{equation}\label{eq:signedtriangles}
\sharp_{\bigtriangleup}^\sigma(x,y):=\sum_{z,z\sim y,z\sim x}\left(\mathrm{I}_d-\overline{\sigma_{xz}\sigma_{zy}\sigma_{yx}}\right).
\end{equation}
Note that the balanced triangles containing $x$ and $y$ do not contribute to the expression in (\ref{eq:signedtriangles}).
\begin{pro}\label{pro:gamma2Matrix}
Under the setting of (\ref{setting:1}) and (\ref{setting:2}), the nontrivial block of $\Gamma_2^\sigma(x)$, which is Hermitian and of size $|B_2(x)|\times |B_2(x)|$, is given by the following blocks:
\begin{align}
&(4\Gamma_2^\sigma(x))_{xx}=(3d_x+d_x^2)\mathrm{I}_d;\\
&(4\Gamma_2^\sigma(x))_{xy}=-\left(3+d_x+|\mathrm{S}_1(y)\cap\mathrm{S}_2(x)|+\sharp_{\bigtriangleup}^\sigma(x,y)\right)\overline{\sigma_{xy}},\\
&\hspace{7cm}\text{ for any $y\in \mathrm{S}_1(x)$;}\notag\\
&(4\Gamma_2^\sigma(x))_{xz}=\sum_{y,y\sim x,y\sim z}\overline{\sigma_{xy}\sigma_{yz}}, \text{ for any $z\in \mathrm{S}_2(x)$;}\\
&(4\Gamma_2^\sigma(x))_{yy}=\left(5-d_x+3|\mathrm{S}_1(y)\cap\mathrm{S}_2(x)|+4\sharp_{\bigtriangleup}(x,y)\right)\mathrm{I}_d,\\
&\hspace{7cm}\text{ for any $y\in \mathrm{S}_1(x)$;}\notag\\
&(4\Gamma_2^\sigma(x))_{y_1y_2}=2\overline{\sigma_{y_1x}\sigma_{xy_2}}-4\overline{\sigma_{y_1y_2}},\\
&\hspace{1cm}\text{ for any $y_1,y_2\in \mathrm{S}_1(x)$, $y_1\neq y_2$, where we use $\sigma_{y_1y_2}=0$ if $\{y_1,y_2\}\not\in E$;}\notag\\
&(4\Gamma_2^\sigma(x))_{yz}=-2\overline{\sigma_{yz}},\\
&\hspace{1cm}\text{ for any $y\in \mathrm{S}_1(x)$ and $z\in\mathrm{S}_2(x)$, where we use $\sigma_{yz}=0$ if $\{y,z\}\not\in E$;}\notag\\
&(4\Gamma_2^\sigma(x))_{zz}=|\mathrm{S}_1(z)\cap\mathrm{S}_1(x)|\mathrm{I}_d, \text{ for any $z\in \mathrm{S_2}(x)$};\\
&(4\Gamma_2^\sigma(x))_{z_1z_2}=0, \text{ for any $z_1,z_2\in \mathrm{S}_2(x)$, $z_1\neq z_2$.}
\end{align}
\end{pro}
\begin{proof}
This follows from a direct expansion of the identity
$$\Gamma_2^\sigma(f,g)(x)=\overrightarrow{f}^T\Gamma_2^\sigma(x)\overline{\overrightarrow{g}},\,\,\text{ for any $f,g:V\to \mathbb{K}^d$}.$$
We omit the details here.
\end{proof}
\begin{remark}\begin{enumerate}[(i)]
\item The block $(4\Gamma_2^\sigma(x))_{xz}$ above is a signed version of the quantity $|\mathrm{S}_1(z)\cap\mathrm{S}_1(x)|$ in (\ref{eq:nearneighbors}).
\item When $y_1,y_2\in \mathrm{S}_1(x)$ are neighbors, i.e. $\{y_1,y_2\}\in E$, we have a triangle containing $x,y_1$ and $y_2$. Then the block $(4\Gamma_2^\sigma(x))_{y_1y_2}$ can be rewritten as
$$-2\left(\mathrm{I}_d+\left(\mathrm{I}_d-\overline{\sigma_{y_1x}\sigma_{xy_2}\sigma_{y_2y_1}}\right)\right)\overline{\sigma_{y_1y_2}},$$
which describes the unbalancedness of this triangle.
\end{enumerate}
\end{remark}
\subsection{Example of a signed triangle}\label{subsection:triangle}
We consider a particular example of a triangle graph $\mathcal{C}_{3}$, which consists of three vertices $x,y,$ and $z$, as shown in Figure \ref{F1}. We set
\begin{equation}\label{setting:3}
\mu(x)=\mu(y)=\mu(z)=2,\,\,\,\text{ and }\,\,\, w_{xy}=w_{xz}=w_{yz}=1.
\end{equation}
Let $\sigma: E^{or}\to U(1):=\{z\in \mathbb{C}, |z|=1\}$ be a signature on $\mathcal{C}_{3}$. Assume that the signature of the cycle $\mathcal{C}_3$ is equal to (the conjugacy class of) $s\in U(1)$. Then $\sigma$ is switching equivalent to the signature given in Figure \ref{F1}, i.e.,
$$\sigma_{xy}=\sigma_{xz}=1 \,\,\,\text{ and }\,\,\, \sigma_{yz}=s.$$
\begin{figure}[h]
\begin{minipage}[t]{0.45\linewidth}
\centering
\includegraphics[width=0.6\textwidth]{SignedTriangle.pdf}
\caption{A signed triangle\label{F1}}
\end{minipage}
\hfill
\end{figure}
\begin{pro}\label{pro:curvature triangle}
Let $(\mathcal{C}_3, \mu, \sigma)$ be as above and $s=Sgn(\mathcal{C}_3)$. Then it has constant $\infty$-dimensional Ricci curvature at every vertex. As a function of $s$, $K_\infty(s):=K_\infty(\mathcal{C}_3, \mu, \sigma)$ is given by
\begin{equation}\label{eq:triangle Curvature}
K_\infty(s)=\left\{
\begin{aligned}
&\frac{5}{4}, &&\hbox{if $s=1$;} \\
&\frac{5-\sqrt{17+8\mathrm{Re}(s)}}{8}, &&\hbox{otherwise.}
\end{aligned}
\right.
\end{equation}
\end{pro}
\begin{remark}\label{remark:jumptriangle}
The curvature (\ref{eq:triangle Curvature}) is illustrated in Figure \ref{F2} as a function of the variable $\mathrm{Re}(s)$. The function $K_{\infty}(s)$ ``jumps'' at $s=1$. That is,
\begin{equation}
\lim_{s\to 1}K_\infty(s)=0, \,\,\,\text{ but }\,\,\, K_\infty(1)=\frac{5}{4}>0.
\end{equation}
We will show that such a ``jump'' appears in a more general setting in Section \ref{section:Lichnerowicz}.
\end{remark}
\begin{figure}[h]
\centering
\includegraphics[width=.5\textwidth]{ExampleTriangle.pdf}
\caption{$\infty$-dimensional Ricci curvature of a signed triangle}\label{F2}
\end{figure}
\begin{proof}[Proof of Proposition \ref{pro:curvature triangle}]
Since the curvature is switching invariant, we can switch the signature $\sigma$ to be as shown in Figure \ref{F1} before calculating the curvature $K_\infty(\sigma;x)$ at $x$. In fact, one can perform similar operations for calculating $K_\infty(\sigma;y)$ and $K_\infty(\sigma;z)$. So $(\mathcal{C}_3, \mu, \sigma)$ has constant curvature and we only need to calculate the curvature at $x$.
By (\ref{eq:GammaMatrix}) and Proposition \ref{pro:gamma2Matrix}, we obtain the corresponding matrices $\Gamma^\sigma(x)$ and $\Gamma_2^\sigma(x)$. Note that in this example we choose a measure (\ref{setting:3}) different from that in (\ref{setting:1}). Hence these matrices are rescaled by factors $1/2$ and $1/4$, respectively. Therefore, under the current setting (\ref{setting:3}), we have
\begin{equation*}
2\Gamma^\sigma(x)=\frac{1}{2}\left(
\begin{array}{ccc}
2 & -1 & -1 \\
-1 & 1 & 0 \\
-1 & 0 & 1 \\
\end{array}
\right)
\,\text{ and }\,
4\Gamma_2^\sigma(x)=\frac{1}{4}\left(
\begin{array}{ccc}
10 & -6+s & -6+\overline{s} \\
-6+\overline{s} & 7 & 2-4\overline{s} \\
-6+s & 2-4s & 7 \\
\end{array}
\right).
\end{equation*}
By Proposition \ref{pro:sdp}, we need to solve the following semidefinite program:
\begin{align}
&\text{maximize}\,\,K\notag\\
&\text{subject to}\,\,\Gamma_2^\sigma(x)\geq K\Gamma^\sigma(x).\label{eq:TriLMI}
\end{align}
Inequality (\ref{eq:TriLMI}) is equivalent to positive semidefiniteness of the following matrix:
\begin{equation}
16\Gamma_2^\sigma(x)-16K\Gamma^\sigma(x)=\left(
\begin{array}{ccc}
10-8K & -6+s+4K & -6+\overline{s}+4K \\
-6+\overline{s}+4K & 7-4K & 2-4\overline{s} \\
-6+s+4K & 2-4s & 7-4K \\
\end{array}
\right).
\end{equation}
By Sylvester's criterion, this is equivalent to nonnegativity of all principal minors of the above matrix. Calculating these principal minors, we translate the semidefinite program into the following problem,
\begin{align*}
\text{maximize} \,\,&K\notag\\
\text{subject to}\,\,&10-8K\geq 0,\,\,7-4K\geq 0\\
&16K^2-8(6+\mathrm{Re}(s))K+(33+12\mathrm{Re}(s))\geq 0,\\
&16K^2-56K+(16\mathrm{Re}(s)+29)\geq 0,\\
&8(1-\mathrm{Re}(s))K^2-10(1-\mathrm{Re}(s))K+(1-\mathrm{Re}(s))^2\geq 0.
\end{align*}
Rewriting the above inequalities, we obtain
\begin{align*}
\text{maximize} \,\,&K\notag\\
\text{subject to}\,\,&K\leq 5/4,\,\,K\leq 7/4,\\
&K\geq (5+\sqrt{17+8\mathrm{Re}(s)})/8\,\,\text{or}\,\,K\leq (5-\sqrt{17+8\mathrm{Re}(s)})/8,\\
&K\geq (7+2\sqrt{5-4\mathrm{Re}(s)})/4\,\,\text{or}\,\,K\leq (7-2\sqrt{5-4\mathrm{Re}(s)})/4,\\
&K\geq (6+\mathrm{Re}(s)+\sqrt{\mathrm{Re}(s)^2+3})/4\,\,\text{or}\,\,K\leq (6+\mathrm{Re}(s)-\sqrt{\mathrm{Re}(s)^2+3})/4.
\end{align*}
One can check directly that (\ref{eq:triangle Curvature}) is the solution of this optimization problem.
\end{proof}
Similarly, one can calculate the $\infty$-dimensional Ricci curvature of longer cycles $\mathcal{C}_N$ for $N\geq 4$.
\begin{pro}\label{pro:curLongCycles}
Let $(\mathcal{C}_N, \mu, \sigma)$ be a cycle of length $N$ with the edge weights and measure $\mu$ given in (\ref{setting:3}) and $s=Sgn(\mathcal{C}_N)$. Then $(\mathcal{C}_N, \mu, \sigma)$ has constant $\infty$-dimensional Ricci curvature at every vertex.
Moreover, we have
\begin{equation}\label{eq:4cycle Curvature}
K_\infty(\mathcal{C}_4, \mu, \sigma)=\left\{
\begin{array}{ll}
1, & \hbox{if $s=1$;} \\
0, & \hbox{otherwise,}
\end{array}
\right.
\end{equation}
and, for $N\geq 5$,
\begin{equation}\label{eq:5cycle Curvature}
K_\infty(\mathcal{C}_N, \mu, \sigma)=0.
\end{equation}
\end{pro}
We remark that new examples of graphs $(G,\sigma)$ satisfying the $CD^\sigma(0,\infty)$ inequality can be constructed by taking Cartesian products of known examples for various choices of the signature, edge weights, and vertex measure on the product graph. We refer to Appendix \ref{section:appendixCurCheeCar} for full details about the behavior of $CD^\sigma$ inequalities on Cartesian product graphs.
\subsection{Heat semigroup characterizations of $CD^{\sigma}$ inequalities}\label{section:HeatChar}
In this subsection, we derive characterizations of the $CD^{\sigma}$ inequality via the solution of the following associated continuous time heat equation,
\begin{equation}
\left\{\begin{aligned}
&\frac{\partial u(x,t)}{\partial t}=\Delta^{\sigma}u(x,t),\\
&u(x,0)=f(x),
\end{aligned}
\right.
\end{equation}
where $f: V\to \mathbb{K}^d$ is an initial function. The solution $u: V\times [0, \infty)\to \mathbb{K}^d$ is given by $P_t^{\sigma}f:=e^{t\Delta^{\sigma}}f$, where $P_t^\sigma$ is a linear operator on the space $\ell^2(V, \mathbb{K}^d; \mu)$. Clearly, we have $P^{\sigma}_0f=f$. It is straightforward to check the following properties of $P_t^\sigma$.
\begin{pro}\label{pro:heatsemigroup}
The operator $P^\sigma_t, t\geq 0$ satisfies the following properties:
\begin{enumerate}[(i)]
\item $P^\sigma_t$ is a self-adjoint operator on the space $\ell^2(V, \mathbb{K}^d; \mu)$;\label{pro:Ptsa}
\item $P^\sigma_t$ commutes with $\Delta^\sigma$, i.e. $P^\sigma_t\Delta^\sigma=\Delta^\sigma P_t^\sigma$;\label{pro:Ptcomm}
\item $P^\sigma_t P^\sigma_s=P^\sigma_{t+s}$ for any $t,s\geq 0$.\label{pro:Ptadd}
\end{enumerate}
\end{pro}
The solution of the heat equation corresponding to the graph Laplacian $\Delta$ is simply denoted by $P_t:=e^{t\Delta}$. The matrix $P_t$ has the following additional properties besides the ones listed in Proposition \ref{pro:heatsemigroup}.
\begin{pro}\label{pro:heatsemigroupunsigned}
\begin{enumerate}[(i)]
\item All matrix entries of $P_t$ are real and nonnegative;\label{pro:Ptnonnegative}
\item For any constant function $c$ on $V$, we have $P_tc=c$.\label{pro:Ptprobability}
\end{enumerate}
\end{pro}
In particular, the above properties imply that for a function $f:V\to \mathbb{R}$ with $0\leq f(x)\leq c,\,\,\forall\,\,x\in V$, we have $0\leq P_tf(x)\leq c,\,\,\forall\,\,x\in V$.
\begin{proof}
Recall that $\Delta$ can be written as the matrix $(\mathrm{diag}_\mu)^{-1}(A-\mathrm{diag}_D)$, where $\mathrm{diag}_D$ and $\mathrm{diag}_\mu$ are the diagonal matrices with $(\mathrm{diag}_D)_{xx}=d_x$ and $(\mathrm{diag}_\mu)_{xx}=\mu(x)$ for all $x\in V$, and $A$ is the weighted adjacency matrix.
Now we exploit the fact that
\begin{equation}\label{eq:Offdiag}
\text{all off-diagonal entries of }\,\,\Delta\,\,\text{ are nonnegative,}
\end{equation}
and, therefore, we can choose $C>0$ such that
$\Delta+C\cdot \mathrm{I}_N$ is entry-wise nonnegative.
Then $e^{t(\Delta+C\cdot \mathrm{I}_N)}$ is also entry-wise nonnegative for every $t\geq 0$, which implies that the same holds for $P_t=e^{t(\Delta+C\cdot \mathrm{I}_N)}\cdot e^{-tC}$.
For the constant function $c$, we have
\begin{equation}\label{eq:heatNosign2}
\Delta c=0.
\end{equation}
Therefore, we have $\frac{\partial}{\partial t}P_tc=0$, which implies $P_tc=c$.
\end{proof}
\begin{remark}
Note that the two facts (\ref{eq:Offdiag}) and (\ref{eq:heatNosign2}) do not extend to general $P_t^\sigma$, even when $\sigma$ only takes values from $O(1)=\{\pm 1\}$. Therefore Proposition \ref{pro:heatsemigroupunsigned} does not hold for the more general operators $P_t^\sigma$.
\end{remark}
If $n=\infty$, the $CD^{\sigma}$ inequality is equivalent to the following local functional inequalities of $P^{\sigma}_tf$.
\begin{thm}\label{thm:curvature-characterization}
Let $(G, \mu, \sigma)$ be given. Then the following are equivalent:
\begin{enumerate}[(i)]
\item The inequality $CD^{\sigma}(K, \infty)$ holds, i.e., for any function $f:V\to \mathbb{K}^d$, we have
$$\Gamma_2^\sigma(f)\geq K\Gamma^\sigma(f);$$\label{eq:cdsigma}
\item For any function $f:V\to \mathbb{K}^d$ and $t\geq 0$, we have
$$\Gamma^{\sigma}(P_t^{\sigma}f)\leq e^{-2Kt}P_t(\Gamma^{\sigma}(f));$$\label{eq:BEgradient}
\item For any function $f:V\to \mathbb{K}^d$ and $t\geq 0$, we have
$$P_t(|f|^2)-|P_t^{\sigma}f|^2\geq \frac{1}{K}(e^{2Kt}-1)\Gamma^{\sigma}(P_t^{\sigma}f),$$
where we replace $(e^{2Kt}-1)/K$ by $2t$ in the case $K=0$.
\label{eq:BEgradient2}
\end{enumerate}
\end{thm}
\begin{remark} Theorem \ref{thm:curvature-characterization} is similar in spirit to \cite[Proposition 3.3]{Bakry}. Note that Proposition \ref{pro:heatsemigroupunsigned} (\ref{pro:Ptnonnegative}), which is crucial for the proof of \cite[Proposition 3.3]{Bakry}, is not true for $P_t^\sigma$ in general. However, with our definitions of the operators $\Gamma^\sigma$ and $\Gamma_2^\sigma$, we avoid this difficulty.
\end{remark}
\begin{proof}
(\ref{eq:cdsigma}) $\Rightarrow$ (\ref{eq:BEgradient}): For any $0\leq s\leq t$, we consider
\begin{equation}
F(s):=e^{-2Ks}P_s(\Gamma^{\sigma}(P_{t-s}^{\sigma}f)).
\end{equation}
Since $F(0)=\Gamma^{\sigma}(P_t^{\sigma}f)$ and $F(t)=e^{-2Kt}P_t(\Gamma^{\sigma}(f))$, it is enough to prove $\frac{d}{ds}F(s)\geq~0$.
We calculate
\begin{equation*}
e^{2Ks}\frac{d}{ds}F(s)=-2KP_s(\Gamma^\sigma(P^\sigma_{t-s}f))+\Delta P_s(\Gamma^\sigma(P^\sigma_{t-s}f))+P_s(\frac{d}{ds}\Gamma^\sigma(P^\sigma_{t-s}f)),
\end{equation*}
and
\begin{equation*}
\frac{d}{ds}\Gamma^\sigma(P^\sigma_{t-s}f)=-\Gamma^\sigma(\Delta^\sigma P_{t-s}^\sigma f, P_{t-s}^\sigma f)-\Gamma^\sigma(P_{t-s}^\sigma f, \Delta^\sigma P_{t-s}^\sigma f).
\end{equation*}
Therefore, $\Delta P_s=P_s\Delta$ and the inequality $CD^{\sigma}(K, \infty)$ imply
\begin{equation*}
\frac{d}{ds}F(s)=2e^{-2Ks}P_s \big[ \Gamma^\sigma_2(P^\sigma_{t-s}f)-K\Gamma^\sigma(P^\sigma_{t-s}f) \big]\geq 0,
\end{equation*}
where we used Proposition \ref{pro:heatsemigroupunsigned} (\ref{pro:Ptnonnegative}). This proves (\ref{eq:BEgradient}).
(\ref{eq:BEgradient}) $\Rightarrow$ (\ref{eq:BEgradient2}): For $0\leq s\leq t$, we consider
\begin{equation}
G(s):=P_s(\left|P^\sigma_{t-s}f\right|^2).
\end{equation}
Note that $G(0)=\left| P^\sigma_tf \right|^2$ and $G(t)=P_t(|f|^2)$. Using the estimate (\ref{eq:BEgradient}) and Proposition \ref{pro:heatsemigroup}, we have
\begin{align*}
\frac{d}{ds}G(s)&=\Delta P_s(\left| P^\sigma_{t-s}f \right|^2)+P_s\left[-(P^\sigma_{t-s}f)^T(\overline{\Delta^\sigma P^\sigma_{t-s}f})-(\Delta^\sigma P^\sigma_{t-s}f)^T\overline{P^\sigma_{t-s}f}\right]\\
&=2P_s(\Gamma^\sigma(P^\sigma_{t-s}f))\geq 2e^{2Ks}\Gamma^\sigma(P^\sigma_tf).
\end{align*}
Therefore, we obtain
\begin{equation*}
G(t)-G(0)=\int_0^t \frac{d}{ds}G(s) ds\geq 2\Gamma^\sigma(P^\sigma_tf)\int_0^te^{2Ks}ds= \frac{e^{2Kt}-1}{K}\Gamma^\sigma(P^\sigma_tf).
\end{equation*}
This proves (\ref{eq:BEgradient2}).
(\ref{eq:BEgradient2}) $\Rightarrow$ (\ref{eq:cdsigma}): Here, we consider the inequality (\ref{eq:BEgradient2}) for small $t>0$ and use the expansion
\begin{equation*}
P_t^\sigma=\mathrm{Id}+t\Delta^\sigma+\frac{t^2}{2}(\Delta^\sigma)^2+o(t^2).
\end{equation*}
Dividing (\ref{eq:BEgradient2}) by $2t^2$ and letting $t$ tend to zero, we obtain
\begin{align*}
&\frac{1}{4}\Delta^2(|f|^2)-\frac{1}{4}f^T\left(\overline{(\Delta^\sigma)^2f}\right)-\frac{1}{4}\left((\Delta^\sigma)^2f\right)^T\overline{f}-\frac{1}{2}(\Delta^\sigma f)^T(\overline{\Delta^\sigma f})\\
\geq & K\Gamma^{\sigma}(f)+\Gamma^{\sigma}(f, \Delta^\sigma f)+\Gamma^{\sigma}(\Delta^\sigma f, f).
\end{align*}
Using Definition \ref{def:introGamma}, the above inequality simplifies to
\begin{equation*}
\Gamma^\sigma_2(f)\geq K\Gamma^{\sigma}(f),
\end{equation*}
which shows (\ref{eq:cdsigma}).
\end{proof}
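As a numerical sanity check of Theorem \ref{thm:curvature-characterization} (again an illustrative sketch with concrete choices that are not part of the development), the following \texttt{Python} code verifies the gradient estimate (\ref{eq:BEgradient}) on the signed triangle of Section \ref{subsection:triangle}, using the curvature bound $K=K_\infty(s)$ from (\ref{eq:triangle Curvature}).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

s, mu = np.exp(2j * np.pi / 5), 2.0      # signature value and vertex measure
A_sigma = np.array([[0, 1, 1], [1, 0, s], [1, np.conj(s), 0]], dtype=complex)
A = np.abs(A_sigma)                      # underlying (unsigned) adjacency matrix
D = np.diag(A.sum(axis=1))
L_sigma, L = (A_sigma - D) / mu, (A - D) / mu    # Delta^sigma and Delta
K = (5 - np.sqrt(17 + 8 * s.real)) / 8           # K_infty(s), valid for s != 1

def gamma_sigma(f):
    # Gamma^sigma(f)(x) = (1 / (2 mu)) sum_y w_xy |sigma_xy f(y) - f(x)|^2
    out = np.zeros(len(f))
    for x in range(len(f)):
        for y in range(len(f)):
            if A[x, y] > 0:
                out[x] += A[x, y] * abs(A_sigma[x, y] / A[x, y] * f[y] - f[x]) ** 2
    return out / (2 * mu)

rng = np.random.default_rng(0)
f = rng.standard_normal(3) + 1j * rng.standard_normal(3)
for t in [0.1, 0.5, 2.0]:
    lhs = gamma_sigma(expm(t * L_sigma) @ f)             # Gamma^sigma(P_t^sigma f)
    rhs = np.exp(-2 * K * t) * (expm(t * L) @ gamma_sigma(f))
    print(t, bool(np.all(lhs <= rhs + 1e-9)))            # should print True
\end{verbatim}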
\section{Multi-way Cheeger constants with signatures}\label{section:MultCheeSig}
In this section, we introduce multi-way Cheeger constants with signatures for graphs $(G,\mu, \sigma)$.
\subsection{Cheeger constants with signatures}\label{section:CheegerSig}
Following the ideas of \cite{LLPP15}, we introduce a Cheeger type constant of $(G, \mu, \sigma)$ as a mixture of a frustration index and the expansion rate.
For any nonempty subset $S\subseteq V$, the frustration index $\iota^\sigma(S)$ is a measure of the unbalancedness of the signature $\sigma$ on the induced subgraph of $S$. For that purpose, we need to choose a norm on $H$, to measure the distance between different elements in $H$.
\begin{definition}
Given a $(d\times d)$-matrix $A=(a_{ij})$, we define the \emph{average $(2,1)$-norm} $|A|_{2,1}$ of $A$ as
\begin{equation}\label{defn:matrixNorm}
|A|_{2,1}:=\frac 1 d \sum_{j=1}^d\left(\sum_{i=1}^d|a_{ij}|^2\right)^{\frac{1}{2}}.
\end{equation}
If we denote the vector of the $j$-th column of $A$ by $A^j$, this norm can be rewritten as $|A|_{2,1}=\frac 1 d \sum_{j=1}^d |A^j|$. Recall that $|A^j|^2:=(A^j)^T\overline{A^j}$. Taking the column norms here makes the average $(2,1)$-norm invariant under left multiplication by orthogonal or unitary matrices, see (\ref{eq:normrotationinvariant}) below.
\end{definition}
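For instance, for a diagonal unitary matrix $A=\mathrm{diag}(e^{i\theta_1},\ldots, e^{i\theta_d})\in U(d)$, each column of $A-\mathrm{I}_d$ has exactly one nonzero entry, and we obtain
\begin{equation*}
|A-\mathrm{I}_d|_{2,1}=\frac{1}{d}\sum_{j=1}^d\left|e^{i\theta_j}-1\right|=\frac{2}{d}\sum_{j=1}^d\left|\sin\frac{\theta_j}{2}\right|.
\end{equation*}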
\begin{remark}
\begin{enumerate}[(i)]
\item The average $(2,1)$-norm is bounded from above by $\frac{1}{\sqrt{d}}$ times the Frobenius norm (alternatively called Hilbert-Schmidt norm), i.e., we have for any $(d\times d)$-matrix $A=(a_{ij})$,
\begin{equation}\label{eq:normwithFrobenius}
|A|_{2,1}\leq \frac{1}{\sqrt{d}}|A|_F,
\end{equation}
where $|A|_F:=\left(\sum_{i,j=1}^d|a_{ij}|^2\right)^{\frac{1}{2}}$ denotes the Frobenius norm of $A$. This is a direct consequence of the Cauchy-Schwarz inequality.
\item The average $(2,1)$-norm is not sub-multiplicative in general, i.e. $|AB|_{2,1}\leq |A|_{2,1}|B|_{2,1}$ is not necessarily true for any $(d\times d)$-matrices $A$ and $B$. However, if $B\in O(d)$ or $U(d)$, we have
\begin{equation}\label{eq:normrotationinvariant}
|BA|_{2,1}=|A|_{2,1}.
\end{equation}
Note that in this case, $|B|_{2,1}=1$.
\end{enumerate}
\end{remark}
\begin{definition}[Frustration index]\label{defn:frustration index}
Let $(G,\mu,\sigma)$ be given. We define the frustration index $\iota^\sigma(S)$ for $\emptyset\neq S \subseteq V$ as
\begin{align*}
\iota^\sigma(S) :=& \min_{\tau: S \to H}\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|_{2,1}\\
=&\min_{\tau: S \to H}\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma^\tau_{xy}-id|_{2,1},
\end{align*}
where $E_S$ is the edge set of the induced subgraph of $S$ in $G$.
\end{definition}
\begin{remark}\begin{enumerate}[(i)]
\item By (\ref{eq:normrotationinvariant}), the quantity $$|\sigma_{xy}\tau(y)-\tau(x)|_{2,1}=|\sigma_{yx}\tau(x)-\tau(y)|_{2,1}$$ is independent of the orientation of the edge $\{x,y\}\in E$. Hence, the summation $\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|_{2,1}$ is well defined.
\item In the definition of the frustration index, we are taking the infimum over all possible switching functions. Hence,
the frustration index is a switching invariant quantity.
\item The average $(2,1)$-norm is only one possible choice which can be used in the definition of the frustration index. A more canonical choice would be the Frobenius norm. However, with the aim of presenting the strongest Buser type inequalities (\ref{eq:Buser}) in Section \ref{section:Buser}, we choose the average $(2,1)$-norm here (recall (\ref{eq:normwithFrobenius})).
\end{enumerate}
\end{remark}
We denote by $|E(S, V\setminus S)|$ the boundary measure of $S\subseteq V$, which is given by
$$
|E(S, V\setminus S)|:=\sum_{x\in S}\sum_{y\not\in S}w_{xy}.
$$
In the above, we use the convention that $w_{xy}=0$ if $x\not\sim y$. The $\mu$-volume of $S$ is given by $$\mu(S):=\sum_{x\in S}\mu(x).$$
\begin{definition}\label{def:subpartition}
We call $k$ subsets $\{S_i\}_{i=1}^k$ of $V$ a \emph{nontrivial $k$-subpartition} of $V$, if all $S_i$ are nonempty and pairwise disjoint.
\end{definition}
Now we are prepared to define the Cheeger constant.
\begin{definition}[Cheeger constant]\label{defn:Cheeger} Let $(G, \mu, \sigma)$ be given.
The \emph{$k$-way Cheeger constant} $h_k^\sigma$ is defined as
\[
h_k^\sigma := \min_{\{S_i\}_{i=1}^k} \max_{1 \leq i \leq k} \phi^\sigma(S_i),
\]
where the minimum is taken over all possible nontrivial $k$-subpartitions $\{S_i\}_{i=1}^k$ of $V$ and
\[
\phi^\sigma(S):=\frac{\iota^\sigma(S) + |E(S,V\setminus S)|}{\mu(S)}.
\]
\end{definition}
Note that the multi-way Cheeger constants defined above are switching invariant. Definition~\ref{defn:Cheeger} is a natural extension of the Cheeger constants developed in \cite{AtayLiu14,LLPP15}, and is related to the constants discussed in \cite{BSS13}.
\begin{remark}[Relations with Bandeira, Singer and Spielman's constants]\label{remark:BSS}
In \cite{BSS13}, Bandeira, Singer and Spielman introduced the so-called \emph{$O(d)$ frustration $\ell_1$ constant} $\nu_{G,1}$ as follows,
\begin{equation*}
\nu_{G,1}:=\min_{\tau: V\to O(d)}\frac{1}{\sqrt{d}\mu(V)}\sum_{x,y\in V}w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|_F,
\end{equation*}
where $|\cdot|_F$ denotes the Frobenius norm of a matrix.
Modifying $\nu_{G,1}$ by also allowing zero matrices in the image of $\tau$, we obtain
\begin{equation*}
\nu^*_{G,1}:=\min_{\tau: V\to H\cup\{0\}}\frac{\sum_{x,y\in V}w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|_F}{\sum_{x\in V}\mu(x)|\tau(x)|_F},
\end{equation*}
where we denote the $(d\times d)$-zero matrix by $0$. Note that $|\tau(x)|_F=\sqrt{d}$, for $\tau(x)\in H$.
We observe the following relation between our Cheeger constant $h_1^\sigma$ and the constant $\nu^*_{G,1}$:
\begin{equation*}
h_1^\sigma\leq \frac{1}{2}\nu^*_{G,1},
\end{equation*}which is verified by the following calculation:
\begin{align*}
h_1^\sigma=&\min_{\emptyset\neq S\subseteq V}\frac{\iota^\sigma(S)+|E(S,V\setminus S)|}{\mu(S)}\\
=&\min_{\tau:V\to H\cup\{0\}}\frac{\sum_{\{x,y\}\in E}w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|_{2,1}}{\sum_{x\in V}\mu(x)|\tau(x)|_{2,1}}\\
\leq&\min_{\tau:V\to H\cup\{0\}}\frac{\sum_{\{x,y\}\in E}w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|_{F}}{\sqrt{d}\sum_{x\in V}\mu(x)|\tau(x)|_{2,1}}\\
=&\frac{1}{2}\nu^*_{G,1}.
\end{align*}
In the inequality above, we used (\ref{eq:normwithFrobenius}).
\end{remark}
For convenience, we call $\{S_i\}_{i=1}^k$ a \emph{connected}, nontrivial $k$-subpartition of $V$, if all sets $S_i\subseteq V$ are nonempty and pairwise disjoint, and if every subgraph induced by $S_i$ is connected. Then the Cheeger constants introduced in Definition \ref{defn:Cheeger} do not change if we restrict our considerations to connected, nontrivial $k$-subpartitions:
\begin{lemma}\label{lemma:connectedCheeger} Let $(G, \mu, \sigma)$ be given. Then we have
\[
h_k^\sigma = \min_{\{S_i\}_{i=1}^k} \max_{1 \leq i \leq k} \phi^\sigma(S_i),
\]
where the minimum is taken over all possible connected, nontrivial $k$-subpartitions $\{S_i\}_{i=1}^k$ of $V$.
\end{lemma}
\begin{proof}
Let $\{S_i\}_{i=1}^k$ be a possibly nonconnected, nontrivial $k$-subpartition achieving $h_k^\sigma$.
Suppose $S_i$ has the connected components $W^1_i, \ldots, W^{n(i)}_i$. Then,
\[
\phi^\sigma(S_i) \mu(S_i) = \sum_{j=1}^{n(i)} \phi^\sigma(W^j_i) \mu(W^j_i)
\]
and $\mu(S_i) = \sum_{j=1}^{n(i)} \mu(W^j_i)$. Hence, there exists $j(i)\in\{1,2,\ldots, n(i)\}$ such that $\phi^\sigma(W^{j(i)}_i) \leq \phi^\sigma(S_i)$.
Consequently,
\[
\max_{1 \leq i \leq k} \phi^\sigma(W^{j(i)}_i) \leq \max_{1 \leq i \leq k} \phi^\sigma(S_i) = h_k^\sigma,
\]
and thus, $\{W^{j(i)}_i\}_{i=1}^k$ is a connected, nontrivial $k$-subpartition of $V$ with
\[
h_k^\sigma = \max_{1 \leq i \leq k} \phi^\sigma(W^{j(i)}_i)
\]
This implies the lemma.
\end{proof}
\subsection{Frustration index via spanning trees}\label{subsection:spanning tree} This subsection is motivated by the following question:
Is there any easier way to calculate the frustration index $\iota^\sigma(S)$ of a subset $S\subseteq V$?
We will provide an affirmative answer in the case $H=U(1)$.
Note that the average $(2,1)$-norm reduces to the absolute value of a complex number, and the frustration index $\iota^\sigma(S)$ for $S \subseteq V$ simplifies to
\[
\iota^\sigma(S) := \min_{\tau: S \to U(1)}\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|,
\]
where $E_S$ is the edge set of the induced subgraph of $S$.
Here our aim is to make it easier to calculate $\iota^{\sigma}(S)$ by considering all spanning trees of the induced subgraph of $S$ and taking the minimum over so-called \emph{constant functions on these trees with respect to the signature}. This reduces the original minimization problem to a finite combinatorial problem. We will show in Appendix \ref{section:appendixSpanningTree} via a counterexample that this reduction is no longer possible in the case of higher dimensional signature groups.
\begin{definition}
Let $(G,\sigma)$ be a finite, connected graph with a signature $\sigma: E^{or}\to H$. A function $\tau: V\to H$ is \emph{constant on $G$ with respect to $\sigma$} if, for all $(x,y) \in E^{or}$, we have
\[
\sigma_{xy}\tau(y) = \tau(x).
\]
In other words, $\tau$ is a switching function such that $\sigma^\tau$ is trivial, i.e., $\sigma^\tau_{xy}=id\in H$ for all $(x,y)\in E^{or}$.
\end{definition}
Let $T=(S, E_T)$, $E_T\subseteq E_S$, be a spanning tree of the induced subgraph of $S$. We write $C_T(S):=\{\tau:S \to U(1) : \tau \mbox{ is constant on } T
\mbox{ with respect to } \sigma\}$. Moreover, we define $\mathbb{T}_S$ as the set of all spanning trees of the induced subgraph of $S$.
Since $T$ is a tree, the set $C_T(S)$ is not empty. Since $T$ is a spanning tree, we have $C_T(S)= \tau U(1) :=\{\tau z: S\to U(1)\mid z\in U(1)\}$ for any $\tau \in C_T(S)$.
\begin{thm}\label{thm:spanning tree}
Let $S \subseteq V$ be a nonempty subset of $V$ which induces a connected subgraph. Then,
\begin{equation}\label{eq:spanning tree}
\iota^\sigma(S) = \min_{T \in \mathbb{T}_S}\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}\tau_T(y)-\tau_T(x)|,
\end{equation}
where $\tau_T$ denotes an arbitrary representative of $C_T(S)$.
Moreover, if a function $\tau:S\to U(1)$ satisfies $\sum_{\{x,y\}\in E_S} w_{xy}|\sigma_{xy}\tau(y)-\tau(x)| = \iota^\sigma(S)$, then there is a spanning tree $T=(S, E_T)$ such that $\tau$ is constant on $T$ with respect to $\sigma$.
\end{thm}
We remark that in (\ref{eq:spanning tree}) we are taking the minimum over a finite set. Moreover, given a spanning tree $T\in \mathbb{T}_S$, only terms associated to edges of $E_S$ not belonging to the spanning tree contribute to the sum.
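As an illustration of Theorem \ref{thm:spanning tree}, consider the triangle $\mathcal{C}_3$ with unit edge weights and $U(1)$ signature $\sigma_{xy}=\sigma_{xz}=1$, $\sigma_{yz}=s$ as in Section \ref{subsection:triangle}, and take $S=V=\{x,y,z\}$. For the spanning tree $T$ with edge set $\{\{x,y\},\{x,z\}\}$ we may choose $\tau_T\equiv 1$; only the non-tree edge $\{y,z\}$ contributes to (\ref{eq:spanning tree}) and yields the value $|s-1|$. The other two spanning trees lead to the same value, and hence
\begin{equation*}
\iota^\sigma(V)=|s-1|=2\left|\sin\frac{\theta}{2}\right|\,\,\text{ for }\,\,s=e^{i\theta}.
\end{equation*}
In particular, for this example $\iota^\sigma(V)$ vanishes if and only if the signature is balanced.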
Theorem \ref{thm:spanning tree} can be considered as an extension of \cite[Theorem 2]{HararyKabell80}, where Harary
and Kabell derived this result on unweighted graphs for the case $H=O(1)=\{\pm 1\}$. Their proof depends in an essential way on the fact that the frustration index in their setting (which they called \emph{line index of balance}) only assumes integer values. Therefore, their proof cannot be extended to the current general setting.
We first prove a basic lemma.
\begin{lemma}\label{lemma:1dimgeometry}
Let $Z:= \{z_1,\ldots,z_n\} \subset U(1)$ and $w_1,\ldots,w_n > 0$. Then we have
\begin{equation}\label{eq:minsimp}
\min_{z\in U(1)} \sum_{k=1}^n w_k |z-z_k| = \min_{z\in Z} \sum_{k=1}^n w_k |z-z_k|.
\end{equation}
Moreover, if $z \in U(1) \setminus Z$, then
\[
\sum_{k=1}^n w_k |z-z_k| > \min_{z\in Z} \sum_{k=1}^n w_k |z-z_k|.
\]
\end{lemma}
\begin{proof}
The minimum on the left hand side of (\ref{eq:minsimp}) exists, since $U(1)$ is compact and $\sum_{k=1}^n w_k |z-z_k|$ is continuous in $z$. Suppose that the minimum is attained at $z_0= e^{it_0}$ with $z_0 \notin Z$.
That is, the function $\phi: {\mathbb R} \to {\mathbb R}, t \mapsto \sum_{k=1}^n w_k |e^{it}-z_k|$ attains a local minimum at $t_0$. Since $z_0 \notin Z$, the second derivative $\phi''$ exists at $t_0$ and is nonnegative due to the minimum property.
But for all $k \in \{1,\ldots,n\}$, we can set $z_k=e^{i t_k}$ and compute
\[
\frac {d^2} {dt^2} |e^{it}-e^{i t_k}| (t_0) = 2 \frac {d^2} {dt^2} \left| \sin \frac{t-t_k} 2 \right| (t_0) < 0.
\]
Since all weights $w_k$ are positive, this yields $\phi''(t_0)<0$, which is a contradiction; hence, $z_0 \in Z$. The strict inequality for $z \in U(1)\setminus Z$ follows by the same argument, since any point at which the minimum is attained is in particular a local minimum. This finishes the proof of the lemma.
\end{proof}
Now, we prove the theorem with the help of the lemma.
\begin{proof}[Proof of Theorem \ref{thm:spanning tree}]
First, we notice that the expression $w_{xy}|\sigma_{xy}\tau_T(y)-\tau_T(x)|$ does not depend on the choice of $\tau_T \in C_T(S)$ since $C_T(S)= \tau_TU(1)$.
Hence, the restriction to a representative of $C_T(S)$ makes sense.
Let $\tau_0:S \to U(1)$ be a minimizer of
$\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}\tau(y)-\tau(x)|.$
Denote by
\[
E_0:= \{\{x,y\}\in E_S : \sigma_{xy}\tau_0(y) = \tau_0(x) \}
\]
the set of edges where $\tau_0$ is constant with respect to $\sigma$.
It is sufficient to show that $G_0 = (S,E_0)$ is connected since then there is a spanning tree $T_0$ of $G_0$ such that $\tau_0$ is constant on $T_0$ with respect to $\sigma$.
Suppose $G_0$ is not connected. Then there is a connected component $W \subsetneq S$.
Denote $\partial_S W := \{(x,y)\in E^{or}_S : x \in W, y \in S \setminus W\}$. We have $\partial_S W \neq \emptyset$ since $S$ is connected. Moreover, we have $\sigma_{xy}\tau_0(y) \neq \tau_0(x)$ for all $(x,y) \in \partial_S W$, since $W$ is a connected component of $G_0$: otherwise $\{x,y\}$ would belong to $E_0$ and $y$ would also belong to $W$, contradicting $(x,y) \in \partial_S W$.
Since $|\tau_0(x)| = 1$ for all $x\in S$, we have
\[
\min_{z\in U(1)}\sum_{(x,y)\in \partial_S W } w_{xy}|\sigma_{xy}\tau_0(y)- z \tau_0(x)| = \min_{z\in U(1)}\sum_{(x,y)\in \partial_S W } w_{xy}|\sigma_{xy}\tau_0(y)\overline{\tau_0(x)} - z |,
\]
and the previous lemma states that this minimum is attained only at elements of the set
$\{ \sigma_{xy}\tau_0(y)\overline{\tau_0(x)} : (x,y) \in \partial_S W \}$.
But $1 \notin \{ \sigma_{xy}\tau_0(y)\overline{\tau_0(x)} : (x,y) \in \partial_S W \}$, since $\sigma_{xy}\tau_0(y) \neq \tau_0(x)$ for all $(x,y) \in \partial_S W$.
Hence, there exists $z_0\in U(1)$ such that
\begin{equation}
\sum_{(x,y)\in \partial_S W } w_{xy}|\sigma_{xy}\tau_0(y)- z_0 \tau_0(x)|
< \sum_{(x,y)\in \partial_S W } w_{xy}|\sigma_{xy}\tau_0(y)- \tau_0(x)|. \label{tau1 tau0}
\end{equation}
We define $\tau_1: S\to U(1)$,
\[
\tau_1(x) := \begin{cases}
z_0 \tau_0(x), & \mbox{ if } x \in W; \\
\tau_0(x), & \mbox{ if }x \in S\setminus W.
\end{cases}
\]
Consequently,
\begin{eqnarray*}
&& \sum_{\{x,y\}\in E_S} w_{xy}|\sigma_{xy}\tau_1(y)-\tau_1(x)| \\
&=& \sum_{\{x,y\}\in E_W \cup E_{S\setminus W}} w_{xy}|\sigma_{xy}\tau_1(y)-\tau_1(x)| + \sum_{(x,y)\in \partial_S W} w_{xy}|\sigma_{xy}\tau_1(y)-\tau_1(x)| \\
&=& \sum_{\{x,y\}\in E_W \cup E_{S\setminus W}} w_{xy}|\sigma_{xy}\tau_0(y)-\tau_0(x)| + \sum_{(x,y)\in \partial_S W} w_{xy}|\sigma_{xy}\tau_0(y)- z_0 \tau_0(x)| \\
&\stackrel{(\ref{tau1 tau0})}{<}& \sum_{\{x,y\}\in E_W \cup E_{S\setminus W}} w_{xy}|\sigma_{xy}\tau_0(y)-\tau_0(x)| + \sum_{(x,y)\in \partial_S W} w_{xy}|\sigma_{xy}\tau_0(y)- \tau_0(x)| \\
&=& \sum_{\{x,y\}\in E_S} w_{xy}|\sigma_{xy}\tau_0(y)-\tau_0(x)|.
\end{eqnarray*}
This is a contradiction to the fact that $\tau_0$ is a minimizer of $\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}\tau(y)-~\tau(x)|$. Thus, $G_0$ has to be connected. This finishes the proof.
\end{proof}
Recall from Lemma \ref{lemma:connectedCheeger} that the Cheeger constant $h_k^\sigma$ is the minimum of $\max_{1\leq i\leq k}\phi^{\sigma}(S_i)$ over
all possible \emph{connected}, nontrivial $k$-subpartitions $\{S_i\}_{i=1}^k$. Therefore, Theorem \ref{thm:spanning tree} implies that the calculation of $h_k^\sigma$ reduces to a
finite combinatorial minimization problem if $H=U(1)$.
\section{Buser inequalities}\label{section:Buser}\label{section:BusIne}
In this section, we prove our main theorem, namely, higher order Buser type inequalities for nonnegatively curved graphs (cf. Theorem \ref{thm:introMain} in the Introduction).
\begin{thm}[Main Theorem]\label{thm:Buser}
Let $(G,\mu, \sigma)$ satisfy $CD^\sigma(0,\infty)$. Then for all $1\leq k\leq N$, we
have
\begin{equation}\label{eq:Buser}
\sqrt{\lambda^\sigma_{kd}}\leq 4 \sqrt{D_G^{nor}} \left(kd \sqrt{\log(2kd)}\right) h_k^\sigma.
\end{equation}
\end{thm}
Before we present the proof, we first discuss the following two lemmata.
We will use the following notation for the $\ell^{p}(V,\mathbb{K}^d;\mu)$ norm of functions, $1\leq p\leq \infty$,
$$\Vert f\Vert_{p,\mu}:=\left(\sum_{x\in V}\mu(x)|f(x)|^p\right)^{\frac{1}{p}}.$$
For simplicity, we omit the subscript $\mu$ in the following arguments.
\begin{lemma}\label{lemma:lonenorm}
Assume that $(G,\mu, \sigma)$ satisfies $CD^\sigma(0,\infty)$. Then for any function $f: V\to \mathbb{K}^d$ and $t\geq 0$,
we have
\begin{equation}\label{eq:lonenorm}
\Vert f-P^\sigma_tf\Vert_1\leq \sqrt{2t}\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1.
\end{equation}
\end{lemma}
\begin{proof}
First, the equivalent formulation of the $CD^{\sigma}(0,\infty)$ inequality in Theorem \ref{thm:curvature-characterization} (\ref{eq:BEgradient2}) implies that
\begin{equation}\label{eq:linfty}
\Vert \sqrt{P_t(|f|^2)}\Vert_{\infty}\geq \sqrt{2t}\Vert \sqrt{\Gamma^\sigma(P^\sigma_tf)}\Vert_{\infty}.
\end{equation}
The inequality (\ref{eq:lonenorm}) is actually a dual version of the above one.
We set $$g(x):=\left\{
\begin{array}{ll}
0, & \hbox{if $f(x)-P_t^\sigma f(x)=0$;} \\
(f(x)-P_t^\sigma f(x))/|f(x)-P_t^\sigma f(x)|, & \hbox{otherwise,}
\end{array}
\right.
$$
and calculate
\begin{align*}
\Vert f-P^\sigma_tf\Vert_1&=\langle f-P_t^\sigma f, g\rangle_{\mu}=\langle -\int_0^t\frac{\partial}{\partial s}P_s^\sigma fds, g\rangle_{\mu}\\
&=-\int_0^t\langle \Delta^\sigma f, P_s^{\sigma}g\rangle_{\mu}ds\\
&=\int_0^t\sum_{x\in V}\mu(x)\Gamma^\sigma(f, P_s^\sigma g)(x)ds,
\end{align*}
where we used the self-adjointness of $P_t^\sigma$ and the summation by part formula (\ref{eq:summaBypart}). We further apply Proposition \ref{pro:Gammasigma} and the estimate (\ref{eq:linfty}) to derive
\begin{align*}
\Vert f-P^\sigma_tf\Vert_1&\leq \int_0^t\sum_{x\in V}\mu(x)\sqrt{\Gamma^\sigma(f)(x)}\sqrt{\Gamma^\sigma(P_s^\sigma g)(x)}ds\\
&\leq \int_0^t\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1\Vert\sqrt{\Gamma^\sigma(P_s^\sigma g)}\Vert_{\infty}ds\\
&\leq \Vert\sqrt{\Gamma^\sigma(f)}\Vert_1\int_0^t\frac{1}{\sqrt{2s}}\Vert \sqrt{P_s(|g|^2)}\Vert_{\infty}ds\\
&\leq \sqrt{2t}\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1.
\end{align*}
In the last inequality we used the fact $P_s(|g|^2)\leq \Vert|g|^2\Vert_{\infty}=1$, which follows from Proposition \ref{pro:heatsemigroupunsigned}.
\end{proof}
We still need the following technical lemma.
\begin{lemma}\label{lemma:sqrtGamma}
For any function $f: V\to \mathbb{K}^d$, we have
\begin{equation}\label{eq:sqrtGamma}
\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1\leq \sqrt{2D_G^{nor}}\sum_{\{x,y\}\in E}w_{xy}|\sigma_{xy}f(y)-f(x)|
\end{equation}
\end{lemma}
\begin{proof}
It is straightforward to calculate
\begin{align*}
\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1&=\sum_{x\in V}\mu(x)\sqrt{\frac{1}{2\mu(x)}\sum_{y,y\sim x}w_{xy}\left|\sigma_{xy}f(y)-f(x)\right|^2}\\
&\leq \sum_{x\in V}\sqrt{\frac{\mu(x)}{2}}\sum_{y,y\sim x}\sqrt{w_{xy}}\left|\sigma_{xy}f(y)-f(x)\right|\\
&\leq \sqrt{\frac{D_G^{nor}}{2}}\sum_{x\in V}\sum_{y,y\sim x}w_{xy}\left|\sigma_{xy}f(y)-f(x)\right|.
\end{align*}
This simplifies to (\ref{eq:sqrtGamma}), since the summands above are symmetric w.r.t. $x$ and $y$.
\end{proof}
Now, we have all ingredients for the proof of the Buser type inequality (\ref{eq:Buser}).
\begin{proof}[Proof of Theorem~\ref{thm:Buser}]
Let $\{S_i\}_{i=1}^k$ be an arbitrary nontrivial $k$-subpartition of $V$. For each $S_i$, let $\tau_i: S_i\to H$ be a function achieving the value $\iota^\sigma(S_i)$ introduced in Definition \ref{defn:frustration index}. We extend each $\tau_i$ trivially to a function on $V$, by assigning zero matrices to the vertices in $V\setminus S_i$. By abuse of notation, we denote this extension, again, by $\tau_i: V\to H$. Each $\tau_i$ gives rise to $d$ pairwise orthogonal functions in $\ell^2(V, \mathbb{K}^d;\mu)$:
\begin{equation}
\tau_i^l: V\to \mathbb{K}^{d},\,\,\,x\mapsto \left(\tau_i(x)\right)^l, \,\,l=1,2,\ldots, d,
\end{equation}
where $\left(\tau_i(x)\right)^l$ denotes the $l$-th column vector of the matrix $\tau_i(x)$. Note that for $x\in S_i$, we have $|\tau_i^l(x)|=1$.
For every $1\leq i\leq k$, we apply Lemma \ref{lemma:sqrtGamma} to obtain
\begin{align}
&\frac 1 d \sum_{l=1}^d
\Vert\sqrt{\Gamma^\sigma(\tau_i^l)}\Vert_1 \notag\\
\leq & \frac 1 d \sum_{l=1}^d \sqrt{2D_G^{nor}}\left(\sum_{\{x,y\}\in E_{S_i}}w_{xy}|\sigma_{xy}\tau_i^l(y)-\tau_i^l(x)|+|E(S_i,V\setminus S_i)|\right) \notag\\
\leq &\sqrt{2D_G^{nor}}(\iota^\sigma(S_i)+|E(S_i,V\setminus S_i)|).\label{eq:B-tobecollone}
\end{align}
On the other hand, we have by Lemma \ref{lemma:lonenorm},
\begin{align}
\sqrt{2t}\Vert\sqrt{\Gamma^\sigma(\tau_i^l)}\Vert_1\geq & \sum_{x\in V}\mu(x)\left|\tau_i^l(x)-P_t^\sigma\tau_i^l(x)\right|\notag\\
\geq &\sum_{x\in V}\mu(x)\left|\tau_i^l(x)-P_t^\sigma\tau_i^l(x)\right|\cdot|\tau_i^l(x)|\notag\\
\geq & \sum_{x\in V}\mu(x)\mathrm{Re}\left((\tau_i^l(x))^T(\overline{\tau_i^l(x)-P_t^\sigma\tau_i^l(x)})\right),\notag
\end{align}
where $\mathrm{Re}(\cdot)$ denotes the real part of a complex number, and we used the Cauchy-Schwarz inequality in the last inequality. By Proposition \ref{pro:heatsemigroup}, we continue to calculate
\begin{align}\label{eq:B-tobecolltwo}
\sqrt{2t}\Vert\sqrt{\Gamma^\sigma(\tau_i^l)}\Vert_1\geq \mathrm{Re}\left( \langle \tau_i^l, \tau_i^l-P_t^\sigma\tau_i^l \rangle_{\mu} \right)=\Vert\tau_i^l\Vert_2^2-\Vert P_{t/2}^\sigma\tau_i^l\Vert_2^2.
\end{align}
Let $\{\psi_n\}_{n=1}^{Nd}$ be an orthonormal basis of $\ell^2(V, \mathbb{K}^d; \mu)$ consisting of the eigenfunctions corresponding to $\{\lambda^\sigma_n\}_{n=1}^{Nd}$, respectively. Setting
$$\alpha_{i,n}^l:=\langle \tau_i^l, \psi_n\rangle_{\mu},$$
we have
\begin{equation}\label{eq:B-tobecollthree}
\sum_ {n=1}^{Nd} \left| \alpha_{i,n}^l \right|^2 =\Vert\tau_i^l\Vert^2_2= \mu(S_i),
\end{equation}
and
\begin{equation}\label{eq:B-tobecollfour}
\Vert P_{t/2}^\sigma\tau_i^l\Vert_2^2=\sum_ {n=1}^{Nd}e^{-t\lambda^\sigma_n}\left| \alpha_{i,n}^l \right|^2.
\end{equation}
Now (\ref{eq:B-tobecollone}), (\ref{eq:B-tobecolltwo}), (\ref{eq:B-tobecollthree}), and (\ref{eq:B-tobecollfour}) together imply, for each $1\leq i\leq k$,
\begin{align}\label{eq:phi-lambda}
2\sqrt{D_G^{nor}t}\phi^{\sigma}(S_i)
\geq& \frac 1 d \sum_{l=1}^d \left( 1-\sum_ {n=1}^{Nd}e^{-t\lambda^\sigma_n}\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)} \right) \notag \\
\geq&\frac 1 d \sum_{l=1}^d \left( 1-\sum_ {n=1}^{kd-1}\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)}-e^{-t\lambda_{kd}^\sigma}\sum_{n=kd}^{Nd}\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)} \right) \notag \\
\geq& 1- \frac 1 d \sum_{l=1}^d \sum_{n=1}^{kd-1}\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)} - e^{-t\lambda_{kd}^\sigma}.
\end{align}
By (\ref{eq:B-tobecollthree}), we know
\begin{equation*}
1- \frac 1 d \sum_{l=1}^d \sum_{n=1}^{kd-1}\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)}\geq 0,
\end{equation*}
but our aim is to show that for some $i_0\in \{1,2,\ldots,k\}$ this expression is bounded from below by $\frac{1}{kd}$. We rewrite the summands as follows,
\begin{equation}
\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)}=\left|\left\langle\frac{\tau_i^l}{\sqrt{\mu(S_i)}}, \psi_n\right\rangle\right|^2.
\end{equation}
Since the functions $\tau_i^l/\sqrt{\mu(S_i)}, i=1,2,\ldots, k, l=1,2,\ldots,d,$ are orthonormal in the space $\ell^2(V, \mathbb{K}^d;\mu)$, we obtain
\begin{equation}
\sum_{i=1}^k\sum_{l=1}^d\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)}\leq \Vert\psi_n\Vert_2^2=1.
\end{equation}
Summation over $n$ yields
\begin{equation}
\sum_{i=1}^k\sum_{l=1}^d\sum_{n=1}^{kd-1}\frac{\left| \alpha_{i,n}^l \right|^2}{\mu(S_i)}\leq kd-1.
\end{equation}
Consequently, there exists an $i_0\in \{1,2,\ldots, k\}$ such that
\begin{equation}
\frac 1 d \sum_{l=1}^d \sum_{n=1}^{kd-1}\frac{\left| \alpha_{i_0,n}^{l} \right|^2}{\mu(S_{i_0})}\leq 1-\frac{1}{kd}.
\end{equation}
We insert this estimate into inequality (\ref{eq:phi-lambda}) to obtain
\begin{equation}
2\sqrt{D_G^{nor}t}\max_{1\leq i\leq k}\phi^{\sigma}(S_{i})\geq \frac{1}{kd}-e^{-t\lambda_{kd}^\sigma}.
\end{equation}
Since the $k$-subpartition was chosen arbitrarily, we have
\begin{equation}
2 \sqrt{D_G^{nor}t} \cdot h_k^\sigma \geq \frac{1}{kd} - e^{-t\lambda_{kd}^\sigma}.
\end{equation}
If $\lambda_{kd}^\sigma = 0$, then (\ref{eq:Buser}) holds trivially. If $\lambda_{kd}^\sigma\neq 0$, we choose $t=\log(2dk)/\lambda_{kd}^\sigma$, so that $e^{-t\lambda_{kd}^\sigma}=\frac{1}{2dk}$, and obtain
\begin{equation}
4 \sqrt{D_G^{nor}}kd \sqrt{\log(2dk)} h_k^\sigma \geq \sqrt{\lambda_{kd}^\sigma}.
\end{equation}
This completes the proof.
\end{proof}
Recall from Corollary \ref{cor:lower curvature bound} that any graph $(G,\mu,\sigma)$ has a specific finite lower curvature bound. In case of a negative lower curvature bound, we have the following result.
For a subset $S\subseteq V$, we define the following constant, which is no greater than $\iota^\sigma(S)$,
\begin{equation}
\widetilde{\iota^\sigma}(S) := \min_{\substack{f: S \to \mathbb{K}^{d}\\ |f(x)|=1,\,\forall x\in S}}\sum_{\{x,y\}\in E_S}
w_{xy}|\sigma_{xy}f(y)-f(x)|.
\end{equation}
Using this constant, we have the following isoperimetric type inequality.
\begin{thm}\label{thm:isoperimetry}
Let $(G,\mu, \sigma)$ satisfy $CD^\sigma(-K,\infty)$, for $K\geq 0$. Then for any subset $\emptyset\neq S\subseteq V$,
\begin{equation}
\widetilde{\iota^\sigma}(S)+|E(S, V\setminus S)|\geq\frac{1}{2\sqrt{2D_G^{nor}}}\min\left\{(1-e^{-1})\sqrt{\lambda_1^\sigma}, \frac{\lambda_1^\sigma}{2\sqrt{2K}}\right\}\mu(S).
\end{equation}
\end{thm}
\begin{proof}
Modifying the proof of Lemma \ref{lemma:lonenorm} for $K\geq 0$, we derive from the inequality $CD^\sigma(-K, \infty)$ that for any function $f: V\to \mathbb{K}^d$,
\begin{equation}
\Vert f-P^\sigma_tf\Vert_1\leq \int_0^t\sqrt{\frac{K}{1-e^{-2Ks}}}ds\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1.
\end{equation}
Using $1-e^{-u}\geq u/2$ for $0\leq u\leq 1$, we have for $0\leq t\leq 1/(2K)$,
\begin{equation}
\Vert f-P^\sigma_tf\Vert_1\leq 2\sqrt{t}\Vert\sqrt{\Gamma^\sigma(f)}\Vert_1.
\end{equation}
Let $S$ be an arbitrary nonempty subset of $V$. Let $f^0: S\to \mathbb{K}^{d}$ with $|f^0(x)|=1$ for all $x\in S$ be the function achieving the value of $\widetilde{\iota^\sigma}(S)$. By similar reasoning as in the proof of Theorem \ref{thm:Buser}, we obtain for $0\leq t\leq 1/(2K)$,
\begin{equation}
2\sqrt{2D_G^{nor}t}\left(\widetilde{\iota^\sigma}(S)+|E(S, V\setminus S)|\right)\geq \mu(S)(1-e^{-t\lambda_1^\sigma}).
\end{equation}
If $\lambda_1^\sigma\geq 2K$, we set $t=1/\lambda^\sigma_1$ and obtain
\begin{equation}
2\sqrt{2D_G^{nor}}\left(\widetilde{\iota^\sigma}(S)+|E(S, V\setminus S)|\right)\geq \mu(S)(1-e^{-1})\sqrt{\lambda_1^\sigma}.
\end{equation}
If $\lambda_1^\sigma< 2K$, we set $t=1/(2K)$ and obtain
\begin{equation}
2\sqrt{\frac{D_G^{nor}}{K}}\left(\widetilde{\iota^\sigma}(S)+|E(S, V\setminus S)|\right)\geq \mu(S)(1-e^{-\frac{\lambda_1^\sigma}{2K}})\geq \mu(S)\frac{\lambda_1^\sigma}{4K}.
\end{equation}
Combining both cases completes the proof.
\end{proof}
We now define the following Cheeger type constant $\widetilde{h_1^\sigma}$ corresponding to $\widetilde{\iota^\sigma}(S)$.
\begin{definition}
Let $(G,\mu,\sigma)$ be given. The constant $\widetilde{h_1^\sigma}$ is defined as
\begin{equation*}
\widetilde{h_1^\sigma}=\min_{\emptyset\neq S\subseteq V}\frac{\widetilde{\iota^\sigma}(S)+|E(S,V\setminus S)|}{\mu(S)}.
\end{equation*}
\end{definition}
By definition, we observe that $\widetilde{h_1^\sigma}\leq h_1^\sigma$. Theorem \ref{thm:isoperimetry} implies the following estimate immediately.
\begin{coro}\label{thm:Buser-K}
Let $(G,\mu, \sigma)$ satisfy $CD^\sigma(-K,\infty), K\geq 0$. Then we
have
\begin{equation}\label{eq:Buser-K}
\lambda^\sigma_{1}\leq 8\max\{(e/(e-1))^2 D_G^{nor}(\widetilde{h_1^\sigma})^2, \sqrt{D_G^{nor}K}\widetilde{h_1^\sigma}\}.
\end{equation}
\end{coro}
Note that, for the constant $\widetilde{h_1^\sigma}$, the following Cheeger type inequality is proved in \cite[Theorem 4.1]{BSS13} (see also \cite[Theorem 4.6 and Remark 4.9]{LLPP15}).
\begin{thm}[\cite{BSS13}]\label{thm:Cheeger}
Let $(G,\mu,\sigma)$ be given. Then we have
\begin{equation}
\frac{2}{5D_G^{non}}(\widetilde{h_1^\sigma})^2\leq \lambda_1^\sigma\leq 2\widetilde{h_1^\sigma}.
\end{equation}
\end{thm}
\begin{example}[Signed Triangle] We revisit the example of a signed triangle discussed in Section \ref{subsection:triangle} (see Figure \ref{F1}). In this case we have $H=U(1)$ and, therefore, $h_1^\sigma=\widetilde{h_1^\sigma}$. Using Theorem \ref{thm:spanning tree}, we can check
$$h_1^\sigma=\frac{|s-1|}{6}=\frac{\sqrt{2(1-\mathrm{Re}(s))}}{6}.$$
The Buser type inequality Theorem \ref{thm:Buser} tells us
\begin{equation}\label{eq:triangle Buser}
\lambda_1^\sigma\leq 32\log 2(h_1^\sigma)^2,
\end{equation}
while the Cheeger type inequality Theorem \ref{thm:Cheeger} gives
\begin{equation}\label{eq:triangle Cheeger}
\frac{2}{5}(h_1^\sigma)^2\leq \lambda_1^\sigma\leq 2h_1^\sigma.
\end{equation}
The comparison of the estimates (\ref{eq:triangle Buser}) and (\ref{eq:triangle Cheeger}) is shown in Figure \ref{FtriangleComp}, where we treat the quantities $\lambda_1^\sigma$ and $h_1^\sigma$ as functions of the variable $\mathrm{Re}(s)$.
\end{example}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{CheegerBuserEstimates.pdf}
\caption{Comparison of Cheeger and Buser estimates for a signed triangle\label{FtriangleComp}}
\end{figure}
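The comparison in Figure \ref{FtriangleComp} can also be reproduced numerically. The following Python sketch assumes unit edge weights and the vertex measure $\mu(x)=\deg(x)=2$ (which is consistent with $h_1^\sigma=|s-1|/6$); under these assumptions $-\Delta^\sigma$ is represented by the Hermitian matrix $I-\tfrac{1}{2}A^\sigma$, where $A^\sigma$ is the signed adjacency matrix of the triangle.
\begin{verbatim}
# Sketch: lambda_1^sigma of the signed triangle versus the Cheeger and Buser estimates,
# assuming unit weights and mu(x) = deg(x) = 2, so that h_1^sigma = |s - 1| / 6.
import numpy as np

def lambda1_triangle(theta):
    s = np.exp(1j * theta)                # signature s on one edge, trivial elsewhere
    A = np.array([[0, 1, np.conj(s)],     # signed adjacency, A[x, y] = sigma_xy
                  [1, 0, 1],
                  [s, 1, 0]])
    return np.linalg.eigvalsh(np.eye(3) - A / 2.0)[0]

for theta in [0.3, 1.0, 2.0, np.pi]:
    lam1 = lambda1_triangle(theta)
    h1 = abs(np.exp(1j * theta) - 1) / 6.0
    print(f"theta={theta:4.2f}  lambda_1={lam1:.4f}  "
          f"Cheeger=[{0.4*h1**2:.4f},{2*h1:.4f}]  Buser={32*np.log(2)*h1**2:.4f}")
\end{verbatim}
For each of these values of $\theta$, the printed $\lambda_1^\sigma$ lies inside the Cheeger interval of (\ref{eq:triangle Cheeger}) and below the Buser bound of (\ref{eq:triangle Buser}).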
\section{Lichnerowicz estimate and applications}\label{section:Lichnerowicz}
We have the following Lichnerowicz type eigenvalue estimate (cf. Theorem \ref{thm:introLic} in the Introduction).
\begin{thm}[Lichnerowicz inequality]\label{thm:lichnerowicz}
Assume that $(G, \mu, \sigma)$ satisfies $CD^{\sigma}(K, n)$ for $K\in \mathbb{R}$ and $n\in \mathbb{R}_+$. Then we have for any non-zero eigenvalue $\lambda^{\sigma}$ of $\Delta^{\sigma}$,
\begin{equation}\label{eq:lichnerowicz}
\frac{n-1}{n}\lambda^{\sigma}\geq K,
\end{equation}
where we use the convention $(n-1)/n=1$ in the case $n=\infty$.
\end{thm}
\begin{proof}
Let $\psi: V\to \mathbb{K}^d$ be the corresponding eigenfunction of $\lambda^{\sigma}$ with unit $\ell^2(V,\mathbb{K}^d;\mu)$ norm. Integrating the inequality $CD^{\sigma}(K, n)$ over the measure $\mu$, we obtain
\begin{equation}\label{eq:cdineq-integration}
\sum_{x\in V}\mu(x)\Gamma_2^\sigma(\psi)(x)\geq \frac{1}{n}\sum_{x\in V}\mu(x)\left| \Delta^\sigma \psi(x) \right|^2+K\sum_{x\in V}\mu(x)\Gamma^\sigma(\psi)(x).
\end{equation}
By (\ref{eq:laplacian-integration}), we have
\begin{equation*}
\sum_{x\in V}\mu(x)\Gamma_2^\sigma(\psi)(x)=-\sum_{x\in V}\mu(x)\mathrm{Re}(\Gamma^\sigma(\psi, \Delta^{\sigma}\psi)(x))=\lambda^{\sigma}\sum_{x\in V}\mu(x)\Gamma^\sigma(\psi)(x).
\end{equation*}
Recalling the summation by part formula (\ref{eq:summaBypart}), we have
\begin{equation*}
\sum_{x\in V}\mu(x)\Gamma^\sigma(\psi)(x)=-\langle \psi, \Delta^{\sigma}\psi\rangle_{\mu}=\lambda^{\sigma}.
\end{equation*}
Therefore, (\ref{eq:cdineq-integration}) tells us that
\begin{equation}
(\lambda^{\sigma})^2\geq \frac{1}{n}(\lambda^{\sigma})^2+\lambda^{\sigma}K.
\end{equation}
This implies (\ref{eq:lichnerowicz}) in the case $\lambda^{\sigma}\neq 0$.
\end{proof}
Consequently, we have the following estimates about the lower curvature bound of a graph.
\begin{coro}\label{cor:CurUpperCheeger}
Let $(G,\mu, \sigma)$ satisfy $CD^\sigma(K,n)$, for some $K\in \mathbb{R}$ and $n\in \mathbb{R}_+$. Then we have the following facts:
\begin{enumerate}[(i)]
\item If $n=1$, we have $$K\leq 0;$$\label{eq:n=1}
\item If $0< n<1$, we have $K<0$. If, furthermore, $\sigma$ is not balanced, we have $$K\leq -\frac{2(1-n)}{5nD_G^{non}}(\widetilde{h_1^\sigma})^2;$$\label{eq:n<1}
\item If $1<n\leq \infty$ and $\sigma$ is not balanced, we have $$K\leq \frac{2(n-1)}{n}\widetilde{h_1^\sigma}\leq\frac{2(n-1)}{n}h_1^\sigma.$$\label{eq:n>1}
\end{enumerate}
\end{coro}
\begin{proof}
The estimate (\ref{eq:n=1}) follows directly from Theorem \ref{thm:lichnerowicz}.
Note that $\lambda_1^\sigma$ is positive when $\sigma$ is not balanced. Hence we can
combine Theorem \ref{thm:lichnerowicz} and Theorem \ref{thm:Cheeger} to conclude estimates (\ref{eq:n<1}) and (\ref{eq:n>1}).
\end{proof}
If the graph has a nonnegative lower curvature bound, we can improve the estimate Corollary \ref{cor:CurUpperCheeger} (\ref{eq:n>1}) by applying Corollary \ref{thm:Buser-K}.
\begin{coro}\label{cor:jump}
Let $(G,\mu, \sigma)$ satisfy $CD^\sigma(K,n)$, for some $K\geq 0$ and $1<n\leq \infty$. If the signature $\sigma$ is not balanced, then
we have
\begin{equation}\label{eq:curvature-Cheeger}
K\leq \frac{n-1}{n}\min\left\{2\widetilde{h_1^\sigma},\frac{8e^2}{(e-1)^2}D_G^{nor}(\widetilde{h_1^\sigma})^2\right\}.
\end{equation}
\end{coro}
\begin{proof}
Recall that $CD^\sigma(K,n)$ implies $CD^\sigma(K,\infty)$. Hence, Corollary \ref{thm:Buser-K} is applicable here.
\end{proof}
\begin{remark}[Jump of the curvature around a balanced signature]\label{remark:jump}
Suppose that a graph $(G, \mu)$ with a balanced signature has positive $n$-dimensional Ricci curvature, i.e., $K_{n}(\sigma_{\mathrm{triv}})>0$. (Recall that every balanced signature is switching equivalent to $\sigma_{\mathrm{triv}}$. Note that by Corollary \ref{cor:CurUpperCheeger}, $K_{n}(\sigma_{\mathrm{triv}})>0$ is only possible when $1<n\leq\infty$.) Then by Corollary \ref{cor:CurUpperCheeger}, we observe that the curvature $K_{n}(\sigma)$ of
$(G,\mu, \sigma)$, as a function of the signature $\sigma$, has the following ``jump'' phenomenon: For unbalanced signatures $\sigma$, when they are close to the balanced signature $\sigma_{\mathrm{triv}}$,
\begin{equation}
\limsup_{\iota^\sigma(V)\to 0}K_n(\sigma)\leq 0,\,\,\,\text{but}\,\,K_n(\sigma_{\mathrm{triv}})>0.
\end{equation}
In the above expression, we use $\iota^\sigma(V)$ as a measure for the difference between $\sigma$ and $\sigma_{\mathrm{triv}}$.
\end{remark}
The jump of the curvature is closely related to the jump phenomenon of the first non-zero eigenvalue of $\Delta^\sigma$. When the signature $\sigma$ of a connected graph becomes balanced, the first non-zero eigenvalue jumps from $\lambda_1^\sigma$ to $\lambda_2^\sigma$.
\begin{example}[Signed Triangle]
We consider the example of a signed triangle again. Recall that we have observed the jump phenomenon of the curvature of a signed triangle in Remark \ref{remark:jumptriangle} (see Figure \ref{F2}). In Figure \ref{FtriangleEigenJump} of the Introduction, the jumps of the $\infty$-dimensional Ricci curvature and the first non-zero eigenvalue of a signed triangle are illustrated in the same diagram.
\end{example}
We conclude this section with an interesting application of the jump phenomenon of the curvature.
\begin{thm}\label{thm:ApplOFJump}
Suppose that a graph $G$ has at least one cycle, but no cycles of length $3$ or $4$. Then, for any signature $\sigma$, any edge weights and any vertex measure $\mu$, we have
$$K_n(G,\mu,\sigma)\leq 0,\,\,\text{ for any }\,\,n\in \mathbb{R}_+.$$
\end{thm}
\begin{proof}
Since $G$ contains at least one cycle, there exist unbalanced signatures on $G$. On the other hand, we have \begin{equation}\label{eq:contrToJump}K_n(G,\mu,\sigma)=K_n(G,\mu,\sigma_{\mathrm{triv}})\end{equation} by Proposition \ref{pro:shortcycles}, as $G$ has no cycles of length $3$ or $4$. Therefore, if $K_n(G,\mu,\sigma_{\mathrm{triv}})>0$, the equality (\ref{eq:contrToJump}) leads to a contradiction to the jump of the curvature observed in Remark \ref{remark:jump}. Hence we must have $K_n(G,\mu,\sigma)\leq 0$.
\end{proof}
Note that the conditions on the graph in Theorem \ref{thm:ApplOFJump} are purely combinatorial, whereas the curvature estimate holds for any edge weights and vertex measures.
Combining Theorem \ref{thm:ApplOFJump} and Corollary \ref{cor:lower curvature bound}, we obtain an indirect verification of (\ref{eq:5cycle Curvature}). Actually, we obtain the following more general result.
\begin{coro}\label{cor:5cycle}
Let $N\geq 5$ and $(\mathcal{C}_N,\mu,\sigma)$ be an unweighted cycle with constant vertex measure $\mu=\nu_0\cdot \mathbf{1}_V$. Then we have
$$K_n(\mathcal{C}_N,\mu,\sigma)=0 \,\,\text{for any}\,\,n\geq 2.$$
\end{coro}
\section{Eigenvalue ratios of graphs with $O(1)$ signatures}\label{section:O(1)}
In this section, we restrict our considerations to the setting of a graph $(G,\mu)$ with a signature
\begin{equation*}
\sigma: E^{or}\to O(1)=\{\pm 1\}.
\end{equation*}
We show that Theorem \ref{thm:Buser} can be applied to derive an upper bound for the ratio of the $k$-th eigenvalue $\lambda_k^\sigma$ to the first eigenvalue $\lambda_1^\sigma$ when $(G,\mu,\sigma)$ satisfies $CD^\sigma(0,\infty)$.
Note that the connection Laplacian reduces to an operator on $\ell^2(V,\mathbb{R};\mu)$. That is, for any real function $f:V\rightarrow \mathbb{R}$ and any vertex $x\in V$, we have
\begin{equation}\label{eq:signed Laplacian}
\Delta^\sigma f(x):=\frac{1}{\mu(x)}\sum_{y,y\sim x}w_{xy}(\sigma_{xy}f(y)-f(x))\in \mathbb{R}.
\end{equation}
The eigenvalues of $\Delta^\sigma$ can be listed as
\begin{equation*}
0\leq \lambda_1^\sigma\leq\cdots\leq \lambda_k^\sigma\leq\cdots\leq \lambda_N^\sigma\leq 2D_G^{non}.
\end{equation*}
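For experiments with the estimates of this section, the operator \eqref{eq:signed Laplacian} is conveniently assembled as a matrix. The following Python sketch does this for a hypothetical edge list $(x,y,w_{xy},\sigma_{xy})$ with $\sigma_{xy}\in\{\pm 1\}$ and a given vertex measure $\mu$; the sign is chosen so that the eigenvalues of the assembled matrix are the nonnegative numbers $\lambda_k^\sigma$ listed above. The $5$-cycle example illustrates that $\lambda_1^\sigma=0$ for a balanced signature, while $\lambda_1^\sigma>0$ as soon as the signature is unbalanced (cf. the eigenvalue jump discussed in Section \ref{section:Lichnerowicz}).
\begin{verbatim}
# Sketch: matrix assembly of the signed Laplacian (with the sign flipped so that its
# eigenvalues are the lambda_k^sigma above).  Edges: (x, y, w_xy, sigma_xy), sigma = +-1.
import numpy as np

def signed_laplacian(n, edges, mu):
    L = np.zeros((n, n))
    for (x, y, w, s) in edges:
        L[x, x] += w / mu[x]
        L[y, y] += w / mu[y]
        L[x, y] -= s * w / mu[x]      # -(1/mu(x)) w_xy sigma_xy
        L[y, x] -= s * w / mu[y]
    return L

# 5-cycle, unit weights, mu = deg = 2; flipping one sign makes the signature unbalanced.
n, mu = 5, [2.0] * 5
cycle = [(i, (i + 1) % 5, 1.0, 1) for i in range(5)]
flipped = cycle[:-1] + [(4, 0, 1.0, -1)]
print(np.sort(np.linalg.eigvals(signed_laplacian(n, cycle, mu)).real))    # lambda_1 = 0
print(np.sort(np.linalg.eigvals(signed_laplacian(n, flipped, mu)).real))  # lambda_1 > 0
\end{verbatim}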
In \cite[Theorem 3]{AtayLiu14}, Atay and Liu prove the following estimate.
\begin{thm}[\cite{AtayLiu14}]\label{thm:AtayLiu}
For any graph $(G,\mu,\sigma)$ with $\sigma:E^{or}\to O(1)$ and any natural number $1\leq k\leq N$, we have
\begin{equation}\label{eq:AtayLiu}
h_1^\sigma\leq 16\sqrt{2D_G^{non}}k\frac{\lambda_1^\sigma}{\sqrt{\lambda_k^\sigma}}.
\end{equation}
\end{thm}
This result is an extension of the so-called \emph{improved Cheeger inequality} due to Kwok~et~al.~\cite{KLLGT2013} for the graph Laplacian $\Delta$. We also mention that in the current case of $O(1)$ signatures, the multi-way Cheeger constants, given in Definition \ref{defn:Cheeger}, have more explicit combinatorial expressions. We refer to \cite{AtayLiu14} for more details.
As an application of Theorem \ref{thm:Buser}, we prove the following eigenvalue ratio estimates.
\begin{thm}\label{thm:EigenRatio}
For any graph $(G,\mu,\sigma)$ with $\sigma:E^{or}\to O(1)$ satisfying $CD^\sigma(0,\infty)$ and any natural number $1\leq k\leq N$, there exists an absolute constant $C$ such that
\begin{equation}\label{eq:eigenRatio}
\lambda_k^\sigma\leq CD_G^{nor}D_G^{non}k^2\lambda_1^\sigma.
\end{equation}
\end{thm}
\begin{proof}
Since the inequality $CD^\sigma(0,\infty)$ is satisfied, we have by Theorem \ref{thm:Buser},
\begin{equation*}
\sqrt{\lambda_1^\sigma}\leq 4\sqrt{(\log 2)D_G^{nor}}h_1^\sigma.
\end{equation*}
Combining this with Theorem \ref{thm:AtayLiu}, we obtain
\begin{equation}\label{eq:o1Buser}
\sqrt{\lambda_1^\sigma}\leq 64\sqrt{2\log 2}\sqrt{D_G^{nor}D_G^{non}}k\frac{\lambda_1^\sigma}{\sqrt{\lambda_k^\sigma}}.
\end{equation}
This implies (\ref{eq:eigenRatio}) immediately.
\end{proof}
A direct corollary is the following Buser type inequality.
\begin{coro}\label{cor:BuserO1}
Let $(G,\mu,\sigma)$ with $\sigma:E^{or}\to O(1)$ satisfy $CD^\sigma(0,\infty)$. Then for all $1\leq k\leq N$, there exists an absolute constant $C$ such that
\begin{equation}\label{eq:O1coro}
\sqrt{\lambda_k^\sigma}\leq C\sqrt{D_G^{non}}D_G^{nor}kh_1^\sigma.
\end{equation}
\end{coro}
\begin{proof}
Combining (\ref{eq:o1Buser}) with Theorem \ref{thm:EigenRatio} leads to this result immediately.
\end{proof}
\begin{remark}
Comparing this result with Theorem \ref{thm:Buser}, the advantage of the estimate (\ref{eq:O1coro}) lies in the fact that $h_1^\sigma\leq h_k^\sigma$ and that the order of $k$ in (\ref{eq:O1coro}) is lower. However, in the estimate (\ref{eq:O1coro}), the orders of the degrees $D_G^{nor}$ and $D_G^{non}$ are higher than in Theorem \ref{thm:Buser}.
\end{remark}
Finally, we observe that Theorem \ref{thm:AtayLiu}, and hence the estimate in Corollary \ref{cor:BuserO1}, cannot hold for general signatures $\sigma: E^{or}\to H$, even in the $1$-dimensional case $H=U(1)$. To explain the reason, let us revisit the example of a signed triangle.
\begin{example}[Signed Triangle]\label{example:section 7} Recall the example of a signed triangle discussed in Section \ref{subsection:triangle}, which carries a $U(1)$ signature $\sigma$ (see Figure \ref{F1}). If $\mathrm{Re}(s)$ tends to $1$, i.e., if the signature on the triangle tends to be balanced, we observe that $\lambda_2^\sigma$ has a positive lower bound (see Figure \ref{FtriangleEigenJump}), while both $\lambda_1^\sigma$ and $h_1^\sigma$ tend to zero, but at different rates (see Figure \ref{FtriangleComp}). In fact, by Theorem \ref{thm:Buser}, we have
\begin{equation}\label{eq:triangleBuser2}
\lambda_1^\sigma\leq 32\log 2(h_1^\sigma)^2.
\end{equation}
Assume that Theorem \ref{thm:AtayLiu} holds in this case for $k=2$. Combining this with (\ref{eq:triangleBuser2}), we obtain $1\leq Ch_1^\sigma$ for some absolute constant $C>0$. This is a contradiction. Hence, Theorem \ref{thm:AtayLiu} cannot hold for more general signatures.
\end{example}
\section{Introduction}
In this paper, we study the final value problem
\begin{equation}
\label{main equation}
\begin{cases}
\frac{\partial^\gamma u}{\partial t^\gamma} = \Delta u & x \in \mathbb{R}^n, \,\,\, t \in (0,T)\\
u(x,T) = g(x) & x \in \mathbb{R}^n,
\end{cases}
\end{equation}
where we aim at recovering the initial distribution $u(\cdot,0)$ given the final distribution $u(\cdot,T)$. In \eqref{main equation}, $\gamma \in (0,1)$ and $\frac{\partial^\gamma }{\partial t^\gamma}$ denotes the Caputo fractional derivative \cite{kilbas2006theory} defined as
\begin{equation*}
\label{Caputo Derivative}
\frac{\partial^\gamma u}{\partial t^\gamma} = \frac{1}{\Gamma(1-\gamma)} \int_0^t \frac{u'(s)}{(t-s)^\gamma} \,\mathrm{d} s,
\end{equation*}
where $\Gamma(\cdot)$ denotes the Gamma function: $\Gamma(z) = \int_0^{+\infty} t^{z-1} e^{-t} \,\mathrm{d} t$.
Time-fractional diffusion equations usually model sub-diffusion processes such as slow and anomalous diffusion processes which fail to be described by classical diffusion models \cite{ balakrishnan1985anomalous, chechkin2005fractional, metzler2000random, podlubnv1999fractional}. Due to the high diversity of such phenomena which are not properly modeled by classical diffusion, time-fractional diffusion problems have gained much attention in the last decades. Beyond applications in diffusion processes, the time-fractional equation \eqref{main equation} has also been applied to image de-blurring \cite{wang2013total}, where the time-fractional derivative allows one to capture the memory effect in image blurring.
It is well-known that the ill-posedness of equation \eqref{main equation} comes from the irreversibility in time of the diffusion equation, which is caused by the strong smoothing property of the forward diffusion. As a result, very small perturbations of the final distribution $u(\cdot,T)$ may cause arbitrarily large errors in the initial distribution $u(\cdot,0)$. Hence, a regularization method is crucial in order to recover stable approximations of the initial distribution $u(\cdot,0)$. In this regard, many regularization methods have been applied to the final value time-fractional diffusion equation. Let us mention the mollification method \cite{van2020mollification, yang2014mollification}, Fourier regularization \cite{xiong2012inverse,yang2015fourier}, the method of quasi-reversibility \cite{liu2010backward}, the Tikhonov method \cite{wang2015optimal}, total variation regularization \cite{wang2013total}, boundary condition regularization \cite{yang2013solving}, the non-local boundary value method \cite{hao2019stability}, and the truncation method \cite{wang2012data}. Yet, the set of regularization methods applied to the backward time-fractional diffusion equation is still rather sparse, especially compared to the set of regularization methods for backward classical diffusion problems.
In this paper, we describe how in the context of regularization of the final value time-fractional diffusion equation \eqref{main equation}, the Fourier regularization \cite{yang2015fourier} and mollification \cite{van2020mollification} are nothing but examples of \textit{approximate-inverse} \cite{louis1990mollifier} regularization. Next, we investigate a regularization technique which yields a better trade-off between stability and accuracy compared to the Fourier regularization \cite{yang2015fourier} and the mollification technique of Van Duc N. et al \cite{van2020mollification}. We consider noisy setting where $u(\cdot,T)$ is approximated by a noisy data $g^\delta$ satisfying
\begin{equation*}
\label{nois data}
|| u(\cdot,T) - g^\delta ||_{L^2(\mathbb{R}^n)} \leq \delta,
\end{equation*}
and we derive order-optimal convergence rates between our approximate solution and the exact solution $u(\cdot,0)$ under classical Sobolev smoothness condition
\begin{equation}
\label{smoothness cond}
u(\cdot,0) \in H^p(\mathbb{R}^n) \quad \text{with} \quad ||u(\cdot,0)||_{H^p(\mathbb{R}^n)} \leq E, \quad p>0.
\end{equation}
We also provide error estimates under the more realistic setting where both the data $u(\cdot,T)$ and the forward diffusion operator are only approximately known. The motivation here is that the Mittag-Leffler function, which plays a major role in the resolution of equation \eqref{main equation}, can only be approximated in practice.
The outline of this article is as follows:
In Section \ref{section regularization}, we discuss the existence of a solution of equation \eqref{main equation} and reformulate the equation into an operator equation of the form $ A u(\cdot,0) = g $ in which $A$ is a bounded linear operator on $L^2(\mathbb{R}^n)$. We present key estimates necessary for the regularization analysis and illustrate the ill-posedness of recovering $u(\cdot,0)$ from $g$. Next, we introduce the framework of regularization, which includes Fourier regularization, mollification, and the approximate inverse. Finally, we introduce our regularization approach.
Section \ref{section error estimate} deals with error estimates and order-optimality of our regularization technique under the smoothness condition \eqref{smoothness cond}. In this section, we derive error estimates between the approximate solution and the exact solution $u(\cdot,0)$ in Sobolev spaces $H^l(\mathbb{R}^n)$ with $l \geq 0$. We also give error estimates for the approximation of early distribution $u(\cdot,t)$ with $t \in (0,T)$. We end the section by presenting analogous error estimates for the case where both the data $g$ and the forward diffusion operator $A$ are only approximately known.
Section \ref{section par choice rule} is devoted to parameter selection rules which is a critical step in the application of a regularization method. Here we propose a Morozov-like a-posteriori parameter choice rule leading to order-optimal convergence rates under smoothness condition \eqref{smoothness cond}. We also present analogous error estimates as that obtained in Section \ref{section error estimate} corresponding to the a-posteriori rule prescribed.
Finally, we study four numerical examples in Section \ref{section numerical experiments} to illustrate the effectiveness of the regularization approach coupled with the parameter choice rule described in Section \ref{section par choice rule}. Moreover, in this Section, we also carry out a numerical convergence rates analysis in order to confirm the theoretical convergence rates given in Section \ref{section error estimate}.
In the sequel, $||f||$ or $||f||_{L^2}$ always refers to the $L^2$-norm of the function $f$ on $\mathbb{R}^n$, $||f||_{H^p}$ denotes the Sobolev norm of $f$ on $\mathbb{R}^n$ and $||| \cdot |||$ denotes the operator norm of a bounded linear mapping. Throughout the paper, $\widehat{f}$ or $\mathcal{F}(f)$ (resp. $\mathcal{F}^{-1}(f)$) denotes the Fourier (resp. inverse Fourier) transform of the function $f$ defined as
\begin{equation*}
\label{def Fourier transform}
\widehat{f}(\xi) = \mathcal{F}(f)(\xi) = \frac{1}{\sqrt{2 \pi}^{n}}
\int_{\mathbb{R}^n} f(x)e^{- i x\cdot\xi} \,\mathrm{d} x, \quad \mathcal{F}^{-1}(f)(x) = \frac{1}{\sqrt{2 \pi}^{n}}
\int_{\mathbb{R}^n} f(\xi)e^{ i x\cdot\xi} \,\mathrm{d} \xi, \quad \xi, x \in \mathbb{R}^n.
\end{equation*}
\section{Regularization}\label{section regularization}
Let us start by the following result about the existence and uniqueness of solution of equation \eqref{main equation}.
\begin{Proposition}
\label{Prop Existence weak solution}
For all $\gamma \in (0,1)$ and $g \in H^2(\mathbb{R}^n)$, Problem \eqref{main equation} admits a unique weak solution $u \in C([0,T],L^2(\mathbb{R}^n)) \cap C((0,T],H^2(\mathbb{R}^n))$. That is, the first equation in \eqref{main equation} holds in $L^2(\mathbb{R}^n)$ for all $t\in (0,T)$ and $u(\cdot,t) \in H^2(\mathbb{R}^n)$ for all $t \in(0,T)$ with
$$
\lim_{t \rightarrow T} || u(\cdot,t) - g ||_{H^2} = 0.
$$
\end{Proposition}
Proposition \ref{Prop Existence weak solution} is merely a generalization of \cite[Lemma 2.2]{yang2015fourier}, where only the case $n=1$ is considered.
The idea of the proof is merely to check that the formal solution defined by \eqref{relation data and solution} is the weak solution.
Now, let us define the framework that we will consider for the regularization of problem \eqref{main equation}. Consider a datum $g \in H^2(\mathbb{R}^n)$. By applying the Fourier transform in \eqref{main equation} with respect to the variable $x$, we get
\begin{equation}
\label{fourier equation}
\begin{cases}
\frac{\partial^\gamma \widehat{u}(\xi,t)}{\partial t^\gamma} = - |\xi|^{2} \widehat{u}(\xi,t) & \xi \in \mathbb{R}^n, \,\,\, t \in (0,T)\\
\widehat{u}(\xi,T) = \widehat{g}(\xi) & \xi \in \mathbb{R}^n.
\end{cases}
\end{equation}
By applying the Laplace transform with respect to variable $t$ in \eqref{fourier equation}, one gets
\begin{equation}
\label{link equation data and sol in fourier domain}
\begin{cases}
\widehat{u}(\xi,t) = \widehat{u}(\xi,0) E_{\gamma,1}(-|\xi|^2 t^\gamma) & \xi \in \mathbb{R}^n \\
\widehat{u}(\xi,T) = \widehat{g}(\xi) & \xi \in \mathbb{R}^n,
\end{cases}
\end{equation}
where $E_{\gamma,1}$ is the Mittag-Leffler function \cite{podlubnv1999fractional} defined as
\begin{equation*}
\label{def mittag-Leffler function}
E_{\gamma,1}(z) = \sum_{k=0}^{+\infty} \frac{z^k}{\Gamma(\gamma k + 1)}, \quad z \in \mathbb{C}.
\end{equation*}
From \eqref{link equation data and sol in fourier domain}, we can deduce the following relation between the solution $u(\cdot,0)$ and the data $g$ in the frequency domain:
\begin{equation}
\label{relation data and solution}
\widehat{u}(\xi,0) = \frac{\widehat{g}(\xi)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)}.
\end{equation}
Moreover, we can also derive the following relation for early distribution $u(\cdot,t)$ with $t \in (0,T)$
\begin{equation}
\label{relation data and early distribution}
\forall t \in (0,T), \quad \widehat{u}(\xi,t) = \frac{E_{\gamma,1}(-|\xi|^2 t^\gamma)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)}\widehat{g}(\xi).
\end{equation}
From equations \eqref{relation data and solution} and \eqref{relation data and early distribution}, we can see that the Mittag-Leffler function $E_{\gamma,1}$ plays an important role in the time-fractional diffusion equation \eqref{main equation}. Hence, let us recall some key estimates on the function $E_{\gamma,1}$ that will be repeatedly used in the sequel.
\begin{Lemma}
\label{Lemma esti Mittag Leffler func}
Let $\gamma \in [\gamma_0,\gamma_1] \subset (0,1)$. Then there exist constants $C_1$ and $C_2$ depending only on $\gamma_0$ and $\gamma_1$ such that
\begin{equation}
\label{key estimate E gamma}
\forall x \leq 0, \quad \frac{C_1}{\Gamma(1-\gamma)} \frac{1}{1-x} \leq E_{\gamma,1} (x) \leq \frac{C_2}{\Gamma(1-\gamma)} \frac{1}{1-x}.
\end{equation}
\end{Lemma}
For a proof of Lemma \ref{Lemma esti Mittag Leffler func}, see \cite[Lemma 2.1]{yang2015fourier}.
From Lemma \ref{Lemma esti Mittag Leffler func}, we can easily derive the next Lemma.
\begin{Lemma}
\label{Lemma E gamma}
Assume $\gamma \in [\gamma_0,\gamma_1] \subset (0,1)$. Then for every $\xi \in \mathbb{R}^n$ and $t \in (0,T]$,
\begin{equation}
\label{bounds E gamma Xi t }
\frac{1}{(1 \vee t^\gamma)} \left( \frac{C_1}{\Gamma(1-\gamma)} \frac{1}{1+|\xi|^2} \right) \leq E_{\gamma,1} (-|\xi|^2 t^\gamma) \leq \frac{1}{(1 \wedge t^\gamma)} \left(\frac{C_2}{\Gamma(1-\gamma)} \frac{1}{1+|\xi|^2} \right).
\end{equation}
and
\begin{equation}
\label{bounds fract E gamma Xi t }
\frac{C_1}{C_2} \leq \frac{E_{\gamma,1} (-|\xi|^2 t^\gamma)}{E_{\gamma,1} (-|\xi|^2 T^\gamma)} \leq \frac{C_2}{C_1} \left( \frac{T}{t}\right)^\gamma.
\end{equation}
In \eqref{bounds E gamma Xi t }, $\vee$ denotes the maximum while $\wedge$ denotes the minimum, that is, $1 \vee t^\gamma = \max \,\,\{1, t^\gamma\}$ and $1 \wedge t^\gamma = \min\,\, \{1, t^\gamma\}$.
\end{Lemma}
From \eqref{relation data and early distribution} and \eqref{bounds fract E gamma Xi t }, we get that for every $t \in (0,T)$,
$
||u(\cdot,t)||_{L^2} \leq (C_2/C_1) \left( T/t\right)^\gamma ||g||_{L^2},
$
which implies that the problem of recovering $u(\cdot,t)$ from $g$ is actually well posed. However, that of recovering $u(\cdot,0)$ is ill-posed. Indeed, from \eqref{relation data and solution} and \eqref{bounds E gamma Xi t }, we get that
\begin{equation}
\label{illus ill-posedness}
|\widehat{u}(\xi,0)| \geq C_\gamma (1 + |\xi|^2) |\widehat{g}(\xi)|, \quad \text{with} \quad C_\gamma = \Gamma(1-\gamma)(1 \wedge T^\gamma)/C_2.
\end{equation}
From \eqref{illus ill-posedness}, we see that very small perturbations in high frequencies in the data $g$ lead to arbitrarily large errors in the solution $u(\cdot,0)$. Therefore, one needs a regularization method to recover stable estimates of $u(\cdot,0)$.
\begin{Remark}
From \eqref{relation data and solution} and \eqref{bounds E gamma Xi t }, we can also derive that
\begin{equation}
\label{middly ill-posedness}
|\widehat{u}(\xi,0)| \leq D_\gamma (1 + |\xi|^2) |\widehat{g}(\xi)|, \quad \text{with} \quad D_\gamma = \Gamma(1-\gamma)(1 \vee T^\gamma)/C_1.
\end{equation}
Estimate \eqref{middly ill-posedness} illustrates the fact that the backward time-fractional diffusion equation is only mildly ill-posed, in contrast to the classical backward diffusion equation, which is exponentially ill-posed. This is actually due to the slow asymptotic decay of the Mittag-Leffler function $E_{\gamma,1}(-|\xi|^2 )$ compared to $\exp(-|\xi|^2) = E_{1,1}(-|\xi|^2)$.
\end{Remark}
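Since $E_{\gamma,1}$ has no elementary closed form for general $\gamma$ (and SciPy, for instance, provides no routine for it), let us sketch one possible way of evaluating it numerically, which will also be convenient in Section \ref{section numerical experiments}. The helper below combines a truncated power series for moderate arguments with the leading terms of the asymptotic expansion $E_{\gamma,1}(-x)\sim\sum_{k\geq 1}(-1)^{k+1}x^{-k}/\Gamma(1-\gamma k)$ as $x\to+\infty$; the truncation orders and the switch point are illustrative choices, and a dedicated algorithm (e.g. Garrappa's) should be preferred in serious computations. For $\gamma=1/2$ the closed form $E_{1/2,1}(-x)=e^{x^2}\operatorname{erfc}(x)$ provides a check, and the last line illustrates the two-sided bound of Lemma \ref{Lemma esti Mittag Leffler func}.
\begin{verbatim}
# Crude evaluation of E_{gamma,1}(-x) for x >= 0 (illustration only).
import numpy as np
from scipy.special import gamma as Gamma, rgamma, erfcx
# rgamma = 1/Gamma (finite at the poles of Gamma), erfcx(x) = exp(x^2) erfc(x)

def ml_neg(x, gam, n_series=150, n_asym=6, switch=4.0):
    if x < switch:                               # truncated power series
        k = np.arange(n_series)
        return float(np.sum((-x) ** k * rgamma(gam * k + 1)))
    k = np.arange(1, n_asym + 1)                 # asymptotic expansion
    return float(np.sum((-1.0) ** (k + 1) * x ** (-k) * rgamma(1 - gam * k)))

gam = 0.5                          # for gamma = 1/2:  E_{1/2,1}(-x) = exp(x^2) erfc(x)
xs = np.array([0.0, 0.5, 2.0, 10.0, 100.0])
vals = np.array([ml_neg(v, gam) for v in xs])
print(np.max(np.abs(vals - erfcx(xs))))               # closed-form check, gamma = 1/2
print(np.round(Gamma(1 - gam) * (1 + xs) * vals, 3))  # stays between C_1 and C_2
\end{verbatim}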
From \eqref{relation data and solution}, we can reformulate equation \eqref{main equation} into an operator equation
\begin{equation}
\label{operator equation}
A\, u(\cdot,0) = g.
\end{equation}
where $A: L^2(\mathbb{R}^n) \rightarrow L^2(\mathbb{R}^n)$ is the linear forward diffusion operator which maps the initial distribution $u(\cdot,0)$ to the final distribution $g$, that is,
\begin{equation}
\label{def operator A pb}
A = \mathcal{F}^{-1} \left( E_{\gamma,1}(-|\xi|^2 T^\gamma) \right) \mathcal{F}.
\end{equation}
In the sequel, given data $g \in L^2(\mathbb{R}^n)$, we aim at recovering $u(\cdot,0) \in L^2(\mathbb{R}^n)$. Let $\varphi$ be a smooth real-valued function in $L^1(\mathbb{R}^n)$ satisfying $\int_{\mathbb{R}^n} \varphi(x) \,\mathrm{d} x =1$. It is well-known that the family of functions $(\varphi_\alpha)_{\alpha>0}$ defined by
\begin{equation*}
\label{def phi beta from phi}
\forall x\in \mathbb{R}^n,\quad \varphi_\alpha(x) := \frac{1}{\alpha^n}\varphi\left(\frac{x}{\alpha}\right),
\end{equation*}
satisfies
\begin{equation}
\label{prop phi beta}
\forall f \in L^2(\mathbb{R}^n), \quad \varphi_\alpha \star f \rightarrow f \quad \text{in} \quad L^2(\mathbb{R}^n) \quad \text{as} \quad \alpha \downarrow 0,
\end{equation}
where $\varphi_\alpha \star f$ is nothing but the convolution of the functions $\varphi_\alpha$ and $f$ defined as
$
\left( \varphi_\alpha \star f \right)(x) = \int_{\mathbb{R}^n} \varphi_\alpha(x-y)f(y) \,\mathrm{d} y.
$
For $\alpha>0$, let $M_\alpha$ be the mollifier operator defined by
\begin{equation}
\label{def oper Cbeta}
\forall \alpha > 0, \,\, \forall f \in L^2(\mathbb{R}^n), \quad M_\alpha f = \varphi_\alpha \star f.
\end{equation}
From \eqref{prop phi beta}, we see that the family of operators $(M_\alpha)_{\alpha>0}$ is an approximation of unity in $L^2(\mathbb{R}^n)$, that is,
\begin{equation}
\label{conver M alpha}
\forall f \in L^2(\mathbb{R}^n), \quad M_\alpha f \rightarrow f \quad \text{in} \quad L^2(\mathbb{R}^n) \quad \text{as} \quad \alpha\downarrow 0 .
\end{equation}
Let $u_\alpha$ be the solution of the equation
\begin{equation}
\label{equation u_alpha}
\begin{cases}
\frac{\partial^\gamma u}{\partial t^\gamma} = \Delta u & x \in \mathbb{R}^n, \,\,\, t \in (0,T)\\
u(x,T) = (M_\alpha g)(x) & x \in \mathbb{R}^n.
\end{cases}
\end{equation}
From \eqref{relation data and solution} and \eqref{relation data and early distribution}, by replacing $g$ by $M_\alpha g$, and using the fact that $\widehat{M_\alpha g}(\xi) = \sqrt{2 \pi}^n \widehat{\varphi_\alpha}( \xi) \widehat{g}(\xi)$, one gets
\begin{equation}
\label{express u alpha in freq domain}
\begin{cases}
\widehat{u_\alpha}(\xi,0) = \frac{\sqrt{2 \pi}^n}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \widehat{\varphi_{\alpha}}(\xi) \widehat{g}(\xi) = \sqrt{2 \pi}^n \widehat{\varphi_{\alpha}}(\xi) \widehat{u}(\xi,0) \\
\widehat{u_\alpha}(\xi,t) = \sqrt{2 \pi}^n \frac{E_{\gamma,1}(-|\xi|^2 t^\gamma)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \widehat{\varphi_{\alpha}}(\xi) \widehat{g}(\xi) = \sqrt{2 \pi}^n \widehat{\varphi_{\alpha}}(\xi) \widehat{u}(\xi,t), & t \in (0,T).
\end{cases}
\end{equation}
which yields that
\begin{equation}
\label{eq 00}
\forall t \in [0,T] \quad u_\alpha(\cdot,t) = M_\alpha u(\cdot,t).
\end{equation}
\begin{Proposition}
\label{Prop Reg method}
Assume that there exists a function $\varrho: \mathbb{R}_+^* \rightarrow \mathbb{R}_+$ such that the mollifier kernel $\varphi$ verifies
\begin{equation}
\label{cond reg method}
\forall \alpha>0, \,\, \forall \xi \in \mathbb{R}^n, \quad \frac{\widehat{\varphi}(\alpha \xi) }{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \leq \varrho(\alpha),
\end{equation}
Then the family $(u_\alpha)_{\alpha>0}$ of solutions of equation \eqref{equation u_alpha} defines a regularization method for problem \eqref{main equation} in $L^2(\mathbb{R}^n)$.
\end{Proposition}
\begin{Proof}
From \eqref{express u alpha in freq domain}, noticing that $\widehat{\varphi_\alpha}(\xi) = \widehat{\varphi}(\alpha \xi)$ and using the fact that the function $E_{\gamma,1}$ is increasing on $\mathbb{R}_{-}$ with $E_{\gamma,1}(0) = 1$, we can see that \eqref{cond reg method} implies that for every $t \in [0,T]$ and $\alpha >0$, the mapping $R_{\alpha,t}: L^2(\mathbb{R}^n) \rightarrow L^2(\mathbb{R}^n)$ which maps the data $g$ to $ u_\alpha(\cdot,t)$ is bounded with $|||R_{\alpha,t}||| \leq \sqrt{2 \pi}^n\varrho(\alpha)$. Moreover, from \eqref{conver M alpha} and \eqref{eq 00}, we deduce that for every $t \in [0,T]$, $u_\alpha(\cdot,t)$ converges to $u(\cdot,t)$ in $L^2(\mathbb{R}^n)$ as $\alpha$ goes to $0$.
\end{Proof}\\
From Proposition \ref{Prop Reg method}, we can see that choosing a kernel $\varphi$ which satisfies condition \eqref{cond reg method} allows one to define a regularization method for equation \eqref{main equation}. Now, let us show that the family of regularization methods defined in this way actually coincides with the \textit{approximate-inverse} introduced by Louis and Maass \cite{louis1990mollifier}.
From the first equation in \eqref{express u alpha in freq domain}, we can derive that
\begin{equation}
\label{exp u alpha 0}
u_\alpha(x,0) = \int_{\mathbb{R}^n} e^{i x \xi} \frac{\widehat{\varphi_{\alpha}}(\xi) \widehat{g}(\xi)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \,\mathrm{d} \xi .
\end{equation}
From \eqref{eq 00} and \eqref{exp u alpha 0}, we can reformulate $u_\alpha(x,0)$ as follows
\begin{equation}
\label{simil approx inverse}
\begin{cases}
\vspace{0.25cm}
\forall x \in \mathbb{R}^n, \quad u_\alpha(x,0) = \scal{e_\alpha(x,\cdot)}{u(\cdot,0)}_{L^2}, & \text{with} \quad e_\alpha(x,y) = \varphi_\alpha(x-y) \\
\forall x \in \mathbb{R}^n, \quad u_\alpha(x,0) = \scal{v_{x,\alpha}}{g}_{L^2}, & \text{with} \quad v_{x,\alpha} = \mathcal{F}^{-1} \left( \frac{e^{i x \xi} \,\widehat{\varphi_{\alpha}}(\xi)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \right) .
\end{cases}
\end{equation}
Moreover, one can easily check that $v_{x,\alpha}$ is nothing but the solution of the adjoint equation
$$
A^* f = e_\alpha(x,\cdot).
$$
Hence, we deduce that the solution $u_\alpha$ of equation \eqref{equation u_alpha} actually corresponds to the \textit{approximate-inverse} \cite{louis1990mollifier} regularized solution of equation \eqref{operator equation}, the pair $(v_{x,\alpha},e_\alpha)$ being what is usually called a \textit{reconstruction kernel -- mollifier} pair.
The regularization setting we just described encompasses many regularization methods that appear separately in the literature on the regularization of the final value time-fractional diffusion equation, each regularization method corresponding to a particular choice of the mollifier kernel $\varphi$.
For the Fourier regularization \cite{yang2015fourier} (where $n =1$), we have
\begin{equation}
\label{app sol Fourier reg}
\widehat{u_{\xi_{max}}}(\xi,t) = \chi_{[-\xi_{max},\xi_{max}]}(\xi) \frac{E_{\gamma,1}(-|\xi|^2 t^\gamma)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \widehat{g}(\xi),
\end{equation}
where $\chi_{\Omega}$ denotes the characteristic function of the set $\Omega$ equal to $1$ on $\Omega$ and $0$ elsewhere. By comparing \eqref{app sol Fourier reg} and \eqref{express u alpha in freq domain}, we readily get that
\begin{equation}
\label{ee 11}
\alpha = 1/\xi_{max}, \quad \text{and} \quad \varphi(x) = \mathcal{F}^{-1} \left(\frac{1}{ \sqrt{2 \pi}} \chi_{[-1,1]} \right) = \frac{\sin(x)}{\pi x}.
\end{equation}
In this case, condition \eqref{cond reg method} merely reads
$$
\frac{\chi_{[-1,1]}(\xi/\xi_{max})}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \leq \varrho\left( \frac{1}{\xi_{max}} \right).
$$
Using \eqref{bounds E gamma Xi t }, we can see that this condition is satisfied with
$
\varrho(\alpha) = C (1 + 1/\alpha^2)$ where $C = (1\vee T^\gamma)\Gamma(1-\gamma) /C_1$.
For the mollification method of N. Van Duc et al. \cite{van2020mollification}, the mollifier operator $M_\alpha$ is denoted by $S_\nu$, where $S_\nu$ is the convolution with the so-called \textit{Dirichlet kernel} $D_\nu$ defined as
$$
D_\nu(x) = \frac{1}{\pi^n} \prod_{j=1}^n \frac{\sin(\nu x_j)}{x_j}, \quad \text{with} \quad \nu >0 \,\, \text{and} \,\, x \in \mathbb{R}^n.
$$
Hence we deduce that this merely corresponds to
\begin{equation}
\label{ee 22}
\alpha = 1/\nu, \quad \text{and} \quad \varphi(x) = \prod_{j=1}^n \frac{\sin(x_j)}{ \pi x_j}.
\end{equation}
Given that the Fourier transform of the kernel $D_ \nu$ is given by
\begin{equation}
\label{FT kernel Van duc}
\mathcal{F} \left(D_\nu \right)(\xi) = \chi_{\Lambda}(\xi), \quad \text{with} \quad\Lambda = \{x \in \mathbb{R}^n \, :\, |x_j| \leq \nu, \, \,\, j = 1,...,n \},
\end{equation}
we see that condition \eqref{cond reg method} merely reads
$$
\frac{\chi_{\Lambda}(\xi)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \leq \varrho\left( \frac{1}{\nu} \right),
$$
which is fulfilled with
$\varrho(\alpha) = C (1 + n/\alpha^2)$ where $C = (1\vee T^\gamma)\Gamma(1-\gamma) /C_1$, since $|\xi|^2 \leq n\nu^2$ on $\Lambda$.
By the way, from \eqref{ee 11} and \eqref{ee 22}, we can see that the Fourier regularization and the mollification approach of N. Van Duc et al. actually coincide, the latter approach being a generalization of the former to the $n$-dimensional case.
From \eqref{FT kernel Van duc}, we can conclude that both regularization approaches are nothing but truncation methods. That is, the regularization is done by merely throwing away the high frequency components of the data, which are responsible for the ill-posedness, and keeping the remaining frequency components unchanged. In other words, these two methods can be regarded as \textit{spectral cut-off} methods. It is important to notice that though high frequency components are responsible for the ill-posedness, they nevertheless still carry non-negligible information on the sought solution. Therefore, it is desirable to apply a regularization which does not suppress high frequency components but which applies more regularization to these components than to low frequency components. Let us point out that mere truncation of high frequency components usually entails Gibbs phenomena and oscillations of the approximate solution, which should be avoided as far as possible. This is actually possible by choosing a kernel $\varphi$ whose Fourier transform is supported on the whole domain $\mathbb{R}^n$.
From now on, let us consider a mollifier kernel $\varphi$ defined by
\begin{equation}
\label{def of our molllif kernel}
\widehat{\varphi}(\xi) = \frac{1}{\sqrt{2 \pi}^n} \exp(- \tau|\xi|^s),\,\,\, \tau >0,\,\,\, s>0, \quad \text{i.e.} \quad \varphi = \frac{1}{\sqrt{2 \pi}^n} \mathcal{F}^{-1} \left( \exp(- \tau|\xi|^s) \right),
\end{equation}
where $\tau$ and $s$ are two free positive parameters.
From \eqref{def of our molllif kernel}, we can see that $\varphi \in L^1(\mathbb{R}^n)\cap L^2(\mathbb{R}^n)$ and satisfies $\int_{\mathbb{R}^n} \varphi(x) \,\mathrm{d} x = \sqrt{2 \pi}^n \widehat{\varphi}(0) = 1$.
\begin{Lemma}
\label{Lemma bound our kernel}
Let $b$ and $d$ be two positive numbers and consider the function $f_{b,d}(x) = (1+ x) e^{- b x^d}$, $x \geq 0$. Then there exists a constant $C$ depending only on $d$ such that
\begin{equation}
\label{key bound}
\sup_{x\geq 0} f_{b,d}(x) \leq \frac{C}{b^{1/d}} \quad \text{as} \quad b \downarrow 0.
\end{equation}
\end{Lemma}
The proof of Lemma \ref{Lemma bound our kernel} is deferred to the appendix. Lemma \ref{Lemma bound our kernel} will help us prove that the kernel $\varphi$ given by \eqref{def of our molllif kernel} allows us to define a regularization method for equation \eqref{main equation}.
\begin{Proposition}
\label{Prop our reg method}
Let $M_\alpha$ be the mollifier operator defined by \eqref{def oper Cbeta} with the kernel $\varphi$ given in \eqref{def of our molllif kernel} with $\tau$ and $s$ being two positive numbers. Then the family $(u_\alpha)_{\alpha>0}$ of solutions of equation \eqref{equation u_alpha} defines a regularization method for equation \eqref{main equation}.
\end{Proposition}
\begin{Proof}
In view of Proposition \ref{Prop Reg method}, it suffices to prove that the kernel $\varphi$ given in \eqref{def of our molllif kernel} verifies \eqref{cond reg method}. By considering \eqref{def of our molllif kernel} and estimate \eqref{bounds E gamma Xi t }, we have
\begin{equation}
\label{ee 0011}
\frac{\widehat{\varphi}(\alpha \xi) }{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \leq C (1 + |\xi|^2) \exp(-\tau \alpha^s |\xi|^s), \quad \text{with} \quad C = \Gamma(1-\gamma)(1 \vee T^\gamma) /\sqrt{2 \pi}^n C_1.
\end{equation}
The right hand side in \eqref{ee 0011} is nothing but $ C f_{b,d}(|\xi|^2)$ with $b = \tau \alpha^{s}$ and $d = s/2$. Hence from \eqref{key bound}, we deduce that there exists a constant $C$ independent on $\alpha$ such that
\begin{equation*}
\forall \xi \in \mathbb{R}^n, \quad \frac{\widehat{\varphi}(\alpha \xi) }{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \leq \frac{C}{\alpha^2} \quad \text{as} \quad \alpha \rightarrow 0,
\end{equation*}
whence \eqref{cond reg method} with $\varrho(\alpha) = C/\alpha^2$.
\end{Proof}\\
\begin{Remark}
By defining the mollifier kernel $\varphi$ as in \eqref{def of our molllif kernel}, we can see that the regularization technique induces a more suitable treatment of frequency components. Indeed, with our choice of mollifier kernel, the amount of regularization smoothly depends on the magnitude of the frequency components: The higher the frequency, the stronger the regularization applied, and similarly, the lower the frequency, the lower the regularization applied. This is actually desirable for a regularization method given that as the frequency gets higher, the noise in the frequency components gets much more amplified, and as the frequency gets lower, the noise in the frequency components gets less and less amplified.
\end{Remark}
\begin{Remark}
From Proposition \ref{Prop Reg method}, we can see that the family of functions $(\varphi_s)_{s>0}$ defined by \eqref{def of our molllif kernel} allows one to define a family of regularization methods, each regularization method being determined by the choice of the free parameter $s>0$. For instance, the choice $s = 1$ amounts to considering a Cauchy convolution kernel, while $s=2$ corresponds to a Gaussian convolution kernel.
\end{Remark}
Let us end this section with the following lemma, which gives rates of convergence of the mollifier operator $M_\alpha$ corresponding to the kernel $\varphi$ defined by \eqref{def of our molllif kernel} on the Sobolev spaces $H^p(\mathbb{R}^n)$ with $p>0$.
\begin{Lemma}
\label{Lemma rate converg mollifier operator}
Let $p>0$ and $M_\alpha$ be the mollifier operator defined by \eqref{def oper Cbeta} with the mollifier kernel $\varphi$ given by \eqref{def of our molllif kernel}. Then
\begin{equation}
\label{speed conv mollifier operator}
\forall f \in H^p(\mathbb{R}^n), \quad || f - M_\alpha f||_{L^2} \leq \tau^{\frac{p \wedge s}{s} }\alpha^{p \wedge s} ||f||_{H^p}.
\end{equation}
\end{Lemma}
\begin{Proof}
Let $f \in H^p(\mathbb{R}^n)$.
If $p<s$, using Parseval identity, we have
\begin{eqnarray*}
\norm{ f - M_\alpha f}_{L^2} & = & \norm{ [1 - \sqrt{2 \pi}^n \widehat{\varphi}(\alpha \xi)] \widehat{f}(\xi) }_{L^2} \\
& = & \norm{ [ 1- \exp(-\tau (\alpha |\xi|)^s)]^{p/s} \widehat{f}(\xi) \times [1- \exp(-\tau (\alpha |\xi|)^s)]^{1 - p/s} }_{L^2} \\
& \leq & \norm{ [ 1- \exp(-\tau (\alpha |\xi|)^s)]^{p/s} \widehat{f}(\xi)}_{L^2} \\
& \leq & (\tau \alpha ^s)^{p/s} \norm{(|\xi|^s)^{p/s}\widehat{f}(\xi)}_{L^2} \leq \tau^{p/s} \alpha^{p} \norm{f}_{H^p}.
\end{eqnarray*}
If $p\geq s$, $
\norm{ f - M_\alpha f}_{L^2} = \norm{ [ 1- e^{-\tau (\alpha |\xi|)^s}] \widehat{f}(\xi)}_{L^2} \leq \tau \alpha ^s \norm{|\xi|^s \widehat{f}(\xi)}_{L^2} \leq \tau \alpha^{s} \norm{f}_{H^s} \leq \tau \alpha^{s} \norm{f}_{H^p}.
$
\end{Proof}
\section{Error estimates}\label{section error estimate}
Henceforth, $\varphi$ denotes the mollifier kernel defined by \eqref{def of our molllif kernel} and $g^\delta \in L^2(\mathbb{R}^n)$ denotes noisy data satisfying the noise level condition
\begin{equation}
\label{noise level cond on data}
|| g - g^\delta ||_{L^2} \leq \delta,
\end{equation}
where $g = u(\cdot,T)$ is the exact final distribution. Let us introduce the regularized solution $u_\alpha^\delta$ corresponding to the noisy data $g^\delta$ as the solution of equation
\begin{equation}
\label{equation u_alpha delta}
\begin{cases}
\frac{\partial^\gamma u}{\partial t^\gamma} = \Delta u & x \in \mathbb{R}^n, \,\,\, t \in (0,T)\\
u(x,T) = (M_\alpha g^\delta)(x) & x \in \mathbb{R}^n,
\end{cases}
\end{equation}
Equivalently, we can define $u_\alpha^\delta$ in the frequency domain by
\begin{equation}
\label{express u alpha in freq domain delta}
\widehat{u_\alpha^\delta}(\xi,t) = \sqrt{2 \pi}^n \frac{E_{\gamma,1}(-|\xi|^2 t^\gamma)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \widehat{\varphi}(\alpha\xi) \widehat{g^\delta}(\xi) , \quad t \in [0,T].
\end{equation}
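In practice, \eqref{express u alpha in freq domain delta} can be evaluated directly with the FFT. The sketch below is an illustration of ours for the reconstruction at $t=0$; the helper \texttt{e\_gamma\_1} is a stand-in for any accurate numerical routine evaluating the Mittag-Leffler function $E_{\gamma,1}$ (such routines are discussed, e.g., in \cite{podlubnv1999fractional}), and is only implemented here for the classical case $\gamma=1$, where $E_{1,1}(z)=e^z$.
\begin{verbatim}
import numpy as np

def e_gamma_1(z, gamma):
    # Mittag-Leffler function E_{gamma,1}(z). Implemented only for gamma = 1,
    # where E_{1,1}(z) = exp(z); for 0 < gamma < 1 plug in an accurate evaluator
    # (hypothetical placeholder here).
    if gamma == 1.0:
        return np.exp(z)
    raise NotImplementedError("supply a Mittag-Leffler evaluator for gamma < 1")

def reconstruct_initial(g_delta, L, gamma, T, alpha, tau=0.5, s=4.0):
    # u_alpha^delta(.,0) from noisy final data g_delta (N x N grid on [-L,L]^2):
    # u_hat = exp(-tau (alpha|xi|)^s) * g_delta_hat / E_{gamma,1}(-|xi|^2 T^gamma).
    N = g_delta.shape[0]
    xi = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * L / N)
    XI1, XI2 = np.meshgrid(xi, xi, indexing="ij")
    xi2 = XI1 ** 2 + XI2 ** 2
    mollifier = np.exp(-tau * (alpha * np.sqrt(xi2)) ** s)
    u_hat = mollifier * np.fft.fft2(g_delta) / e_gamma_1(-xi2 * T ** gamma, gamma)
    return np.real(np.fft.ifft2(u_hat))
\end{verbatim}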
It is well known that without a smoothness condition on the exact solution $u(\cdot,0)$ (or on the exact data $g$), it is impossible to exhibit a rate of convergence of the regularized solution towards the exact solution \cite{schock1985approximate}. Henceforth, we consider the following classical Sobolev smoothness condition:
\begin{equation}
\label{smoothness cond on u(cdot,0)}
u(\cdot,0) \in H^p(\mathbb{R}^n),\quad \norm{u(\cdot,0)}_{H^p} \leq E, \quad \text{with} \quad p>0, \,\,E>0.
\end{equation}
Before presenting the main results of this section, let us state some lemmas which will be useful in the sequel.
\begin{Lemma}
\label{Lemma bound H p+2 Hp}
Let $p\geq 0$, and $v$ be a solution of equation $\frac{\partial^\gamma v}{\partial t^\gamma} = \Delta v$ on $\mathbb{R}^n$. If $v(\cdot,0) \in H^p(\mathbb{R}^n)$, then for every $t \in (0,T]$, $v(\cdot,t) \in H^{p+2}(\mathbb{R}^n)$ and
\begin{equation}
\label{bound H p+2 Hp }
|| v(\cdot,t)||_{H^{p+2}} \leq \frac{C}{1 \wedge t^\gamma } || v(\cdot,0)||_{H^{p}}, \quad \text{with} \quad C = \frac{C_2}{\Gamma(1-\gamma)}.
\end{equation}
\end{Lemma}
\begin{Proof}
The proof follows readily by applying Parseval identity and estimate \eqref{bounds E gamma Xi t } to equation $\widehat{v}(\xi,t) = \widehat{v}(\xi,0) E_{\gamma,1}(-|\xi|^2 t^\gamma)$ which links $v(\cdot,0)$ and $v(\cdot,t)$ in the frequency domain.
\end{Proof}\\
The next lemma illustrates the fact that the Sobolev smoothness condition \eqref{smoothness cond on u(cdot,0)} is nothing but a Hölder source condition.
\begin{Lemma}
\label{Lemma link smoothness cond and holder source condition}
Let $p > 0$, let $u$ be the solution of problem \eqref{main equation} and let $A$ be the forward diffusion operator defined in \eqref{def operator A pb}. The smoothness condition $u(\cdot,0) \in H^p(\mathbb{R}^n)$ is equivalent to the Hölder source condition $u(\cdot,0) = (A^*A)^{p/4} w$ with $w \in L^2(\mathbb{R}^n)$ satisfying
\begin{equation}
\label{estima norm w and u}
\left(\frac{\Gamma(1-\gamma) (1 \wedge T^\gamma)}{C_2} \right)^{p/2} || u(\cdot,0)||_{H^p} \leq ||w ||_{L^2} \leq \left(\frac{\Gamma(1-\gamma) (1 \vee T^\gamma)}{C_1} \right)^{p/2} || u(\cdot,0)||_{H^p}
\end{equation}
\end{Lemma}
\begin{Proof}
For $u(\cdot,0) \in H^p(\mathbb{R}^n)$, formally define $w$ in the frequency domain by
$$
\widehat{w}(\xi) = E_{\gamma,1}(-|\xi|^2 T^\gamma)^{-p/2} \widehat{u}(\xi,0), \quad \Longleftrightarrow \quad \widehat{u}(\xi,0) = \left( E_{\gamma,1}(-|\xi|^2 T^\gamma)^2\right)^{p/4} \widehat{w}(\xi)
$$
From \eqref{def operator A pb}, we can verify that the above definition of $w$ from $u(\cdot,0)$ is merely a reformulation of the equation $u(\cdot,0) = (A^*A)^{p/4} w$ in the frequency domain. Next, we can check that $w$ is well defined and belongs to $L^2(\mathbb{R}^n)$. Finally, estimate \eqref{estima norm w and u} is deduced from \eqref{bounds E gamma Xi t }.
\end{Proof}
\begin{Remark}
From Lemma \ref{Lemma link smoothness cond and holder source condition}, we can deduce that the order optimal convergence rate under smoothness condition \eqref{smoothness cond on u(cdot,0)} is nothing but $C E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}$ with $C \geq 1$ independent of $E$ and $\delta$.
\end{Remark}
The next lemma, which generalizes \cite[Lemma 3]{van2020mollification}, will be useful in the sequel for establishing Sobolev norm error estimates.
\begin{Lemma}
\label{Lemma yy}
Let $p\geq 0$ and $v$ be a solution of equation $\frac{\partial^\gamma v}{\partial t^\gamma} = \Delta v$ on $\mathbb{R}^n$. If $v(\cdot,0) \in H^p(\mathbb{R}^n)$ then
\begin{equation}
\label{key est H l 1 yy}
\forall l \in [0,p], \quad \forall t \in (0,T], \quad \norm{v(\cdot,t)}_{H^{l+2}} \leq \frac{C(\gamma)}{(1 \wedge t^\gamma)} \norm{v(\cdot,0)}_{H^p}^{\frac{2+l}{p+2}} \norm{v(\cdot,T)}_{L^2}^{\frac{p-l}{p+2}}.
\end{equation}
where
$
C(\gamma) = (1 \vee T^\gamma)C_2/C_1$.
Moreover,
\begin{equation}
\label{key est H l 0 yy}
\forall l \in [0,p], \quad \forall t \in [0,T], \quad \norm{v(\cdot,t)}_{H^l} \leq \bar{C}(\gamma) \norm{v(\cdot,0)}_{H^p}^{\frac{2+l}{p+2}} \norm{v(\cdot,T)}_{L^2}^{\frac{p-l}{p+2}},
\end{equation}
where $\bar{C}(\gamma) = \Gamma(1-\gamma)(1 \vee T^\gamma) /C_1$.
\end{Lemma}
\begin{Proof}
Let $l \in [0,p]$ and $v$ be a solution of equation $\frac{\partial^\gamma v}{\partial t^\gamma} = \Delta v$ with $v(\cdot,0) \in H^p(\mathbb{R}^n)$. For $t \in (0,T]$, using Hölder inequality, we have
\begin{eqnarray}
\label{eq 00xx11}
\norm{v(\cdot,t)}_{H^{l+2}}^2 & = & \int_{\mathbb{R}^n} (1+ |\xi|^2)^{l+2} \mod{E_{\gamma,1}(-|\xi|^2 t^\gamma) \widehat{v}(\xi,0)}^2 \,\mathrm{d} \xi \nonumber \\
& \leq & \frac{C_\gamma^2}{(1 \wedge t^\gamma)^2} \int_{\mathbb{R}^n} (1+ |\xi|^2)^{l} |\widehat{v}(\xi,0)|^2 \,\mathrm{d} \xi \quad \text{with} \quad C_\gamma = \frac{C_2}{\Gamma(1-\gamma)} \quad\text{using} \quad \eqref{bounds E gamma Xi t }\nonumber \\
& = & \frac{C_\gamma^2}{(1 \wedge t^\gamma)^2} \int_{\mathbb{R}^n} \left( (1+ |\xi|^2)^{\frac{p(l+2)}{p+2}} |\widehat{v}(\xi,0)|^{\frac{2(l+2)}{p+2}} \right) \left( (1+ |\xi|^2)^{\frac{2(l-p)}{p+2}} |\widehat{v}(\xi,0)|^{\frac{2(p-l)}{p+2}} \right) \,\mathrm{d} \xi \nonumber\\
& \leq & \frac{C_\gamma^2}{(1 \wedge t^\gamma)^2} \left( \int_{\mathbb{R}^n} (1+ |\xi|^2)^{p} |\widehat{v}(\xi,0)|^{2} \,\mathrm{d} \xi \right)^{\frac{l+2}{p+2}} \left( \int_{\mathbb{R}^n} (1+ |\xi|^2)^{-2} |\widehat{v}(\xi,0)|^{2} \,\mathrm{d} \xi \right)^{\frac{p-l}{p+2}} \\
& \leq & \left(\frac{C_2}{C_1} \frac{1 \vee T^\gamma}{(1 \wedge t^\gamma)}\right)^2 \norm{v(\cdot,0)}_{H^p}^{\frac{2(l+2)}{p+2}} \left( \int_{\mathbb{R}^n} \mod{ E_{\gamma,1}(-|\xi|^2 T^\gamma) \widehat{v}(\xi,0)}^{2} \,\mathrm{d} \xi \right)^{\frac{p-l}{p+2}} \quad\text{using} \quad \eqref{bounds E gamma Xi t }\nonumber \\
& = &\left(\frac{C_2}{C_1} \frac{1 \vee T^\gamma}{(1 \wedge t^\gamma)}\right)^2 \norm{v(\cdot,0)}_{H^p}^{\frac{2(l+2)}{p+2}} \norm{v(\cdot,T)}_{L^2}^{\frac{2(p-l)} {p+2}},\nonumber
\end{eqnarray}
whence \eqref{key est H l 1 yy}.
On the other hand, for every $t \in [0,T]$ and $l \in [0,p]$, we have
\begin{eqnarray*}
\label{eq 00xx22}
\norm{v(\cdot,t)}_{H^{l}}^2 & = & \int_{\mathbb{R}^n} (1+ |\xi|^2)^{l} \mod{E_{\gamma,1}(-|\xi|^2 t^\gamma) \widehat{v}(\xi,0)}^2 \,\mathrm{d} \xi \nonumber \\
& \leq & \int_{\mathbb{R}^n} (1+ |\xi|^2)^{l} \mod{\widehat{v}(\xi,0)}^2 \,\mathrm{d} \xi \nonumber \\
& \leq & \left( \int_{\mathbb{R}^n} (1+ |\xi|^2)^{p} |\widehat{v}(\xi,0)|^{2} \,\mathrm{d} \xi \right)^{\frac{l+2}{p+2}} \left( \int_{\mathbb{R}^n} (1+ |\xi|^2)^{-2} |\widehat{v}(\xi,0)|^{2} \,\mathrm{d} \xi \right)^{\frac{p-l}{p+2}} \nonumber \quad \text{from} \quad \eqref{eq 00xx11}\\
& \leq & \left(\frac{(1 \vee T^\gamma) \Gamma(1-\gamma)}{C_1}\right)^2 \norm{v(\cdot,0)}_{H^p}^{\frac{2(l+2)}{p+2}} \left( \int_{\mathbb{R}^n} \mod{ E_{\gamma,1}(-|\xi|^2 T^\gamma) \widehat{v}(\xi,0)}^{2} \,\mathrm{d} \xi \right)^{\frac{p-l}{p+2}} \quad\text{using} \quad \eqref{bounds E gamma Xi t }\nonumber \\
& = &\left(\frac{(1 \vee T^\gamma) \Gamma(1-\gamma)}{C_1}\right)^2\norm{v(\cdot,0)}_{H^p}^{\frac{2(l+2)}{p+2}} \norm{v(\cdot,T)}_{L^2}^{\frac{2(p-l)}{p+2}},
\end{eqnarray*}
whence \eqref{key est H l 0 yy}.
\end{Proof}\\
We are ready to state the first main result of this section, which exhibits convergence rates of the reconstruction error $u(\cdot,0) - u_\alpha^\delta(\cdot,0)$ in Sobolev spaces $H^l(\mathbb{R}^n)$ with $l \geq 0$.
\begin{Theorem}
\label{Theorem 1}
Assume that the solution $u$ of problem \eqref{main equation} satisfies the smoothness condition \eqref{smoothness cond on u(cdot,0)}. Consider a noisy approximation $g^\delta$ satisfying \eqref{noise level cond on data} and let $u_\alpha^\delta$ be the regularized solution defined by \eqref{express u alpha in freq domain delta}. Then for the a-priori selection rule $\alpha(\delta) = (\delta/E)^{\frac{1}{p+2}}$, we have
\begin{equation}
\label{first conv rate}
\forall l \in [0,p], \quad \text{such that} \quad p-l \leq s, \quad \norm{u(\cdot,0)- u_{\alpha(\delta)}^\delta(\cdot,0)}_{H^l} \leq C E^{\frac{2+l}{p+2}} \delta^{\frac{p-l}{p+2}},
\end{equation}
where $C$ is a constant independent of $\delta$ and $E$.
\end{Theorem}
\begin{Proof}
Using the Parseval identity, we have
\begin{eqnarray}
\label{est reg error}
\norm{u(\cdot,0)- u_\alpha(\cdot,0)}_{H^l} = \norm{\mod{1 - \sqrt{2 \pi}^n\widehat{\varphi}(\alpha \xi)} (1+|\xi|^2)^{l/2} \mod{\widehat{u}(\xi,0)} }_{L^2} = \norm{\tilde{u} - M_\alpha \tilde{u}}_{L^2},
\end{eqnarray}
where
$
\tilde{u} = \mathcal{F}^{-1} \left((1+ |\xi|^2)^{l/2} \widehat{u}(\xi,0) \right).
$
Since $u(\cdot,0) \in H^p(\mathbb{R}^n)$, we have $\tilde{u} \in H^{p-l}(\mathbb{R}^n)$. Applying \eqref{speed conv mollifier operator} to \eqref{est reg error}, we deduce that if $p - l \leq s$, then
\begin{equation}
\label{est reg erro fin}
\norm{u(\cdot,0)- u_\alpha(\cdot,0)}_{H^l} \leq \tau^{\frac{p-l}{s} }\alpha^{p -l} ||\tilde{u}||_{H^{p-l}} = \tau^{\frac{p-l}{s} }\alpha^{p -l} ||u(\cdot,0)||_{H^{p}} \leq \tau^{\frac{p-l}{s} }\alpha^{p -l} E.
\end{equation}
On the other hand, using \eqref{bounds E gamma Xi t } and \eqref{noise level cond on data}, we have
\begin{eqnarray}
\label{est data erro }
\norm{u_\alpha(\cdot,0)- u_\alpha^\delta(\cdot,0)}_{H^l} & = & \norm{(1+ |\xi|^2)^{l/2}\sqrt{2 \pi}^n \widehat{\varphi}(\alpha \xi) \frac{\widehat{g}(\xi) - \widehat{g^\delta}(\xi)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} }_{L^2} \nonumber \\
& \leq & \delta (1 \vee T^\gamma) \frac{\Gamma(1-\gamma)}{C_1}\norm{(1+ |\xi|^2)^{1 + l/2} \exp(-\tau (\alpha |\xi|)^s ) }_{\infty}.
\end{eqnarray}
But $(1+ |\xi|^2)^{1 + l/2} \exp(-\tau (\alpha |\xi|)^s ) = \left( f_{b,d}(|\xi|^2) \right)^{1+l/2}$ with $d = s/2$ and $b = \frac{\tau \alpha^s}{1+ l/2} \rightarrow 0$ as $\alpha \rightarrow 0$. Hence applying Lemma \ref{Lemma bound our kernel}, we get that there exists a constant $C$ independent of $\alpha$, such that
\begin{equation}
\label{eq xxrr}
\forall \xi \in \mathbb{R}^n, \quad (1+ |\xi|^2)^{1 + l/2} \exp(-\tau (\alpha |\xi|)^s ) \leq (C /\alpha^{2})^{1+l/2}.
\end{equation}
Applying \eqref{eq xxrr} to \eqref{est data erro }, we deduce the existence of a constant $\tilde{C}$ independent of $\alpha$, $\delta$ and $E$ such that
\begin{equation}
\label{est data err fin}
\norm{u_\alpha(\cdot,0)- u_\alpha^\delta(\cdot,0)}_{H^l} \leq \tilde{C} \frac{\delta}{\alpha^{2+l}}.
\end{equation}
Finally from \eqref{est reg erro fin} and \eqref{est data err fin}, we deduce that
\begin{equation}
\label{bound recons error H l}
\norm{u(\cdot,0)- u_\alpha^\delta(\cdot,0)}_{H^l} \leq \tau^{\frac{p-l}{s} }\alpha^{p -l} E + \tilde{C} \frac{\delta}{\alpha^{2+l}}.
\end{equation}
By choosing $\alpha(\delta) = (\delta/E)^{\frac{1}{p+2}}$ in \eqref{bound recons error H l}, we get \eqref{first conv rate}.
\end{Proof}
\begin{Remark}
By considering $l=0$ in Theorem \ref{Theorem 1}, we get that if $p \leq s$, then
$$
\norm{u(\cdot,0)- u_\alpha^\delta(\cdot,0)}_{L^2} \leq C E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}}.
$$
Hence, our regularization method is order-optimal under the classical smoothness condition \eqref{smoothness cond on u(cdot,0)}. Notice that the condition $p \leq s$ is not at all restrictive since the parameter $s$ is freely chosen in $\mathbb{R}_+^*$.
\end{Remark}
The next theorem gives the rate of convergence of the error $u(\cdot,t) - u_\alpha^\delta(\cdot,t)$ when approximating the earlier distribution $u(\cdot,t)$, $t \in (0,T)$, using the regularized solution $u_\alpha^\delta$.
\begin{Theorem}
\label{Theorem 2}
Consider the setting of Theorem \ref{Theorem 1}. By considering $\alpha(\delta) = (\delta/E)^{\frac{1}{p+2}}$, we have
\begin{equation}
\label{second conv rate}
\forall t \in (0,T], \,\,\, \forall l \in [0,p+2], \,\,\, \text{s. t.} \,\,\, p+2-l \leq s, \,\,\, \norm{u(\cdot,t)- u_{\alpha(\delta)}^\delta(\cdot,t)}_{H^l} \leq \frac{C}{t^\gamma} E^{\frac{l}{p+2}} \delta^{1-\frac{l}{p+2}},
\end{equation}
where $C$ is a constant independent of $\delta$ and $E$. Moreover,
\begin{equation}
\label{sec conv rate}
\forall t \in (0,T], \,\,\, \forall l \in [0,p], \,\,\, \text{s. t.} \,\,\, p-l \leq s, \,\,\, \norm{u(\cdot,t)- u_{\alpha(\delta)}^\delta(\cdot,t)}_{H^l} \leq \tilde{C} E^{\frac{2+l}{p+2}} \delta^{\frac{p-l}{p+2}},
\end{equation}
where $\tilde{C}$ is a constant independent of $\delta$, $E$ and $t$.
\end{Theorem}
\begin{Proof}
By noticing that $\mod{\widehat{u}(\xi,t)- \widehat{u_\alpha^\delta}(\xi,t)} = E_{\gamma,1}(-|\xi|^2 t^\gamma) \mod{\widehat{u}(\xi,0)- \widehat{u_\alpha^\delta}(\xi,0)} \leq \mod{\widehat{u}(\xi,0)- \widehat{u_\alpha^\delta}(\xi,0)}$, we can deduce \eqref{sec conv rate} from \eqref{first conv rate}. Let us prove \eqref{second conv rate}.
We have
\begin{eqnarray*}
\label{eqn a}
\norm{u(\cdot,t)- u_\alpha(\cdot,t)}_{H^l} = \norm{\mod{1 - \sqrt{2 \pi}^n\widehat{\varphi}(\alpha \xi)} (1+|\xi|^2)^{l/2} \mod{\widehat{u}(\xi,t)} }_{L^2} = \norm{\tilde{u} - M_\alpha \tilde{u}}_{L^2},
\end{eqnarray*}
where
$
\tilde{u} = \mathcal{F}^{-1} \left((1+ |\xi|^2)^{l/2} \widehat{u}(\xi,t) \right).
$
Since $u(\cdot,0) \in H^p(\mathbb{R}^n)$, from Lemma \ref{Lemma bound H p+2 Hp}, we get $u(\cdot,t) \in H^{p+2}(\mathbb{R}^n)$, which implies that $\tilde{u} \in H^{p+2-l}(\mathbb{R}^n)$. Applying \eqref{speed conv mollifier operator} and \eqref{bound H p+2 Hp }, we get that if $p +2 - l \leq s$, then
\begin{eqnarray}
\label{est reg erro fin bis}
\norm{u(\cdot,t)- u_\alpha(\cdot,t)}_{H^l} \leq \tau^{\frac{p+2-l}{s} }\alpha^{p +2-l} ||\tilde{u}||_{H^{p+2-l}}& = &\tau^{\frac{p+2-l}{s} }\alpha^{p+2 -l} ||u(\cdot,t)||_{H^{p+2}} \nonumber \\
& \leq & \frac{C_2\,\tau^{\frac{p+2-l}{s} }}{(1 \wedge t^\gamma)\Gamma(1-\gamma)}\alpha^{p +2-l} E.
\end{eqnarray}
On the other hand, using \eqref{bounds fract E gamma Xi t }, we have
\begin{eqnarray}
\label{est data erro bis}
\norm{u_\alpha(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{H^l} & = & \norm{(1+ |\xi|^2)^{l/2}\sqrt{2 \pi}^n \widehat{\varphi}(\alpha \xi)\frac{E_{\gamma,1}(-|\xi|^2 t^\gamma)}{E_{\gamma,1}(-|\xi|^2 T^\gamma)} \mod{ \widehat{g}(\xi) - \widehat{g^\delta}(\xi) } }_{L^2} \nonumber \\
& \leq & \delta \,\frac{C_2}{C_1} \left( \frac{T}{t}\right)^\gamma \norm{(1+ |\xi|^2)^{l/2} \exp(-\tau (\alpha |\xi|)^s ) }_{\infty}.
\end{eqnarray}
When $l=0$, we can easily see from \eqref{est data erro bis} that
\begin{equation}
\label{ert}
\norm{u_\alpha(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{H^l} \leq \delta \,\frac{C_2}{C_1} \left( \frac{T}{t}\right)^\gamma .
\end{equation}
When $l>0$, using reasoning similar to that leading to \eqref{eq xxrr}, we have
\begin{equation}
\label{eq xxrr yy}
\forall \xi \in \mathbb{R}^n, \quad
(1+ |\xi|^2)^{l/2} \exp(-\tau (\alpha |\xi|)^s ) \leq (C /\alpha^{2})^{l/2}
\end{equation}
From \eqref{est data erro bis}, \eqref{ert} and \eqref{eq xxrr yy}, we deduce that for every $l\geq 0$ there exists a constant $C$ independent of $\alpha$, $\delta$ and $t$ such that
\begin{equation}
\label{eq rr tt}
\norm{u_\alpha(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{H^l} \leq \frac{C}{t^\gamma} \frac{\delta}{\alpha^l}
\end{equation}
Lastly from \eqref{est reg erro fin bis} and \eqref{eq rr tt}, we get that if $p +2 -l \leq s$, then
\begin{equation}
\label{eq final t}
\norm{u(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{H^l} \leq \frac{\bar{C}}{1\wedge t^\gamma} \alpha^{p+2-l} E + \frac{C}{t^\gamma} \frac{\delta}{\alpha^l},
\end{equation}
where $C$ and $\bar{C}$ are constants independent of $\alpha$, $\delta$, $E$ and $t$. For $\alpha(\delta) = (\delta/E)^{\frac{1}{p+2}}$, we get \eqref{second conv rate}.
\end{Proof}
\begin{Remark}
In Theorem \ref{Theorem 2}, the rate in estimate \eqref{sec conv rate} is lower than the rate in \eqref{second conv rate}; however, the factor $1/t^\gamma $ in \eqref{second conv rate} blows up as $t$ decreases to $0$. In fact, the rate in \eqref{sec conv rate} cannot be improved without multiplying by a factor which blows up as $t$ goes to $0$.
\end{Remark}
\begin{Remark}
By choosing $l=0$ in Theorem \ref{Theorem 2}, we get the rate
$
\norm{u(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{L^2} \leq \frac{C}{t^\gamma} \delta,
$
which means that we can also recover the earlier distribution $u(\cdot,t)$ with the best possible rate $\delta$.
\end{Remark}
Now let us study error estimates when both the data $g$ and the operator $A$ are only approximately known. Indeed, though the operator $A$ is explicitly given by \eqref{def operator A pb}, in practical implementations this operator is only approximated, since the Mittag-Leffler function can only be evaluated approximately, although with any desired accuracy \cite{podlubnv1999fractional}.
In the sequel we assume that $\psi_h$ is a positive function defined on $\mathbb{R}_+ \times (0,T]$ and satisfying
\begin{equation}
\label{cond appro E gamma}
\forall t \in (0,T], \quad \norm{\frac{\psi_h(|\xi|,t) - E_{\gamma,1}(-|\xi|^2t^\gamma)}{E_{\gamma,1}(-|\xi|^2t^\gamma)}}_{\infty} \leq h.
\end{equation}
With the function $\psi_h$, we can approximate operator $A$ by the operator $A_h$ defined by
\begin{equation}
\label{def approx operator A h}
A_h = \mathcal{F}^{-1} \psi_h(|\xi|,T) \mathcal{F} \quad \text{i.e.} \quad \widehat{A_h f}(\xi) = \psi_h(|\xi|,T) \widehat{f}(\xi).
\end{equation}
From \eqref{cond appro E gamma}, given that for every $t>0$, and $\xi \in \mathbb{R}^n$, $E_{\gamma,1}(-|\xi|^2t^\gamma) \in (0,1]$, we can deduce that $||| A - A_h ||| \leq h$. Let $u_{\alpha}^{\delta,h}$ be the regularized solution defined in the frequency domain by
\begin{equation}
\label{sol approx data and oper}
\begin{cases}
\vspace{0.2cm}
\widehat{u_{\alpha}^{\delta,h}}(\xi,0) = \sqrt{2 \pi}^{n}\widehat{\varphi}(\alpha \xi)\frac{\widehat{g^\delta}(\xi)}{\psi_h(|\xi|,T)} \\
\widehat{u_{\alpha}^{\delta,h}}(\xi,t) = \sqrt{2 \pi}^{n}\widehat{\varphi}(\alpha \xi)\frac{\psi_h(|\xi|,t) }{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi) & \text{for} \quad t \in (0,T].
\end{cases}
\end{equation}
The next theorem provides error estimates in the approximation of the initial distribution $u(\cdot,0)$ under the practical setting where both the data $g=u(\cdot,T)$ and the forward diffusion operator are only approximately known.
\begin{Theorem}
\label{Theorem approx data and operator}
Consider the setting of Theorem \ref{Theorem 1}. Assume that $h \leq 1/2$, let $\psi_h$ be a function satisfying \eqref{cond appro E gamma} and $u_{\alpha}^{\delta,h}$ be the approximate solution defined in \eqref{sol approx data and oper}. Then for the a-priori selection rule $\alpha(\delta,h) = (h + \delta/E)^{\frac{1}{p+2}}$, the following convergence rate holds:
\begin{equation}
\label{conv rate approx data and oper}
\forall l \in [0,p], \,\,\, \text{such that} \,\,\, p-l \leq s, \quad \norm{u(\cdot,0) - u_{\alpha(\delta,h)}^{\delta,h}(\cdot,0)}_{H^l} \leq C E^{\frac{2+l}{p+2}} (\delta +h \,E )^{\frac{p-l}{p+2}},
\end{equation}
where $C$ is a constant independent of $\delta$, $h$ and $E$.
\end{Theorem}
\begin{Proof}
For simplicity of notation, we set $\psi(|\xi|,t) = E_{\gamma,1}(-|\xi|^2 t^\gamma)$ for all $t \in [0,T]$.
For every $t \in (0,T]$ and $\xi \in \mathbb{R}^n$ we have
\begin{eqnarray}
\label{eqxxx}
\frac{E_{\gamma,1}(-|\xi|^2t^\gamma)}{\psi_h(|\xi|,t)} \leq \frac{\psi(|\xi|,t)}{\psi(|\xi|,t) - |\psi(|\xi|,t) - \psi_h(|\xi|,t)|}
= \frac{1}{1 - \frac{|\psi(|\xi|,t) - \psi_h(|\xi|,t)|}{\psi(|\xi|,t)}}
\leq \frac{1}{1 - h}
\leq 1 + 2 h \leq 2 .
\end{eqnarray}
The first inequality in \eqref{eqxxx} is due to the fact that $\psi(|\xi|,t) \leq |\psi(|\xi|,t) - \psi_h(|\xi|,t)| + \psi_h(|\xi|,t)$, the second inequality comes from \eqref{cond appro E gamma} and the last two inequalities are due to the fact that $h \leq 1/2$.
Let $l \in [0,p]$ be such that $p-l \leq s$. We recall that from \eqref{est reg erro fin}, we have
\begin{eqnarray}
\label{est reg error app data and ope 0}
\norm{u(\cdot,0)- M_\alpha u(\cdot,0)}_{H^l} \leq \tau^{\frac{p-l}{s} } \alpha^{p -l} E.
\end{eqnarray}
By noticing that
\begin{eqnarray}
\label{ttt}
\mod{\widehat{g}(\xi) - \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi)} & \leq & \mod{\left[ 1 - \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\right] \widehat{g}(\xi) } + \mod{\frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}[\widehat{g}(\xi) - \widehat{g^\delta}(\xi)]} \nonumber \\
& \leq & \mod{\left[ \frac{\psi(|\xi|,T) - \psi_h(|\xi|,T)}{\psi(|\xi|,T)}\right] \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g}(\xi) } + 2 \mod{\widehat{g}(\xi) - \widehat{g^\delta}(\xi)} \nonumber \\
& \leq & 2 h \mod{\widehat{g}(\xi) } + 2 \mod{\widehat{g}(\xi) - \widehat{g^\delta}(\xi)} \quad \text{using} \quad \eqref{cond appro E gamma} \quad \text{and} \quad \eqref{eqxxx},
\end{eqnarray}
we deduce that
\begin{eqnarray}
\label{data err prop approx data and opera}
\norm{M_\alpha u(\cdot,0) - u_\alpha^{\delta,h}(\cdot,0)}_{H^l} & = & \norm{\sqrt{ 2 \pi}^n \widehat{\varphi}(\alpha \xi) (1+ |\xi|^2)^{l/2}\left[ \frac{\widehat{g}(\xi)}{\psi(|\xi|,T)} - \frac{\widehat{g^\delta}(\xi)}{\psi_h(|\xi|,T)} \right] }_{L^2} \nonumber \\
& = & \norm{\sqrt{ 2 \pi}^n \widehat{\varphi}(\alpha \xi) \frac{(1+ |\xi|^2)^{l/2}}{\psi(|\xi|,T)}\left[ \widehat{g}(\xi) - \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi) \right] }_{L^2} \nonumber \\
& \leq & \left( 2 h \norm{g}_{L^2} + 2 \delta \right) \norm{\frac{\exp(-\tau(\alpha \xi)^s) (1+ |\xi|^2)^{l/2}}{E_{\gamma,1}(-|\xi|^2T^\gamma)} }_{\infty} \,\, \text{using} \quad \eqref{ttt} \,\,\, \text{and} \,\,\, \eqref{noise level cond on data} \nonumber \\
& \leq & \left( 2 h E + 2 \delta \right) C_\gamma \norm{(1+ |\xi|^2)^{1 + l/2} \exp(-\tau (\alpha |\xi|)^s ) }_{\infty} \,\, \text{using} \quad \eqref{bounds E gamma Xi t } \,\,\, \text{and} \,\,\, \eqref{smoothness cond on u(cdot,0)} \nonumber \\
& \leq & C \frac{h E + \delta }{\alpha^{2+l}} \quad \text{using} \quad \eqref{eq xxrr},
\end{eqnarray}
where $C$ is a constant independent of $\alpha$, $\delta$ and $E$.
Finally, from \eqref{est reg error app data and ope 0} and \eqref{data err prop approx data and opera}, we get
$$
\norm{u(\cdot,0) - u_\alpha^{\delta,h}(\cdot,0)}_{H^l} \leq \tau^{\frac{p-l}{s} } \alpha^{p -l} E + C E\, \frac{h + \delta/E }{\alpha^{2+l}},
$$
from which \eqref{conv rate approx data and oper} follows by choosing $\alpha(\delta,h) = (h + \delta/E)^{\frac{1}{p+2}}$.
\end{Proof}
\begin{Remark}
From Theorem \ref{Theorem approx data and operator}, by choosing $l=0$ in \eqref{conv rate approx data and oper}, we get the rate
$$
\norm{u(\cdot,0) - u_{\alpha}^{\delta,h}(\cdot,0)}_{L^2} \leq C E^{\frac{2}{p+2}} (\delta + h \,E)^{\frac{p}{p+2}}.
$$
Hence in the practical setting where both the data and the operator are approximated, we are able to derive order-optimal convergence rates.
\end{Remark}
The next theorem exhibits error estimates when approximating $u(\cdot,t)$ with $t \in (0,T]$.
\begin{Theorem}
\label{Theo conv rates u t data and oper approx}
Consider the setting of Theorem \ref{Theorem approx data and operator}. Then for the a-priori selection rule $\alpha(\delta,h) = (h + \delta/E)^{\frac{1}{p+2}}$, the following convergence rate holds:
\begin{equation}
\label{conv rate approx data and oper t}
\forall t \in (0,T], \,\, \forall l \in [0,p+2], \,\, \text{s. t.} \,\, p+2-l \leq s, \,\, \norm{u(\cdot,t) - u_{\alpha(\delta,h)}^{\delta,h}(\cdot,t)}_{H^l} \leq \frac{C}{t^\gamma} E^{\frac{l}{p+2}} (\delta + h \, E)^{1 - \frac{l}{p+2}},
\end{equation}
where $C$ is a constant independent of $\delta$, $h$, $E$ and $t$.
\end{Theorem}
\begin{Proof} Let $u$ satisfy \eqref{smoothness cond on u(cdot,0)} and let $t\in (0,T]$. For $ l \in [0,p+2]$ such that $p +2 - l \leq s$, recall that from \eqref{est reg erro fin bis}, we have
\begin{equation}
\label{Eq xxyy}
\norm{u(\cdot,t)- M_\alpha u(\cdot,t)}_{H^l} \leq \frac{C_2\,\tau^{\frac{p+2-l}{s} }}{(1 \wedge t^\gamma)\Gamma(1-\gamma)}\alpha^{p +2-l} E.
\end{equation}
On the other hand, using \eqref{cond appro E gamma}, we have
\begin{eqnarray}
\label{ttt 1}
\mod{\widehat{g}(\xi) - \frac{\psi_h(|\xi|,t)}{\psi(|\xi|,t)} \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi)} & \leq & \mod{\left[ 1 - \frac{\psi_h(|\xi|,t)}{\psi(|\xi|,t)}\right] \widehat{g}(\xi) } + \mod{\frac{\psi_h(|\xi|,t)}{\psi(|\xi|,t)}\left[\widehat{g}(\xi) - \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi)\right]} \nonumber \\
& \leq & \mod{\left[ \frac{\psi(|\xi|,t) - \psi_h(|\xi|,t)}{\psi(|\xi|,t)}\right] \widehat{g}(\xi)} + (1+h) \mod{\widehat{g}(\xi) - \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi)} \nonumber \\
& \leq & h \mod{\widehat{g}(\xi) } + (3/2) \left( 2 h \mod{\widehat{g}(\xi) } + 2 \mod{\widehat{g}(\xi) - \widehat{g^\delta}(\xi)} \right) \quad \text{using} \quad \eqref{ttt} \nonumber \\
& = & 4 h \mod{\widehat{g}(\xi) } + 3 \mod{\widehat{g}(\xi) - \widehat{g^\delta}(\xi)} .
\end{eqnarray}
From \eqref{ttt 1}, we deduce that
\begin{eqnarray}
\label{data err prop approx data and opera t}
\norm{M_\alpha u(\cdot,t) - u_\alpha^{\delta,h}(\cdot,t)}_{H^l} & = & \norm{\sqrt{ 2 \pi}^n \widehat{\varphi}(\alpha \xi) (1+ |\xi|^2)^{l/2}\left[ \frac{\psi(|\xi|,t)}{\psi(|\xi|,T)}\widehat{g}(\xi) - \frac{\psi_h(|\xi|,t)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi) \right] }_{L^2} \nonumber \\
& = & \norm{\sqrt{ 2 \pi}^n \widehat{\varphi}(\alpha \xi) (1+ |\xi|^2)^{l/2} \frac{\psi(|\xi|,t)}{\psi(|\xi|,T)} \left[ \widehat{g}(\xi) - \frac{\psi_h(|\xi|,t)}{\psi(|\xi|,t)} \frac{\psi(|\xi|,T)}{\psi_h(|\xi|,T)}\widehat{g^\delta}(\xi) \right] }_{L^2} \nonumber \\
& \leq & \left( 4 h \norm{g}_{L^2} + 3 \delta \right)\frac{C_2}{C_1} \left( \frac{T}{t}\right)^\gamma \norm{\exp(-\tau(\alpha \xi)^s) (1+ |\xi|^2)^{l/2} }_{\infty} \,\, \text{using} \,\,\, \eqref{bounds fract E gamma Xi t }, \nonumber \\
& \leq & \frac{C}{t^\gamma} \frac{h E + \delta }{\alpha^{l}} \quad \text{using} \quad \eqref{eq xxrr yy},
\end{eqnarray}
where $C$ is a constant independent of $\alpha$, $\delta$, $E$ and $t$.
Finally, from \eqref{Eq xxyy} and \eqref{data err prop approx data and opera t}, we get
$$
\norm{u(\cdot,t) - u_\alpha^{\delta,h}(\cdot,t)}_{H^l} \leq \frac{C_2\,\tau^{\frac{p+2-l}{s} }}{(1 \wedge t^\gamma)\Gamma(1-\gamma)}\alpha^{p +2-l} E + \frac{C}{t^\gamma} E\, \frac{h + \delta/E }{\alpha^{l}}
$$
from which \eqref{conv rate approx data and oper t} follows by choosing $\alpha(\delta,h) = (h + \delta/E)^{\frac{1}{p+2}}$.
\end{Proof}
\begin{Remark}
By choosing $l=0$ in Theorem \ref{Theo conv rates u t data and oper approx}, we recover the best possible rate
$$
\norm{u(\cdot,t) - u_{\alpha}^{\delta,h}(\cdot,t)}_{L^2} \leq \frac{C}{t^\gamma} (\delta + h \,E).
$$
\end{Remark}
Now let us focus on the choice of the regularization parameter $\alpha$ when no precise a-priori information about the smoothness of the sought solution $u(\cdot,0)$ is available.
\section{A-posteriori parameter choice rule}\label{section par choice rule}
The choice of the regularization parameter is a crucial step for any regularization method. As a matter of fact, no matter which regularization method is considered, a bad choice of the regularization parameter results in a poor approximate solution.
Let us consider the following a-posteriori parameter choice rule based on the Morozov principle \cite{mathe2008general}.
\begin{equation}
\label{def beta a posteriori rule}
\alpha(\delta,g^\delta) = \sup \left\lbrace \alpha > 0, \quad \text{s.t.} \quad || g^\delta - M_\alpha g^\delta ||_{L^2} < \theta \delta \right\rbrace,
\end{equation}
where $\theta > 1$ is a free real parameter.
\begin{Proposition}
\label{Prop exist post par choice rule}
Assume that the noise level $\delta$ and the noisy data $g^\delta$ satisfy
\begin{equation}
\label{noise ration cond}
0 < \theta \delta < ||g^\delta||,
\end{equation}
then the parameter $\alpha(\delta,g^\delta)$ expressed in \eqref{def beta a posteriori rule} is well defined and satisfies
\begin{equation}
\label{char beta a posteriori rule}
|| u_{\alpha(\delta,g^\delta)}^\delta(\cdot,T) - g^\delta|| = \theta \delta \quad \text{i.e.} \quad \norm{ \mod{1- e^{-\tau (\alpha |\xi|)^s} }\widehat{g^\delta}(\xi) }_{L^2} = \theta \delta.
\end{equation}
\end{Proposition}
\begin{Proof}
Let the function $v: \mathbb{R}_+^* \rightarrow \mathbb{R}_+ $ be defined by $v(\alpha) := \norm{ g^\delta - M_\alpha g^\delta}_{L^2}^2$. By the Parseval identity, we get that $v(\alpha) = \norm{ \mod{1- e^{-\tau (\alpha |\xi|)^s} }\widehat{g^\delta}(\xi) }_{L^2}^2$. Using the dominated convergence theorem, we can readily check that $\lim_{\alpha \rightarrow 0} v(\alpha) = 0$ and $\lim_{\alpha \rightarrow \infty} v(\alpha) = \norm{\widehat{g^\delta}(\xi) }_{L^2}^2 $. Moreover, we can also check, by differentiation under the integral sign, that $v$ is differentiable with $v'(\alpha) >0$ for all $\alpha>0$, which implies that $v$ is strictly increasing. Hence if \eqref{noise ration cond} is satisfied, then $(\theta \delta)^2$ is in the range of the one-to-one function $v$, whence the existence and uniqueness of $\alpha(\delta,g^\delta)$ defined in \eqref{def beta a posteriori rule}, which is characterized by \eqref{char beta a posteriori rule}.
\end{Proof}
\begin{Remark}
The condition \eqref{noise ration cond} is quite natural, as we do not expect to recover a reasonable approximate solution if the data is dominated by noise.
\end{Remark}
\begin{Theorem}
\label{Theo rate posteriori rule 1}
Consider the setting of Theorem \ref{Theorem 1}. Assume that \eqref{noise ration cond} is satisfied and let $\alpha(\delta,g^\delta)$ satisfy \eqref{char beta a posteriori rule}. If $p +2 \leq s$, then the following holds:
\begin{equation}
\label{rate posteriori rule 1}
\forall l \in [0,p], \quad \forall t \in [0,T], \quad \norm{ u(\cdot,t) - u_{\alpha(\delta,g^\delta)}^\delta(\cdot,t)}_{H^l} \leq C E^{\frac{2+l}{p+2}} \delta^{\frac{p-l}{p+2}},
\end{equation}
where $C$ is a constant independent of $\delta$, $E$ and $t$.
\end{Theorem}
\begin{Proof}
For simplicity of notation, let $\alpha:=\alpha(\delta,g^\delta)$ defined by \eqref{char beta a posteriori rule} and let $g_\alpha = M_\alpha g$ where $g = u(\cdot,T)$ and $g_\alpha^\delta = M_\alpha g^\delta$ . We have
\begin{equation}
\label{eq 001}
\norm{g - g_\alpha} \leq \norm{g - g^\delta} + \norm{g^\delta - g_\alpha^\delta } + \norm{g_\alpha^\delta - g_\alpha} \leq \delta + \theta \delta + \norm{e^{-\tau (\alpha |\xi|)^s} (\widehat{g}(\xi) - \widehat{g^\delta}(\xi))} \leq (\theta +2)\delta.
\end{equation}
Let $p \geq 0$ be such that $p+2 \leq s$. Using \eqref{speed conv mollifier operator} and \eqref{bound H p+2 Hp }, we have
\begin{eqnarray}
\label{eq 002}
\theta \delta = \norm{\mod{1-e^{-\tau (\alpha |\xi|)^s} } \widehat{g^\delta}(\xi)} & \leq & \norm{\mod{1-e^{-\tau (\alpha |\xi|)^s} } \widehat{g}(\xi)} + \norm{\mod{1-e^{-\tau (\alpha |\xi|)^s} }\mod{ \widehat{g}(\xi)- \widehat{g^\delta}(\xi)}} \nonumber \\
& \leq & \tau^{(p+2)/s} \alpha^{p+2} ||g||_{H^{p+2}} + \delta \nonumber \\
&\leq & C(p,\gamma) \alpha^{p+2} E + \delta
\end{eqnarray}
From \eqref{eq 002}, we deduce that
\begin{equation}
\label{eq 00002}
(\theta -1) \delta \leq C(p,\gamma) \alpha^{p+2} E \quad \Rightarrow \quad \frac{1}{\alpha} \leq \left(\frac{C(p,\gamma) }{\theta -1} \right)^{\frac{1}{p+2}} E^{\frac{1}{p+2}} \delta^{-\frac{1}{p+2}}
\end{equation}
By noticing that $\norm{u(\cdot,0) - M_\alpha u(\cdot,0) }_{H^p} \leq \norm{u(\cdot,0) }_{H^p} \leq E$, and applying \eqref{key est H l 0 yy} of Lemma \ref{Lemma yy} together with \eqref{eq 001}, we get
\begin{equation}
\label{eq 003}
\forall l \in [0,p], \quad \forall t \in [0,T], \quad \norm{u(\cdot,t) - u_\alpha(\cdot,t)}_{H^l} \leq C(\gamma,p,\theta) E^{\frac{2+l}{p+2}}\delta^{\frac{p-l}{p+2}},
\end{equation}
where $u_\alpha(\cdot,t) = M_\alpha u(\cdot,t)$ is the solution of equation \eqref{equation u_alpha} and $C(\gamma,p,\theta) = \bar{C}(\gamma) (\theta+2)^{\frac{p-l}{p+2}}$.
On the other hand, from \eqref{est data err fin} and \eqref{eq 00002}, we have that for every $l \in [0,p]$ and $t \in [0,T]$,
\begin{eqnarray}
\label{eq 004}
\norm{u_\alpha(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{H^l} \leq \norm{u_\alpha(\cdot,0)- u_\alpha^\delta(\cdot,0)}_{H^l}
\leq \tilde{C} \frac{\delta}{\alpha^{2+l}}
\leq C E^{\frac{2+l}{p+2}} \delta^{ \frac{p-l}{p+2}},
\end{eqnarray}
where $C$ is a constant independent of $\delta$, $E$ and $t$. Estimate \eqref{rate posteriori rule 1} follows from \eqref{eq 003} and \eqref{eq 004}.
\end{Proof}
\begin{Remark}
In Theorem \ref{Theo rate posteriori rule 1}, by choosing $l=0$ and $t = 0$ in \eqref{rate posteriori rule 1}, we obtain the rate
$$
\norm{ u(\cdot,0) - u_{\alpha(\delta,g^\delta)}^\delta(\cdot,0)}_{L^2} \leq C E^{\frac{2}{p+2}} \delta^{\frac{p}{p+2}},
$$
which means that the parameter choice rule \eqref{def beta a posteriori rule} leads to order-optimal rate.
\end{Remark}
Lastly, the next theorem exhibits rates of convergence when approximating the earlier distribution $u(\cdot,t)$, $t \in (0,T)$, under the parameter choice rule \eqref{def beta a posteriori rule}.
\begin{Theorem}
\label{Theorem rate posterori rule 2}
Consider the setting of Theorem \ref{Theo rate posteriori rule 1}. The following estimate holds
\begin{equation}
\label{rate posteriori 2}
\forall t \in (0,T], \,\, \forall l \in [0,p+2], \,\, \text{s. t.} \,\,\, p+2-l \leq s, \,\, \norm{u(\cdot,t) - u_{\alpha(\delta,g^\delta)}^\delta(\cdot,t)}_{H^l} \leq \frac{C}{t^\gamma} E^{\frac{l}{p+2}} \delta^{1 - \frac{l}{p+2}},
\end{equation}
where $C$ is a constant independent of $\delta$, $E$ and $t$.
\end{Theorem}
\begin{Proof}
Let $t \in (0,T]$, $l \in [0, p] $ and $\alpha := \alpha(\delta,g^\delta)$ defined by \eqref{char beta a posteriori rule}. By applying \eqref{key est H l 1 yy}, we get
\begin{eqnarray}
\label{eq ttyyww}
\norm{u(\cdot,t) - u_\alpha(\cdot,t) }_{H^{l+2}} & \leq & \frac{C(\gamma)}{1\wedge t^\gamma} \norm{u(\cdot,0) - u_\alpha(\cdot,0) }_{H^p}^{\frac{l+2}{p+2}} \norm{u(\cdot,T) - u_\alpha(\cdot,T) }_{L^2}^{\frac{p-l}{p+2}} \nonumber \\
& \leq & \frac{C(\gamma)}{1\wedge t^\gamma} E^{\frac{l+2}{p+2}} \norm{g - g_\alpha }_{L^2}^{\frac{p-l}{p+2}} \nonumber \\
& \leq & \frac{C(\gamma)}{1\wedge t^\gamma} (\theta +2)^{\frac{p-l}{p+2}} E^{\frac{l+2}{p+2}} \delta^{\frac{p-l}{p+2}} \quad \text{from} \quad \eqref{eq 001}.
\end{eqnarray}
From \eqref{eq ttyyww}, we deduce that for all $l \in [0,p+2]$,
\begin{equation}
\label{eq 008}
\norm{u(\cdot,t) - u_\alpha(\cdot,t) }_{H^{l}} \leq \frac{C}{1\wedge t^\gamma} E^{\frac{l}{p+2}} \delta^{\frac{p+2-l}{p+2}}.
\end{equation}
On the other hand, from \eqref{eq rr tt}, we have
\begin{eqnarray}
\label{eq 009}
\norm{u_\alpha(\cdot,t)- u_\alpha^\delta(\cdot,t)}_{H^l}
\leq \frac{C}{t^\gamma} \frac{\delta}{\alpha^l} \leq \frac{C}{t^\gamma} \left(\frac{C(p,\gamma) }{\theta -1} \right)^{\frac{l}{p+2}} E^{\frac{l}{p+2}} \delta^{1 - \frac{l}{p+2}} \quad \text{using} \quad \eqref{eq 00002}.
\end{eqnarray}
Estimate \eqref{rate posteriori 2} follows readily from \eqref{eq 008} and \eqref{eq 009}.
\end{Proof}
\begin{Remark}
In Theorem \ref{Theorem rate posterori rule 2}, taking $l=0$, we get the best possible rate
$
\norm{u(\cdot,t) - u_{\alpha(\delta,g^\delta)}^\delta(\cdot,t)}_{L^2} \leq C \frac{\delta}{t^\gamma}
$
for the a-posteriori parameter choice rule \eqref{def beta a posteriori rule}.
\end{Remark}
Let us end this section with the following algorithm for approximating the regularization parameter $\alpha(\delta,g^\delta)$.
\begin{algorithm}[h!]
\begin{center}
\begin{algorithmic}[1]
\State Set $\alpha_0 \gg 1$ and $q \in (0,1)$
\State Set $\alpha = \alpha_0$ (initial guess)
\While{$|| \mod{1- e^{-\tau (\alpha |\xi|)^s} }\widehat{g^\delta}(\xi) || \,>\, \theta \delta $}
\State $\alpha = q \times \alpha$
\EndWhile
\end{algorithmic}
\end{center}
\caption{Computation of the a-posteriori regularization parameter $\alpha(\delta,g^\delta)$ defined by \eqref{def beta a posteriori rule}.}
\label{Algo alpha }
\end{algorithm}
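A direct transcription of Algorithm \ref{Algo alpha } reads as follows (an illustrative Python sketch of ours; the discrepancy $\norm{g^\delta - M_\alpha g^\delta}_{L^2}$ is evaluated on the discrete grid, and the default values of $\theta$, $q$ and $\alpha_0$ are those used in the numerical experiments of the next section).
\begin{verbatim}
import numpy as np

def choose_alpha(g_delta, L, delta, theta=1.01, q=0.99, alpha0=10.0, tau=0.5, s=4.0):
    # Geometric search of Algorithm 1: decrease alpha until
    # || g_delta - M_alpha g_delta ||_{L^2} <= theta * delta.
    N = g_delta.shape[0]
    h = 2.0 * L / N                                   # mesh size
    xi = 2.0 * np.pi * np.fft.fftfreq(N, d=h)
    XI1, XI2 = np.meshgrid(xi, xi, indexing="ij")
    xi_abs = np.sqrt(XI1 ** 2 + XI2 ** 2)
    g_hat = np.fft.fft2(g_delta)

    def discrepancy(a):
        m = np.exp(-tau * (a * xi_abs) ** s)          # Fourier multiplier of M_alpha
        residual = g_delta - np.real(np.fft.ifft2(m * g_hat))
        return np.sqrt(np.sum(residual ** 2)) * h     # discrete L^2([-L,L]^2) norm

    alpha = alpha0
    while discrepancy(alpha) > theta * delta:
        alpha *= q
    return alpha
\end{verbatim}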
\section{Numerical experiments}\label{section numerical experiments}
In order to illustrate the effectiveness of our regularization approach, we consider four numerical examples in two-dimension space where we invariably set $T = 1$ and $\gamma = 0.8$.
\textbf{Example 1}: $u(x,0) = e^{-x_1^2 - x_2^2}$.
\textbf{Example 2}: $u(x,0) = e^{-\mod{x_1} - \mod{x_2}}$.
\textbf{Example 3}: $u(x,0) = v(x_1)v(x_2)$ where $v$ is the triangular impulse defined by $v(\lambda)=
\begin{cases}
1 + \lambda/3 & \text{if} \,\, \lambda \in [-3,0]\\
1 - \lambda/3 & \text{if} \,\, \lambda \in (0,3] \\
0 & \text{otherwise}.
\end{cases}
$
\textbf{Example 4}: $
u(x,0) =
\begin{cases}
1 & \text{if} \,\, x \in [-5,5]^2\\
0 & \text{otherwise}.
\end{cases}
$
Notice that, in Examples 1, 2 and 3, $u(\cdot,0)$ is continuous, contrary to Example 4. Moreover, in Example 1, $u(\cdot,0) \in H^{p}(\mathbb{R}^2)$ for all $p>0$; in Example 2, $u(\cdot,0) \in H^{p}(\mathbb{R}^2)$ for $p<1$; in Example 3, $u(\cdot,0) \in H^{1}(\mathbb{R}^2)$; and in Example 4, $u(\cdot,0) \in H^{p}(\mathbb{R}^2)$ for $p <1/2$.
In the four examples, the support of $u(\cdot,0)$ is taken to be $[-L,L]^2$, which is uniformly discretized as $(x_1(i),x_2(j))$ where $x_1(i)=x_2(i) = -L + (i-0.5)\kappa$ with $\kappa = 2L/N$ and $i,j=1,...,N$. In all the simulations, we set $L=10$ and $N =256$.
The noisy data $g^\delta$ is generated as
$
g^\delta(x) = u(x,T) + \eta\, \epsilon(x),
$
where $\epsilon(x)$ is a random number drawn from the standard normal distribution and $\eta$ is a parameter allowing one to control the amount of noise added to the data. The noise level $\delta$ is nothing but $ \eta E ||\epsilon||_2 $ and the percentage of noise $perc\_noise$ is given by
$$
perc\_noise = \frac{100\times \eta E ||\epsilon||_2}{||u(\cdot,T)||_2} \,\,\%.
$$
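For reproducibility, the discretization and the noise generation just described can be sketched as follows (an illustration of ours; \texttt{u\_exact\_T} stands for the exact final distribution $u(\cdot,T)$ sampled on the grid and is assumed to be available, and the noise level is measured here directly as the discrete $L^2$ norm of the added perturbation).
\begin{verbatim}
import numpy as np

# Discretization of [-L,L]^2 and generation of the noisy final data g^delta.
L, N = 10.0, 256
kappa = 2.0 * L / N
nodes = -L + (np.arange(1, N + 1) - 0.5) * kappa
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")

def add_noise(u_exact_T, eta, rng=np.random.default_rng(0)):
    eps = rng.standard_normal(u_exact_T.shape)
    g_delta = u_exact_T + eta * eps
    delta = np.linalg.norm(eta * eps) * kappa               # discrete L^2 noise level
    perc_noise = 100.0 * delta / (np.linalg.norm(u_exact_T) * kappa)
    return g_delta, delta, perc_noise
\end{verbatim}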
For the mollifier kernel $\varphi$ defined in \eqref{def of our molllif kernel}, we invariably choose $s=4$ and $\tau = 1/2$.
The Fourier transform and inverse Fourier transform involved in the computation of the reconstructed solution $u_\alpha^\delta$ are rapidly evaluated with the numerical procedure from \cite{bailey1994fast} based on the fast Fourier transform (FFT) algorithm. Following the Shannon-Nyquist principle, we set the frequency domain to $[-\Omega,\Omega]^2$ with $\Omega = \pi N/(2L)$.
For all the results in this section, we consider the parameter choice rule $\alpha(\delta,g^\delta)$ defined by \eqref{def beta a posteriori rule} with $\theta =1.01$. We compute this parameter via Algorithm \ref{Algo alpha } with $q=0.99$ and $\alpha_0 = 10$. In Figures \ref{Fig comp sol 1 and 2 percent noise level} and \ref{Fig comp sol 3 and 4 percent noise level}, we illustrate the approximate solution for $perc\_noise = 1 \%$ in each example.
The reconstructed solution $u_{\alpha(\delta,g^\delta)}^\delta$ is computed via formula \eqref{express u alpha in freq domain delta} (with $t=0$) followed by inverse Fourier transform. For each approximate solution $u_{\alpha(\delta,g^\delta)}^\delta$, we assess the relative error
$$
rel\_err = \frac{|| u_{\alpha(\delta,g^\delta)}^\delta- u(\cdot,0)||_2}{||u(\cdot,0)||_2}.
$$
In order to numerically confirm the theoretical rates of the reconstruction error, in Figure \ref{Fig ill num conv rate}, we plot $\ln(rel\_err)$ versus $\ln(\delta)$ for various values of $\delta$. We recall that if the reconstruction error is of order $\mathcal{O}\left(\delta^r \right)$ as $\delta \rightarrow 0$, then the curve $(\ln(\delta),\ln(rel\_err))$ should exhibit a line shape with slope equal to $r$. From Figure \ref{Fig ill num conv rate}, we can see that for each example, the plot clearly exhibits a line shape, thus confirming the power rate in the reconstruction error. Though the numerical order $r_{num}$ observed differs from the theoretical rate $r = p/(p+2)$, the numerical order does increase as the solution gets smoother. In fact, the highest order ($0.5915$) is achieved in Example 1, which corresponds to the smoothest case; next, in Examples 2 and 3, where the solution $u(\cdot,0)$ has approximately the same regularity, the numerical orders are close to each other; lastly, the lowest numerical order ($0.16$) is achieved in Example 4, where the solution is the least regular.
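The numerical order $r_{num}$ reported above is obtained by a least-squares fit of $\ln(rel\_err)$ against $\ln(\delta)$; a minimal sketch of this estimation is given below (the arrays of noise levels and relative errors collected from the runs are passed as arguments).
\begin{verbatim}
import numpy as np

def empirical_order(deltas, rel_errs):
    # Least-squares slope of ln(rel_err) versus ln(delta): the numerical order r_num.
    slope, _ = np.polyfit(np.log(deltas), np.log(rel_errs), 1)
    return slope
\end{verbatim}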
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{joint_comp_solu_exple1_and_2.PNG}
\end{center}
\caption{Illustration of the reconstructed solution $u_{\alpha(\delta,g^\delta)}^\delta$ for $perc\_noise = 1\%$ in Example 1 (first row) and Example 2 (second row).}
\label{Fig comp sol 1 and 2 percent noise level}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.4]{joint_comp_solu_exple3_and_4.PNG}
\end{center}
\caption{Illustration of the reconstructed solution $u_{\alpha(\delta,g^\delta)}^\delta$ for $perc\_noise = 1\%$ in Example 3 (first row) and Example 4 (second row).}
\label{Fig comp sol 3 and 4 percent noise level}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.45]{rates_conve_rule_beta_theta.pdf}
\end{center}
\caption{Illustration of numerical rates of convergence for the a-posteriori rule \eqref{def beta a posteriori rule}.}
\label{Fig ill num conv rate}
\end{figure}
\begin{table}[h!]
\begin{center}
\includegraphics[scale=0.9]{Table_monte_carlo_exple_1_2.pdf}
\end{center}
\caption{Summary of the Monte Carlo simulation for Examples 1 and 2 with sample size 200.}
\label{Table MC exple 1,2}
\end{table}
\begin{table}[h!]
\begin{center}
\includegraphics[scale=0.9]{Table_monte_carlo_exple_3_4.pdf}
\end{center}
\caption{Summary of the Monte Carlo simulation for Examples 3 and 4 with sample size 200.}
\label{Table MC exple 3,4}
\end{table}
Finally, in order to assess the numerical stability and convergence of our method, we run a Monte Carlo simulation of $200$ replications of the noise term for each example with various percentages of noise. The results are summarized in Tables \ref{Table MC exple 1,2} and \ref{Table MC exple 3,4}. From Tables \ref{Table MC exple 1,2} and \ref{Table MC exple 3,4}, we can see that:
\begin{itemize}
\item The reconstruction error and the regularization parameter $\alpha(\delta,g^\delta)$ both decrease as the noise level decreases. This indicates the numerical convergence of the regularization method coupled with the parameter choice rule \eqref{def beta a posteriori rule}.
\item The variance of the reconstruction error in all cases is not larger than $10^{-4}$. This is a significant indicator of the high numerical stability of the regularization technique coupled with parameter choice rule \eqref{def beta a posteriori rule}.
\item Lastly, the reconstruction errors are much smaller in Examples 1 and 3 compared to Examples 2 and 4. This observation confirms the following well-known fact: the smoother the sought solution, the better the reconstruction one can expect. Indeed, the solutions in Examples 1 and 3 are much smoother than those in Examples 2 and 4.
\end{itemize}
\section{Conclusion}
In this paper, we focused on the regularization of the final value time-fractional diffusion equation on an unbounded domain. We presented a broad class of regularization methods which encompasses some regularization methods appearing separately in the literature for this problem. We proposed a simple regularization method which smoothly regularizes the problem without truncating high frequency components. We proved order-optimality of our regularization approach in the practical setting where not only the data but also the forward diffusion operator is approximated. We also showed that with our regularization approach, we can approximate the earlier distributions $u(\cdot,t)$ with $t \in (0,T)$ with the best possible rate. For full applicability of the method, based on the Morozov principle, we provided a parameter choice rule leading to order-optimal rates. Finally, the soundness and efficiency of the regularization technique coupled with the prescribed parameter choice rule are illustrated through numerical simulations in two-dimensional space.
\section*{Appendix}
\textbf{Proof of Lemma \ref{Lemma bound our kernel}:}
Let the function $f(x) = (1+ x) e^{- b x^d}$ with $b,d \in \mathbb{R}_+^*$.
The function $f$ is continuous on $[0,+\infty )$ with $f(0) = 1$ and $ \lim_{x\rightarrow +\infty} f(x) = 0$. Moreover, $f$ is smooth on $(0,\infty)$, hence, $f$ admits a global maximum on $\mathbb{R}_+$ attained at a critical point.
Now, notice that $f'(x) = p(x)e^{- b x^d}$ with $p(x) = 1 - b d x^{d-1}(1+x)$. Hence $f'(x) = 0$ implies that $p(x) = 0$.
But
\begin{equation}
\label{ee 000}
p(x) = 0 \Rightarrow x^{d-1}(1+x) = \frac{1}{b d} \rightarrow + \infty \quad \text{as} \quad b \downarrow 0.
\end{equation}
If $d \geq 1$, we can verify that $p$ has a unique root $\bar{x}$ on $ \mathbb{R}_+$, which maximizes $f$. Moreover, $\bar{x} \rightarrow +\infty $ as $b \downarrow 0$, and from \eqref{ee 000}, we deduce that $ \bar{x}^d \sim 1/(bd)$. Since $p(\bar{x}) = 0$, we have $(1 + \bar{x}) = (1/(bd)) \bar{x}^{1-d}$, so that
\begin{equation}
\label{ee 001}
\sup_{x\geq 0} f(x) = f(\bar{x}) \sim \frac{1}{bd} \bar{x}^{1-d} \exp(-b (b d)^{-1}) \sim \frac{1}{b d} (b d)^{-\frac{1-d}{d}} \exp(-1/d) = \frac{(d \exp(1))^{-1/d}}{b^{1/d}},
\end{equation}
whence the desired estimate \eqref{key bound}.
If $d<1$, we can verify that $p$ has two roots $x_0$ and $x_1$ on $\mathbb{R}_+$ with $x_0 < x_1$, satisfying $x_0 \rightarrow 0$ and $x_1 \rightarrow +\infty $ as $b \downarrow 0$. Moreover, we can check that $f$ admits a local minimum at $x_0$ and a local maximum at $x_1$, so that the global maximum of $f$ on $[0,+\infty)$ is $\max(f(0),f(x_1)) = f(x_1)$ when $b \downarrow 0$. Similarly to the case $d\geq 1$, the root $x_1$ satisfies $x_1^{d} \sim 1/(bd)$ and estimate \eqref{ee 001} is also valid with $\bar{x} = x_1$, whence \eqref{key bound}.
\bibliographystyle{abbrv}
Inspired by Witten's seminal paper published in 2004 \cite{witten}, there have
been tremendous developments on calculations of higher point and higher loop
Yang-Mills and gravity field theory amplitudes \cite{JA}. Many new ideas and
techniques have been proposed and suggested on this interesting and important
subject. On the other hand, string theory amplitudes have been believed to be
closely related to these new results derived in field theory amplitudes. One
interesting example was the gauge field theory BCJ relations for
color-stripped amplitudes proposed in 2008 \cite{BCJ} and their string origin
or the string BCJ relations suggested in 2009 \cite{stringBCJ,stringBCJ2}.
These field theory BCJ relations can be used to reduce the number of
independent $n$-point color-ordered gauge field theory amplitudes from
$(n-2)!,$ as suggested by the KK relations \cite{KK,KK2}, to $(n-3)!$.
On the other hand, a less known historically independent development of the
"string BCJ relations" was from the string theory side \textit{without}
referring to the field theory BCJ relations. This was the discovery of the four
point string BCJ relations in the high energy fixed angle or hard string
scattering (HSS) limit in 2006 \cite{Closed}. Moreover, one can combine these
string BCJ relations with the infinite linear relations among HSS amplitudes
conjectured by Gross \cite{GM,Gross,GrossManes} and discovered in
\cite{ChanLee1,ChanLee2,CHL,PRL, CHLTY,susy} to form the \textit{extended
linear relations }in the HSS limit. For a recent review, see \cite{review}. In
contrast to the field theory BCJ relations, these extended linear relations
relate HSS amplitudes of string states with different spins and different
channels, and can be used to reduce the number of independent HSS amplitudes
from $\infty$ down to $1$.
Historically, the motivation to probe string BCJ relations in this context was
the calculation of \textit{closed} HSS amplitudes \cite{Closed} by using the
KLT relations \cite{KLT}. Indeed, it was found \cite{Closed} that the saddle
point calculation of open HSS amplitudes was applicable only for the $t-u$
channel, but not reliable for the $s-t$ channel, nor for the closed HSS
amplitude calculation. In addition, it was also pointed out
\cite{Closed,Closed2} that the prefactor $\frac{\sin\left( \pi u/2\right)
}{\sin\left( \pi s/2\right) }$ in the string BCJ relations, which was
missing in the literature \cite{zeros,GM,Gross,GrossManes} for the HSS
amplitude calculations, had important physical interpretations. The poles give
an infinite number of resonances in the string spectrum and the zeros give the
coherence of string scatterings. These poles and zeros survive in the HSS
limit and can not be ignored. Presumably, the prefactor triggers the failure
of saddle point calculations mentioned above.
To calculate the closed HSS amplitude by the KLT relation, one needed to calculate both
$s-t$ and $t-u$ channel HSS amplitudes. In contrast to the saddle point
method used in the $t-u$ channel, for the $s-t$ channel HSS amplitudes at each
fixed mass level $N,$ one first used a direct method to calculate the HSS
amplitude of the leading trajectory string state, and then extended the result
to other string states by using high energy symmetry of string theory or
infinite linear relations among HSS amplitudes of different string states at
each mass level $N$. As a result, the string BCJ relations in the HSS limit
can be derived \cite{Closed}. All these HSS amplitude calculations for $s-t$
and\ $t-u$ channels and the related string BCJ relations in the HSS limit were
inspired by Gross's famous conjectures on high energy symmetries of string
theory \cite{Gross}, and thus were independent and different from the field
theory BCJ motivation discussed above.
In this paper, we will follow up the development on the string theory side of
string amplitude calculations. In section II, we will first review the
extended linear relations\textit{ }in the HSS limit discussed in
\cite{Closed}. We will also work out the corresponding \textit{extended
recurrence relations }\cite{LY,LY2014} of the Regge string scattering (RSS)
amplitudes \cite{KLY}. Similar to the extended linear relations\textit{ }in
the HSS limit discussed above, the extended recurrence relations can be used
to reduce the number of independent RSS amplitudes from $\infty$ down to $1$.
We will then give an explicit proof of the string BCJ relations in section III
by directly calculating $s-t$ and\ $t-u$ channel string scattering amplitudes
for four arbitrary string states. In contrast to the proof based on monodromy
of integration with constraints on the kinematic regime given in
\cite{stringBCJ} without calculating string amplitudes, our explicit string
amplitude calculation puts no constraints on the kinematic regime. In section
IV, we will calculate the level dependent and the extended recurrence
relations of low energy string scattering amplitudes\textit{ }in the
nonrelativistic limit. The existence of recurrence relations for low energy
nonrelativistic string scattering (NSS) amplitudes comes as a surprise from
Gross's point of view on the HSS limit.
In the calculations of low energy extended recurrence relations in section IV,
we will take the NSS limit or $|\vec{k_{2}}|<<M_{S}$ limit to calculate the
mass level and spin dependent low energy NSS amplitudes. In contrast to the
zero slope $\alpha^{\prime}$ limit used in the literature to calculate the
massless Yang-Mills couplings \cite{ymzero1,ymzero2} for superstring and the
three point $\varphi^{3}$ scalar field coupling \cite{Bzero1,Bzero2,Bzero3}
for the bosonic string, we found it appropriate to take the nonrelativistic
limit in calculating low energy string scattering amplitudes for string states
with both higher spins and finite mass gaps. A brief conclusion will be given
in section V. In the following section II, we first review historically the two
independent developments of the string BCJ relations from the field theory and
string theory points of view.
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\section{Review of High energy String BCJ}
The four point BCJ relations \cite{BCJ} for Yang-Mills gluon color-stripped
scattering amplitudes $A$ were pointed out and calculated in 2008 to be
\begin{subequations}
\label{BCJ}
\begin{align}
tA(k_{1},k_{4},k_{2},k_{3})-sA(k_{1},k_{3},k_{4},k_{2}) & =0,\\
sA(k_{1},k_{2},k_{3},k_{4})-uA(k_{1},k_{4},k_{2},k_{3}) & =0,\\
\text{ }uA(k_{1},k_{3},k_{4},k_{2})-tA(k_{1},k_{2},k_{3},k_{4}) & =0,
\end{align}
which relate field theory scattering amplitudes in the $s$, $t$ and $u$
channels. In the rest of this paper, we will discuss the relation for $s$ and
$u$ channel amplitudes only. Other relations can be similarly addressed.
\subsection{ Hard String Scatterings}
For string theory, in contrast to the field theory BCJ relations, one has to
deal with scattering amplitudes of an infinite number of higher spin string
states. The first "string BCJ relation" discovered was the four point string
BCJ relation in the HSS limit \cite{Closed} worked out in 2006. For the
tachyon state, one can express the open string $s-t$ channel amplitude in
terms of the $t-u$ channel amplitude \cite{Closed}
\end{subequations}
\begin{align}
T_{\text{open}}^{\left( 4\text{-tachyon}\right) }\left( s,t\right) &
=\frac{\Gamma\left( -\frac{s}{2}-1\right) \Gamma\left( -\frac{t}{2}-1\right) }{\Gamma\left( \frac{u}{2}+2\right) }\nonumber\\
& =\frac{\sin\left( \pi u/2\right) }{\sin\left( \pi s/2\right) }
\frac{\Gamma\left( -\frac{t}{2}-1\right) \Gamma\left( -\frac{u}{2}-1\right) }{\Gamma\left( \frac{s}{2}+2\right) }\nonumber\\
& \equiv\frac{\sin\left( \pi u/2\right) }{\sin\left( \pi s/2\right)
}T_{\text{open}}^{\left( 4\text{-tachyon}\right) }\left( t,u\right)
\label{teq}
\end{align}
where\ we have used the well known formula
\begin{equation}
\Gamma\left( x\right) =\frac{\pi}{\sin\left( \pi x\right) \Gamma\left(
1-x\right) }. \label{math}
\end{equation}
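As a quick numerical sanity check of Eq.(\ref{teq}) (our own illustration, with arbitrarily chosen sample kinematics satisfying the four-tachyon mass-shell condition $s+t+u=-8$ and avoiding the poles of the Gamma functions and the zeros of $\sin\left( \pi s/2\right)$), the two sides can be compared as follows.
\begin{verbatim}
import numpy as np
from scipy.special import gamma as Gamma

# Check of the tachyon string BCJ relation away from poles, with sample
# kinematics obeying s + t + u = -8 (four open-string tachyons).
s, t = 2.3, 1.9
u = -8.0 - s - t

lhs = Gamma(-s / 2 - 1) * Gamma(-t / 2 - 1) / Gamma(u / 2 + 2)
rhs = (np.sin(np.pi * u / 2) / np.sin(np.pi * s / 2)
       * Gamma(-t / 2 - 1) * Gamma(-u / 2 - 1) / Gamma(s / 2 + 2))
print(lhs, rhs)   # the two expressions coincide up to rounding
\end{verbatim}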
The\ string BCJ relation for tachyon derived above is valid for all energies.
For the $N$-point open string \textit{tachyon} amplitudes $B_{N}$ of
Koba-Nielson, some authors in the early days of dual models, see for example
\cite{Plahte}, discussed symmetry relations among $B_{N}$ functions with
different cyclic orderings of the external momenta by using monodromy of contour
integration of the amplitudes. However, no such discussion addressed string
amplitudes with an infinite number of \textit{higher spin} massive string states,
which we discuss next.
Since there is no reliable saddle point to calculate $s-t$ channel HSS
amplitudes, for \textit{all} other higher spin string states at arbitrary mass
levels, one first calculates the $s-t$ channel scattering amplitude with
$V_{2}=\alpha_{-1}^{\mu_{1}}\alpha_{-1}^{\mu_{2}}\cdots\alpha_{-1}^{\mu_{n}}|0,k\rangle$,
the highest spin state at mass level $M_{2}^{2}=2(N-1),$ and
three tachyons $V_{1,3,4}$ as \cite{CHLTY,Closed}
\begin{equation}
\mathcal{T}_{N;st}^{\mu_{1}\mu_{2}\cdot\cdot\mu_{n}}=\sum_{l=0}^{N}(-)^{l}\binom{N}{l}B\left( -\frac{s}{2}-1+l,-\frac{t}{2}-1+N-l\right) k_{1}^{(\mu_{1}}..k_{1}^{\mu_{n-l}}k_{3}^{\mu_{n-l+1}}..k_{3}^{\mu_{N})}. \label{B}
\end{equation}
The corresponding $t-u$ channel open string scattering amplitude can be
calculated to be \cite{Closed}
\begin{equation}
\mathcal{T}_{N;tu}^{\mu_{1}\mu_{2}\cdot\cdot\mu_{n}}=\sum_{l=0}^{N}\binom
{N}{l}B\left( -\frac{t}{2}+N-l-1,-\frac{u}{2}-1\right) k_{1}^{(\mu_{1
}..k_{1}^{\mu_{N-l}}k_{3}^{\mu_{N-l+1}}k_{3}^{\mu_{N})}.
\end{equation}
The HSS limit of the string BCJ relation for these amplitudes was worked out
to be \cite{Closed}
\begin{equation}
\mathcal{T}_{N}(s,t)\simeq(-)^{N}\frac{\sin\left( \pi u/2\right) }{\sin\left( \pi s/2\right) }\mathcal{T}_{N}(t,u) \label{HBCJ}
\end{equation}
where
\begin{align}
\mathcal{T}_{N}(t,u) & \simeq\sqrt{\pi}(-1)^{N-1}2^{-N}E^{-1-2N}\left(
\sin\frac{\phi}{2}\right) ^{-3}\left( \cos\frac{\phi}{2}\right)
^{5-2N}\nonumber\\
& \cdot\exp\left[ -\frac{t\ln t+u\ln u-(t+u)\ln(t+u)}{2}\right] . \label{tu}
\end{align}
Note that, unlike the tachyon case, this relation was proved only in the HSS
limit. The next key step was that the result of Eq.(\ref{HBCJ}) can be
generalized to the case of three tachyons and one arbitrary string state
\cite{PRL,CHLTY}, and then to the case of four arbitrary string states. This
generalization was based on the important result that, at each fixed mass
level $N,$ the high energy fixed angle string scattering amplitudes for states
differing from the leading Regge trajectory higher spin state in the second vertex
are all proportional to each other \cite{PRL,CHLTY}
\begin{equation}
\frac{T_{st}^{(N,2m,q)}}{T_{st}^{(N,0,0)}}\simeq\frac{T_{tu}^{(N,2m,q)}}{T_{tu}^{(N,0,0)}}=\left( -\frac{1}{M}\right) ^{2m+q}\left( \frac{1}{2}\right) ^{m+q}(2m-1)!!. \label{HS}
\end{equation}
Here $\mathcal{T}_{N}(t,u)=T_{tu}^{(N,0,0)}$ for the case of three tachyons
and one tensor, and $T^{(N,2m,q)}$ represents leading order hard open string
scattering amplitudes with three arbitrary string states and one higher spin
string state of the following form \cite{PRL,CHLTY}
\begin{equation}
\left\vert N,2m,q\right\rangle \equiv(\alpha_{-1}^{T})^{N-2m-2q}(\alpha
_{-1}^{L})^{2m}(\alpha_{-2}^{L})^{q}|0,k\rangle\label{GR}
\end{equation}
where the polarizations of the higher spin string state with momentum $k_{2}$
on the scattering plane were defined to be $e^{P}=\frac{1}{M_{2}}(E_{2},\mathrm{k}_{2},0)=\frac{k_{2}}{M_{2}}$, $e^{L}=\frac{1}{M_{2}}(\mathrm{k}_{2},E_{2},0)$ and $e^{T}=(0,0,1)$, and we have omitted possible
tensor indices of the other three string states. This high energy symmetry of
string theory was first conjectured by Gross in 1988 \cite{Gross} and was
explicitly proved in \cite{ChanLee1,ChanLee2,PRL, CHLTY,susy}.
Finally, the string BCJ relations for arbitrary string states in the hard
scattering limit can be written as \cite{Closed}
\begin{equation}
T_{st}^{(N,2m,q)}\simeq(-)^{N}\frac{\sin\left( \pi u/2\right) }{\sin\left(
\pi s/2\right) }T_{tu}^{(N,2m,q)}=\frac{\sin\left( \pi k_{2}.k_{4}\right)
}{\sin\left( \pi k_{1}.k_{2}\right) }T_{tu}^{(N,2m,q)}. \label{HBCJ2}
\end{equation}
Eq.(\ref{HS}) and Eq.(\ref{HBCJ2}) can be combined to form the
\textit{extended linear relations in the HSS limit}. These relations relate
string scattering amplitudes of string states with different spins and
different channels in the HSS limit, and can be used to reduce the infinite
number of independent hard string scattering amplitudes from $\infty$ down to
$1$.
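For concreteness, the ratios in Eq.(\ref{HS}) at a fixed mass level can be tabulated directly; the short sketch below is purely illustrative and assumes $M^{2}=2(N-1)$ with $\alpha^{\prime}=\frac{1}{2}$ and the physical-state condition $N-2m-2q\geq0$.
\begin{verbatim}
# Sketch: ratios T^{(N,2m,q)}/T^{(N,0,0)} of Eq. (HS) at mass level N.
from math import sqrt
from sympy import Rational, factorial2
def hard_ratio(N, m, q):            # assumes M^2 = 2(N-1), alpha' = 1/2
    M = sqrt(2*(N - 1))
    return (-1/M)**(2*m + q) * Rational(1, 2)**(m + q) * factorial2(2*m - 1)
N = 4
for m in range(0, N//2 + 1):
    for q in range(0, (N - 2*m)//2 + 1):   # keep N - 2m - 2q >= 0
        print((N, 2*m, q), hard_ratio(N, m, q))
\end{verbatim}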
Note that historically the motivation to probe string theory BCJ relations in
this context was the calculation of high energy closed string scattering
amplitudes \cite{Closed} by using the KLT relations \cite{KLT}. Indeed, it was
found that the saddle point calculations of high energy fixed angle open
string scattering amplitudes were available only for the $t-u$ channel, but
not reliable for the $s-t$ channel, nor for the closed string high energy
amplitude calculation \cite{Closed}. So it was important to use another method
to express $s-t$ channel hard string scattering amplitudes in terms of $t-u$
channel HSS amplitudes.
The factor $\frac{\sin\left( \pi u/2\right) }{\sin\left( \pi s/2\right) }$ in
Eq.(\ref{HBCJ2}), which was missing in the literature
\cite{GM,zeros}, has important physical interpretations \cite{Closed}. The
presence of poles gives an infinite number of resonances in the string spectrum,
and the zeros give the coherence of string scatterings. These poles and zeros
survive in the high energy limit and cannot be dropped. Presumably, the
factor in the string BCJ relation in Eq.(\ref{HBCJ2}) triggers the failure of
saddle point calculation in the $s-t$ channel.
The two relations in Eq.(\ref{teq}) and Eq.(\ref{HBCJ2}) can be written as the
four point \textit{string BCJ relation}, valid at all energies, as
\begin{equation}
A_{st}=\frac{\sin\left( \pi k_{2}.k_{4}\right) }{\sin\left( \pi k_{1}.k_{2}\right) }A_{tu} \label{BCJ3}
\end{equation}
\textit{if} one can generalize the proof of Eq.(\ref{HBCJ2}) to all energies.
This was done in a paper based on monodromy of integration in string amplitude
calculation published in 2009 \cite{stringBCJ}. However, the explicit forms of
the amplitudes $A_{st}$ and $A_{tu}$ were \textit{not} calculated in
\cite{stringBCJ} and the extended linear relations were not addressed there.
In the next section, we will provide an alternative proof of this string BCJ relation.
\subsection{Regge string scatterings}
Another interesting regime of the string BCJ relation in Eq.(\ref{BCJ3}) was
the Regge regime. The $s-t$ channel RSS amplitude of three tachyons and one
higher spin state in Eq.(\ref{GR}) was calculated to be \cite{KLY}
\begin{align}
R_{st}^{(n,2m,q)}= & B\left( -1-\frac{s}{2},-1-\frac{t}{2}\right)
\sqrt{-t}^{n-2m-2q}\left( \frac{1}{2M_{2}}\right) ^{2m+q}\nonumber\\
\cdot & 2^{2m}(\tilde{t}^{\prime})^{q}U\left( -2m\,,\,\frac{t}{2}+2-2m\,,\,\frac{\tilde{t}^{\prime}}{2}\right) \label{13}
\end{align}
where $\tilde{t}^{\prime}=t+M_{2}^{2}-M_{3}^{2}$ and $U$ is the Kummer
function of the second kind. The corresponding $t-u$ channel amplitude was
calculated to be \cite{Closed2}
\begin{align}
R_{tu}^{(n,2m,q)} & =(-)^{n}B(-1-\frac{t}{2},-1-\frac{u}{2})(\sqrt{-t})^{n-2m-2q}\left( \frac{1}{2M_{2}}\right) ^{2m+q}\nonumber\\
& \cdot2^{2m}(\tilde{t}^{\prime})^{q}U\left( -2m\,,\,\frac{t}{2}+2-2m\,,\,\frac{\tilde{t}^{\prime}}{2}\right) . \label{18}
\end{align}
We can calculate the ratio of the two amplitudes as follows
\begin{align}
\frac{R_{st}^{(n,2m,q)}}{R_{tu}^{(n,2m,q)}} & =\left( -1\right) ^{n}\frac{B\left( -\frac{s}{2}-1,-\frac{t}{2}-1\right) }{B\left( -\frac{t}{2}-1,-\frac{u}{2}-1\right) }\nonumber\\
& =\frac{\left( -1\right) ^{n}\Gamma\left( -\frac{s}{2}-1\right) \Gamma\left( \frac{s}{2}+2-n\right) }{\Gamma\left( -\frac{u}{2}-1\right) \Gamma\left( \frac{u}{2}+2-n\right) }\nonumber\\
& =\frac{\left( -1\right) ^{n}\Gamma\left( -\frac{s}{2}-1\right) \Gamma\left( \frac{s}{2}+2\right) }{\Gamma\left( -\frac{u}{2}-1\right) \Gamma\left( \frac{u}{2}+2\right) }\cdot\frac{\left( \frac{u}{2}+2-n\right) \cdots\left( \frac{u}{2}+1\right) }{\left( \frac{s}{2}+2-n\right) \cdots\left( \frac{s}{2}+1\right) }.
\end{align}
One can now take the Regge limit, $t=$ fixed and $s\sim-u\rightarrow\infty$ to
get
\begin{equation}
\frac{R_{st}^{(n,2m,q)}}{R_{tu}^{(n,2m,q)}}\simeq\frac{\left( -1\right)
^{n}\Gamma\left( -\frac{s}{2}-1\right) \Gamma\left( \frac{s}{2}+2\right)
}{\Gamma\left( -\frac{u}{2}-1\right) \Gamma\left( \frac{u}{2}+2\right)
}\cdot\left( -1\right) ^{n}=\frac{\sin\pi(\frac{u}{2}+2)}{\sin\pi(\frac
{s}{2}+2)}=\frac{\sin\pi\left( k_{2}\cdot k_{4}\right) }{\sin\pi\left(
k_{1}\cdot k_{2}\right) },
\end{equation}
which can be written as
\begin{equation}
R_{st}^{(n,2m,q)}\simeq\frac{\sin\left( \pi k_{2}.k_{4}\right) }{\sin\left(
\pi k_{1}.k_{2}\right) }R^{(n,2m,q)}(t,u), \label{RR1}
\end{equation}
and is consistent with the string BCJ relation in Eq.(\ref{BCJ3}).
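As an illustrative numerical sketch (the kinematic point is arbitrary), one can watch the exact ratio $(-1)^{n}B(-\frac{s}{2}-1,-\frac{t}{2}-1)/B(-\frac{t}{2}-1,-\frac{u}{2}-1)$ approach the sine factor as $s$ grows with $t$ fixed; the constraint $s+t+u=2n-8$ for three tachyons and one level-$n$ state ($\alpha^{\prime}=\frac{1}{2}$) is assumed.
\begin{verbatim}
# Sketch: Regge limit of the exact ratio R_st/R_tu, cf. Eq. (RR1).
from mpmath import mp, beta, sin, pi
mp.dps = 30
n, t = 3, mp.mpf("-2.3")
for s in [mp.mpf("40.7"), mp.mpf("400.7"), mp.mpf("4000.7")]:
    u = 2*n - 8 - s - t             # assumed mass-shell constraint
    lhs = (-1)**n * beta(-s/2 - 1, -t/2 - 1)/beta(-t/2 - 1, -u/2 - 1)
    rhs = sin(pi*u/2)/sin(pi*s/2)
    print(s, lhs/rhs)               # tends to 1 as s grows
\end{verbatim}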
The result in Eq.(\ref{RR1}) can be generalized to the general leading order
Regge string states at each fixed mass level $N=\sum_{n,m,l>0}np_{n}+mq_{m}+lr_{l}$ \cite{KLY}
\begin{equation}
\left\vert p_{n},q_{m},r_{l}\right\rangle =\prod_{n>0}(\alpha_{-n}^{T})^{p_{n}}\prod_{m>0}(\alpha_{-m}^{P})^{q_{m}}\prod_{l>0}(\alpha_{-l}^{L})^{r_{l}}|0,k\rangle. \label{RR}
\end{equation}
For this general case, the $s-t$ channel of the scattering amplitude in the
Regge limit was calculated to be \cite{LY2014}
\begin{align}
R_{st}^{\left\vert p_{n},q_{m},r_{l}\right\rangle }= & \prod_{n=1}\left[
\left( n-1\right) !\sqrt{-t}\right] ^{p_{n}}\prod_{m=1}\left[ \left(
m-1\right) !\frac{\tilde{t}}{2M_{2}}\right] ^{q_{m}}\prod_{l=1}\left[
\left( l-1\right) !\frac{\tilde{t}^{\prime}}{2M_{2}}\right] ^{r_{l
}\nonumber\\
\cdot & F_{1}\left( -\frac{t}{2}-1;-q_{1},-r_{1};\frac{s}{2};-\frac
{s}{\tilde{t}},-\frac{s}{\tilde{t}^{\prime}}\right) B\left( -\frac{s
{2}-1,-\frac{t}{2}-1\right)
\end{align}
where $F_{1}$ is the first Appell function. On the other hand, one can
calculate the $t-u$ channel amplitude, which was missing in \cite{LY2014}, as follows
\begin{align}
R_{tu}^{\left\vert p_{n},q_{m},r_{l}\right\rangle }= & \int_{1}^{\infty
}dx\cdot x^{k_{1}\cdot k_{2}}(x-1)^{k_{2}\cdot k_{3}}\left[ \frac{k_{1}^{P
}{x}-\frac{k_{3}^{P}}{1-x}\right] ^{q_{1}}\left[ \frac{k_{1}^{L}}{x
-\frac{k_{3}^{L}}{1-x}\right] ^{r_{l}}\nonumber\\
\cdot & \prod_{n=1}\left[ -\frac{\left( n-1\right) !k_{3}^{T}}{\left(
1-x\right) ^{n}}\right] \prod_{m=2}\left[ -\frac{\left( m-1\right)
!k_{3}^{P}}{\left( 1-x\right) ^{m}}\right] \prod_{l=2}\left[
-\frac{\left( l-1\right) !k_{3}^{L}}{\left( 1-x\right) ^{l}}\right]
\nonumber\\
= & \prod_{n=1}\left[ \left( n-1\right) !\sqrt{-t}\right] ^{p_{n}
\prod_{m=1}\left[ \left( m-1\right) !\frac{\tilde{t}}{2M_{2}}\right]
^{q_{m}}\prod_{l=1}\left[ \left( l-1\right) !\frac{\tilde{t}^{\prime
}{2M_{2}}\right] ^{r_{l}}\nonumber\\
\cdot & \sum_{i}^{q_{1}}\binom{q_{1}}{i}\left( -\frac{s}{\tilde{t}}\right)
^{i}\sum_{j}^{r_{1}}\binom{r_{1}}{j}\left( -\frac{s}{\tilde{t}^{\prime
}\right) ^{j}\left( -1\right) ^{k_{2}\cdot k_{3}}\nonumber\\
\cdot & \int_{1}^{\infty}dx\cdot x^{-\frac{s}{2}-2+N-i-j}(1-x)^{-\frac{t
{2}-2+i+j}.
\end{align}
We can do the change of variable $y=\frac{x-1}{x}$ to get
\begin{align}
R_{tu}^{\left\vert p_{n},q_{m},r_{l}\right\rangle }= & \left( -1\right)
^{N}\prod_{n=1}\left[ \left( n-1\right) !\sqrt{-t}\right] ^{p_{n}
\prod_{m=1}\left[ \left( m-1\right) !\frac{\tilde{t}}{2M_{2}}\right]
^{q_{m}}\prod_{l=1}\left[ \left( l-1\right) !\frac{\tilde{t}^{\prime
}{2M_{2}}\right] ^{r_{l}}\nonumber\\
\cdot & \sum_{i}^{q_{1}}\binom{q_{1}}{i}\left( -\frac{s}{\tilde{t}}\right)
^{i}\sum_{j}^{r_{1}}\binom{r_{1}}{j}\left( -\frac{s}{\tilde{t}^{\prime
}\right) ^{j}\left( -1\right) ^{-i-j}\int_{0}^{1}dy\cdot y^{\frac{-t
{2}-2+i+j}\left( 1-y\right) ^{\frac{-u}{2}-2}\nonumber\\
= & \left( -1\right) ^{N}\prod_{n=1}\left[ \left( n-1\right) !\sqrt
{-t}\right] ^{p_{n}}\prod_{m=1}\left[ \left( m-1\right) !\frac{\tilde{t
}{2M_{2}}\right] ^{q_{m}}\prod_{l=1}\left[ \left( l-1\right) !\frac
{\tilde{t}^{\prime}}{2M_{2}}\right] ^{r_{l}}\nonumber\\
\cdot & \sum_{i}^{q_{1}}\binom{q_{1}}{i}\left( \frac{-s}{\tilde{t}}\right)
^{i}\sum_{j}^{r_{1}}\binom{r_{1}}{j}\left( \frac{-s}{\tilde{t}^{\prime
}\right) ^{j}\left( -1\right) ^{-i-j}B\left( \frac{-t}{2}-1+i+j,\frac
{-u}{2}-1\right)
\end{align}
In the Regge limit, $t=$ fixed and $s\sim-u\rightarrow\infty,$ one gets
\begin{align}
R_{tu}^{\left\vert p_{n},q_{m},r_{l}\right\rangle }\simeq & \left(
-1\right) ^{N}\prod_{n=1}\left[ \left( n-1\right) !\sqrt{-t}\right]
^{p_{n}}\prod_{m=1}\left[ \left( m-1\right) !\frac{\tilde{t}}{2M_{2
}\right] ^{q_{m}}\prod_{l=1}\left[ \left( l-1\right) !\frac{\tilde
{t}^{\prime}}{2M_{2}}\right] ^{r_{l}}\nonumber\\
\cdot & F_{1}\left( -\frac{t}{2}-1;-q_{1},-r_{1};\frac{s}{2};-\frac
{s}{\tilde{t}},-\frac{s}{\tilde{t}^{\prime}}\right) B\left( -\frac{t
{2}-1,-\frac{u}{2}-1+N\right) .
\end{align}
Finally, one can derive the string BCJ relation in the Regge limit
\begin{equation}
\frac{R_{st}^{\left\vert p_{n},q_{m},r_{l}\right\rangle }}{R_{tu}^{\left\vert
p_{n},q_{m},r_{l}\right\rangle }}=\frac{\left( -1\right) ^{N}B\left(
-\frac{s}{2}-1,-\frac{t}{2}-1\right) }{B\left( -\frac{t}{2}-1,-\frac{u}
{2}-1+N\right) }=\frac{\sin\pi\left( \frac{u}{2}+2-N\right) }{\left(
-1\right) ^{N}\sin\pi\left( \frac{s}{2}+2\right) }=\frac{\sin k_{2}\cdot
k_{4}}{\sin k_{1}\cdot k_{2}}. \label{RR2}
\end{equation}
In contrast to the linear relations calculated in the hard scattering limit
in Eq.(\ref{HS}), it was shown \cite{LY2014,LY} that there exist infinitely many
recurrence relations among RSS amplitudes. For example, the recurrence relation
\cite{LY2014} (here $(N;q_{1},r_{1})$ etc. refer to states in Eq.(\ref{GR}))
\begin{equation}
\sqrt{-t}\left[ R_{st}^{(N;q_{1},r_{1})}+R_{st}^{(N;q_{1}-1,r_{1}+1)}\right]
-MR_{st}^{(N;q_{1}-1,r_{1})}=0 \label{APP}
\end{equation}
for arbitrary mass levels $M^{2}=2(N-1)$ can be derived from recurrence
relations of the Appell functions. Eq.(\ref{RR2}) and Eq.(\ref{APP}) can be
combined to form one example of the \textit{extended recurrence relations in
the RSS limit}. The possible connection of field theory BCJ relations
\cite{BCJ} and Regge string recurrence relations \cite{LY} was first suggested
in \cite{LY}. These relations relate string scattering amplitudes of string
states with different spins and different channels in the RSS limit. Similar
to the HSS limit, it can be shown \cite{LY2014,LY} that the complete extended
recurrence relations in the RSS limit can be used to reduce the infinite
number of independent RSS amplitudes from $\infty$ down to $1$.
We have seen in this section that the HSS and the RSS amplitudes calculated
previously are consistent with the string BCJ relation in Eq.(\ref{BCJ3}). In
the next section, we will give an explicit proof of the string BCJ relation for
all energies. In addition, in section IV we will calculate the nonrelativistic
limit of the string BCJ relation to obtain the extended recurrence relations in the
nonrelativistic scattering limit.
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\section{Explicit Proof of String BCJ}
In this section, we generalize the explicit calculations of high energy four
point string BCJ relations reviewed in the last section to all energies. The
proof of the $n$-point string BCJ relations using monodromy was given in
\cite{stringBCJ} without calculating the string amplitudes explicitly. Here we will
explicitly calculate string scattering amplitudes for four arbitrary string
states for both $s-t$ and $t-u$ channels and directly prove the four point
string BCJ relations.
There are at least two motivations to calculate the string BCJ relation
explicitly and give an alternative proof of the relations. Firstly, the proof
in \cite{stringBCJ} assumed negative real parts of $k_{i}\cdot k_{j}$, and this
puts some constraints on the kinematic regime for the validity of the string
BCJ relations. Our explicit proof here is, on the contrary, valid for all
kinematic regimes. Secondly, in section II the explicit calculations of
scattering amplitudes in the high energy string BCJ relation had led to the
extended relations both in the hard scattering limit and the Regge limit. As
we will see in section IV, the explicit calculation of the string BCJ relation
in the nonrelativistic scattering limit will also lead to new recurrence
relations among low energy string scattering amplitudes. Similarly, we will
see along the calculation of this section that the equality of the string BCJ
relations can be identified as the equalities of coefficients of two
multi-linear polynomials of\ ${k_{1}^{\mu}}$ and ${k_{3}^{\nu}}$ in the $s-t$
and $t-u$ channel amplitudes.
Instead of path integral approach, we will use the method of Wick contraction
to do the open string scattering amplitude calculation. As usual, we will be
fixing $SL(2,R)$ by choosing string worldsheet coordinates to be
$x_{1}=0,x_{3}=1,x_{4}=\infty$. We first give the answer for a simple example
($\alpha^{\prime}=\frac{1}{2}$; $s-t$ channel):
\begin{align}
\mathcal{T}^{\mu\nu} & =\int
{\textstyle\prod_{i=1}^{4}}
dx_{i}<e^{ik_{1}X}\partial^{2}X^{(\mu}\partial X^{\nu)}e^{ik_{2}X}e^{ik_{3
X}e^{ik_{4}X}>\nonumber\\
& =\frac{\Gamma(-\frac{s}{2}-1)\Gamma(-\frac{t}{2}-1)}{\Gamma(\frac{u}{2
+2)}\left[ \frac{t}{2}\left( {\frac{t^{2}}{4}-1}\right) {k_{1}^{\mu
k_{1}^{\nu}-}\left( {\frac{s}{2}+1}\right) {\frac{t}{2}}\left( {\frac{t
{2}+1}\right) {k_{1}^{\mu}k_{3}^{\nu}}\right. \nonumber\\
& \left. {+\frac{s}{2}}\left( {\frac{s}{2}+1}\right) \left( {\frac{t
{2}+1}\right) {k_{3}^{\mu}k_{1}^{\nu}-\frac{s}{2}}\left( {\frac{s^{2}}{4
-1}\right) {k_{3}^{\mu}k_{3}^{\nu}}\right] .
\end{align}
The result is a multi-linear polynomial of\ ${k_{1}^{\mu}}$ and ${k_{3}^{\nu
}$ due to the choice of worldsheet coordinates above. To prove the equality of
$s-t$ and $t-u$ channel calculations, we can just show the equality
of coefficients of a typical term in each channel.
There are two key observations before we proceed to do the calculation.
Firstly, we can drop the fourth vertex $V_{4}(x_{4})$ in the actual
calculation due to the choice $x_{4}=\infty.$ Secondly, there are two types of
contributions in the contractions between two vertex operators: contractions
between $\partial^{a}X$ and $\partial^{a^{\prime}}X$, and contractions between
$\partial^{a}X$ and $e^{ikX}$.
We are now going to calculate the most general four point function of string
vertex operators
\begin{align}
& \left\langle V_{1}(x_{1})V_{2}(x_{2})V_{3}(x_{3})V_{4}(x_{4})\right\rangle
\nonumber\\
= & \left\langle \overset{a}{\left( \partial X\right) }\overset{b}{\left(
\partial X\right) }\overset{c}{\left( \partial X\right)
\overset{d}{\left( \partial X\right) }e^{ik_{1}X}(x_{1})\overset{e}{\left(
\partial X\right) }\overset{d^{\prime}}{\left( \partial X\right)
}\overset{f}{\left( \partial X\right) }\overset{g}{\left( \partial
X\right) }e^{ik_{2}X}(x_{2})\overset{g^{\prime}}{\left( \partial X\right)
}\overset{h}{\left( \partial X\right) }\overset{i}{\left( \partial
X\right) }\overset{b^{\prime}}{\left( \partial X\right) }e^{ik_{3}X
(x_{3})V_{4}(x_{4})\right\rangle .
\end{align}
We can write down the relevant three vertex operators as
\begin{subequations}
\label{vv}
\begin{align}
V_{1}(x_{1}) & =\overset{a}{\left( \partial X\right) }\overset{b}{\left(
\partial X\right) }\overset{c}{\left( \partial X\right)
\overset{d}{\left( \partial X\right) }e^{ik_{1}X}(x_{1}),\\
V_{2}(x_{2}) & =\overset{e}{\left( \partial X\right) }\overset{d^{\prime
}}{\left( \partial X\right) }\overset{f}{\left( \partial X\right)
}\overset{g}{\left( \partial X\right) }e^{ik_{2}X}(x_{2}),\\
V_{3}(x_{3}) & =\overset{g^{\prime}}{\left( \partial X\right)
}\overset{h}{\left( \partial X\right) }\overset{i}{\left( \partial
X\right) }\overset{b^{\prime}}{\left( \partial X\right) }e^{ik_{3}X
(x_{3}),
\end{align}
where
\end{subequations}
\begin{subequations}
\label{vvv}
\begin{align}
\overset{a}{\left( \partial X\right) } & =\prod_{a=1}^{A}\left(
i\varepsilon_{11a}^{(a)}\cdot\partial_{1}^{\alpha_{11a}}X_{1}\right)
,\overset{b}{\left( \partial X\right) }=\prod_{b=1}^{B}\left(
i\varepsilon_{12b}^{(b)}\cdot\partial_{1}^{\alpha_{12b}}X_{1}\right)
,\overset{c}{\left( \partial X\right) }=\prod_{c=1}^{C}\left(
i\varepsilon_{13c}^{(c)}\cdot\partial_{1}^{\alpha_{13c}}X_{1}\right) ,\\
\overset{d}{\left( \partial X\right) } & =\prod_{d=1}^{D}\left(
i\varepsilon_{14d}^{(d)}\cdot\partial_{1}^{\alpha_{14d}}X_{1}\right)
,\overset{e}{\left( \partial X\right) }=\prod_{e=1}^{E}\left(
i\varepsilon_{21a}^{(e)}\cdot\partial_{2}^{\alpha_{21e}}X_{2}\right)
,\overset{d^{\prime\prime}}{\left( \partial X\right) }=\prod_{d^{\prime
=1}^{D}\left( i\varepsilon_{22b}^{(d^{\prime})}\cdot\partial_{2
^{\alpha_{22d^{\prime}}}X_{2}\right) ,\\
\overset{f}{\left( \partial X\right) } & =\prod_{f=1}^{F}\left(
i\varepsilon_{23f}^{(f)}\cdot\partial_{2}^{\alpha_{23f}}X_{2}\right)
,\overset{g}{\left( \partial X\right) }=\prod_{g=1}^{G}\left(
i\varepsilon_{24g}^{(g)}\cdot\partial_{2}^{\alpha_{24g}}X_{2}\right)
,\overset{g^{\prime}}{\left( \partial X\right) }=\prod_{g^{\prime}=1
^{G}\left( i\varepsilon_{31e}^{(e^{\prime})}\cdot\partial_{3}^{\alpha
_{31g^{\prime}}}X_{3}\right) ,\\
\overset{h}{\left( \partial X\right) } & =\prod_{h=1}^{H}\left(
i\varepsilon_{32h}^{(h)}\cdot\partial_{3}^{\alpha_{32h}}X_{3}\right)
,\overset{i}{\left( \partial X\right) }=\prod_{i=1}^{I}\left(
i\varepsilon_{33i}^{(i)}\cdot\partial_{3}^{\alpha_{33i}}X_{3}\right)
,\overset{b^{\prime}}{\left( \partial X\right) }=\prod_{b^{\prime}=1
^{B}\left( i\varepsilon_{34b}^{(b^{\prime})}\cdot\partial_{3}^{\alpha_{34b
}X_{3}\right) .
\end{align}
In Eq.(\ref{vvv}), $\varepsilon_{11a}^{(a)}$ and $\alpha_{11a}$ etc. are
polarizations and orders of worldsheet differential respectively. In
Eq.(\ref{vv}), we have four groups of $\partial^{a}X$ for each vertex
operator. Two of them will contract with the $\partial^{b}X$ factors of the other
two vertex operators, respectively, and the remaining two will contract with the
$e^{ikX}$ factors of the other two vertex operators. For illustration, we
have used the paired dummy indices $b,b^{\prime}$; $d,d^{\prime}$ and $g,g^{\prime}$
for these contractions. The other six indices $a,c,e,f,h$ and $i$ are reserved
for contractions with $e^{ikX}.$
The mass levels of the three vertex operators are
\end{subequations}
\begin{subequations}
\begin{align}
N_{1} & =S_{A}+\text{\ }S_{B}+S_{C}+S_{D},\\
N_{2} & =S_{E}+S_{D}^{\prime}+S_{F}+S_{G},\\
N_{3} & =S_{G}^{\prime}+S_{H}+S_{I}+S_{B}^{\prime},
\end{align}
where we define
\end{subequations}
\begin{subequations}
\begin{align}
S_{A} & =\sum_{a=1}^{A}\alpha_{11a}\text{, \ \ }S_{B}=\sum_{b=1}^{B
\alpha_{12b}\text{, \ \ }S_{C}=\sum_{c=1}^{C}\alpha_{13c}\text{, \ \
S_{D}=\sum_{d=1}^{D}\alpha_{14d},\\
S_{E} & =\sum_{e=1}^{E}\alpha_{21e}\text{, \ \ }S_{F}=\sum_{f=1}^{F
\alpha_{23f}\text{, \ \ }S_{G}=\sum_{g=1}^{G}\alpha_{24g}\text{, \ \
S_{H}=\sum_{h=1}^{H}\alpha_{32h},\\
S_{I} & =\sum_{i=1}^{I}\alpha_{33i}\text{, \ \ }S_{B}^{\prime
=\sum_{b^{\prime}=1}^{B}\alpha_{34b^{\prime}}\text{, \ \ }S_{D}^{\prime
=\sum_{d^{\prime}=1}^{D}\alpha_{22d^{\prime}}\text{, \ \ }S_{G}^{\prime
=\sum_{g^{\prime}=1}^{G}\alpha_{31g^{\prime}}.
\end{align}
Then we have
\end{subequations}
\begin{subequations}
\begin{align}
s & =-(k_{1}+k_{2})^{2},t=-(k_{2}+k_{3})^{2},u=-(k_{1}+k_{3})^{2},\\
k_{1}\cdot k_{2} & =-\frac{s}{2}-2+N_{1}+N_{2},\\
k_{2}\cdot k_{3} & =-\frac{t}{2}-2+N_{2}+N_{3},\\
k_{1}\cdot k_{3} & =-\frac{u}{2}-2+N_{1}+N_{3},\\
s+t+u & =2N^{\prime}-8\text{, \ \ with\ }N^{\prime}=N_{1}+N_{2}+N_{3}+N_{4}.
\end{align}
We are now ready to do the calculation. After imposing the $SL(2,R)$ gauge
choice, we get
\end{subequations}
\begin{align}
T= & \prod_{a=1}^{A}\left[ \varepsilon_{11a}^{(a)}\cdot k_{3
(-1)(\alpha_{11a}-1)!\right] \prod_{b=1}^{B}\left[ \varepsilon_{12b
^{(b)}\cdot\varepsilon_{34b}^{(b)}(-1)^{\alpha_{34b}-1}(\alpha_{12b
+\alpha_{34b}-1)!\right] \nonumber\\
\cdot & \prod_{c=1}^{C}\left[ \varepsilon_{13c}^{(c)}\cdot k_{2
(-1)(\alpha_{13c}-1)!\right] \prod_{d=1}^{D}\left[ \varepsilon_{14d
^{(d)}\cdot\varepsilon_{22d}^{(d)}(-1)^{\alpha_{22d}-1}(\alpha_{14d
+\alpha_{22d}-1)!\right] \nonumber\\
\cdot & \prod_{e=1}^{E}\left[ \varepsilon_{21e}^{(e)}\cdot k_{1
(-1)^{^{\alpha_{21e}}}(\alpha_{21e}-1)!\right] \prod_{f=1}^{F}\left[
\varepsilon_{23f}^{(f)}\cdot k_{3}(-1)(\alpha_{23f}-1)!\right] \nonumber\\
\cdot & \prod_{g=1}^{G}\left[ \varepsilon_{24g}^{(g)}\cdot\varepsilon
_{31g}^{(g)}(-1)^{\alpha_{31g}-1}(\alpha_{24g}+\alpha_{31g}-1)!\right]
\prod_{h=1}^{H}\left[ \varepsilon_{32h}^{(h)}\cdot k_{2}(-1)(\alpha
_{32h}-1)!\right] \nonumber\\
\cdot & \prod_{i=1}^{I}\left[ \varepsilon_{33i}^{(i)}\cdot k_{1
(-1)^{\alpha_{33i}}(\alpha_{33i}-1)!\right] \nonumber\\
\cdot & \int dx|-x|^{k_{1}\cdot k_{2}}|x-1|^{k_{2}\cdot k_{3}}x^{-\left(
S_{C}+S_{D}+S_{D}^{\prime}+S_{E}\right) }\left( 1-x\right) ^{-\left(
S_{F}+S_{G}+S_{G}^{\prime}+S_{H}\right) }. \label{long}
\end{align}
It is easy to see that only the last term of Eq.(\ref{long}) will be different
for the $s-t$ and $t-u$ channel calculations. For the $s-t$ channel, the last
term becomes
\begin{align}
& \int_{0}^{1}dx\text{ }x^{k_{1}\cdot k_{2}}\left( 1-x\right) ^{k_{2}\cdot
k_{3}}x^{-\left( S_{C}+S_{D}+S_{D}^{\prime}+S_{E}\right) }\left(
1-x\right) ^{-\left( S_{F}+S_{G}+S_{G}^{\prime}+S_{H}\right) }\nonumber\\
= & \int_{0}^{1}dx\times x^{-\frac{s}{2}-2+S_{A}+S_{B}+S_{F}+S_{G}
}(1-x)^{-\frac{t}{2}-2+S_{E}+S_{D}+S_{I}+S_{B}^{\prime}}\nonumber\\
= & \frac{\Gamma\left( \frac{-s}{2}-1+S_{A}+S_{B}+S_{F}+S_{G}\right)
\Gamma\left( -\frac{t}{2}-1+S_{E}+S_{D}^{\prime}+S_{I}+S_{B}^{\prime}\right)
}{\Gamma\left( \frac{u}{2}+2-N_{4}-S_{C}-S_{D}-S_{G}^{\prime}-S_{H}\right)
}.
\end{align}
For the $t-u$ channel, we have the last term
\begin{equation}
\int_{1}^{\infty}dx(x)^{k_{1}\cdot k_{2}}(x-1)^{k_{2}\cdot k_{3}}x^{-\left(
S_{C}+S_{D}+S_{D}^{\prime}+S_{E}\right) }(1-x)^{-\left( S_{F}+S_{G
+S_{G}^{\prime}+S_{H}\right) }.
\end{equation}
Define
\begin{equation}
\text{ }K=-\left( S_{F}+S_{G}+S_{G}^{\prime}+S_{H}\right) ,
\end{equation}
and making the change of variable $x=\frac{1}{1-y}$ in the integration, we end up with
\begin{align}
& (-1)^{K}\int_{0}^{1}dy(y)^{\frac{-t}{2}-2+S_{E}+S_{D}+S_{I}+S_{B}^{\prime}}(1-y)^{-\frac{u}{2}-2+N_{4}+S_{C}+S_{D}+S_{G}^{\prime}+S_{H}}\nonumber\\
= & (-1)^{K}\frac{\Gamma\left( \frac{-t}{2}-1+S_{E}+S_{D}+S_{I}+S_{B}^{\prime}\right) \Gamma\left( -\frac{u}{2}-1+N_{4}+S_{C}+S_{D}+S_{G}^{\prime}+S_{H}\right) }{\Gamma\left( \frac{s}{2}+2-S_{A}-S_{B}-S_{F}-S_{G}\right) }.
\end{align}
We are now ready to calculate the ratio
\begin{align}
\frac{T_{st}}{T_{tu}} & =\frac{(-1)^{K}\Gamma\left( \frac{-s}{2}-1+S_{A}+S_{B}+S_{F}+S_{G}\right) \Gamma\left( \frac{s}{2}+2-S_{A}-S_{B}-S_{F}-S_{G}\right) }{\Gamma\left( -\frac{u}{2}-1+N_{4}+S_{C}+S_{D}+S_{G}^{\prime}+S_{H}\right) \Gamma\left( \frac{u}{2}+2-N_{4}-S_{C}-S_{D}-S_{G}^{\prime}-S_{H}\right) }\nonumber\\
& =\left( -1\right) ^{-\left( S_{F}+S_{G}+S_{G}^{\prime}+S_{H}\right) }\cdot\frac{\sin\pi\left( \frac{u}{2}+2-N_{4}-S_{C}-S_{D}-S_{G}^{\prime}-S_{H}\right) }{\sin\pi\left( \frac{s}{2}+2-S_{A}-S_{B}-S_{F}-S_{G}\right) }\nonumber\\
& =\left( -1\right) ^{-N_{1}-N_{4}}\frac{\sin\pi\left( \frac{u}{2}+2\right) }{\sin\pi\left( \frac{s}{2}+2\right) }=\frac{\sin\pi\left( k_{2}\cdot k_{4}\right) }{\sin\pi\left( k_{1}\cdot k_{2}\right) },
\end{align}
where we have used the identity (\ref{math}). We have thus proved the four
point string BCJ relation by explicit calculation.
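The step from the second to the third line above uses only that integer shifts of the arguments produce signs, i.e. $\Gamma(z+m)\Gamma(1-z-m)=(-1)^{m}\Gamma(z)\Gamma(1-z)$ for integer $m$; a small numerical sketch of this identity (with arbitrary sample values) reads
\begin{verbatim}
# Sketch: integer shifts in the reflection formula only contribute signs.
from mpmath import mp, gamma, sin, pi
mp.dps = 25
z, m = mp.mpf("0.37"), 5                 # arbitrary non-integer z, integer m
print(gamma(z + m)*gamma(1 - z - m))
print((-1)**m * gamma(z)*gamma(1 - z))   # same value
print((-1)**m * pi/sin(pi*z))            # via Eq. (math)
\end{verbatim}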
\setcounter{equation}{0}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\section{Nonrelativistic String BCJ and Extended Recurrence Relations}
In this section, in constrast to the two high energy limits of string BCJ
relations discussed in section II, we discuss mass level dependent
nonrelativistic string BCJ relations. For simplicity, we will first calculate
both $s-t$ and $t-u$ channel NSS amplitudes of three tachyons and one leading
trajectory string state at arbitrary mass levels. We will then calculate NSS
amplitudes of three tachyons and one more general string state. We will see
that the mass and spin dependent nonrelativistic string BCJ relations can be
expressed in terms of Gauss hypergeometric functions. As an application, for
each fixed mass level $N$ we can then derive extended recurrence relations
among NSS amplitudes of string states with different spins and different channels.
\bigskip We choose $k_{2}$ to be the momentum of the leading trajectory string
state and the rest to be tachyons. In the CM frame
\begin{subequations}
\begin{align}
k_{1} & =\left( \sqrt{M_{1}^{2}+\vec{k_{1}}^{2}},-|\vec{k_{1}}|,0\right)
,\\
k_{2} & =\left( \sqrt{M_{2}^{2}+\vec{k_{1}}^{2}},+|\vec{k_{1}}|,0\right)
,\\
k_{3} & =\left( \sqrt{M_{3}^{2}+\vec{k_{3}}^{2}},-|\vec{k_{3}}|\cos
\phi,-|\vec{k_{3}}|\sin\phi\right) ,\\
k_{4} & =\left( \sqrt{M_{3}^{2}+\vec{k_{3}}^{2}},+|\vec{k_{3}}|\cos
\phi,+|\vec{k_{3}}|\sin\phi\right)
\end{align}
where $M_{1}=M_{3}=M_{4}=M_{tachyon}$ , $M_{2}=2(N-1)$ and $\phi$ is the
scattering angle on the scattering plane. Instead of the zero slope limit which
was used in the literature to get the field theory limit of the lowest mass
string state \cite{ymzero1,ymzero2}, \cite{Bzero1,Bzero2,Bzero3}, we will take
the nonrelativistic $|\vec{k_{1}}|<<M_{S}$ or large $M_{S}$ limit for the
massive string scattering amplitudes. In the nonrelativistic limit
($|\vec{k_{1}}|<<M_{S}$)
\end{subequations}
\begin{subequations}
\begin{align}
k_{1}\simeq & \left( M_{1}+\frac{\vec{k_{1}}^{2}}{2M_{1}},-|\vec{k_{1
}|,0\right) ,\\
k_{2}\simeq & \left( M_{2}+\frac{\vec{k_{1}}^{2}}{2M_{2}},+|\vec{k_{1
}|,0\right) ,\\
k_{3}\simeq & \left( -\frac{M_{1}+M_{2}}{2}-\frac{1}{4}\frac{M_{1}+M_{2
}{M_{1}M_{2}}|\vec{k_{1}}|^{2},-\left[ \frac{\epsilon}{2}+\frac{(M_{1
+M_{2})}{4M_{1}M_{2}\epsilon}|\vec{k_{1}}|^{2}\right] \cos\phi,\right.
\nonumber\\
& \left. -\left[ \frac{\epsilon}{2}+\frac{(M_{1}+M_{2})}{4M_{1
M_{2}\epsilon}|\vec{k_{1}}|^{2}\right] \sin\phi\right) ,\\
k_{4}\simeq & \left( -\frac{M_{1}+M_{2}}{2}-\frac{1}{4}\frac{M_{1}+M_{2
}{M_{1}M_{2}}|\vec{k_{1}}|^{2},+\left[ \frac{\epsilon}{2}+\frac{(M_{1
+M_{2})}{4M_{1}M_{2}\epsilon}|\vec{k_{1}}|^{2}\right] \cos\phi,\right.
\nonumber\\
& \left. +\left[ \frac{\epsilon}{2}+\frac{(M_{1}+M_{2})}{4M_{1
M_{2}\epsilon}|\vec{k_{1}}|^{2}\right] \sin\phi\right) .
\end{align}
where
\end{subequations}
\begin{equation}
\epsilon=\sqrt{(M_{1}+M_{2})^{2}-4M_{3}^{2}}.
\end{equation}
The three polarizations on the scattering plane are defined to be
\cite{ChanLee1,ChanLee2}
\begin{subequations}
\begin{align}
e^{P} & =\frac{1}{M_{2}}\left( \sqrt{M_{2}^{2}+\vec{k_{1}}^{2}},|\vec
{k_{1}}|,0\right) ,\\
e^{L} & =\frac{1}{M_{2}}\left( |\vec{k_{1}}|,\sqrt{M_{2}^{2}+\vec{k_{1
}^{2}},0\right) ,\\
e^{T} & =(0,0,1),
\end{align}
which in the low energy limit reduce to
\end{subequations}
\begin{subequations}
\begin{align}
e^{P} & \simeq\frac{1}{M_{2}}\left( M_{2}+\frac{\vec{k_{1}}^{2}}{2M_{2
},|\vec{k_{1}}|,0\right) ,\\
e^{L} & \simeq\frac{1}{M_{2}}\left( |\vec{k_{1}}|,M_{2}+\frac{\vec{k_{1
}^{2}}{2M_{2}},0\right) ,\\
e^{T} & \simeq\left( 0,0,1\right) .
\end{align}
One can then calculate the following kinematic quantities which will be used in the low
energy amplitude calculation
\end{subequations}
\begin{subequations}
\begin{align}
k_{1}\cdot e^{L} & =k_{1}^{L}=\frac{-(M_{1}+M_{2})}{M_{2}}|\vec{k_{1
}|+O\left( |\vec{k_{1}}|^{2}\right) ,\\
k_{3}\cdot e^{L} & =k_{3}^{L}=\frac{-\epsilon}{2}\cos\phi+\frac{M_{1}+M_{2
}{2M_{2}}|\vec{k_{1}}|+O\left( |\vec{k_{1}}|^{2}\right) ,\\
k_{1}\cdot e^{T} & =k_{1}^{T}=0,\\
k_{3}\cdot e^{T} & =k_{3}^{T}=\frac{-\epsilon}{2}\sin\phi+O\left(
|\vec{k_{1}}|^{2}\right) ,\\
k_{1}\cdot e^{P} & =k_{1}^{P}=-M_{1}+O\left( |\vec{k_{1}}|^{2}\right) ,\\
k_{3}\cdot e^{P} & =k_{3}^{P}=\frac{M_{1}+M_{2}}{2}-\frac{\epsilon}{2M_{2
}\cos\phi|\vec{k_{1}}|+O\left( |\vec{k_{1}}|^{2}\right) ,
\end{align}
and the Mandelstam variables
\end{subequations}
\begin{subequations}
\begin{align}
s & =\left( M_{1}+M_{2}\right) ^{2}+O\left( |\vec{k_{1}}|^{2}\right) ,\\
t & =-M_{1}M_{2}-2-\epsilon\cos\phi|\vec{k_{1}}|+O\left( |\vec{k_{1}
|^{2}\right) ,\\
u & =-M_{1}M_{2}-2+\epsilon\cos\phi|\vec{k_{1}}|+O\left( |\vec{k_{1}
|^{2}\right) .
\end{align}
\subsection{Leading Trajectory States}
We first calculate the nonrelativistic $s-t$ channel scattering amplitude of
three tachyons and one tensor string state
\end{subequations}
\begin{equation}
V_{2}=(i\partial X^{T})^{p}(i\partial X^{L})^{r}(i\partial X^{P})^{q}
e^{ik_{2}X}.
\end{equation}
where
\begin{equation}
N=p+r+q.
\end{equation}
To the leading order in energy, the nonrelativistic amplitude can be
calculated to be
\begin{align}
A_{st}^{(p,r,q)}= & \int_{0}^{1}dx\left( \frac{k_{1}^{T}}{x}-\frac
{k_{3}^{T}}{1-x}\right) ^{p}\left( \frac{k_{1}^{L}}{x}-\frac{k_{3}^{L}}
{1-x}\right) ^{r}\left( \frac{k_{1}^{P}}{x}-\frac{k_{3}^{P}}{1-x}\right)
^{q}|x|^{k_{1}\cdot k_{2}}|x-1|^{k_{2}\cdot k_{3}}\nonumber\\
\simeq & \int_{0}^{1}dx\left( \frac{\frac{\epsilon}{2}\sin\phi}{1-x}\right)
^{p}\left( \frac{\frac{\epsilon}{2}\cos\phi}{1-x}\right) ^{r}\left(
-\frac{M_{1}}{x}-\frac{\frac{M_{1}+M_{2}}{2}}{1-x}\right) ^{q}x^{k_{1}\cdot
k_{2}}(1-x)^{k_{2}\cdot k_{3}}\nonumber\\
= & \left( \frac{\epsilon}{2}\sin\phi\right) ^{p}\left( \frac{\epsilon
}{2}\cos\phi\right) ^{r}\left( -\frac{M_{1}+M_{2}}{2}\right) ^{q}
\cdot\overset{q}{\underset{l=0}{\sum}}\binom{q}{l}\left( \frac{2M_{1}}
{M_{1}+M_{2}}\right) ^{l}\nonumber\\
\cdot & B\left( 1-M_{1}M_{2}-l,\frac{M_{1}M_{2}}{2}+l\right) \nonumber\\
= & \left( \frac{\epsilon}{2}\sin\phi\right) ^{p}\left( \frac{\epsilon
}{2}\cos\phi\right) ^{r}\left( -\frac{M_{1}+M_{2}}{2}\right) ^{q}B\left(
1-M_{1}M_{2},\frac{M_{1}M_{2}}{2}\right) \nonumber\\
\cdot & \overset{q}{\underset{l=0}{\sum}}\left( -1\right) ^{l}\binom{q}
{l}\left( \frac{2M_{1}}{M_{1}+M_{2}}\right) ^{l}\frac{\left( \frac
{M_{1}M_{2}}{2}\right) _{l}}{\left( M_{1}M_{2}\right) _{l}}.
\end{align}
Finally, the summation above can be performed to give the Gauss hypergeometric
function $_{2}F_{1}$:
\begin{align}
A_{st}^{(p,r,q)} & =\left( \frac{\epsilon}{2}\sin\phi\right) ^{p}\left(
\frac{\epsilon}{2}\cos\phi\right) ^{r}\left( -\frac{M_{1}+M_{2}}{2}\right)
^{q}B\left( 1-M_{1}M_{2},\frac{M_{1}M_{2}}{2}\right) \nonumber\\
& \cdot\text{ }_{2}F_{1}\left( \frac{M_{1}M_{2}}{2};-q;M_{1}M_{2}
;\frac{2M_{1}}{M_{1}+M_{2}}\right) .
\end{align}
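The resummation into $_{2}F_{1}$ can be checked term by term; the following sketch (with hypothetical sample values of $M_{1}$, $M_{2}$ and $q$) compares the finite sum above with the terminating Gauss function.
\begin{verbatim}
# Sketch: sum_l (-1)^l C(q,l) z^l (a)_l/(c)_l = 2F1(-q,a;c;z), z = 2M1/(M1+M2).
from sympy import Rational, binomial, rf, hyper
M1, M2, q = Rational(1), Rational(3), 4          # sample (hypothetical) values
a, c, z = M1*M2/2, M1*M2, 2*M1/(M1 + M2)
finite_sum = sum((-1)**l * binomial(q, l) * z**l * rf(a, l)/rf(c, l)
                 for l in range(q + 1))
print(float(finite_sum))
print(float(hyper([-q, a], [c], z)))             # same value
\end{verbatim}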
Similarly, we calculate the corresponding nonrelativistic $t-u$ channel
amplitude as
\begin{align}
A_{tu}^{\left( p,r,q\right) }= & \int_{1}^{\infty}dx\left( \frac
{k_{1}^{T}}{x}-\frac{k_{3}^{T}}{1-x}\right) ^{p}\left( \frac{k_{1}^{L}}
{x}-\frac{k_{3}^{L}}{1-x}\right) ^{r}\left( \frac{k_{1}^{P}}{x}-\frac
{k_{3}^{P}}{1-x}\right) ^{q}|x|^{k_{1}\cdot k_{2}}|x-1|^{k_{2}\cdot k_{3}}
}\nonumber\\
\simeq & \left( -1\right) ^{N}\left( \frac{\epsilon}{2}\sin\phi\right)
^{p}\left( \frac{\epsilon}{2}\cos\phi\right) ^{r}\left( -\frac{M_{1}+M_{2}}
}{2}\right) ^{q}B\left( \frac{M_{1}M_{2}}{2},\frac{M_{1}M_{2}}{2}\right)
\nonumber\\
& \cdot\text{ }_{2}F_{1}\left( \frac{M_{1}M_{2}}{2};-q;M_{1}M_{2}
;\frac{2M_{1}}{M_{1}+M_{2}}\right) .
\end{align}
We are now ready to calculate the ratio of $s-t$ and $t-u$ channel amplitudes
\begin{align}
\frac{A_{st}^{(p,r,q)}}{A_{tu}^{(p,r,q)}} & =\left( -1\right) ^{N}
\frac{B\left( -M_{1}M_{2}+1,\frac{M_{1}M_{2}}{2}\right) }{B\left(
\frac{M_{1}M_{2}}{2},\frac{M_{1}M_{2}}{2}\right) }\nonumber\\
& =(-1)^{N}\frac{\Gamma\left( M_{1}M_{2}\right) \Gamma\left( -M_{1}
M_{2}+1\right) }{\Gamma\left( \frac{M_{1}M_{2}}{2}\right) \Gamma\left(
-\frac{M_{1}M_{2}}{2}+1\right) }\simeq\frac{\sin\pi\left( k_{2}\cdot
k_{4}\right) }{\sin\pi\left( k_{1}\cdot k_{2}\right) }, \label{NBCJ}
\end{align}
where, in the nonrelativistic limit, we have
\begin{subequations}
\begin{align}
k_{1}\cdot k_{2} & \simeq-M_{1}M_{2},\\
k_{2}\cdot k_{4} & \simeq\frac{\left( M_{1}+M_{2}\right) M_{2}}{2}.
\end{align}
So we have ended up with a consistent nonrelativistic \textit{level} $M_{2}$
\textit{dependent string BCJ relation}. Similar relations for $t-u$ and
$s-u$ channel amplitudes can be calculated. We stress that the above relation
is the stringy generalization of the massless field theory BCJ relation to
higher spin stringy particles. Moreover, as we will now show, there exist
many more relations among these amplitudes.
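Before turning to these, Eq.(\ref{NBCJ}) itself can be checked numerically using $k_{1}\cdot k_{2}\simeq-M_{1}M_{2}$, $k_{2}\cdot k_{4}\simeq\frac{(M_{1}+M_{2})M_{2}}{2}$ and $M_{2}^{2}=2(N-1)$; the sample value of $M_{1}$ below is a hypothetical parameter, and the check is an illustrative sketch only.
\begin{verbatim}
# Sketch: nonrelativistic string BCJ relation, Eq. (NBCJ).
from mpmath import mp, gamma, sin, pi, sqrt
mp.dps = 25
N = 5
M1, M2 = mp.mpf("0.73"), sqrt(2*(N - 1))   # assumes M2^2 = 2(N-1)
lhs = ((-1)**N * gamma(M1*M2)*gamma(1 - M1*M2)
       / (gamma(M1*M2/2)*gamma(1 - M1*M2/2)))
rhs = sin(pi*(M1 + M2)*M2/2)/sin(pi*(-M1*M2))
print(lhs, rhs)                            # the two agree
\end{verbatim}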
There exists a recurrence relation for the Gauss hypergeometric function
\end{subequations}
\begin{equation}
_{2}F_{1}(a;b;c;z)=\frac{c-2b+2+(b-a-1)z}{(b-1)(z-1)}\text{ }_{2
F_{1}(a;b-1;c;z)+\frac{b-c-1}{(b-1)(z-1)}\text{ }_{2}F_{1}(a;b-2;c;z),
\label{rec}
\end{equation}
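A quick numerical check of Eq.(\ref{rec}) for generic parameter values (an illustrative sketch with arbitrary sample inputs) reads
\begin{verbatim}
# Sketch: contiguous relation Eq. (rec) for the Gauss function 2F1.
from mpmath import mp, hyp2f1
mp.dps = 25
a, b, c, z = mp.mpf("1.7"), mp.mpf("-3"), mp.mpf("3.4"), mp.mpf("-0.25")
lhs = hyp2f1(a, b, c, z)
rhs = ((c - 2*b + 2 + (b - a - 1)*z)/((b - 1)*(z - 1))*hyp2f1(a, b - 1, c, z)
       + (b - c - 1)/((b - 1)*(z - 1))*hyp2f1(a, b - 2, c, z))
print(lhs, rhs)        # the two sides agree
\end{verbatim}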
Equation (\ref{rec}) can be used to derive the recurrence relation
\begin{align}
\left( -\frac{M_{1}+M_{2}}{2}\right) A_{st}^{(p,r,q)}= & \frac
{M_{2}\left( M_{1}M_{2}+2q+2\right) }{\left( q+1\right) \left(
M_{2}-M_{1}\right) }\left( \frac{\epsilon}{2}\sin\phi\right) ^{p-p^{\prime
}}\left( \frac{\epsilon}{2}\cos\phi\right) ^{p^{\prime}-p+1}A_{st}^{\left(
p^{\prime},p+r-p^{\prime}-1,q+1\right) }\nonumber\\
+ & \frac{2\left( M_{1}M_{2}+q+1\right) }{\left( q+1\right) \left(
M_{2}-M_{1}\right) }\left( \frac{\epsilon}{2}\sin\phi\right) ^{p-p^{\prime
\prime}}\left( \frac{\epsilon}{2}\cos\phi\right) ^{p^{\prime\prime
-p+2}A_{st}^{\left( p^{\prime\prime},p+r-p^{\prime\prime}-2,q+2\right) }
\label{main}
\end{align}
where $p^{\prime}$ and $p^{\prime\prime}$ are the polarization parameters of
the second and third amplitudes on the rhs of Eq.(\ref{main}). For example,
for a fixed mass level $N=4,$ one can derive many recurrence relations for
either $s-t$ channel or $t-u$ channel amplitudes with $q=0,1,2.$ For, say,
$q=2,$ $(p,r)=(2,0),(1,1),(0,2).$ We have $p^{\prime}=0,1$ and $p^{\prime\prime}=0.$
We can thus derive, for example for $(p,r)=(2,0)$ and $p^{\prime}=1$, the
recurrence relation among the NSS amplitudes $A_{st}^{(2,0,2)}$, $A_{st}^{(1,0,3)}$
and $A_{st}^{(0,0,4)}$ as follows
\begin{equation}
\left( -\frac{M_{1}+M_{2}}{2}\right) A_{st}^{(2,0,2)}=\frac{M_{2}\left(
M_{1}M_{2}+6\right) }{3\left( M_{2}-M_{1}\right) }\left( \frac{\epsilon
}{2}\sin\phi\right) A_{st}^{(1,0,3)}+\frac{2\left( M_{1}M_{2}+4\right)
}{3\left( M_{2}-M_{1}\right) }\left( \frac{\epsilon}{2}\sin\phi\right)
^{2}A_{st}^{(0,0,4)}.
\end{equation}
Exactly the same relation can be obtained for the $t-u$ channel amplitudes since
the $_{2}F_{1}(a;b;c;z)$ dependence in the $s-t$ and $t-u$ channel amplitudes
calculated above is the same. Moreover, we can, for example, replace the
$A_{st}^{(2,0,2)}$ amplitude above by the corresponding $t-u$ channel
amplitude $A_{tu}^{(2,0,2)}$ through Eq.(\ref{NBCJ}) and obtain
\begin{align}
\frac{\left( -1\right) ^{N}}{2\cos\frac{\pi M_{1}M_{2}}{2}}\left(
-\frac{M_{1}+M_{2}}{2}\right) A_{tu}^{(2,0,2)} & =\frac{M_{2}\left(
M_{1}M_{2}+6\right) }{3\left( M_{2}-M_{1}\right) }\left( \frac{\epsilon
}{2}\sin\phi\right) A_{st}^{(1,0,3)}\nonumber\\
& +\frac{2\left( M_{1}M_{2}+4\right) }{3\left( M_{2}-M_{1}\right)
}\left( \frac{\epsilon}{2}\sin\phi\right) ^{2}A_{st}^{(0,0,4)}, \label{BCJJ}
\end{align}
which relates higher spin NSS amplitudes in both $s-t$ and $t-u$ channels.
Eq.(\ref{BCJJ}) is one example of the \textit{extended recurrence relations in
the NSS limit}. For each fixed mass level $M_{2}$, the relation in
Eq.(\ref{BCJJ}) relates amplitudes of different spin polarizations and
different channels of the \textit{same} propagating higher spin particle in
the string spectrum. In the next subsection, we will consider a more general
extended recurrence relation which relates NSS amplitudes of
\textit{different} higher spin particles for each fixed mass level\ $M_{2}$ in
the string spectrum.
\subsection{More general string states}
Recently the structure of the most general NSS string amplitudes which can be
expressed in terms of Gauss hypergeometric functions was pointed out
\cite{LLY}. Here, as an illustration, we will calculate one example of an
extended recurrence relation which relates NSS amplitudes of different higher
spin particles at each fixed mass level $M_{2}$. In particular, the $s-t$
channel NSS amplitudes of three tachyons and one higher spin massive string
state at mass level $N=3p_{1}+q_{1}+3$, corresponding to the following three
higher spin string states
\begin{align}
& A_{1}\symbol{126}\left( i\partial^{3}X^{T}\right) ^{p_{1}}\left(
i\partial X^{P}\right) ^{1}\left( i\partial X^{L}\right) ^{q_{1}+2},\\
& A_{2}\symbol{126}\left( i\partial^{2}X^{T}\right) ^{p_{1}}\left(
i\partial X^{P}\right) ^{2}\left( i\partial X^{L}\right) ^{p_{1}+q_{1
+1},\\
& A_{3}\symbol{126}\left( i\partial X^{T}\right) ^{p_{1}}\left( i\partial
X^{P}\right) ^{3}\left( i\partial X^{L}\right) ^{2p_{1}+q_{1}
\end{align}
can be calculated to be
\begin{align}
A_{1} & =\left[ 2!\frac{\epsilon}{2}\sin\phi\right] ^{p_{1}}\left[
-\left( 1-1\right) !\frac{M_{1}+M_{2}}{2}\right] ^{1}\left[ 0!\frac
{\epsilon}{2}\cos\phi\right] ^{q_{1}+2}\nonumber\\
& \times B\left( \frac{M_{1}M_{2}}{2},1-M_{1}M_{2}\right) \text{ }_{2
F_{1}\left( \frac{M_{1}M_{2}}{2},-1,M_{1}M_{2},\frac{-2M_{1}}{M_{1}+M_{2
}\right) ,\\
A_{2} & =\left[ 1!\frac{\epsilon}{2}\sin\phi\right] ^{p_{1}}\left[
-\left( 2-1\right) !\frac{M_{1}+M_{2}}{2}\right] ^{2}\left[ 0!\frac
{\epsilon}{2}\cos\phi\right] ^{p_{1}+q_{1}+1}\nonumber\\
& \times B\left( \frac{M_{1}M_{2}}{2},1-M_{1}M_{2}\right) \text{ }_{2
F_{1}\left( \frac{M_{1}M_{2}}{2},-2,M_{1}M_{2},\frac{-2M_{1}}{M_{1}+M_{2
}\right) ,\\
A_{3} & =\left[ 0!\frac{\epsilon}{2}\sin\phi\right] ^{p_{1}}\left[
-\left( 3-1\right) !\frac{M_{1}+M_{2}}{2}\right] ^{3}\left[ 0!\frac
{\epsilon}{2}\cos\phi\right] ^{2p_{1}+q_{1}}\nonumber\\
& \times B\left( \frac{M_{1}M_{2}}{2},1-M_{1}M_{2}\right) \text{ }_{2
F_{1}\left( \frac{M_{1}M_{2}}{2},-3,M_{1}M_{2},\frac{-2M_{1}}{M_{1}+M_{2
}\right) .
\end{align}
To apply the recurrence relation Eq.(\ref{rec}) of the Gauss hypergeometric
functions, we choose
\begin{equation}
a=\frac{M_{1}M_{2}}{2},b=-1,c=M_{1}M_{2},z=\frac{-2M_{1}}{M_{1}+M_{2}}.
\end{equation}
One can then calculate the extended recurrence relation
\begin{align}
& 16\left( \frac{2M_{1}}{M_{1}+M_{2}}+1\right) \left( -\frac{M_{1}+M_{2
}{2}\right) ^{2}\left( \frac{\epsilon}{2}\cos\phi\right) ^{2p_{1}
A_{1}\nonumber\\
& =8\cdot2^{P_{1}}\left( \frac{M_{1}M_{2}}{2}+2\right) \left( \frac
{2M_{1}}{M_{1}+M_{2}}+2\right) \left( -\frac{M_{1}+M_{2}}{2}\right) \left(
\frac{\epsilon}{2}\cos\phi\right) ^{p_{1}+1}A_{2}\nonumber\\
& -2^{P_{1}}\left( M_{1}M_{2}+2\right) \left( \frac{\epsilon}{2}\cos
\phi\right) ^{2}A_{3}
\end{align}
where $p_{1}$ is an arbitrary integer. More extended recurrence relations
can be similarly derived.
The existence of these low energy stringy symmetries comes as a surprise from
the point of view of Gross's high energy symmetries \cite{GM,Gross,GrossManes}.
Finally, in contrast to the Regge string spacetime symmetry which was shown
to be related to the $SL(5,C)$ symmetry of the Appell function $F_{1}$ \cite{LY2014}, here
we find that the low energy stringy symmetry is related to the $SL(4,C)$ symmetry
\cite{sl4c} of the Gauss hypergeometric function $_{2}F_{1}.$
\section{Conclusion}
In this paper, we review historically two independent approaches to the four
point string BCJ relation. One originates from field theory BCJ relations
\cite{BCJ}, and the other from calculation of string scattering amplitudes in
the HSS limit \cite{Closed}. By combining string BCJ relations with infinite
linear relations of HSS amplitudes \cite{ChanLee1,ChanLee2,PRL, CHLTY,susy},
one obtains extended linear relations which relate HSS amplitudes of string
states with different spins and different channels. Moreover, these extended
linear relations can be used to reduce the number of independent HSS
amplitudes from $\infty$ down to $1$\cite{ChanLee1,ChanLee2,PRL, CHLTY,susy}.
Similar calculation can be performed in the RSS limit \cite{KLY}, and one
obtains extended recurrence relations in the RSS limit. These extended Regge
recurrence relations again can be used to reduce the number of independent RSS
amplitudes from $\infty$ down to $1$ \cite{LY,LY2014}.
We then give an explicit proof of the four point string BCJ relations for all
energies. We found that the equality of the string BCJ relations can be
identified as the equalities of coefficients of two multi-linear polynomials
of\ ${k_{1}^{\mu}}$ and ${k_{3}^{\nu}}$ in the $s-t$ and $t-u$ channel
amplitudes. This calculation, which puts no constraints on the kinematic
regimes in contrast to the previous one \cite{stringBCJ}, provides an alternative
to the proof based on monodromy of contour integration \cite{stringBCJ} in the string
amplitude calculation.
Finally, we calculate both $s-t$ and $t-u$ channel NSS amplitudes of three
tachyons and one higher spin string state, including the leading trajectory
string states at arbitrary mass levels. We discover that the mass and spin
dependent nonrelativistic string BCJ relations can be expressed in terms of
Gauss hypergeometric functions. As an application, we calculate examples of
extended recurrence relations of low energy NSS amplitudes. For each fixed
mass level $N,$ these extended recurrence relations relate low energy NSS
amplitudes of string states with different spins and different channels.
We believe that many string theory origins of properties of field theory
amplitudes remain to be understood, and many more stringy generalizations of
properties of field theory amplitudes remain to be uncovered in the near future.
\section{Acknowledgments}
We would like to thank Chung-I Tan for discussions. J.C. thanks the authors
of reference \cite{stringBCJ} for useful correspondence. He also thanks
Chih-Hau Fu for bringing his attention to reference \cite{stringBCJ}. This
work is supported in part by the Ministry of Science and Technology and S.T.
Yau center of NCTU, Taiwan.
\section{Introduction}
A Finsler structure on a surface $M$ can be regarded as a smooth 3-manifold $\Sigma\subset TM$ for which the canonical projection $\pi:\Sigma\to M$ is a surjective submersion and having the property that for each $x\in M$, the $\pi$-fiber $\Sigma_x=\pi^{-1}(x)$ is a strictly convex curve including the origin $O_x\in T_xM$. Here we denote by $TM$ the tangent bundle of $M$. This is actually equivalent to saying that such a geometrical structure $(M,F)$ is a surface $M$ endowed with a Minkowski norm in each tangent space $T_xM$ that varies smoothly with the base point $x\in M$ all over the manifold. Obviously $\Sigma$ is the unit sphere bundle $\{(x,y)\in TM:F(x,y)=1\}$, also called the indicatrix bundle. Even though the these notions are defined for arbitrary dimension, we restrict to surfaces hereafter (\cite{BCS}).
On the other hand, such a Finsler structure defines a 2-parameter family of oriented paths on $M$, one in every oriented direction through every point. This is a special case of the notion of path geometry. We recall that, roughly speaking, a path geometry on a surface $M$ is a 2-parameter family of curves on $M$ with the property that through each point $x \in M$ and in each tangent direction at $x$ there passes a unique curve in the family. The fundamental example to keep in mind is the family of lines in the Euclidean plane.
To be more precise, a path geometry on a surface $M$ is a foliation $\mathcal P$ of the projective tangent bundle $\mathbb{P} TM$ by contact curves, each of which is transverse to the fibers of the canonical projection $\pi:\mathbb{P} TM\to M$. Observe that even though $\mathbb{P} TM$ is independent of any norm $F$, actually there is a Riemannian isometry between $\mathbb{P} TM$ and $\Sigma$, fact that allows us to identify them in the Finslerian case(\cite{B}).
The 3-manifold $\mathbb{P} TM$ is naturally endowed with a contact structure. Indeed, for a smooth, immersed curve $\gamma:(a,b)\to M$, let us denote by $\hat{\gamma}:(a,b)\to \mathbb{P}TM$ its canonical lift to the projective tangent bundle $\mathbb{P} TM$. Then, the fact that the canonical projection is a submersion implies that, for each line $L \in \mathbb P TM $, the linear map $\pi_{*,L} : T _L \mathbb P TM \to T_x M$ is surjective, where $\pi ( L ) = x \in M$. Therefore $E_L := \pi_{*,L}^{ -1} ( L ) \subset T_L \mathbb P TM$ is a 2-plane in $T_L \mathbb P TM$ that defines a contact distribution and therefore a contact structure on $\mathbb P TM$. A curve on $\mathbb P TM$ is called a contact curve if it is tangent to the contact distribution $E$. In particular, the canonical lift $\hat{\gamma}$ to $\mathbb P TM$ of a curve $\gamma$ on $M$ is a contact curve.
A local path geometry on $M$ is a foliation $\mathcal P$ of an open subset $U \subset \mathbb P TM$ by contact curves, each of which is transverse to the fibers of $\pi : \mathbb P TM \to M$.
If $(M,F)$ is a Finsler surface, then the 3-manifold $\Sigma$ is endowed with a canonical coframe $(\omega^1,\omega^2,\omega^3)$ satisfying the structure equations
\begin{equation}\label{struct eq F}
\begin{split}
d\omega^1 & = -I\omega^1\wedge \omega^3+\omega^2\wedge \omega^3 \\
d\omega^2 & = \omega^3\wedge\omega^1 \\
d\omega^3 & = K\omega^1\wedge\omega^2-J\omega^1\wedge\omega^3,\\
\end{split}
\end{equation}
where the functions $I,J$ and $K:TM\to \ensuremath{\mathbb{R}}$ are the Cartan scalar, the Landsberg curvature and the Gauss curvature, respectively.
The 2-plane field $D:=\langle \hat{e}_2,\hat{e}_3\rangle$ defines a contact structure on $\Sigma$, where we denote $(\hat{e}_1,\hat{e}_2,\hat{e}_3)$ the dual frame of $(\omega^1,\omega^2,\omega^3)$. Indeed, it can be seen that the 1-form $\eta:=A \omega^1$ is a contact form for any function $A\neq 0$ on $\Sigma$. The structure equations \eqref{struct eq F} imply $\eta\wedge d\eta=A^2\omega^1\wedge \omega^2
\wedge \omega^3\neq 0$. Observe that in the Finslerian case, we actually have two foliations on the 3-manifold $\Sigma$:
\begin{enumerate}
\item $\mathcal P=\{\omega^1=0,\omega^3=0\}$ the geodesic foliation of $\Sigma$, i.e. the leaves are curves in $\Sigma$ tangent to the geodesic spray $\hat{e}_2$;
\item $\mathcal Q=\{\omega^1=0,\omega^2=0\}$ the indicatrix foliation of $\Sigma$, i.e. the leaves are indicatrix curves in $\Sigma$ tangent to $\hat{e}_3$.
\end{enumerate}
The pair $(\mathcal P,\mathcal Q)$ is called sometimes a generalized path geometry (see \cite{Br}).
The {\it (forward) integral length} of a regular piecewise $C^{\infty}$-curve $\gamma:[a,b]\to M$ on a Finsler surface $(M,F)$ is given by
\begin{equation*}\label{integral length}
{\cal L}_{\gamma}:=\sum_{i=1}^k\int_{t_{i-1}}^{t_i}F(\gamma(t),\dot\gamma(t))dt,
\end{equation*}
where $\dot\gamma=\frac{d\gamma}{dt}$ is the tangent vector along the curve $\gamma|_{[t_{i-1},t_i]}$.
A regular piecewise $C^\infty$-curve $\gamma$ on a Finsler manifold is called a {\it forward geodesic} if $({\ensuremath{\mathcal{L}}_\gamma})'(0)=0$ for all piecewise $C^\infty$-variations of $\gamma$ that keep its ends fixed. In terms of Chern connection a constant speed geodesic is characterized by the condition $D_{\dot\gamma}{\dot\gamma}=0$. Observe that the canonical lift of a geodesic $\gamma$ to $\mathbb PTM$ gives the geodesics foliation $\mathcal P$ described above.
Using the integral length of a curve, one can define the Finslerian distance between two points on $M$. For any two points $p$, $q$ on $M$, let us denote by $\Omega_{p,q}$ the set of all piecewise $C^\infty$-curves $\gamma:[a,b]\to M$ such that $\gamma(a)=p$ and $\gamma(b)=q$. Then the map
\begin{equation*}
d:M\times M\to [0,\infty),\qquad d(p,q):=\inf_{\gamma\in\Omega_{p,q}}{\cal L}_{\gamma}
\end{equation*}
gives the {\it Finslerian distance} on $M$. It can be easily seen that $d$ is in general a quasi-distance, i.e., it has the properties $d(p,q)\geq 0$, with equality if and only if $p=q$, and $d(p,q)\leq d(p,r)+d(r,q)$, with equality if and only if $r$ lies on a minimal geodesic segment joining from $p$ to $q$ (triangle inequality).
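The asymmetry of $d$ is already visible in the simplest (locally Minkowski) Randers example, where minimizing paths are straight lines and $d(p,q)=F(q-p)$; the small numerical sketch below uses a hypothetical norm on $\ensuremath{\mathbb{R}}^2$.
\begin{verbatim}
# Sketch: asymmetry of the Finslerian distance on a flat Randers plane,
# F(y) = |y| + <b,y> with |b| < 1, where d(p,q) = F(q-p).
import numpy as np
b = np.array([0.4, 0.0])                 # hypothetical drift one-form
def F(y):
    return np.linalg.norm(y) + b @ y
p, q = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(F(q - p))                          # d(p,q) = 1.4
print(F(p - q))                          # d(q,p) = 0.6, not symmetric
\end{verbatim}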
A Finsler manifold $(M,F)$ is called {\it forward geodesically complete} if and only if any short geodesic $\gamma:[a,b)\to M$ can be extended to a long geodesic $\gamma:[a,\infty)\to M$. The equivalence between forward completeness as metric space and geodesically completeness is given by the Finslerian version of Hopf-Rinow Theorem (see for eg. \cite{BCS}, p. 168). Same is true for backward geodesics.
In the Finsler case, unlikely the Riemannian counterpart, forward completeness is not equivalent to backward one, except the case when $M$ is compact.
Any geodesic $\gamma$ emanating from a point $p$ in a compact Finsler manifold loses the global minimising
property at a point $q$ on $\gamma$. Such a point $q$ is called a cut point of $p$ along $\gamma$. The cut locus of a
point $p$ is the set of all cut points along geodesics emanating from $p$. Such points often appear
as an obstacle when we try to prove global theorems in differential geometry, while being at the same time vital in analysis, where they appear as sets of singular points. In fact, the cut locus of a point $p$ in a complete Finsler manifold equals the closure of the set of all
non-differentiable points of the distance function from the point $p$. The structure of the cut locus
plays an important role in optimal control problems in space and quantum dynamics allowing to
obtain global optimal results in orbital transfer and for Lindblad equations in quantum control.
The notion of cut locus was introduced and studied for the first time by H. Poincare in 1905 for the Riemannian case. In the case of a two dimensional analytical sphere, S. B. Myers has proved in 1935 that the cut locus of a point is a finite tree in both Riemannian and Finslerian cases. In the case of an analytic Riemannian manifold, M. Buchner has shown the triangulability of the cut locus of a point $p$, and has determined its local structure for the low dimensional case in 1977 and 1978, respectively. The cut locus of a point can have a very complicated structure. For example, H. Gluck and D. Singer have constructed a $C^\infty$ Riemannian manifold that has a point whose cut locus is not triangulable (see \cite{SST} for an exposition). There are $C^k$-Riemannian or Finsler metrics on spheres with a preferential point whose cut locus is a fractal (\cite{IS}).
In the present paper we will study the local and global behaviour of the geodesics of a Finsler metric of revolution on topological cylinders. In particular, we will determine the structure of the cut locus on the cylinder for such metrics and compare it with the Riemannian case.
We will focus on the Finsler metrics of Randers type obtained as solutions of the Zermelo's navigation problem for the navigation data $(M,h)$, where $h$ is the canonical Riemannian metric on the topological cylinder, $h=dr^2+m^2(r)d\theta^2$, and $W=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$ is a vector field on $M$. Observe that our wind is more general than a Killing vector field, hence the theory presented here is a generalization of the classical study of geodesics and the cut locus for Randers metrics obtained as solutions of the Zermelo's navigation problem with Killing vector fields studied in \cite{HCS} and \cite{HKS}. Nevertheless, by taking the wind $W$ in this way we obtain a quite general Randers metric on $M$ which is a Finsler metric of revolution and whose geodesics and cut locus can be computed explicitly.
Our paper is organized as follows. In Section \ref{sec_Randers_theory}, we recall the basics of Finsler geometry using the Randers metrics that we will actually use in order to obtain explicit information on the geodesics behaviour and the cut locus structure. We introduce an extension of the Zermelo's navigation problem for Killing winds to a more general case $\widetilde{W}=V+W$, where only $W$ is Killing. We show that the geodesics, conjugate locus and cut locus can be determined in this case as well.
In the section \ref{sec:Surf of revol} we describe the theory of general Finsler surfaces of revolution. In the case this Finsler metric is a Riemannian one, we obtain the theory of geodesics and cut locus known already (\cite{C1}, \cite{C2}).
In Section \ref{sec_Randers_metric} we consider some examples that illustrate the theory developed so far. In particular, in subsection \ref{sec_A(r),B} we consider the general wind $W=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$, which obviously is not Killing with respect to $h$, where $A=A(r)$ is a bounded function and $B$ is a constant, and we determine its geometry. Essentially, we are reducing the geodesics theory of the Finsler metric $\widetilde{F}$, obtained from the Zermelo's navigation problem for $(M,h)$ and $\widetilde{W}$, to the theory of a Riemannian metric $(M,\alpha)$.
Moreover, in the particular case $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$ in Section \ref{sec_A,B}, where $A,B$ are constants, the geodesic theory of $\widetilde{F}$ can be directly obtained from the geometry of the Riemannian metric $(M,h)$. A similar study can be done for the case $W=A(r)\frac{\partial}{\partial r}$. We leave a detailed study of these Randers metrics to a forthcoming research.
\section{Finsler metrics. The Randers case}\label{sec_Randers_theory}
Finsler structures are one of the most natural generalization of Riemannian metrics. Let us recall here that a Finsler structure on a real smooth $n$-dimensional manifold $M$ is a function $F:TM\to [0,\infty)$ which is smooth on $\widetilde{TM}=TM\setminus O$, where $O$ is the zero section, has the {\it homogeneity property} $F(x,\lambda y)=\lambda F(x,y)$, for all $\lambda>0$ and all $y\in T_xM$ and also has the strong convexity property that the Hessian matrix
\begin{equation}
g_{ij}=\frac{1}{2}\frac{\partial^2F^2}{\partial y^i\partial y^j}(x,y)
\end{equation}
is positive definite at any point $(x,y)\in \widetilde{TM}$.
\subsection{A ubiquitous family of Finsler structures: the Randers metrics}\label{sec_ubiquitous}
Initially introduced in the context of general relativity, Randers metrics are the most ubiquitous family of Finsler structures.
A {\it Randers metric} on a surface $M$ is obtained by a rigid translation of an ellipse in each tangent plane $T_xM$ such that the origin of $T_xM$ remains inside it.
\begin{figure}[h]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{tikzpicture}
\draw[rotate=20,xshift=1cm] (0,0) ellipse (50pt and 40pt);
\draw[rotate=0,xshift=0cm] (0,0) ellipse (50pt and 40pt);
\draw [->] (0,0) -- (1.2,0.6) node[above]{$w$};
\draw [->] (-4,0) -- (4,0) node[below]{$y^1$};
\draw [->] (0,-3) -- (0,3) node[left]{$y^2$};
\node at (-1.6,-1.2) {$\Sigma_h$};
\node at (2,2) {$\Sigma_F$};
\end{tikzpicture}
\end{center}
\caption{Randers metrics: a rigid displacement of an ellipse.}
\label{fig_1}
\end{figure}
Formally, on a Riemannian manifold $(M,\alpha)$, a Randers metric is a Finsler structure $(M,F)$ whose fundamental function $F:TM\to [0,\infty)$ can be written as
\begin{equation*}
F(x,y)=\alpha(x,y)+\beta(x,y),
\end{equation*}
where $\alpha(x,y)=\sqrt{a_{ij}(x)y^iy^j}$ and $\beta(x,y)=b_i(x)y^i$, such that the Riemannian norm of $\beta$ is less than 1, i.e. $b^2:=a^{ij}b_ib_j<1$.
It is known that Randers metrics are solutions of the {\it Zermelo's navigation problem} \cite{Z} which we recall here.
{\it Consider a ship sailing on the open sea in calm waters. If a mild breeze comes up, how should the ship be steered in order to reach a given destination in the shortest time possible?
}
The solution was given by Zermelo in the case where the open sea is a Euclidean space, by \cite{Sh} in the Riemannian case, and studied in detail in \cite{BRS}.
Indeed, for a time-independent wind $W\in TM$, on a Riemannian manifold $(M,h)$, the paths minimizing travel-time are exactly the geodesics of the Randers metric
\begin{equation*}
F(x,y)=\alpha(x,y)+\beta(x,y)=\frac{\sqrt{\lambda\|y\|_h^2+W_0^2}}{\lambda}-\frac{W_0}{\lambda},
\end{equation*}
where $W=W^i(x)\frac{\partial}{\partial x^i}$, $\|y\|_h^2=h(y,y)$, $\lambda=1-\|W\|_h^2$, and $W_0=h(W,y)$. Requiring $\|W\|_h<1$ we obtain a positive definite Finslerian norm. In components, $a_{ij}=\frac{1}{\lambda}h_{ij}+\frac{W_i}{\lambda}\frac{W_j}{\lambda}$, $b_i(x)=-\frac{W_i}{\lambda}$, where $W_i=h_{ij}W^j$ (see \cite{R} for a general discussion).
The Randers metric obtained above is called {\it the solution of the Zermelo's navigation problem for the navigation data $(M,h)$ and $W$}.
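For illustration, let us record here a simple particular case (given only to make the formulas above explicit; it is not used later). Take $(M,h)$ to be the Euclidean plane and $W=(w,0)$ a constant wind with $0<w<1$. Then $\lambda=1-w^2$, $W_0=wy^1$, and the formula above gives
\begin{equation*}
F(x,y)=\frac{\sqrt{(y^1)^2+(1-w^2)(y^2)^2}-wy^1}{1-w^2},
\end{equation*}
that is $a_{11}=\frac{1}{\lambda^2}$, $a_{12}=0$, $a_{22}=\frac{1}{\lambda}$, $(b_i)=\left(-\frac{w}{\lambda},0\right)$. One can check directly that $F(x,y)>0$ for all $y\neq 0$ precisely because $w<1$.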
\begin{Remark}
Obviously, at any $x\in M$, the condition $F(y)=1$ is equivalent to $\|y-W\|_h=1$ fact that assures that, indeed, the indicatrix of $(M,F)$ in $T_xM$ differs from the unit sphere of $h$ by a translation along $W(x)$ (see Figure \ref{fig_1}).
\end{Remark}
More generally, the Zermelo's navigation problem can be considered where the open sea is a given Finsler manifold (see \cite{Sh}).
We have
\begin{Proposition}
Let $(M,F)$ be a Finsler manifold and $W$ a vector field on $M$ such that $F(-W)<1$. Then the solution of the Zermelo's navigation problem with navigation data $F,W$ is the Finsler metric $\widetilde{F}$ obtained by solving the equation
\begin{equation}\label{eq_*1}
F(y-\widetilde{F}W)=\widetilde{F}, \ \text{for any} \ y\in TM.
\end{equation}
\end{Proposition}
Indeed, if we consider the Zermelo's navigation problem where the open sea is the Finsler manifold $(M,F)$ and the wind is $W$, by rigid translation of the indicatrix $\Sigma_F$ we obtain the closed, smooth, strongly convex indicatrix $\Sigma_{\widetilde{F}}$, where $\widetilde{F}$ is the solution of the equation $F\left(\frac{y}{\widetilde{F}}-W\right)=1$, which is clearly equivalent to \eqref{eq_*1} due to the positivity of $\widetilde{F}$ and the homogeneity of $F$.
To get a genuine Finsler metric $\widetilde{F}$, we need the origin $O_x\in T_xM$ to belong to the interior of $\Sigma_{\widetilde{F}}=\Sigma_F+W$, that is $F(-W)<1$.
\begin{Remark}\label{rem_F_positive}
Consider the Zermelo's navigation problem for $(M,F)$ and wind $W$, where $F$ is a (positive definite) Finsler metric. If we solve the equation
$$
F\left(\frac{y}{\widetilde{F}}-W\right)=1\Leftrightarrow F(y-\widetilde{F}W)=\widetilde{F}
$$
for $\widetilde{F}$, we obtain the solution of this Zermelo's navigation problem.
In order for $\widetilde{F}$ to be a Finsler metric we need to check:
\begin{itemize}
\item[(i)] $\widetilde{F}$ is strongly convex
\item[(ii)] the indicatrix of $\widetilde{F}$ includes the origin
$O_x\in T_xM$.
\end{itemize}
Since the indicatrix of $\widetilde{F}$ is the rigid translation by $W$ of the indicatrix of $F$, and the indicatrix of $F$ is strongly convex, it follows that the indicatrix of $\widetilde{F}$ is also strongly convex.
Hence, we only need to find the condition for (ii).
Denote
$$
B_F(1):=\{y\in T_xM:F(y)<1\},\quad
\widetilde{B}_{\widetilde{F}}(1):=\{y\in T_xM:\widetilde{F}(y)<1\}
$$
the unit balls of $F$ and $\widetilde{F}$, respectively.
The Zermelo's navigation problem shows
$$
B_{\widetilde{F}}(1)=B_F(1)+W.
$$
Hence
$$
O_x\in B_{\widetilde{F}}(1)\Leftrightarrow O_x\in B_F(1)+W\Leftrightarrow -W\in B_F(1)\Leftrightarrow F(-W)<1.
$$
Hence, the indicatrix of $\widetilde{F}$ includes $O_x \Leftrightarrow F(-W)<1$, where we denote by $O_x$ the zero vector.
\end{Remark}
\begin{Proposition}\label{prop_2steps_Zermelo}
Let $(M,F_1=\alpha+\beta)$ be a Randers space and $W=W^i(x)\frac{\partial}{\partial x^i}$ a vector field on $M$ such that $F_1(-W)<1$. Then, the solution of the Zermelo's navigation problem with navigation data $(M,F_1)$ and $W$ is also a Randers metric $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$, where
\begin{equation}\label{eq_tilde a tilde b}
\begin{split}
\widetilde{a}_{ij}&= \frac{1}{\eta}\left(a_{ij}-b_ib_j\right)+\left(\frac{W_i-b_i[1+\beta(W)]}{\eta}\right)\left( \frac{W_j-b_j[1+\beta(W)]}{\eta}\right)\\
\widetilde{b}_{i}&= -\frac{W_i-b_i[1+\beta(W)]}{\eta},
\end{split}
\end{equation}
where $\eta=[1+\beta(W)]^2-\alpha^2(W)$, $W_i=a_{ij}W^j$.
\end{Proposition}
\begin{proof}[Proof of Proposition \ref{prop_2steps_Zermelo}]
Let us consider the equation
$$
F_1\left(\frac{y}{\widetilde{F}}-W\right)=1
$$
which is equivalent to
$$
F_1(y-\widetilde{F}W)=\widetilde{F}
$$
due to the positivity of $\widetilde{F}$ and the positive 1-homogeneity of $F_1$.
If we use $F_1=\alpha+\beta$, it follows
$$
\alpha(y-\widetilde{F}W)=\widetilde{F}-\beta(y-\widetilde{F}W),
$$
using the linearity of $\beta$, i.e. $\beta(y-\widetilde{F}W)=\beta(y)-\widetilde{F}\beta(W)$, where $\beta(y)=b_iy^i$, $\beta(W)=b_iW^i$, and squaring this formula, we get the equation
\begin{equation}\label{eq_3*}
\alpha^2(y-\widetilde{F}W)=[\widetilde{F}(1+\beta(W))-\beta(y)]^2.
\end{equation}
Observe that
\begin{equation}\label{eq_1*}
\alpha^2(y-\widetilde{F}W)=\alpha^2(y)-2\widetilde{F}<y,W>_\alpha+\widetilde{F}^2\alpha^2(W)
\end{equation}
and
\begin{equation}\label{eq_2*}
[\widetilde{F}-\beta(y-\widetilde{F}W)]^2=[1+\beta(W)]^2\widetilde{F}^2-2\widetilde{F}\beta(y)[1+\beta(W)]+\beta^2(y),
\end{equation}
substituting \eqref{eq_1*}, \eqref{eq_2*} in \eqref{eq_3*} gives the 2nd degree equation
\begin{equation}\label{eq_4*}
\eta \widetilde{F}^2+2\widetilde{F}<y, W-B[1+\beta (W)]>_\alpha-[\alpha^2(y)-\beta^2(y)]=0,
\end{equation}
where $B=b^i\frac{\partial}{\partial x^i}=(a^{ij}b_j)\frac{\partial}{\partial x^i}$ and $\eta:=[1+\beta(W)]^2-\alpha^2(W)$, i.e.
$$
<y,W-B[1+\beta(W)]>_\alpha=a_{ij}y^i(W^j-b^j[1+\beta(W)])=<y,W>_\alpha-\beta(y)[1+\beta(W)].
$$
The discriminant of \eqref{eq_4*} is
$$
D'=\{<y,W>_\alpha-\beta(y)[1+\beta(W)]\}^2+\eta[\alpha^2(y)-\beta^2(y)].
$$
Let us observe that $F_1(-W)<1$ implies $\eta>0$. Indeed
$$
F_1(-W)=\alpha(W)-\beta(W)<1 \Leftrightarrow \alpha^2(W)<[1+\beta(W)]^2
$$
hence $\eta>0$.
Moreover, observe that
$$
D'=\{\eta(a_{ij}-b_ib_j)+(W_i-b_i[1+\beta(W)])(W_j-b_j[1+\beta(W)])\}y^iy^j.
$$
The solution of \eqref{eq_4*} is given by
$$
\widetilde{F}=\frac{\sqrt{<y,W-B[1+\beta(W)]>_\alpha^2+\eta[\alpha^2(y)-\beta^2(y)]}}{\eta}-\frac{<y,W-B[1+\beta(W)]>_{\alpha}}{\eta}
$$
or equivalently to
$$
\widetilde{F}=\frac{\sqrt{\{\eta(a_{ij}-b_ib_j)+(W_i-b_i[1+\beta(W)])(W_j-b_j[1+\beta(W)])\}y^iy^j}}{\eta}-\frac{\{W_i-b_i[1+\beta(W)]\}y^i}{\eta},
$$
that is $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$, where $\widetilde{a}_{ij}$ and $\widetilde{b}_i$ are given by \eqref{eq_tilde a tilde b}.
Observe that $\widetilde{a}_{ij}$ is positive definite. Indeed, for any $v\in TM$ we have $\eta^2\,\widetilde{a}_{ij}v^iv^j=\eta[\alpha^2(v)-\beta^2(v)]+<v,W-B[1+\beta(W)]>_\alpha^2$.
On the other hand, since $F_1=\alpha+\beta$ is a Randers metric, $F_1(X)>0$ for any nonzero tangent vector $X\in TM$, hence for $X=v$ and $X=-v$ we get $\alpha(v)+\beta(v)>0$ and $\alpha(v)-\beta(v)>0$, respectively, hence $\alpha^2(v)-\beta^2(v)>0$ for any nonzero $v\in TM$.
Since $\eta>0$, this implies that $\widetilde{a}_{ij}$ is positive definite.
\end{proof}
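As a quick consistency check of the formulas \eqref{eq_tilde a tilde b} (an illustrative particular case only), let us take $\beta=0$, i.e. $F_1=\alpha$ Riemannian. Then $b_i=0$, $\eta=1-\alpha^2(W)$ and \eqref{eq_tilde a tilde b} reduces to
\begin{equation*}
\widetilde{a}_{ij}=\frac{1}{\eta}a_{ij}+\frac{W_i}{\eta}\cdot\frac{W_j}{\eta},\qquad \widetilde{b}_i=-\frac{W_i}{\eta},\qquad W_i=a_{ij}W^j,
\end{equation*}
which is exactly the classical solution of the Zermelo's navigation problem for the Riemannian data $(M,\alpha)$ and wind $W$ recalled in Subsection \ref{sec_ubiquitous}.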
\subsection{A two steps Zermelo's navigation}\label{sec_two_steps_Zermelo}
We have discussed in the previous section the Zermelo's navigation when the open sea is a Riemannian manifold and when it is a Finsler manifold, respectively.
In order to obtain a more general version of the navigation, we combine these two approaches. We have
\begin{theorem}\label{thm_two_steps_Zermelo}
Let $(M,h)$ be a Riemannian manifold and $V$, $W$ two vector fields on $M$.
Let us consider the Zermelo's navigation problem on $M$ with the following data
\begin{enumerate}
\item[(I)] Riemannian metric $(M,h)$ with wind $V+W$ and assume condition $\|V+W\|_h<1$;
\item[(II)] Finsler metric $(M,F_1)$ with wind $W$ and assume $W$ satisfies the condition $F_1(-W)<1$, where $F_1=\alpha+\beta$ is the solution of the Zermelo's navigation problem for the navigation data $(M,h)$ with wind $V$, such that $\|V\|_h<1$.
\end{enumerate}
Then, the above Zermelo's navigation problems (I) and (II) have the same solution $F=\widetilde{\alpha}+\widetilde{\beta}$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm_two_steps_Zermelo}]
Let us consider case (I), i.e. the sea is the Riemannian metric $(M,h)$ with the wind $\widetilde{W}:=V+W$ such that $\|V+W\|_h<1$. The associated Randers metric through the Zermelo's navigation problem is given by $\widetilde{\alpha}+\widetilde{\beta}$, where
\begin{equation}\label{eq_1.1*}
\begin{split}
\widetilde{a}_{ij}:=\frac{1}{\Lambda}h_{ij}+\left(\frac{\widetilde{W}_i}{\Lambda}\right)\left(\frac{\widetilde{W}_j}{\Lambda}\right), \ \widetilde{b}_i:=-\frac{\widetilde{W}_i}{\Lambda},
\end{split}
\end{equation}
where $\Lambda=1-\|\widetilde{W}\|^2_h=1-\|V+W\|^2_h$, $\widetilde{W}_i=h_{ij}\widetilde{W}^j$.
Observe that \eqref{eq_1.1*} are actually equivalent to
\begin{equation}\label{eq_1.2*}
\begin{split}
\widetilde{a}_{ij}&:=\frac{1}{\Lambda}h_{ij}+\left(\frac{V_i^{(h)}+W_i^{(h)}}{\Lambda}\right)\left(\frac{V_j^{(h)}+W_j^{(h)}}{\Lambda}\right),\\
\widetilde{b}_i&:=-\frac{W_i^{(h)}}{\Lambda}-\frac{V_i^{(h)}}{\Lambda},
\end{split}
\end{equation}
where $V_i^{(h)}=h_{ij}V^j$ and $W_i^{(h)}=h_{ij}W^j$.
Next, we will consider the case (II) which we regard as a two steps Zermelo type navigation:
$\underline{\text{Step 1}}$. Consider the Zermelo's navigation with data $(M,h)$ and wind $V$, $\|V\|_h^2<1$ with the solution $F_1=\alpha+\beta$, where
\begin{equation*}
\begin{split}
a_{ij}=\frac{1}{\lambda}h_{ij}+\left(\frac{V_i^{(h)}}{\lambda}\right)\left(\frac{V_j^{(h)}}{\lambda}\right),\ b_i=-\frac{V_i^{(h)}}{\lambda},
\end{split}
\end{equation*}
where $\lambda=1-\|V\|_h^2$, $V_i^{(h)}=h_{ij}V^j$.
$\underline{\text{Step 2}}$. Consider the Zermelo's navigation with data $(M,F_1=\alpha+\beta)$ obtained at step 1, and wind $W$ such that $F_1(-W)<1$, with solution $\widetilde{F}=\widehat{\alpha}+\widehat{\beta}$ (see Proposition \ref{prop_2steps_Zermelo}), where
\begin{equation}\label{eq_1.3*}
\begin{split}
\widehat{a}_{ij}&=\frac{1}{\eta}(a_{ij}-b_ib_j)+\left(\frac{W_i^{(\alpha)}-b_i[1+\beta(W)]}{\eta}\right)\left(\frac{W_j^{(\alpha)}-b_j[1+\beta(W)]}{\eta}\right),\\
\widehat{b}_i&=-\frac{W_i^{(\alpha)}-b_i[1+\beta(W)]}{\eta}
\end{split}
\end{equation}
with
$$
\eta=[1+\beta(W)]^2-\alpha^2(W), \text{ and } W_i^{(\alpha)}=a_{ij}W^j.
$$
We will show that $\widetilde{a}_{ij}=\widehat{a}_{ij}$ and $\widetilde{b}_i=\widehat{b}_i$, respectively, for all indices $i,j\in\{1,\dots,n\}$. It is trivial to see that $\Lambda=\lambda-\|W\|_h^2-2<V,W>_h$.
Next, by straightforward computation we get
$$
\alpha^2(W)=a_{ij}W^iW^j=\frac{1}{\lambda}\|W\|_h^2+\left(\frac{h(V,W)}{\lambda}\right)^2,\ \beta(W)=-\frac{h(V,W)}{\lambda}.
$$
It follows that
$$
\eta=\left[1-\frac{h(V,W)}{\lambda}\right]^2-\frac{1}{\lambda}\|W\|_h^2-\frac{h^2(V,W)}{\lambda^2}=1-2\frac{h(V,W)}{\lambda}-\frac{1}{\lambda}\|W\|_h^2,
$$
we get
\begin{equation}\label{eq_L1}
\eta=\frac{\Lambda}{\lambda}.
\end{equation}
In a similar manner,
$$
\frac{W_i^{(\alpha)}-b_i[1+\beta(W)]}{\eta}=\frac{1}{\eta}\left[\frac{h_{ij}W^j}{\lambda}+\frac{V_i^{(h)}}{\lambda}\frac{V_j^{(h)}W^j}{\lambda}+\frac{V_i^{(h)}}{\lambda}\left(1-\frac{h(V,W)}{\lambda}\right)\right],
$$
hence we obtain
$$
\frac{W_i^{(\alpha)}-b_i(1+\beta(W))}{\eta}=\frac{W_i^{(h)}+V_i^{(h)}}{\Lambda},
$$
that is $\widetilde{b}_i=\widehat{b}_i$.
It can be also seen that
$$
\frac{1}{\eta}(a_{ij}-b_ib_j)=\frac{1}{\Lambda}h_{ij},
$$
hence $\widetilde{a}_{ij}=\widehat{a}_{ij}$ and the identity of formulas \eqref{eq_1.1*} and \eqref{eq_1.3*} is proved. In order to finish the proof we show that the conditions
(i) $\|V+W\|^2_h<1$
and
(ii) $\|V\|_h^2<1$ and $F(-W)<1$
are actually equivalent.
Geometrically speaking, the 2-steps Zermelo's navigation is the rigid translation of $\Sigma_h$ by $V$ followed by the rigid translation of $\Sigma_{F_1} $ by $W$. This is obviously equivalent to the rigid translation of $\Sigma_h$ by $\widetilde{W}=V+W$.
\begin{figure}[H]\label{fig_indicatrix_tilde_V}
\begin{center}
\setlength{\unitlength}{1cm}
\begin{tikzpicture}[scale=1]
\draw[->](0,3) -- (9,3) node[below]{$y^1$};
\draw[->](3,0) -- (3,8) node[left]{$y^2$};
\draw[->](3,3) -- (4.5,3);
\draw[->](4.5,3) -- (4.5,4);
\draw[->](3,3) -- (4.5,4);
\draw(3,3) circle(2);
\draw(4.5,3) circle(2);
\draw(4.5,4) circle(2);
\draw(2.8,2.7) node{$0$};
\draw(3.8,2.7) node{$V$};
\draw(3.8,4) node{$\widetilde{W}$};
\draw(4.8,3.5) node{$W$};
\draw(1.5,1) node{$\Sigma_h$};
\draw(7,7) node{$T_xM$};
\draw(6.5,1) node{$\Sigma_{F_1}$};
\draw(7,5.1) node{$\Sigma_{\widetilde{F}}$};
\end{tikzpicture}
\end{center}
\caption{The $h$-indicatrix, $F_1$-indicatrix and $F$-indicatrix.}
\label{fig_2}
\end{figure}
The geometrical meaning of (i) is that the origin $O_x\in T_xM$ is in the interior of the translated indicatrix $\Sigma_{\widetilde{F}}$ (see Figure \ref{fig_2}). On the other hand, the relations in (ii) show that the origin $O_x$ is in the interior of the indicatrix $\Sigma_h$ translated by $V$ and of the indicatrix $\Sigma_{F_1}$ translated by $W$.
This equivalence can also be checked analytically.
For initial data $(M,h)$ and $V$, we obtain by Zermelo's navigation the Randers metric $F=\alpha+\beta$, where
\begin{equation*}
\begin{split}
a_{ij}=\frac{1}{\lambda}h_{ij}+\left(\frac{V_i}{\lambda}\right)\left(\frac{V_j}{\lambda}\right),\ b_i=-\frac{V_i}{\lambda},
\end{split}
\end{equation*}
with $V_i=h_{ij}V^j$ and $\lambda=1-\|V\|^2_h<1$.
Consider another vector field $W$ and compute
\begin{equation*}
\begin{split}
F(-W)&=\sqrt{\frac{1}{\lambda}\|W\|^2_h+\left(\frac{V_iW^i}{\lambda}\right)^2}+\frac{V_iW^i}{\lambda}\\
&=\frac{1}{\lambda}\left[\sqrt{\lambda\|W\|_h^2+h^2(V,W)}+h(V,W)\right].
\end{split}
\end{equation*}
Let us assume $F(-W)<1$, hence
$$
\sqrt{\lambda\|W\|_h^2+h^2(V,W)}+h(V,W)<\lambda,
$$
i.e.
\begin{equation*}
\begin{split}
& \hspace{0.65cm}\lambda\|W\|_h^2+h^2(V,W)<[\lambda-h(V,W)]^2\\
&\Leftrightarrow \lambda\|W\|_h^2+\cancel{h^2(V,W)}<\lambda^2-2\lambda h(V,W)+\cancel{h^2(V,W)}, \ \lambda >0\\
&\Leftrightarrow \|W\|_h^2<\lambda - 2 h(V,W)\Leftrightarrow\|W\|_h^2+2h(V,W)+\|V\|_h^2<1\\
&\Leftrightarrow \|W+V\|_h^2<1.
\end{split}
\end{equation*}
Conversely, if $\|V+W\|_h^2<1$, by reversing the computation above we obtain $F(-W)<1$, provided $\lambda-h(V,W)>0$.
Indeed, observe that $\|V+W\|_h^2<1$ actually implies $\lambda-h(V,W)>0$, because $1-\|V\|_h^2-h(V,W)=1-h(V,V+W)>0\Leftrightarrow h(V,V+W)<1$.
This last inequality follows from the Cauchy-Schwarz inequality: $h(V,V+W)\leq \|V\|_h\|V+W\|_h<1$, using $\|V\|_h<1$ and $\|V+W\|_h<1$.
\end{proof}
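For illustration of the equivalence of the conditions (i) and (ii) above (a particular case recorded only as a sketch), take $(M,h)$ to be the Euclidean plane and $V=(v,0)$, $W=(0,w)$ constant vector fields. Then $\lambda=1-v^2$, and for $F_1=\alpha+\beta$ we have $a_{22}=\frac{1}{\lambda}$, $b_2=0$, hence
\begin{equation*}
F_1(-W)=\frac{|w|}{\sqrt{1-v^2}}<1\ \Longleftrightarrow\ v^2+w^2<1\ \Longleftrightarrow\ \|V+W\|_h^2<1,
\end{equation*}
since $V$ and $W$ are $h$-orthogonal.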
The 2-steps Zermelo's navigation problem discussed above can be generalized to a $k$-steps Zermelo's navigation.
\begin{Remark}
Let $(M,F_0)$ be a Finsler space and let $W_0,W_1,\dots,W_{k-1}$ be $k$ linearly independent vector fields on $M$. We consider the following $k$-steps Zermelo's navigation problem.
$\underline{\text{Step 0}}$. $F_1$ solution of $(M,F_0,W_0)$ with $F_0(-W_0)<1$, i.e.
\hspace{1cm} Solution of $F_0\left(\frac{y}{F_1}-W_0\right)=1$.
$\underline{\text{Step 1}}$. $F_2$ solution of $(M,F_1,W_1)$ with $F_1(-W_1)<1$, i.e.
\hspace{1cm} Solution of $F_1\left(\frac{y}{F_2}-W_1\right)=1$.
\quad $\vdots$
$\underline{\text{Step k-1}}$. $F_k$ solution of $(M,F_{k-1},W_{k-1})$ with $F_{k-1}(-W_{k-1})<1$, i.e.
\hspace{1cm} Solution of $F_{k-1}\left(\frac{y}{F_k}-W_{k-1}\right)=1$.
Then $F_k$ is the Finsler metric obtained as solution of the Zermelo's navigation problem with data $F_0,\widetilde{W}:=W_0+\dots+W_{k-1}$ with condition $F_0(-\widetilde{W})<1$.
\end{Remark}
\subsection{Geodesics, conjugate and cut loci}
\begin{Proposition}\label{lem_X}
Let $(M,h)$ be a Riemannian manifold, $V$ a vector field such that $\|V\|_h<1$, and let $F=\alpha+\beta$ be the solution of the Zermelo's navigation with data $(M,h)$ and $V$.
Then $d\beta=0$ if and only if $V$ satisfies the differential equation
\begin{equation}\label{eq_lem_1}
d\gamma=d\log\lambda\wedge\gamma,
\end{equation}
where $\gamma=V_i(x)dx^i$, $V_i=h_{ij}V^j$, and $\lambda=1-\|V\|_h^2$.
\end{Proposition}
\begin{proof}[Proof of Proposition \ref{lem_X}]
Observe that $b_i=-\frac{V_i}{\lambda}$ is equivalent to $\lambda \beta = -\gamma$, hence $d\lambda \wedge \beta + \lambda d \beta=-d\gamma$ and using $d\beta =0$ we obtain
$$
d\lambda \wedge \beta =-d\gamma.
$$
By using $\beta =-\frac{1}{\lambda}\gamma$ we get \eqref{eq_lem_1} easily. The converse is easy to show taking into account $\lambda\neq 0$.
\end{proof}
\begin{Remark}
The equation \eqref{eq_lem_1} can be written in coordinates
$$
\left(\frac{\partial V_i}{\partial x^j}-\frac{\partial V_j}{\partial x^i}\right)dx^i\wedge dx^j=\left(\frac{\partial \log \lambda}{\partial x^i}dx^i\right)\wedge (V_jdx^j),
$$
that is
$$
\frac{\partial V_i}{\partial x^j}-\frac{\partial V_j}{\partial x^i}=\frac{\partial \log \lambda}{\partial x^i}V_j-\frac{\partial \log \lambda}{\partial x^j}V_i.
$$
In the 2-dimensional case, we get the 1st order PDE
\begin{equation}\label{eq_lem_2}
\frac{\partial V_1}{\partial x^2}-\frac{\partial V_2}{\partial x^1}=-\frac{1}{\lambda}\left[\frac{\partial h^{ij}}{\partial x^1}V_2-\frac{\partial h^{ij}}{\partial x^2}V_1\right]V_iV_j-\frac{2}{\lambda}V^i\left[\frac{\partial V_i}{\partial x^1}V_2-\frac{\partial V_i}{\partial x^2}V_1\right], \ i,j=1,2.
\end{equation}
It can easily be seen that in the case of a surface of revolution $h=dr^2+m^2(r)d\theta^2$ the wind $V=A(r)\frac{\partial}{\partial r}$ is a solution of \eqref{eq_lem_1} and of \eqref{eq_lem_2}.
\end{Remark}
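Let us verify this last claim (a one-line computation included only for the reader's convenience). For $V=A(r)\frac{\partial}{\partial r}$ we have $V_1=A(r)$, $V_2=0$, hence $\gamma=V_idx^i=A(r)dr$ and $\lambda=1-\|V\|_h^2=1-A^2(r)$. Therefore
\begin{equation*}
d\gamma=A'(r)dr\wedge dr=0,\qquad d\log\lambda\wedge\gamma=\frac{\lambda'(r)}{\lambda(r)}A(r)\,dr\wedge dr=0,
\end{equation*}
so both sides of \eqref{eq_lem_1} vanish identically.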
\begin{theorem}\label{cor: F1 conj points}
Let $(M,h)$ be a simply connected Riemannian manifold and $V=V^i\frac{\partial}{\partial x^i}$ a vector field on $M$ such that $\|V\|_h<1$, and let $F=\alpha+\beta$ be the Randers metric obtained as the solution of the Zermelo's navigation problem with this data.
If $V$ satisfies the differential relation
\begin{equation}\label{eq_cor}
d\eta=d(\log \lambda)\wedge \eta,
\end{equation}
where $\eta=V_i(x)dx^i$, $V_i=h_{ij}V^j$, then the following statements hold.
\begin{enumerate}
\item There exists a smooth function $f:M\to \ensuremath{\mathbb{R}}$ such that $\beta=df$.
\item The Randers metric $F$ is projectively equivalent to $\alpha$, i.e. the geodesics of $(M,F)$ coincide with the geodesics of the Riemannian metric $\alpha$ as non-parametrized curves.
\item The Finslerian length of any piecewise $C^\infty$ curve $\gamma:[a,b]\to M$ on $M$ joining the points $p$ and $q$ is given by
\begin{equation}
\mathcal L_{F}(\gamma)= \mathcal L_\alpha(\gamma)+f(q)-f(p),
\end{equation}
where $\mathcal L_\alpha(\gamma)$ is the Riemannian length of $\gamma$ with respect to $\alpha$.
\item The geodesic $\gamma$ is minimizing with respect to $\alpha$ if and only if it is minimizing with respect to $F$.
\item For any two points $p$ and $q$ we have
\begin{equation}
d_{F}(p,q)=d_\alpha(p,q)+f(q)-f(p),
\end{equation}
where $d_\alpha(p,q)$ is the Riemannian distance between $p$ and $q$ with respect to $\alpha$.
\item For an $F$-unit speed geodesic $\gamma$, if we put $p:=\gamma(0)$ and $q:=\gamma(t_0)$, then $q$ is conjugate to $p$ along $\gamma$ with respect to $F$ if and only if $q$ is conjugate to $p$ along $\gamma$ with respect to $\alpha$.
\item The cut locus of $p$ with respect to $F$ coincides with the cut locus of $p$ with respect to $\alpha$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{cor: F1 conj points}]
\begin{enumerate}
\item Using Proposition \ref{lem_X}, it is clear that the differential equation \eqref{eq_cor} is equivalent to $\beta$ being a closed 1-form, i.e. $d\beta=0$.
On the other hand, since $M$ is a simply connected manifold, any closed 1-form is exact, hence in this case \eqref{eq_cor} is equivalent to $\beta=df$.
\item Follows immediately from the classical result in Finsler geometry that a Randers metric $\alpha+\beta$ is projectively equivalent to its Riemannian part $\alpha$ if and only if $d\beta=0$ (see for instance \cite{BCS}, p.298).
\item The length of the curve $\gamma:[a,b]\to M$, given by $x^i=x^i(t)$, is defined as
\begin{equation*}
\begin{split}
\ensuremath{\mathcal{L}}_{F}(\gamma)&=\int_a^bF(\gamma(t),\dot{\gamma}(t))dt=
\int_a^b\alpha(\gamma(t),\dot{\gamma}(t))dt+\int_a^b\beta(\gamma(t),\dot{\gamma}(t))dt\\&=\ensuremath{\mathcal{L}}_\alpha(\gamma)+f(q)-f(p)\\
\end{split}
\end{equation*}
where we use
$$
\int_a^b\beta(\gamma(t),\dot{\gamma}(t))dt=\int_a^bdf(\gamma(t),\dot{\gamma}(t))dt=f(\gamma(b))-f(\gamma(a))=f(q)-f(p).
$$
\item It follows from 3.
\item It follows immediately from 2 and 3 (see \cite{SSS} for a detailed discussion on this type of distance).
\item From (2) we know that $\alpha$ and $F=\alpha+\beta$ are projectively equivalent, i.e. their non-parametrized geodesics coincide as point sets. More precisely, if $\gamma:[0,l]\to M$, $\gamma(t)=(x^i(t))$ is an $\alpha$-unit speed geodesic, and $\overline{\gamma}:[0,\widetilde{l}]\to M$, $\overline{\gamma}(s)=(x^i(s))$ is an $F$-unit speed geodesic, then there exists a parameter change $t=t(s)$, $\frac{dt}{ds}>0$, such that $\overline{\gamma}(s)=\gamma(t(s))$, with the inverse function $s=s(t)$ such that $\gamma(t)=\overline{\gamma}(s(t))$.
\quad Observe that if $q=\gamma(a)$ then $q=\overline{\gamma}(\widetilde{a})$, where $t(\widetilde{a})=a$.
\quad Let us consider a Jacobi field $Y(t)$ along $\gamma$ such that
\begin{equation*}
\begin{cases}
Y(0)=0\vspace{0.2cm} \\
<Y(t),\frac{d\gamma}{dt}>_{\alpha}=0,
\end{cases}
\end{equation*}
and construct the geodesic variation $\gamma:[0,a]\times (-\varepsilon,\varepsilon)\to M$, $(t,u)\mapsto \gamma(t,u)$ such that
\begin{equation*}
\begin{cases}
\gamma(t,0)=\gamma(t) \vspace{0.2cm} \\
\frac{\partial \gamma}{\partial u}\Big\vert_{u=0}=Y(t).
\end{cases}
\end{equation*}
\quad Since the variation vector field $\frac{\partial \gamma}{\partial u}\Big\vert_{u=0}$ is a Jacobi field, it follows that all curves $\gamma_u(t)$ in the variation are $\alpha$-geodesics for any $u\in(-\varepsilon,\varepsilon)$.
\quad Similarly to the case of the base geodesic, every curve in the variation can be reparametrized as an $F$-geodesic. In other words, for each $u\in(-\varepsilon,\varepsilon)$ there exists a parameter change $t=t(s,u)$, $\frac{\partial t}{\partial s}>0$, such that
$$
\gamma(t,u)=\overline{\gamma}(t(s,u),u).
$$
\quad We will compute the variation vector field of the variation $\overline{\gamma}(s,u)$ as follows
$$
\frac{\partial \overline{\gamma}}{\partial u}(s,u)=\frac{\partial \gamma}{\partial t}\Big\vert_{(t(s,u),u)} \frac{\partial t}{\partial u}(s,u)+\frac{\partial\gamma}{\partial u}\Big\vert_{(t(s,u),u)}.
$$
\quad If we evaluate this relation for $u=0$ we get
$$
\frac{\partial \overline{\gamma}}{\partial u}(s,0)=\frac{\partial \gamma}{\partial t}\Big\vert_{(t(s,0),0)} \frac{\partial t}{\partial u}(s,0)+\frac{\partial\gamma}{\partial u}\Big\vert_{(t(s,0),0)},
$$
that is
$$
\overline{Y}(s)=\frac{\partial \gamma}{\partial t}\Big\vert_{t(s,0),0}\frac{\partial t}{\partial u}\Big\vert_{(s,0)}+Y\Big\vert_{(t(s,0),0)}\in T_{\overline{\gamma}(s)}M\equiv T_{\gamma(t(s))}M.
$$
For a point $q=\gamma(a)=\overline{\gamma}(\widetilde{a})$ this formula reads
\begin{equation}\label{eq_proof_cor_6}
\begin{split}
\overline{Y}(\widetilde{a})&=\frac{\partial \gamma}{\partial t}\Big\vert_{\widetilde{a}}\frac{\partial t}{\partial u}\Big\vert_{(\widetilde{a},0)}+Y(t(\widetilde{a}))\\
&=\frac{d\gamma}{dt}\Big\vert_a\frac{\partial t}{\partial u}\Big\vert_{(\widetilde{a},0)}+Y(a)\in T_{\overline{\gamma}(\widetilde{a})}M\equiv T_{\gamma(a)}M,
\end{split}
\end{equation}
i.e. the Jacobi field $\overline{Y}(\widetilde{a})$ is linear combination of the tangent vector $\frac{\partial \gamma}{\partial t}(a)$ and $Y(a)$.
Let us assume $q=\overline{\gamma}(\widetilde{a})$ is a conjugate point to $p$ along the $F$-geodesic $\overline{\gamma}$, i.e. $\overline{Y}(\widetilde{a})=0$. Since $Y(a)$ is $\alpha$-orthogonal to $\frac{d\gamma}{dt}(a)$, these two vectors cannot be linearly dependent unless $Y(a)$ vanishes, hence \eqref{eq_proof_cor_6} forces $Y(a)=0$, i.e. $q=\gamma(a)$ is conjugate to $p$ along the $\alpha$-geodesic $\gamma$.
Conversely, if $q=\gamma(a)$ is conjugate to $p$ along the $\alpha$-geodesic $\gamma$ then \eqref{eq_proof_cor_6} can be written as
$$
Y(a)=\overline{Y}(s(a))-\frac{d\overline{\gamma}}{ds}(s(a))\frac{ds}{dt}\frac{dt}{du}
$$
and the conclusion follows from the same linear independence argument as above.
\item Observe that $Cut_\alpha(p)\neq\emptyset \Leftrightarrow Cut_F(p)\neq \emptyset$.
Indeed, if $Cut_\alpha(p)= \emptyset$ then all $\alpha$-geodesics from $p$ are globally minimizing. Assume $Cut_F(p)\neq\emptyset$; then we can choose a point $q$ which is an end point of $Cut_F(p)$, i.e. $q$ must be $F$-conjugate to $p$ along the geodesic $\sigma(s)$ from $p$ to $q$. By 6, this implies that the corresponding point on the $\alpha$-geodesic $\sigma(t)$ is conjugate to $p$, which is a contradiction.
The converse argument is identical.
Let us assume now that $Cut_\alpha(p)$ and $Cut_F(p)$ are not empty sets.
If $q\in Cut_\alpha(p)$ then we have two cases:
\begin{itemize}
\item[(i)] $q$ is an end point of $Cut_\alpha(p)$, i.e. it is conjugate to $p$ along a minimizing geodesic $\gamma$ from $p$ to $q$. Therefore $q$ is the closest conjugate point to $p$ along the $F$-geodesic $\overline{\gamma}$ which is the reparametrization of $\gamma$ (see 6).
\item[(ii)] $q$ is an interior point of $Cut_\alpha(p)$. Since the set of points in $Cut_\alpha(p)$ that are joined to $p$ by exactly two minimizing geodesics of the same length is dense in the closed set $Cut_\alpha(p)$, it is enough to consider this kind of cut points. In the case $q\in Cut_\alpha(p)$ is such that there are two $\alpha$-geodesics $\gamma_1$, $\gamma_2$ of the same length from $p$ to $q=\gamma_1(a)=\gamma_2(a)$, then from (4) it is clear that the point $q=\overline{\gamma}_1(\widetilde{a})=\overline{\gamma}_2(\widetilde{a})$ has the same property with respect to $F$.
Hence $Cut_\alpha(p)\subset Cut_F(p)$. The reverse inclusion follows from the same argument by exchanging the roles of $\alpha$ and $F$.
\end{itemize}
\end{enumerate}
\end{proof}
\begin{Remark}
See \cite{INS} for a more general case.
\end{Remark}
We recall the following well-known result for later use.
\begin{Lemma}\label{lem_Y0}\textnormal{(\cite{HS},\cite{MHSS})}
Let $F=\alpha+\beta$ be the solution of Zermelo's navigation problem with navigation data $(h,V)$, $\|V\|_h<1$.
Then the Legendre dual of $F$ is the Hamiltonian function $F^*=\alpha^*+\beta^*$, where ${\alpha^*}^2=h^{ij}(x)p_ip_j$ and $\beta^*=V^i(x)p_i$. Here $(x,p)$ are the canonical coordinates of the cotangent bundle $T^*M$.
Moreover, $g_{ij}(x,y){g^*}^{ik}(x,p)=\delta_{j}^k$, where $F^2(x,y)=g_{ij}(x,y)y^iy^j$ and ${F^*}^2(x,p)={g^*}^{ij}(x,p)p_ip_j$.
\end{Lemma}
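For instance (an illustrative particular case only), for the Randers metric of the Euclidean plane with constant wind $V=(v,0)$, $0<v<1$, Lemma \ref{lem_Y0} gives the Hamiltonian
\begin{equation*}
F^*(x,p)=\sqrt{p_1^2+p_2^2}+v\,p_1,
\end{equation*}
that is, the Legendre dual is obtained simply by raising the indices of $h$ and contracting $V$ with the momentum $p$.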
The following result is similar to the Riemannian counterpart and we give it here with proof.
We recall that a smooth vector field $X$ on a Finsler manifold $(M,F)$ is called {\it Killing field} if every local one-parameter transformation group $\{\varphi_t\}$ of $M$ generated by $X$ consists of local isometries.
We also have
\begin{Proposition}\label{prop_Killing}
Let $(M,F)$ be a Finsler manifold (any dimension) with local coordinates $(x^i,y^i)\in TM$ and $X=X^i(x)\frac{\partial}{\partial x^i}$ a vector field on $M$. The following formulas are equivalent
\begin{enumerate}
\item[(i)] $X$ is Killing field for $(M,F)$;
\item[(ii)] $\ensuremath{\mathcal{L}}_{\widehat{X}}F=0$, where $\widehat{X}:=X^i\frac{\partial}{\partial x^i}+y^j\frac{\partial X^i}{\partial x^j}\frac{\partial}{\partial y^i}$ is the canonical lift of $X$ to $TM$;
\item[(iii)]
$$
\frac{\partial g_{ij}}{\partial x^p}X^p+g_{pj}\frac{\partial X^p}{\partial x^i}+g_{ip}\frac{\partial X^p}{\partial x^j}+2C_{ijp}\frac{\partial X^p}{\partial x^q}y^q=0;
$$
\item[(iv)] $X_{i|j}+X_{j|i}+2C_{ij}^pX_{p|q}y^q=0$, where `` $|$\,'' is the $h$-covariant derivative with respect to the Chern connection.
\end{enumerate}
\end{Proposition}
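For illustration, observe that when $F$ is Riemannian the Cartan tensor $C_{ijk}$ vanishes, hence condition (iv) reduces to the classical Killing equation
\begin{equation*}
X_{i|j}+X_{j|i}=0,
\end{equation*}
recovering the usual Riemannian characterization of Killing fields.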
\begin{Lemma}\label{lem_Y1}
With the notation in Lemma \ref{lem_Y0}, the vector field $W=W^i(x)\frac{\partial}{\partial x^i}$ on $M$ is Killing field with respect to $F$ if and only if
$$
\{F^*,W^*\}=0,
$$
where $W^*=W^i(x)p_i$ and $\{\cdot,\cdot\}$ is the Poisson bracket.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem_Y1}]
Recall that $W$ is Killing field of $(M,F)$ if and only if every local one-parameter transformation group $\{\varphi_t\}$ of $M$ generated by $W$ consists of local isometries.
A straightforward computation shows that $W$ is Killing on $(M,F)$ if and only if $\ensuremath{\mathcal{L}}_{\widehat{W}}F=0$, where
$$
\widehat{W}=W^i\frac{\partial}{\partial x^i}+y^j\frac{\partial W^i}{\partial x^j}\frac{\partial}{\partial y^i}
$$
is the canonical lift of $W$ to $TM$. In local coordinates this is equivalent to
\begin{equation}\label{eq_lem_Y1_1}
\frac{\partial g_{ij}}{\partial x^p}W^p+g_{pj}\frac{\partial W^p}{\partial x^i}+g_{ip}\frac{\partial W^p}{\partial x^j}+2C_{ijp}\frac{\partial W^p}{\partial x^q}y^q=0.
\end{equation}
Since the left hand side is 0-homogeneous in the $y$-variable, this relation is actually equivalent to the contracted relation by $y^iy^j$, i.e. \eqref{eq_lem_Y1_1} is equivalent to
$$
\left(\frac{\partial g_{ij}}{\partial x^p}W^p+g_{pj}\frac{\partial W^p}{\partial x^i}+g_{ip}\frac{\partial W^p}{\partial x^j}\right)y^iy^j=0,
$$
where we use $C_{ijk}y^i=0$. We get the equivalent relation
\begin{equation}\label{eq_lem_Y1_2}
\frac{\partial g_{ij}}{\partial x^p}W^py^iy^j+2g_{pj}\frac{\partial W^p}{\partial x^i}y^iy^j=0.
\end{equation}
Observe that $g_{ij}{g^*}^{jk}=\delta_i^k$ is equivalent to $\frac{\partial g_{ij}}{\partial x^p}{g^*}^{ik}=-g_{ij}\frac{\partial {g^*}^{ik}}{\partial x^p}$, hence \eqref{eq_lem_Y1_2} reads
$$
\frac{\partial g_{ij}}{\partial x^p}W^p\left({g^*}^{ik}p_k\right)\left({g^*}^{jl}p_l\right)+2g_{pj}\frac{\partial W^p}{\partial x^i}\left({g^*}^{ik}p_k\right)\left({g^*}^{jl}p_l\right)=0
$$
and from here
$$
-g_{ij}\frac{\partial {g^*}^{ik}}{\partial x^p}W^pp_k{g^*}^{jl}p_l+2g_{pj}\frac{\partial W^p}{\partial x^i}\left({g^*}^{ik}p_k\right)\left({g^*}^{jl}p_l\right)=0.
$$
We finally obtain
\begin{equation}\label{eq_lem_Y1_3}
-\frac{\partial {g^*}^{ik}}{\partial x^p}W^pp_ip_k+2{g^*}^{jk}\frac{\partial W^i}{\partial x^j}p_ip_k=0.
\end{equation}
On the other hand, we compute
\begin{equation*}
\begin{split}
\{{F^*}^2,W^*\}&=\{{g^*}^{ij}p_ip_j,W^sp_s\}\\
&=\frac{\partial ({g^*}^{ij}p_ip_j)}{\partial p_k}\frac{\partial (W^sp_s)}{\partial x^k}-\frac{\partial ({g^*}^{ij}p_ip_j)}{\partial x^k}\frac{\partial (W^sp_s)}{\partial p_k}\\
&=\left(\frac{\partial {g^*}^{ij}}{\partial p_k}p_ip_j+2{g^*}^{ik}p_i\right)\frac{\partial W^s}{\partial x^k}p_s-\frac{\partial {g^*}^{ij}}{\partial x^k}p_ip_jW^k\\
&=2{g^*}^{ik}\frac{\partial W^s}{\partial x^k}p_ip_s-\frac{\partial {g^*}^{ij}}{\partial x^k}W^kp_ip_j
\end{split}
\end{equation*}
which is the same with \eqref{eq_lem_Y1_3}. Here we have used the 0-homogeneity of ${g^*}^{ij}(x,p)$ with respect to $p$.
We also observe that for any functions $f,\ g:T^*M\to \ensuremath{\mathbb{R}}$ we have $\{f^2,g\}=2f\{f,g\}$.
Therefore, the following are equivalent
\begin{itemize}
\item[(i)] $W$ is Killing field on $(M,F)$;
\item[(ii)] $\ensuremath{\mathcal{L}}_{\widehat{W}}F=0$;
\item[(iii)] formula \eqref{eq_lem_Y1_2}
\item[(iv)] formula \eqref{eq_lem_Y1_3}
\item[(v)] $\{{F^*}^2,W^*\}=0$
\item[(vi)] $\{F^*,W^*\}=0$
\end{itemize}
and the lemma is proved.
\end{proof}
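As a simple illustration of Lemma \ref{lem_Y1} (a special case, anticipating the rotationally symmetric setting used later), suppose that in coordinates $(r,\theta)$ on the cylinder the Hamiltonian $F^*=F^*(r,p_r,p_\theta)$ does not depend on $\theta$, and let $W=B\frac{\partial}{\partial\theta}$ with $B$ constant, so that $W^*=Bp_\theta$. Then
\begin{equation*}
\{F^*,W^*\}=\frac{\partial F^*}{\partial p_r}\frac{\partial W^*}{\partial r}+\frac{\partial F^*}{\partial p_\theta}\frac{\partial W^*}{\partial \theta}-\frac{\partial F^*}{\partial r}\frac{\partial W^*}{\partial p_r}-\frac{\partial F^*}{\partial \theta}\frac{\partial W^*}{\partial p_\theta}=-B\frac{\partial F^*}{\partial\theta}=0,
\end{equation*}
hence such a $W$ is a Killing field of $(M,F)$.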
\begin{Proposition}[\cite{FM}] \label{lem_FM}
Let $(M,F)$ be a Finsler manifold and $W=W^i(x)\frac{\partial}{\partial x^i}$ a Killing field on $(M,F)$ with $F(-W)<1$. If we denote by $\widetilde{F}$ the solution of the Zermelo's navigation problem with data $(F,W)$, then the following are true
\begin{enumerate}
\item The $\widetilde{F}$-unit speed geodesics $\ensuremath{\mathcal{P}}(t)$ can be written as
$$
\ensuremath{\mathcal{P}}(t)=\varphi(t,\rho(t)),
$$
where $\varphi_t$ is the 1-parameter flow of $W$ and $\rho$ is an $F$-unit speed geodesic.
\item For any Jacobi field $J(t)$ along $\rho(t)$ such that $g_{\dot{\rho}(t)}(\dot{\rho}(t),J(t))=0$, the vector field $\widetilde{J}(t):=\varphi_{t*}(J(t))$ is a Jacobi field along $\ensuremath{\mathcal{P}}$ and $\widetilde{g}_{\dot{\ensuremath{\mathcal{P}}}(t)}(\dot{\ensuremath{\mathcal{P}}}(t),\widetilde{J}(t))=0$.
\item For any $x\in M$ and any flag $(y,V)$ with flag pole $y\in T_xM$ and transverse edge $V\in T_xM$, the flag curvatures $K$ and $\widetilde{K}$ of $F$ and $\widetilde{F}$, respectively, are related by
$$
{K}(x,y,V)=\widetilde{K}(x,y+W,V)
$$
provided $y+W$ and $V$ are linearly independent.
\end{enumerate}
\end{Proposition}
In the 2-dimensional case, since any Finsler surface is of scalar flag curvature, we get
\begin{Corollary}
In the two-dimensional case, with the notation in Proposition \ref{lem_FM}, the Gauss curvature $K$ and $\widetilde{K}$ of $F$ and $\widetilde{F}$ are related by $K
(x,y)= \widetilde{K}(x,y+W)$, for any $(x,y)\in TM$.
\end{Corollary}
\begin{Lemma}\label{lem_SC}
Let $(M,F)$ be a (forward) complete Finsler manifold, and let $W$ be a Killing field with respect to $F$. Then $W$ is a complete vector field on $M$, i.e. for any $x\in M$ the flow $\varphi_x(t)$ is defined for any $t$.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem_SC}]
Since $W$ is a Killing field, it is clear that its flow $\varphi$ preserves the Finsler metric $F$ and the field $W$ itself. In other words, for any $p\in M$, the integral curve $\gamma:(a,b)\to M$, $\gamma(t)=\varphi_p(t)$ has constant speed.
Indeed, it is trivial to see that
\begin{equation*}
\begin{split}
\frac{d}{dt}F(\gamma(t),W_{\gamma(t)})&=\frac{\partial F}{\partial x^i}\frac{d\gamma^i}{dt}+\frac{\partial F}{\partial y^i}\frac{\partial W^i}{\partial x^k}\frac{d\gamma^k}{dt}\\
&=\frac{\partial F}{\partial x^i}W^i+\frac{\partial F}{\partial y^i}\frac{\partial W^i}{\partial x^k}W^k=\left(\ensuremath{\mathcal{L}}_{\widehat{W}}F\right)(W)=0.
\end{split}
\end{equation*}
It means that the $F$-length of $\gamma$ on $(a,b)$ is $F(p,W_p)(b-a)$, i.e. finite, hence by (forward) completeness $\gamma$ can be extended to the compact domain $[a,b]$, and therefore $\gamma$ is defined on the whole $\ensuremath{\mathbb{R}}$. It results that $W$ is complete.
\end{proof}
\begin{theorem}\label{lem_S}
Let $(M,F)$ be a Finsler manifold (not necessary Randers) and $W=W^i(x)\frac{\partial}{\partial x^i}$ a Killing field for $F$, with $F(-W)<1$.
If $\widetilde{F}$ is the solution of the Zermelo's navigation problem with data $(M,F)$ and wind $W$, then the following statements hold:
\begin{enumerate}
\item[(i)] The point $\ensuremath{\mathcal{P}}(l)$ is $\widetilde{F}$-conjugate to $\ensuremath{\mathcal{P}}(0)$ along the $\widetilde{F}$-geodesic $\ensuremath{\mathcal{P}}(t)=\varphi(t,\rho(t))$ if and only if the corresponding point $\rho(l)=\varphi(-l,\ensuremath{\mathcal{P}}(l))$ is $F$-conjugate to $\ensuremath{\mathcal{P}}(0)=\rho(0)$ along $\rho$.
\item[(ii)] $(M,F)$ is (forward) complete if and only if $(M,\widetilde{F})$ is (forward) complete.
\item[(iii)] If $\rho$ is a $F$-global minimizing geodesic from $p=\rho(0)$ to a point $\widehat{q}=\rho(l)$, then $\ensuremath{\mathcal{P}}(t)=\varphi(t,\rho(t))$ is an $\widetilde{F}$-global minimizing geodesic from $p=\ensuremath{\mathcal{P}}(0)$ to $q=\ensuremath{\mathcal{P}}(l)$, where $l=d_F(p,\widehat{q})$.
\item[(iv)] If $\widehat{q}\in cut_F(p)$ is a $F$-cut point of $p$, then $q=\varphi(l,\widehat{q})\in cut_{\widetilde{F}}(p)$, i.e. it is a $\widetilde{F}$-cut point of $p$, where $l=d_F(p,\widehat{q})$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{lem_S}]
\begin{enumerate}
\item[(i)] Since $\varphi_t(\cdot)$ is a diffeomorphism on $M$ (see Lemma \ref{lem_SC}), it is clear that its tangent map $\varphi_{t*}$ is a regular linear mapping (the Jacobian of $\varphi_t$ is non-vanishing). Then Proposition \ref{lem_FM} shows that $\widetilde{J}$ vanishes if and only if $J$ vanishes, and the conclusion follows easily.
\item[(ii)] Let us denote by $\exp_p:T_pM\to M$ and $\widetilde{\exp}_p:T_pM\to M$ the exponential maps of $F$ and $\widetilde{F}$, respectively. Then $\ensuremath{\mathcal{P}}(t)=\varphi(t,\rho(t))$ implies
\begin{equation}\label{eq_proof_lem_S_1}
\widetilde{\exp}_p(ty)=\varphi_t\circ \exp_p(t[y-W(p)]).
\end{equation}
If $(M,F)$ is complete, the Hopf-Rinow theorem for Finsler manifolds implies that for any $p\in M$, the exponential map $\exp_p$ is defined on all of $T_pM$. Taking into account Lemma \ref{lem_SC}, from \eqref{eq_proof_lem_S_1} it follows that $\widetilde{\exp}_p$ is defined on all of $T_pM$, and again by the Hopf-Rinow theorem we obtain that $\widetilde{F}$ is complete. The converse proof is similar.
\item[(iii)] Firstly observe that $l=d_F(p,\widehat{q})=d_F(p,q)$, since $\widehat{q}=\rho(l)=\varphi(-l,\ensuremath{\mathcal{P}}(l))=\varphi(-l,q)$ and $q=\ensuremath{\mathcal{P}}(l)=\varphi(l,\rho(l))=\varphi(l,\widehat{q})$.
\begin{figure}[H]
\begin{center}
\setlength{\unitlength}{1cm}
\begin{tikzpicture}[scale=1.1]
\draw[->](3.25,0.5) -- (4,-1) node[below,right]{$-W$};
\draw(-0.2,-0.2) node{$p$};
\draw plot [smooth] coordinates {(0,0) (1.2,1.5) (3,2) (4,2)} node[right]{$\ensuremath{\mathcal{P}}$};
\draw plot [smooth,rotate=-45] coordinates {(0,0) (1.5,1.5) (3,2) (4,2)};
\draw plot [smooth,rotate=-90, yshift=1.5cm, xshift=-2.5cm] coordinates {(0,0) (1.5,1.5) (3,2) (4,2)};
\draw plot [smooth] coordinates {(0,0) (0.8,0.8) (3,2.4)} node[right]{$\ensuremath{\mathcal{P}}_s$};
\draw plot [smooth,rotate=-30] coordinates {(0,0) (0.8,0.8) (3,2.4)};
\draw(2.5,1.75)node{$q$};
\draw(3.4,0.7)node{$q_0$};
\draw(4,0.6)node{$\rho_s$};
\draw(3.2,-0.1)node{$\xi$};
\draw(3.6,-1)node{$\widehat{q}$};
\draw(4.4,-1.4)node{$\rho$};
\end{tikzpicture}
\end{center}
\caption{Riemannian and Finsler geodesics in Zermelo's navigation problem.}
\label{fig_3}
\end{figure}
We will prove this statement by contradiction (see Figure \ref{fig_3}).
For this, let us assume that, even though $\rho$ is globally minimizing, the flow-corresponding geodesic $\ensuremath{\mathcal{P}}$ from $p$ to $q$ is not minimizing anymore. In other words, there must exist a shorter minimizing geodesic $\ensuremath{\mathcal{P}}_s:[0,l_0]\to M$ from $p$ to $q=\ensuremath{\mathcal{P}}_s(l_0)$ such that $d_{\widetilde{F}}(p,q)=l_0<l$ (the subscript $s$ stands for ``shorter'').
We consider next the $F$-geodesic $\rho_s:[0,l_0]\to M$ obtained from $\ensuremath{\mathcal{P}}_s$ by flow deviation, i.e. $\rho_s(t)=\varphi(-t,\ensuremath{\mathcal{P}}_s(t))$, and denote $q_0=\rho_s(l_0)=\varphi(-l_0,\ensuremath{\mathcal{P}}_s(l_0))$. Then, the triangle inequality in the geodesic triangle $pq_0\widehat{q}$ shows that
$$
\ensuremath{\mathcal{L}}_F(\rho)\leq \ensuremath{\mathcal{L}}_F(\rho_s)+\ensuremath{\mathcal{L}}_F(\xi),
$$
where we denote by $\xi$ the orbit of the flow of $W$ through $q$, oriented from $q_0$ to $\widehat{q}$. In other words $\dot{\xi}(t)=-W$, and using the hypothesis $F(-W)<1$, it follows
\begin{equation}\label{eq_proof_lem_S_2}
\ensuremath{\mathcal{L}}_F(\xi)=\int_a^bF(-W)dt<b-a=\ensuremath{\mathcal{L}}_F(\rho)-\ensuremath{\mathcal{L}}_F(\rho_s).
\end{equation}
By comparing the triangle inequality above with \eqref{eq_proof_lem_S_2} it can be seen that this is a contradiction, hence $\ensuremath{\mathcal{P}}$ must be globally minimizing.
\item[(iv)] It follows from (iii) and the definition of cut locus.
\end{enumerate}
\end{proof}
\begin{Remark}
Observe that statements (iii) and (iv) are not necessary and sufficient conditions. Indeed, from the proof of (iii) it is clear that in order to prove that $\rho$ being a global minimizer implies $\ensuremath{\mathcal{P}}$ being a global minimizer we have used the condition $F(-W)<1$, which is equivalent to the fact that the $\widetilde{F}$-indicatrix includes the origin of $T_pM$, a necessary condition for $\widetilde{F}$ to be positive definite (see Remark \ref{rem_F_positive}).
Likewise, if we want to show that $\ensuremath{\mathcal{P}}$ being a global minimizer implies $\rho$ being a global minimizer, we need $F(W)<1$, that is, the indicatrix $\Sigma_F$ translated by $-W$ must also include the origin, i.e. the metric $\widetilde{F}_2$ defined by $F(y+\widetilde{F}_2W)=\widetilde{F}_2$, with the indicatrix $\Sigma_{\widetilde{F}_2}=\Sigma_F-W$, is also a positive definite Finsler metric.
In conclusion, if we assume $F(-W)<1$ and $F(W)<1$, then the statements (iii) and (iv) in Theorem \ref{lem_S} can be written with ``if and only if''.
\end{Remark}
\begin{Lemma}\label{lem_Y2}
Let $F=\alpha+\beta$ be the solution of Zermelo's navigation problem with navigation data $(h,V)$.
Then a vector field $W$ on $M$ is Killing with respect to $F=\alpha+\beta$ if $W$ is Killing with respect to $h$ and $[V,W]=0$, where $[\cdot,\cdot]$ is the Lie bracket.
\end{Lemma}
\begin{proof}[Proof of Lemma \ref{lem_Y2}]
The proof is immediate from Lemmas \ref{lem_Y0} and \ref{lem_Y1}. Indeed, $W$ is Killing on $(M,F)$ if and only if $\{F^*,W^*\}=0$, that is $\{\alpha^*+\beta^*,W^*\}=\{\alpha^*,W^*\}+\{\beta^*,W^*\}=0$. This holds provided $\{\alpha^*,W^*\}=0$, i.e. $W$ is Killing with respect to $h$, and $\{\beta^*,W^*\}=\{V^*,W^*\}=0$. Let us observe that $\{V^*,W^*\}=0$ is actually equivalent to $[V,W]=0$; geometrically, this means that the flows of $V$ and $W$ commute locally. The conclusion follows.
\end{proof}
Observe that in local coordinates the conditions in Lemma \ref{lem_Y2} read
\begin{equation*}
\begin{cases}
W_{i:j}+W_{j:i}=0 \vspace{0.2cm} \\
\sum_{i=1}^n\left(\frac{\partial W^k}{\partial x^i}V^i-\frac{\partial V^k}{\partial x^i}W^i\right)=0,
\end{cases}
\end{equation*}
where $:$ is the covariant derivative with respect to the Levi-Civita connection of $h$.
\begin{theorem}\label{thm: F cut points}
Let $(M,h)$ be a simply connected Riemannian manifold and $V=V^i\frac{\partial}{\partial x^i}$, $W=W^i\frac{\partial}{\partial x^i}$ vector fields on $M$ such that
\begin{enumerate}
\item[(i)] $V$ satisfies the differential relation
\begin{equation}\label{eq_d_eta}
d\eta=d(\log \lambda)\wedge \eta,
\end{equation}
where $\eta=V_i(x)dx^i$, $V_i=h_{ij}V^j$;
\item[(ii)] $W$ is Killing with respect to $h$ and $\{V^*,W^*\}=0$, where $V^*=V^ip_i$ and $W^*=W^ip_i$.
\end{enumerate}
Then
\begin{enumerate}
\item[(i)] The $\widetilde{F}$-unit speed geodesics $\ensuremath{\mathcal{P}}(t)$ are given by
$$
\ensuremath{\mathcal{P}}(t)=\varphi(t,\sigma(t)),
$$
where $\varphi$ is the flow of $W$ and $\sigma(t)$ is an $F_1$-unit speed geodesic.
Equivalently,
$$
\ensuremath{\mathcal{P}}(t)=\varphi(t,\gamma(s(t))),
$$
where $\gamma(s)$ is an $\alpha$-unit speed geodesic and $s=s(t)$ is the inverse of the parameter change $t=t(s)=\int_0^sF_1\left(\gamma(\tau),\frac{d\gamma}{d\tau}\right)d\tau$.
\item[(ii)] The point $\ensuremath{\mathcal{P}}(l)$ is conjugate to $\ensuremath{\mathcal{P}}(0)=p$ along the $\widetilde{F}$-geodesic $\ensuremath{\mathcal{P}}(t)$ if and only if the corresponding point $\widehat{q}=\sigma(l)=\varphi(-l,\ensuremath{\mathcal{P}}(l))$ on the $F_1$-geodesic $\sigma$ is conjugate to $p$, or equivalently, $\widehat{q}$ is conjugate to $p$ along the $\alpha$-geodesic from $p$ to $\widehat{q}$.
\item[(iii)] If $\widehat{q}\in Cut_\alpha(p)$ then $q=\varphi(l,\widehat{q})\in Cut_{\widetilde{F}}(p)$, where $l=d_{F_1}(p,\widehat{q})=d_\alpha(p,\widehat{q})+f(\widehat{q})-f(p)$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm: F cut points}]
All statements follows immediately by combining Theorem \ref{cor: F1 conj points} with Theorem \ref{lem_S}.
\end{proof}
\begin{Remark}
Informally, we may say that the cut locus of $p$ with respect to $\widetilde{F}$ is the $W$-flow deformation of the cut locus of $p$ with respect to $F_1$, that is, the $W$-flow deformation of the cut locus of $p$ with respect to $\alpha$, due to Theorem \ref{cor: F1 conj points}, 7.
\end{Remark}
\section{Surfaces of revolution}\label{sec:Surf of revol}
\subsection{Finsler surfaces of revolution}
Let $(M,F)$ be a (forward) complete oriented Finsler surface, and $W$ a vector field on $M$, whose one-parameter group of transformations $\{\varphi_t:t\in I\}$ consists of $F$-isometries, i.e.
\begin{equation*}
F(\varphi_t(x),(\varphi_t)_{*,x}(y))=F(x,y),\quad\text{for all}\ (x,y)\in TM\ \text{and any} \ t\in\ensuremath{\mathbb{R}}.
\end{equation*}
This is equivalent to
\begin{equation*}
d_F(\varphi_t(q_1),\varphi_t(q_2))=d_F(q_1,q_2),
\end{equation*}
for any $q_1,q_2\in M$ and any given $t$, where $d_F$ is the Finslerian distance on $M$. If $\varphi_t$ is not the identity map, then it is known that $W$ must have at most two zeros on $M$.
We assume hereafter that $W$ has no zeros, hence from Poincar\'e-Hopf theorem it follows that $M$ is a surface homeomorphic to a plane, a cylinder or a torus. Furthermore, we assume that $M$ is the topological cylinder $\ensuremath{\mathbb{S}}^1\times\ensuremath{\mathbb{R}}$.
By definition it follows that, at any $x\in M$, $W_x$ is tangent to the curve $\varphi_x(t)$ at the point $x=\varphi_x(0)$. The set of points $Orb_W(x):=\{\varphi_t(x):t\in\ensuremath{\mathbb{R}}\}$ is called the orbit of $W$ through $x$, or a {\it parallel circle},
and it can be seen that the period $\tau(x):=\min\{t>0 : \varphi_t(x)=x\}$ is well defined and constant along each orbit.
\begin{Definition}\label{def: Finsler surf of revol}
A (forward) complete oriented Finsler surface $(M,F)$ homeomorphic to $\ensuremath{\mathbb{S}}^1\times \ensuremath{\mathbb{R}}$, with a vector field $W$ that has no zero points,
is called a {\it Finsler cylinder of revolution},
and $\varphi_t$ a {\it rotation} on $M$.
\end{Definition}
It is clear from our construction above that $W$ is Killing field on the surface of revolution $(M,F)$.
\subsection{The Riemannian case}
The simplest case is when the Finsler norm $F$ is actually a Riemannian one.
A {\it Riemannian cylinder of revolution} $(M,h)$ is a complete Riemannian manifold $M=\ensuremath{\mathbb{S}}^1\times \ensuremath{\mathbb{R}}=\{(r,\theta):r\in\ensuremath{\mathbb{R}},\ \theta\in[0,2\pi)\}$ with a warped product metric
\begin{equation}\label{eq_Riemannian_metric_h}
h=dr^2+m^2(r)d\theta^2.
\end{equation}
of the real line $(\ensuremath{\mathbb{R}},dr^2)$ and the unit circle $(\ensuremath{\mathbb{S}}^1,d\theta^2)$.
Suppose that the warping function $m$ is a positive-valued even function.
Recall that the equations of an $h$-unit speed geodesic $\gamma(s):=(r(s),\theta(s))$ of $(M,h)$ are
\begin{equation}\label{eq 4}
\begin{cases}
\frac{d^2r}{ds^2}-mm'\left(\frac{d\theta}{ds}\right)^2=0\vspace{0.2cm} \\
\frac{d^2\theta}{ds^2}+2\frac{m'}{m}\frac{dr}{ds}\frac{d\theta}{ds}=0
\end{cases},
\end{equation}
with the unit speed parametrization condition
\begin{equation}\label{eq 1}
\left(\frac{dr}{ds}\right)^2+m^2\left(\frac{d\theta}{ds}\right)^2 =1.
\end{equation}
It follows that every profile curve $\{\theta=\theta_0\}$, or {\it meridian}, is an $h$-geodesic, and that a parallel $\{r=r_0\}$ is geodesic if and only if
$m'(r_0)=0$, where $\theta_0\in [0,2\pi)$ and $r_0\in \ensuremath{\mathbb{R}}$ are constants.
It is clear that two meridians do not intersect on $M$ and for a point $p\in M$, the meridian through $p$ does not contain any cut points of $p$, that is, this meridian is a ray through $p$ and hence $d_h(\gamma(0),\gamma(s))=s$, for all $s\geq 0$.
We observe that (\ref{eq 4}) implies
\begin{equation}\label{h-prime integral}
\frac{d\theta(s)}{ds}m^2(r(s)) = \nu, \quad \nu \text{ is constant},
\end{equation}
that is, the quantity $\frac{d\theta}{ds}m^2$ is conserved along the $h$-geodesics.
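For the reader's convenience, here is the one-line argument: using the second equation of \eqref{eq 4},
\begin{equation*}
\frac{d}{ds}\left(m^2(r(s))\frac{d\theta}{ds}\right)=2mm'\frac{dr}{ds}\frac{d\theta}{ds}+m^2\frac{d^2\theta}{ds^2}=m^2\left(\frac{d^2\theta}{ds^2}+2\frac{m'}{m}\frac{dr}{ds}\frac{d\theta}{ds}\right)=0.
\end{equation*}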
\begin{figure}[H]
\begin{center}
\setlength{\unitlength}{1.1cm}
\begin{picture}(7,7)
\put(3.5,0){\vector(0,1){7}}
\put(3.5,3.5){\vector(1,0){3.5}}
\put(5,5){\vector(-1,-1){3}}
\qbezier(4.5,6.5)(4,5.625)(4.5,4.75)
\qbezier(4.5,4.75)(5,4)(5,3.5)
\qbezier(2.5,6.5)(3,5.625)(2.5,4.75)
\qbezier(2.5,4.75)(2,4)(2,3.5)
\qbezier(2.5,0.5)(3,1.375)(2.5,2.25)
\qbezier(2.5,2.25)(2,3)(2,3.5)
\qbezier(4.5,0.5)(4,1.375)(4.5,2.25)
\qbezier(4.5,2.25)(5,3)(5,3.5)
\qbezier(4,0.5)(3.75,1.375)(4,2.25)
\qbezier(4,2.25)(4.25,3)(4.25,3.5)
\qbezier(4,6.5)(3.75,5.625)(4,4.75)
\qbezier(4,4.75)(4.25,4)(4.25,3.5)
\qbezier(4.25,5.5)(4.25,5.3)(3.5,5.3)
\qbezier(3.5,5.3)(2.75,5.3)(2.75,5.5)
\qbezier(4.25,5.5)(4.25,5.7)(3.5,5.7)
\qbezier(3.5,5.7)(2.75,5.7)(2.75,5.5)
\qbezier(4.25,1.5)(4.25,1.3)(3.5,1.3)
\qbezier(3.5,1.3)(2.75,1.3)(2.75,1.5)
\qbezier(4.25,1.5)(4.25,1.7)(3.5,1.7)
\qbezier(3.5,1.7)(2.75,1.7)(2.75,1.5)
\put(4.2,3.05){\vector(1,1){1}}
\qbezier(3.5,2.5)(4.5,3)(4.8,4.25)
\put(4.2,3.05){\vector(0.1,1){0.3}}
\qbezier(4.25,3.3)(4.3,3.4)(4.45,3.3)
\qbezier(3.25,6.7)(3.5,6.6)(3.75,6.7)
\put(3.75,6.7){\vector(1,1){0.1}}
\put(3.5,7.2){$z$}
\put(4.55,6){$\frac{\partial}{\partial r}$}
\put(5.25,4){$\dot{\gamma}$}
\put(6.8,3.1){$x$}
\put(4.4,3){$\phi$}
\put(3.6,2.3){$\gamma$}
\put(5,0.95){$0-meridian$}
\put(0.4,0.95){$\pi-meridian$}
\put(4,0){$\theta_0-meridian$}
\put(2,1.7){$y$}
\put(0,5){$parallels$}
\put(1.2,5){\vector(1,0.35){1.5}}
\put(1.2,5){\vector(0.4,-1){1.4}}
\put(2.3,1){\vector(1,0){0.4}}
\put(4.8,1){\vector(-1,0){0.4}}
\put(4.2,0.2){\vector(-0.4,1){0.1}}
\end{picture}
\end{center}
\caption{The angle $\phi$ between $\dot{\gamma}$ and a meridian for a cylinder of revolution.}
\label{fig_4}
\end{figure}
If $\gamma(s)=(r(s),\theta(s))$ is a geodesic on the surface of revolution $(M,h)$, then the angle $\phi(s)$ between $\dot{\gamma}$ and the profile curve passing through the point $\gamma(s)$ satisfies the Clairaut relation $m(r(s))\sin\phi(s)=\nu$.
The constant $\nu$ is called the {\it Clairaut constant} (see Figure \ref{fig_4}).
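Indeed (a short computation included for completeness), since $\left\{\frac{\partial}{\partial r},\frac{1}{m}\frac{\partial}{\partial\theta}\right\}$ is an $h$-orthonormal frame, the unit speed condition \eqref{eq 1} gives $\cos\phi=\frac{dr}{ds}$ and $\sin\phi=m\frac{d\theta}{ds}$, hence
\begin{equation*}
m(r(s))\sin\phi(s)=m^2(r(s))\frac{d\theta}{ds}=\nu
\end{equation*}
by \eqref{h-prime integral}.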
We recall the theorem describing the cut locus of a cylinder of revolution from \cite{C1}.
\begin{theorem}\label{thm_Riemannian_cut_locus}
Let $(M,h)$ be a cylinder of revolution whose warping function $m:\ensuremath{\mathbb{R}}\to \ensuremath{\mathbb{R}}$ is a positive-valued even
function, and suppose that the Gaussian curvature $G_h(r)=-\frac{m''(r)}{m(r)}$ is decreasing along the half meridian. If the Gaussian curvature of $M$ is positive on $r=0$, then the structure of the cut locus $C_q$ of a point $q$ with $\theta(q)=0$ in $M$ is given as follows:
\begin{enumerate}
\item The cut locus $C_q$ is the union of a subarc of the parallel $r=-r(q)$ opposite to $q$ and the meridian opposite to $q$ if $|r(q)|<r_0:=\sup\{r>0\,|\,m'(r)<0\}$ and $\varphi(m(r(q)))<\pi$, i.e.
\begin{equation*}
C_q=\theta^{-1}(\pi)\cup(r^{-1}(-r(q))\cap\theta^{-1}[\varphi(m(r(q))),2\pi-\varphi(m(r(q)))]).
\end{equation*}
\item The cut locus $C_q$ is the meridian $\theta^{-1}(\pi)$ opposite to $q$ if $\varphi(m(r(q)))\geq \pi$ or if $|r(q)|\geq r_0$.
\end{enumerate}
Here the function $\varphi(\nu)$ on $(\inf m,m(0))$ is defined as
\begin{equation*}
\varphi(\nu):=2\int_{-\xi(\nu)}^0\frac{\nu}{m\sqrt{m^2-\nu^2}}dr=2\int_0^{\xi(\nu)}\frac{\nu}{m\sqrt{m^2-\nu^2}}dr,
\end{equation*}
where $\xi(\nu):=\min\{r>0|m(r)=\nu\}$.
\end{theorem}
\begin{Remark}
\begin{enumerate}
\item
It is easy to see that if the Gauss curvature $G_h<0$ everywhere, then the $h$-geodesics cannot have conjugate points. It follows that in this case the $h$-cut locus of a point $p\in M$ is the meridian opposite to the point.
\item
See \cite{C2} for a more general class of Riemannian cylinders of revolution whose cut locus can be determined.
\end{enumerate}
\end{Remark}
\section{Randers rotational metrics}\label{sec_Randers_metric}
\subsection{The navigation with wind $\widetilde{W}=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$} \label{sec_A(r),B}
Let $(M,h)$ be the Riemannian metric \eqref{eq_Riemannian_metric_h} on the topological cylinder $M=\{(r,\theta):r\in\ensuremath{\mathbb{R}}, \ \theta\in[0,2\pi)\}$ such that the Gaussian curvature $G_h\neq 0$, i.e. $m(r)$ is not a linear function. We will make this assumption throughout the paper.
\begin{Proposition}\label{thm_A(r)}
Let $(M,h)$ be the topological cylinder $\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{S}}^1$ with its Riemannian metric $h$ and let $\widetilde{W}=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$ be a vector field on $M$, where $A=A(r)$ is a smooth function on $\ensuremath{\mathbb{R}}$ and $B$ is a constant, such that $A^2(r)+B^2m^2(r)<1$. Then
\begin{enumerate}
\item[(i)] The solution of the Zermelo's navigation problem for $(M,h)$ and wind $\widetilde{W}$ is the Randers metric $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$, where
\begin{equation}\label{eq_tilde_a}
(\widetilde{a}_{ij})=\frac{1}{\Lambda^2}\begin{pmatrix}
1-B^2m^2(r) & BA(r)m^2(r) \\
BA(r)m^2(r) & m^2(r)(1-A^2(r))
\end{pmatrix}, \
(\widetilde{b}_i)=\frac{1}{\Lambda}\begin{pmatrix}
-A(r) \\ -Bm^2(r)
\end{pmatrix},
\end{equation}
and $\Lambda:=1-\|\widetilde{W}\|_h^2=1-A^2(r)-B^2m^2(r)>0$.
\item[(ii)] The solution of Zermelo's navigation problem for the data $(M,h)$ and wind $V=A(r)\frac{\partial}{\partial r}$, $A^2(r)<1$ is the Randers metric $F=\alpha+\beta$, where
\begin{equation}\label{eq_a}
(a_{ij})=\frac{1}{\lambda^2}\begin{pmatrix}
1 & 0 \\ 0 & \lambda m^2(r)
\end{pmatrix}, \
(b_i)=\frac{1}{\lambda}\begin{pmatrix}
-A(r) \\ 0
\end{pmatrix},
\end{equation}
and $\lambda:=1-\|V\|_h^2=1-A^2(r)>0$.
\item[(iii)] The solution of Zermelo's navigation problem for $(M,F=\alpha+\beta)$ and wind $W=B\frac{\partial}{\partial \theta}$, $F(-W)<1$ is the Randers metric $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ given in \eqref{eq_tilde_a}.
\end{enumerate}
\end{Proposition}
\begin{proof}[Proof of Proposition \ref{thm_A(r)}]
\begin{enumerate}
\item[(i)] The solution of Zermelo's navigation problem with $(M,h)$ and $\widetilde{W}=(\widetilde{W}^1,\widetilde{W}^2)=(A(r),B)$ is obtained from \eqref{eq_1.1*} with $\Lambda=1-\|\widetilde{W}\|_h^2=1-A^2(r)-B^2m^2(r)$.
Taking into account that $\widetilde{W}_i=h_{ij}\widetilde{W}^j$ it follows $(\widetilde{W}_1,\widetilde{W}_2)=(A(r),Bm^2(r))$ and a straightforward computation leads to \eqref{eq_tilde_a}.
\item[(ii)] Similar with (i) using $(M,h)$ and $V=(V^1,V^2)=(A(r),0)$, hence $(V_1,V_2)=(A(r),0)$ and $\lambda=1-\|V\|_h^2=1-A^2(r)$.
\item[(iii)] Follows from Theorem \ref{thm_two_steps_Zermelo}. We observe that $\Lambda=1-A^2(r)-B^2m^2(r)>0$ is actually equivalent to $A^2(r)<1$ and $F(-W)<1$.
Indeed,
\begin{equation*}
\begin{split}
1-A^2(r)-B^2m^2(r)>0 \Rightarrow 1-A^2(r) > B^2m^2(r) \geq 0 \Rightarrow A^2(r)<1.
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
B^2m^2(r)<1-A^2(r) \Rightarrow
\frac{|B|m(r)}{\sqrt{1-A^2(r)}}<1 \Rightarrow F(-W)<1,
\end{split}
\end{equation*}
where we use $F(-W)=\sqrt{a_{22}(-B)^2}=\frac{|B|m(r)}{\sqrt{1-A^2(r)}}$.
\end{enumerate}
\end{proof}
\begin{Remark}\begin{enumerate}
\item
Observe that we actually perform a rigid translation of the Riemannian indicatrix $\Sigma_h$ by $\widetilde{W}$, which is actually equivalent to translating $\Sigma_h$ by $V$ followed by the translation of $\Sigma_F$ by $W$ (see Remark \ref{rem_F_positive}).
\item Observe that the Randers metric given by \eqref{eq_tilde_a} on the cylinder $\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{S}}^1$ is rotationally invariant, hence $(M,\widetilde{\alpha}+\widetilde{\beta})$ is a Finslerian surface of revolution. This type of Randers metrics are called {\it Randers rotational metrics}.
Indeed, let us denote $m_F(r):=F\left(\frac{\partial}{\partial \theta}\right)$. Observe that in the case $A(r)$ is an odd or an even function, the function $m_F(r)$ is an even function such that $m_F(0)>0$.
\end{enumerate}
\end{Remark}
Theorem \ref{thm: F cut points} implies
\begin{theorem}\label{thm_main_1}
Let $(M,h)$ be the topological cylinder $\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{S}}^1$ with the Riemannian metric $h=dr^2+m^2(r)d\theta^2$ and $\widetilde{W}=A(r)\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$, $A^2(r)+B^2m^2(r)<1$. If we denote by $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ the solution of Zermelo's navigation problem for $(M,h)$ and $\widetilde{W}$, then the following statements are true.
\begin{enumerate}
\item[(i)] The $\widetilde{F}$-unit speed geodesics $\ensuremath{\mathcal{P}}(t)$ are given by
$$
\ensuremath{\mathcal{P}}(t)=(r(s(t)),\theta(s(t))+B\cdot t),
$$
where $\rho(s)=(r(s),\theta(s))$ is an $\alpha$-unit speed geodesic and $t=t(s)$ is the parameter change $t=\int_0^s F(\rho(\sigma),\dot{\rho}(\sigma))d\sigma$.
\item[(ii)] The point $q=\ensuremath{\mathcal{P}}(l)$ is conjugate to $\ensuremath{\mathcal{P}}(0)=p$ along $\ensuremath{\mathcal{P}}$ if and only if $\widehat{q}=(r(q),\theta(q)-Bl)$ is conjugate to $p$ with respect to $\alpha$ along the $\alpha$-geodesic from $p$ to $\widehat{q}$.
\item[(iii)] The point $\widehat{q}\in Cut_\alpha(p)$ is an $\alpha$-cut point of $p$ if and only if $q=(r(\widehat{q}),\theta(\widehat{q})+Bl)\in Cut_{\widetilde{F}}(p)$, where $l=d_{\widetilde{F}}(p,q)$.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm_main_1}]
First of all, observe that $V=A(r)\frac{\partial}{\partial r}$ and $W=B\frac{\partial}{\partial \theta}$ satisfy conditions (i), (ii) in the hypothesis of Theorem \ref{thm: F cut points}.
Indeed, since $(M,h)$ is a surface of revolution and $V=(A(r),0)$, it follows that $\eta=A(r)dr$ is a closed form, hence \eqref{eq_d_eta} is satisfied.
Moreover, $W=B\frac{\partial}{\partial \theta}$ is obviously a Killing field with respect to $h$, and it is trivial to see that $[V,W]=\left[A(r)\frac{\partial}{\partial r},B\frac{\partial}{\partial \theta}\right]=0$.
The statements (i)-(iii) now follow from Theorem \ref{thm: F cut points} and the fact that the flow of $W=B\frac{\partial}{\partial \theta}$ is just $\varphi_t(r,\theta)=(r,\theta+Bt)$ for any $(r,\theta)\in M$, $t\in\ensuremath{\mathbb{R}}$.
In this case $\beta(W)=0$, hence $F(-W)=F(W)=\alpha(W)<1$, and therefore (iii) is a necessary and sufficient condition.
\end{proof}
We have reduced the geometry of the Randers type metric $(M,\widetilde{F})$ to the geometry of the Riemannian manifold $(M,\alpha)$, obtained from $(M,h)$ by \eqref{eq_a}.
\begin{Example}
Let us observe that there are many cylinders $(M,h)$ and winds $\widetilde{W}$ satisfying conditions in Theorem \ref{thm_main_1}.
For instance, let us consider the topological cylinder $\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{S}}^1$ with the Riemannian metric $h=dr^2+m^2(r)d\theta^2$ defined using the warp function $m(r)=e^{-r^2}$.
Consider the smooth function $A:\ensuremath{\mathbb{R}}\to \left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$, $A(r)=\frac{1}{\sqrt{2}}\frac{r}{\sqrt{r^2+1}}$ and any constant $B\in\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$.
Then $A^2(r)+B^2m^2(r)<\frac{1}{2}+B^2m^2(r)\leq \frac{1}{2}+B^2<1$.
In this case $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ is given by
\begin{equation*}
\begin{split}
(\widetilde{a}_{ij})=\frac{1}{\Lambda^2}\begin{pmatrix}
1-B^2e^{-2r^2} & \frac{B}{\sqrt{2}}\frac{re^{-2r^2}}{\sqrt{r^2+1}} \vspace{0.2cm} \\
\frac{B}{\sqrt{2}}\frac{re^{-2r^2}}{\sqrt{r^2+1}} & \frac{1}{2}\frac{(r^2+2)e^{-2r^2}}{r^2+1}
\end{pmatrix},\ (\widetilde{b}_i)=\frac{1}{\Lambda}\begin{pmatrix}
-\frac{1}{\sqrt{2}}\frac{r}{\sqrt{r^2+1}} \vspace{0.2cm} \\ -Be^{-2r^2}
\end{pmatrix},
\end{split}
\end{equation*}
where $\Lambda=\frac{1}{2}\frac{r^2+2}{r^2+1}-B^2e^{-2r^2}$.
Observe that $F=\alpha+\beta$ is given by
\begin{equation*}
\begin{split}
(a_{ij})=\frac{1}{\lambda^2}\begin{pmatrix}
1 & 0 \vspace{0.2cm} \\ 0 & \lambda e^{-2r^2}
\end{pmatrix},\ (b_{i})=\frac{1}{\lambda}\begin{pmatrix}
-\frac{1}{\sqrt{2}}\frac{r}{\sqrt{r^2+1}} \vspace{0.2cm} \\ 0
\end{pmatrix}, \\
\end{split}
\end{equation*}
where $\lambda=\frac{1}{2}\frac{r^2+2}{r^2+1}$.
\end{Example}
Moreover, we have
\begin{Corollary}\label{cor_*2}
\begin{enumerate}
\item[(I)] With the notations in Theorem \ref{thm_main_1}, let us assume that $(M,h)$ has negative curvature everywhere, i.e. $G_h(r)<0$ for all $r\in \ensuremath{\mathbb{R}}$.
\quad If there exists a smooth function $A:\ensuremath{\mathbb{R}}\to(-1,1)$ and a constant $B$ such that $A^2(r)+B^2m^2(r)<1$, and if $G_\alpha(r)<0$ everywhere, then the $\alpha$-cut locus and the $F=\alpha+\beta$ cut locus of a point $p\in M$ are both the opposite meridian to the point $p$.
\quad Moreover, the $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ cut locus of $p=(r_0,\theta_0)$ is the deformed opposite meridian by the flow of the vector field $W=B\frac{\partial}{\partial\theta}$, i.e. $(r(t),\theta_0+\pi+Bt)$, for all $t\in \ensuremath{\mathbb{R}}$, where $(r(t),\theta_0+\pi)$ is the opposite meridian to $p=(r_0,\theta_0)$, $r(0)=r_0$.
\item[(II)] With the notations in Theorem \ref{thm_main_1} let us assume that $(M,h)$ has Gaussian curvature $G_h(r)$ decreasing along any half meridian $[0,\infty)$ and $G_h(0)\geq 0$.
\quad If there exists a smooth function $A:\ensuremath{\mathbb{R}}\to(-1,1)$ and a constant $B$ such that $A^2(r)+B^2m^2(r)<1$, $G_\alpha(r)$ is decreasing along any half meridian and $G_\alpha \geq 0$, then the $\alpha$-cut locus and the $F$-cut locus of a point $p=(r_0,\theta_0)$ are given in Theorem \ref{thm_Riemannian_cut_locus}.
\end{enumerate}
Moreover, the $\widetilde{F}$-cut locus of $p$ is obtained by the deformation of the cut locus described in Theorem \ref{thm_Riemannian_cut_locus} by the flow of $W=B\frac{\partial}{\partial \theta}$.
\end{Corollary}
\begin{proof}[Proof of Corollary \ref{cor_*2}]
It is trivial by combining Proposition \ref{thm_A(r)} and Theorem \ref{thm_main_1}.
\end{proof}
\begin{Remark}
It is not trivial to obtain concrete examples satisfying conditions (I) and (II) in Corollary \ref{cor_*2} in the case $A\neq$ constant. We conjecture that such examples exist leaving the concrete construction for a forthcoming research. The case $A=$ constant is treated below.
\end{Remark}
\subsection{The case $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$} \label{sec_A,B}
Consider the case $\widetilde{W}=(\widetilde{W}^1,\widetilde{W}^2)=(A,B)$, where $A$ and $B$ are constants, on the topological cylinder $M=\{(r,\theta):r\in \ensuremath{\mathbb{R}}, \ \theta\in [0,2\pi)\}$. Here $m:\ensuremath{\mathbb{R}}\to [0,\infty)$ is an even bounded function such that $m^2<\frac{1-A^2}{B^2}$, $|A|<1$, $B\neq 0$.
Proposition \ref{thm_A(r)} and Theorem \ref{thm_main_1} can be easily rewritten for this case by putting $A(r)=A=$ constant. We will not write them again here.
Instead, let us give some special properties specific to this case. A straightforward computation gives:
\begin{Proposition} Let $(M,h)$ be the cylinder $\ensuremath{\mathbb{R}}\times\ensuremath{\mathbb{S}}^1$ with the Riemannian metric $h=dr^2+m^2(r)d\theta^2$, and let $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$, with $A,B$ real constants such that $m^2<\frac{1-A^2}{B^2}$, $|A|<1$, $B\neq 0$.
Then the following statements are true.
\begin{enumerate}
\item[(I)]
The Gauss curvatures $G_h$ and $G_\alpha$
of $(M,h)$ and $(M,\alpha)$, respectively, are proportional, i.e.
$$
G_\alpha(r)=\lambda^{2}G_h(r),
$$
where $\alpha$ is the Riemannian metric obtained in the solution of the Zermelo's navigation problem for $(M,h)$ and $V=A\frac{\partial}{\partial r}$.
\item[(II)] The geodesic flows $S_h$ and $S_\alpha$ of $(M,h)$ and $(M,\alpha)$, respectively, satisfy
$$
S_h=S_\alpha+\Delta,
$$
where $\Delta=-2A^2mm''(y^2)^2\frac{\partial}{\partial y^1}$ is the difference vector field on $TM$ endowed with the canonical coordinates $(r,\theta; y^1,y^2)$.
\end{enumerate}
\end{Proposition}
Moreover, we have
\begin{theorem}
If $(M,h)$ is a Riemannian metric on the cylinder $M=\ensuremath{\mathbb{R}}\times \ensuremath{\mathbb{S}}^1$ with bounded warp function $m(r)<\frac{\sqrt{1-A^2}}{|B|}$, where $A,B$ are constants, $|A|<1$, $B\neq 0$, and the wind is $\widetilde{W}=A\frac{\partial}{\partial r}+B\frac{\partial}{\partial \theta}$, then the following statements hold.
\begin{enumerate}
\item[(I)] If $G_h(r)<0$ everywhere, then
\begin{enumerate}
\item[(i)] the $\alpha$-cut locus of a point $p$ is the opposite meridian.
\item[(ii)] the $F$-cut locus of a point $p$ is the opposite meridian, where $F=\alpha+\beta$,
\begin{equation*}
\begin{split}
(a_{ij})=\frac{1}{\lambda^2}\begin{pmatrix}
1 & 0 \\ 0 & \lambda m^2(r)
\end{pmatrix}\ (b_i)=\frac{1}{\lambda}\begin{pmatrix}
-A \\ 0
\end{pmatrix},
\end{split}
\end{equation*}
and $\lambda:=1-\|V\|_h^2=1-A^2>0$.
\item[(iii)] The $\widetilde{F}$-cut locus of a point $p$ is the opposite meridian twisted by the flow action $\varphi_t(r,\theta)=(r,\theta+Bt)$.
\end{enumerate}
\item[(II)] With the notations in Theorem \ref{thm_main_1}, let us assume that the Gaussian curvature $G_h(r)$ of $(M,h)$ is decreasing along any half meridian $[0,\infty)$ and $G_h(0)\geq 0$. Then the cut locus of $\widetilde{F}=\widetilde{\alpha}+\widetilde{\beta}$ is a subarc of the opposite meridian or of the opposite parallel, as described in Theorem \ref{thm_Riemannian_cut_locus}, deformed by the flow of $W=B\frac{\partial}{\partial \theta}$.
\end{enumerate}
\end{theorem}
\begin{Example}
There are many examples satisfying this theorem. For instance, the choice $m(r)=e^{-r^2}\leq 1$ gives $G_h(r)=-\frac{m''(r)}{m(r)}=2-4r^2$, which is decreasing on $[0,\infty)$ with $G_h(0)=2\geq 0$, hence the assumptions in (II) are satisfied. Any choice of constants $A, B$ such that $1<\frac{\sqrt{1-A^2}}{|B|}$, i.e. $A^2+B^2<1$, is suitable for $\widetilde{W}$ (for instance $(A,B)=\rho(\sin \omega,\cos\omega)$ for a fixed angle $\omega$ and $0<\rho<1$). Many other examples are possible.
\end{Example}
\begin{Remark}
A similar study can be done for the case $B=0$.
\end{Remark}
\begin{Remark}
The extension to the Randers case of the Riemannian cylinders of revolution and study of their cut loci in \cite{C2} can be done in a similar manner.
\end{Remark}
\section{Introduction}
Causal inference increasingly plays a vitally important role in a wide range of fields including online marketing, precision medicine, political science, etc.
For example, a typical concern in precision medicine is whether an \emph{alternative} medication treatment for a certain illness will lead to a better outcome \footnote{\emph{Treatment} and \emph{outcome} are terms in the theory of causal inference, which for example denote a promotion strategy taken and its resulting profit, respectively}. Treatment effect estimation can answer this question by comparing outcomes under different treatments.
Estimating treatment effect is challenging, because only the factual outcome for a specific treatment assignment (say, treatment \texttt{A}) is observable, while the counterfactual outcome corresponding to alternative treatment \texttt{B} is usually unknown.
Aiming at deriving the absent counterfactual outcomes, existing methods for causal inference from observational data can be categorized into the following main branches: re-weighting methods \cite{gruber2011tmle, austin2015moving}, tree-based methods \cite{chipman2007bayesian, hill2011bayesian, wager2018estimation}, matching methods \cite{rosenbaum1983central, dehejia2002propensity,stuart2011matchit} and doubly robust learners \cite{econml, 10.5555/3104482.3104620}.
In general, the matching approaches focus on finding the comparable pairs based on distance metrics such as propensity score or Euclidean distance, while re-weighting methods assign each unit in the population a weight to equate groups based on the covariates. Meanwhile, tree-based machine learning models including decision tree or random forest are utilized in the tree-based approach to derive the counterfactual outcomes. Doubly Robust Learner is another recently developed approach that combines the propensity score weighting with the regression outcome to produce an unbiased and robust estimator.
Existing treatment effect estimation from observational data faces two major challenges. First, most previous studies focus on deterministic interventions that assign each individual a fixed treatment value, and are thus incapable of dealing with dynamic and stochastic interventions~\cite{dudik2014doubly, pearl2000models, tian2012identifying}.
They cannot address questions such as ``how does the health status (the desired outcome) of a patient change if the doctor adopts a 50\% dose reduction'', which are of practical interest in the real world.
Second, existing methods fail to exploit the relationship between the desired response and the intervention on the treatment, resulting in black-box effect estimation.
To address these issues, we propose a novel influence function based model to provide sufficient causal evidence to answer decision-making questions about stochastic interventions. Particularly, our model consists of three novel components: \emph{stochastic propensity score}, \emph{stochastic intervention effect estimator} (SIE) and \emph{customized genetic algorithm} for stochastic intervention optimization (Ge-SIO).
The main contributions of our model are summarized below:
\begin{itemize}
\item
We propose a new stochastic propensity score that learns the treatment effect trajectory, which tackles the limitation of existing approaches that only deal with deterministic intervention effects.
\item
Based on the general efficiency theory, we provide a theoretical proof that SIE can achieve fast parametric convergence rates when the potential outcome model cannot be perfectly estimated.
\item Ge-SIO is proposed to find the optimal intervention leading to the desired response, which can be widely applicable in domain-specific decision-making.
\end{itemize}
\section{Related works}
\label{section:related}
Conventionally, causal inference can be tackled by either randomized experiments (also known as A/B testing in online settings) or observational data.
In a randomized experiment, units are randomly assigned to a treatment and their responses are recorded. One treatment is selected as the best among the alternatives by comparing predefined statistical criteria. While randomized experiments have been popular in traditional causal inference, they are prohibitively expensive \cite{chan2010evaluating, kohavi2011unexpected} and infeasible \cite{bottou2013counterfactual} in some real-world settings~\cite{li2017learning,wang2019polynomial, xu2020causality}.
As an alternative method, observational study is becoming increasingly critical and available in many domains such as medicine, public policy and advertising.
However, observational studies need to deal with the data absence problem, which differs fundamentally from supervised learning. This is simply because only the factual outcome (symptom) for a specific treatment assignment (say, treatment \texttt{A}) is observable, while the counterfactual outcome corresponding to alternative treatment \texttt{B} in the same situation is always unknown.
\subsection{Treatment Effect Estimation}
The simplest way to estimate treatment effects in observational data is the matching method, which finds comparable units in the treated and controlled groups. The prominent matching methods include Propensity Score Matching (PSM) \cite{rosenbaum1983central, dehejia2002propensity} and Nearest Neighbor Matching (NNM) \cite{stuart2011matchit}. Particularly, for each treated individual, PSM and NNM select the nearest units in the controlled group based on some distance functions, and then calculate the difference between the two paired outcomes. Another popular approach is the reweighting method, which involves building a classifier model to estimate the probability of a treatment being assigned to a particular unit, and uses the predicted score as the weight for each unit in the dataset. TMLE~\cite{gruber2011tmle} and IPSW~\cite{austin2015moving} fall into this category. Ordinary Least Squares (OLS) regression~\cite{goldberger1964econometric} is another commonplace method that fits two linear regression models for the treated and controlled groups, with the covariates as the input features and the outcome as the output. The predicted counterfactual outcomes thereafter are used to calculate the treatment effect. Meanwhile, the decision tree is a popular non-parametric machine learning model that attempts to build decision rules for regression and classification tasks. Bayesian Additive Regression Trees (BART) \cite{chipman2007bayesian, hill2011bayesian} and Causal Forest \cite{wager2018estimation} are the prominent tree-based methods in causal inference. While BART~\cite{chipman2007bayesian, hill2011bayesian} builds decision trees for the treated and controlled units, Causal Forest~\cite{wager2018estimation} constructs a Random Forest model to derive the counterfactual outcomes, and then calculates the difference between the paired potential outcomes to obtain the average treatment effect. They are proven to obtain more accurate treatment effect estimates than matching methods and reweighting methods in the non-linear outcome setting.
Doubly Robust Learner \cite{econml, 10.5555/3104482.3104620} is a recently proposed approach that constructs a regression estimator predicting the outcome based on the covariates and treatment, and builds a classifier model to fit the treatment. DRL finally combines both the predicted propensity score and the predicted outcome to estimate the treatment effect.
\subsection{Stochastic Intervention Optimization}
Our work connects to uplift modelling, which optimizes the treatment effect by uplifting the expected response under the treatment policy \cite{zaniewicz2013support, Personalized_Medicine, hansotia2002incremental, manahan2005proportional}. Uplift modelling measures the effectiveness of a treatment and then predicts the corresponding expected response. The most popular and widely-used approach is the Separate Model Approach (SMA) \cite{zaniewicz2013support,Personalized_Medicine}, which builds two different regression models: the first one uses the treated unit data, whilst the other uses the controlled unit data. Several state-of-the-art machine learning models such as Random Forest, Gradient Boosting Regression or AdaBoost can be used to construct the predictive model \cite{liaw2002classification, solomatine2004adaboost, friedman2001greedy}. The predicted responses are then calculated, and the optimal treatments are selected as the result. SMA has been widely applied in marketing \cite{hansotia2002incremental} and customer segmentation \cite{manahan2005proportional}. However, when dealing with data containing a great deal of noise and missing information, the model outcomes are prone to be incorrect and biased, which leads to poor performance. Other commonplace methods include the Class Transformation Model \cite{jaskowski2012uplift} and Uplift Random Forest \cite{guelman2014package}, which build a classification model for each outcome in the dataset. These techniques therefore can only handle categorical outcomes, instead of continuous ones.
\section{Preliminaries and Problem Definition}
\label{section:pre}
\subsection{Notation}
In this study, we consider the observational dataset $Z=\{\boldsymbol{x}_i,{y}_i,t_i\}_{i=1}^n$ with $n$ units, where $\boldsymbol{x}_i\in\mathbb{R}^{d}$ is the $d$-dimensional covariate,
${y}_i$ and $t_i\in\{0,1\}$ are the outcome and the treatment for the unit, respectively.
The treatment variable is binary in many cases; thus a unit is assigned to the control group if $t=0$, or to the treated group if $t=1$. Accordingly, $y_0(\boldsymbol{x})$ and $y_1(\boldsymbol{x})$ are the potential outcomes of a unit with covariates $\boldsymbol{x}$ under the control and the treatment, respectively. The central goal of causal inference is to compare the potential outcomes of the same units under two or more treatment conditions, which is implemented by computing the average treatment effect (ATE), i.e.,
\begin{equation}
\tau_{\text{ATE}}=\mathbb{E}[y_0(\boldsymbol{x})-y_1(\boldsymbol{x})]
\label{eq:tau}
\end{equation}
\subsection{Propensity Score}
Rosenbaum and Rubin~\cite{rosenbaum1983central} first proposed the propensity score technique to deal with high-dimensional covariates. Particularly, the propensity score summarises the mechanism of treatment assignment and thus squeezes the covariate space into one dimension to avoid possible data sparseness issues~\cite{bang2005doubly, dehejia2002propensity, austin2015moving, hirano2003efficient}. The propensity score is defined as the probability that a unit is assigned to the treatment $t = 1$ given the covariates $\boldsymbol{x}$, i.e.,
\begin{equation}
p_t(\boldsymbol{x}) = \mathbb{P}(t = 1 | \boldsymbol{x})
\end{equation}
In practice, one widely-adopted parametric model for propensity score $p_t(\boldsymbol{x})$
is the logistic regression
\begin{equation}
\hat{p}_t(\boldsymbol{x})=\frac{1}{1+\exp\left(-\boldsymbol{w}^{\top}\boldsymbol{x}-\omega_0\right)}
\label{eq:ps}
\end{equation}
where $\boldsymbol{w}$ and $\omega_0$ are estimated
by minimizing the negative log-likelihood~\cite{martens2008systematic}.
The propensity score is widely used in causal inference methods to estimate treatment effects from observational data~\cite{hirano2003efficient, pirracchio2016propensity, luo2010applying,abdia2017propensity}.
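To make this step concrete, the following minimal sketch (not part of the original method) fits a logistic-regression propensity score with scikit-learn and returns $\hat{p}_t(\boldsymbol{x})$ for every unit; the variable names \texttt{X} and \texttt{t} and the synthetic data are illustrative assumptions only.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_propensity(X, t):
    """Estimate p_t(x) = P(t = 1 | x) with logistic regression.

    X: (n, d) covariate matrix, t: (n,) binary treatment vector.
    """
    model = LogisticRegression(max_iter=1000)
    model.fit(X, t)
    # Column 1 of predict_proba is P(t = 1 | x).
    return model.predict_proba(X)[:, 1]

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
p_hat = fit_propensity(X, t)
\end{verbatim}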
\subsection{Assumption}\label{sc:asp} Following the general practice in causal inference literature, the following two assumptions should be taken into consideration to ensure the identifiability of the treatment effect, i.e. \emph{Positivity} and \emph{Ignorability}.
\begin{assumption} [Positivity]. Each unit has a positive probability of being assigned to a treatment, i.e.,
\begin{equation}
p_t(\boldsymbol{x}) >0, \quad\forall \boldsymbol{x}\text{ and } t
\end{equation}
\end{assumption}
\begin{assumption}[Ignorability]
The assignment to the treatment $t$ is independent of the outcomes $\boldsymbol{y}$ given covariates $\boldsymbol{x}$
\begin{equation}
y_1 , y_0 \perp\mkern-9.5mu\perp t|\boldsymbol{x}
\end{equation}
\end{assumption}
\section{Stochastic Intervention Effect}
\label{section:estimation}
The stochastic intervention effect can be expressed as the difference between the observed outcome and the counterfactual outcome under the stochastic intervention. Because the observed outcome is fixed, stochastic intervention effect estimation is transformed into the problem of estimating the counterfactual outcome.
\subsection{Stochastic Counterfactual Outcome}
To estimate the counterfactual outcome, we first propose a flexible and task-specific stochastic propensity score to characterize the stochastic intervention.
\begin{definition}[Stochastic Propensity Score]{ The stochastic propensity score with respect to stochastic degree $\delta$ is
\begin{equation}
q_{t}( \boldsymbol{x},\delta) = \frac{\delta \cdot\hat{p}_t(\boldsymbol{x})}{\delta \cdot\hat{p}_t(\boldsymbol{x}) + 1 - \hat{p}_t(\boldsymbol{x})}
\label{eq:incrps}
\end{equation}
where $\hat{p}_t(\boldsymbol{x})$ is denoted by
}
\begin{equation}
\hat{p}_{t}(\boldsymbol{x})=\frac{\exp \left(\sum_{j=1}^{s} \beta_{j} g_{j}\left(\boldsymbol{x}\right)\right)}{1+\exp \left(\sum_{j=1}^{s} \beta_{j} g_{j}\left(\boldsymbol{x}\right)\right)}
\label{eq:ourps}
\end{equation}
\label{def:ips}
where $\{ g_1, \cdots, g_s\}$ are nonlinear basis functions.
\end{definition}
The proposed stochastic propensity score in Definition \ref{def:ips} has two promising properties compared with \eqref{eq:ps}.
On the one hand, propensity score \eqref{eq:ps} fails to quantify the causal effect under stochastic intervention.
So we introduce $\delta$ in \eqref{eq:incrps} to represent the stochastic intervention, indicating the extent to which the propensity scores deviate from their actual observational values.
For instance, the stochastic intervention that the doctor adopts 50\% dose increase in the patient can be expressed by $\delta = 1.5$.
On the other hand, the linear term $\boldsymbol{w}^{\top}\boldsymbol{x}+\omega_0$ in Eq.~\eqref{eq:ps} may lead to misspecification~\cite{dalessandro2012causally} if there are higher-order terms or non-linear trends among covariates $\boldsymbol{x}$.
So we propose to use a sum of nonlinear function $\sum_{j=1}^{s} \beta_{j} g_{j}$ in \eqref{eq:ourps} that captures the non-linearity involving in covariates to create an unbiased estimator of treatment effect.
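As a minimal illustration of Definition \ref{def:ips}, the snippet below shifts an estimated propensity score by a stochastic degree $\delta$ according to \eqref{eq:incrps}; it is a sketch only, and any estimate of $\hat{p}_t$ (for instance the one of \eqref{eq:ourps}) can be plugged in.
\begin{verbatim}
import numpy as np

def stochastic_propensity(p_hat, delta):
    """q_t(x, delta) = delta * p / (delta * p + 1 - p)."""
    p_hat = np.asarray(p_hat, dtype=float)
    return delta * p_hat / (delta * p_hat + 1.0 - p_hat)

# delta > 1 shifts the propensities up, delta < 1 shifts them down,
# and delta = 1 leaves them unchanged.
q = stochastic_propensity([0.2, 0.5, 0.9], delta=1.5)
\end{verbatim}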
On the basis of the stochastic propensity score, we propose an influence function specific to estimate counterfactual outcome under stochastic intervention.
Meanwhile, we also analyze the asymptotic behavior of the counterfactual outcome with theoretical guarantees. We prove that our influence function can achieve double robustness and fast parametric convergence rates.
\begin{theorem}
\label{th:sto}
With the stochastic intervention of degree $\delta$ on observed data $z=(\boldsymbol{x},y,t)$, we have
\begin{equation}
\varphi(z,\delta)=q_{t}(\boldsymbol{x},\delta)\cdot m_1(\boldsymbol{x},y)+(1-q_{t}(\boldsymbol{x},\delta))\cdot m_0(\boldsymbol{x},y)
\label{eq:varphi}
\end{equation}
being the efficient influence function for the resulting counterfactual outcome $\hat{\psi}$, i.e.,
\begin{equation}
\hat{\psi}
=\mathbb{P}_{n}\left[\varphi(z,\delta)\right]
\label{eq:e_psi}
\end{equation}
where $m_1(\boldsymbol{x},y)$ or $m_0(\boldsymbol{x},y)$ is given by
\begin{equation}
m_t(\boldsymbol{x},y)=\frac{\mathbb{I}_{t}\cdot({y-\hat{\mu}(\boldsymbol{x},t)})}{t\cdot\hat{p}_t(\boldsymbol{x})+(1-t)(1-\hat{p}_t(\boldsymbol{x}))}+\hat{\mu}(\boldsymbol{x},t)
\label{eq:m}
\end{equation}
and $\mathbb{I}_{t}$ is an indicator function, $\hat{p}_t$ is the estimated propensity score in Eq.~\eqref{eq:ourps} and $\hat{\mu}$ is the potential outcome model, which can be fitted by machine learning methods.
\end{theorem}
\begin{proof}
Throughout we assume the observed data quantity $\psi$ can be estimated under the positivity assumption from Section~\ref{sc:asp}.
For the unknown ground-truth $\psi(\delta)$, we will prove $\varphi$ is the influence function of $\psi(\delta)$ in Eq.~\eqref{eq:e_psi} by checking
\begin{equation}
\int\left(\varphi(y,x,t,\delta)-\psi \right) d \mathbb{P}=0
\label{eq:property}
\end{equation} Eq.~\eqref{eq:property} indicates that the uncentered influence function $\varphi$ is unbiased for $\psi$. Given $q_t(\boldsymbol{x},\delta)$ as the stochastic propensity score in Eq.~\eqref{eq:incrps},
we check the property~\eqref{eq:property} by
\begin{equation*}
\small
\begin{split}
&\int\left(\varphi(y,x,t,\delta)-\psi\right) d \mathbb{P}\\
&=\int\left\{q_t\cdot m_1(\boldsymbol{x},y)+(1-q_t)m_0(\boldsymbol{x},y)-\psi(\delta)\right\}d\mathbb{P}(y,x,t,\delta)\\
&=\int\{q_t\frac{\mathbb{I}_{t=1}\cdot(y-\hat{\mu}(x,1))}{\hat{p}_t}+(1-q_t)\frac{\mathbb{I}_{t=0}\cdot(y-\hat{\mu}(x,0))}{1-\hat{p}_t}\\ &+q_t\hat{\mu}(x,1)+(1-q_t)\hat{\mu}(x,0)-\psi(\delta)\}d\mathbb{P}(y,x,t,\delta)\\
&=\int\{q_t\frac{\mathbb{I}_{t=1}\cdot(y-\hat{\mu}(x,1))}{\hat{p}_t}+(1-q_t)\frac{\mathbb{I}_{t=0}\cdot(y-\hat{\mu}(x,0))}{1-\hat{p}_t} \\ &+q_t\hat{\mu}(x,1)+(1-q_t)\hat{\mu}(x,0)-\mathbb{E}[q_t\hat{\mu}(x,1)\\
&+(1-q_t)\hat{\mu}(x,0)]\}d\mathbb{P}(y,x,t,\delta)\\
&\overset{(1)}{=}\int\left\{q_t\frac{\mathbb{I}_{t=1}\cdot(y-\hat{\mu}(x,1))}{\hat{p}_t}\right\}d\mathbb{P}(y,x,t,\delta)\\ &+\int\left\{(1-q_t)\frac{\mathbb{I}_{t=0}\cdot(y-\hat{\mu}(x,0))}{1-\hat{p}_t}\right\}d\mathbb{P}(y,x,t,\delta)\\
&=\int\left\{q_t\frac{\mathbb{I}_{t=1}\cdot y}{\hat{p}_t}+(1-q_t)\frac{\mathbb{I}_{t=0}\cdot y}{1-\hat{p}_t}\right\}d\mathbb{P}(y,x,t,\delta)\\
&-\int\left\{q_t\frac{\mathbb{I}_{t=1}\cdot\hat{\mu}(x,1)}{\hat{p}_t}+(1-q_t)\frac{\mathbb{I}_{t=0}\cdot\hat{\mu}(x,0)}{1-\hat{p}_t}\right\}d\mathbb{P}(x,t,\delta)\\
&\overset{(2)}{=}0
\end{split}
\end{equation*}
Equality (1) follows from the law of iterated expectation, and equality (2) follows from the definition of $\hat{\mu}(\boldsymbol{x},t)$ and the usual properties of the conditional distribution $d\mathbb{P}(x,y,\delta)=d\mathbb{P}(y|x,\delta)d\mathbb{P}(x,\delta)$.
So far we have proved that $\varphi$ is the influence function of the counterfactual outcome $\psi(\delta)$.
Thus the uncentered efficient influence function can be used to construct an unbiased semiparametric estimator for $\psi(\delta)$, i.e., $\int \varphi \, d\mathbb{P}=\psi$.
$\int \varphi\mathbb{P}=\psi$.
\end{proof}
\begin{algorithm}[!htbp]
\small
\caption{SIE: Stochastic Intervention Effect}
\label{alg:avg}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Observed units $\{z_i:(\boldsymbol{x}_i,t_i,y_i)\}_{i=1}^{n}$
\STATE Initialize a stochastic degree $\delta$.
\STATE Randomly split $Z$ into $k$ disjoint groups
\WHILE{each group}
\STATE Fit the propensity score $\hat{p}_t(\boldsymbol{x}_i)$ by Eq.~\eqref{eq:ourps}
\STATE Fit the potential outcome model $\hat{\mu}(\boldsymbol{x}_i,t_i)$
\STATE Compute $\tau_{i}=\hat{p}_{t}(\boldsymbol{x}_i) \hat{\mu}(\boldsymbol{x}_i,1)+\left(1-\hat{p}_{t}(\boldsymbol{x}_i)\right) \hat{\mu}(\boldsymbol{x}_i,0)$
\STATE Calculate $q_{t}( \boldsymbol{x}_i;\delta)$ by Eq.~\eqref{eq:incrps}
\STATE Calculate $m_{1}(\boldsymbol{x}_i)$ and $m_{0}(\boldsymbol{x}_i)$ by Eq.~\eqref{eq:m}
\STATE Calculate the influence function $\varphi(z_i,\delta)$ by Eq.~\eqref{eq:e_psi}.
\ENDWHILE
\STATE Compute $\hat{\tau}_{\text{ATE}}=\frac{1}{n} \sum_{i=1}^{n}\tau_{i}$
\STATE Compute $\hat{\tau}_{\text{SIE}}=\frac{1}{n} \sum_{i=1}^{n}(\varphi(z_i,\delta)-y_i)$
\ENSURE stochastic intervention effect $\tau_{\text{SIE}}$, ATE $\tau_{\text{ATE}}$
\end{algorithmic}
\end{algorithm}
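A compact sketch of the estimator is given below. It follows Algorithm~\ref{alg:avg}, but uses gradient-boosting models for both nuisance functions (in place of the basis-expansion propensity model of \eqref{eq:ourps}) and fits them on the complementary folds of a $k$-fold split, which is one possible reading of the sample-splitting step; the clipping of the propensity score is a numerical safeguard added here, so the code should be read as an illustration rather than as the exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

def sie(X, t, y, delta, n_splits=5):
    """Sketch of Algorithm 1: stochastic intervention effect and ATE."""
    X, t, y = np.asarray(X), np.asarray(t), np.asarray(y)
    n = len(y)
    phi = np.zeros(n)   # uncentered influence-function values varphi(z, delta)
    tau = np.zeros(n)   # per-unit terms used for the ATE line of Algorithm 1
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        ps = GradientBoostingClassifier().fit(X[train], t[train])
        mu1 = GradientBoostingRegressor().fit(
            X[train][t[train] == 1], y[train][t[train] == 1])
        mu0 = GradientBoostingRegressor().fit(
            X[train][t[train] == 0], y[train][t[train] == 0])
        p = np.clip(ps.predict_proba(X[test])[:, 1], 1e-3, 1 - 1e-3)
        mu1_hat, mu0_hat = mu1.predict(X[test]), mu0.predict(X[test])
        q = delta * p / (delta * p + 1.0 - p)            # stochastic propensity
        # m_t(x, y) for t = 1 and t = 0.
        m1 = (t[test] == 1) * (y[test] - mu1_hat) / p + mu1_hat
        m0 = (t[test] == 0) * (y[test] - mu0_hat) / (1.0 - p) + mu0_hat
        phi[test] = q * m1 + (1.0 - q) * m0              # influence function
        tau[test] = p * mu1_hat + (1.0 - p) * mu0_hat
    return phi.mean() - y.mean(), tau.mean()   # (tau_SIE, tau_ATE)
\end{verbatim}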
\section{Stochastic Intervention Optimization}
Estimating the stochastic intervention effect is not enough; we are more interested in ``what is the optimal level/degree of treatment for a patient to achieve the best expected outcome?''.
In this section, we apply influence-based estimator to search for the optimal intervention
that achieves the optimal expected response over the whole population.
We model the stochastic intervention using the stochastic propensity score $\hat{q}_t(\boldsymbol{x},\delta)$, and
look for a set of stochastic interventions $\Delta=\{\delta_1^*,\cdots,\delta_n^*\}$ where the $i$-th intervention $\delta_i^*\in\Delta$ maximizes the expected response specific to $i$-th unit $z_i=(\boldsymbol{x}_i,y_i,t_i)$, denoted by $\varphi(z_i,\delta_i)$:
\begin{equation}
\delta_i^*=\arg\max_{\delta_i}\varphi(z_i,\delta_i)
\label{eq:delta}
\end{equation}
\label{section:strategy}
Note that the optimization problem in Eq.~\eqref{eq:delta} is non-differentiable.
To avoid using further assumptions for solving it, we formulate a customised genetic algorithm~\cite{whitley1994genetic} (Ge-SIO) to exploit the search space in an efficient and flexible manner.
The main advantage of Ge-SIO is that it is model-agnostic and can handle any black-box function and data type. Therefore, with modifications specific to the intervention effect estimation, Ge-SIO solves Eq.~\eqref{eq:delta} through a process of natural selection. The input of Ge-SIO is the fitness function $\hat{\psi}(\cdot)$ and the intervention regime $\Delta$.
For stochastic intervention optimization, each candidate solution is described by the $n$-dimensional intervention $\Delta$ (the ``genes''), and the objective values of the candidates are evaluated by Eq.~\eqref{eq:delta}. Usually, a random population of solutions is initialized, which undergoes the process of evolution to obtain better fitness values until the stopping condition is reached.
Specifically, Ge-SIO first selects $m$ solutions as the population of parents based on their fitness values.
Among the selected parent solutions, $m$ solutions are chosen pairwise with uniform probability to produce children, which is called the crossover process. The $n$-dimensional $\Delta$ are recombined by the simulated binary crossover recombinator. Crossover takes the $m$ selected parents and combines them to bring diversity to the solutions.
The children, which constitute new solutions, are modified by the mutation operator. Mutation has a small chance to change $\Delta$, which may create fitter solutions.
Thus, Ge-SIO first generates children by crossover and then modifies them by mutation. After the process of evolution is done, the fittest $\Delta$ is returned as the optimal solution for the desired expected response $\hat{\psi}$.
We run the algorithm for a given number of generations, repeating the above process so as to find the optimal solution. The full stochastic intervention optimization algorithm is shown in Algorithm~\ref{alg:IEO}.
\begin{algorithm}[H]
\small
\caption{Ge-SIO: Stochastic Intervention Optimization}
\label{alg:IEO}
\begin{algorithmic}[1]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\REQUIRE Observed units $\{z_i:(\boldsymbol{x}_i,t_i,y_i)\}_{i=1}^{n}$
\STATE Initialize a batch of population $\Gamma=\{\boldsymbol{\Delta}_1,\cdots,\boldsymbol{\Delta}_m\}$ with $\boldsymbol{\Delta}_i\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\nu})$
\FOR{$G$ generation}
\FOR{$k=1,\cdots,m$}
\FOR{$i=1,\cdots,n$}
\STATE Compute $q_t(\boldsymbol{x},\delta_i)$ by Eq.~\eqref{eq:incrps}
\STATE Calculate $m_{1}(\boldsymbol{x}_i)$ and $m_{0}(\boldsymbol{x}_i)$ by Eq.~\eqref{eq:m}
\STATE Calculate $\varphi(z_i,\delta)$ by Eq.~\eqref{eq:varphi}.
\ENDFOR
\STATE Compute $k$-th fitness $\Phi(\Delta_k)=\sum_{i=1}^{n}\varphi(z_i,\delta)$
\ENDFOR
\STATE Select $\Delta_1,\cdots,\Delta_m\in\Gamma$ based on its fitness function
\STATE Randomly pair $\lceil m/2\rceil$ $\{\Delta_1,\Delta_2\}\in\Gamma$
\FOR{each pair $\{\Delta_1,\Delta_2\}$}
\STATE Perform uniform $\text{crossover}(\Delta_1,\Delta_2)\rightarrow \Delta_1^{\prime},\Delta_2^{\prime}$
\STATE Perform uniform mutation $\Delta_1^{\prime}\rightarrow \tilde{\Delta}_1,\Delta_2^{\prime}\rightarrow \tilde{\Delta}_2$
\STATE Update $\Gamma$ by replacing $\{\Delta_1,\Delta_2\}$ with $\{\tilde{\Delta}_1,\tilde{\Delta}_2\}$
\ENDFOR
\ENDFOR
\STATE Choose $\Delta^*=\arg\max_{\Delta\in\Gamma}\,\Phi(\boldsymbol{\Delta})$
\ENSURE $\Delta^*$
\end{algorithmic}
\end{algorithm}
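The following is a minimal, self-contained sketch of the genetic search in Algorithm~\ref{alg:IEO}. It treats the fitness $\Phi(\Delta)=\sum_{i}\varphi(z_i,\delta_i)$ as a black-box callable; the population size, the truncation selection, the uniform crossover (used here in place of the simulated binary crossover mentioned above) and the Gaussian mutation scale are illustrative choices rather than the exact settings used in our experiments.
\begin{verbatim}
import numpy as np

def ge_sio(fitness, n, pop_size=50, generations=100,
           delta_range=(0.0, 5.0), seed=0):
    """Sketch of Algorithm 2: maximize fitness(Delta) over n-dimensional Delta."""
    rng = np.random.default_rng(seed)
    low, high = delta_range
    pop = rng.uniform(low, high, size=(pop_size, n))      # initial population
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Selection: keep the fitter half of the population as parents.
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n) < 0.5                    # uniform crossover
            child = np.where(mask, a, b)
            mutate = rng.random(n) < 0.1                  # mutation
            noise = rng.normal(0.0, 0.1 * (high - low), n)
            child = np.where(mutate, child + noise, child)
            children.append(np.clip(child, low, high))
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]                    # fittest Delta
\end{verbatim}
Here \texttt{fitness} would evaluate $\sum_{i}\varphi(z_i,\delta_i)$ with $\varphi$ computed as in Algorithm~\ref{alg:avg}.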
\section{Experiments and Results}
\label{section:experiment}
In this section, we conduct intensive experiments and compare our methods with state-of-the-art methods on two tasks: average treatment effect estimation and stochastic intervention effect optimization. Recall that the influence-based estimator $\varphi$ depends on the nuisance function of propensity score $p_t$ and outcome $\mu$. We first perform average treatment effect estimation to confirm that $\hat{p}_t$ and $\hat{\mu}$ are unbiased and robust estimators.
Moreover, the stochastic intervention optimization task is carried out to demonstrate the effectiveness of our Ge-SIO, as well as investigate the impact of stochastic parameter $\delta$ on the expected response.
\subsection{Baselines}
\label{sec:base}
We briefly describe the comparison methods which are used in two tasks of treatment effect estimation and stochastic intervention optimization.
\subsubsection{Treatment effect estimation}
We cannot directly evaluate SIE on the estimation of the stochastic intervention effect, because no dataset with ground-truth stochastic counterfactual outcomes is available. In contrast, benchmark datasets having two potential outcomes are available for ATE estimation. Therefore, we perform ATE estimation to evaluate the robustness of $\hat{p}_t$ and $\hat{\mu}$ and thus to indirectly evaluate the performance of SIE.
We use Gradient Boosting Regression with 100 regressors for the potential outcome models $\hat{\mu}$.
We compare our proposed estimator (SIE) with the following baselines including Doubly Robust Leaner~\cite{10.5555/3104482.3104620}, IPWE~\cite{austin2015moving}, BART~\cite{hill2011bayesian}, Causal Forest~\cite{wager2018estimation, athey2019generalized}, TMLE~\cite{gruber2011tmle} and OLS~\cite{goldberger1964econometric}.
Regarding implementation and parameters setup, we adopt Causal Forest~\cite{wager2018estimation, athey2019generalized} with 100 trees, BART \cite{hill2011bayesian} with 50 trees and TMLE~\cite{gruber2011tmle} from the libraries of cforest, pybart and zepid in Python. For Doubly Robust Learner (DR) \cite{10.5555/3104482.3104620}, we use the two implementations, i.e. LinearDR and ForestDR from the package EconML~\cite{econml} with Gradient Boosting Regressor with 100 regressors as the regression model, and Gradient Boosting Classifier with 200 regressors as the propensity score model. Ultimately, we use package DoWhy \cite{sharma2020dowhy} for IPWE \cite{austin2015moving} and OLS.
\subsubsection{Stochastic Intervention Optimization} We compare our proposed method (Ge-SIO) with Separate Model Approach (SMA) with different settings. SMA \cite{zaniewicz2013support,Personalized_Medicine} aims to build two separate regression models for the outcome prediction in the treated and controlled group, respectively. Under the setting of SMA, we apply four well-known models for predicting outcome including Random Forest (SMA-RF) \cite{soltys2015ensemble, grimmer2017estimating}, Gradient Boosting Regressor (SMA-GBR) \cite{friedman2001greedy}, Support Vector Regressor (SMA-SVR) \cite{zaniewicz2013support}, and AdaBoost (SMA-AB) \cite{solomatine2004adaboost}. We also compare the performance of these models with the random policy to justify that optimization algorithms can help to target the potential customers to generate greater revenue.
For the settings of SMA, we use Gradient Boosting Regressor with 1000 regressors, AdaBoost Regression with 50 regressors, and Random Forest Tree Regressor with 100 trees.
\subsection{Datasets}
\texttt{IHDP} \cite{hill2011bayesian} is a standard semi-synthetic dataset constructed from the \emph{Infant Health and Development Program}; it is a popularly used benchmark containing both the factual and counterfactual outcomes. We conduct the experiment on 100 simulations of the \texttt{IHDP} dataset, in which each dataset is divided into a training and a testing set. The training dataset is highly imbalanced, with 139 treated and 608 controlled units out of a total of 747 units, whilst the testing dataset has 75 units. Each unit has 25 covariates representing the individuals' characteristics. The outcomes are their IQ scores at age 3~\cite{dorie2016npci}.
The online promotion dataset (\texttt{OP} dataset) provided by the EconML project \cite{econml} is chosen to evaluate stochastic intervention optimization \footnote{\url{https://msalicedatapublic.blob.core.windows.net/datasets/Pricing/pricing\_sample.csv}}. This dataset consists of 10k records in an online marketing scenario with the treatment of discount price and the outcome of revenue; each record represents a customer with 11 covariates. We split the data into two parts: 80\% for the training and 20\% for the testing set. We run 100 repeated experiments with different random states to ensure the reliability of the model outcomes. With this dataset, we aim to investigate which price policies applied to which customers will result in the greatest generated revenue. We directly model the revenue as the expected response for the uplift modelling algorithm.
\subsection{Evaluation Metrics}
In this section, we briefly describe the two evaluation metrics used for treatment effect estimation and optimization.
Based on Eq.~\eqref{eq:tau}, we define the metric for evaluating the task of treatment effect estimation as the mean absolute error between the estimated and true ATE:
\begin{equation}
\epsilon_{ATE} = |\hat{\tau}_{\text{ATE}} - \tau_{\text{ATE}}|
\end{equation}
Moreover, the main performance metric in the task is the expected value of the response under the proposed treatment, followed by the uplifting models study~\cite{zhao2017uplift, hitsch2018heterogeneous}.
\subsection{Results and Discussions}
In this section, we aim to report the experimental results on 1) how accurately our proposed estimator (SIE) can estimate the average treatment effect; 2) how our optimization algorithm (Ge-SIO) can be used for finding the optimal stochastic intervention in the online promotion application; and 3) what the impacts of data size and stochastic degree are.
\subsubsection{Treatment Effect Estimation}
The results of $\epsilon_{ATE}$ derived from the \texttt{IHDP} dataset with 100 simulations and the \texttt{OP} dataset with 100 repeated experiments are presented in Table~\ref{table:ihdp} and Table~\ref{table:customer}, respectively. As seen clearly, amongst all approaches, our proposed method SIE achieves the best performance on the estimated ATE, while the Doubly Robust Learner performs second best. Particularly, on \texttt{IHDP}, SIE outperforms all other methods on both the training and testing sets. To investigate the impact of the chosen data size on estimation, we also run experiments and plot the performance of the models for different data sizes in Figure~\ref{fig:ihdp}. Notably, SIE consistently produces more accurate average treatment effects than the others as the data size increases. Causal Forest and Doubly Robust Learner also produce very competitive results, whereas the lowest performance belongs to IPWE. Turning to the experimental results on the online promotion dataset in Table~\ref{table:customer}, SIE again performs outstandingly and consistently. Additionally, the Doubly Robust Learner methods are ranked second, while competitive results are recorded with BART. It is also worth noting that although TMLE performs well on the training set, its performance degrades when dealing with out-of-sample data in the testing set. Overall, these results validate that our proposed SIE estimator is effective and performs outstandingly on the small and highly imbalanced dataset (\texttt{IHDP}) as well as on the real-world application dataset (\texttt{OP}).
\begin{table}[!htb]
\centering
\small
\caption{$\epsilon_{ATE}$ on 100 simulations of \texttt{IHDP} for training and testing (lower is better).}
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{Method} &\multicolumn{2}{c}{\texttt{IHDP Dataset} ($\epsilon_{\mathrm{ATE}}\pm\texttt{std}$)}
\\ \cmidrule{2-3}
& Train & Test \\
\hline
OLS & 0.746 $\pm$ 0.140 &1.264 $\pm$ 0.250 \\
BART & 1.087 $\pm$ 0.120 & 2.808 $\pm$ 0.100 \\
Causal Forest & 0.360 $\pm$ 0.050 & 0.883 $\pm$ 0.614\\
TMLE & 0.326 $\pm$ 0.060 & 0.831 $\pm$ 1.750 \\
ForestDRLearner & 1.044 $\pm$ 0.040 & 1.224 $\pm$ 0.080\\
LinearDRLearner & 0.691 $\pm$ 0.080 & 0.797 $\pm$ 0.170 \\
IPWE & 1.701 $\pm$ 0.140 & 5.897 $\pm$ 0.300 \\
\hline
SIE & \textbf{0.284 $\pm$ 0.050} & \textbf{0.424 $\pm$ 0.090 } \\
\hline
\end{tabular}
\label{table:ihdp}
\end{table}
\begin{figure}[!htb]
\centerline{\includegraphics[width=0.46\textwidth]{figure/mae1.png}}
\caption{$\epsilon_{ATE}$ on \texttt{IHDP} under different data sizes}
\label{fig:ihdp}
\end{figure}
\begin{table}[!htb]
\centering
\small
\caption{$\epsilon_{ATE}$ on \texttt{OP} dataset in 100 repeated experiments (lower is better). }
\begin{tabular}{ccc}
\hline
\multirow{2}{*}{Method} &\multicolumn{2}{c}{\texttt{OP Dataset} ($\epsilon_{\mathrm{ATE}}\pm \texttt{std}$)}
\\ \cmidrule{2-3}
& Train & Test \\
\hline
OLS & 5.906 $\pm$ 0.004 & 5.907 $\pm$ 0.000 \\
BART & 0.504 $\pm$ 0.042 & 0.505 $\pm$ 0.043 \\
Causal Forest & 3.520 $\pm$ 0.034 & 3.520 $\pm$ 0.034 \\
TMLE & 0.660 $\pm$ 0.000 & 3.273 $\pm$ 0.000 \\
ForestDRLearner & 0.240 $\pm$ 0.014 & 0.241 $\pm$ 0.013 \\
LinearDRLearner & 0.139 $\pm$ 0.009 & 0.139 $\pm$ 0.008 \\
IPWE & 5.908 $\pm$ 0.004 & 5.908 $\pm$ 0.015 \\
\hline
SIE & \textbf{0.137 $\pm$ 0.000} & \textbf{0.119 $\pm$ 0.000} \\
\hline
\end{tabular}
\label{table:customer}
\end{table}
\subsubsection{Stochastic Intervention Optimization}
For the online promotion scenario, we model the revenue in the dataset as the expected response of each customer under the proposed treatment. Figure~\ref{fig:revenue} presents the revenue of the uplift modelling methods with different data sizes including 1000, 5000 and 10000 records. We set 100 generations for our Ge-SIO. As shown, Ge-SIO generally produces the greatest revenue for all three data sizes, while SMA-ABR achieves the second-best performance with a very competitive result. Moreover, there is no significant difference in the performance of SMA with different settings. In contrast, the lowest revenue is generated by the random stochastic intervention, which fails to choose the target customers to whom the promotion should be provided. The possible reason behind our proposed method's outstanding performance is that instead of extracting an uplift signal like SMA, we directly intervene in the propensity score to produce the best stochastic intervention. From a business point of view, this emphasizes the crucial importance of stochastic intervention optimization in online marketing campaigns.
On the other hand, Figure~\ref{fig:variation} provides information on the expected response for various stochastic degrees $\delta$ on the \texttt{OP} and \texttt{IHDP} datasets with 90\% confidence intervals. More specifically, when the degree $\delta$ increases from 0 to 5, the expected revenue also increases accordingly. The revenue thereafter reaches its highest point and remains nearly stable when $\delta$ is greater than 5. Similarly, the expected IQ score per child in the \texttt{IHDP} dataset follows the same trend: the IQ score climbs gradually as the stochastic degree $\delta$ rises. The plot of the relationship between the expected response and the stochastic degree $\delta$ provides valuable insights into the degree of intervention we should make to achieve the optimal stochastic intervention, which can greatly facilitate the decision-making process.
\begin{figure}[!htb]
\centerline{\includegraphics[width=0.44\textwidth]{figure/expected_response5.png}}
\caption{Expected revenue per customer from \texttt{OP} dataset by different models}
\label{fig:revenue}
\end{figure}
\begin{figure}[!htb]
\centerline{\includegraphics[width=0.5\textwidth, scale=2]{figure/response_delta6.png}}
\caption{(a) Expected revenue per customer from the \texttt{OP} dataset with uniform 90\% confidence. (b) Expected IQ score per child from the \texttt{IHDP} dataset with uniform 90\% confidence}
\label{fig:variation}
\end{figure}
\section{Conclusion}
\label{section:conclusion}
Causal inference increasingly gains attention from both academia and industry as a powerful tool for scenarios where people are interested not only in the treatment effect but also in the optimal intervention for the expected response~\cite{wang2020joint,yin2021leveraging}.
To extend causal inference to stochastic interventions, this paper focuses on dynamic interventions, which are not discussed much in recent studies. In general, the contribution of this study is twofold. Based on the stochastic propensity score, we propose a novel stochastic intervention effect estimator along with a customised genetic algorithm for stochastic intervention optimization. Our method can learn the trajectory of the stochastic intervention effect, providing causal insights for decision-making applications.
Theoretical and numerical results justify that our methods outperform state-of-the-art baselines in both treatment effect estimation and stochastic intervention optimization.
\section*{Acknowledgement}
This work is partially supported by the Australian Research Council (ARC) under Grant No. DP200101374 and LP170100891.
\bibliographystyle{unsrt}
\section{Introduction}
Tracking and imaging a high-speed moving object have great application prospects in military, biomedical, computer vision, and other fields. The blurring caused by motion and the short exposure time caused by high frame rate shooting are the two main reasons that lead to the deterioration of imaging quality for a high-speed moving object. The high-speed camera\cite{kondo2012development} was invented to capture moving objects with a very high frame rate and a relatively high signal-to-noise ratio, but it is expensive and its data flux is very high. Various tracking and imaging methods for moving objects have been proposed based on spatial light modulators (SLM) and single-pixel detectors (SPD) with a wide spectral response\cite{jiang2017adaptive,jiang2018efficient,shi2019fast,zhang2019image,deng2020image,zha2021single,zha2022complementary,yang2022image,zhang2013improving,jiao2019motion,zhang2017fast,xu20181000,jiang2020imaging,hahamovich2021single,jiang2021single,monin2021single,sun2019gradual,yang2020compressive,wu2021fast,sun2022simultaneously,Yang2022Anti-motion,guo2022fast}.
Among these methods, the image-free methods can track a moving object at a very high frame rate. Zhang et al. proposed a real-time image-free tracking method based on the Fourier basis patterns and achieved a temporal resolution of 1666 frames per second (fps) with a 10000$~\mathrm{Hz}$ digital micromirror device (DMD)\cite{zhang2019image}. Deng et al. extended the method to realize three-dimensional trajectory tracking of a fast-moving object at 1666 $~\mathrm{fps}$\cite{deng2020image}. Zha et al. proposed a fast moving object tracking method based on the geometric moment patterns and reached a frame rate of 7400 $~\mathrm{Hz}$\cite{zha2021single}. Zha et al. then proposed a complementary measurement scheme, which increased the frame rate of the method to 11.1$~\mathrm{kHz}$\cite{zha2022complementary}. However, the above methods cannot image the moving object while tracking it at a high frame rate.
Single-pixel imaging (SPI) \cite{edgar2019principles} based on an SPD needs many modulation patterns to image the object, while the operating rate of the SLM used for modulation is limited, which causes a conflict between the sampling time and the reconstructed image quality. For a moving object, the sampling time allocated to a single moving frame is very short, and combining multiple moving frames for calculation will lead to motion blur. To address this problem in SPI, a moving object can be imaged by estimating the moving speed of the object using an algorithm\cite{zhang2013improving,jiao2019motion}, choosing the proper modulation patterns or increasing the speed of the SLM to reduce the sampling time\cite{zhang2017fast,xu20181000,jiang2020imaging,hahamovich2021single,jiang2021single,monin2021single}, and estimating motion information based on low-resolution images\cite{sun2019gradual,yang2020compressive,wu2021fast}. Methods that estimate the motion information of the object have been commonly used to image a moving object in recent years. Zhang et al. proposed a method to image a uniformly moving object by modifying the pattern and velocity parameters during reconstruction\cite{zhang2013improving}. Jiao et al. proposed a method to estimate the motion parameters of the object under the assumption that the type of object motion is known\cite{jiao2019motion}. In addition, many methods have been proposed to obtain the motion information of the object, such as calculating the cross-correlation\cite{sun2019gradual,wu2021fast} or low-order moments\cite{yang2020compressive} of the images, using laterally shifting patterns\cite{sun2022simultaneously}, and projecting two-dimensional projective patterns\cite{Yang2022Anti-motion}. Even so, the frame rate of these methods is far lower than that of the image-free tracking method. Inspired by the above methods, a natural idea for tracking and imaging a moving object emerges: we can first determine the object motion information through the image-free method, and then transform the spatial-coding patterns of the object using this motion information; when the number of coding patterns is sufficient, the image of the moving object can be reconstructed using a compressed sensing\cite{donoho2006compressed,candes2008introduction,duarte2008single} algorithm. A similar idea is applied in the newest research. Guo et al. combined the geometric moment patterns and Hadamard patterns to achieve tracking and imaging of the moving object at a frame rate of 5.55 $~\mathrm{kHz}$\cite{guo2022fast}.
In this paper, we design a new pattern sequence to achieve high frame rate trajectory detection and imaging of a high-speed moving object. Four binary Fourier patterns and two differential Hadamard patterns are used to modulate one frame of the object, and then the modulated light signals are obtained by single-pixel detection. The displacement of the moving object for each moving frame can be determined from these six detection values. Based on the determined displacements and patterns, we can recalculate the reconstruction matrix and reconstruct the moving object image. Using this pattern sequence, the frame rate of tracking a moving object can catch up with that of the image-free method in Ref.[\citenum{zhang2019image}]. The proposed method is verified by both simulations and experiments.
\section{Method}
In Fourier single-pixel imaging (FSPI)\cite{zhang2015single}, the spatial information of the object is encoded by the SLM using Fourier basis patterns, and the series of modulated total light intensities is detected by an SPD. The required Fourier basis patterns are usually described by a pair of spatial frequencies and an initial phase. A Fourier basis pattern $P(x,y)$ can be represented by its corresponding spatial frequency pair $(f_x,f_y)$ and corresponding initial phase $\varphi_0$:
\begin{equation}
\label{eq:fs}
P\left(x, y \mid f_{x}, f_{y}, \varphi_{0}\right)=a_0+b_0 \cos \left[2 \pi\left(f_{x} x+f_{y} y\right)+\varphi_{0}\right],
\end{equation}
where $a_0$ is the average intensity of the Fourier basis pattern, $b_0$ is the contrast of the basis pattern, and $(x, y)$ corresponds to the two-dimensional spatial coordinate of the basis pattern. The modulated total light intensity $I$ can be obtained using the above Fourier basis patterns to modulate the illumination light or the detection area:
\begin{equation}
\label{eq:fimg}
I =\sum\nolimits_{x,y} O(x, y)P(x, y), \\
\end{equation}
where $O(x, y)$ is the object image. Based on the linear response of the single pixel detector to the light intensity within its effective detection range, the modulated light intensity can be replaced by the measured value of the SPD. The Fourier coefficients of the corresponding Fourier domain are obtained by these measured values. Four-step phase shift method and three-step phase shift method are two commonly used methods to obtain Fourier coefficients in FSPI\cite{zhang2017fast}. The four-step phase shift method requires four Fourier basis patterns with the same spatial frequency but different phases to obtain a Fourier coefficient. These four patterns are denoted as $P(f_x,f_y,0)$, $P(f_x,f_y,\pi/2)$, $P(f_x,f_y,\pi)$, and $P(f_x,f_y,3\pi/2)$, respectively. The corresponding single-pixel detection values are denoted as $I_0$, $I_{\pi/2}$, $I_{\pi}$, and $I_{3\pi/2}$, respectively. Then the corresponding Fourier coefficient is given by Eq.(\ref{eq:4step}):
\begin{equation}
\widetilde{O}(f_x,f_y)=(I_{\pi}-I_{0})+j(I_{3\pi/2}-I_{\pi/2}),
\label{eq:4step}
\end{equation}
It should be noted that the pattern $P(f_x, f_y, 0)$ is the inverse of the pattern $P(f_x, f_y, \pi)$, and $P(f_x, f_y, \pi/2)$ is the inverse of the pattern $P(f_x, f_y, 3\pi/2)$. Similarly, the three-step phase shift method requires three Fourier basis patterns with the same spatial frequency but different initial phases. These three patterns are respectively denoted as $P(f_x,f_y,0)$, $P(f_x,f_y,2\pi/3)$, and $P(f_x,f_y,4\pi/3)$. The corresponding single-pixel detection values are denoted as $I_0$, $I_{2\pi/3}$, and $I_{4\pi/3}$, and the corresponding Fourier coefficient is given by Eq.(\ref{eq:3step}):
\begin{equation}
\widetilde{O}(f_x,f_y)=(2I_0-I_{2\pi/3}-I_{4\pi/3})+\sqrt{3}j(I_{2\pi/3}-I_{4\pi/3}),
\label{eq:3step}
\end{equation}
The commonly used spatial light modulator in SPI is the DMD. These Fourier basis patterns are grayscale and cannot be directly loaded onto the DMD. Binarization is usually required when these patterns are used for modulation. The grayscale patterns can be binarized using the upsampling scheme\cite{zhang2017fast} and the Floyd-Steinberg dithering method\cite{floyd1976adaptive}. After binarization, the pattern $P(f_x, f_y, 0)$ plus the pattern $P(f_x, f_y, \pi)$ equals the all-one pattern; the pattern $P(f_x, f_y, \pi/2)$ plus the pattern $P(f_x, f_y, 3\pi/2)$ equals the all-one pattern, too.
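As an illustration of this step, the sketch below generates a grayscale Fourier basis pattern according to Eq.~\eqref{eq:fs} and binarizes it with Floyd-Steinberg error diffusion; the resolution, $a_0$, $b_0$ and the threshold are illustrative values, and the upsampling step of the dithering scheme in \cite{zhang2017fast} is omitted.
\begin{verbatim}
import numpy as np

def fourier_pattern(m, fx, fy, phi0, a0=0.5, b0=0.5):
    """Grayscale Fourier basis pattern P(x, y | fx, fy, phi0) on an m x m grid."""
    x, y = np.meshgrid(np.arange(m), np.arange(m), indexing="xy")
    return a0 + b0 * np.cos(2 * np.pi * (fx * x + fy * y) + phi0)

def floyd_steinberg(img):
    """Binarize a grayscale image in [0, 1] by Floyd-Steinberg error diffusion."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = 1.0 if img[i, j] >= 0.5 else 0.0
            err = img[i, j] - out[i, j]
            if j + 1 < w:
                img[i, j + 1] += err * 7 / 16
            if i + 1 < h and j > 0:
                img[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:
                img[i + 1, j] += err * 5 / 16
            if i + 1 < h and j + 1 < w:
                img[i + 1, j + 1] += err * 1 / 16
    return out

# Binary pattern P(fx, 0, 0) at 64 x 64 resolution with fx = 2/64.
binary_pattern = floyd_steinberg(fourier_pattern(64, 2 / 64, 0, 0))
\end{verbatim}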
In the Fourier transform, all points in the spatial domain contribute to each coefficient in the Fourier domain, and as a result a displacement in the spatial domain will directly affect the Fourier coefficients. Based on the linear phase shift property of the Fourier transform, Zhang et al. proposed a method to detect the object motion trajectory using two Fourier coefficients for each frame\cite{zhang2019image}. The specific principle is that the displacement $(\Delta x,\Delta y)$ of the object image $O(x,y)$ in the spatial domain will lead to the phase shift $(-2\pi f_x\Delta x,-2\pi f_y\Delta y)$ in the Fourier domain, which can be expressed as:
\begin{equation}
\label{eq:cmpxy}
O\left(x-\Delta x, y-\Delta y\right)=F^{-1}\left\{\tilde{O}\left(f_{x}, f_{y}\right) \exp \left[-j 2 \pi\left(f_{x} \Delta x+f_{y} \Delta y\right)\right]\right\},
\end{equation}
where $(f_x,f_y)$ is the spatial frequency coordinate in the Fourier domain, $\widetilde{O}(f_x,f_y)$ is the Fourier spectrum of the original image $O(x,y)$, and $F^{-1}$ represents the inverse Fourier transform. The trajectory of the object can be calculated by measuring the phase shift term $\varphi=-2\pi(f_x \Delta x+f_y \Delta y)$ of each frame. The displacements $\Delta x$ and $\Delta y$ are finally calculated from $\widetilde{O}(f_x,0)$ and $\widetilde{O}(0,f_y)$ of each frame\cite{zhang2019image}:
\begin{equation}
\label{eq:cmpdx}
\begin{aligned}
&\Delta x=-\frac{1}{2 \pi f_{x}} \cdot \arg \left\{\left[\tilde{O}\left(f_{x}, 0\right) - \tilde{O}_{\mathrm{bg}}\left(f_{x}, 0\right)\right]\right\}, \\
&\Delta y=-\frac{1}{2 \pi f_{y}} \cdot \arg \left\{\left[\tilde{O}\left(0, f_{y}\right) - \tilde{O}_{\mathrm{bg}}\left(0, f_{y}\right)\right]\right\},
\end{aligned}
\end{equation}
where $\arg\{\cdot\}$ denotes the argument operation, $\tilde{O}_{\mathrm{bg}}\left(f_{x}, 0\right)$ and $\tilde{O}_{\mathrm{bg}}\left(0, f_{y}\right)$ represent the two Fourier coefficients obtained at the initial position before the object starts moving, and $\widetilde{O}(f_x,0)$ and $\widetilde{O}(0,f_y)$ represent the two Fourier coefficients obtained at the current frame. It has been verified in Ref.[\citenum{zhang2019image}] that six binary Fourier basis patterns per frame, combined with the three-step phase-shift method, can realize real-time tracking of moving-object trajectories.
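In practice, Eq.(\ref{eq:cmpdx}) amounts to two argument operations per frame. A minimal sketch (variable names are ours, chosen for illustration):
\begin{verbatim}
import numpy as np

def frame_displacement(O_fx0, O_0fy, O_bg_fx0, O_bg_0fy, fx, fy):
    # Eq. (cmpdx): displacement from the phase of the background-subtracted coefficients
    dx = -np.angle(O_fx0 - O_bg_fx0) / (2 * np.pi * fx)
    dy = -np.angle(O_0fy - O_bg_0fy) / (2 * np.pi * fy)
    return dx, dy
\end{verbatim}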
As shown in Fig.\ref{fgr:fig1}, the proposed pattern sequence consists of six patterns for each frame.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{Figure.1.pdf}
\caption{Schematic diagram of pattern design. Each motion frame corresponds to six patterns, of which four are binary Fourier basis patterns and two are differential Hadamard basis patterns. The Fourier patterns of all frames are the same, and the corresponding phases are $0$ and $\pi/2$, respectively. The Hadamard patterns corresponding to different motion frames are sorted according to the TV ordering method\cite{yu2020super}.}
\label{fgr:fig1}
\end{figure}
Two differential Hadamard basis patterns that encode the object's spatial information are embedded with every four Fourier basis patterns in each frame, which allows the method to achieve the high frame rate of Ref.[\citenum{zhang2019image}] while also imaging the moving object. The first four binary Fourier basis patterns are the same for all motion frames and correspond to the Fourier basis patterns $P(f_x,0,0)$, $P(f_x,0,\pi/2)$, $P(0,f_y,0)$, and $P(0,f_y,\pi/2)$. The spatial frequencies $f_x$ and $f_y$ in this work are both $2/m$, where $m \times m$ is the spatial resolution of the image. The two differential Hadamard patterns $H_+^k$ and $H_-^k$ are calculated from the $k^{th}$ Hadamard pattern $H^k$:
\begin{equation}
\begin{aligned}
&H_+^k=(H^k+1)/2, \\
&H_-^k=(1-H^k)/2,
\end{aligned}
\end{equation}
where $H_+^k$ plus $H_-^k$ again equals the all-one pattern. The Hadamard patterns differ between motion frames; each Hadamard pattern is selected according to the total variation (TV) sorting method\cite{yu2020super}. For the six patterns of each frame, the corresponding single-pixel detection values are $I_{x0}$, $I_{x\pi/2}$, $I_{y0}$, $I_{y\pi/2}$, $I_{H+}$, and $I_{H-}$, respectively. Based on the four-step phase-shift method, the two Fourier coefficients are calculated by:
\begin{equation}
\begin{aligned}
&\widetilde{O}(f_x,0)=(I_{H+}+I_{H-}-2I_{x0})+j(I_{H+}+I_{H-}-2I_{x\pi/2}), \\
&\widetilde{O}(0,f_y)=(I_{H+}+I_{H-}-2I_{y0})+j(I_{H+}+I_{H-}-2I_{y\pi/2}),
\end{aligned}
\end{equation}
and the Hadamard coefficient can be calculated by:
\begin{equation}
I_H=I_{H+}-I_{H-}.
\end{equation}
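Per frame, the six detection values thus collapse into three quantities. A short sketch of this bookkeeping (names are illustrative):
\begin{verbatim}
def process_frame(I_x0, I_xpi2, I_y0, I_ypi2, I_Hp, I_Hm):
    # H+ plus H- is the all-one pattern, so their sum replaces the missing
    # pi and 3*pi/2 Fourier measurements of the four-step method
    total = I_Hp + I_Hm
    O_fx0 = (total - 2 * I_x0) + 1j * (total - 2 * I_xpi2)
    O_0fy = (total - 2 * I_y0) + 1j * (total - 2 * I_ypi2)
    I_H = I_Hp - I_Hm          # differential Hadamard value used for imaging
    return O_fx0, O_0fy, I_H
\end{verbatim}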
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Figure.2.pdf}
\caption{Schematic diagram of modulating pattern transformation. (a) Initial position of the object; (b) Position of the object after movement; (c) Modulating pattern of the object after movement; (d) Transformed modulation pattern.}
\label{fgr:fig2}
\end{figure}
The displacement of the object can be calculated by Eq.(\ref{eq:cmpdx}), which determines the displacement of the current frame. At the same time, four binary Fourier basis patterns, one Hadamard basis pattern, and the corresponding five single-pixel detection values are obtained for each frame and can be used in the final imaging procedure. The displacement of the object during the pattern modulation is equivalent to the object being static while the patterns move in the direction opposite to the object motion; the object image can therefore be reconstructed from the recorded single-pixel values and the transformed patterns, provided that enough transformed patterns are available. The process of transforming the patterns is shown in Fig.\ref{fgr:fig2}. The total variation augmented Lagrangian alternating direction algorithm (TVAL3)\cite{li2013efficient} is an efficient and widely used compressed sensing\cite{donoho2006compressed,candes2008introduction,duarte2008single} algorithm. The TVAL3 solver can be adopted to reconstruct the object from the transformed pattern sequence and the corresponding single-pixel detection values.
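The pattern-transformation step of Fig.\ref{fgr:fig2} can be sketched as a shift of each modulation pattern opposite to the measured displacement. The snippet below is only a simplified illustration: it assumes integer-pixel displacements and uses a circular shift, and the subsequent TVAL3 reconstruction is not reproduced:
\begin{verbatim}
import numpy as np

def transform_pattern(pattern, dx, dy):
    # Shift the pattern opposite to the object motion so that the object
    # can be treated as static during reconstruction (cf. Fig. 2)
    return np.roll(pattern, shift=(-int(round(dy)), -int(round(dx))), axis=(0, 1))
\end{verbatim}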
\section{Results and discussion}
\subsection{Simulations}
In the simulation, an object with two trajectories is used to verify the proposed method. The object image is shown in Fig.\ref{fgr:fig3}(a), and the resolution is $128 \times 128$ pixels. The total number of moving frames is 1666, and each frame corresponds to six patterns, meaning that a total of 9996 patterns are used in the simulation. As the two fastest image-free tracking methods, Zhang et al.'s method using binary Fourier patterns\cite{zhang2019image} and Zha et al.'s method using geometric-moment patterns\cite{zha2021single} are also simulated for comparison. The total number of patterns for Zhang et al.'s method\cite{zhang2019image} is also 9996, while 4998 patterns are used for Zha et al.'s method\cite{zha2021single}, corresponding to the same 1666 moving frames. For the conventional single-pixel method, Hadamard patterns are selected to modulate and reconstruct the image of the moving object, and 9996 differential Hadamard patterns are employed in TV order\cite{yu2020super} for better quality. Gaussian white noise with $\sigma=0.1$ is added to the measurements to simulate realistic noisy experiments.
Mean square error (MSE) is introduced to evaluate the accuracy of the reconstructed trajectory. The MSE between the reconstructed trajectory coordinates $Y$ and the original coordinates $X$ is defined as follows:
\begin{equation}
MSE(X,Y)=\frac{1}{n}\sum_{i=1}^{n}(X_i-Y_i)^2,
\end{equation}
where $n$ is the total number of frames. The smaller the MSE, the closer the reconstructed trajectory is to the original trajectory. The peak signal-to-noise ratio (PSNR) and correlation coefficient (CC) are introduced to evaluate the quality of the reconstructed images. The PSNR between the original image $x$ and the reconstructed image $y$ is defined as follows:
\begin{equation}
PSNR\left(x,y\right)=10\log_{10} \frac{peakval^{2}}{MSE\left(x,y\right)},
\label{eqn:psnr}
\end{equation}
where $MSE(x, y)$ is the mean square error between $x$ and $y$, and $peakval$ is the maximum value of the image data type. The higher the PSNR value, the higher the reconstruction quality. The CC between the original image $x$ and the reconstructed image $y$ is defined as follows:
\begin{equation}
CC\left(x,y\right)=\frac{cov\left(x,y\right)}{\sigma_x \sigma_y},
\label{eqn:cc}
\end{equation}
where $\mathrm{cov}\left(x,y\right)$ is the covariance of $x$ and $y$, $\sigma_x$ is the standard deviation of $x$, and $\sigma_y$ is the standard deviation of $y$. The value range of CC is $[-1, 1]$. The larger the CC value, the higher the correlation between the two images.
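For reference, the three metrics can be implemented directly; a base-10 logarithm is assumed for the PSNR, and NumPy's correlation coefficient is used for the CC:
\begin{verbatim}
import numpy as np

def mse(x, y):
    return np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)

def psnr(x, y, peakval=1.0):
    return 10 * np.log10(peakval ** 2 / mse(x, y))

def cc(x, y):
    return np.corrcoef(np.ravel(x), np.ravel(y))[0, 1]
\end{verbatim}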
The simulations with different noise realizations were repeated five times. Figure~\ref{fgr:fig3} shows the results of the corresponding methods. The comparisons between the trajectories reconstructed by the three methods and the original trajectories are shown in Fig.\ref{fgr:fig3}(b) and Fig.\ref{fgr:fig3}(c).
For the type-\uppercase\expandafter{\romannumeral1} trajectory in Fig.\ref{fgr:fig3}(b),
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{Figure.3.pdf}
\caption{Simulation results of one object with two trajectories. (a) Simulated object; (b) The type-\uppercase\expandafter{\romannumeral1} trajectories reconstructed by the three methods; (c) The type-\uppercase\expandafter{\romannumeral2} trajectories reconstructed by the three methods. (d) and (f) are the reconstructed images by the conventional SPI method. (e) and (g) are the reconstructed images by the proposed method.}
\label{fgr:fig3}
\end{figure}
\begin{table}[htb]
\small
\centering
\caption{Comparisons of reconstructed trajectories and images using different methods }
\label{tbl:noise_sim}
\begin{tabular*}{0.9\textwidth}{@{\extracolsep{\fill}}ccccc}
\hline
\multirow{2}*{Trajectory}& \multirow{2}*{Method}&{Reconstructed trajectory}&\multicolumn{2}{c}{Reconstructed image} \\ \cline{3-5}
& & {MSE} & {PSNR} & {CC} \\ \hline
\multirow{4}*{Type-\uppercase\expandafter{\romannumeral1}} &{Conventional SPI} &{N/A} &16.5285 &0.7278 \\
& {Zha et al.} & 4.0643& {N/A} &{N/A} \\
& {Zhang et al.} & \textbf{0.0216} & {N/A} &{N/A} \\
& {Our method} & 0.3653 &\textbf{25.5548} & \textbf{0.9792}\\ \hline
\multirow{4}*{Type-\uppercase\expandafter{\romannumeral2}} &{Conventional SPI} & {N/A} &16.1369& 0.7000 \\
& {Zha et al.} & 4.1906& {N/A} &{N/A}\\
&{ Zhang et al.} & \textbf{0.0212} & {N/A} &{N/A}\\
& {Our method} & 0.2384 &\textbf{26.3454} & \textbf{0.9834} \\ \hline
\end{tabular*}
\end{table}
the images reconstructed by conventional SPI and by the proposed method are shown in Fig.\ref{fgr:fig3}(d) and Fig.\ref{fgr:fig3}(e), respectively. For the type-\uppercase\expandafter{\romannumeral2} trajectory in Fig.\ref{fgr:fig3}(c), the images reconstructed by conventional SPI and by the proposed method are shown in Fig.\ref{fgr:fig3}(f) and Fig.\ref{fgr:fig3}(g). Table~\ref{tbl:noise_sim} lists the mean MSEs, PSNRs, and CCs of these methods. It can be seen from these results that both Zhang et al.'s method\cite{zhang2019image} and the proposed method reconstruct the trajectories of the moving object well in the presence of noise; Zhang et al.'s method\cite{zhang2019image} is the most accurate, followed by the proposed method, while Zha et al.'s method\cite{zha2021single} shows a large deviation in the reconstructed trajectories, indicating that the geometric-moment patterns are more sensitive to noise. The images reconstructed by the conventional SPI method are blurred and degraded because the object moves during the measurement process, whereas the proposed method reconstructs the images of the moving object well.
The influence of the number of moving frames on the imaging quality should also be considered, which is done by reconstructing the image from a sequentially reduced number of frames. The images reconstructed for motion along the Type-\uppercase\expandafter{\romannumeral2} trajectory were studied in this simulation. Twenty groups of data were selected, containing 5\%, 10\%, 15\%,..., 95\%, and 100\% of the total 1666 frames, respectively, and used to calculate the reconstructed images. The corresponding PSNRs and CCs are shown in Fig.\ref{fgr:fig4}(a).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{Figure.4.pdf}
\caption{The influence of the number of moving frames measured by the proposed method on the imaging quality. (a) The PSNRs and CCs of the reconstructed images using different numbers of frames ; (b-e) the reconstructed images using 166, 583, 999, and 1416 frames, respectively.}
\label{fgr:fig4}
\end{figure}
Figure~\ref{fgr:fig4}(b-e) shows the reconstructed images with frame numbers of 166, 583, 999, and 1416, respectively. The results indicate that, even in the presence of noise, a moving-object image of good quality can be reconstructed once the number of sampled frames exceeds a certain value (e.g., 333 frames).
\subsection{Experiments}
The proposed method is further verified through experiments. The experimental system consists of a light-emitting diode (LED) source with a maximum power of 5$~\mathrm{W}$, a linear motorized stage (GCD-302003M, Daheng Optics), a digital micromirror device (DMD, Texas Instruments DLP7000), a photomultiplier tube (PMT, H10682-01, Hamamatsu Photonics), and a data acquisition board, as shown in Fig.\ref{fgr:fig5}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{Figure.5.pdf}
\caption{Experimental setup. A motorized stage moves the object in one direction. The transmitted object is imaged on the digital micromirror device (DMD) after being illuminated by the light-emitting diode (LED) source. After being modulated by the DMD, the modulated light is collected into the photomultiplier tube (PMT) and converted into measurement values through the data acquisition board. NDF: neutral density filter.}
\label{fgr:fig5}
\end{figure}
The modulation patterns are preloaded into the DMD for modulation. The transmitted object is imaged onto the DMD after being illuminated by the light source. After modulation by the DMD, the light is collected by the PMT and converted into measurement values through the data acquisition board. In this section, two groups of experiments are presented. In the first group, a linear motorized stage with a maximum speed of 9.87$~\mathrm{mm/s}$ was used to study objects moving at a constant speed. In the second group, an accelerated-motion experiment was realized with a mounting platform that slid along a rail; the mounting platform was connected to a suspended weight using a pulley. The DMD operated at a high refresh rate of 20000$~\mathrm{Hz}$. The pixel size of the modulation patterns on the DMD was $256 \times 256$ pixels, and every $2 \times 2$ pixels were merged into one super pixel, so the image size of the moving object was $128 \times 128$ pixels. The Fourier basis patterns used in the method were binarized by the Floyd-Steinberg dithering algorithm\cite{floyd1976adaptive} with an upsampling rate of one. In each group of experiments, a total of 9996 patterns were used, corresponding to 1666 frames, and the total measurement time was 0.4998$~\mathrm{s}$ at a modulation rate of 20000$~\mathrm{Hz}$.
\subsubsection{Moving objects at a constant speed}
In the experiment in this section, the moving objects were a transmitted object "B" with a size of $2.5~\mathrm{mm} \times 3.2 ~\mathrm{mm}$ and a transmitted object "star" with a size of $10~\mathrm{mm} \times 10~\mathrm{mm}$,
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{Figure.6.pdf}
\caption{The imaging and tracking results of two moving objects. (a) and (e) are the ground-truth images. (b) and (f) are the moving objects reconstructed by conventional SPI. (c) and (g) are the moving objects reconstructed by the proposed method using raw data. (d) and (h) are the moving objects reconstructed by the proposed method using filtered data. (i) and (j) are comparisons of the real and calculated trajectories.}
\label{fgr:fig6}
\end{figure}
and the spatial resolution of both object images was $128 \times 128$ pixels. Both objects moved linearly along the diagonal of the image at the maximum speed of the linear motorized stage, 9.87$~\mathrm{mm/s}$. The ground-truth images were obtained by imaging the static objects using the conventional SPI method, as shown in Fig.\ref{fgr:fig6}(a) and Fig.\ref{fgr:fig6}(e). To obtain a clear static image, the DMD was operated at a modulation rate of 1000$~\mathrm{Hz}$, and a total of 9996 differential Hadamard patterns were used, corresponding to a sampling ratio of 30.51\%. The images of the moving objects reconstructed using the conventional SPI method are shown in Fig.\ref{fgr:fig6}(b) and Fig.\ref{fgr:fig6}(f); they are blurred because of the motion.
\begin{table}[htb]
\small
\centering
\caption{The quality of reconstructed trajectories and images without/with a filter}
\label{tbl:exp}
\begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}ccccc}
\hline
\multirow{2}*{Object}& \multirow{2}*{Data}&{Reconstructed trajectory}&\multicolumn{2}{c}{Reconstructed image} \\ \cline{3-5}
& & {MSE} & {PSNR} & {CC} \\ \hline
\multirow{2}*{"B"} &{Raw data} &{5.6002} &29.4822 &0.9303 \\
& {Filtered data} & \textbf{4.1789} &\textbf{29.6327} & \textbf{0.9432}\\ \hline
\multirow{2}*{"Star"} &{Raw data} &{5.3944} &28.2537 &0.7837 \\
& {Filtered data} & \textbf{2.9082} &\textbf{29.5700} & \textbf{0.8165}\\ \hline
\end{tabular*}
\end{table}
Two good-quality images of the moving objects were obtained by the proposed method, as shown in Fig.\ref{fgr:fig6}(c) and Fig.\ref{fgr:fig6}(g). Figures~\ref{fgr:fig6}(i) and \ref{fgr:fig6}(j) compare the real trajectories with the trajectories calculated by the proposed method. The real trajectory of each object was determined by imaging the static object along the displacement axis of the motorized stage; five static images reconstructed by the conventional SPI method were combined to obtain the real trajectory. The quality of the reconstructed trajectories and images improves when a fifth-order mean filter is applied to the calculated Fourier coefficients, as shown in Tab.\ref{tbl:exp}, and the images reconstructed from the filtered data are shown in Fig.\ref{fgr:fig6}(d) and Fig.\ref{fgr:fig6}(h).
\subsubsection{The acceleration movement experiment}
The moving object "star" was selected to perform an acceleration movement experiment. The object was placed on a mounting platform that slid along a rail and connected by a horizontal thin rope to a fixed pulley and a weight of appropriate size. By adjusting the weight, the transmission object could be accelerated along the linear guide rail from a standstill. When the object started to move, the detection module began counting until all the patterns of 9996 frames were loaded. The average speed of the moving object was 55.7$~\mathrm{mm/s}$. The reconstructed trajectory using Zhang et al.'s method\cite{zhang2019image} is also plotted in Fig.\ref{fgr:fig7}(a) for comparison. It could be seen from Fig.\ref{fgr:fig7}(a) that Zhang et al.'s method \cite{zhang2019image} and the proposed method reconstruct the same trajectories of the moving object in the experiment, among which the trajectory using filtered data had a smaller deviation. The trajectory of the object in the X and Y direction indicates the characteristics of the object's accelerated motion, as shown in Fig.\ref{fgr:fig7}(b). The reconstructed images using the conventional SPI method is shown in Fig.\ref{fgr:fig7}(c), which get more blurred than the above uniform motion experiment. Figure.\ref{fgr:fig6}(d) shows the reconstructed image by the proposed method using raw data. Figure.\ref{fgr:fig6}(e) is the reconstructed image by proposed method using filtered data.
The influence of the number of measured moving frames on the imaging quality was also studied in this experiment. Twenty groups of measured data, containing 5\%, 10\%, 15\%,..., 95\%, and 100\% of the total 1666 frames, respectively, were selected to calculate the reconstructed images. The corresponding PSNRs and CCs are shown in Fig.\ref{fgr:fig8}(a). The reconstructed images
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth]{Figure.7.pdf}
\caption{Experimental results of an accelerating moving object. (a) Comparison of the calculated trajectories; (b) the trajectory of the object in the X and Y directions; (c) the moving object reconstructed by conventional SPI; (d) the moving object reconstructed by the proposed method using raw data; (e) the moving object reconstructed by the proposed method using filtered data.}
\label{fgr:fig7}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{Figure.8.pdf}
\caption{Experiment on the contribution of the number of measured moving frames to reconstructed imaging quality. (a) The PSNRs and CCs of the reconstructed images using different numbers of frames; (b-e) the reconstructed images using 166, 583, 999, and 1416 frames, respectively.}
\label{fgr:fig8}
\end{figure}
with frame numbers of 166, 583, 999, and 1416 are shown in Fig.\ref{fgr:fig8}(b-e). These experimental results are similar to those obtained in the simulation.
\subsection{Discussion}
The key to the proposed method is the use of the object motion information determined by the image-free tracking method. The higher the frame rate and the more accurate the calculated trajectory, the more accurate the reconstructed image of the moving object. In the experiment, six modulation patterns were used for each motion frame. Combining this with a DMD refresh rate of 20000$~\mathrm{Hz}$, we achieved tracking at a frame rate of 3332$~\mathrm{Hz}$ and finally imaged the moving object. A higher frame rate can be achieved by using an SLM with a higher refresh rate. Although a higher tracking frame rate can be achieved with geometric-moment patterns, the simulation indicates that the binary geometric-moment patterns\cite{zha2021single} require a higher upsampling rate for binarization and lack differential measurement, which leads to a loss of spatial resolution and worse robustness to noise compared with the Fourier patterns. The four-step phase-shift method usually has better noise resistance than the three-step phase-shift method for obtaining a Fourier coefficient\cite{zhang2017hadamard}, but the variance of the trajectories reconstructed by our method is slightly larger than that of Zhang et al.'s method\cite{zhang2019image}. The main reason is that six measured values are used to calculate the two Fourier coefficients rather than the eight values required by the standard four-step phase-shift method. The median filter is used to reduce the fluctuation of the reconstructed trajectory and obtain better images. The simulated and experimental results indicate that, as the number of measured frames increases, the quality of the reconstructed image first rises sharply and then tends to be stable. We also acknowledge that the proposed method has several limitations. Firstly, it can only track and image a translational object against a simple background in the field of view. Secondly, the image cannot be reconstructed in real time because of the iterative algorithm. Thirdly, the method cannot reconstruct the image if the object moves too fast to obtain enough useful measurements. Addressing these limitations will be the subject of future work.
\section{Conclusion}
A single-pixel detection method for object tracking and imaging is proposed in this paper. The displacement of a moving object in each frame can be determined from four Fourier patterns plus two differential Hadamard patterns. Based on the determined displacements and patterns, we can recalculate the reconstruction matrix and reconstruct the image of the moving object. This method does not need any prior knowledge of the object or its motion. It has been verified by simulations and experiments, achieving a frame rate of 3332$~\mathrm{Hz}$ at a spatial resolution of $128 \times 128$ pixels using a 20000$~\mathrm{Hz}$ DMD. Future work will focus on extending the approach to the imaging of rotating objects and on accelerating the reconstruction process to finally realize real-time imaging.
\begin{backmatter}
\bmsection{Funding}
\bmsection{Disclosures}
The authors declare no conflicts of interest.
\bmsection{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
\end{backmatter}
\subsection*{Introduction}
Non-Hermitian singularities arise in multivalued complex functions \cite{Needham-VCA,Ablowiz-CV} as points where the Taylor series expansion fails. In the context of non-Hermitian Hamiltonians, these points, commonly referred to as exceptional points (EPs), feature special degeneracies where two or more eigenvalues, along with their associated eigenfunctions, become identical \cite{Heiss2004JPA,Muller2008JPA}. An EP of order $N$ (EPN) is formed by $N$ coalescing eigenstates. Recently, the exotic features of EPs have been the subject of intense study \cite{El-Ganainy2007OL,El-Ganainy2008PRL-O,El-Ganainy2008PRL-B,El-Ganainy2010NP}, with various potential applications in laser science \cite{Hodaei2014S,Feng2014S, El-Ganainy2015PRA,Teimourpour2016SR}, optical sensing \cite{Wiersig2014PRL, Hodaei2017N,Chen2017N}, photon transport engineering \cite{Lin2011PRL,Zhu2013OL} and nonlinear optics \cite{El-Ganainy2015OL,Zhong2016NJP}, to mention just a few examples. For recent reviews, see Refs. \cite{El-Ganainy2018NP,Feng2017NP}.
Very often, EPs are points of measure zero in the eigenspectra of non-Hermitian Hamiltonians, which makes them very difficult to access, even with careful engineering. Yet, their effect can still be felt globally. Particularly, an intriguing aspect of non-Hermitian systems is the eigenstate exchange along loops that trace closed trajectories around EPs. In this regard, stroboscopic encircling of an EP2 has been studied theoretically \cite{Heiss1999EPD,Cartarius2007PRL} and demonstrated experimentally in various platforms such as microwave resonators \cite{Dembowski2001PRL,Dietz2011PRL} and exciton-polariton setups \cite{Gao2015N}. Complementary to these efforts, the dynamic encircling of EPs was shown to violate the standard adiabatic approximation \cite{Raam2011JPA,Berry2011JPA,Berry2011JO,Hassan2017PRL}. These predictions were recently confirmed experimentally using microwave waveguide platforms \cite{Doppler2016N} and optomechanical systems \cite{Xu2016N}.
Notably, the aforementioned studies focused only on systems having a single EP of order two. Richer scenarios involving multiple and/or higher-order EPs have been largely neglected, with rare exceptions that treated special systems (admitting simple analytical solutions) on a case-by-case basis \cite{Ryu2012PRA, Demange2012JPA}. This gap in the literature is probably due to the complexity of the general problem and its perceived experimental irrelevance. However, recent progress in experimental activities that explore the physics of non-Hermitian systems is quickly changing the research landscape, and controlled experiments that probe more complicated structures with multiple EPs will soon be within reach. These developments call for a general approach that can provide a deeper theoretical insight into these complex systems.
In this work, we bridge this gap by introducing a general formalism for treating the eigenstate exchange along arbitrary loops enclosing multiple EPs. More specifically, our approach utilizes the power of group theory together with group representations to decompose the final action of any loop into more elementary exchange processes across the relevant branch cuts (BCs). This formalism simplifies the analysis significantly, which in turn allows us to gain insight into the problem at hand and unravel a number of intriguing results: (1) Trajectories that encircle the same EPs starting from the same initial point and having the same direction do not necessarily lead to an identical exchange between the eigenstates; (2) Establishing such equivalence between loops (i.e., the same eigenstate exchange) is guaranteed only by invoking the topological notion of homotopy. As a bonus, our approach can also paint a qualitative picture of the dynamical properties of the system.
\subsection*{General Formalism for Encircling Multiple Exceptional Points}
Before we start our analysis, we first describe the simple case of an EP2. These are special points associated with the multivalued square root function in the complex plane. The Riemann surface of this function is shown in the top panel of Fig. \ref{FigEPs}(a). Clearly, as two parameters are varied in the complex plane to trace a closed loop, the initial point on the surface ends up on a different sheet. This process can also be viewed by considering the projection onto the complex plane after adding a BC (lower panel). As we mentioned before, this simple scenario has been studied in the literature in both the stroboscopic and dynamical cases. Consider, however, what happens in more complex situations where there is more than one EP. For instance, Fig. \ref{FigEPs}(b) depicts a case with three EPs. One can immediately see that this scenario exhibits an additional complexity that is absent from the previous case. Namely, there are now different ways of encircling the same EPs (as shown by the solid and dashed loops in the figure). This in turn raises the question as to whether these loops lead to the same results or not. These are the types of questions that we would like to address in this work. As we will see, in resolving these questions, our analysis also reveals several peculiar scenarios.
\begin{figure}
\includegraphics[width=\linewidth*3/4]{FigEPs.pdf}
\caption{\textbf{Different ways of encircling multiple EPs.} (a) Illustration of the Riemann surface of the square root function associated with an archetypal $2 \times 2$ non-Hermitian Hamiltonian. A loop that encircles the exceptional point (also known as the branch point) starting from the state $s_1$ will map it onto $s_2$ and vice versa. In the complex plane (lower panel), this is represented by adding a BC. (b) A scenario that exhibits three EPs. In this case, loops can encircle EPs in different ways, as illustrated by the two loops (solid/dashed lines) that enclose $\text{EP}_{1,3}$ starting from the same point (gray dot).
}
\label{FigEPs}
\end{figure}
\noindent \textit{Permutation operators and the exchange of eigenstates---}
Consider an $n$-dimensional non-Hermitian discrete Hamiltonian. The Riemann surface associated with the real (or imaginary) part of its eigenvalues will consist of $n$ sheets corresponding to different solution branches. We will label these $n$ branches as $b_1$, $b_2$, ..., $b_n$. In the complex plane, these branches are separated by BCs. Thus, an initial point on any trajectory in the complex plane will correspond to $n$ initial eigenstates, which we will label as $s_1$, $s_2$, ..., $s_n$. The eigenvalue of each state $s_i$ will be denoted by $\lambda_i$. As the encircling parameters are varied, the eigenstates will move along the trajectory, crossing from one branch to another across the BCs. The crucial point here is that we always keep the initial subscript of a state fixed as the state moves between branches. We now describe the initial configuration on the trajectory by the mapping:
\begin{equation}
\label{Eq:C0}
\mathcal{C}_0=
\begin{bmatrix}
\tilde{s}_0 \\
\tilde{b}_0
\end{bmatrix},
\end{equation}
where $\tilde{s}_0=(s_1 , s_2,...,s_n)$ and $\tilde{b}_0=(b_1 , b_2 , ..., b_n)$ are two ordered sets. In our notation, $\mathcal{C}_0$ maps (or associates) every element of $\tilde{s}_0$ to the corresponding element in $\tilde{b}_0$. Note that we can change the order of the elements in both $\tilde{s}_0$ and $\tilde{b}_0$ identically without changing $\mathcal{C}_0$. In other words, there are several equivalent ways of describing the same configuration. As the loop crosses BCs, the exchange between the eigenstates will result in new configurations which, again, can be described in different ways. Two particular choices are of interest here. In the first one, we always fix $\tilde{s}_0$ and allow the elements of $\tilde{b}_0$ to shuffle, effectively creating a new $\tilde{b}$. In the second, we do the converse. We will call these two equivalent notations the s- and b-frames, respectively. This is explained by the cartoon picture in Fig. \ref{FigPermutation}(a).
\begin{figure}
\includegraphics[width=\linewidth*3/4]{FigPermutation.pdf}
\caption{\textbf{Different permutation frames.} (a) A simple illustration of the two different frames used for representing the same configuration. (b) The mathematical formulation of the concept in (a) in terms of permutation mappings as discussed in details in the text.}
\label{FigPermutation}
\end{figure}
The first step in our analysis is to choose a scheme for sorting the eigenstates and locating the BCs accordingly. We will discuss the details of the sorting later, but for now we assume that we have a certain number of BCs and label each one with a unique integer value (positive for a crossing in a given direction and negative for the reverse crossing). Next, we determine how the eigenstates are redistributed along an infinitesimal trajectory that crosses each BC (see the discussion of sorting schemes below). For every loop, we then create an ordered list $\sigma$ that contains the numbers of the crossed BCs in the order they are crossed by the loop. In other words, the element $\sigma(j)$ is the number of the $j$-th crossed BC. Clearly, the list $\sigma$ will in general differ from one loop to another, and it can even differ for the same loop depending on the initial point or the encircling direction. The final configuration in both the s- and b-frames is then given by:
\begin{equation}\label{Eq:Cm}
\begin{aligned}
\mathcal{C}^\text{s}_\sigma&=
\begin{bmatrix}
\tilde{s}_0 \\
\tilde{b}_\sigma
\end{bmatrix}
\equiv
\begin{bmatrix}
\tilde{s}_0 \\
\mathcal{P} [\prod \pi_{\sigma(j)}] \circ \tilde{b}_0
\end{bmatrix},\\
\mathcal{C}^\text{b}_\sigma&=
\begin{bmatrix}
\tilde{s}_\sigma \\
\tilde{b}_0
\end{bmatrix}
\equiv
\begin{bmatrix}
\{\mathcal{P} [\prod \pi_{\sigma(j)}]\}^{-1} \circ \tilde{s}_0 \\
\tilde{b}_0
\end{bmatrix},
\end{aligned}
\end{equation}
where $\mathcal{P}$ denotes the ordering operator which arranges the multiplication of the permutation operators $\pi_{\sigma(j)}$ from right to left according to the order of crossing the BCs; and the product runs over the index $j$. For example, if $\sigma=(3,1,2)$, then $\mathcal{P} [\prod \pi_{\sigma(j)}]=\pi_{\sigma(3)} \circ \pi_{\sigma(2)} \circ \pi_{\sigma(1)}=\pi_2 \circ \pi_1 \circ \pi_3$. The permutation operator $\pi_{k}$ associated with BC $k$ is the standard permutation mapping that, when applied to a set, shuffles the order of its elements \cite{Hassani-MP}. Here it is used to describe how the eigenstates are redistributed when a trajectory crosses a BC. For instance, if the permutation exchanges the order of the first two elements of $\tilde{b}_0$ across a BC $k$, then $\pi_k(b_{1,2})=b_{2,1}$, and $\pi_k(b_i)=b_i$ for $i>2$. Figure \ref{FigPermutation}(b) illustrates the relation between the s- and b-frame calculations as expressed by Eqs. (\ref{Eq:Cm}).
\noindent \textit{From permutations to matrices---}
The above discussion can be directly mapped into linear algebra by using representation theory. To do so, we define the vectors $\vec{s}_0=(s_1,s_2, ..., s_n)^T$ and $\vec{b}_0=(b_1,b_2, ..., b_n)^T$. In the s-frame, we will fix $\vec{s}_0$ and allow $\vec{b}$ to vary in order to represent the change in configuration. In the b-frame, we just do the opposite. For instance, if after crossing a BC, eigenstate 1 moves to branch $n$, eigenstate 2 moves to branch 1 and eigenstate $n$ moves to branch 2, this will be expressed as $\vec{b}_1=(b_n,b_1, ..., b_2)^T$ in the s-frame; and $\vec{s}_1=(s_2,s_n,...s_1)^T$ in the b-frame. After a loop completes its full cycle, the final vector is then compared with the initial one to determine the exchange relations between the eigenstates. For instance, if the above vector was the final result, the exchange relations will be: $\{s_1,s_2,...,s_n\} \rightarrow \{s_n, s_1,...,s_2\}$, which means that after the evolution $s_1$ became $s_n$, $s_2$ became $s_1$ and $s_n$ became $s_2$.
We can now express the action of the permutation operators $\pi_k$ by the matrices $\textbf{\text{P}}_{\pi_k}$, whose elements are obtained according to the rule $\mathbf{P}_{\pi_k}(m,l) = 1$ if $b_l = \pi_k(b_m)$, and 0 otherwise \cite{Brualdi-CMC}. In the s- $\&$ b-frames, the redistribution of the eigenstates across the branches in Eq. (\ref{Eq:Cm}) can then be described by:
\begin{equation} \label{Eq: Matrix product}
\begin{aligned}
\vec{b}_\sigma &=\{\mathcal{P}[\prod \mathbf{M}_{{\sigma(j)}}]\}^{-1}\vec{b}_0, \\
\vec{s}_\sigma &=\mathcal{P}[\prod \mathbf{M}_{{\sigma(j)}}]\vec{s}_0,
\end{aligned}
\end{equation}
where $\mathbf{M}_k=\mathbf{P}_{\pi_k}^{-1}$. In arriving at the above equation, we have used standard results from group theory: $\mathbf{P}_{\pi_2 \circ \pi_1}=\mathbf{P}_{\pi_1} \mathbf{P}_{\pi_2}$ and $\mathbf{P}_{\pi^{-1}}=\mathbf{P}_{\pi}^{-1}$.
In the rest of this manuscript, we adopt the b-frame with matrices $\mathbf{M}$. This approach offers a clear advantage: the order of the matrices acting on the state vectors $\vec{s}$ is consistent with the order in which the BCs are crossed. As we will see shortly, this allows us to develop the topological features of equivalent loops in a straightforward manner. Finally, we note that if crossing a BC in one direction is associated with a matrix $\mathbf{M}$, the reverse crossing is described by $\mathbf{M}^{-1}$. In some cases (such as for an EP2), we can have $\mathbf{M}^{-1}=\mathbf{M}$, but this is not the general case.
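The b-frame bookkeeping is straightforward to automate. The following minimal Python sketch (our own illustration) multiplies the matrices in the order in which the BCs are crossed, using the fact that the inverse of a permutation matrix is its transpose:
\begin{verbatim}
import numpy as np

def loop_matrix(crossings, matrices):
    # matrices: dict mapping each BC label k to its permutation matrix M_k
    # crossings: list of (k, direction) in the order the BCs are met,
    #            with direction = +1 for a forward crossing and -1 for the reverse one
    n = next(iter(matrices.values())).shape[0]
    M = np.eye(n, dtype=int)
    for k, direction in crossings:
        Mk = matrices[k] if direction > 0 else matrices[k].T
        M = Mk @ M          # later crossings act on the left
    return M
\end{verbatim}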
\noindent \textit{Sorting of the eigenstates---} The discussion so far has focused on developing the general formalism by assuming that the eigenstates of the system are somehow classified according to a certain criterion. This is equivalent to saying that we divide the associated Riemann surface into different sheets, each harboring a solution branch. Of course, one can pick any such criterion to classify the solutions. In previous studies that involved one EP of order two or three, the eigenstates were classified based on the analytical solution of the associated characteristic polynomial. This, however, has two drawbacks: (1) It generates relatively complex branches on the Riemann sheet; (2) It cannot be applied to discrete Hamiltonians having dimensions larger than four, since analytical solutions do not exist for polynomials of order five or higher. Thus, our analysis above is useful only if one can find a sorting scheme that circumvents these problems. Interestingly, such a sorting scheme is easy to find. Particularly, we can sort the eigenstates based on the ascending (or descending) order of the real or imaginary parts of their eigenvalues. This scheme can be easily applied to any system of arbitrarily high dimension. Moreover, it lends itself to straightforward numerical implementation. To compute a permutation operator $\pi_k$ and its associated matrix $\mathbf{M}_k$ for a BC $k$, one chooses an infinitesimal trajectory that crosses the BC and calculates how the eigenvalues evolve along this trajectory, comparing their order before and after the crossing. This immediately provides the permutation. We illustrate this with a concrete example in the Methods.
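Numerically, the permutation across a given BC can be read off by diagonalizing the Hamiltonian just before and just after an infinitesimal crossing and matching the eigenvalues by continuity. The sketch below illustrates this procedure; the nearest-eigenvalue matching and the index conventions are simplifying choices of ours:
\begin{verbatim}
import numpy as np

def branch_permutation(H, kappa_before, kappa_after):
    # H: callable returning the Hamiltonian matrix for a given parameter value
    w0 = np.linalg.eigvals(H(kappa_before))
    w1 = np.linalg.eigvals(H(kappa_after))
    before = np.argsort(w0.real)                  # branch order before the crossing
    rank_after = np.argsort(np.argsort(w1.real))  # branch label of each eigenvalue after
    # the crossing is infinitesimal, so each eigenvalue barely moves:
    return [rank_after[np.argmin(np.abs(w1 - w0[i]))] for i in before]
    # entry i gives the branch onto which the state that started on branch i has moved
\end{verbatim}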
\subsection*{Equivalent Loops and Homotopy}
In this section, we employ the predictive power of our formalism to address the following question: are there any global features that characterize the equivalence between different loops regardless of their geometric details? In answering this question, we will first focus on the stroboscopic case and later discuss the implication for the dynamical behavior.
Here, two loops are called equivalent if they lead to identical static eigenstate exchange. It is generally believed that two similar loops starting at the same point and encircling the same EPs in the same direction are equivalent. Surprisingly, we will show below that this common belief is wrong.
In general, two loops will be equivalent if they have the same matrix product in Eq. (\ref{Eq: Matrix product}). This can occur for two unrelated loops, a situation which we will call accidental equivalence. However, we are particularly interested in establishing the conditions that guarantee this equivalence. To do so, we invoke the notion of homotopy between loops. In topology, two simple paths, having the same fixed endpoints in a space $S$, are called homotopic if they can be continuously deformed into each other \cite{Hatcher-AT}. Here the word ``simple" means injective, that is, each path does not intersect itself. If the two endpoints of a path are identical, this path is a loop with the identical endpoint as a basepoint. The space $S$ here will be a two-dimensional punctured parameter space (for example, the space spanned by Re$[\kappa]$--Im$[\kappa]$ in the examples discussed in the Methods) after removing all the EPs. Based on these definitions, we can now state the main results of this section: \textbf{(a) Homotopy is a sufficient condition for equivalence between loops; (b) Loops that are connected by free homotopy (continuous deformation between loops without any fixed points) can be equivalent for some starting points and inequivalent for others}.
\begin{figure}
\includegraphics[width=\linewidth]{FigHomotopy.pdf}
\caption{\textbf{Homotopy between loops.} Illustration of equivalence between homotopic loops in the parameter space of a generic Hamiltonian. (a) Loop \textcircled{a} encloses two EPs associated with matrices $\mathbf{M}_o$ and $\mathbf{M}_p$. (b) Loop \textcircled{b} encloses the same two EPs, yet it cannot be deformed into \textcircled{a} without crossing the EP associated with $\mathbf{M}_r$. Consequently, it has a different matrix product (barring accidental equivalence). On the other hand, loops \textcircled{c} and \textcircled{d} in (c) and (d) can be deformed into \textcircled{a} without crossing any EP. As a result, they are equivalent (have the same matrix product), as shown in the text. (e) presents a peculiar case of free homotopy. Loop \textcircled{e} is homotopic with \textcircled{a} for the starting point $z$ but not for $z'$. As a result, the two loops are equivalent for the former point but not for the latter. The discussion here is generic and can easily be extended to any other configuration of EPs and BCs.}
\label{FigHomotopy}
\end{figure}
In order to validate this statement, we consider a generic Hamiltonian having a number of EPs and, without any loss of generality, we focus only on a subset of the spectrum as shown in Fig. \ref{FigHomotopy}. The axes in the figures represent any two parameters of the Hamiltonian. We define the space $S$ to be the two-dimensional parameter space excluding the EPs. Figure \ref{FigHomotopy}(a) depicts a loop \textcircled{a} that encircles two EPs starting from point $z$ in the counterclockwise (CCW) direction. Consequently, its final permutation matrix is given by $\mathbf{M}_p\mathbf{M}_o$. Consider now what happens when loop \textcircled{a} is deformed continuously into a new loop. Here, different scenarios can arise: (1) The deformation can take place only by crossing an additional EP some number of times. This case is shown in Fig. \ref{FigHomotopy}(b), where it is clear that the new matrix product of loop \textcircled{b} ($\mathbf{M}_p\mathbf{M}_r\mathbf{M}_o\mathbf{M}_r^{-1}$) is in general different from the initial one. In this case, the two loops are not equivalent (unless accidental equivalence takes place). (2) The deformation can occur without changing the number or order of the crossed BCs, in which case the loops are equivalent. (3) The deformation can change the number of the crossed BCs in pairs traversed consecutively back and forth, as shown in Fig. \ref{FigHomotopy}(c). Here the two loops \textcircled{a} and \textcircled{c} are also equivalent because the matrix product is still the same: $\mathbf{M}_p\mathbf{M}_q^{-1}\mathbf{M}_q\mathbf{M}_o=\mathbf{M}_p\mathbf{M}_o$. (4) The deformation can occur without crossing any EP while changing the number of the crossed BCs in pairs traversed back and forth but not consecutively, as shown in Fig. \ref{FigHomotopy}(d). In this case, the final matrix product is given by $\mathbf{M}_p\mathbf{M}_q^{-1}\mathbf{M}_o\mathbf{M}_q$. It is not immediately clear if this product is equivalent to $\mathbf{M}_p\mathbf{M}_o$. However, since the intersection point of the BCs (point $A$) is not an EP, then by definition, encircling point $A$ with a loop that does not enclose any EP must give the identity operator. In terms of matrices, this translates into $\mathbf{M}_o \mathbf{M}_q \mathbf{M}_o^{-1} \mathbf{M}_q^{-1} =\mathbf{I}$, or $[\mathbf{M}_o , \mathbf{M}_q]=0$. Consequently, $\mathbf{M}_p\mathbf{M}_q^{-1}\mathbf{M}_o\mathbf{M}_q=\mathbf{M}_p\mathbf{M}_o\mathbf{M}_q^{-1}\mathbf{M}_q=\mathbf{M}_p\mathbf{M}_o$, i.e., loops \textcircled{d} and \textcircled{a} are equivalent. (5) Finally, we can also have a loop similar to \textcircled{e}, as shown in Fig. \ref{FigHomotopy}(e). This is probably the most intriguing situation. For a starting point at $z$, both loops \textcircled{a} and \textcircled{e} have the same matrix product $\mathbf{M}_p\mathbf{M}_o$, which is consistent with the fact that they can be deformed into one another without crossing any EP. On the other hand, for a different starting point such as $z'$, the matrix product of loop \textcircled{e} is given by $\mathbf{M}_r^{-1} \mathbf{M}_o \mathbf{M}_p \mathbf{M}_r$, i.e., different from that of loop \textcircled{a}, which is given by $\mathbf{M}_o \mathbf{M}_p$. Note that for this starting point, the two loops cannot be deformed into each other without crossing an EP. In topology, continuous deformations that do not involve fixed points are called free homotopies. This completes our argument.
The above discussion focused only on the stroboscopic case. However, as we will show in the explicit example presented in the Methods, homotopy is also relevant to the dynamical encircling of EPs. Particularly, our numerical calculations show that homotopic loops tend to have the same outcome, despite the failure of the adiabatic perturbation theory. Intuitively, this interesting result can be roughly understood by noting that homotopic loops explore a very similar landscape in the complex domain. However, a deeper understanding of this behavior requires further investigation.
\subsection*{Conclusion}
In conclusion, we have introduced a general formalism based on permutation groups and representation theory for describing the stroboscopic encircling of multiple EPs. By using this tool, we uncovered the following counterintuitive result: trajectories that enclose the same EPs, starting from the same initial parameters and traveling in the same direction, do not necessarily result in identical exchange between the states. Instead, we have shown that this equivalence can be established only between homotopic loops. Finally, we have also discussed the implications of these results for the dynamic encircling of EPs. Our work may find applications in various fields, including the recent interesting work on the relationship between exceptional points and topological edge states \cite{leykam2017PRL, Hu2017PRB}.
\subsection*{Method}
\noindent\textit{Illustrative Examples ---}
We now discuss a concrete numerical example to demonstrate the application of our formalism and confirm the various predictions presented in the main text.
\noindent \textit{Model---} Consider the following Hamiltonian:
\begin{equation}\label{H}
\renewcommand\arraystretch{0.75}
H=\begin{bmatrix}
i \gamma & J & 0 & 0 \\
J & 0 & \kappa & 0 \\
0 & \kappa & 0 & J \\
0 & 0 & J & -i \gamma
\end{bmatrix},
\end{equation}
where $i$ is the imaginary unit, $\kappa$ $\&$ $J$ are coupling coefficients and $\gamma$ is the non-Hermitian parameter. In what follows, the four eigenvalues of $H$ will be investigated as a function of the complex $\kappa$ by fixing $J=\gamma=1$ (in certain physical platforms such as optics, it might be practically easier to fix all the parameters and change $\gamma$, but that will not affect the main conclusions of this work).
\begin{figure}
\includegraphics[width=\linewidth]{FigExample1}
\caption{\textbf{Numerical illustration of our approach.} (a) The branches of the Riemann surface of the real part of the eigenvalues of $H$ in Eq. (\ref{H}) are distinguished by different colors according to the magnitude of Re[$\lambda$]. The EPs and their corresponding BCs (red lines) are illustrated in (b). Each BC is associated with a permutation matrix $\mathbf{M}_k$ in Eq. (\ref{Matrix}). One closed loop (blue line) encircles EP$_1$ and EP$_2$ CCW, starting from the gray points (solid or hollow) on the loop. A loop that intersects BCs causes eigenvalues to move from one branch to another, which finally results in the exchange of eigenstates. (c) The stroboscopic evolution of the complex eigenvalues, plotted as a parametric function of $\kappa$ as it moves along the loop CCW. The eigenvalues at the starting point are labeled by gray points on their trajectories. The colors along each eigenvalue trajectory indicate which branch the eigenvalue occupies instantaneously; the joints between two colors are where $\kappa$ crosses the BCs. The gray points (solid or hollow) and arrows illustrate the evolution of the eigenvalues for the starting point $\kappa_0$ or $\kappa'_0$; the corresponding eigenstate exchanges are $\{s_1,s_2,s_3,s_4\} \rightarrow \{s_3,s_1,s_4,s_2\}$ and $\{s_1,s_2,s_3,s_4\} \rightarrow \{s_2,s_4,s_1,s_3\}$, respectively.}
\label{FigExample1}
\end{figure}
Under these conditions, $H$ has three pairs of EPs at $\kappa=\pm1$, $\pm\sqrt{2\sqrt{3}-3}$, $\pm i \sqrt{2\sqrt{3}+3}$, which we will denote by EP$_1$, EP$_1'$, EP$_2$, EP$_2'$, EP$_3$, EP$_3'$, respectively. In each pair, EP$_{1,2,3}'$ has the same properties as EP$_{1,2,3}$. The Riemann surface and the distribution of the EPs in the complex $\kappa$ plane are shown in Fig. \ref{FigExample1}(a) and (b), respectively.
As discussed in the main text, the first step in our approach is to identify a simple sorting method. Here we chose to sort the eigenvalues according to the magnitude of their real parts, as shown in Fig. \ref{FigExample1}(a), where every branch is distinguished by a distinct color. From this figure, we can also identify the features of the EPs as follows: EP$_1$ $\&$ EP$_1'$ are of second order and connect branches 2 and 3; EP$_2$ $\&$ EP$_2'$ are of second order and connect branches 1 and 2 on one hand, and branches 3 and 4 on the other; and finally EP$_3$ $\&$ EP$_3'$ are of second order and connect branches 1 and 3 as well as branches 2 and 4 (in fact, all four surfaces of Re$[\lambda]$ are connected at EP$_3$ $\&$ EP$_3'$, and one has to look at the Im$[\lambda]$ surface, which is not shown here, to infer the connectivity). Equivalently, the surface connectivity across the EPs can be characterized by using a two-dimensional plane spanned by the real and imaginary parts of $\kappa$, together with the lines that separate the different solution branches (BCs) and the information on the transition between the different branches across each line. The latter can be expressed in terms of permutation matrices. Our sorting scheme for the eigenvalues of $H$ results in six BCs, as shown in Fig. \ref{FigExample1}(b), but one can identify only three different permutation matrices:
\begin{equation}\label{Matrix}
\renewcommand\arraystretch{0.5}
\begin{split}
\mathbf{M}_1&=\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{bmatrix},
\mathbf{M}_2=\begin{bmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{bmatrix},
\mathbf{M}_3=\begin{bmatrix}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0
\end{bmatrix}.
\end{split}
\end{equation}
The correspondence between these matrices and the BCs is depicted in Fig. \ref{FigExample1}(b). It is not difficult to see that the above matrices have the following properties: (1) $\mathbf{M}_1^2=\mathbf{M}_2^2=\mathbf{M}_3^2=\mathbf{I}$; (2) $[\mathbf{M}_1, \mathbf{M}_3]=[\mathbf{M}_2, \mathbf{M}_3]=0$.
\noindent \textit{Stroboscopic encircling of EPs---} We now focus on the loop encircling both EP$_1$ and EP$_2$, as shown in Fig. \ref{FigExample1}(b). Clearly, the final exchange relation is determined by the product of $\mathbf{M}_1$ and $\mathbf{M}_2$. Since $[\mathbf{M}_1,\mathbf{M}_2] \neq 0$, one has to be more specific about the starting point and direction. For the sake of illustration, let us choose the counterclockwise direction, with $\kappa_0$ or $\kappa'_0$ as the starting point. In the first case, the loop intersects the BC associated with $\mathbf{M}_2$ before it crosses that of $\mathbf{M}_1$. As such, we have $\mathbf{M}_1 \mathbf{M}_2 (s_1, s_2, s_3, s_4)^T=(s_2, s_4, s_1, s_3)^T$, which in turn implies the exchange $\{s_1, s_2, s_3, s_4\} \rightarrow \{s_3, s_1, s_4, s_2\}$. Similarly, the starting point $\kappa_0^{\prime}$ gives $\mathbf{M}_2 \mathbf{M}_1 (s_1, s_2, s_3, s_4)^T=(s_3, s_1, s_4, s_2)^T$, which leads to $\{s_1, s_2, s_3, s_4\} \rightarrow \{s_2, s_4, s_1, s_3\}$. These exchange relations are also evident from the eigenvalue trajectories in Fig. \ref{FigExample1}(c). Another important consequence of the absence of commutation between $\mathbf{M}_1$ and $\mathbf{M}_2$ is that $\mathbf{M}_2 \mathbf{M}_1 \mathbf{M}_2 \mathbf{M}_1 \neq \mathbf{I}$. Hence, encircling the loop in Fig. \ref{FigExample1}(b) twice still leads to a nontrivial exchange. For example, the state $s_1$ will evolve into $s_3$, $s_4$ and $s_2$ after encircling the loop two, three and four times, respectively.
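These exchange relations can be checked directly by applying the matrices of Eq. (\ref{Matrix}) to an ordered list of state labels; the following short, self-contained check is our own illustration:
\begin{verbatim}
import numpy as np

M1 = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])
M2 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]])
M3 = np.array([[0,0,0,1],[0,0,1,0],[0,1,0,0],[1,0,0,0]])

s = np.array([1, 2, 3, 4])                  # stand-ins for s_1 ... s_4
print(M1 @ M2 @ s)                          # [2 4 1 3]: the vector (s_2, s_4, s_1, s_3)^T
print(M2 @ M1 @ s)                          # [3 1 4 2]: the vector (s_3, s_1, s_4, s_2)^T
print(np.array_equal(M1 @ M3, M3 @ M1))     # True: [M_1, M_3] = 0
print(np.array_equal(M1 @ M1, np.eye(4, dtype=int)))   # True: M_1^2 = I
\end{verbatim}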
\noindent \textit{Topological features of equivalent loops---} Here, we further elucidate the topological features of equivalent loops in the context of the example given by Eq. (\ref{H}). In this case, the space $S$ is the space spanned by Re$[\kappa]$ and Im$[\kappa]$ after removing the points EP$_{1,2,3}$ and EP$_{1,2,3}'$. By inspecting the two loops \textcircled{1} and \textcircled{2} in Fig. \ref{FigExample2}(a), it is clear that they are not homotopic for the starting point $\kappa_0$. Indeed, the net permutation matrix associated with loop \textcircled{1} is $\mathbf{M}_1\mathbf{M}_2 \mathbf{M}_1 \mathbf{M}_2$, resulting in $\{s_1, s_2,s_3, s_4\}\rightarrow \{s_4, s_3,s_2, s_1\}$, whereas the permutation matrix associated with loop \textcircled{2} is $\mathbf{M}_1 \mathbf{M}_3 \mathbf{M}_1 \mathbf{M}_3=\mathbf{I}$. Consequently, their exchange relations are in general different, as shown in Fig. \ref{FigExample2}(b) and (c).
\begin{figure}
\includegraphics[width=\linewidth]{FigExample2.pdf}
\caption{\textbf{Numerical example of homotopic relations between loops.} (a) Depicts two similar loops \textcircled{1} and \textcircled{2} that encircle EP$_1$ and EP$_2$. The two loops are non-homotopic for any starting point including $\kappa_0$ (which is considered for the example), since they cannot be deformed into one another without crossing $\text{EP}_3$. Their corresponding matrix product is $\mathbf{M}_1\mathbf{M}_2\mathbf{M}_1\mathbf{M}_2$ and $\mathbf{I}$, respectively. This is confirmed by their eigenvalue trajectories as shown in (b) and (c). (d) The two similar loops \textcircled{3} and \textcircled{4} are non-homotopic for the starting point $\kappa_0$ but homotopic for $\kappa_0'$. This is also reflected in the exchange relations of the eigenvalues as shown in (e) and (f).}
\label{FigExample2}
\end{figure}
Next, we investigate a scenario that highlights the case of free homotopy. The two loops \textcircled{3} and \textcircled{4} in Fig. \ref{FigExample2}(d) are similar (they enclose the same EPs), yet they are not homotopic for the starting point $\kappa_0$, i.e., they cannot be transformed into one another while keeping the starting point fixed without crossing EP$_2$. Thus the two loops are not necessarily equivalent. Indeed, the net redistribution matrix associated with loop \textcircled{3} is $\mathbf{M}_1$, resulting in $\{s_1, s_2,s_3, s_4\}\rightarrow \{s_1, s_3,s_2, s_4\}$; while for loop \textcircled{4}, the permutation matrix is $\mathbf{M}_2 \mathbf{M}_1 \mathbf{M}_2$, which gives $\{s_1, s_2,s_3, s_4 \} \rightarrow \{s_4, s_2,s_3, s_1 \}$. On the other hand, if we consider the same loops \textcircled{3} and \textcircled{4} but with a different starting point $\kappa_0'$, they are homotopic and the net permutation matrix is $\mathbf{M}_1$ for both loops. Figures \ref{FigExample2}(e) and (f) confirm these results.
\noindent \textit{Implications for dynamical evolution---}
So far we have discussed the stroboscopic (or static) exchange between the eigenstates as a result of encircling EPs. Whereas this type of ``evolution" can in general be accessed experimentally (see Refs. \cite{Dembowski2001PRL,Dietz2011PRL,Gao2015N} for the case of second-order EPs), recent theoretical and experimental efforts are painting a different picture for the dynamic evolution, showing that the interplay between gain and loss will inevitably break adiabaticity \cite{Raam2011JPA,Berry2011JPA,Berry2011JO,Hassan2017PRL,Doppler2016N,Xu2016N}. It is thus interesting to investigate whether the homotopy between loops (or its lack for that matter) has any impact on the dynamic evolution. Here we do not attempt to answer this question rigorously but rather consider an illustrative example. To do so, we focus again on the same loops \textcircled{3} and \textcircled{4} shown in Fig. \ref{FigExample2}(d), and we perform numerical integration to compute the dynamical evolution around these loops starting from either $\kappa_0$ or $\kappa_0'$. As we discussed before, the loops are similar for both initial conditions but homotopic only for the latter. The computational details are presented below, but the main results confirm our conclusion in the main text: (1) When the two loops are homotopic (i.e., when the initial point on the loop is $\kappa_0'$), any initial state $s_i$, with $i=1,2,3,4$, will end up at state $s_2$ regardless of the considered loop; (2) For similar but non-homotopic loops (i.e., when the initial point on the loop is $\kappa_0$), the initial states on loop \textcircled{3} always evolve to $s_3$, while those on loop \textcircled{4} evolve to $s_1$.
These results suggest that homotopy between the loops plays a much greater role than just describing the static exchange between the states. In particular, it might also be useful in classifying the dynamic evolution. We plan to investigate this interesting direction in future work.
\noindent\textit{Numerical calculation of dynamic evolution---} Here we present the details of the numerical calculations for the dynamic evolution. First, we choose the point $\kappa_0 = (0.4,-0.15)$ in Fig. 6. Next, we choose loop \textcircled{4} in Fig. 6(a) as:
\begin{equation}
\begin{aligned}
\text{Re}[\kappa(t)]&=\begin{cases}
c_1+r_1 \cos(\omega t), & t \in [0,T/4) \\
c_2+r_2 \cos(\omega t), & t \in [T/4,T/2) \\
c_1-r_2 \cos(\omega t), & t \in [T/2,3T/4) \\
c_3+r_2 \cos(\omega t), & t \in [3T/4,T]
\end{cases}, \\
\text{Im}[\kappa(t)]&=\begin{cases}
r_1 \sin(\omega t), & t \in [0,T/4) \\
r_2 \sin(\omega t), & t \in [T/4,T]
\end{cases} ,
\end{aligned}
\end{equation}
where $c_1=0.7$, $c_2=0.4$ and $c_3=1$. Note that the centers of the semicircles associated with loop \textcircled{4} in Fig. 6(a) are given by the coordinates $(c_{1,2,3},0)$. The associated radii are $r_1=0.45$ and $r_2=0.15$. The quantity $T=4 \pi/|\omega|$ is the time needed to complete one cycle.
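For convenience, a minimal Python sketch of this piecewise parametrization is given below; the sampling of $t$ and the function name are our own choices, while the parameter values follow the text.
\begin{verbatim}
import numpy as np

# Parameters of loop 4 as given above.
c1, c2, c3 = 0.7, 0.4, 1.0
r1, r2 = 0.45, 0.15
omega = 1e-4                 # encircling speed; use -1e-4 for CW
T = 4 * np.pi / abs(omega)   # time needed to complete one cycle

def kappa_loop4(t):
    """Piecewise parametrization kappa(t) of loop 4 (complex-valued)."""
    t = np.mod(t, T)
    if t < T / 4:
        re = c1 + r1 * np.cos(omega * t)
    elif t < T / 2:
        re = c2 + r2 * np.cos(omega * t)
    elif t < 3 * T / 4:
        re = c1 - r2 * np.cos(omega * t)
    else:
        re = c3 + r2 * np.cos(omega * t)
    im = r1 * np.sin(omega * t) if t < T / 4 else r2 * np.sin(omega * t)
    return re + 1j * im

# Sample the loop, e.g. for plotting or for driving the time integration;
# for omega > 0 the starting point kappa_0 = (0.4, -0.15) is reached at t = 3T/8.
ts = np.linspace(0.0, T, 2001)
loop4 = np.array([kappa_loop4(t) for t in ts])
\end{verbatim}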
The exact position of point $\kappa_0'$ can now be chosen to be the intersection of the line passing through $\kappa_0$ and EP$_1$ with the top large semi-circle, which gives $\kappa'_0 \approx (1.148,0.03711)$.
Finally, loop \textcircled{3} in Fig. 6(b) was chosen to be a tilted ellipse with the line connecting $\kappa_0$ and $\kappa'_0$ as the major axis. This ellipse has semi-major axis $a \approx 0.3782$, focal distance $c=a-0.002$ and a rotation angle $\theta=\arctan\frac{1}{4}$. Therefore, the parametric function of loop \textcircled{3} is:
\begin{equation}
\begin{aligned}
\text{Re}[\kappa(t)]&= c_x + a \cos(\omega t) \cos\theta - b \sin(\omega t) \sin\theta,\\
\text{Im}[\kappa(t)]&= c_y + a \cos(\omega t) \sin\theta + b \sin(\omega t) \cos\theta,
\end{aligned}
\end{equation}
where $b=\sqrt{a^2-c^2}$ is the semi-minor axis of the ellipse and $(c_x,c_y)=(c_2+a \sin\theta, -r_2 + a \cos \theta)$ is the center of the ellipse.
In all simulations, we chose the encircling speed $\omega=\pm 10^{-4}$ (the positive/negative sign corresponds to CCW/CW encircling, respectively).
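A corresponding sketch for the tilted-ellipse parametrization of loop \textcircled{3}, together with the generic structure of the time integration, is given below; the Hamiltonian $H(\kappa)$ is the one defined in the main text and is left as a placeholder here, and the variable names are ours.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of loop 3 (tilted ellipse) as given above.
c2, r2 = 0.4, 0.15
a = 0.3782                    # semi-major axis
cf = a - 0.002                # focal distance
b = np.sqrt(a**2 - cf**2)     # semi-minor axis
theta = np.arctan(1 / 4)      # rotation angle
cx = c2 + a * np.sin(theta)   # center of the ellipse
cy = -r2 + a * np.cos(theta)
omega = 1e-4                  # encircling speed; -1e-4 for CW

def kappa_loop3(t):
    """Tilted-ellipse parametrization kappa(t) of loop 3 (complex-valued).
    The loop closes after omega*t = 2*pi, i.e. after a time 2*pi/|omega|."""
    re = cx + a*np.cos(omega*t)*np.cos(theta) - b*np.sin(omega*t)*np.sin(theta)
    im = cy + a*np.cos(omega*t)*np.sin(theta) + b*np.sin(omega*t)*np.cos(theta)
    return re + 1j * im

def H(kappa):
    """Placeholder: kappa-dependent Hamiltonian matrix from the main text."""
    raise NotImplementedError

def rhs(t, psi):
    # Schroedinger-type evolution i d(psi)/dt = H(kappa(t)) psi
    return -1j * (H(kappa_loop3(t)) @ psi)

# Example call (psi0 = an instantaneous eigenstate s_i at kappa(0)):
# sol = solve_ivp(rhs, (0.0, 2*np.pi/abs(omega)), psi0, rtol=1e-8, atol=1e-10)
\end{verbatim}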
\begin{figure}
\center
\includegraphics[width=0.75\linewidth]{FigNumerical.pdf}
\caption{\textbf{Trajectories of dynamical evolutions.} The details of loops \textcircled{3} and \textcircled{4} used in the numerical simulation of dynamic evolution of eigenstates in the main text are illustrated in (a) and (b). Loop \textcircled{3} is a tilted ellipse with the line connecting $\kappa_0$ and $\kappa'_0$ as the major axis. Loop \textcircled{4} is a combination of one large semi-circle and three identical small semi-circles.}
\label{FigNumerical}
\end{figure}
\newpage
\noindent \textit{Data availability---} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\subsection*{Author Contribution}
R.E. conceived the project. Q.Z. and R.E. developed the theoretical framework with support from D.N.C. All authors contributed to the analysis and manuscript writing.
\bibliographystyle{naturemag}
|
1,116,691,499,853 | arxiv | \section{The renormalized g-tensor for confined electrons}
In this section, we calculate the renormalization of the confined electron g-factor in the presence of an applied magnetic field. Through the
Schrieffer-Wolff transformation, the effective Hamiltonian (up to second order in spin-orbit interaction and to all orders in Zeeman coupling)
is [5,12]
\begin{eqnarray}
H_{eff} &=& H_d + H_Z + \Delta H, \\
\Delta H &=& \frac{1}{2}[S,H_{so}],
\end{eqnarray}
where the transformation matrix $S$ is introduced in the main text of the paper. By calculating the above commutator, one can obtain the
(second-order) corrections to the Hamiltonian and the electron spin g-tensor. The resulting Hamiltonian has a rather complicated form; thus, we
only present two special cases of particular interest:
1. {\it B in-plane and along the x axis}
\begin{widetext}
\begin{eqnarray}
\Delta H_{\parallel} &=& - \frac{\hbar^2}{2m} \left(\frac{1}{\lambda_+^2}+\frac{1}{\lambda_-^2} \right)
+ \frac{\hbar^2 \tilde \beta_2}{2m \lambda_+} + \frac{\hbar \, \beta_1}{m \lambda_+} p_x^2 \, \sigma_x
- \frac{\hbar}{2 m \lambda_-} \{ p_x, p_y \} \, \sigma_y
+ \left[ \frac{\hbar}{m \lambda_+ \lambda_-} (x p_y - y p_x) - \frac{\hbar \, \tilde \beta_2}{m \lambda_-} x p_y \right] \sigma_z, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;
\end{eqnarray}
\end{widetext}
where $\{U, V\} \equiv UV + VU$ is the anticommutator of $U$ and $V$. In an in-plane $B$ field, generally all components of the g-tensor are
renormalized, depending on the electron orbital state. However, for an electron in the ground state, the quantum mechanical averages of the
orbital operators multiplying $\sigma_y$ and $\sigma_z$ vanish, and only $\tilde g_{xx}$ differs from the bulk g-factor.
2. {\it B perpendicular to the x-y plane}
\begin{widetext}
\begin{eqnarray}
\Delta H_{\perp} &=& - \frac{\hbar^2}{2m} \left(\frac{1}{\lambda_+^2}+\frac{1}{\lambda_-^2}
+ \frac{\alpha_2}{\lambda_+} -\frac{\beta_2}{\lambda_-} - \frac{\tilde \alpha_2}{\lambda_-}
- \frac{\tilde \beta_2}{\lambda_+} \right) - \frac{\hbar^2 \omega_c}{2} \left(\frac{\alpha_1}{\lambda_+}
+\frac{\beta_1}{\lambda_-} + \frac{\tilde \alpha_1}{\lambda_-} - \frac{\tilde \beta_1}{\lambda_+} \right) \nonumber \\
&& +\frac{\hbar}{m}\left[\frac{1}{\lambda_+ \lambda_-}(xp_y - y p_x) + \frac{1}{\lambda_+}\left(\tilde \alpha_1
+ \beta_1\right) p_x^2 +
\frac{1}{\lambda_-}\left( \alpha_1 - \tilde\beta_1\right) p_y^2 + \frac{1}{\lambda_-}\left(\alpha_2 - \tilde \beta_2 \right)
x p_y
+ \frac{1}{\lambda_+}\left(\tilde \alpha_2 + \beta_2 \right) y p_x \right] \sigma_z \;\;\;\;\;\;\;\;\;\;
\end{eqnarray}
\end{widetext}
In this case, only $g_{zz}$ is renormalized, for any electronic orbital state.
\section{The leakage rate out of the singlet-triplet subspace}
Here we calculate the two-electron spin relaxation rate for a double dot in a {\it perpendicular} magnetic field, for the regime where $a,b \ll
E_Z - J$. In a perpendicular $B$ field, there is no coupling between $S$ and $T_0$ to leading order in the spin-orbit interaction. The only
leakage channel is $S \rightarrow T_+$ [see Eq. (12) in the text],
\begin{widetext}
\begin{eqnarray}
\Gamma_{S \rightarrow T_+} &=& \frac{8 (e e_{14})^2}{\pi \kappa^2 \rho \hbar^2 s_t^3} (n_{q_0}+1)
\cdot e^ {-\frac{d^2 (l^4+4l_B^4)}{l^2 l_B^4} } \cdot \frac{ (a_x - b_y)^2 + (a_y - b_x)^2 }{E_Z - J}
\cdot \{ g (d,q_0) +\frac{f(r)}{8} \}, \label{rate-perp} \;\;\; \\
\mbox{\boldmath $a$} &=& \frac{\mu B_z}{2} \left( -\frac{d_x}{\lambda_+} \, \hat x+ \frac{ d_y}{\lambda_-} \, \hat y \right), \\
\mbox{\boldmath $b$} &=& \frac{\mu B_z \, l^2}{4 \, l_B^2} \left( \frac{d_y}{\lambda_+} \, \hat x - \frac{ d_x}{\lambda_-} \,
\hat y \right), \\
g (d, q_0) &=& \int_0^\pi d \theta \sin^3\theta \cos^2\theta \; e^ {-l^2 q_0^2 \sin^2\theta /2 }
\; I_0 (\frac{2 l^2 d q_0}{l_B^2} \sin\theta) \nonumber\\
&+& \frac {l_B^4}{4 l^4 d^2 q_0^2} \int_0^\pi d \theta \sin^3\theta \; e^ {-l^2 q_0^2 \sin^2\theta /2 }
\; \left\{ I_2 (\frac{2 l^2 d q_0}{l_B^2} \sin\theta) + \frac{2 l^2 d q_0}{l_B^2} \sin\theta \cdot I_3
(\frac{2 l^2 d q_0}{l_B^2} \sin\theta) \right\}, \nonumber
\end{eqnarray}
\end{widetext}
where $q_0 = \sqrt 2 \; r/l$, $r \equiv l(E_Z -J)/(\sqrt {2} \hbar s_t)$, $l = l_B/\sqrt[4]{\frac{1}{4}+\frac{\omega_0^2}{\omega_c^2}}$, $l_B
=\sqrt{\frac{\hbar c}{e B}}$, and $\omega_c = \frac{e B}{m c}$. The function $f(x)$ is introduced in the main text of the paper [see Eq.~(14)],
and $I_n(x)$ is the modified Bessel function of the first kind of order $n$. Note that the rate in Eq.~(\ref{rate-perp}) has a geometrical dependence on
the orientation of the setup with respect to the crystallographic axes. In Fig.~\ref{Relaxation-supp}, we have plotted this leakage rate as a
function of the magnetic field for a DQD along the $x$ axis. We find that the rates are typically very small, much smaller than the
single-spin (and the hyperfine-induced two-spin) relaxation rates.
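As a practical note, the angular integrals defining $g(d,q_0)$ above can be evaluated numerically; a minimal Python sketch is given below (the function and variable names are ours, and the inputs $d$, $q_0$, $l$ and $l_B$ are assumed to be supplied in consistent units).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import iv  # modified Bessel function of the first kind

def g_of_d_q0(d, q0, l, lB):
    """Numerical evaluation of g(d, q0) as defined above."""
    xi = 2.0 * l**2 * d * q0 / lB**2

    def f1(th):
        s = np.sin(th)
        return s**3 * np.cos(th)**2 * np.exp(-0.5 * (l * q0 * s)**2) * iv(0, xi * s)

    def f2(th):
        s = np.sin(th)
        return (s**3 * np.exp(-0.5 * (l * q0 * s)**2)
                * (iv(2, xi * s) + xi * s * iv(3, xi * s)))

    term1, _ = quad(f1, 0.0, np.pi)
    term2, _ = quad(f2, 0.0, np.pi)
    return term1 + lB**4 / (4.0 * l**4 * d**2 * q0**2) * term2
\end{verbatim}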
\begin{figure}
\begin{center}
\includegraphics[angle=0,width=0.5\textwidth]{rates-B-perp.pdf}
\caption{\small The singlet-triplet ($S \rightarrow T_+$) relaxation time, for a DQD along the $x$ axis, as a function of applied
(perpendicular) magnetic field. The material parameters are chosen for GaAs quantum dots with $\lambda_{so} = 5 \; \mu$m, $l= 30$ nm (at
$B=0$), $e_{14} = 1.4 \times 10^9$ V/m, $\kappa=13.1$, $\rho = 5.3 \times 10^3$ kg/m$^3$, and $s_t = 2.5 \times 10^3$ m/s. }
\label{Relaxation-supp}
\end{center}
\end{figure}
|
1,116,691,499,854 | arxiv | \section{\label{intro}Introduction}
Acoustic metamaterials comprised of sub-wavelength inclusions have attracted much attention due to their extraordinary acoustical effects. By proper design of the inclusions one can achieve a variety of exotic effective material properties such as negative bulk modulus and density,~\cite{Liu000,Li04a,Fang06} zero index,~\cite{Huang11,Dubois2017a} anisotropic and pentamode properties.~\cite{Popa2016,Su2017a,Hedayati2017} Numerous applications have been demonstrated, e.g. acoustic cloaking,~\cite{Fang11,Chen2017} negative refraction,~\cite{Zhang2004,Hladky-Hennion13} and nonreciprocal transmission.~\cite{Liang2010,Fleury2014b} The medium of particular interest in this paper is known as a Willis material, which requires strain-velocity cross-coupling in elasticity or pressure-velocity coupling in acoustics.~\cite{Willis1980,Willis1981,Milton07,Norris12,Muhlestein2016}
It is well-known that monopole and dipole moments are dominant sources of the scattered field for sub-wavelength scatterers. For Willis type acoustic scatterers, the scattering requires cross-coupling between pressure and velocity fields.\cite{Sieck2017} Recently, \citet{Quan2018} described this type of coupling in the context of acoustic bianisotropic polarization due to its similarity to bianisotropy in electromagnetics. One interesting finding in their work is that both the pressure and velocity fields can produce monopole and dipole responses. In some circumstances, the cross-coupling induced scattering can be dominant for properly designed sub-wavelength scatterers. The interaction between pressure and velocity fields makes the scattering problem more complicated, while also presenting novel wavefront manipulation possibilities that conventional metasurfaces do not have.~\cite{Zhao2013,Xie2014,Li2015,Dubois2017,Liu2017,Su2018} For example, \citet{Koo2016} took advantage of the pressure-velocity coupling to achieve simultaneous control of transmitted and reflected wavefronts. \citet{Li2018} proposed a systematic design approach to improve the efficiency of acoustic metasurfaces using bianisotropic unit cells. The aforementioned two examples are based on the cross-coupling in uniaxial materials; the coupling in two- and three-dimensional (2D and 3D) Willis scatterers contains richer physics and can potentially be used in unprecedented applications. In order to design such scatterers, a rigorous and handy retrieval method for the polarizability tensor needs to be established. There are several retrieval methods available in the literature of electromagnetics.~\cite{Arango2013,Mirmoosa2014,Asadchy2014,Liu2016} However, the retrieval method must be redeveloped in acoustics due to the difference in the constitutive equations.
Examples of acoustic retrieval of Willis properties are limited. \citet{Muhlestein2017a} established a method for extracting the effective bulk modulus and mass density for a one-dimensional (1D) Willis material.
More recently, \citet{Quan2018} provided a robust method for retrieving the polarizability tensor for 2D structures by directly calculating the monopole and dipole moments based on the orthogonality of each mode. The latter method requires knowledge of the scattered field in all directions for different types of wave incidence, i.e.\ an infinite data set. The polarizability tensor contains a finite number of independent elements, 3 in 1D and 6 in 2D, and it therefore should be possible to retrieve these using finite data sets of similar size. The purpose of this paper is to present a simple and efficient approach for obtaining the Willis polarizability tensor in both 2D and 1D that requires only limited data. We do this by first interpreting the polarizability tensor in terms of a scattering (or T-) matrix.
This places the Willis polarizability in the context of standard scattering theory in which the T-matrix has full coupling. The scattering approach also implies bounds on the polarizability in a natural way by using the relation between the T-matrix and the S-matrix.
The bounds are a consequence of energy conservation which implies that the eigenvalues of the S-matrix must be of magnitude less than or equal to unity.
Explicit formulae are given for retrieving the polarizability tensor from finite sets of scattering data and are illustrated using numerical simulations. The method developed here is significantly simpler than that of \citet{Quan2018} in that it only requires a few probes of the scattered pressure in the near- or far-field for several plane wave excitations. For a Willis scatterer in 2D free field, there are nine polarizability components in total but only six are independent. One only needs to simulate plane wave excitations along the $\pm x$- and $\pm y$-directions, and probe the far-field scattered pressure along the $\pm x$- and $\pm y$-directions. Note that the choice of the incident directions and probe locations is not unique. For a Willis scatterer in a 1D waveguide, the total number of polarizability components reduces to four with three independent ones. In this case, one only needs incidence along opposite directions and data for the scattered pressure on both sides. Such an easy-to-implement method can drastically facilitate the optimization process during the inclusion design. It also offers the experimentalist a simple method to measure the polarizability. Two examples will be presented to demonstrate our retrieval method. It should be noted that higher order multipole moments also exist in the scattered field, but their contributions are negligible for deep sub-wavelength scatterers. We will see from the two examples that the curves obtained using the method developed here match well with those obtained using the method in Ref.~\citen{Quan2018}. In addition, the parameters retrieved using the present approach automatically satisfy the constraints imposed by reciprocity and energy conservation.
This paper is arranged as follows. In Sec.~\ref{scath}, we formulate the scattered fields for both 2D and 1D problems. Then we present our retrieval method in Sec.~\ref{retr} with Eqs.~\eqref{mod2d} and \eqref{retralphap} being the main results. Numerical examples are shown in Sec.~\ref{examp1} to validate our method. Section \ref{Conc} concludes the paper.
\section{\label{scath}Scattering from sub-wavelength Willis acoustic elements}
The type of scatterer considered here is one that couples monopole and dipole terms with little or no contribution from higher order multipoles. In this way it is the simplest embodiment of a Willis material, analogous to a standard lumped element in uni-dimensional acoustics. For that reason we refer to the scatterer as a Willis scatterer or a Willis element.
The scattered fields from a Willis element under arbitrary incidence can be written in terms of the polarizability tensor as first introduced by~\citet{Quan2018}. Explicit expressions for the pressure fields can be found by relating them to the multipole components of the scattered wave. In this section we derive the equations for the scattered fields using the polarizability tensor and reveal several general properties.
Consider sound radiation from a point source in a background medium with mass density $\rho$ and sound speed $c$. The radiated pressure can be written in terms of the Green's function as
\begin{equation}\label{scatgre}
p_s({\bf x})=\omega^2MG({\bf x})-\omega^2{\bm {D}}\cdot \nabla G({\bf x})+\cdots ,
\end{equation}
where $\omega$ is the angular frequency $(e^{-i\omega t}$ assumed). All high order multipole terms are dropped in the multipole expansion except for the monopole mass $M$ and dipole moment $\bm D$. The Green's function $G({\bf x})$ takes the form
\begin{equation}\label{gf}
\begin{aligned}
G({\bf x})=\begin{cases}
\frac{1}{i2kS}e^{ik|{\bf x}|}, &{\text {1D waveguide}},\\
\frac{1}{i4}H_0^{(1)}(k|{\bf x}|), &{\text {2D free field}},\\
\frac{-1}{4\pi|{\bf x}|}e^{ik|{\bf x}|} &{\text {3D free field}},
\end{cases}
\end{aligned}
\end{equation}
where $S$ is the cross-sectional area of the 1D waveguide; $k=\omega /c$ is the wavenumber in the background medium; $H_n^{(1)}$ is the Hankel function of the first kind. The mass and dipole moment are proportional to the incident pressure $p_i$ and velocity ${\bm v_i}$,
\begin{equation}\label{md}
\begin{aligned}
\begin{pmatrix}
M\\ {\bm D}
\end{pmatrix}={\bm \alpha}
\begin{pmatrix}
p_i\\ {\bm v_i}
\end{pmatrix},
\end{aligned}
\end{equation}
where ${\bm \alpha}$ is the polarizability tensor with components
\begin{equation}\label{tensor}
\begin{aligned}
{\bm \alpha}=\begin{pmatrix}
\alpha^{pp} &{\bm \alpha^{pv}}^T\\
{\bm \alpha^{vp}} &{\bm \alpha^{vv}}
\end{pmatrix}.
\end{aligned}
\end{equation}
The diagonal terms in Eq.~\eqref{tensor} correspond to the pressure excited monopole and velocity excited dipole; the off-diagonal terms correspond to the cross-coupling induced monopole and dipole moment. The objective of this paper is to provide a method to determine the components of $\bm \alpha$.
We focus on the retrieval method for the 2D and 1D cases; the 3D case can be derived in a manner similar to the 2D case.
\subsection{Scattering from a Willis element in 2D}
Consider an acoustically small asymmetric scatterer in 2D, with incident pressure and velocity fields at the scatterer location defined as
\begin{equation}\label{2Dinci}
\begin{aligned}
\begin{pmatrix}
p_i\\ v_{xi}\\ v_{yi}
\end{pmatrix}=
\begin{pmatrix}
1 &0 &0\\ 0 &\frac{1}{\sqrt{2}\rho c} &0\\ 0 &0 &\frac{1}{\sqrt{2}\rho c}
\end{pmatrix}
\begin{pmatrix}
A_0\\ A_x\\ A_y
\end{pmatrix}.
\end{aligned}
\end{equation}
Using Eqs.~\eqref{scatgre} and \eqref{gf}, the scattered pressure field from a Willis element is
\begin{equation}\label{sctp2d}
\begin{aligned}
p_s({\bf x})&=\frac{\omega^2}{i4}\Big ( MH_0^{(1)}(kr) + k \hat{\bf x}\cdot {\bm D}H_1^{(1)}(kr)\Big )\\
&=T_0H_0^{(1)}(kr)+\big ( T_x\cos \theta + T_y \sin \theta \big )i\sqrt{2}H_1^{(1)}(kr),
\end{aligned}
\end{equation}
where
\begin{equation}\label{tdef2d}
\begin{aligned}
{\bm t}\equiv \begin{pmatrix}
T_0\\ T_x\\ T_y
\end{pmatrix}=
\frac{\omega^2}{i4}
\begin{pmatrix}
1 &0 &0\\ 0 &-\frac{ik}{\sqrt{2}} &0\\ 0 &0 &-\frac{ik}{\sqrt{2}}
\end{pmatrix}
\begin{pmatrix}
M\\ D_x\\ D_y
\end{pmatrix}.
\end{aligned}
\end{equation}
The 2D (scattering) T-matrix ${\bm T}$ is defined by ${\bm t}={\bm T}{\bm a}$ with ${\bm a}=(A_0\ A_x\ A_y)^T$,
and hence
\begin{align}\label{TM2d}
{\bm T}&=\frac{\omega^2}{i4}\begin{pmatrix}
1 &0 &0\\ 0 &-\frac{ik}{\sqrt{2}} &0\\ 0 &0 &-\frac{ik}{\sqrt{2}}
\end{pmatrix}{\bm \alpha}
\begin{pmatrix}
1 &0 &0\\ 0 &\frac{1}{\sqrt{2}\rho c} &0\\ 0 &0 &\frac{1}{\sqrt{2}\rho c}
\end{pmatrix}
\notag \\
&=i\frac{\omega^2}{8}{\bm \alpha}^\prime,
\end{align}
where the modified polarization tensor ${\bm \alpha}^\prime$ is
\begin{align}\label{modalpha2d}
{\bm \alpha}^\prime &= \begin{pmatrix}
\alpha^{pp\prime} &\alpha_x^{pv\prime} &\alpha^{pv\prime}_y\\
\alpha^{vp\prime}_x &\alpha^{vv\prime}_{xx} &\alpha^{vv\prime}_{xy}\\
\alpha^{vp\prime}_y &\alpha^{vv\prime}_{yx} &\alpha^{vv\prime}_{yy}
\end{pmatrix}
\notag \\ &=
\begin{pmatrix}
-2\alpha^{pp} &-\frac{\sqrt{2}}{\rho c}\alpha_x^{pv} &-\frac{\sqrt{2}}{\rho c}\alpha^{pv}_y\\
ik\sqrt{2}\alpha^{vp}_x &\frac{ik}{\rho c}\alpha^{vv}_{xx} &\frac{ik}{\rho c}\alpha^{vv}_{xy}\\
ik\sqrt{2}\alpha^{vp}_y &\frac{ik}{\rho c}\alpha^{vv}_{yx} &\frac{ik}{\rho c}\alpha^{vv}_{yy}
\end{pmatrix}.
\end{align}
This expression is consistent with that in Ref.~\citen{Quan2018}. Reciprocity implies that $\alpha_{xy}^{vv\prime}=\alpha_{yx}^{vv\prime}$, $\alpha_x^{pv\prime}=-\alpha_x^{vp\prime}$ and $\alpha_y^{pv\prime}=-\alpha_y^{vp\prime}$. We will see later in the numerical examples that the retrieved parameters satisfy these reciprocity requirements.
The explicit form of the scattered field for any incident wave can be obtained using the above equations. For example, a plane wave incident along the $+x$-direction with unit magnitude is defined by $A_0=A_x/\sqrt{2}=1$ and $A_y=0$; the unit plane wave incident along the positive diagonal direction corresponds to $A_0=A_x=A_y=1$.
The importance of using the T-matrix formalism is that it is related to the S-matrix,
${\bm S}={\bm I}+2{\bm T}$, which is unitary in the absence of absorption: ${\bm S}{\bm S}^\dagger = {\bm I}$.
This places direct limits on the polarizability tensor, specifically that the eigenvalues of
${\bm I}+ \frac i{4} \omega^2{\bm \alpha}^\prime $ must be of unit magnitude. In the case of energy dissipation, the eigenvalue magnitudes must be less than or equal to unity. This result and its implications, which generalize the bounds obtained by \citet{Quan2018}, will be discussed at greater length separately.
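As a quick numerical check of this bound, one may compute the eigenvalues of ${\bm S}$ directly from a retrieved ${\bm \alpha}^\prime$; a short Python sketch (our own helper, not part of the original formulation) is:
\begin{verbatim}
import numpy as np

def s_matrix_eigenvalues_2d(alpha_prime, omega):
    """Eigenvalues of S = I + (i/4) * omega**2 * alpha_prime (2D case)."""
    alpha_prime = np.asarray(alpha_prime, dtype=complex)
    S = np.eye(3, dtype=complex) + 1j * omega**2 / 4.0 * alpha_prime
    return np.linalg.eigvals(S)

# Passivity requires np.abs(eigenvalues) <= 1, with equality for a
# lossless scatterer.
\end{verbatim}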
\subsection{Scattering from a Willis element in a 1D waveguide}
The scattering problem in a 1D waveguide is simpler in that it only involves forward and backward scattered waves. In this case the incident pressure and velocity fields at the scattering location are
\begin{equation}\label{pv}
\begin{aligned}
\begin{pmatrix}
p_i\\ {v_i}
\end{pmatrix}=
\begin{pmatrix}
1 &0\\ 0 &\frac{1}{\rho c}
\end{pmatrix}\begin{pmatrix}
A_1\\ A_2
\end{pmatrix}.
\end{aligned}
\end{equation}
Using Eqs.~\eqref{scatgre} and \eqref{gf}, the scattered pressure field from a Willis element in the waveguide is
\begin{align}\label{scat}
p_s(x)&=\frac{\omega c}{i2S}\Big (M-ikD \sgn x \Big ) e^{ik|x|}\notag \\
&=\big (T_0+T_1\sgn x\big ) e^{ik|x|}.
\end{align}
Note that the size of the scatterer must be much smaller than the radius of the waveguide, so we do not need to consider the narrow region nearby. The frequency range is sufficiently low so that only the fundamental waveguide mode propagates. By analogy with the 2D case,
\begin{equation}\label{scattm}
\begin{aligned}
{\bm t}\equiv \begin{pmatrix}
T_0\\ T_1
\end{pmatrix}
=\frac{i\omega c}{2S}
\begin{pmatrix}
-1 &0 \\ 0 &ik
\end{pmatrix}
\begin{pmatrix}
M\\ D
\end{pmatrix}.
\end{aligned}
\end{equation}
The 1D T-matrix ${\bm T}$, defined by ${\bm t} ={\bm {T}}(A_1\ A_2)^T$, is
\begin{equation}\label{bigt}
{\bm T}=\frac{i\omega c}{2S}
\begin{pmatrix}
-1 &0\\ 0 &ik
\end{pmatrix}{\bm \alpha}
\begin{pmatrix}
1 &0\\ 0 &\frac{1}{\rho c}
\end{pmatrix}=\frac{i\omega c}{2S}{\bm {\alpha^\prime}},
\end{equation}
and the modified polarizability tensor is
\begin{equation}\label{ptensor}
\begin{aligned}
{\bm \alpha^\prime}=\begin{pmatrix}
\alpha^{pp\prime} &\alpha^{pv\prime}\\
\alpha^{vp\prime} &\alpha^{vv\prime}
\end{pmatrix}=\begin{pmatrix}
-\alpha^{pp} &-\frac{1}{\rho c}\alpha^{pv}\\
ik\alpha^{vp} &\frac{ik}{\rho c}\alpha^{vv}
\end{pmatrix}.
\end{aligned}
\end{equation}
Reciprocity requirements impose the constraint: $\alpha^{pv\prime}=-\alpha^{vp\prime}$. It should be pointed out that the monopole and dipole moments are more dominant in this case since only the fundamental mode can propagate within the frequency range of interest. Therefore, the two eigenvalues of the S-matrix ${\bm S}={\bm I}+2{\bm T}$ must be close to unity.
The explicit form of the scattered field can be obtained as in 2D. The form of the incident waves are simpler with $A_1=A_2=1$ corresponding to a unit amplitude wave incident wave in the $+x$-direction, while $A_1=-A_2=1$ corresponds to a wave incident along the $-x$-direction.
\section{Retrieval method for the polarizability tensor}\label{retr}
The coupling between monopoles and dipoles in a Willis scatterer is achieved by a physically asymmetric object, for which there are rarely closed-form expressions available for the polarizability tensor ${\bm \alpha}$. It is therefore essential to have a rigorous and efficient numerical retrieval method. The objective of this section is to provide a simple recipe for retrieving ${\bm \alpha}$ from FEM simulations or experimental data.
\subsection{2D free field}
In this section we will use the nine-component form of the $3\times 3$ polarizability tensor for later comparison, even though it has only six independent components due to reciprocity.
We consider four plane wave excitations along the $\pm x$- and $\pm y$-directions. These are defined by taking appropriate amplitudes in Eq.~\eqref{2Dinci}. For instance, a plane wave incident along the $+x$-direction with pressure and velocity at the scatterer location (the origin) equal to $1$ and $1/\rho c$, respectively,
corresponds to $A_0=A_x/\sqrt{2}=1$ and $A_y=0$. Similarly, $A_0=-A_x/\sqrt{2}=1$ and $A_y=0$ for $-x$-incidence; $A_0=A_y/\sqrt{2}=1$ and $A_x=0$ for $+y$-incidence; $A_0=-A_y/\sqrt{2}=1$ and $A_x=0$ for $-y$-incidence.
The scattered pressure for the four cases are
\begin{widetext}
\begin{equation}\label{2dcase}
\begin{aligned}
^\pm_xp_s({\bf x}) &=\frac{i\omega^2}{8}\big (\alpha^{pp\prime}\pm\sqrt{2}\alpha_x^{pv\prime}\big )H_0^{(1)}(kr)
-\frac{\sqrt{2}\omega^2}{8}\Big [ \big (\alpha_x^{vp\prime}\pm\sqrt{2}\alpha_{xx}^{vv\prime}\big )\cos \theta + \big (\alpha_y^{vp\prime}\pm\sqrt{2}\alpha_{yx}^{vv\prime}\big ) \sin \theta \Big ]H_1^{(1)}(kr),\\
^\pm_yp_s({\bf x}) &=\frac{i\omega^2}{8}\big (\alpha^{pp\prime}\pm\sqrt{2}\alpha_y^{pv\prime}\big )H_0^{(1)}(kr)
-\frac{\sqrt{2}\omega^2}{8}\Big [ \big (\alpha_x^{vp\prime}\pm\sqrt{2}\alpha_{xy}^{vv\prime}\big )\cos \theta + \big (\alpha_y^{vp\prime}\pm\sqrt{2}\alpha_{yy}^{vv\prime}\big ) \sin \theta \Big ]H_1^{(1)}(kr),
\end{aligned}
\end{equation}
\end{widetext}
where the superscript and subscript on the left side of $p_s$ denote the incident direction.
For example, $^-_xp_s({\bf x})$ is the scattering solution for incidence along the $-x$-direction.
The retrieval method uses four scattered pressure measurements at distance $r=|{\bf x}|$ along $\pm x$- and $\pm y$-directions for each excitation, implying 16 data points.
The probed pressure values are $^{\pm}_xp_{sx}^{\pm}(r)$, $^{\pm}_xp_{sy}^{\pm}(r)$, $^{\pm}_yp_{sx}^{\pm}(r)$ and $^{\pm}_yp_{sy}^{\pm}(r)$, with the superscript and subscript on the right side of $p_s$ denoting the location $r=|{\bf x}|$, at which the pressure is measured along the specific axis.
For instance, $^-_xp_{sy}^{+}(r)$ is the scattered pressure at distance $r$ along the $+y$-direction for incidence along the $-x$-direction. However, the quadrupole component is not negligible when $ka\sim 1$. In order to obtain more accurate results, we filter out the quadrupole based on the orthogonality of each harmonic. For example, the quadrupole components in the four scattered fields for $+x$-incidence can be filtered by setting ${_x^+R} = ({^+_xp_{sx}^+} - {^+_xp_{sy}^+} + {^+_xp_{sx}^-} - {^+_xp_{sy}^-})/4$ and redefining the pressures as ${^+_xp_{sx}^+} - {_x^+R} \rightarrow {^+_xp_{sx}^+}$, ${^+_xp_{sx}^-} - {_x^+R} \rightarrow {^+_xp_{sx}^-}$, ${^+_xp_{sy}^+} + {_x^+R} \rightarrow {^+_xp_{sy}^+}$ and ${^+_xp_{sy}^-} + {_x^+R} \rightarrow {^+_xp_{sy}^-}$. All the other probed data corresponding to different incidences should be filtered in a similar fashion, such that they only include monopole and dipole components.
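For clarity, this filtering step can be written compactly in code; a minimal Python sketch for one incidence direction (the dictionary layout and function name are ours) is:
\begin{verbatim}
def filter_quadrupole(p):
    """Remove the quadrupole contribution from the four probe pressures of a
    single incidence direction; p maps the probe directions 'x+', 'x-', 'y+',
    'y-' to the complex scattered pressures measured at the same radius r."""
    R = (p['x+'] - p['y+'] + p['x-'] - p['y-']) / 4.0
    return {'x+': p['x+'] - R, 'x-': p['x-'] - R,
            'y+': p['y+'] + R, 'y-': p['y-'] + R}
\end{verbatim}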
Plugging the 16 filtered pressures into Eq.~\eqref{2dcase} and omitting $(r)$ for conciseness, we may invert the extracted data to get the modified polarizability components:
\begin{widetext}
\begin{equation}\label{mod2d}
\begin{aligned}
\alpha^{pp\prime} &=\frac{-i}{\omega^2H_0^{(1)}(kr)} \Big ( {^+_xp_{sx}^-} + {^+_xp_{sx}^+} + {^+_yp_{sy}^-} + {^+_yp_{sy}^+} + {^-_xp_{sx}^-} + {^-_xp_{sx}^+} + {^-_yp_{sy}^-} + {^-_yp_{sy}^+} \Big ),\\
\alpha_x^{pv\prime} &=\frac{-i}{\sqrt{2}\omega^2H_0^{(1)}(kr)} \Big ( {^+_xp_{sx}^-} + {^+_xp_{sx}^+} + {^+_xp_{sy}^-} + {^+_xp_{sy}^+} - {^-_xp_{sx}^-} - {^-_xp_{sx}^+} - {^-_xp_{sy}^-} - {^-_xp_{sy}^+} \Big ),\\
\alpha_y^{pv\prime} &=\frac{-i}{\sqrt{2}\omega^2H_0^{(1)}(kr)} \Big ( {^+_yp_{sy}^-} + {^+_yp_{sy}^+} + {^+_yp_{sx}^-} + {^+_yp_{sx}^+} - {^-_yp_{sy}^-} - {^-_yp_{sy}^+} - {^-_yp_{sx}^-} - {^-_yp_{sx}^+} \Big ),\\
\alpha_x^{vp\prime} &=\frac{1}{\sqrt{2}\omega^2H_1^{(1)}(kr)} \Big ( {^+_xp_{sx}^-} - {^+_xp_{sx}^+} + {^-_xp_{sx}^-} - {^-_xp_{sx}^+} + {^+_yp_{sx}^-} - {^+_yp_{sx}^+} + {^-_yp_{sx}^-} - {^-_yp_{sx}^+} \Big ),\\
\alpha_y^{vp\prime} &=\frac{1}{\sqrt{2}\omega^2H_1^{(1)}(kr)} \Big ( {^+_yp_{sy}^-} - {^+_yp_{sy}^+} + {^-_yp_{sy}^-} - {^-_yp_{sy}^+} + {^+_xp_{sy}^-} - {^+_xp_{sy}^+} + {^-_xp_{sy}^-} - {^-_xp_{sy}^+} \Big ),\\
\alpha_{xx}^{vv\prime}&=\frac{1}{\omega^2H_1^{(1)}(kr)} \Big ( {^+_xp_{sx}^-} - {^+_xp_{sx}^+} - {^-_xp_{sx}^-} + {^-_xp_{sx}^+} \Big ),\\
\alpha_{xy}^{vv\prime}&=\frac{1}{\omega^2H_1^{(1)}(kr)} \Big ( {^+_yp_{sx}^-} - {^+_yp_{sx}^+} - {^-_yp_{sx}^-} + {^-_yp_{sx}^+} \Big ),\\
\alpha_{yx}^{vv\prime}&=\frac{1}{\omega^2H_1^{(1)}(kr)} \Big ( {^+_xp_{sy}^-} - {^+_xp_{sy}^+} - {^-_xp_{sy}^-} + {^-_xp_{sy}^+} \Big ),\\
\alpha_{yy}^{vv\prime}&=\frac{1}{\omega^2H_1^{(1)}(kr)} \Big ( {^+_yp_{sy}^-} - {^+_yp_{sy}^+} - {^-_yp_{sy}^-} + {^-_yp_{sy}^+} \Big ).
\end{aligned}
\end{equation}
\end{widetext}
Note that the combination of the pressures in Eq.~\eqref{mod2d} for each polarizability is not unique (at least 4 out of the 16 probed pressures for each polarizability); here we took more pressure data into account to give more robust results. When implementing this retrieval method in FEM simulations, one only needs to simulate the aforementioned four plane wave excitations and extract the necessary parameters for Eq.~\eqref{mod2d}.
Thus, all the components in ${\bm \alpha}^\prime$ can be determined, and the original polarizability tensor $\bm \alpha$ follows from the relations in Eq.~\eqref{modalpha2d}.
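A direct transcription of Eq.~\eqref{mod2d} into Python (using SciPy for the Hankel functions) is sketched below; the nested-dictionary layout \texttt{p[inc][probe]} and the function name are our own conventions, and the probed pressures are assumed to have been quadrupole-filtered as described above.
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def retrieve_alpha_prime_2d(p, omega, k, r):
    """Modified polarizability tensor from the 16 filtered probe pressures.
    p[inc][probe]: complex scattered pressure for incidence 'inc' probed along
    'probe', with inc, probe in {'x+', 'x-', 'y+', 'y-'}, all at radius r."""
    H0 = hankel1(0, k * r)
    H1 = hankel1(1, k * r)
    w2 = omega**2
    s2 = np.sqrt(2.0)

    a_pp = -1j / (w2 * H0) * (p['x+']['x-'] + p['x+']['x+'] + p['y+']['y-'] + p['y+']['y+']
                              + p['x-']['x-'] + p['x-']['x+'] + p['y-']['y-'] + p['y-']['y+'])
    a_pv_x = -1j / (s2 * w2 * H0) * (p['x+']['x-'] + p['x+']['x+'] + p['x+']['y-'] + p['x+']['y+']
                                     - p['x-']['x-'] - p['x-']['x+'] - p['x-']['y-'] - p['x-']['y+'])
    a_pv_y = -1j / (s2 * w2 * H0) * (p['y+']['y-'] + p['y+']['y+'] + p['y+']['x-'] + p['y+']['x+']
                                     - p['y-']['y-'] - p['y-']['y+'] - p['y-']['x-'] - p['y-']['x+'])
    a_vp_x = 1.0 / (s2 * w2 * H1) * (p['x+']['x-'] - p['x+']['x+'] + p['x-']['x-'] - p['x-']['x+']
                                     + p['y+']['x-'] - p['y+']['x+'] + p['y-']['x-'] - p['y-']['x+'])
    a_vp_y = 1.0 / (s2 * w2 * H1) * (p['y+']['y-'] - p['y+']['y+'] + p['y-']['y-'] - p['y-']['y+']
                                     + p['x+']['y-'] - p['x+']['y+'] + p['x-']['y-'] - p['x-']['y+'])
    a_vv_xx = 1.0 / (w2 * H1) * (p['x+']['x-'] - p['x+']['x+'] - p['x-']['x-'] + p['x-']['x+'])
    a_vv_xy = 1.0 / (w2 * H1) * (p['y+']['x-'] - p['y+']['x+'] - p['y-']['x-'] + p['y-']['x+'])
    a_vv_yx = 1.0 / (w2 * H1) * (p['x+']['y-'] - p['x+']['y+'] - p['x-']['y-'] + p['x-']['y+'])
    a_vv_yy = 1.0 / (w2 * H1) * (p['y+']['y-'] - p['y+']['y+'] - p['y-']['y-'] + p['y-']['y+'])

    return np.array([[a_pp,   a_pv_x,  a_pv_y],
                     [a_vp_x, a_vv_xx, a_vv_xy],
                     [a_vp_y, a_vv_yx, a_vv_yy]])
\end{verbatim}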
The retrieved parameters should satisfy the constraints imposed by reciprocity: $\alpha_x^{pv\prime}=-\alpha_x^{vp\prime}$, $\alpha_y^{pv\prime}=-\alpha_y^{vp\prime}$ and $\alpha_{xy}^{vv\prime}=\alpha_{yx}^{vv\prime}$. These relations are in fact automatically satisfied by the retrieval method of Eq.\ \eqref{mod2d} by virtue of reciprocity identities for the scattered pressure. For instance, reciprocity under the interchange of source and receiver yields
$ {^+_xp_{sx}^+} = {^-_xp_{sx}^-}$, which combined with Eq.\ \eqref{mod2d} implies that $\alpha_x^{pv\prime}=-\alpha_x^{vp\prime}$. The identity $\alpha_y^{pv\prime}=-\alpha_y^{vp\prime}$ follows in the same way from
the relation $ {^+_yp_{sy}^+} = {^-_yp_{sy}^-}$. Finally, $\alpha_{xy}^{vv\prime}=\alpha_{yx}^{vv\prime}$ is a consequence of four reciprocal relations for the scattered pressure:
${^+_yp_{sx}^-}={^+_xp_{sy}^-}$, ${^-_yp_{sx}^+} ={^-_xp_{sy}^+}$,
${^+_yp_{sx}^+} = {^-_xp_{sy}^-} $ and $ {^-_yp_{sx}^-} = {^+_xp_{sy}^+}$.
\subsection{1D waveguide}
The retrieval procedure for Willis scatterers in a 1D waveguide is simpler than the previous 2D case since there are only four polarizability components to be found. Consider a wave incident along the $\pm x$-direction (axial direction) such that the incident pressure and velocity at the scatterer location are $1$ and $\pm 1/\rho c$, respectively. Hence, taking $A_1=\pm A_2=1$, the scattered fields for these two incidences are, with obvious notation,
\begin{equation}\label{scat1dp}
^\pm p_s(x) =\frac{i\omega c}{2S}\Big [ \alpha^{pp\prime}\pm \alpha^{pv\prime}+
\big( \alpha^{vp\prime} \pm \alpha^{vv\prime} \big) \sgn x \Big ]e^{ik|x|}.
\end{equation}
Following the 2D procedure, the polarizabilities can be expressed in terms of the forward and backward scattered pressure as
\begin{equation}\label{retralphap}
\begin{pmatrix}
\alpha^{pp\prime} \\
\alpha^{pv\prime} \\
\alpha^{vp\prime} \\
\alpha^{vv\prime}
\end{pmatrix}
= \frac{Se^{-ik|x|}}{2i\omega c }
\begin{pmatrix}
1 & 1 & 1 & 1
\\
1 & 1 & -1 & -1
\\
1 & -1 & 1 & -1
\\
1 & -1 & -1 & 1
\end{pmatrix}
\begin{pmatrix}
{^+p_{s}^+} \\
{^+p_{s}^-} \\
{^-p_{s}^+} \\
{^-p_{s}^-}
\end{pmatrix} .
\end{equation}
Hence, in order to retrieve the polarizabilities we only need to simulate plane wave incidence from the two opposite directions and probe on both sides of the scatterer.
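Equation \eqref{retralphap} can likewise be transcribed directly; a minimal Python sketch (the argument names are ours) is:
\begin{verbatim}
import numpy as np

def retrieve_alpha_prime_1d(p_pp, p_pm, p_mp, p_mm, omega, c, S, k, x):
    """Modified 1D polarizabilities from Eq. (retralphap).
    p_pp: scattered pressure, +x incidence, probed on the +x (forward) side;
    p_pm: +x incidence, probed on the -x side; p_mp, p_mm: likewise for -x
    incidence. omega, c: angular frequency and sound speed; S: waveguide
    cross-sectional area; k = omega / c; x: probe distance |x|."""
    pref = S * np.exp(-1j * k * abs(x)) / (2j * omega * c)
    M = np.array([[1, 1, 1, 1],
                  [1, 1, -1, -1],
                  [1, -1, 1, -1],
                  [1, -1, -1, 1]])
    a_pp, a_pv, a_vp, a_vv = pref * (M @ np.array([p_pp, p_pm, p_mp, p_mm]))
    return a_pp, a_pv, a_vp, a_vv
\end{verbatim}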
The retrieved parameters should satisfy the reciprocity relation $\alpha^{pv\prime}=-\alpha^{vp\prime}$. It is clear from Eq.\ \eqref{retralphap} that this is equivalent to the identity $^+p_{s}^+ = ^-p_{s}^-$, which is guaranteed by invariance under the interchange of source and receiver. This implies that the transmission coefficient $T$ is independent of the direction of incidence, where $T$ is defined such that $^+p_{s}^+ = ^-p_{s}^- = (T-1)e^{ik|x|}$. The related
reflection coefficients $R_{\pm}$ are defined by $^+p_{s}^- =R_+e^{ik|x|}$ and $^-p_{s}^+ =R_-e^{ik|x|}$.
Using $\alpha^{pv\prime}=-\alpha^{vp\prime}$, we have
\begin{equation}\label{tr}
\begin{aligned}
T &= 1+\frac{i\omega c}{2S}\Big (\alpha^{pp \prime}+\alpha^{vv\prime}\Big ),\\
R_{\pm} &= \frac{i\omega c}{2S}\Big (\alpha^{pp\prime} \pm 2\alpha^{pv\prime}-\alpha^{vv\prime}\Big ) .
\end{aligned}
\end{equation}
This form is similar to the transmission and reflection coefficients for two-dimensional bianisotropic materials under normal incidence in electromagnetics.\cite{Yazdi2015}
Clearly, as an alternative to Eq.\ \eqref{retralphap} one may write the polarizability tensor in terms of the transmission and reflection coefficients,
\begin{equation}\label{2=1}
\begin{pmatrix}
\alpha^{pp\prime} \\
\alpha^{vv\prime} \\
\alpha^{pv\prime}
\end{pmatrix}
= \frac{S}{2i\omega c }
\begin{pmatrix}
2 & 1 & 1
\\
2 & -1 & -1
\\
0 & 1 & -1
\end{pmatrix}
\begin{pmatrix}
T-1 \\
R_+ \\
R_-
\end{pmatrix}
\end{equation}
with $ \alpha^{vp\prime} = - \alpha^{pv\prime}$.
The asymmetric reflection for waves incident from opposite directions is a characteristic property of Willis elements, which is not observed for similarly sub-wavelength monopole or dipole scatterers.
The difference in the reflection coefficients is induced by the pressure-velocity cross-coupling term $\alpha^{vp\prime}$ $( = - \alpha^{pv\prime})$. More specifically, the pressure excited dipole and the velocity excited monopole interfere destructively in the forward direction, but constructively for the backward scattered wave. In the absence of material loss, the two reflected waves have the same magnitude but different phases. For lossy Willis scatterers, the magnitudes are unequal because the momentum exchange processes and hence the absorption depend on the direction of incidence. This feature can potentially be used to design unidirectional perfect absorbers or unidirectional reflectionless materials.
\section{Numerical examples}\label{examp1}
Two examples are presented to demonstrate how our retrieval method works in 2D free field and 1D waveguide situations. The common procedure is to simulate plane wave excitation and probe the scattered pressure according to the algorithms presented in Sec.~\ref{retr}, then use Eq.~\eqref{mod2d} or Eq.~\eqref{retralphap} to calculate the components of the modified polarizability tensor.
\subsection{2D free field}
We consider a Willis element with a radius on the order of $\lambda/10$, where $\lambda$ is the wavelength in the background medium, see Fig.~\ref{ret2dexa}(a). The wall of the scatterer is acoustically rigid. The scatterer consists of two separate cavities with openings in separate directions and of different volume so that the scatterer does not display any symmetry. In this way, the cross-coupling induced scattering is non-zero. Our objective here is not to maximize the cross-coupling, but rather to demonstrate that the retrieval method works when the off-diagonal terms are on the same order of magnitude as the diagonal terms.
\begin{figure*}[ht]
\includegraphics[width=0.95\textwidth]{fig1.eps}
\caption{Retrieved polarizability components for the 2D Willis scatterer. The nine polarizabilities are shown in (a) and (d); the cross-coupling terms are shown in (b), (c), (e) and (f). The curves in (a), (b) and (c) are obtained using the present retrieval method; the curves in (d), (e) and (f) are calculated using the method of \citet{Quan2018}.} \label{ret2dexa}
\end{figure*}
The background medium is air with bulk modulus $B=1.42\times 10^5$ Pa and mass density $\rho=1.225$ kg/m$^3$. The acoustically rigid and small Willis scatterer was placed at the origin of the Cartesian coordinate system. Four plane wave excitations along the $\pm x$- and $\pm y$-directions were simulated. Then four probes of the scattered pressure were taken at a fixed distance $r$ from the origin along the $\pm x$- and $\pm y$-directions for each excitation, thus providing the sixteen scattered pressure data needed for the parameter retrieval.
Plugging the probed pressure into Eq.~\eqref{mod2d} yields the nine components of the modified polarizability tensor. The full wave FEM simulations were performed using COMSOL Multiphysics.
Figure~\ref{ret2dexa} shows the frequency dependence of the polarizability components from $ka=0.1$ to $ka=1$, where $a$ is the scatterer radius. The results in panels (a), (b) and (c) are calculated using the retrieval method developed in this paper; the curves in panels (d), (e) and (f) are obtained using the method presented by \citet{Quan2018}, which requires an infinite set of data as compared with the small data set used here. It is clear that the results obtained by these two methods match to a remarkable degree. Figures \ref{ret2dexa}(a) and \ref{ret2dexa}(d) show all nine components of the polarizability tensor. It can be seen that the $\alpha^{pp\prime}$ component responsible for the pressure excited monopole is on the same order of magnitude as the cross-coupling terms $\alpha_x^{pv\prime}$, $\alpha_x^{vp\prime}$, $\alpha_y^{pv\prime}$ and $\alpha_y^{vp\prime}$.
The off-diagonal terms satisfy the constraints imposed by reciprocity, i.e. $\alpha_x^{pv\prime}=-\alpha_x^{vp\prime}$, $\alpha_y^{pv\prime}=-\alpha_y^{vp\prime}$ and $\alpha_{xy}^{vv\prime}=\alpha_{yx}^{vv\prime}$, indicating that the numerical simulation is physically consistent. The plots in this section only show the components of the modified polarizability tensor ${\bm \alpha}^\prime$; one may also calculate ${\bm \alpha}$ using Eq.~\eqref{modalpha2d}. The retrieval procedure can be used to analyze more complicated scatterers including loss effects.
The T-matrix formalism has a special implication for the S-matrix, ${\bm S} = {\bm I}+2{\bm T}$: its eigenvalue magnitudes must be less than or equal to unity due to energy conservation. In the absence of absorption, the retrieved polarizability components lead to three eigenvalues of unit magnitude for the S-matrix, as shown in Fig.~\ref{2deig}. It is clear that the magnitudes are close to unity in the low frequency range, and start to decrease when $ka\sim 1$ since higher order multipoles come into play.
\begin{figure}[ht]
\centering
\includegraphics[width=.95\columnwidth]{fig2.eps}
\caption{Eigenvalues of the S-matrix. The solid lines represent the absolute values of the eigenvalues; the dashed lines are the corresponding phases.} \label{2deig}
\end{figure}
\subsection{1D waveguide}
We consider an acoustically small Willis scatterer centered in a circular rigid waveguide. The radius of the waveguide is much larger than the radius of the scatterer, and only the fundamental mode is allowed to propagate within the frequency range of interest. The scatterer has rotational symmetry about the waveguide axis, with cross-sectional view shown in Fig.~\ref{1dexamp}. As we can see, the Willis element is simply a spherical Helmholtz resonator which is usually considered as a monopole scatterer. However, the scattered field from such a resonator, even though deeply sub-wavelength, does depend on the direction of incidence.
This type of directional scattering is more evident if the resonator is asymmetrically positioned in a waveguide, as in Fig.~\ref{1dexamp}. As we will see, the directional scattering dependence can be attributed to the cross-coupling between the monopole and dipole modes.
\begin{figure}[ht]
\centering
\includegraphics[width=.95\columnwidth]{fig3.eps}
\caption{Retrieved polarizability components for the 1D Willis scatterer in a circular waveguide (side view; the size of the scatterer is exaggerated).} \label{1dexamp}
\end{figure}
Full wave FEM simulations (2D axisymmetric) were performed to retrieve the polarizability tensor. The background material properties are the same as in the previous example, and the scatterer and the waveguide are both acoustically rigid. In the 1D case, we only need to simulate plane wave incidence from each side of the scatterer. Then we probe the scattered pressure on both sides of the scatterer. Plugging the measured pressures into Eq.~\eqref{retralphap} yields the four modified polarizability components as shown in Fig.~\ref{1dexamp}. Here the off-diagonal terms also satisfy the reciprocity constraint $\alpha^{pv\prime}=-\alpha^{vp\prime}$. It is obvious that the cross-coupling terms are on the same order of magnitude as the pressure excited monopole and velocity excited dipole. Due to the directional dependence of the cross-coupling terms, the forward scattered fields are identical for the two incidences but the backward scattered fields are different. In the absence of material loss, the cross-coupling only contributes to different phase changes in the reflected waves. As shown in Fig.~\ref{1dexamp2}, the transmitted phases are the same for the two incident directions but the reflected phases are evidently different.
\begin{figure}[ht]
\centering
\includegraphics[width=.95\columnwidth]{fig4.eps}
\caption{Phases of the transmitted and reflected waves for the two incident directions calculated using Eq.~\eqref{tr}.} \label{1dexamp2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.95\columnwidth]{fig5.eps}
\caption{Eigenvalues of the S-matrix. The black and dashed red lines correspond to the absolute value of the eigenvalues; the blue and dashed green lines represent the phases of the eigenvalues.} \label{1dexamp3}
\end{figure}
As mentioned earlier, the monopole and dipole moments are extremely dominant in the waveguide at low frequencies. Hence the retrieved polarizabilities should lead to two eigenvalues of unit magnitude for the S-matrix ${\bm S} = {\bm I}+2{\bm T}$, which is verified in Fig.~\ref{1dexamp3}. This indicates that most of the energy is contained in the monopole and dipole scattering, and that higher order multipoles are negligible in such a system. Finally, it is worth mentioning that the method developed here also works for lossy scatterers, which display more interesting phenomena such as asymmetric absorption.~\cite{Yazdi2015}
\section{Conclusion}\label{Conc}
We have presented a simple retrieval method for extracting the polarizability tensors for 2D and 1D Willis elements using a finite set of scattering amplitudes. Two examples have been presented to show the implementation procedure. The retrieval method is based on the assumption that only the monopole and dipole moments contribute to the far field, which reduces the T-matrix to a closed $(d+1)\times(d+1)$ matrix, where $d$ is the dimension. It can be seen from the 2D example that the retrieved parameters agree well with those obtained using full monopole and dipole integration. In addition, the eigenvalues of the S-matrix in both examples are close to unity in magnitude, satisfying the energy conservation requirement. Therefore, our method can be used to evaluate acoustically small Willis scatterers effectively. Although the retrieval method for 3D scattering is not presented, the derivation is straightforward following the procedure for the 2D case. The retrieval method presented in this paper is also suitable for experimental realizations. In conclusion, the retrieval method developed in this paper can be used to design and optimize Willis inclusions for advanced wave-steering and sound absorption applications.
\section*{Acknowledgments}
This work was supported by the Office of Naval Research through MURI Grant No.\ N00014-13-1-0631.
\section*{References}
|
1,116,691,499,855 | arxiv |
\section{Introduction}
Single-image upscaling, known as Single-Image Super-Resolution (SISR), is a classical computer vision problem, used to increase the spatial resolution of digital images. The process aims to recover a fine-detail High-Resolution (HR) image from a single coarse Low-Resolution (LR) image. It is an inherently ill-posed inverse problem, as multiple downsampled HR images could correspond to a single LR image. Reciprocally, upsampling an LR image with a 2$\times$ scale factor requires mapping one input value into four values in the high-resolution image, which is usually intractable. It has a wide range of applications, such as digital display technologies, remote sensing, medical imaging, and security and data content processing. Classical upscaling methods, based on interpolation operators, have been used for this problem for decades, and it remains an active topic of research in image processing. Despite their achieved progress, the upscaled images are still lacking fine details in texture-rich areas. Recently, the development of deep learning methods has witnessed remarkable progress, achieving improved performance on various benchmarks \cite{dong2015image, kim2016accurate} both quantitatively and qualitatively, with enhanced details in texture-rich areas.
The computer vision community has paid more attention to the development of visible sensor images, and over the last decade, non-visible light sensors such as infrared, acoustic, and depth imaging sensors were only used in very specific applications.~Despite the potential benefits of the non-visible spectrum, the images produced by those sensors have received little consideration due to their low spatial resolution, the high cost incurred as their price increases dramatically with the increase of their resolution \cite{almasri2018multimodal}, and the lack of publicly available datasets. At this time, there is no acoustic imaging dataset designed for the SR problem. With the need to develop a vision-based system that benefits from the non-visible spectrum, acoustic sensors have recently received much attention as they allow visualization of the intensity of sound waves. The sound intensity can be graphically represented as an acoustic heat map in order to facilitate the identification and localization of sound~sources.
In contrast to visible or infrared cameras, there is not a single sensor for acoustic imaging, but rather an array of sensors. As a result, the image resolution in acoustic cameras is directly related to their computational demand, requiring hardware accelerators such as Graphical Processor Units (GPUs)~\mbox{\cite{frechette2020low}} or Field-Programmable Gate Arrays (FPGAs)~\mbox{\cite{dasilva2018multimode,Vandendriessche2021M3AC}}. Consequently, available acoustic cameras offer a relatively low resolution, ranging from $40 \times 40$ to $320 \times 240$ pixels per image, at a relatively low frames-per-second rate~\mbox{\cite{zimmermann2010FPGA, izquierdo2016design}}. Acoustic imaging presents a high computational cost and is therefore often prohibitive for embedded systems without hardware accelerators.~Moreover, it also suffers from a subsampling error in the phase delay estimation.~As a result, there is a significant degradation in the quality of the output acoustic image~\mbox{\cite{grondin2019svd}}, which manifests in artifacts that directly affect the sound source localization~\mbox{\cite{zotkin2004accelerated}}. Due to the limitation in acoustic data acquisition, methods to enhance the precision of a measurement with respect to spatial resolution and to reduce artifacts become more important.
Learning-based SISR methods rely on high- and low-resolution image pairs generated artificially. The conventional approach is to downsample the images using a bicubic interpolation operator and to add both noise and blur to generate the corresponding low-resolution images. However, such image pairs do not exist in real-world problems, and the artificially generated images have a considerable number of artifacts caused by smoothing, the removal of sensor noise, and the alteration of other natural image characteristics. This poses a problem, as models trained on these images cannot be generalized to another unknown source of image degradation or to natural image characteristics. There are a few contributions where the image pairs were obtained from different cameras in visible imaging, but none in acoustic imaging.
Based on these facts, the main contributions of the proposed work are threefold:
\begin{itemize}
\item {A novel backprojection model architecture was proposed to improve the resolution of the acoustic images. The proposed XCycles BackProjection model (XCBP), in contrast to the feedforward model approach, fully uses the iterative correction procedure. It takes low- and high-resolution encoded features together to extract the necessary residual error correction for the encoded features in each cycle.}
\item {The acoustic map imaging {dataset} (\url{https://doi.org/10.5281/zenodo.4543786}),
provides simulated and real captured images with multiple scale factors ($\times$2, $\times$4, $\times$8). Although these images share similar technical characteristics, they lack the artificial artifacts caused by the conventional downsampling strategy. Instead, they have more natural artifacts simulating the real-world problem. The dataset consists of low- and high-resolution images with double-precision fractional delays and sub-sampling phase delay error.~To the best of the authors' knowledge, this is the first work to provide such a large dataset with its specific characteristics for the SISR problem and the sub-sampling phase delay error problem;}
\item {The proposed benchmark and the developed method outperformed the classical interpolation operators and the recent feedforward state-of-the-art models and drastically reduced the sub-sampling phase delay estimation error.}
\item {\textls[-5]{The proposed model contributed to the Thermal Image Super-Resolution Challenge---}PBVS 2021 \cite{rivadeneira2021thermal} and won first place with superior performance in the second evaluation, where the LR and HR images are captured with different cameras.}
\end{itemize}
\section{Related Work}
Plentiful methods have been proposed for image SR in the computer vision community. The original work introducing Deep Learning based methods for the SR problem by Dong et al.~\cite{dong2015image} opened new horizons in this problem domain. Their proposed model SRCNN achieved superior performance against all previous works. The architecture formulation of the SRCNN aims to learn a hierarchical sequence of encoded features and upsample them to reconstruct the final SR image, where the entire learning process proceeds in an end-to-end feedforward manner. In this work, we focus on works related to convolutional neural network (CNN) architecture formulations proposed for the residual error correction approach, in contrast to the feedforward manner.
The SRCNN aims to learn an end-to-end mapping function between the bicubic-upsampled image and its corresponding high-resolution image, where the last reconstruction layer serves as an encoder from the feature space to the super-resolved image space. To speed up the procedure and to reduce the problem complexity, which is proportional to the input image size, Dong et al. \cite{dong2016accelerating} and Shi et al. \cite{shi2016real} proposed faster and learnable upsampling models. Instead of using an interpolated image as an input, they speed up the procedure by reducing the complexity of the encoded features while training upsampling modules at the very end of the network.
The HR image can be seen as a combination of low-frequency (LF) features (coarse image) and high-frequency (HF) features (residual fine detail image). The results of the classical interpolation operators and the previous deep learning based models have high peak signal-to-noise ratios (PSNR), but they are lacking HF features \cite{ledig2017photo}. As the super-resolved image contains the LF features, Kim et al. \cite{kim2016accurate} proposed the VDSR model that predicts the residual HF features and adds them to the coarse super-resolved image. Their proposed model showed superior performance compared to the previous approaches. Introducing residual networks and skip-connections with residual feature correction exhibits improved performance in the SR problem \cite{kim2016accurate, kim2016deeply, ledig2017photo, lim2017enhanced, zhang2018image}, and also allows deeper end-to-end feedforward networks.
In contrast to this simple feedforward approach, Irani et al. \cite{irani1991improving} proposed a model that reconstructs the back projected error in the LR space and adds the residual error to the super-resolved image. Influenced by this work, Haris et al. \cite{haris2018deep} proposed an iterative error correction feedback mechanism to guide the reconstruction of the final SR output. Their approach is a model of iterative up- and downsampling blocks that takes all previously upscaled features and fuses them to reconstruct the SR image. Inspired by \cite{irani1991improving, haris2018deep}, the VCycles Backprojection Upscaling Network (VCBP)~\cite{rivadeneira2020thermal} was first introduced in the Perception Beyond the Visible Spectrum (PBVS2020) challenge. The model is designed in iterative modules with shared parameters to produce a light SR model dedicated to thermal applications, which limits its performance. In VCBP, the iterative error correction happens in the low-resolution space, and in each iteration, the reconstructed residual features are added to the HR encoded feature space. In VCBPv2 \cite{wei2020aim} the parameters are not shared within the modules, and the iterative error correction happens in both the low- and high-resolution spaces. It follows the design of Haris et al. \cite{haris2018deep} of using iterative up- and downsampling layers to process the features in the Inner Loop. Although this technique increases the efficiency of the model, it restricts the network depth, limiting the extraction of important residual features from the encoded feature space.
An iterative error correction mechanism in the feature space is very important for the reconstruction of HF features in the SR problem. If the model pays more attention to this correction procedure in the upsampled HR and the LR feature spaces, it might be possible to obtain improvements in the residual HF detail of the super-resolved image. This paper proposes a fully iterative backprojection error mechanism network that reconstructs the residual error correction for the encoded features in both the low- and high-resolution spaces.
\section{Acoustic Beamforming}
\label{sec:acousticBeamforming}
Acoustic cameras acquire their input signal from arrays of microphones placed in well-defined patterns for high spatial resolution.
Microphone arrays come in different shapes and sizes, offering different possibilities for locating a neighbouring sound source.
The process to locate a sound source with a microphone array is referred as the beamforming method.
Beamforming methods comprise several families of algorithms, including the Delay-and-Sum (DaS) beamformers \cite{tashev2005new,soundcompass,Taghizadeh}, the Generalized Sidelobe Cancellation (GSC) beamformers \cite{herbordt2001computationally,lepauloux2010computationally,rombouts2008generalized}, beamformers based on the MUltiple SIgnal Classification (MUSIC) algorithm \cite{gao2018modified,birnie2019sound} and beamformers based on the Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) algorithm \cite{jo2018direction,chen2018direction}.
\subsection{Delay and Sum Beamforming}
\label{sec:DaS}
The well known DaS is the most popular beamformer and is the one selected to generate the acoustic images in this work.
The beamformer steers the microphone array in a particular direction by delaying samples from the different microphones.
The acoustic signals originating from the steering direction are amplified, while acoustic signals coming from other directions are suppressed.
The principle of DaS beamforming is illustrated in Figure~\mbox{\ref{fig:delay_and_sum}} and can be expressed as:
\begin{equation}
o(\vec{u},t)=\sum\limits_{m=0}^{M-1}s_m(t-\Delta_m(\vec{u}))
\label{eq:steering_all_directions}
\end{equation}
Here, $o(\vec{u},t)$ is the output of the beamformer for a microphone array of $M$ microphones and $s_m(t-\Delta_m)$ is the sample from microphone $m$ delayed by a time $\Delta_m$.
The time delay $\Delta_m(\vec{u})$ in a given steering direction is obtained by computing the dot product between the vector $\vec{r}_m$, describing the location of microphone $m$ in the array, and the unitary steering vector $\vec{u}$. The delay factor is normalized by the speed of sound ($c$) in air.
\begin{equation}
\Delta_m(\vec{u})=\frac{\vec{r}_m\cdot \vec{u}}{c}
\label{eq:delay_m}
\end{equation}
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\textwidth]{images/delay_and_sum_Combined}
\caption{Principle of acoustic beamforming based on the Delay and Sum method.}
\label{fig:delay_and_sum}
\end{figure}
The principle described above is valid in the continuous time domain. For sampled signals, the delay is therefore scaled by the sampling frequency $F_s$ so that a sample index $\Delta'_m(\vec{u})$ is obtained:
\begin{equation}
\Delta'_m(\vec{u})=F_s\cdot\frac{\vec{r}_m\cdot \vec{u}}{c}
\label{eq:discrete_delta_m}
\end{equation}
This $\Delta'_m(\vec{u})$ value can be rounded to the nearest integer $\Delta'_{m,round}(\vec{u})$ to facilitate array indexing:
\begin{equation}
\Delta'_{m,round}(\vec{u})=round\left(F_s\cdot\frac{\vec{r}_m\cdot \vec{u}}{c}\right)
\label{eq:discrete_delta_m_rounded}
\end{equation}
For sufficiently high sampling frequencies $F_s$ at the DaS stage, $\Delta'_{m,round}(\vec{u})$ offers a fine-grained indexing approaching $\Delta'_m(\vec{u})$, so that:
\begin{equation}
\Delta'_{m,round}(\vec{u})\approx\Delta'_m(\vec{u})
\label{eq:discrete_delay_m_high_fs}
\end{equation}
Based on these equations, the output value $o[\vec{u},i]_{rounded}$ of the DaS method in the time domain, with current reference sample index $i$, is given by:
\begin{equation}
o[\vec{u},i]_{rounded}=\sum\limits_{m=0}^{M-1}s_m[i-\Delta'_{m,round}(\vec{u})].
\label{eq:delaysumoutput}
\end{equation}
Equation \ref{eq:delaysumoutput} can be transformed into the z-domain by applying the z-domain delay identity so that:
\begin{equation}
O(\vec{u},S,z)=\sum\limits_{m=0}^{M-1}S_m(z)\cdot z^{-\Delta'_{m,round}(\vec{u})}.
\label{eq:z_steering_all_directions}
\end{equation}
From Equation \ref{eq:z_steering_all_directions} one can compute the average Steered Response Power (SRP) $P(\vec{u},S,z)$ over $L$ samples for each of the steering vectors with:
\begin{equation}
P(\vec{u},S,z) = \frac{1}{L}\sum\limits_{k=0}^{L-1}\left|O(\vec{u},S,z)[k]\right|^2.
\label{eq:z_power}
\end{equation}
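As an illustration, the following sketch combines Equations \ref{eq:discrete_delta_m_rounded}, \ref{eq:delaysumoutput} and \ref{eq:z_power} for a single steering direction; the power is evaluated directly in the time domain, the speed of sound is an assumed constant and the signal buffers are assumed to be long enough for the shifted indexing.
\begin{verbatim}
import numpy as np

C = 343.0  # assumed speed of sound in air [m/s]

def das_srp(signals, mic_positions, steering_vector, fs, L):
    """Delay-and-Sum with rounded integer delays and the Steered Response
    Power averaged over L samples, for one steering vector.

    signals         : (M, N) array, one row of samples per microphone
    mic_positions   : (M, 3) array, microphone coordinates r_m in metres
    steering_vector : (3,) unit vector u
    """
    M, N = signals.shape
    # Delta'_{m,round}(u) = round(Fs * (r_m . u) / c)
    delays = np.rint(fs * (mic_positions @ steering_vector) / C).astype(int)
    # Choose a reference index so that every delayed index stays in the buffer.
    start = max(0, delays.max())
    out = np.zeros(L)
    for m in range(M):
        out += signals[m, start - delays[m]:start - delays[m] + L]
    # Steered Response Power over L samples.
    return np.mean(out ** 2)
\end{verbatim}
Computing this value for every steering vector of the chosen grid produces the SRP matrix from which the acoustic heatmap is built.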
By computing the SRP values for each of the steering vectors, a matrix of SRP values can be generated. When all steering vectors have the same elevation, the matrix will be one dimensional and a polar plot can be used for visualisation and finding the origin of the sound source. On the other hand, when the steering vectors have a changing elevation, the matrix will be two dimensional. When this matrix is normalised and, optionally, a colormap is applied to it, the acoustic heatmap is obtained.
Examples of the SRP are depicted in Figure \ref{fig:steering_examples}. In (a), a regular two-dimensional beamforming is computed, steered to an angle of $180^{\circ}$ towards an acoustic source of 4\,kHz. The same principle can be applied to obtain an acoustic image (c,d) when the steering vectors are distributed in 3D space (changing elevation). In (b), the frequency response of a given microphone array is shown. In this case, a sound source is located at an angle of $180^{\circ}$ relative to the microphone array. The frequency response allows the user to identify whether a given microphone array is likely to detect a given set of acoustic frequencies well.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{images/ac_image_tikz}
\caption{Examples of an SRP obtained with the microphone array described in Section~\ref{sec:dataset}. A traditional 2-dimensional SRP of a single frequency pointing to an angle of $180^{\circ}$ can be obtained (a). By combining multiple frequencies into one waterfall diagram, one can visualize the frequency response of a given microphone array (b). An acoustic heatmap of a 3D situation (c and d) can also be obtained where the yellow color depicts the highest probability of finding a sound source. The first heatmap (c) is obtained with fractional delays in double precision format, whereas the last heatmap (d) is obtained without fractional delays.}
\label{fig:steering_examples}
\end{figure}
\subsection{Fractional Delays}
The DaS beamforming technique properly delays the input audio samples to generate constructive interference for a particular steered direction.
These time delays are obtained based on Eq.~\mbox{\ref{eq:discrete_delta_m}} and rounded following Eq.~\mbox{\ref{eq:discrete_delta_m_rounded}}.
The DaS beamforming technique based on these rounded indices provides accurate results when sufficiently high sampling frequencies $F_s$ (i.e. typically beyond 1~MHz) at the DaS stage are chosen. This method, however, may suffer from output degradation in the opposite case~\cite{maskell1999estimation,pedamallu2012microphone}.
Due to the sampling frequency, the demodulation of the Pulse Density Modulation (PDM) signals and the filtering of the PDM Micro Electro-Mechanical Systems (MEMS) microphones, there exists an error in the estimation of the time delays corresponding to the phase delays of the microphone signals when applying DaS beamforming.
Variations of this phenomenon have also been observed in early modem synchronization, speech coding, musical instrument modelling and the realignment of multiple telecommunication signals~\cite{laakso1996splitting}.
This kind of degradation is shown in Figure \ref{fig:beam_frac_delays_2khz}, where the input sampling frequency is set to 3125\,kHz, but the sampling frequency at the DaS stage is limited to 130.2\,kHz. A sound source is placed at an angle of $180^{\circ}$ from the UMAP microphone array. In the case of the DaS method with index rounding (Fractional disabled, red curve), the microphone array allows the user to find the sound source in an angular area of approximately $30^{\circ}$. However, the staircase-like response suggests two different closely located sound sources, since the graph describes a valley at the supposed $180^{\circ}$ steering orientation. To alleviate this phenomenon, fractional delays can be used to minimize the effects of rounded integer delays; they are generally used at the FIR filtering stage~\cite{laakso1996splitting}. Fractional delays can be used in both the time and frequency domains. The method has the advantage of being more flexible in the frequency domain, at the added cost of demanding more intense computations. Several time-domain-based implementations also exist and are generally based on sample interpolation.
Equations \ref{eq:discrete_delta_m} and \ref{eq:discrete_delta_m_rounded} are rewritten to obtain the floor and the ceiling of the delaying index:
\begin{equation}
\left\{
\begin{array}{ll}
\Delta'_{m,floor}(\vec{u})&=\left\lfloor F_s\cdot\frac{\vec{r}_m\cdot \vec{u}}{c}\right\rfloor\\[5pt]
\Delta'_{m,ceil}(\vec{u})&=\left\lceil F_s\cdot\frac{\vec{r}_m\cdot \vec{u}}{c}\right\rceil
\end{array}
\right.
\label{eq:discrete_delta_m_floor}
\end{equation}
Based on the floor and the ceiling of the $\Delta'_m$ index, a linear interpolation can be applied at the DaS stage:
\begin{equation}
o[\vec{u},i]_{interp}=\sum\limits_{m=0}^{M-1}\frac{\left(\alpha\cdot s_m[i-\Delta'_{m,floor}(\vec{u})] + (1-\alpha)\cdot s_m[i-\Delta'_{m,ceil}(\vec{u})]\right)}{2}
\label{eq:sample_double_interpolation}
\end{equation}
With the double precision weighting coefficient $\alpha(\vec{u})$:
\begin{equation}
\begin{array}{lr}
\alpha(\vec{u})=\Delta'_m(\vec{u})-\Delta'_{m,floor}(\vec{u}), & where\;\alpha\in\mathbb{R}\;and\;[0,1[
\end{array}
\label{eq:alpha_value}
\end{equation}
Double precision computations demand considerable computational power, which is unavailable on constrained devices such as embedded systems, and can lead to intolerable execution times and low frame rates.
Fortunately, double precision delays can be converted into fractional delays $\alpha'(n,\vec{u})$. To do so, the double precision weighting coefficient is scaled with the number of bits $n$ used in the fraction and rounded to the nearest natural number.
\begin{equation}
\begin{array}{lr}
\alpha'(n,\vec{u})=round(\alpha\cdot 2^{n}), & where\;\alpha'(n)\in\mathbb{N}\;and\;[0,2^{n}[
\end{array}
\label{eq:alpha_bitwidth}
\end{equation}
The fractional delays $\alpha'(n)$ range from zero up to $2^{n}-1$. In the case where the rounding function returns $2^{n}$, both $\Delta'_{m,floor}(\vec{u})$ and $\Delta'_{m,ceil}(\vec{u})$ are increased by one index, while $\alpha'(n,\vec{u})$ is reset to zero. This approach prevents index overlap between two rounding areas. The resulting output values $o[\vec{u},i,n]_{interp}$ can be calculated using:
\begin{equation}
o[\vec{u},i,n]_{interp}=\sum\limits_{m=0}^{M-1}\frac{\left(\alpha'(n)\cdot s_m[i-\Delta'_{m,floor}(\vec{u})] + (2^{n}-\alpha'(n))\cdot s_m[i-\Delta'_{m,ceil}(\vec{u})]\right)}{2^{n+1}}
\label{eq:sample_fractional_interpolation}
\end{equation}
In Equation \ref{eq:sample_fractional_interpolation}, the numerator and denominator are both scaled by a factor $2^{n}$ as compared to Equation \ref{eq:sample_double_interpolation}. The main advantage of the latter formulation is that, since the denominator remains a power of 2, a simple bitshift operation can be used instead of a full division mechanism on computationally constrained devices.
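The sketch below illustrates Equations \ref{eq:discrete_delta_m_floor}--\ref{eq:sample_fractional_interpolation} with an $n$-bit fractional weight, following the equations and the overlap-handling rule exactly as stated above; the speed of sound and the buffer alignment are the same illustrative assumptions as before.
\begin{verbatim}
import numpy as np

C = 343.0  # assumed speed of sound in air [m/s]

def fractional_das(signals, mic_positions, steering_vector, fs, L, n_bits):
    # Delay-and-Sum with n-bit fractional delays; the division by 2^(n+1)
    # reduces to a bit shift when working with integer samples.
    exact = fs * (mic_positions @ steering_vector) / C          # Delta'_m(u)
    floor = np.floor(exact).astype(int)
    ceil = np.ceil(exact).astype(int)
    alpha = np.rint((exact - floor) * 2 ** n_bits).astype(int)  # alpha'(n, u)
    # If rounding reaches 2^n, advance both indices and reset alpha' to zero.
    wrap = alpha == 2 ** n_bits
    floor[wrap] += 1
    ceil[wrap] += 1
    alpha[wrap] = 0
    start = max(0, int(ceil.max()))
    out = np.zeros(L)
    for m in range(signals.shape[0]):
        s_f = signals[m, start - floor[m]:start - floor[m] + L]
        s_c = signals[m, start - ceil[m]:start - ceil[m] + L]
        out += (alpha[m] * s_f + (2 ** n_bits - alpha[m]) * s_c) / 2 ** (n_bits + 1)
    return out
\end{verbatim}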
The effects of the fractional delays are demonstrated in Figure \ref{fig:beam_frac_delays_2khz}. A higher value of the bitwidth $n$ allows a more fine-grained DaS computation. For values of $n$ beyond 4, the resulting response is almost equal to the response of the DaS with double precision interpolation.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{images/UMAP_2kHz}
\caption{Effects of fractional DaS while detecting a sound source of 2\,kHz with the UMAP microphone array. The theoretical response is given as a reference.}\label{fig:beam_frac_delays_2khz}
\end{figure}
In addition to a single response result, the effect of fractional delays is also visible in the waterfall diagrams of the microphone array (Figure \ref{fig:waterfall_diagrams}).
When no fractional delays are used, many vertical stripes are visible, indicating truncation errors during beamforming. By using fractional delays, these stripes gradually disappear until 8 bits (``Fractional 8'') are used. Fractional delays with a resolution of 8 bits and beyond result in the same response as double precision fractional delays. Due to the chosen beamforming architecture, the frequency waterfall diagram of the proposed architecture differs from the theoretically obtainable diagram.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=.65]{images/waterfall_maps/waterfall_maps.png}
\end{center}
\caption{Waterfall diagrams of the UMAP microphone array with different settings of fractional delays, ranging from beamforming without fractional delays (``Fractional disabled'') until fractional delay with a resolution of 8 bits. The methods with 8 bits and double precision fractional delays are represented by the ``double'' since they both give the same results. The waterfall diagram of the theoretical beamforming is given as a reference.}
\label{fig:waterfall_diagrams}
\end{figure}
\section{Acoustic Map Imaging Dataset}
\label{sec:dataset}
This section first introduces the characteristics of the target acoustic camera~\cite{Vandendriessche2021M3AC} and the imaging emulator~\cite{segers2019cabe}. The procedure for capturing multiple scale acoustic map images, the analysis of the dataset, and the applied standardization procedure are also described.
\subsection{Acoustic Camera Characteristics}
Acoustic cameras, such as the Multi-Mode Multithread Acoustic Camera (M3-AC) described in~\cite{Vandendriessche2021M3AC}, acquire the acoustic signal using multiple microphones, which convert the acoustic signal into a digital signal. In addition, microphone arrays allow the calculation of the Direction of Arrival (DoA) for a given sound source. The microphone array geometry has a direct impact on the acoustic camera response for DaS beamforming~(Eq.~\ref{eq:steering_all_directions}).
Figure~\ref{fig:UMAP} depicts the microphone array geometry used by the M3-AC. The microphones are distributed in two circles, with 4 microphones located in the inner circle while the remaining 8 microphones are located in the outer circle. The shortest distance between two microphones is 23.20\,mm and the longest distance is 81.28\,mm.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{images/UMAP_Diagram.pdf}
\caption{The microphone array used by the M3-AC consists of 12 digital (PDM) microphones. The microphone array is shown in the left image, while the right diagram represents the microphone layout.}
\label{fig:UMAP}
\end{figure}
The type of microphone, the sampling methods, and the signal processing methods influence the final outcome of the beamforming. For instance, the microphone array of the M3-AC is composed of MEMS microphones with a PDM output. Despite the benefit of using digital MEMS microphones, a PDM demodulation is needed in order to retrieve the acquired audio. Each microphone converts the acoustic signal into a one-bit PDM signal by using a Sigma-Delta modulator \cite{hegde2010seamlessly}. This modulator typically runs between 1 and 3\,MHz and over-samples the signal.
To retrieve the encoded acoustic signal, a set of cascaded filters is applied to simultaneously decimate and recondition the signal into a Pulse Coded Modulation (PCM) format (Figure~\ref{fig:Architecture}). Both the geometry of the microphone array and the signal processing of the acoustic signal have a direct impact on the acoustic camera response.
Evaluation tools such as the Cloud based Acoustic Beamforming Emulator (CABE) enable an early evaluation of array geometries and of the frequency response of acoustic cameras~\cite{segers2019cabe}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{images/Architecture.pdf}
\caption{Several cascaded filters are used to demodulate the 1-bit PDM signal generated by the digital MEMS microphones in order to retrieve the original acoustic information in Pulse Coded Modulation (PCM) format. The audio signals are then beamformed by the DaS beamformer. The SRP values are finally used to generate acoustic heatmaps.}
\label{fig:Architecture}
\end{figure}
\subsection{Generation of acoustic datasets}
Traditional datasets for SR consist of high-resolution images that are downsampled and blurred to generate the low-resolution images, or they use two different cameras to capture two images of the same scene~\cite{Rivadeneira_ThermalImageSR}. When two cameras are used, the frames require some realignment to compensate for the different positions of the cameras.
CABE is used to generate the acoustic images.
CABE can emulate the behavior of the traveling sound, microphones and the stages that are required for generating the acoustic heatmap.
The main advantages of using an emulator over real-life acoustic heatmaps are consistency and space. First, considering consistency, an emulator makes it possible to replicate the exact same acoustic circumstances multiple times for different resolutions and different configurations. One could, for example, generate the same acoustic image with two different microphone arrays or use a different filtering stage.
Second, considering space, an emulator eliminates the requirement of having access to anechoic boxes or an anechoic chamber. This allows one to generate acoustic images with sound sources that are several meters away from the microphone array. In order to achieve the same results in a real-world scenario, a large anechoic chamber is required. If one has access to such a chamber, CABE can still be used: it has the option to use PDM signals from real-life recordings to generate the acoustic heatmaps. Doing so omits everything that comes before the filtering stage and replaces it with the PDM signals from the real-life recording.
In order to have a representative dataset, the same architecture as in~\cite{segers2019cabe} is used. The order of the filters and the decimation factor can be found in Table~\ref{tbl:CABE_Settings}.
To compensate for the traveling time of the wave, each emulation starts after 50~ms and lasts for 50~ms. For all emulations a Field of View (FOV) of 60\degree{} in both directions is used.
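The demodulation chain with the parameters of Table~\ref{tbl:CABE_Settings} can be approximated in software along the following lines; the CIC decimator is written as cascaded moving averages and the FIR cutoff frequency is an illustrative choice, so this sketch approximates rather than reproduces the filters of the emulator.
\begin{verbatim}
import numpy as np
from scipy.signal import firwin, lfilter

def cic_decimate(x, R, N):
    # Order-N CIC decimator written as N cascaded length-R moving averages
    # followed by decimation by R, normalised by R**N.
    kernel = np.ones(R)
    y = x.astype(float)
    for _ in range(N):
        y = np.convolve(y, kernel, mode='full')
    return y[::R] / R ** N

def pdm_to_pcm(pdm_bits, fs=3_125_000, n_cic=4, d_cic=24, n_fir=23, d_fir=4):
    # Sketch of the cascaded demodulation chain (CIC of order 4 with
    # decimation 24, followed by a FIR low-pass and decimation 4).
    x = 2.0 * pdm_bits - 1.0                  # map 1-bit PDM {0,1} to {-1,+1}
    y = cic_decimate(x, d_cic, n_cic)         # 3125 kHz -> about 130.2 kHz
    fs_cic = fs / d_cic
    taps = firwin(n_fir + 1, cutoff=fs_cic / (2 * d_fir), fs=fs_cic)
    y = lfilter(taps, 1.0, y)[::d_fir]        # 130.2 kHz -> about 32.6 kHz
    return y, fs_cic / d_fir
\end{verbatim}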
\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
\textbf{Parameter} & \textbf{Value} \\
\hline
Microphone array & UMAP \\
Beamforming method & Filtering + Delay and Sum \\
Filtering method & 3125khz\_cic24\_fir1\_ds4 \\
\hline
Sampling Frequency ($F_S$) & 3125kHz \\
Order CIC Filter ($N_{CIC}$) & 4 \\
Decimation factor CIC Filter ($D_{CIC}$) & 24 \\
Order FIR Filter ($N_{FIR}$) & 23 \\
Decimation factor FIR Filter ($D_{FIR}$) & 4 \\
\hline
SRP in block mode & yes \\
SRP length & 64 \\
Emulation start time & 50 ms \\
Emulation end time & 100 ms \\
\hline
\end{tabular}
\caption{Configuration used in CABE to generate the acoustic images.}
\label{tbl:CABE_Settings}
\end{table}
\subsection{Dataset Properties}
\textbf{Scale}:
The key application of our proposed acoustic map imaging dataset is to upscale the spatial resolution of a low-resolution image by multiple scale factors (x2, x4, x8). To realise this, 8 different sets of images were generated, each containing 4624 images. Four different resolutions were used: the HR ground truth images of size ($640 \times 480$) and three different scale sets of LR images of size ($320 \times 240$, $160 \times 120$ and $80 \times 60$). For each resolution, one dataset was generated using fractional delays and another without fractional delays, as shown in Figure~\ref{fig:dpvsndp}, for a total of 36992 images. In real-world use, acoustic sensor resolution is very small and suffers from a sub-sampling error in the phase delay estimation. This results in artifacts and poor image quality compared to the simulated images. The benchmark was chosen to simulate these real-world difficulties in order to enhance the super-resolved images. This is also consistent with the proposed set of real captured images.
\begin{figure}[t]
\begin{center}
\resizebox{1.1\textwidth}{!}{\input{images/acoustic_maps/acoustic_maps.pgf}}
\end{center}
\caption{Acoustic map examples of the test set with double precision high-resolution (top) and fractional delays precision high-resolution (bottom).}
\label{fig:dpvsndp}
\end{figure}
\textbf{Real captured images}: Acoustic maps are generated from recordings of the M3-AC acoustic camera\mbox{\cite{Vandendriessche2021M3AC}}.
The sound sources are placed at angles that bear no relation to the source positions used for the acoustic images generated with CABE.
The acoustic heatmaps are generated with the resolutions corresponding to the x2, x4, x8 scale factors without fractional delays and one double precision set representing the corresponding HR ground truth.
Real world captured images could have different natural characteristics.
This poses a problem as models trained on artificial images cannot generalize to another unknown source of image degradation.
The purpose of the real captured images is to evaluate whether the proposed method can generalize over the unknown data.
For this reason, the real captured images are not used during training, instead they are all exclusively used as test data.
\textbf{Acquisition}:
Each image contains two sound sources, positioned at a distance of one meter from the center of the array and mirrored from each other.
The sound sources are placed at angles between 60\degree{} and 90\degree{} in steps of 2\degree{} for a total of 16 positions.
No vertical elevation was used.
The frequencies of the two sound sources are changed independently of each other from 2~kHz to 10~kHz in steps of 500~Hz, across the 8 different sets.
By using two sound sources, some acoustic images suffer from problems with the angular resolution (Rayleigh criterion) where the distance between the two sound sources becomes too small to distinguish one from another.
For instance, when both sound sources are placed at 90\degree{}, they overlap and become one sound source.
\textbf{Normalization}:
Natural images in the SISR problem are stored in the uint8 data type, in agreement with the most recent thermal image benchmarks. Although acoustic and thermal sensors allow the generation of raw data in a float representation, which carries richer information, this could cause problems for the validation consistency between benchmarks. To avoid technical problems with the validation, the proposed benchmark was standardized with the current research line to be compatible with published datasets in other SISR domains.
The registered sound amplitude in acoustic map imaging depends on the sound volume, the chosen precision during the computation, and the number of microphones. The more microphones are used during the computations, the higher the amplitude in the registered map. This can generate highly variable minimum-maximum value ranges. Due to this, it is not possible to preserve the local dynamic range of the images and to normalize them using fixed global values, and it may also cause problems with unknown examples. Any method of contrast or histogram equalization could harm the images and cause loss of important information. Consequently, instance min-max normalization as in Eq. \ref{eq:1} and the uint8 data type were used. We denote by $I$ and $\bar{I}$ the original image and its normalized version, with $\bar{I}_{max} = 1$ and $\bar{I}_{min} = 0$. The images are then converted to grayscale in the range [0 - 255] and saved in PNG format with zero compression.
\begin{equation}
\bar{I} = (I - I_{min}) \frac{\bar{I}_{max} - \bar{I}_{min}}{I_{max} - I_{min}} + \bar{I}_{min}
\label{eq:1}
\end{equation}
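A minimal implementation of this normalization and conversion step is shown below; the use of OpenCV for the zero-compression PNG output is one possible tool choice rather than a requirement of the dataset.
\begin{verbatim}
import numpy as np
import cv2

def normalize_map(acoustic_map):
    # Instance min-max normalization (Eq. 1) followed by conversion to uint8.
    i_min, i_max = acoustic_map.min(), acoustic_map.max()
    norm = (acoustic_map - i_min) / (i_max - i_min)    # values in [0, 1]
    return np.round(norm * 255).astype(np.uint8)       # grayscale [0, 255]

# Example usage on a random SRP matrix standing in for a real acoustic map.
srp = np.random.rand(480, 640)
cv2.imwrite("map.png", normalize_map(srp), [cv2.IMWRITE_PNG_COMPRESSION, 0])
\end{verbatim}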
\textbf{Baseline approach}:
The authors believe that this is the first work that provides such a large dataset of acoustic map imaging pairs captured at four different scales, both in an emulator and in the real world. Peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics are reported for reference, as is standard in SISR problems. A bicubic algorithm is used as a baseline model for the validation comparison of the super-resolved images. To reduce quality degradation caused by sub-sampling errors in the phase delay estimation, a Gaussian kernel with different kernel sizes was applied on top of the bicubic output to reduce the artifacts. Figure~\ref{fig:psnr} shows that a Gaussian kernel of size 8 achieved the best PSNR results for the three resolutions.
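The baseline sweep can be reproduced along the following lines; the reported kernel size is interpreted here as the standard deviation of a Gaussian filter, which is an assumption about the parameterization rather than a statement of the exact procedure used.
\begin{verbatim}
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter

def psnr(ref, est, peak=255.0):
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def baseline_sweep(lr, hr, kernels=range(0, 21)):
    # Bicubic upscaling to the HR size, followed by Gaussian smoothing of
    # varying strength; returns the PSNR obtained for each kernel setting.
    up = cv2.resize(lr, (hr.shape[1], hr.shape[0]), interpolation=cv2.INTER_CUBIC)
    scores = {}
    for k in kernels:
        est = gaussian_filter(up.astype(float), sigma=k) if k > 0 else up
        scores[k] = psnr(hr, est)
    return scores
\end{verbatim}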
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{images/SR/BicbuicGauss3.png}
\end{center}
\caption{PSNR distribution of the upscaled test set images without fractional delays with scales factor of (x2, x4, x8). The upscaling is done with the bicubic interpolation operator. Gaussian kernels ranging from $0$ to $20$ are used on top to smooth the results. The upscaled images with Gaussian kernel of size 8 achieved the best PSNR results for the three resolutions.}
\label{fig:psnr}
\end{figure}
\textbf{Train and Test set}:
Because of the large number of images in the dataset, and to avoid possible overfitting due to shared similarities in the images, 96 samples were drawn for the test set from the images with low PSNR values. The test set sampling procedure operates on the PSNR distribution built from the 8x bicubic upscaled images. The sampling toward low-PSNR images uses a Kernel Density Estimator (KDE) skewed distribution. As a result, the test set is biased toward more complex examples.
\section{XCycles Backprojection Network (XCBP)}
\label{sec:NN}
\subsection{Network Architecture}
The baseline model of the proposed method comprises two main modules: Cycle Features Correction (CFC) and Residual Features Extraction (RFE). As shown in Figure~\ref{fig:model}, the architecture of XCBP is unfolded with $X$ CFCs, and each cycle contains one RFE module. The value of $x$ is an odd number since two consecutive cycles are mandatory for the final results. The model uses only one convolutional layer (Encoder/E) to extract the shallow features $F$ from the low-resolution input image $I_{LR}$ and its pre-upsampled version $ \uparrow I_{LR}$, as shown in Eq. \ref{eq:fex}. The pre-upsampling module can use any method, such as a classical pre-defined upsampling operator, transposed convolution \cite{dong2016accelerating}, sub-pixel convolution \cite{shi2016real} or resize-convolution \cite{dumoulin2016learned}.
\begin{equation}
\begin{aligned}
&F_{LR,0} = E(I_{LR}) \\
&F_{SR,0} = E(\uparrow I_{LR})
\label{eq:fex}
\end{aligned}
\end{equation}
The term $F_{LR,0}$ denotes the encoded features in the low-resolution space, whereas $F_{SR,0}$ denotes the encoded features of the pre-upsampled image in the high-resolution space. The (Decoder/$D$), consisting of only one convolutional layer, uses the final features $F_{SR,X}$ corrected by the CFC in cycle $X$ to reconstruct the super-resolved image $I_{SR}$.
\begin{equation}
\begin{aligned}
&I_{SR} = D(F_{SR,X})
\end{aligned}
\end{equation}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{images/SR/model2.sep1.png}
\end{center}
\caption{XCycles Backprojection Network architecture.}
\label{fig:model}
\end{figure}
The CFC module serves as a feature correction mechanism. It is designed to supply the encoded features of the two parallel feature spaces to its RFE module for further feature extraction, and corrects the encoded features one space at a time. The output of its RFE module is backprojected by addition to one of the two parallel feature spaces. This backprojection serves to correct the previous feature locations in both encoded feature manifolds, in contrast to \cite{rivadeneira2020thermal}. By having the two feature spaces as input, the model uses the $F_{LR,x}$ encoded features and the corresponding super-resolved features $F_{SR,x}$ to find the best correction in each feature space. This correction is very helpful for images captured with different devices, with different geometrical registration, that suffer from a misalignment problem.
The CFC adds its output in alternate cycles. For each cycle it adds the correction either to the low-resolution feature space $F_{LR,x}$ or to the super-resolved feature space $F_{SR,x}$. In the $F_{SR,x}$ case, the output of the RFE passes through the (Upsampler/$U$) before the correction is added, to match the scale of the features, as shown in Eq. \ref{eq:alternatives}.
\begin{equation}
\begin{aligned}
&F_{LR,x} = F_{LR,x-1} + CFC(F_{LR,x-1}, F_{SR,x-1}) \\
&F_{SR,x} = F_{SR,x-1} + U(CFC(F_{LR,x-1}, F_{SR,x-1}))
\label{eq:alternatives}
\end{aligned}
\end{equation}
The (Upsampler/$U$) is a resize-convolution ~\cite{dumoulin2016learned} sub-module consisting of a pre-defined nearest-neighbor interpolation operator of scale factor $x2$, and a convolution layer with a receptive field of size $5x5$ pixels represented by two stacked $3x3$ convolutions.
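A PyTorch-style skeleton of the alternating correction loop of Eqs.~\ref{eq:fex} and \ref{eq:alternatives} is sketched below for a single x2 stage. The order in which the two feature spaces are corrected, the single-channel input and the reduced stand-in for the CFC/RFE block are simplifying assumptions made for readability; the full module is described in the next subsection.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class Upsampler(nn.Module):
    # Resize-convolution: nearest-neighbour x2 followed by two stacked 3x3 convolutions.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                  nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return self.body(F.interpolate(x, scale_factor=2, mode='nearest'))

class SimpleCFC(nn.Module):
    # Reduced stand-in for the CFC/RFE block: fuse the two feature spaces
    # at the LR scale and return a residual correction.
    def __init__(self, ch):
        super().__init__()
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # bring F_SR to the LR scale
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
    def forward(self, f_lr, f_sr):
        return self.fuse(torch.cat([f_lr, self.down(f_sr)], dim=1))

class XCBPSkeleton(nn.Module):
    def __init__(self, ch=128, cycles=8):
        super().__init__()
        self.encoder = nn.Conv2d(1, ch, 3, padding=1)
        self.decoder = nn.Conv2d(ch, 1, 3, padding=1)
        self.up = Upsampler(ch)
        self.cfc = nn.ModuleList([SimpleCFC(ch) for _ in range(cycles)])

    def forward(self, i_lr, i_lr_up):
        f_lr = self.encoder(i_lr)        # F_{LR,0}
        f_sr = self.encoder(i_lr_up)     # F_{SR,0}
        for x, cfc in enumerate(self.cfc):
            corr = cfc(f_lr, f_sr)                  # residual correction at the LR scale
            if x % 2 == 0:
                f_lr = f_lr + corr                  # correct the LR feature space
            else:
                f_sr = f_sr + self.up(corr)         # back-project into the HR space
        return self.decoder(f_sr)                   # I_SR
\end{verbatim}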
\subsection{Residual Features Extraction Module}
The Residual Features Extraction module in each CFC depicted as (RFE) in Figure~\ref{fig:model} is designed to extract features from the two parallel features spaces $F_{LR,x}$ and $F_{SR,x}$. After each cycle correction in one of the features spaces, the encoded features change their characteristics and allocate a new location in the feature space. The RFE module takes both features as input and extracts new residual features for the next feature space correction procedure, based on similarity and non-similarity between previously corrected features spaces.
As depicted in Figure~\ref{fig:module}, the RFE module has two identical sub-modules (internal features encoder/$I$) responsible for extracting deep features from the two parallel spaces. There is one ($I$), consisting of a single convolutional layer, for each of the feature spaces, with a strided convolution in the high-resolution space to adapt to the different resolution scale. The two deep encoded features are then concatenated, and a pointwise convolution layer~\cite{lin2013network} transforms them back to their original channel space size.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{images/SR/model2.sep2.png}
\end{center}
\caption{Residual Features Extraction Module}
\label{fig:module}
\end{figure}
The main core of the RFE module consists of three levels $L$ of double activated convolution layers connected sequentially. The outputs of the three $L$ levels, defined as inner skip connections, are concatenated together, creating dense residual features, before a pointwise convolution layer returns them to their original channel space size. Finally, the output of the merger layer is fed to a channel attention module, inspired by the RCAN model~\cite{zhang2018image}, to weight each residual channel priority before it is added to the outer skip connection of the main merger.
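The description above can be sketched as follows; the exact placement of activations inside the encoders and the `double activated' levels is an interpretation, and the channel attention follows the usual RCAN construction with the reduction ratio stated in the next subsection.
\begin{verbatim}
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # RCAN-style channel attention.
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.PReLU(),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.body(x)

class RFE(nn.Module):
    # Sketch of the Residual Features Extraction module: two internal encoders,
    # a pointwise merger, three double-activated convolution levels with inner
    # skip connections, a channel-reduction merger and channel attention.
    def __init__(self, ch=128):
        super().__init__()
        self.enc_lr = nn.Conv2d(ch, ch, 3, padding=1)
        self.enc_sr = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # strided in the HR space
        self.merge_in = nn.Conv2d(2 * ch, ch, 1)
        self.levels = nn.ModuleList([nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.PReLU()) for _ in range(3)])
        self.merge_out = nn.Conv2d(3 * ch, ch, 1)
        self.ca = ChannelAttention(ch)

    def forward(self, f_lr, f_sr):
        f0 = self.merge_in(torch.cat([self.enc_lr(f_lr), self.enc_sr(f_sr)], dim=1))
        skips, x = [], f0
        for level in self.levels:
            x = level(x)
            skips.append(x)                       # inner skip connections
        res = self.merge_out(torch.cat(skips, dim=1))
        return f0 + self.ca(res)                  # outer skip connection of the main merger
\end{verbatim}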
\subsection{Implementation Details}
The final XCBP proposed model is constructed with X = 8 cycles. All convolution layers are set to 3x3 except for the channel reduction, whose kernel size is 1x1, to transform the concatenated features to their original channel space size. All features are encoded and decoded with 128 feature channels. Convolution layers with kernel size 3x3 use a reflection-padding strategy and the PReLU \mbox{\cite{he2015delving}} activation function where activation is stated. The reduction ratio is set to 16 in the channel attention module.
\section{Experiments}
\label{experiments}
\subsection{Training settings}
The experiments are implemented in Pytorch 1.3.1 and performed on an NVIDIA TITAN XP. Ninety percent of the training images were selected, for a total of 4,067 image pairs. Data augmentation is performed on the training images with random rotation of $90^{\circ{}}$ and horizontal and vertical flips. A single configuration was used for all experiments and all scale factors. The Adam optimizer and the L1 loss were adopted \cite{kingma2014adam}, using default parameter values, zero weight decay, and a learning rate initialized to $10^{-4}$ with step decay of $\gamma = 0.5$ after 500 epochs. The output SR image size for all experiments is $192 \times 192$ with a minibatch size of 8.
After every epoch the validation metric is run on the current model, and the model with the highest PSNR value is recorded for inference. The model of scale x2 is trained first. Subsequently, the model is frozen and the model of scale x4 is added and trained. The same procedure goes for the model of scale x8.
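These settings correspond to a training loop of the following form; the model is the skeleton sketched in the previous section, the data loader is a placeholder, and the total number of epochs is not specified in the text.
\begin{verbatim}
import torch
import torch.nn as nn

model = XCBPSkeleton().cuda()             # skeleton sketched above (placeholder)
criterion = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.5)

num_epochs = 1000                         # not stated in the text; placeholder value
for epoch in range(num_epochs):
    for i_lr, i_lr_up, i_hr in train_loader:   # 192x192 HR crops, batch size 8
        optimizer.zero_grad()
        loss = criterion(model(i_lr.cuda(), i_lr_up.cuda()), i_hr.cuda())
        loss.backward()
        optimizer.step()
    scheduler.step()
    # After each epoch, validation PSNR is computed and the best model is kept.
\end{verbatim}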
\subsection{Results}
Table \ref{tbl:psnr_results} shows a quantitative comparison of the best average score for the $\times$2, $\times$4, and $\times$8 super-resolved images compared to the baseline methods: bicubic (MATLAB bicubic operator is used in all experiments), bicubic with a Gaussian of kernel eight, and deep learning SoTA models. The proposed model outperformed the baselines in all of the experiments with significant results. It is also important to note that the model achieved better results on the real captured images as compared with the baselines. This demonstrated that the proposed method could generalize on an unknown acoustic map imaging distribution.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{methods} & \multicolumn{3}{c|}{simulated} & \multicolumn{3}{c|}{real captured} \\ \cline{2-7}
& scale x2 & scale x4 & scale x8 & scale x2 & scale x4 & scale x8 \\ \hline
bicubic & 38.00/0.9426 & 38.16/0.9548 & 37.81/0.9728 & 37.36/0.9513 & 37.31/0.9615 & 36.93/0.9764 \\ \hline
bicubic-gaussian & 46.34/0.9942 & 45.47/0.9943 & 41.48/0.9935 & 40.99/0.9954 & 40.46/0.9954 & 38.82/0.9946 \\ \hline
SRCNN & 47.00/0.9934 & 46.49/0.9941 & 44.87/0.9938 &
42.24/0.9940 & 42.25/0.9941 & 42.07/0.9943 \\ \hline
VDSR & 50.98/0.9963 & 50.89/0.9963 & 49.98/0.9954 &
44.23/0.9952 & 44.28/0.9950 & 43.47/0.9942 \\ \hline
RCAN & 54.65/0.9978 & 55.19/0.9980 & 54.63/0.9978 &
\textbf{46.82/0.9971} & 46.57/0.9967 & \textbf{48.88/0.9962} \\ \hline
XCBP-AC & \textbf{54.83/0.9977} & \textbf{55.49/0.9979} &
\textbf{55.77/0.9980} & 44.64/0.9970 & \textbf{46.66/0.9970} & 46.58/0.9968 \\ \hline
\end{tabular}
\caption{Average PSNR/SSIM comparison for x2, x4 and x8 scale factors in the test set between our solutions, interpolation operators and SoTA methods. Best numbers are shown in bold.}
\label{tbl:psnr_results}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.92\linewidth]{images/SR/resultsx4.png}
\end{center}
\caption{Visual comparison on a cropped region of x4 SR results between: (HR) ground truth high-resolution image. (B) bicubic upscaled image. (B+G) Bicubic and Gaussian upscaled image. (XCBP) our model upscaled image.}
\label{fig:resultsX4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.92\linewidth]{images/SR/resultsx8.png}
\end{center}
\caption{Visual comparison on a cropped region of x8 SR results between: (HR) ground truth high-resolution image. (B) bicubic upscaled image. (B+G) Bicubic and Gaussian upscaled image. (XCBP) our model upscaled image.}
\label{fig:resultsX8}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{images/SR/image_profile.png}
\end{center}
\caption{Row profile comparison of a real captured x8 SR result. The bicubic operator propagated the sub-sampling error to the SR image, creating false positive peak amplitudes. Bicubic + Gaussian and XCBP remove the sub-sampling artifacts and look very similar to the HR ground-truth image, with the XCBP result being the closest to the HR values.}
\label{fig:rowprofile}
\end{figure}
Despite the quality of the experimental quantitative comparison, the super-resolution problem demands an analytical comparison, as it is not possible to rely only on quantitative standardized metrics such as the PSNR/SSIM. Figures~\ref{fig:resultsX4} and~\ref{fig:resultsX8} show comparisons of the achieved results with the baselines: bicubic upscaled image and bicubic and Gaussian upscaled image on the ($\times$4, $\times$8) scale factors, using images from the simulated test set. It was observed that for the three scaling factors, the proposed model achieved better perceptual results. Given the smooth nature of the acoustic map imaging, applying a Gaussian kernel to the upscaled images greatly enhanced their quality and reduced the artifacts. Although the (bicubic + Gaussian) model achieved excellent results, it was shown that this proposed model surpassed it in the quantitative and analytical comparison.
To confirm further, a row profile test was run on the images to observe the output with the closest result to the ground truth with fewer artifacts. In Figure~\ref{fig:rowprofile}, it is seen that using only the bicubic operator had several false positive sound source maxima because of the sub-sampling artifacts, which were propagated by the operator from the LR image to the super-resolved image, unlike the (bicubic + Gaussian) model and the proposed one, which removed the false positive artifacts and came closer to the ground truth. It was also observed that the proposed model outperformed the (bicubic + Gaussian) model and had higher similarities to the row profile ground truth.
The obtained images were indeed very similar to the original ground truth HR images. Though this was subjectively confirmed in the cropped regions, the same conclusions can be drawn after observing the entire image such as in Figure~\ref{fig:realcaptured_x8}. Thus, it was shown that this model can upscale the LR image and correct artifacts caused by the sub-sampling error in the three scale factors.
Both the increase in resolution and the reduction of artifacts helped to improve the quality of the acoustic images and acoustic cameras. First of all, the reduction in noise improved the overall image quality and the readability of these images for humans. A second improvement concerns the frame rate of acoustic cameras. In order to increase the resolution of an acoustic camera without upscaling, one needs to compute more steering vectors and perform more beamforming operations. Beamforming operations are computationally intensive, meaning that increasing the number of steering vectors or the resolution of an acoustic image also increases the computational load and the time to generate one acoustic image, giving a trade-off between resolution and frame rate. Using super-resolution to both upscale and reduce the artifacts of non-fractional delays tackled these two problems at the same time. The acoustic camera could generate the acoustic image at a lower resolution and higher frame rate without the need for fractional delays, because the super-resolution improved the resolution and quality without affecting the frame rate.
\begin{figure}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/hr/PDMs_UMAPv2_3MHz_3_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic/PDMs_UMAPv2_3MHz_3_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic_gauss/PDMs_UMAPv2_3MHz_3_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/model/PDMs_UMAPv2_3MHz_3_5kHz.png}
\end{subfigure} \\
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/hr/PDMs_UMAPv2_3MHz_4_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic/PDMs_UMAPv2_3MHz_4_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic_gauss/PDMs_UMAPv2_3MHz_4_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/model/PDMs_UMAPv2_3MHz_4_5kHz.png}
\end{subfigure} \\
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/hr/PDMs_UMAPv2_3MHz_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic/PDMs_UMAPv2_3MHz_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic_gauss/PDMs_UMAPv2_3MHz_5kHz.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/model/PDMs_UMAPv2_3MHz_5kHz.png}
\end{subfigure} \\
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/hr/UMAPv2_3MHz_6khz_3D_PDMs.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic/UMAPv2_3MHz_6khz_3D_PDMs.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic_gauss/UMAPv2_3MHz_6khz_3D_PDMs.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/model/UMAPv2_3MHz_6khz_3D_PDMs.png}
\end{subfigure} \\
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/hr/UMAPv2_3MHz_8khz_3D_PDMs.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic/UMAPv2_3MHz_8khz_3D_PDMs.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/bicubic_gauss/UMAPv2_3MHz_8khz_3D_PDMs.png}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{images/results/x8/real/model/UMAPv2_3MHz_8khz_3D_PDMs.png}
\end{subfigure} \\
\vspace{2pt}
\caption{Visual comparison of real captured x8 SR results. Left to right: High-resolution image. Bicubic upscaled image. Bicubic and Gaussian upscaled image. XCBP model upscaled image.}
\label{fig:realcaptured_x8}
\end{figure}
The super-resolution also allowed better pinpointing of the direction the sound was coming from and better distinguishing of sound sources. Outliers caused by the absence of fractional delays were removed, and the increased resolution helped to better estimate the angle of arrival of the sounds.
In order to determine the origin of a sound rather than only its angle of arrival, acoustic images were combined with images from RGB cameras, which have a much higher resolution.
To overlay acoustic images with RGB images, both must have the same resolution by increasing the resolution of the acoustic image, decreasing the resolution of the RGB image, or a combination of both. Here, the super resolution can help to match the resolution of the acoustic image with the resolution of the RGB camera.
The similarities in the acquisition of the developed acoustic map imaging dataset may lead to similar characteristics in the image distribution and to overfitting. Given this, the proposed model was tested on real captured images to study its ability to generalize over an unseen data distribution. Although the real captured images still shared characteristics with the simulated images in terms of the number of sound sources and their frequency, they were recorded using different equipment and in another environment, which reduced the possibility of overfitting. Figure~\ref{fig:realcaptured_x8} shows comparisons of the results with the baselines: bicubic upscaled image, bicubic and Gaussian upscaled image on a $\times$8 scale factor, and using real captured images from the test set. It was observed that the proposed model was more capable of upscaling images and reducing artifacts on unseen data as compared to other interpolation operators.
\section{Conclusions}
\label{sec:conclusions}
This work proposed the XCycles BackProjection model (XCBP) for highly accurate acoustic map image super-resolution. The model extracts the necessary residual features in the parallel (HR and LR) spaces and applies feature correction. The model outperforms the current state-of-the-art models and drastically reduces the sub-sampling phase delay error estimation in the acoustic map imaging. An Acoustic map imaging dataset, which provides simulated and real captured images with multiple scale factors (x2, x4, x8) was also proposed. The dataset contains low- and high-resolution images with double precision fractional delays and sub-sampling phase delay error. The proposed dataset can encourage the development of better solutions related to acoustic map imaging.
\vspace{6pt}
\funding{This work was partially supported by the European Regional Development Fund (ERDF) and the Brussels-Capital Region-Innoviris within the framework of the Operational Programme 2014-2020 through the ERDF-2020 Project ICITYRDI.BRU. This work is also part of the COllective Research NETworking (CORNET) project \textit{"AITIA: Embedded AI Techniques for Industrial Applications"}~\cite{brandalero2020aitia}. The Belgian partners are funded by VLAIO under grant number HBC.2018.0491, while the German partners are funded by the BMWi (Federal Ministry for Economic Affairs and Energy) under IGF-Project Number 249 EBG.}
\dataavailability{The dataset is available online at \url{https://doi.org/10.5281/zenodo.4543785}.}
\reftitle{References}
\externalbibliography{yes}
\section{Introduction}
Nothing in the life of a massive star becomes it like the leaving
it. A supernova (SN) is one of the most impressive spectacles that the
Universe can afford an astronomer. However, there is some uncertainty
as to the range of stars that undergo this most spectacular
demise. Observations indicate that stars evolve into red supergiants
if their initial masses are between about 12 and ${30\,\rm M_\odot}$
\citep{Levesque2006}. Stars that are less massive undergo second
dredge--up and end their lives as asymptotic giant branch (AGB) stars
\citep{2007MNRAS.376L..52E}. Mass--loss rates increase with mass and
stars that are more massive suffer enough mass--loss to remove their
hydrogen envelopes before they die. They become Wolf--Rayet stars, naked
helium stars with thick winds \citep{Crowther}. These explode as
hydrogen--free Type Ib/c SNe, rather than the more common Type II SNe
that are believed to result from the death of red supergiants
\citep{Filippenko,Smartt2009}. Type II SNe are in turn divided into
Types IIP, IIL, IIn and IIb. Types IIP and IIL are identified by their
SN light curves. The former have a plateau and the latter show only
a linear decline. The plateau is the result of the photosphere
maintaining a constant radius, moving inward in mass as the ejecta
expands \citep{Filippenko}. This in turn is due to the ionised
hydrogen recombining. Type IIb SNe have weak hydrogen lines and light
curves similar to hydrogen--free Type Ib SNe, implying that they
contain only a small percentage of hydrogen. There seems to be a
sequence from Type IIP to Type IIL to Type IIb SNe, driven by
increased mass-loss and a consequently reduced mass of hydrogen in the ejecta. Type IIn SNe
are distinguished by narrow line hydrogen emission, the result of
shock interaction with circumstellar material
\citep{2011MNRAS.412.1522S}. The mass limits have some dependence on
metallicity because metal--rich stars have higher mass--loss rates and thus
the minimum initial mass of Wolf--Rayet stars is approximately ${25\,\rm
M_\odot}$ at Solar metallicity \citep{2004MNRAS.353...87E}. We
therefore expect Type II progenitors up to this limit.
The red supergiant problem was first reported by \citet{Smartt2009}.
They compared 20 Type IIP SN progenitor detections and non--detections
with stellar evolution models to determine the minimum and maximum
initial mass limits for the progenitors of these SNe. They found that
the minimum mass required for stars to explode as Type IIP SNe was
${\rm 8.5_{-1.5}^{+1} \, M_\odot}$, which is consistent with the
observed upper limit for white dwarf formation
\citep{Weidemann,2009ApJ...693..355W}. More surprisingly, they found
an upper limit of ${\rm 16_{-1.5}^{+1.5} \, M_\odot}$, a 95 per cent
confidence upper limit of ${\rm 21 \, M_\odot}$. In essence, the red
supergiant problem is that this estimate is well below the maximum
mass estimated for red supergiants. There therefore appears to be an
absence of higher--mass red supergiant SN progenitors, leaving the fate
of stars with masses between 16 and 25-${30\,\rm M_\odot}$ uncertain.
It should be mentioned that the alternative to deducing masses from
progenitor models is to model the supernova directly. This has the
advantage of not requiring pre--explosion images, although it does
depend on detailed follow--up observations of the supernova
itself. Utrobin and Chugai have used a one--dimensional hydrodynamic
code \citep{2004AstL...30..293U} to model the SNe 2005cs
\citep{2008A&A...491..507U} and 2004et \citep{2009A&A...506..829U}.
Both have detected progenitors and in this paper we deduce initial
masses of $9^{+1}_{-4}$ and ${12^{+1}_{-1}\,\rm M_\odot}$
respectively. This compares with 18 and ${28\,\rm M_\odot}$ for the
hydrodynamic masses. If these masses are correct, the red supergiant
problem ceases to be. However, three earlier independent attempts to
model the progenitor of 2005cs gave masses of ${\rm 9^{+3}_{-2} \,
M_\odot}$ \citep{2005MNRAS.364L..33M}, ${\rm 10^{+3}_{-3} \,
M_\odot}$ \citep{2006ApJ...641.1060L} and between 6 and ${8\,\rm
M_\odot}$ \citep{2007MNRAS.376L..52E}, a discrepancy noted by the
authors. There are other approaches; \citet{2011MNRAS.410.1739D} have
constructed models that include the nebula phase of the supernovae,
which is more amenable to detailed simulation. These SNe modelling
approaches have great potential but wait on more sophisticated models.
Another approach is to consider the ratios of the different types of
SNe. One then integrates the initial mass function (IMF) to find the
limits that provide the desired number of
stars. \citet{2011MNRAS.412.1522S} considered the fractions of core
collapse SNe from the Lick Observatory Supernova Search (LOSS) and
observed that the proportion of Type Ib/c SNe was much too high for
all of them to be the result of single star evolution. In addition,
with the standard Salpeter IMF, the observed fraction of Type IIP SNe
was such that it could be produced by single stars with initial masses
in the range 8.5--13.7\,${\rm M_\odot}$. These facts imply that binary
interaction allows the production of hydrogen--free progenitors at
lower masses than would otherwise be the case.
\citet{2011MNRAS.412.1522S} suggest that if binaries are included the
upper limit for red supergiants would be in the range of 18 to 24
${\rm M_{\odot}}$.
However, this study found a lower fraction of Type IIP SNe than
previous work. The core collapse SNe broke down as 48 per cent Type
IIP and 22 per cent Type Ib/c, compared with 59 and 29 respectively
for the survey of \citet{2009ARA&A..47...63S}. The reason for this is
not clear. \citet{2009ARA&A..47...63S} considered a 28 Mpc
volume--limited survey using all detected SNe within that volume,
whereas \citet{2011MNRAS.412.1522S} used 60 Mpc and only those SNe
detected in LOSS. Type IIP SNe tend to be dimmer than other types so
those at large distances may have been missed. While these selection
effects were accounted for, such adjustments are, by their nature,
very uncertain. Another reason for the difference could be that
\citet{2011MNRAS.412.1522S} had more complete spectroscopic and light
curve follow-ups and so had greater accuracy in their classifications.
It is possible to obtain the required SNe fractions with an
appropriately chosen stellar population. \citet{2011MNRAS.tmp..692E}
showed that a population composed of a mixture of binary and single
stars could explain the rates of
\citet{Smartt2009}. \citet{2011MNRAS.412.1522S} used their own binary
population models to explain their results. The uncertainty in the
rates means that it is hard to use these fractions to tightly
constrain the progenitor mass range, particularly when the shape of
the IMF means that small changes in the lower limit result in large
changes in the population fractions. In contrast large changes in the
upper limit result in small changes in the population
fractions. Hopefully future surveys will resolve the discrepancy.
If we accept the existence of the red supergiant problem then we must
consider a number of possible explanations. First, that these massive
red supergiants form black holes with faint or non--existent SNe
\citep[e.g.][]{2003ApJ...591..288H}. Secondly, that their envelopes
are unstable and eject a large amount of mass pre--SN
\citep[e.g.][]{2010ApJ...717L..62Y}. Thirdly, that they explode as a
different type of SN \citep[e.g.][]{2006A&A...460L...5K}. We suggest a
fourth explanation, that the mass estimates may be systematically
inaccurate at the high--mass end. Mass estimates are based on
mass--luminosity relations, so extra intrinsic extinction close to the
red supergiant progenitors would give reduced luminosities and hence
lower predicted masses. \citet{Smartt2009} were careful to provide
extinction estimates when possible from measurements of nearby
supergiants and from the supernova itself. These could be
underestimates. If red supergiants produce extra dust local to the
star, it would be destroyed in the supernova explosion. This is very
plausible. It is known that red supergiants form dust in their winds
\citep{2005ApJ...634.1286M}. Furthermore, IR interferometry has shown
that this dust can be found very close to the star itself
\citep{DanchiBester}. \citet{2011MNRAS.412.1522S} also
suggest dust as a solution to the red supergiant problem.
In this work, we first describe the theoretical models from which we
derived mass--magnitude relations. We then show how these were used to
deduce masses from a population of Type IIP SN progenitors. Finally,
we deduce the most probable upper and lower mass limits and present our
conclusions.
\section{Simulating red supergiants}
We begin by considering the Cambridge ${\sc \rm STARS}$ code, the source
of our stellar models. These models were processed to generate
observed colours with the BaSeL V3.1 model atmosphere grid
\citep{Westera} and the relevant broad--band filter functions. We used
the dust production rate observed by \citet{2005ApJ...634.1286M} to
estimate the amount of circumstellar extinction that would then
manifest. This allowed the calculation of mass--magnitude relations
both with and without the inclusion of circumstellar dust. We also
consider the nature of dust production, including non--spherical
behaviour.
\subsection{The Cambridge ${\sc \rm STARS}$ code}
The Cambridge ${\sc \rm STARS}$ code was originally developed by Peter Eggleton in
the 1960s \citep{Eggleton}. It uses a non--Lagrangian mesh, where the
mesh function ensures that the points are distributed so that no
quantity of physical interest is allowed to vary by a large amount in
the intervals. The code has been gradually improved and updated and
the work in this paper is based on the code described by
\citet{Stancliffe} and those referenced by them.
Convection is included in the code by the mixing length theory of
\citet{BohmVitense}, with a solar--calibrated mixing length parameter
of $\alpha=2.0$. Convective overshooting is obtained with the method
of \citet{Schroder}, with an overshooting parameter of
${\rm \delta_{OV}=0.12}$. This method involves the addition of a ${\rm \delta}$
term to the adiabatic gradient, allowing mixing to occur in regions
that are weakly stable by the Schwarzschild criterion. The code follows the chemical evolution of ${\rm^{1}H}$, ${\rm ^{3}He}$,
${\rm ^{4}He}$, ${\rm ^{12}C}$, ${\rm ^{14}N}$, ${\rm ^{16}O}$ and ${\rm ^{20}Ne}$, together with structural variables.
We use the mass--loss scheme described by
\citet{2004MNRAS.353...87E}. For main--sequence OB stars the mass--loss
rates are calculated according to \citet{2001A&A...369..574V} and for
all other stars we use the rates of \citet{1988A&AS...72..259D}, where
the metallicity scaling goes as ${\rm (z/z_{\odot})^{0.5}}$. This
theory is older but the rates have recently been tested for red
supergiants and been shown to be still the best rates available for
them \citep{2011A&A...526A.156M}.
We have created a library of evolution models, with values of $Z$, the
metallicity fraction by mass, equal to 0.02, 0.01, 0.008 and
0.006. These cover the metallicity range of the IIP progenitors given
by \citet{Smartt2009}. The fractions of hydrogen and helium were
determined on the assumption of constant helium enrichment from the primordial
condition of $X$=0.75, $Y$=0.25 and calibrating to a Solar composition of
$X$=0.70, $Y$=0.28 and $Z$=0.02, i.e. that $Y=0.25+1.5Z$. We evolve our
models through to core neon burning, a few years before core collapse.
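For reference, the compositions adopted for the grid follow directly from this calibration; the short calculation below simply makes the resulting values explicit.
\begin{verbatim}
# Hydrogen and helium mass fractions of the model grid, using Y = 0.25 + 1.5 Z.
for Z in (0.02, 0.01, 0.008, 0.006):
    Y = 0.25 + 1.5 * Z
    X = 1.0 - Y - Z
    print(f"Z = {Z:5.3f}   X = {X:5.3f}   Y = {Y:5.3f}")
\end{verbatim}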
\subsection{The colour--magnitude diagrams}
The Cambridge ${\sc \rm STARS}$ code outputs the physical parameters for
a stellar model at each time--step. These include the bolometric
luminosity and the surface temperature. Because the stellar observations
are in the form of colours and magnitudes, either the luminosity and the surface
temperature must be estimated from the observations or a method of
calculating colours and magnitudes from the models must be developed. We
choose the latter course because this requires fewer
assumptions. \citet{Smartt2009} used the opposite approach and, for
that reason, the masses they deduced sometimes differ by a small
amount. We use the method described by \citet{EldridgeStanway2009} to
process the code output and calculate magnitudes to compare to those
observed. The BaSeL v3.1 grid of model atmospheres is arranged over
surface temperature and effective gravity. Using the values of these
variables from the code, we obtained appropriate templates for the
SEDs for each model at each time--step. We then applied the filter
functions to extract the magnitudes in the various
bands. This allowed us to produce colour--magnitude diagrams from
the evolution tracks.
To check the validity of our synthetic colours we compared our models
to red supergiants observed in the Magellanic Clouds by
\citet{Levesque2006} and in the Milky Way by
\citet{2005ApJ...628..973L}. These observations included estimates for
the total extinction based on spectrophotometric modelling, so it was
possible to process the data to get the absolute colours and
magnitudes in the absence of extinction. This meant that the models
did not yet have to take into account the effects of circumstellar
extinction. This is shown in Figure~\ref{fig:redsupergiants}. We see
that the models agree well with the observations, with the red
supergiants appearing towards the end of the evolution tracks and
ranging in mass between about 10 and ${30\,\rm M_\odot}$. Some of the
SMC stars are cooler than predicted and may have higher
metallicities. This is not unexpected with a large and heterogeneous
population.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{./SMCtracks.eps}
\includegraphics[width=0.45\textwidth]{./LMCtracks.eps}
\includegraphics[width=0.45\textwidth]{./GalacticRSG.eps}
\caption{Evolution tracks for every integer mass between 5 and
${30\,\rm M_\odot}$, with the multiples of 5 indicated by dashed
lines. The crosses are the observed red supergiants. The SMC stars
are at the top and the models use $Z=0.004$. The LMC stars are in
the middle and the models use $Z=0.01$. The Milky Way stars are at
the bottom and the models use $Z=0.02$.}
\label{fig:redsupergiants}
\end{figure}
To test the validity of the models further, we took the data and made
cumulative frequency plots in $M_K$. We made similar plots from the
models by first identifying which stars became red supergiants during
their lifetimes. We required that $V-K>2.5$ so as to get the reddest
stars. The lower limit in $K$ was set to be $-9.5$ for the Clouds
models and $-8.5$ for the Galactic models. These limits are reasonable
for red supergiants and were chosen to reflect the distribution of the
observations. We weighted the $K$ magnitudes by the timestep, to
reflect the greater probability of observing longer-lived phases, and by the Salpeter IMF. We
used metallicities of $Z=0.004$ for the SMC, $Z=0.01$ for the LMC and
$Z=0.02$ for the Galactic population. The comparison is shown in
Figure~\ref{fig:cumul}.
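The weighting procedure can be summarised by the short sketch below. It is
written in Python and is purely illustrative: the variable names, the simple
finite-difference treatment of the time-steps and the direction of the $K$
cut are our own choices rather than a description of the actual
implementation.
\begin{verbatim}
# Illustrative sketch only: a timestep- and IMF-weighted cumulative
# distribution of K-band magnitudes for model red supergiants.
import numpy as np

def cumulative_MK(tracks, k_limit=-9.5, imf_index=-2.35):
    # tracks: list of (initial_mass, age, V, K) tuples of 1-d arrays
    mags, weights = [], []
    for mass, age, V, K in tracks:
        dt = np.gradient(age)                 # time spent near each output point
        red = (V - K > 2.5) & (K <= k_limit)  # red-supergiant selection
        mags.append(K[red])
        weights.append(dt[red] * mass**imf_index)  # timestep x Salpeter weight
    mags = np.concatenate(mags)
    weights = np.concatenate(weights)
    order = np.argsort(mags)                  # brightest (most negative) first
    cum = np.cumsum(weights[order])
    return mags[order], cum / cum[-1]         # normalised cumulative frequency
\end{verbatim}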
The agreement is quite good, although there are fewer very luminous
stars in the LMC and SMC sets than this simple application of our models
predicts. This is probably because the three data sets were
compiled not to represent a complete stellar population but as
observationally convenient samples of red supergiants. In addition, because we weight by the
timestep and the IMF only, we are implicitly assuming a constant rate
of star formation, a doubtful supposition. We have only considered
single stars, and binary interactions may change these predicted
synthetic frequencies \citep{2008MNRAS.384.1109E}; we may also be
underestimating the mass--loss rates of the most luminous red
supergiants \citep{2010ApJ...717L..62Y}.
\begin{figure}
\centering \includegraphics[width=0.5\textwidth]{./SMCcumul.eps}
\centering \includegraphics[width=0.5\textwidth]{./LMCcumul.eps}
\centering \includegraphics[width=0.5\textwidth]{./MWcumul.eps}
\caption{The cumulative frequency diagrams, in $K$ band magnitude, for
red supergiants observed in the SMC, the LMC and the Milky Way. They
are compared with cumulative frequency curves from synthetic
populations derived from the same models used for
Figure~\ref{fig:redsupergiants}.}
\label{fig:cumul}
\end{figure}
\subsection{Circumstellar dust}
When the surface temperature of a red supergiant falls below about
5000 K, dust begins to condense out of the stellar wind at a distance
of around ${\rm 5-10 R_{star}} \approx 1000 R_{\odot}$
\citep{Massey2009conf}. It might be expected that the amount of dust
production would correlate with the mass--loss, which in turn
correlates with the luminosity, because the luminosity drives the
stellar wind \citep{VanLoon}. \citet{2005ApJ...634.1286M} showed that the dust
production rate correlates with the bolometric luminosity, with a
least squares fit of ${\rm log_{10}(\dot{M_{dust}})=-0.43M_{bol} - 12.0}$,
where the dust production rate has units of ${\rm M_\odot / year}$.
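To give a feeling for the size of the rates implied by this fit (our own
substitution, not a result quoted from \citet{2005ApJ...634.1286M}), a red
supergiant with ${\rm M_{bol}=-7}$ has
\begin{equation}
{\rm log_{10}(\dot{M_{dust}})=-0.43\times(-7)-12.0=-8.99},
\end{equation}
i.e. a dust--production rate of about ${\rm 10^{-9}\,M_\odot / year}$, and
each additional magnitude in luminosity increases the rate by a factor of
$10^{0.43}\approx2.7$.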
The dynamics of the wind and the dust is complicated and simulations
indicate an absence of spherical symmetry. \citet{Woitke} found that
various instabilities such as Rayleigh--Taylor or Kelvin--Helmholtz lead to the formation of arcs and caps of dust despite the
spherical initial conditions. This means that one would expect
variation in the observed mass--loss and extinction that is entirely due to
the behaviour of the dusty wind along the line of sight. This cannot be
accounted for with the ${\rm \sc STARS}$ code, because in the absence of observations
of the dust, one can only use the relation between the luminosity and
the dust production to estimate an average extinction. The
additional variation due to the lack of spherical symmetry introduces an
additional source of uncertainty.
The amount of extinction that is due to circumstellar dust depends on the past
history of the star, which has a continuous loss of mass over time. In contrast,
the ${\rm \sc STARS}$ code calculates the properties of the stars at intervals
determined by the time--step. To account for this, we calculated the
launch velocity of the dust and the dust--production rate for each time
in the output data. This, together with the stellar radii, when
interpolated, gave the distribution of dust with distance from the
star. Following \citet{2005ApJ...634.1286M} we begin by referring to
\citet{Whittet}, who showed how the extinction owing to a thin shell of
dust can be determined. If one assumes a dust grain density of $s={\rm
2500\,kg\,m^{-3}}$, applicable to low-density silicates, and a
refractive index of $m=1.50$, one can relate the dust density $\rho_d$
to the extinction ${A_V}$ and the path length $L$.
\begin{equation}
\rho_d = (3.7\times10^{-8}) s \frac{m^2+2}{m^2-1}\frac{A_V}{L}.
\end{equation}
After substituting these values, so that
$3.7\times10^{-8}\times2500\times(m^2+2)/(m^2-1)\approx3.1\times10^{-4}$,
one obtains the density in terms of the extinction and the path length only.
\begin{equation}
\rho_d = (3.1\times10^{-4}) \frac{A_V}{L}.
\end{equation}
For a thin shell of dust mass ${M_{d}}$, thickness ${\delta R}$ and radius
$R$, the density is $\rho_d=M_d/(4\pi R^2\,\delta R)$, so the extinction caused
by the shell is
\begin{equation}
A_V =\frac{(3.2\times10^3) L M_d}{4 \pi R^2 \delta R}.
\end{equation}
Because the path length through such a shell is equal to its thickness,
$L$ cancels with ${\delta R}$ and the extinction is given in terms of the
dust mass and the radius only,
\begin{equation}
A_V =\frac{(3.2\times10^3) M_d}{4 \pi R^2}.
\end{equation}
The dust was modelled as a series of these thin shells, over which the
total extinction was integrated at each timestep. The ${A_V}$ was
then used to calculate the extinction in the other pass--bands by using
the extinction law and associated ratios described in
\citet{Cardelli1989}. These are shown in Table \ref{tab:filter}.
\begin{table}
\caption{Extinction at standard and HST pass--bands relative to the V band, as given by \citet{Cardelli1989}.}
\begin{tabular}{| l | l | }
\hline \hline
Filter & ${\rm A_{\lambda}/A_V}$ \\
\hline
U & 1.569 \\
B & 1.337 \\
V & 1.000 \\
R & 0.751 \\
I & 0.479 \\
J & 0.282 \\
H & 0.190 \\
K & 0.114 \\
F555W & 0.996 \\
F606W & 0.885 \\
F814W & 0.597 \\
\hline \hline
\end{tabular}
\label{tab:filter}
\end{table}
The density of dust falls off according to an inverse square law as
each shell moves outwards in the stellar wind. Therefore only material
very near to the star has a significant effect. We find about 95 per
cent of the extinction is due to material closer than ${50 \, R_{star}}$ in all the models. This means that the dust is unlikely
to affect the SN because it is rapidly swept up and destroyed in the
explosion. The inverse square law also means that the distance at
which dust first forms in the wind is important. We chose to set this
to ${10\, R_{star}}$, an upper estimate, to ensure that our extinctions are modest underestimates.
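For clarity, the shell summation described above can be sketched as follows.
The code is ours and purely schematic: it assumes a constant launch velocity
for each shell and the same units as the expression for $A_V$ above, and it
is not the routine actually used with the ${\rm \sc STARS}$ output.
\begin{verbatim}
# Schematic sketch of the thin-shell extinction sum (not the actual routine).
import numpy as np

C_AV = 3.2e3  # constant from the thin-shell expression for A_V above

def extinction_from_shells(t_now, t_form, mdot_dust, v_launch, r_inner):
    # t_form, mdot_dust, v_launch: arrays over the earlier output times
    # r_inner: radius at which dust first condenses (10 R_star in the text)
    dt = np.gradient(t_form)                         # duration of each timestep
    m_shell = mdot_dust * dt                         # dust mass in each shell
    r_shell = r_inner + v_launch * (t_now - t_form)  # current shell radius
    keep = t_form <= t_now
    return np.sum(C_AV * m_shell[keep] / (4.0 * np.pi * r_shell[keep] ** 2))
\end{verbatim}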
\section{The supernova progenitors}
We use the compilation of SN detections and
non--detections of \citet{Smartt2009}. All progenitor information can be found in that paper
and the references therein. We supplement this with SN 2009md \citep{2009md}.
The metallicities are based on neighbouring O/H number ratios, where
${\rm [O/H] = log_{10}(O/H)} + 12$. For ${\rm [O/H]>8.7}$, the $Z=0.02$
models were used. Similarly, for ${\rm 8.5<[O/H]\leq8.7}$, $Z=0.01$, for
${\rm 8.4<[O/H]\leq8.5}$, $Z=0.008$ and for ${\rm 8.2<[O/H]\leq8.4}$,
$Z=0.006$. This is a more precise division by metallicity than was
possible for \citet{Smartt2009}. Errors are given, when known, and the
errors for the absolute magnitude were obtained by combining the other
errors in quadrature.
We assigned masses to the progenitors by comparing the absolute
magnitudes, together with their error bars, with a plot of red supergiant final
luminosities against initial mass. We decided to plot the minimum and
maximum magnitudes in the lifetime of our theoretical red supergiants,
the lifetime being between the end of core helium burning and the
termination of the model. This gave a range in luminosity over which a
star of a given initial mass might explode. Masses were first obtained from the original models and again after processing to include
the effects of circumstellar dust. The error in the magnitudes gave
the error in the masses.
Figure~\ref{fig:MLrelation1} shows how the predicted magnitudes for
the $Z=0.02$ models vary with initial mass in the $V$, $R$, $I$, and $K$
bands. As expected, dust is much more of a problem at shorter
wavelengths, when extinction is more severe, and at higher masses,
when mass--loss is greater.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{./ML2curve_V.eps}
\includegraphics[width=0.45\textwidth]{./ML2curve_R.eps}
\includegraphics[width=0.45\textwidth]{./ML2curve_I.eps}
\includegraphics[width=0.45\textwidth]{./ML2curve_K.eps}
\caption{The final magnitudes in $V$, $R$, $I$, and $K$ from our $Z=0.02$ stellar models. Red indicates the dust--free models and
black the models with dust. The thinner lines are for the minimum magnitudes and the thicker are for the maximum magnitudes.}
\label{fig:MLrelation1}
\end{figure*}
However, although extinction is less important at longer wavelengths,
changes in stellar luminosity have more of an effect. When a red
supergiant becomes more luminous, most of this increase in output is at
longer wavelengths. One can see from Figure~\ref{fig:MLrelation1} that
the difference between the minimum and maximum magnitudes for a given
mass is highest in the $K$ band. This is particularly acute for the
lower masses because they end core helium burning as blue supergiants,
setting a very low minimum magnitude in the infrared.
The lower--mass models also have higher maximum magnitudes than the
extrapolation of the behaviour of the more massive stars might
predict. These stars undergo second dredge--up, becoming more luminous
asymptotic giant branch (AGB) stars \citep{2007MNRAS.376L..52E}. In
both cases, increasing the magnitude range also increases the
uncertainty in the progenitor mass--luminosity relation. Finally the
sharp increase in the maximum $V$ magnitude at the high mass end indicates incipient
Wolf--Rayet star formation.
\subsection{The non--detections}
For the non--detections, no progenitor was confidently identified in
the pre--images. This still provides useful information because it sets a
limit on the magnitude of the progenitor. If the star were
brighter, it would have been detected. If the errors in the magnitudes
are approximately normal, there is an 84 per cent probability that the
magnitude of the non--detected progenitor is less than the upper error
bar. This is a sufficiently high confidence level that we took the upper mass
limit to be the lowest mass with a magnitude range entirely
brighter than this upper error bar.
Some of the SNe had non--detections in several pass--bands. The
magnitude limits quoted in Table \ref{tab:nondetection} are those
which gave the lowest upper mass limit. The other pass--bands merely
set a higher upper mass limit which added no additional
information. If we compare the mass limits deduced from the dustless
models with those with dust we can see that the difference
between the two increases with mass. Most notably, the limit for SN 2003ie changes from
22 to ${25 \, \rm M_\odot}$.
\begin{table*}
\caption{The observed parameters and estimated upper mass limits for Type IIP
supernova progenitors that were not detected in pre--explosion images. We
include the masses without considering extinction due to intrinsic dust,
${\rm M_{dustless}}$, and the masses taking intrinsic dust into account
${\rm M_{dust}}$. Note that we consider 2004A to be a non--detection because the observation of a progenitor was doubtful.}
\begin{tabular}{| l | l | l | l | l | l | l | l | l |}
\hline \hline
SN & Metallicity & Distance & Apparent & Absolute & ${\rm M_{dustless}}$ & ${\rm M_{dust}}$ \\
& /dex & /Mpc & magnitude & magnitude & ${\rm /M_\odot}$ & ${\rm /M_\odot}$ \\
\hline
1999an & 8.3 & 18.5$\pm$1.5 & ${\rm m_{F606W}>24.7\pm0.2}$ & ${\rm M_{F606W}>-7\pm0.3}$ & 18 & 21\\
1999br & 8.4 & 14.1$\pm$2.6 & ${\rm m_{F606W}>24.91}$ & ${\rm M_{F606W}>-5.89\pm0.4}$ & 11 & 12\\
1999em & 8.6 & 11.7$\pm$1.0 & ${\rm m_I>22.0}$ & ${\rm M_{I}>-8.5\pm0.2}$ & 18 & 19\\
1999gi & 8.6 & 10.0$\pm$0.8 & ${\rm m_{F606W}>24.9\pm0.2}$ & ${\rm M_{F606W}>-5.7\pm0.3}$ & 12 & 13\\
2001du & 8.5 & 18.2$\pm$1.4 & ${\rm m_{F814W}>24.25}$ & ${\rm M_{F814W}>-7.4\pm0.2}$ & 13 & 13\\
2003ie & 8.5 & 15.5$\pm$1.2 & ${\rm m_{R}>22.65}$ & ${\rm M_{R}>-8.3\pm0.2}$ & 22 & 25\\
2004A & 8.3 & 20.3$\pm$3.4 & ${\rm m_{F814W}>24.25}$ & ${\rm M_{F814W}>-7.4\pm0.2}$ & 14 & 15\\
2004dg & 8.5 & 20.0$\pm$2.6 & ${\rm m_{F814W}>25.0}$ & ${\rm M_{F814W}>-6.9\pm0.3}$ & 11 & 11\\
2006bc & 8.5 & 14.7$\pm$2.6 & ${\rm m_{F814W}>24.45}$ & ${\rm M_{F814W}>-6.8\pm0.5}$ & 11 & 12\\
2006my & 8.7 & 22.3$\pm$2.6 & ${\rm m_{F814W}>24.8}$ & ${\rm M_{F814W}>-7.0\pm0.2}$ & 11 & 11\\
2006ov & 8.9 & 12.6$\pm$2.4 & ${\rm m_{F814W}>24.2}$ & ${\rm M_{F814W}>-6.3\pm0.4}$ & 10 & 11\\
2007aa & 8.4 & 20.5$\pm$2.6 & ${\rm m_{F814W}>24.44}$ & ${\rm M_{F814W}>-7.2\pm0.3}$ & 12 & 13\\
\hline \hline
\end{tabular}
\label{tab:nondetection}
\end{table*}
\subsection{The detected progenitors}
The actual detections are fewer in number than the non--detections. For progenitors observed in
multiple bands, a mass for each band was calculated and these were
averaged. For SN 2008bk \citet{2008ApJ...688L..91M} used the
progenitor SED to deduce a total extinction of ${A_V=1.0\pm0.5}$
and the absolute magnitudes take this into account. Because this includes any
extinction from circumstellar dust, there was no need to use the dusty
models and the predicted mass is the same in both cases.
\subsection{Other deduced masses}
Table \ref{tab:baddetection} lists the properties of SNe 2004am and 2004dj, both of
which are Type IIP but lack detected progenitors. \citet{Smartt2009}
describe how population synthesis codes were used to deduce progenitor
masses from their parent clusters. These SNe were part of the survey
and are included for completeness.
\begin{table*}
\caption{The observed parameters and estimated masses for Type IIP
supernovae that were detected in pre--explosion images. We
include the masses without considering extinction by intrinsic dust,
${\rm M_{dustless}}$, and the masses taking intrinsic dust into account,
${\rm M_{dust}}$.}
\begin{tabular}{| l | l | l | l | l | l | l | l | l |}
\hline \hline
Supernova & Metallicity & Distance & Apparent & Absolute & ${\rm M_{dustless}}$ & ${\rm M_{dust}}$ \\
& /dex & /Mpc & magnitude & magnitude & ${\rm /M_\odot}$ & ${\rm /M_\odot}$ \\
\hline
1999ev & 8.5 & 15.14$\pm$2.6 & ${\rm m_{F555W}=24.64\pm0.17}$ & ${\rm M_{F555W}=-6.7\pm0.4}$ & $18^{+3}_{-3}$ & $20^{+6}_{-4}$\\
2003gd & 8.4 & 9.3$\pm$1.8 & ${\rm m_{V}=25.8\pm0.15}$ & ${\rm M_{V}=-4.47\pm0.5}$ & $8^{+2}_{-1}$ & $8^{+2}_{-2}$\\
& & & ${\rm m_{I}=23.13\pm0.13}$ & ${\rm M_{I}=-6.92\pm0.4}$ & & \\
2004et & 8.3 & 5.9$\pm$0.4 & ${\rm m_{I}=22.06\pm0.12}$ & ${\rm M_{I}=-7.4\pm0.2}$ & $11^{+1}_{-1}$ & $12^{+1}_{-1}$\\
2005cs & 8.7 & 8.4$\pm$1.0 & ${\rm m_{I}=23.48\pm0.22}$ & ${\rm M_{I}=-6.3\pm0.3}$ & $9^{+1}_{-4}$ & $9^{+1}_{-4}$\\
2008bk & 8.4 & 3.9$\pm$0.5 & ${\rm m_{I}=22.20\pm0.19}$ & ${\rm M_{I}=-7.2\pm0.4}$ & $12^{+2}_{-4}$ & $12^{+2}_{-4}$\\
& & & ${\rm m_{H}=18.78\pm0.11}$ & ${\rm M_{H}=-9.4\pm0.3}$ & & \\
& & & ${\rm m_{K}=18.34\pm0.07}$ & ${\rm M_{K}=-9.7\pm0.3}$ & & \\
& & & ${\rm m_{J}=19.50\pm0.06}$ & ${\rm M_{J}=-8.7\pm0.3}$ & & \\
2009md & 9.0 & 21.3$\pm$2.2 & ${\rm m_{V}=27.32\pm0.15}$ & ${\rm M_{V}=-4.63^{+0.3}_{-0.4}}$ & $8^{+4}_{-2}$ & $8^{+5}_{-2}$\\
& & & ${\rm m_{I}=24.89\pm0.08}$ & ${\rm M_{I}=-6.92^{+0.4}_{-0.3}}$ & & \\
\hline \hline
\end{tabular}
\label{tab:detection}
\end{table*}
\begin{table*}
\caption{The observed parameters and estimated masses for Type IIP
supernovae that have mass estimates derived from
observations of their host clusters.}
\begin{tabular}{| l | l | l | l | l | l | l | l | l |}
\hline \hline
SN & Metallicity & Distance & Apparent & Absolute & ${\rm M_{nodust}}$ & ${\rm M_{dust}}$ \\
& /dex & /Mpc & magnitude & magnitude & ${\rm /M_\odot}$ & ${\rm /M_\odot}$ \\
\hline
2004am & 8.7 & 3.7$\pm$0.3 & n/a & n/a & $12^{+7}_{-3}$ & n/a \\
2004dj & 8.4 & 3.3$\pm$0.3 & n/a & n/a & $15^{+3}_{-3}$ & n/a \\
\hline \hline
\end{tabular}
\label{tab:baddetection}
\end{table*}
\subsection{The maximum likelihood limits}
The masses for the detections and non--detections have been drawn from
a distribution of Type IIP progenitors with unknown parameters. If we
assume that the progenitors are drawn from a population described by
the initial mass function (IMF), the nature of the
IMF and the range of masses which explode as Type IIP SNe are the important parameters. Following
the method of \citet{Smartt2009}, we used maximum likelihood theory to
find parameters that gave the greatest probability of generating
the data. If ${P_i}$ is the probability of the $i$th detection or
non--detection being made, the likelihood ${\rm \mathcal{L}}$ is the
probability of observing the whole dataset.
\begin{equation}
{\mathcal{L} = \prod_{i=1}^{i=N} P_i(m)}.
\end{equation}
Taking the natural logarithm converts the product into a sum and
simplifies matters because maximising ${\rm \log_e \mathcal{L}}$ is equivalent
to maximising ${\rm \mathcal{L}}$.
\begin{equation}
{\rm \log_e \mathcal{L} = \sum_{i=1}^{i=N} \log_e[P_i(m)]}.
\end{equation}
For the non--detections, the probability of the event is the
probability that a randomly chosen star has a mass between the
lower limit and the detection limit. Thus we integrate the IMF
between these limits and normalise. The IMF is generally assumed to
be describable by a power law for supersolar masses. Here $\gamma$
is the index such that the default Salpeter law gives
$\gamma=-1.35$. The parameters to be varied are ${m_{min}}$, the
lower mass limit, and ${m_{max}}$, the upper mass limit.
\begin{equation}
{\rm P_i \propto \int_{m_{min}}^{m_{i}} m^{\gamma-1} dm}.
\end{equation}
\begin{equation}
{\rm P_i = \frac{m_{min}^{\gamma}-m_{i}^{\gamma}}{m_{min}^{\gamma}-m_{max}^{\gamma}}}.
\end{equation}
The detection limits ${m_i}$ are the 84 per cent confidence
limits. The probability of non--detection if ${m_i}$ exceeds ${m_{max}}$ is 1 and the probability if ${m_i}$ is less than ${m_{min}}$ is 0.16.
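For illustration (the numbers are ours and are not used in the analysis),
with ${m_{min}}=8.5$, ${m_{max}}=16.5$ and $\gamma=-1.35$, a non--detection
with an 84 per cent limit of ${m_i}=12$ gives
\begin{equation}
{\rm P_i=\frac{8.5^{-1.35}-12^{-1.35}}{8.5^{-1.35}-16.5^{-1.35}}\approx0.63}.
\end{equation}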
For the detections, the probability of the event is the probability
that a star has this deduced mass subject to the errors. The
distribution of the errors is unclear and the simplest way of
accounting for the uncertainty is to integrate the IMF between the
upper and lower limits set by the errors. However, this skews the
distribution towards lower masses. We follow \citet{Smartt2009} by
instead integrating the IMF from the upper limit to the predicted mass
and then in a straight line from the value of the IMF at the predicted
mass to zero at the lower limit. If the upper error mass ${m_{i+}}$
exceeds ${m_{max}}$ then the integral is truncated at
${m_{max}}$. Similarly if the lower error mass ${m_{i-}}$ is less than
${m_{min}}$ then the integral is truncated at ${m_{min}}$.
The maximum likelihood values are of little interest without some
measure of how probable alternatives are. To do this the confidence
regions must be determined. For two parameters we have the 68, 90
and 95 per cent confidence regions when $\chi=2.3,4.6,6.2$
\citep{Press}.
\begin{equation}
{\rm \ln \mathcal{L}_{max} - \ln \mathcal{L} = \frac{1}{2}\chi}.
\end{equation}
We calculated the maximum likelihood contours when both the upper and
lower limits are varied. Initially we used only the detections. The
non--detections favour arbitrarily low values for the lower limit
because the slope of the IMF makes low--mass stars more
probable. Then, with the lower limit fixed from the detections, the
non--detections were included to see what effect they had on the upper
limit.
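The non--detection part of this likelihood is simple enough to sketch
explicitly. The following Python fragment is ours and is only meant to make
the bookkeeping concrete; it omits the detection term described above, which
must be added before the maximum is located and the confidence contours are
drawn.
\begin{verbatim}
# Illustrative sketch (not the authors' code): non-detection contribution
# to the log-likelihood, using the 84 per cent upper mass limits.
import numpy as np

GAMMA = -1.35  # Salpeter index, as defined in the text

def logL_nondetections(m_limits, m_min, m_max):
    m_i = np.asarray(m_limits, dtype=float)
    P = (m_min**GAMMA - np.clip(m_i, m_min, m_max)**GAMMA) \
        / (m_min**GAMMA - m_max**GAMMA)
    P = np.where(m_i >= m_max, 1.0, P)   # limit above m_max: P_i = 1
    P = np.where(m_i <= m_min, 0.16, P)  # limit below m_min: P_i = 0.16
    return np.sum(np.log(P))

# Example call with the dusty limits of the non-detection table and a
# fixed lower limit of 8.5 solar masses (the value adopted later in the text).
m_limits_dust = [21, 12, 19, 13, 13, 25, 15, 11, 12, 11, 11, 13]
value = logL_nondetections(m_limits_dust, 8.5, 30.0)
\end{verbatim}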
The contrast between the two models is shown in
Figure~\ref{fig:dust12}. The models without dust predicted an upper
limit of ${\rm 18^{+2}_{-2} M_{\odot}}$ and a 95 per cent confidence
limit of 25 ${\rm M_{\odot}}$. However, the dust models give ${\rm
21^{+2}_{-1} M_{\odot}}$ and, more significantly, a 95 per cent
limit of more than 30 ${\rm M_{\odot}}$. This means that we can no
longer say with certainty that there is a population of red supergiants
that do not end as Type IIP SNe. This upper limit is
consistent with that obtained by modelling the
population of SN progenitors and accounting for binary evolution, e.g. by \citet{2011MNRAS.412.1522S} and \citet{2011MNRAS.tmp..692E}.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{./contour1.ps}
\includegraphics[width=0.45\textwidth]{./contour2.ps}
\caption{Maximum likelihood contours. The dashed lines are for the
detections only and the solid are when the non--detections are
included.}
\label{fig:dust12}
\end{figure*}
In Figure~\ref{fig:dustB}, we present plots of the deduced initial
masses of the progenitors with and without dust, in order of
increasing mass. They are contrasted with the curves representing a
population of stars following the Salpeter distribution, with upper limits of 16.5 and 25 $M_{\odot}$. In both cases the lower limit is
8.5 $M_{\odot}$. It should be noted that SN 1999ev, the most massive
progenitor, undergoes the greatest change in predicted mass when dust
is considered, from $18^{+3}_{-3}$ to $20^{+6}_{-4}$ ${\rm M_\odot}$.
No other star is more influential in deducing the upper
limit, and our results will become more certain when more similarly
massive progenitors are observed.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{./fig_nodust.ps}
\includegraphics[width=0.45\textwidth]{./fig_dust.ps}
\caption{Diagram showing the derived masses of progenitors compared to the expected mass distribution if the maximum Type
IIP SNe progenitor mass is 16.5 ${\rm M_\odot}$ or 25 ${\rm M_\odot}$.}
\label{fig:dustB}
\end{figure*}
Recently a progenitor to SN 2009kr was identified \citep{Fraser} but
it is not clear yet whether it is a single or a binary star, a Type
IIP or a Type IIL \citep{2010ApJ...714L.254E}. If it is a single star,
the observations of ${\rm M_V=-7.6\pm0.6}$ and ${\rm
M_V-M_I=1.1\pm0.25}$ imply initial masses of ${\rm 21^{+3}_{-4}
M_{\odot}}$ and ${\rm 23^{+4}_{-5} M_{\odot}}$ from the models
without and with dust. However, the star is more of a
yellow supergiant and may have evolved through the red supergiant
phase \citep{2010ApJ...714L.254E}, in which case our models would not
be appropriate.
\section{Discussion}
We have presented evidence that the red supergiant problem is caused
by aliasing of the higher masses. This is illustrated in
Figure~\ref{fig:alias} where we have plotted the initial mass of red
supergiants at solar metallicity against the mass that would be
deduced if we were to take the magnitudes of the dust--extincted
models and compare them with the mass--luminosity relation from the
models without dust.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./predictcurve.eps}
\caption{Actual compared with deduced initial masses of red supergiants for theoretical observations in $V$, $I$, and $K$. Models are from the $Z=0.02$ series.}
\label{fig:alias}
\end{figure}
The model for circumstellar dust is fairly crude, based on a best fit
between luminosity and dust production from which there is
considerable deviation. It generates an average extinction of never
more than about 1 magnitude in $A_V$, whereas observations have revealed
stars with several times that value \citep{2005ApJ...634.1286M}.
Detailed models of dusty winds have shown that they do not maintain
spherical symmetry but form transient cap structures
\citep{Woitke}. This implies that the amount of extinction varies
with the line of sight and over time. This variability was observed by
\citet{Massey2009}, who found an average change of 0.5 mag in the $V$
band in a sample of red supergiants in M31. This change occurred after
only three years. Significantly, there was no change in the $K$ band,
strongly implying that the change in the magnitudes was driven by
variable extinction. It is also likely that dust production varies
with metallicity but we have not taken this into account.
The main achievement of our dust model is to show that, even with an
unnaturally unvarying but realistic amount of dust production, the red
supergiant problem ceases to be. The increased extinction of the
higher--mass models introduces such uncertainty into the observations
as to make it impossible to confidently set an upper mass limit lower
than 25--30${\rm\, M_\odot}$, the red supergiant upper limit.
The upper mass limit could be determined more precisely by obtaining
more pre--explosion images in the infrared, where the effect of
extinction is much less. Alternatively, good spectroscopy of the
detected progenitors could be used to calculate the total
extinction. This would require waiting for one of the relatively small
number of nearby well--studied red supergiants to explode. Models of
the SNe themselves are also likely to yield more progenitor masses in
the future and recent work in that direction has been encouraging
\citep{2011MNRAS.410.1739D}. Until then the uncertain but significant
circumstellar extinction means that we do not need to look for
alternatives to Type IIP SNe for the death of red supergiants.
\section*{Acknowledgements}
JJW and JJE would like to thank the anonymous referee for their
suggestions. They have led to a much improved paper. JJW thanks the
STFC for his studentship and JJE is supported by the IoA's STFC theory
rolling grant. The authors would also like to thank Stephen Smartt and
Christopher Tout for discussion and comments.
\bibliographystyle{mn2e}
\section{Summary and Conclusions}
\label{conclusions}
In this article, we addressed various basic problems related to what
we call pseudo-Hermitian quantum mechanics. Our starting point was
the observation that a given quantum system admits an infinity of
unitary-equivalent representations in terms of Hilbert
space-Hamiltonian operator pairs. This freedom in the choice of
representation can be as useful as the gauge symmetries of elementary
particle physics.
We have surveyed a variety of mathematical concepts and tools to
establish the foundations of pseudo-Hermitian quantum mechanics on a solid
ground and to clarify the shortcomings of the treatment of the
subject that is based on the so-called charge operator $\cC$. We
showed that it is the metric operator $\etap$ that plays the central
role in pseudo-Hermitian quantum mechanics. Although one can in general
introduce a $\cC$ operator and express $\etap$ in terms of $\cC$,
the very construction of observables of the theory and the
calculations of the physical quantities require the knowledge
of $\etap$. This motivates addressing the problem of the computation
of a metric operator for a given quasi-Hermitian operator. We have
described different approaches to this problem.
We have discussed a number of basic issues related to the
classical-to-quantum correspondence to elucidate the status of the
classical limit of pseudo-Hermitian quantum mechanics. We have also
elaborated on the surprising limitation on the choice of
time-dependent quasi-Hermitian Hamiltonians, the role of the metric
operator in path-integral formulation of the theory, a treatment
of the systems defined on complex contours, and a careful study of
the geometry of the space of states that seems to be indispensable
for clarifying the potential application of quasi-Hermitian
Hamiltonians in generating fast quantum evolutions.
Finally, we provided a discussion of various known applications and
manifestations of pseudo-Hermitian quantum mechanics.
Among the subjects that we did not cover, and for which we only provide a few
references, are pseudo-supersymmetry and its extensions
\cite{p4,sr-jmp-2005,ss-jpa-2005,sr-jpa-2006}, weak pseudo-Hermiticity
\cite{solombrino-2002,bq-pla-2002,znojil-pla-2006,jmp-2006b}, and
the generalizations of $\cP\cT$-symmetry \cite{bbm-jpa-2002,jpa-2008a}.
This omission was mainly due to our intention not to treat
the results or methods with no direct or concrete implications for the
development of pseudo-Hermitian quantum mechanics. We particularly
avoided discussing purely formal results and speculative ideas.
\section*{Acknowledgments}
I would like to express my gratitude to Patrick Dorey and his
colleagues, Clare Dunning and Roberto Tate, for bringing to my
attention an error in an earlier version of this manuscript. I am
also indebted to Emre Kahya and Hossein Mehri-Dehnavi for helping me
find and correct a number of typos. This project was supported by
the Turkish Academy of Sciences (T\"UBA).
\begin{appendix}
\input{app}
\end{appendix}
\input{p68_bib}
\ed
\section*{Appendix: Reality of Expectation Values Implies
Hermiticity of the Observables} \label{appendix-1}
\noindent \textbf{Theorem~3:} \emph{Let ${\cal H}$ be a Hilbert
space with inner product $\br\cdot|\cdot\kt$ and $A:{\cal H}\to{\cal
H}$ be a (densely-defined, closed) linear operator satisfying ${\cal
D}(A)={\cal D}(A^\dagger)$, i.e., $A$ and its adjoint $A^\dagger$
have the same domain ${\cal D}(A)$. Then $A$ is a Hermitian operator
if and only if $\br\psi|A\psi\kt$ is real for all $\psi\in{\cal
D}(A)$.}
\noindent {\bf Proof:} If $A$ is Hermitian, we have
$\br\phi|A\psi\kt=\br A\phi|\psi\kt$ for all $\psi,\phi\in{\cal
D}(A)$. Then according to property (ii) of Subsection~\ref{sec-inn},
$\br\psi|A\psi\kt\in\R$ for all $\psi\in{\cal D}(A)$. Next, suppose
that for all $\psi\in{\cal D}(A)$, $\br\psi|A\psi\kt\in\R$. We will
show that this condition implies the Hermiticity of $A$ in two
steps.
\noindent \emph{Step 1}: Let $A_+:=\frac{1}{2}\,(A+A^\dagger)$ and
$A_-:=\frac{1}{2i}\,(A-A^\dagger)$. Then $A=A_++iA_-$, ${\cal
D}(A_\pm)={\cal D}(A)$, and $A_\pm$ are Hermitian operators. In view
of the first part of the theorem, this implies that
\be
\br\psi|A_\pm\psi\kt\in\R,~~~~~~\mbox{for all $\psi\in{\cal D}(A)$}.
\label{app2.1}
\ee
Furthermore, according to $A=A_++iA_-$ and the hypothesis of the
second part of the theorem,
$\br\psi|A_+\psi\kt+i\br\psi|A_-\psi\kt=\br\psi|A\psi\kt\in\R$. This
relation and (\ref{app2.1}) show that
\be
\br\psi|A_-\psi\kt=0~~~~\mbox{for all $\psi\in{\cal D}(A)$}.
\label{app-3}
\ee
\noindent \emph{Step 2}: Let $\phi,\psi$ be arbitrary elements of
${\cal D}(A)$, $\xi_\pm:=\phi\pm\psi$, and $\zeta_\pm=\phi\pm
i\psi$. Then a direct calculation, using the property (\emph{iii})
of Subsection~\ref{sec-inn}, shows that
$\br\phi|A_-\psi\kt=\frac{1}{4}\left(\br\xi_+|A_-\xi_+\kt
-\br\xi_-|A_-\xi_-\kt-i\br\zeta_+|A_-\zeta_+\kt+i
\br\zeta_-|A_-\zeta_-\kt\right)=0$,
where the last equality follows from (\ref{app-3}) and the fact that
$\xi_\pm,\zeta_\pm\in{\cal D}(A)$. This establishes
$\br\phi|A_-\psi\kt=0$ for all $\phi,\psi\in{\cal D}(A)$. In particular
setting $\phi=A_-\psi$, we find $\br A_-\psi|A_-\psi\kt=0$ which in
view of the property (\emph{i}) of Subsection~\ref{sec-inn} implies
$A_-\psi=0$ for all $\psi\in{\cal D}(A)$. Hence $A_-=0$, and
according to $A=A_++iA_-$, we finally have $A=A_+$. But $A_+$ is
Hermitian.~~~$\square$
\section{Introduction and Overview}
\label{sec1}
General Relativity (GR) and Quantum Mechanics (QM) are the most
important achievements of the twentieth century theoretical physics.
Their discovery has had an enormous impact on our understanding of
Nature. Ironically, these two pillars of modern physics are
incompatible both conceptually and practically. This has made their
unification into a more general physical theory the most fundamental
problem of modern theoretical physics. The unification of Special
Relativity and QM, which is by far an easier task, has been the
subject of intensive research since the late 1920s. It has led to the
formulation of various quantum field theories. A most successful
example is the Standard Model which provides a satisfactory
description of all available observational data in high energy
particle physics. In spite of the immense amount of
research activity on the subject, conducted
by some of the most capable theoretical physicists of our time, the
attempts at quantizing gravity have not been as successful. In fact,
one can claim with confidence that these attempts have so far failed
to produce a physical theory offering concrete experimentally
verifiable predictions. This state of affairs has, over the years,
motivated various generalizations of GR and QM. Although none of
these generalizations could be developed into a consistent physical
theory capable of replacing GR or QM, the hope that they might
facilitate the discovery of a unified theory of quantum gravity
still motivates research in this direction.
The development of the special relativistic quantum theories has
also involved attempts at generalizing QM. Among these is an idea
initially put forward by Dirac in 1942 \cite{Dirac-1942} and
developed by Pauli \cite{pauli-1943} into what came to be known as
the \emph{indefinite-metric} quantum theories
\cite{sudarshan,nagy,nakanishi}. This is a rather conservative
generalization of QM in which one considers in addition to the
physical states of the system a set of hypothetical states, called
\emph{ghosts}, whose function is mainly to improve the regularity
properties of the mathematical description of the physical model.
The indefinite-metric quantum theories lost their interest by mid
1970's and perhaps unfortunately never found a detailed coverage in
standard textbooks on relativistic quantum mechanics and quantum
field theory.\footnote{In fact, Pauli who had made fundamental
contributions to the very foundations of the subject had developed
strong critical views against it. For example in his Nobel Lecture
of 1946, he identifies a ``correct theory'' with one that does not
involve ``a hypothetical world'', \cite{pauli-nobel}. For a
comprehensive critical assessment of indefinite-metric quantum field
theories, see \cite{nakanishi}.}
A more recent attempt at generalizing QM is due to Bender and his
collaborators \cite{bbj,bbj-ajp} who adopted all its axioms except
the one that restricted the Hamiltonian to be Hermitian. They
replaced the latter condition with the requirement that the
Hamiltonian must have an exact ${\cal PT}$-symmetry. Here ${\cal P}$
and ${\cal T}$ are the parity and time-reversal operators whose
action on position wave functions $\psi(x)$ is given by $({\cal
P}\psi)(x):=\psi(-x)$ and $({\cal
T}\psi)(x):=\psi(x)^*$.\footnote{We use asterisk to denote
complex-conjugation.} The exact ${\cal PT}$-symmetry of a
Hamiltonian operator $H$ means that it has a complete set of ${\cal
PT}$-invariant eigenvectors $\psi_n$, i.e., ${\cal
PT}\psi_n=a_n\psi_n$ for some complex numbers $a_n$. This condition
assures the reality of the spectrum of $H$. A class of thoroughly
studied examples is provided by the ${\cal PT}$-symmetric
Hamiltonians of the form
\be
H_\nu=\frac{1}{2}\,p^2-(ix)^\nu,
\label{pt-sym-nu}
\ee
where $\nu$ is a real number not less than 2, and the eigenvalue
problem for $H_\nu$ is defined using an appropriate contour $\Gamma$
in the complex plane $\C$ which for $\nu<4$ may be taken as the real
line $\R$. A correct choice for $\Gamma$ assures that the spectrum
of $H$ is discrete, real, and positive,
\cite{bender-prl,bender-jmp,dorey,shin,jpa-2005a}. Another example
with identical spectral properties is the ${\cal PT}$-symmetric
cubic anharmonic oscillator,
\be
H=\frac{1}{2}\,p^2+\frac{1}{2}\,\mu^2 x^2+i\epsilon x^3,
\label{pt-sym-3}
\ee
whose coupling constants $\mu$ and $\epsilon$ are real and its
eigenvalue problem is defined along the real axis ($\Gamma=\R$),
\cite{bender-prd-2004,jpa-2005b}.
This ${\cal PT}$-symmetric modification of QM leads to an
indefinite-metric quantum theory
\cite{japaridze,tanaka-2006,cjp-2006}, if one endows the Hilbert
space with the indefinite inner product,
\be
\br\cdot|\cdot\kt_{_{\cal P}}:=\br\cdot|{\cal P}\cdot\kt,
\label{inn-PT}
\ee
known as the ${\cal PT}$-inner product \cite{bbj}. The symbol
$\br\cdot|\cdot\kt$ that appears in (\ref{inn-PT}) stands for the
$L^2$-inner product:
$\br\phi|\psi\kt:=\int_\Gamma\phi(z)^*\psi(z)dz$ that defines the
Hilbert space $L^2(\Gamma)$ of square-integrable functions
$\psi:\Gamma\to\C$, where $\Gamma$ is the contour in complex plane
that specifies the ${\cal PT}$-symmetric model \cite{jpa-2005a}.
The main advantage of the indefinite inner product (\ref{inn-PT})
over the positive-definite inner product $\br\cdot|\cdot\kt$ is that
the former is invariant under the time-evolution generated by the
Schr\"odinger equation \cite{znojil-pr-2001,p1,jmp-2004}, i.e., if
$\phi(t)$ and $\psi(t)$ are solutions of the Schr\"odinger equation
for the ${\cal PT}$-symmetric Hamiltonian $H$,
$\br\phi(t)|\psi(t)\kt_{_{\cal P}}$ does not depend on time.
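The calculation behind this statement is short enough to recall here.
Assuming that $H$ satisfies $H^\dagger={\cal P}H{\cal P}^{-1}$, as is the
case for (\ref{pt-sym-nu}) and (\ref{pt-sym-3}) with $\Gamma=\R$, and using
the Schr\"odinger equation $i\hbar\,\dot\psi(t)=H\psi(t)$, one finds
\be
i\hbar\,\frac{d}{dt}\,\br\phi(t)|\psi(t)\kt_{_{\cal P}}=
\br\phi(t)|\big({\cal P}H-H^\dagger{\cal P}\big)\psi(t)\kt=0.
\ee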
In order to employ the standard formulation of indefinite metric
quantum theories \cite{nagy} for a ${\cal PT}$-symmetric model we
proceed as follows \cite{japaridze}.
\begin{enumerate}
\item
We split the space ${\cal H}$ of state-vectors into the
subspaces ${\cal H}_\pm:=\{\psi\in{\cal H}\,|\,{\rm sgn}(
\br\psi|\psi\kt_{_{\cal P}})=\pm\}$,
where sgn$(a):=a/|a|$ if $a$ is a nonzero real
number and sgn$(0):=0$. ${\cal H}_\pm$ are orthogonal subspaces
in the sense that for all $\psi_\pm\in{\cal H}_\pm$,
$\br\psi_-|\psi_+\kt_{_{\cal P}}=0$.\footnote{The assumption of the
existence of such an orthogonal decomposition is referred to as
``decomposability'' of the model in indefinite-metric theories,
\cite{nagy}. For ${\cal PT}$-symmetric systems considered in the
literature, this assumption is valid if there is a complete basis of
common eigenvectors of the Hamiltonian and ${\cal PT}$.}
\item We impose a superselection rule that
forbids interactions mixing the elements of $\cH_-$ and $\cH_+$ and
try to devise a solution for the difficult problem of providing a
physical interpretation of the theory \cite{nagy,nakanishi}. The
simplest way of dealing with this problem is to
identify the elements of $\cH_+$ with physical state-vectors
\cite{wald,itzikson-zuber} and effectively discard the rest of
state-vectors as representing unphysical or ghost states.
\end{enumerate}
An alternative formulation of the theory that avoids the
interpretational difficulties of indefinite-metric theories is the
one based on the construction of a genuine positive-definite inner
product on $\cH$. This construction was initially obtained in
\cite{p2} as a by-product of an attempt to derive a necessary and
sufficient condition for the reality of the spectrum of a general
Hamiltonian operator $H$ that possesses a complete set of
eigenvectors \cite{p1,p2,p3}. Under the assumption that the spectrum
of $H$ is discrete, one can show that it is real if and only if
there is a positive-definite inner product $\br\cdot|\cdot\kt_+$
that makes it Hermitian, i.e., $\br\cdot|H\cdot\kt_+=\br
H\cdot|\cdot\kt_+$, \cite{p2,p3}. The proof of this statement
involves an explicit construction of $\br\cdot|\cdot\kt_+$. This
inner product depends on the choice of the Hamiltonian through its
eigenvectors. Hence it is not unique
\cite{p4,jmp-2003,geyer-cjp,jpa-2006a}.
In \cite{bbj} the authors propose a different approach to the
problem of identifying an appropriate inner product for the
$\cP\cT$-symmetric Hamiltonians such as (\ref{pt-sym-nu}). They
introduce a generic symmetry of these Hamiltonians which they term
as $\cC$-symmetry and construct a class of positive-definite inner
products, called the \emph{${\cal CPT}$-inner products}, that, as we
show in Section~3.4, turn out to coincide with the inner products
$\br\cdot|\cdot\kt_+$, \cite{jmp-2003,jpa-2005a}. The approach of
\cite{bbj} may be related to a much older construction originally
proposed in the context of the indefinite-metric quantum theories
\cite{nagy,nakanishi}. It involves the following steps.
\begin{enumerate}
\item Suppose that $\cH=\cH_+\oplus\cH_-$ where $\cH_\pm$ are the
orthogonal subspaces we defined above, and that both $\cH_\pm$
admit a basis consisting of the eigenvectors of $H$.
\item Let $\Pi:{\cal H}\to{\cal H}$ be the projection operator
onto $\cH_+$, so that for all $\psi\in{\cal H}$,
$\psi_+:=\Pi\psi\in{\cal H}_+$
and $\psi_-:=\psi-\psi_+\in{\cal H}_-$. Clearly $\Pi^2=\Pi$ and
$\Pi\psi_-=0$.
\item Endow ${\cal H}$ with the positive-definite inner product:
$(\phi,\psi):=\br\phi_+|\psi_+\kt_{_{\cal P}}-
\br\phi_-|\psi_-\kt_{_{\cal P}}.$
\item Let $\cC:\cH\to\cH$ be defined by ${\cal
C}\psi:=\psi_+-\psi_-$. Then, in view of the orthogonality of ${\cal
H}_\pm$,
\be
(\phi,\psi)=\br\phi|{\cal C}\psi\kt_{_{\cal P}}=
\br{\cal C}\phi|\psi\kt_{_{\cal P}},
\label{inn-cpt-1}
\ee
\end{enumerate}
It is not difficult to see that ${\cal C}=2\Pi-I$, where $I$ stands
for the identity operator acting in ${\cal H}$, i.e., the operator
that leaves all the elements of ${\cal H}$ unchanged. Obviously for
all $\psi\in{\cal H}_\pm$, ${\cal C}\psi=\pm\psi$. Hence, ${\cal C}$
is a grading operator associated with the direct sum decomposition
${\cal H}={\cal H}_+\oplus{\cal H}_-$ of ${\cal H}$. Furthermore, in
view of the assumption~1 above, the eigenvectors of $H$ have a
definite grading. This is equivalent to the condition that $\cC$
commutes with the Hamiltonian operator, $[{\cal C},H]=0$,
\cite{AGK}. It turns out that the ${\cal CPT}$-inner product
introduced in \cite{bbj} coincides with the inner product
(\ref{inn-cpt-1}).
In \cite{bbj,bbj-ajp}, the authors use the ${\cal CPT}$-inner
product to formulate a unitary quantum theory based on the ${\cal
PT}$-symmetric Hamiltonians (\ref{pt-sym-nu}). They identify the
observables $O$ of the theory with the ${\cal CPT}$-symmetric
operators\footnote{To ensure that the spectrum of such a ${\cal
CPT}$-symmetric operator $O$ is real, one demands that $O$ has an
exact ${\cal CPT}$-symmetry, i.e., its eigenstates are left
invariant under the action of ${\cal CPT}$. This does not however
ensure the completeness of the eigenvectors of $O$.}, in particular
\be
{\cal CPT}\: O\: {\cal CPT}=O.
\label{spt-sym-add}
\ee
This definition is motivated by the demand that the structure of the
theory must not involve mathematical operations such as Hermitian
conjugation and be determined only using physical conditions. The
definition (\ref{spt-sym-add}) does fulfil this demand\footnote{The
authors of \cite{bbj} emphasize this point by stating that: ``In
effect, we replace the mathematical condition of Hermiticity, whose
physical content is somewhat remote and obscure, by the physical
condition of space-time and charge-conjugation symmetry.'' One must
note however that in this theory the usual coordinate operator $x$
no longer represents a physical observable. As a result ${\cal P}$
does not effect a reflection in the physical space, and there is no
reason why one should still refer to ${\cal PT}$-symmetry as the
physical ``space-time reflection symmetry'' as done in
\cite{bbj,bbj-ajp}.}, but it suffers from a serious dynamical
inconsistency in the sense that in the Heisenberg picture an
operator $O(t):=e^{itH}O(0)e^{-itH}$ that commutes with $\cC\cP\cT$
at $t=0$ may not commute with this operator at $t>0$. Therefore, in
general, under time-evolution an observable can become unobservable
\cite{critique,comment}! This inconsistency rules out
(\ref{spt-sym-add}) as an acceptable definition of a physical
observable. As noticed in \cite{bbj-erratum,bbrr}, it can be
avoided, if one replaces the symmetry condition (\ref{spt-sym-add})
with:
\be
{\cal CPT}\: O\:{\cal CPT}=O^T,
\label{revised}
\ee
where all operators are identified with their matrix representation
in the coordinate-basis and $O^T$ stands for the transpose of $O$.
In particular, $\br x|O^T|x'\kt:=\br x'|O|x\kt$.
The presence of the mathematical operation of transposition in
(\ref{revised}) shows that apparently the theory could not be
defined just using ``the physical condition of space-time and
charge-conjugation symmetry'' as was initially envisaged,
\cite{bbj,bbj-ajp}. Note also that (\ref{revised}) puts an implicit
and difficult-to-justify restriction on the Hamiltonian $H$. Being
an observable commuting with ${\cal CPT}$, $H$ must satisfy $H^T=H$,
i.e., it is necessarily symmetric!\footnote{In mathematical
literature the term ``symmetric operator'' is usually used for a
different purpose, as discussed in footnote~\ref{symmetric-op}
below. To avoid possible confusion we will not adopt this
terminology.} Therefore (\ref{revised}) cannot be used to determine
the observables of a theory that has a nonsymmetric Hamiltonian. The
restriction to symmetric Hamiltonians may be easily lifted, if one
is willing to adopt the conventional definition of the observables,
namely identifying them with the operators that are Hermitian with
respect to the ${\cal CPT}$-inner product \cite{critique,cjp-2004b}:
\be
(\cdot,O\cdot)=(O\cdot,\cdot).
\label{Hermitian-cpt}
\ee
Indeed this definition is forced upon us by a well-known
mathematical theorem that we present a detailed proof of in the
appendix. It states that \emph{if a linear operator $O$ has real
expectation values computed using a given inner product, then $O$ is
necessarily Hermitian with respect to this inner product}. This
shows that the requirement of the Hermiticity of observables and in
particular the Hamiltonian has a simple ``physical''
justification.\footnote{What is however not dictated by this
requirement is the choice of the inner product.}
An important motivation for considering this so-called \emph{${\cal
PT}$-symmetric Quantum Mechanics} is provided by an interesting idea
that is rooted in special relativistic local quantum field theories
(QFT). Among the most celebrated results of QFT is the ${\cal {\rm
C}PT}$-theorem. It states that every field theory satisfying the
axioms of QFT is ${\cal {\rm C}PT}$-invariant, \cite{haag}, where
${\rm C}$ is the charge-conjugation operator. Clearly replacing the
axiom that the Hamiltonian is Hermitian with the statement of the
${\cal {\rm C}PT}$-theorem might lead to a generalization of QFT.
The implementation of this idea in a nonrelativistic setting
corresponds to the requirement that the Hamiltonian possesses an
exact ${\cal PT}$-symmetry. This in turn allows for the construction
of a ${\cal C}$-operator that similarly to the charge-conjugation
operator ${\rm C}$ of QFT squares to identity and generates a
symmetry of the system. The idea that this establishes a
nonrelativistic analog of the ${\cal {\rm C}PT}$-theorem is quite
tempting. Yet there are clear indications that this is not really
the case. For example, unlike the charge-conjugation operator of
QFT, ${\cal C}$ depends on the choice of the Hamiltonian. It turns
out that in fact this operator does not play the role of the
relativistic charge-conjugation operator; it is merely a useful
grading operator for the Hilbert space.\footnote{One can more
appropriately compare ${\cal C}$ with the chirality operator
($\gamma^5$) for the Dirac spinors.} In this sense the adopted
terminology is rather unfortunate.
One of the aims of the present article is to show that ${\cal
PT}$-symmetric QM is an example of a more general class of theories,
called \emph{Pseudo-Hermitian Quantum Mechanics}, in which ${\cal
PT}$-symmetry does not play a basic role and one does not need to
introduce a ${\cal C}$-operator to make the theory well-defined. The
analogs of ${\cal PT}$ and ${\cal C}$ operators can nevertheless be
defined in general \cite{jmp-2003}, but they do not play a
fundamental role. All that is needed is to determine the class of
non-Hermitian Hamiltonians that are capable of generating a unitary
evolution and a procedure that associates to each member of this
class a positive-definite inner product that renders it Hermitian.
It turns out that there is always an infinite class of
positive-definite inner products satisfying this condition. Each of
these defines a separate physical Hilbert space with a complete set
of observables. In this way one obtains a set of quantum systems
that have the same Hamiltonian but different Hilbert spaces.
Therefore, they are dynamically equivalent but kinematically
distinct.
In order to elucidate the conceptual foundations of Pseudo-Hermitian
QM we will next examine some of the basic properties of the
mathematical notions of the ``transpose'' and
``Hermitian-conjugate'' of a linear operator. For clarity of the
presentation we will only consider the operators that act in the
space of square-integrable functions $L^2(\R)$. The discussion may
be generalized to square-integrable functions defined on a complex
contour \cite{jpa-2005a}.
In the literature on ${\cal PT}$-symmetric QM, notably
\cite{bbj,bbj-ajp,bbrr}, the transpose $O^T$ and Hermitian-conjugate
$O^\dagger$ of a linear operator $O$ are respectively defined with
respect to the coordinate-basis, $\{|x\kt\}$, according to
\bea
\br x_1|O^T|x_2\kt&:=&\br x_2|O|x_1\kt,
\label{transpose}\\
\br x_1|O^\dagger|x_2\kt&:=&\br x_2|O|x_1\kt^*,
\label{Hermitian}
\eea
where $x_1,x_2$ are arbitrary real numbers. Therefore the terms
``symmetric'' and ``Hermitian'' respectively refer to the conditions
$\br x_1|O|x_2\kt=\br x_2|O|x_1\kt$ and $\br x_1|O|x_2\kt^*=\br
x_2|O|x_1\kt$. These definitions reflect the inclination to treat
operators as matrices. This is certainly permissible provided that
one specifies the particular basis one uses for this purpose. In
this sense the following equivalent definitions are preferable.
\be
O^T:=\int dx_1\int dx_2\: \br x_2|O|x_1\kt~ |x_1\kt\br x_2|,~~~~~
O^\dagger:=\int dx_1\int dx_2\: \br x_2|O|x_1\kt^*~
|x_1\kt\br x_2|.
\label{spec}
\ee
A nice feature of (\ref{Hermitian}) that is not shared with
(\ref{transpose}) is that it is invariant under the basis
transformations that map $\{|x\kt\}$ onto any \emph{orthonormal
basis}. For example one can easily show that if (\ref{Hermitian})
holds, so do
\be
\br p_1|O^\dagger|p_2\kt=\br p_2|O|p_1\kt^*~~~~~{\rm and}~~~~~
O^\dagger=\int dp_1\int dp_2 \br p_2|O|p_1\kt^*
|p_1\kt\br p_2|.
\label{Hermitian-p}
\ee
This invariance under orthonormal basis transformations stems from
the fact that $O^\dagger$ admits a basis-independent definition: It
is the linear operator satisfying
\be
\br\phi|O^\dagger\psi\kt=\br O\phi|\psi\kt,
\label{adj}
\ee
i.e., the \emph{adjoint operator} for $O$.\footnote{A rigorous
definition of the adjoint operator is given in
Subsection~\ref{sec-Hermitian}.}
The notions of ``transpose'' and ``symmetric operator'' introduced
above and employed in \cite{bbj,bbj-ajp,bbj-erratum,bbrr} do not
share this invariance property of ``Hermitian-conjugate'' and
``Hermitian operator''. For example, it is easy to see that $\br
x_1|(ip)|x_2\kt=-\br x_2|(ip)|x_1\kt$ while $\br p_1|(ip)|p_2\kt=\br
p_2|(ip)|p_1\kt$. Therefore, $ip$ is represented by a symmetric
matrix in the $p$-basis while it is represented by an antisymmetric
matrix in the $x$-basis. This shows that there is no
basis-independent notion of the transpose of an operator or a
symmetric operator.\footnote{One can define a notion of the
transpose of a linear operator $O$ acting in a complex inner product
space ${\cal V}$ in terms of an arbitrary antilinear involution
$\tau:{\cal V}\to{\cal V}$ according to $O^T=\tau O^\dagger \tau$,
\cite{akhiezer-glazman}. A function $\tau:{\cal V}\to{\cal V}$ is
called an \emph{antilinear operator} if
$\tau(a\psi+b\phi)=a^*\tau\psi+b^*\tau\phi$ for all complex numbers
$a,b$ and all elements $\psi,\phi$ of ${\cal V}$. It is called an
\emph{involution} if $\tau^2=I$, where $I:{\cal V}\to{\cal V}$ is
the identity operator. The choice of a basis to define $O^T$ is
equivalent to the choice of an antilinear involution $\tau$. The
notion of the transpose used in \cite{bbj,bbj-ajp,bbrr} corresponds
to choosing the time-reversal operator ${\cal T}$ as
$\tau$.\label{antilinear}}
Obviously once we specify a basis, there is no danger of using
definition (\ref{transpose}). But we must keep in mind that any
theory in which one uses the notion of transposition in the sense of
(\ref{transpose}) involves the implicit assumption that the
coordinate-basis is a preferred basis. The use of the notion of
Hermitian-conjugation as defined by (\ref{Hermitian}) does not rely
on such an assumption. As we will explain in the following section
the choice of an orthonormal basis is equivalent to the choice of an
inner product. This is why one can define $O^\dagger$ using its
basis-independent defining relation (\ref{adj}) which only involves
the inner product $\br\cdot|\cdot\kt$. In summary, while the use of
the terms ``transpose'' and ``symmetric operator'' involves making a
particular choice for a preferred basis, the use of the term
``Hermitian-conjugate'' and ``Hermitian operator'' involves making a
particular choice for an inner product.
In conventional QM the inner product is fixed from the outset. Hence
the notions of Hermitian-conjugation and Hermitian operator are
well-defined. The opposite is true about the notions of
transposition and symmetric operator. This does not cause any
difficulty, because they never enter into quantum mechanical
calculations, and in principle one does not need to introduce them
at all. We will see that the same is the case in Pseudo-Hermitian
QM. In particular, in the discussion of ${\cal PT}$-symmetric
systems, there is no need to identify physical observables using
(\ref{revised}).
The main reason for making a universal and preassigned choice for
the inner product in QM is the curious fact that up to
unitary-equivalence there is a unique inner product.\footnote{This
will be explained in detail in Subsection~\ref{sec-unitary}.} This
means that using different inner products leads to physically
identical theories, or more correctly to different representations
of a single physical theory. In conventional QM, one eliminates the
chance of employing these alternative representations by adopting
the usual ($L^2$-) inner product as the only viable choice. The
situation resembles a gauge theory in which one fixes a gauge from
the outset and then forgets about the gauge symmetry. This will have
no effect on the physical quantities computed using such a theory,
but it is clearly not recommended. It is quite possible that an
alternative choice of gauge would facilitate a particular
calculation.
We wish to argue that because \emph{no one has ever made an
independent measurement of the inner product of the Hilbert space},
it must be kept as a degree of freedom of the formulation of the
theory. This is the basic principle underlying Pseudo-Hermitian QM.
We will see that any inner product may be defined in terms of a
certain linear operator $\eta_+$. It is this so-called \emph{metric
operator} that determines the kinematics of pseudo-Hermitian quantum
systems. The Hamiltonian operator $H$ that defines the dynamics is
linked to the metric operator via the pseudo-Hermiticity relation,
\be
H^\dagger=\eta_+ H\eta_+^{-1}.
\label{ph}
\ee
We will explore some of the consequences of this equation whose
significance has not been fully noticed or appreciated in its
earlier investigations, notably in the context of the
indefinite-metric quantum theories \cite{pauli-1943}.
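Before proceeding let us note that the content of (\ref{ph}) is easily
visualized in a finite-dimensional setting. The following two-level example
is ours and is included only to fix ideas. Let $H:\C^2\to\C^2$ be given by
\be
H=\left(\begin{array}{cc} 0 & 1\\ k & 0\end{array}\right),~~~~~k\in\R^+,
\ee
which fails to be Hermitian with respect to the Euclidean inner product of
$\C^2$ unless $k=1$, but has the real eigenvalues $\pm\sqrt k$. Taking
$\eta_+={\rm diag}(k,1)$, which is a positive-definite operator, one readily
checks that $\eta_+H=H^\dagger\eta_+$, i.e., (\ref{ph}) holds. Consequently
$H$ is Hermitian with respect to the modified inner product
$\br\cdot|\eta_+\cdot\kt$, for $\br\phi|\eta_+H\psi\kt=\br
H\phi|\eta_+\psi\kt$ for all $\phi,\psi\in\C^2$.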
We wish to point out that there is a very large number of
publications on the subject of this article. Many of these focus on
the mathematical issues related to the investigation of the spectrum
of various non-Hermitian operators or formalisms developed to study
such problems. Here we will not deal with these issues. The
interested reader may consult the review article
\cite{dorey-review}. Another related line of research is the
mathematical theory of linear spaces with an indefinite metric and
its applications \cite{bognar,azizov}. This is also beyond the aim
and the scope of the present article. There is a series of review
articles by Bender and his collaborators
\cite{bbj-ajp,bbrr,b-cp-2005,b-ijmpa-2005,b-rpp-2007} that also
address the physical aspects of the subject. The approach pursued in
these articles is based on the use of the $\cC\cP\cT$-inner products
and the definition of observables given by (\ref{revised}). This
restricts the domain of validity of their results to symmetric
Hamiltonians. The discussion of the classical limit of
$\cP\cT$-symmetric systems offered in these articles is also not
satisfactory, because it does not involve a quantization scheme that
would relate classical and quantum observables.
It is our opinion that gaining a basic understanding of the subject
demands a careful study of the underlying mathematical structures
without getting trapped in the physically irrelevant mathematical
details and technicalities. The need for a comprehensive and
readable treatment of basic mathematical notions and their physical
consequences has not been met by any of the previously published
reviews. In the first part of the present article (Sections~2 and
3), we intend to address this need. Here we only discuss the
mathematical tools and results that are necessary for addressing the
conceptual issues of direct relevance to the physical aspects of the
subject. In Section~3, we use the mathematical machinery developed
in Section~2 to present a complete formulation of pseudo-Hermitian
QM and its connection with $\cP\cT$- and $\cC$-symmetries. The
second part of the article (Sections 4-9) aims to survey various
recent developments. In Section~4 we survey different methods of
computing metric operators. In Sections 5-8 we explore systems
defined on complex contours, the classical limit of pseudo-Hermitian
quantum systems, the subtleties involving time-dependent
Hamiltonians and the path-integral formulation of the theory, and
the quantum Brachistochrone problem, respectively. In Section~9 we
discuss some of the physical applications of pseudo-Hermitian
Hamiltonians, and in Section~10 we present our concluding remarks.
\section{Mathematical Tools and Conceptual Foundations} \label{sec2}
In this section we survey the necessary mathematical tools and
elaborate on a number of conceptual issues that are helpful in
clarifying various existing misconceptions on the subject. We also
offer a thorough discussion of the motivation for considering a more
general formulation of QM.
One of the axioms of QM is that pure physical states of a quantum
system are rays in a Hilbert space ${\cal H}$. Each ray may be
determined in a unique manner by a nonzero element $\psi$ of ${\cal
H}$ which we call a \emph{state-vector}. The physical quantities
associated with a pure state are computed using a corresponding
state-vector and the inner product of the Hilbert space. We begin
our discussion with a precise description of Hilbert spaces, inner
products, bases, Hermitian and unitary operators, biorthonormal
systems, and their relevance to our investigation.
We will use the following notations and conventions: $\R$, $\R^+$,
$\C$, $\Z$, $\Z^+$, $\N$ denote the sets of real numbers, positive
real numbers, complex numbers, integers, positive integers, and
nonnegative integers (natural numbers), respectively. The symbol
``$~:=~$'' means that the left-hand side is defined to be the
right-hand side, ``$~=:~$'' means that the right-hand side is defined
to be the left-hand side, and
``$~\in~$'' and ``$~\subseteq~$'' respectively stand for ``is an
element of'' and ``is a subset of''. Throughout this paper we will
only consider time-independent Hamiltonian operators unless
explicitly stated otherwise.
\subsection{Hilbert Spaces and Riesz Bases}
\label{sec-inn}
Consider a complex vector space ${\cal V}$ and a function
$\br\cdot|\cdot\kt:{\cal V}\times{\cal V}\to\C$ that assigns to any
pair $\psi,\phi$ of elements of ${\cal V}$ a complex number
$\br\psi|\phi\kt$. Suppose that $\br\cdot|\cdot\kt$ has the
following properties.
\begin{itemize}
\item[]({\em i}) It is \emph{positive-definite}, i.e., for all
nonzero elements $\psi$ of ${\cal V}$, $\br\psi|\psi\kt$ is a
positive real number, and it vanishes if and only if $\psi=0$, where
we use $0$ also for the zero vector.
\item[]({\em ii}) It is Hermitian, i.e., for any pair
$\psi,\phi$ of elements of ${\cal V}$,
$\br\psi|\phi\kt^*=\br\phi|\psi\kt$.
\item[]({\em iii}) It is linear in its second slot, i.e., for all
$\psi,\phi,\chi\in{\cal V}$ and all $a,b\in\C$,
$\br\psi|a\phi+b\chi\kt=a\br\psi|\phi\kt+b\br\psi|\chi\kt$.
\end{itemize}
Then $\br\cdot|\cdot\kt$ is called an \emph{inner
product}\footnote{We use the terms ``inner product'' and
``positive-definite inner product'' synonymously.} on ${\cal V}$,
and the pair $({\cal V},\br\cdot|\cdot\kt)$ is called an \emph{inner
product space.}
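For example, on ${\cal V}=\C^2$, besides the standard Euclidean
expression $\br\vec z|\vec w\kt:=z_1^*w_1+z_2^*w_2$, the prescription
\[\br\vec z|\vec w\kt':=2\,z_1^*w_1+z_2^*w_2,\]
where $z_1,z_2$ and $w_1,w_2$ are the components of $\vec z,\vec
w\in\C^2$, also satisfies properties (\emph{i}) -- (\emph{iii}) and
hence defines an inner product. A given complex vector space
therefore admits infinitely many inner products; the relation
between them will be clarified in Subsection~\ref{sec-metric}.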
An inner product $\br\cdot|\cdot\kt$ on ${\cal V}$ assigns to each
element $\psi$ of ${\cal V}$ a nonnegative real number,
$\parallel\psi\parallel:=\sqrt{\br\psi|\psi\kt}$, that is called the
\emph{norm} of $\psi$. We can use the norm to define a notion of
distance between elements of ${\cal V}$, according to $\parallel
\psi-\phi\parallel$, and develop analysis and geometry on the inner
product space $({\cal V},\br\cdot|\cdot\kt)$.
A \emph{Hilbert space} ${\cal H}$ is an inner product space which
fulfills an additional technical condition, namely that its norm
defines a \emph{complete metric space}, i.e., for any infinite
sequence $\{\psi_k\}$ of elements $\psi_k$ of ${\cal H}$, the
condition that $\lim_{j,k\to\infty}\parallel
\psi_j-\psi_k\parallel=0$ implies that $\{\psi_k\}$ converges to an
element $\psi$ of ${\cal H}$;
$\lim_{k\to\infty}\parallel\psi-\psi_k\parallel=0$. In other words,
a Hilbert space is a complete inner product space.
A subset ${\cal S}$ of a Hilbert space ${\cal H}$ is said to be
\emph{dense}, if every element of ${\cal H}$ may be obtained as the
limit of a sequence of elements of ${\cal S}$. A Hilbert space is
said to be \emph{separable}, if it has a countable dense subset. It
turns out that ${\cal H}$ is separable if and only if it has a
countable \emph{basis}. The latter is a sequence $\{\chi_n\}$ of
linearly independent elements of ${\cal H}$ such that the set of its
finite linear combinations,
\be
{\cal L}(\{\chi_n\})=\left\{\sum_{n=1}^K c_n\chi_n~|~
K\in\Z^+,~c_n\in\C\right\},
\label{finite-span}
\ee
is a dense subset of ${\cal H}$. For an infinite-dimensional Hilbert
space ${\cal H}$, $\{\chi_n\}$ is an infinite sequence and the
assertion that ${\cal L}(\{\chi_n\})$ is a dense subset means that
every element $\psi$ of ${\cal H}$ is the limit of a convergent
series of the form $\sum_{n=1}^\infty c_n\chi_n$ whose coefficients
$c_n$ are assumed to be uniquely determined by $\psi$.\footnote{This
notion of basis is sometimes called a \emph{Schauder basis}
\cite{young}. It is not to be confused with the algebraic or Hamel
basis which for infinite-dimensional Hilbert spaces is always
uncountable \cite{halmos}.}
It is not difficult to show that any finite-dimensional inner
product space is both complete and separable. In this sense
infinite-dimensional separable Hilbert spaces are natural
generalizations of the finite-dimensional inner product spaces. In
the following we will use the label $N$ to denote the dimension of
the Hilbert space in question. $N=\infty$ will refer to an
infinite-dimensional separable Hilbert space.
An important difference between finite and infinite-dimensional
Hilbert spaces is that the definition of a basis in a
finite-dimensional Hilbert space does not involve the inner product
while the opposite is true for infinite-dimensional Hilbert
spaces. The requirement that (\ref{finite-span}) be a dense subset
makes explicit use of the norm. Therefore whether a given sequence
of linearly independent vectors forms a basis of an
infinite-dimensional Hilbert space ${\cal H}$ depends in a crucial
manner on the inner product of ${\cal H}$.
Given a basis $\{\chi_n\}$ of a separable Hilbert space ${\cal H}$,
one can apply the Gram-Schmidt process \cite{roman,cheney} to
construct an \emph{orthonormal basis}, i.e., a basis $\{\xi_n\}$
satisfying
\be
\br\xi_m|\xi_n\kt=\delta_{mn}~~~~\mbox{for all
$m,n\in\{1,2,3,\cdots,N\}$,}
\label{ortho}
\ee
where $\delta_{mn}$ denotes the Kronecker delta symbol:
$\delta_{mn}:=0$ if $m\neq n$ and $\delta_{nn}:=1$ for all $n$. For
an orthonormal basis $\{\xi_n\}$, the coefficients $c_n$ of the
basis expansion,
\be
\psi=\sum_{n=1}^N c_n\xi_n,
\label{expand-1}
\ee
of the elements $\psi$ of ${\cal H}$ are given by
\be
c_n=\br\xi_n|\psi\kt.
\label{expand-2}
\ee
Furthermore, in view of (\ref{ortho}) -- (\ref{expand-2}),
\be
\br\phi|\psi\kt=\sum_{n=1}^N
\br\phi|\xi_n\kt\br\xi_n|\psi\kt~~~~
\mbox{for all $\phi,\psi\in{\cal H}$}.
\label{norm}
\ee
In particular,
\be
\parallel\psi\parallel^2=\sum_{n=1}^N |\br\xi_n|\psi\kt|^2
~~~~\mbox{for all $\psi\in{\cal H}$}.
\label{norm2}
\ee
Eq.~(\ref{norm}) implies that the operator ${\cal I}$ defined by
${\cal I}\,\psi:=\sum_{n=1}^N \br\xi_n|\psi\kt\,\xi_n$ equals the
identity operator $I$ acting in ${\cal H}$; ${\cal
I}=I$.\footnote{Let $\psi_1,\psi_2\in{\cal H}$ be such that for all
$\phi\in{\cal H}$, $\br\phi|\psi_1\kt=\br\phi|\psi_2\kt$. Then
$\br\phi|\psi_1-\psi_2\kt=0$ for all $\phi\in{\cal H}$. Setting
$\phi=\psi_1-\psi_2$, one then finds
$\parallel\psi_1-\psi_2\parallel^2=0$ which implies $\psi_1=\psi_2$.
In view of this argument, $\br\phi|{\cal
I}\,\psi\kt=\br\phi|\psi\kt$ for all $\psi,\phi\in{\cal H}$ implies
${\cal I}\,\psi=\psi$ for all $\psi\in{\cal H}$. Hence ${\cal I}=I$.
\label{trick}} This is called the \emph{completeness relation} which
in Dirac's bra-ket notation takes the familiar form: $\sum_{n=1}^N
|\xi_n\kt\br\xi_n|=I$.
Next, we wish to examine if \emph{given a basis $\{\zeta_n\}$ of a
separable Hilbert space ${\cal H}$, there is an inner product
$(\cdot|\cdot)$ on ${\cal H}$ with respect to which $\{\zeta_n\}$ is
orthonormal}. Because $\{\zeta_n\}$ is a basis, for all
$\psi,\phi\in{\cal H}$ there are unique complex numbers $c_n,d_n$
such that $\psi=\sum_{n=1}^Nc_n\zeta_n$ and
$\phi=\sum_{n=1}^Nd_n\zeta_n$. We will attempt to determine
$(\psi|\phi)$ in terms of $c_n$ and $d_n$.
First, consider the finite-dimensional case, $N<\infty$. Then, in
view of Eq.~(\ref{norm}), the condition that $\{\zeta_n\}$ is
orthonormal with respect to $(\cdot|\cdot)$ defines the latter
according to
\be
(\psi|\phi):=\sum_{n=1}^N c_n^*d_n.
\label{inn-basis}
\ee
We can easily check that $(\cdot|\cdot)$ possesses the defining
properties (\emph{i}) -- (\emph{iii}) of an inner product and
satisfies $(\zeta_m|\zeta_n)=\delta_{mn}$. It is also clear from
Eq.~(\ref{norm}) that any other inner product with this property
must satisfy (\ref{inn-basis}). This shows that $(\cdot|\cdot)$ is
the unique inner product that renders $\{\zeta_n\}$ orthonormal.
The case $N=\infty$ may be similarly treated, but in general there
is no guarantee that the right-hand side of (\ref{inn-basis}) is a
convergent series. In fact, it is not difficult to construct
examples for which it is divergent. Therefore, an inner product that
makes an arbitrary basis $\{\zeta_n\}$ orthonormal may not exist.
The necessary and sufficient condition for the existence of such an
inner product $(\cdot|\cdot)$ is that $\psi=\sum_{n=1}^\infty
c_n\zeta_n$ implies $\sum_{n=1}^\infty |c_n|^2<\infty$ for all
$\psi\in{\cal H}$.\footnote{If $(\cdot|\cdot)$ exists,
(\ref{inn-basis}) must hold because $\{\zeta_n\}$ is orthonormal
with respect to $(\cdot|\cdot)$. Hence the left-hand side of
(\ref{inn-basis}) is well-defined and its right-hand must be
convergent. In particular, for $\phi=\psi$ we find that
$(\psi|\psi)=\sum_{n=1}^\infty |c_n|^2$ must be finite. This
establishes the necessity of the above condition. Its sufficiency
follows from the inequality: For all $K\in\Z^+$, $|\sum_{n=1}^K
c_n^*d_n|^2\leq \sum_{n=1}^\infty |c_n|^2 \sum_{m=1}^\infty
|d_m|^2<\infty$.} Furthermore, we shall demand that the inner
product space ${\cal H}'$ obtained by endowing the underlying vector
space of ${\cal H}$ with the inner product $(\cdot|\cdot)$ is a
Hilbert space. As we will discuss in Subsection~\ref{sec-unitary},
any two infinite-dimensional separable Hilbert spaces, in particular
${\cal H}$ and ${\cal H}'$, have the same topological properties,
i.e., the set of open subsets of ${\cal H}$ coincide with those of
${\cal H}'$. This restricts $(\cdot|\cdot)$ to be
\emph{topologically equivalent} to $\br\cdot|\cdot\kt$, i.e., there
are positive real numbers $c_1$ and $c_2$ satisfying
$c_1\br\psi|\psi\kt\leq (\psi|\psi)\leq c_2 \br\psi|\psi\kt$ for all
$\psi\in{\cal H}$. It turns out that the inner product
(\ref{inn-basis}) that renders the basis $\{\zeta_n\}$ orthonormal
and is topologically equivalent to $\br\cdot|\cdot\kt$ exists and is
unique if and only if $\{\zeta_n\}$ is obtained from an orthonormal basis
$\{\xi_n\}$ through the action of an everywhere-defined bounded
invertible linear operator $A:{\cal H}\to{\cal H}$, i.e.,
$\zeta_n=A\xi_n$. A basis $\{\zeta_n\}$ having this property is
called a \emph{Riesz basis}, \cite{gohberg-krein,young}. In summary,
we can construct a new separable Hilbert space ${\cal H}'$ in which
$\{\zeta_n\}$ is orthonormal if and only if it is a Riesz basis. We
will give a derivation of a more explicit form of the inner product
of ${\cal H}'$ in Subsection~\ref{sec-biortho}.
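A simple example of a basis that fails to be a Riesz basis is
obtained by rescaling an orthonormal basis $\{\xi_n\}$ of an
infinite-dimensional ${\cal H}$ according to $\zeta_n:=n^{-1}\xi_n$.
The set (\ref{finite-span}) associated with $\{\zeta_n\}$ coincides
with that of $\{\xi_n\}$, so $\{\zeta_n\}$ is a basis. However, the
element $\psi:=\sum_{n=1}^\infty n^{-1}\xi_n$ of ${\cal H}$ has the
expansion $\psi=\sum_{n=1}^\infty\zeta_n$, whose coefficients
$c_n=1$ violate the condition $\sum_{n=1}^\infty|c_n|^2<\infty$.
Hence there is no inner product that renders $\{\zeta_n\}$
orthonormal. Equivalently, the operator $A$ defined by
$A\,\xi_n:=n^{-1}\xi_n$ is bounded but fails to be invertible,
because its inverse is unbounded.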
\subsection{Bounded, Invertible, and Hermitian Operators}
\label{sec-Hermitian}
Consider two Hilbert spaces ${\cal H}_1$ and ${\cal H}_2$ with inner
products $\br\cdot|\cdot\kt_{_1}$ and $\br\cdot|\cdot\kt_{_2}$,
respectively, and a linear operator $A$ that maps ${\cal H}_1$ to
${\cal H}_2$. The \emph{domain} ${\cal D}(A)$ of $A$ is the subset
of ${\cal H}_1$ such that the action of $A$ on any element of ${\cal
D}(A)$ yields a unique element of ${\cal H}_2$. The range ${\cal
R}(A)$ of $A$ is the subset of ${\cal H}_2$ consisting of elements
of the form $A\psi_1$ where $\psi_1$ belongs to ${\cal D}(A)$. If
${\cal D}(A)={\cal H}_1$, one says that $A$ has \emph{full domain},
or that it is \emph{everywhere-defined}. If ${\cal R}(A)={\cal
H}_2$, one says that $A$ has \emph{full range}, i.e., it is an
\emph{onto} function. As an example consider the momentum operator
$p$ acting in ${\cal H}_1={\cal H}_2=L^2(\R)$,
$(p\psi)(x):=-i\hbar\frac{d}{dx}\,\psi(x)$. Then ${\cal D}(p)$
consists of the square-integrable functions that have a
square-integrable derivative, and ${\cal R}(p)$ is the set of
square-integrable functions $\psi_2$ such that
$\psi_1(x)=\int_{-\infty}^x \psi_2(u)du$ is also square-integrable.
In particular $p$ is not everywhere-defined, but its domain is a
dense subset of $L^2(\R)$. Such an operator is said to be
\emph{densely-defined}. All the operators we encounter in this
article and more generally in QM are densely-defined.
A linear operator $A:{\cal H}_1\to{\cal H}_2$ is said to be
\emph{bounded} if there is a positive real number $M$ such that for
all $\psi\in{\cal D}(A)$, $\parallel A\psi\parallel_2\:\leq M
\parallel\psi\parallel_1$, where $\parallel\cdot\parallel_1$ and
$\parallel\cdot\parallel_2$ are respectively the norms defined by
the inner products $\br\cdot|\cdot\kt_{_1}$ and
$\br\cdot|\cdot\kt_{_2}$. The smallest $M$ satisfying this
inequality is called the norm of $A$ and denoted by $\parallel
A\parallel$. A characteristic feature of a bounded operator is that
all its eigenvalues $a$ are bounded by its norm, $|a|\leq\,\parallel
A\parallel$. Furthermore, a linear operator is bounded if and only
if it is continuous \cite{reed-simon}.\footnote{A function
$f:\cH_1\to\cH_2$ is said to be continuous if for all
$\psi\in\cD(f)$ and every sequence $\{\psi_n\}$ in $\cD(f)$ that
converges to $\psi$, the sequence $\{f(\psi_n)\}$ converges to
$f(\psi)$.} Linear operators relating finite-dimensional Hilbert
spaces are necessarily bounded. Therefore the concept of boundedness
is only important for infinite-dimensional Hilbert spaces.
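For instance, if $\{\xi_n\}$ is an orthonormal basis of an
infinite-dimensional separable Hilbert space ${\cal H}$, the linear
operator defined by $A\,\xi_n:=n\,\xi_n$, with domain consisting of
those $\psi\in{\cal H}$ for which
$\sum_{n=1}^\infty n^2|\br\xi_n|\psi\kt|^2<\infty$, has every
positive integer $n$ as an eigenvalue and is therefore not a bounded
operator.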
$A:{\cal H}_1\to{\cal H}_2$ is called an \emph{invertible operator}
if it satisfies both of the following conditions
\cite{hislop-sigal}.\footnote{Some authors do not require the second
condition. The above definition is the most convenient for our
purposes.}
\begin{enumerate}
\item $A$ is one-to-one and onto, so $A^{-1}:{\cal H}_2\to{\cal H}_1$
exists and has a full domain;
\item $A^{-1}$ is a bounded operator.
\end{enumerate}
If $A$ is bounded, one-to-one and onto, then according to a theorem
due to Banach its inverse is also bounded; it is invertible,
\cite{kolmogorov-fomin}.\footnote{A continuous one-to-one onto
function with a continuous inverse is called a \emph{homeomorphism}.
The domain and range of a homeomorphism have the same topological
properties. If $f:{\cal H}_1\to{\cal H}_2$ is a homeomorphism
relating two Hilbert spaces ${\cal H}_1$ and ${\cal H}_2$, a
sequence $\{x_n\}$ of elements $x_n$ of ${\cal H}_1$ converges to an
element $x\in{\cal H}_1$ if and only if $\{f(x_n)\}$ converges to
$f(x)$ in ${\cal H}_2$.\label{homeo1}} An important class of bounded
invertible operators that play a fundamental role in QM is that of the
unitary operators. We will examine them in
Subsection~\ref{sec-unitary}.
Next, consider a linear operator $A:{\cal H}_1\to{\cal H}_2$ that
has a dense domain ${\cal D}(A)$. Let ${\cal D}'$ be the subset of
${\cal H}_2$ whose elements $\psi_2$ satisfy the following
condition: For all $\psi_1\in{\cal D}(A)$, there is an element
$\phi_1$ of ${\cal H}_1$ such that
$\br\psi_2|A\psi_1\kt_{_2}=\br\phi_1|\psi_1\kt_{_1}$. Then there is
a unique linear operator $A^\dagger:{\cal H}_2\to{\cal H}_1$
fulfilling \cite{reed-simon}
\be
\br\psi_1|A^\dagger\psi_2\kt_{_1}=\br A\psi_1|\psi_2\kt_{_2}~~~~
\mbox{for all}~\psi_1\in{\cal D}(A)~{\rm and}~
\psi_2\in{\cal D'}.
\label{dagger}
\ee
This operator is called the \emph{adjoint} or
\emph{Hermitian-conjugate} of $A$. By construction, ${\cal D}'$ is
the domain of $A^\dagger$; ${\cal D}(A^\dagger)={\cal D}'$.
For the case ${\cal H}_2={\cal H}_1=:{\cal H}$, where ${\cal H}$ is
a Hilbert space with inner product $\br\cdot|\cdot\kt$, a linear
operator $A:{\cal H}\to{\cal H}$ having a dense domain ${\cal D}(A)$
is called a \emph{self-adjoint} or \emph{Hermitian}
operator\footnote{In some mathematics texts, e.g.,
\cite{reed-simon}, the term ``Hermitian'' is used for a more general
class of operators which satisfy (\ref{self-adjoint}) but have
${\cal D}(A)\subseteq{\cal D}(A^\dagger)$. A more commonly used term
for such an operator is ``symmetric operator''. We will avoid using
this terminology which conflicts with the terminology used in the
literature on ${\cal PT}$-symmetric QM, \cite{bbj,bbj-ajp,bbrr}.
\label{symmetric-op}} if $A^\dagger=A$. In particular, ${\cal
D}(A^\dagger)={\cal D}(A)$ and
\be
\br\psi_1|A\psi_2\kt=\br A\psi_1|\psi_2\kt~~~~
\mbox{for all}~\psi_1,\psi_2\in{\cal D}(A).
\label{self-adjoint}
\ee
An occasionally useful property of Hermitian operators is that
\emph{every Hermitian operator having a full domain is necessarily
bounded}. This is known as the \emph{Hellinger-Toeplitz theorem}
\cite{reed-simon}.
Hermitian operators play an essential role in QM mainly because of
their spectral properties \cite{von-neumann}. In particular, their
spectrum\footnote{For a precise definition of the spectrum of a
linear operator see Subsection~\ref{sec-spec-prop}.} is real, their
eigenvectors with distinct eigenvalues are orthogonal, and they
yield a \emph{spectral resolution of the identity operator} $I$. For
a Hermitian operator $A$ with a discrete spectrum the latter takes
the following familiar form, if we use the Dirac bra-ket notation.
\be
I=\sum_{n=1}^N |\alpha_n\kt\br\alpha_n|,
\label{resolution-I}
\ee
where $\{\alpha_n\}$ is an orthonormal basis consisting of the
eigenvectors $\alpha_n$ of $A$ whose eigenvalues $a_n$ are not
necessarily distinct,
\be
A\alpha_n=a_n\alpha_n,~~~~~~~\mbox{for all $n\in\{1,2,3,\cdots,N\}$}.
\label{eg-va}
\ee
Eqs.~(\ref{resolution-I}) and (\ref{eg-va}) imply that $A$ is
\emph{diagonalizable} and admits the following \emph{spectral
representation}
\be
A=\sum_{n=1}^N a_n |\alpha_n\kt\br\alpha_n|.
\label{resolution-A}
\ee
Well-known analogs of (\ref{resolution-I}) and (\ref{resolution-A})
exist for the cases that the spectrum is not discrete, \cite{kato}.
An important property that makes Hermitian operators indispensable
in QM is the fact that for a given
densely-defined\footnote{Observables must have dense domains, for
otherwise one can construct a state in which an observable cannot be
measured!} linear operator $A$ with $\cD(A)=\cD(A^\dagger)$, the
expectation value $\br\psi|A\psi\kt$ is real-valued for all unit
state-vectors $\psi\in{\cal D}(A)$ if and only if the Hermiticity
condition (\ref{self-adjoint}) holds, \cite{kato}.\footnote{A simple
proof of this statement is given in the appendix.} This shows that
in a quantum theory that respects von Neumann's measurement
(projection) axiom, the observables cannot be chosen from among
non-Hermitian operators even if they have a real spectrum. The same
conclusion may be reached by realizing that the measurement axiom
also requires the eigenvectors of an observable with distinct
eigenvalues to be orthogonal, for otherwise the reading of a
measuring device that is to be identified with an eigenvalue of the
observable will not be sufficient to determine the state of the
system immediately after the measurement \cite{jpa-2004b}. This is
because an eigenvector that is the output of the measurement may
have a nonzero component along an eigenvector with a different
eigenvalue. This yields nonzero probabilities for the system to be
in two different physical states, though one measures the eigenvalue
of one of them only!
Let $\{\xi_n\}$ be an orthonormal basis of ${\cal H}$ and $A$ be a
Hermitian operator acting in ${\cal H}$, then according to property
(\emph{ii}) of the inner product and the Hermiticity condition
(\ref{self-adjoint}) we have
\be
A_{mn}:=\br\xi_m|A\xi_n\kt=\br\xi_n|A\xi_m\kt^*=A_{nm}^*.
\label{matrix-hermit}
\ee
This shows that $A$ is represented in the basis $\{\xi_n\}$
according to
\be
A=\sum_{m,n=1}^N A_{mn}|\xi_m\kt\br\xi_n|,
\label{matrix-rep}
\ee
using the $N\times N$ \emph{Hermitian matrix}\footnote{A square
matrix $M$ is called Hermitian if its entries $M_{mn}$ satisfy
$M_{mn}=M_{nm}^*$ for all $m$ and $n$.} $\underline{A}:=(A_{mn})$.
It is essential to realize that a Hermitian operator can be
represented by a non-Hermitian matrix in a non-orthonormal basis.
This implies that having the expression for the matrix
representation of an operator and knowing the basis used for this
representation are not sufficient to decide if the operator is
Hermitian. One must in addition know the inner product and be able
to determine if the basis is orthonormal. \emph{Referring to an
operator as being Hermitian or non-Hermitian (using its matrix
representation) without paying attention to the inner product of the
space it acts in is a dangerous practice.}
For example, it is not difficult to check that the Hermitian matrix
$\psigma_1=\mbox{\scriptsize
$\left(\begin{array}{cc}0&1\\1&0\end{array}\right)$}$ represents the
operator $L:{\C}^2\to{\C}^2$ defined by $L${\scriptsize$
\left(\begin{array}{c} z_1\\
z_2\end{array}\right):=\left(\begin{array}{c} z_1\\
z_1-z_2\end{array}\right)$} in the basis ${\cal
B}:=${\scriptsize$\left\{ \left(\begin{array}{c}
1\\0\end{array}\right), \left(\begin{array}{c}
1\\1\end{array}\right)\right\}$}. The same operator is represented
in the standard basis ${\cal B}_0:=${\scriptsize$\left\{
\left(\begin{array}{c} 1\\0\end{array}\right),
\left(\begin{array}{c} 0\\1\end{array}\right)\right\}$} using the
non-Hermitian matrix {\scriptsize
$\left(\begin{array}{cc}1&0\\1&-1\end{array}\right)$}. We wish to
stress that this information is, in fact, not sufficient to
ascertain if $L$ is a Hermitian operator, unless we fix the inner
product on the Hilbert space $\C^2$. For instance, if we choose the
standard Euclidean inner product which is equivalent to requiring
${\cal B}_0$ to be orthonormal, then $L$ is a non-Hermitian
operator. If we choose the inner product that makes the basis ${\cal
B}$ orthonormal, then $L$ is a Hermitian operator. We can use
(\ref{inn-basis}) to construct the latter inner product. It has the
form $(\vec z|\vec w ):=z_1^*(w_1-w_2)+z_2^*(-w_1+2w_2)$, where $
\vec z=\mbox{\scriptsize$\left(\begin{array}{c}
z_1\\z_2\end{array}\right)$}$, and $
\vec w=\mbox{\scriptsize$
\left(\begin{array}{c}
w_1\\w_2\end{array}\right)$}$.
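These statements are straightforward to verify numerically. The
following short script, which we include merely as an illustration
and which assumes the availability of the NumPy library, encodes the
inner product $(\cdot|\cdot)$ by the matrix $\eta=\mbox{\scriptsize
$\left(\begin{array}{cc}1&-1\\-1&2\end{array}\right)$}$, so that
$(\vec z|\vec w)=\vec z^{\,*}\cdot(\eta\,\vec w)$, and checks that
$L$ is Hermitian with respect to $(\cdot|\cdot)$ but not with
respect to the Euclidean inner product.
\begin{verbatim}
import numpy as np

# L in the standard basis B_0 (a non-Hermitian matrix).
L = np.array([[1.0, 0.0],
              [1.0, -1.0]])

# Matrix encoding the inner product (z|w) = z1*(w1 - w2) + z2*(-w1 + 2 w2).
eta = np.array([[1.0, -1.0],
                [-1.0, 2.0]])

# L is not Hermitian with respect to the Euclidean inner product ...
print(np.allclose(L, L.conj().T))              # False

# ... but it is Hermitian with respect to (.|.), because
# (z|L w) = (L z|w) for all z, w  <=>  eta L = L^dagger eta.
print(np.allclose(eta @ L, L.conj().T @ eta))  # True

# Consistently, in the basis B (whose elements are the columns of S)
# L is represented by the Hermitian matrix sigma_1.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.linalg.inv(S) @ L @ S)                # [[0, 1], [1, 0]]
\end{verbatim}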
The above example raises the following natural question. Given a
linear operator $H$ that is not represented by a Hermitian matrix in
an orthonormal basis, is there another (non-orthonormal) basis in
which it is represented by a Hermitian matrix? This is equivalent to
asking if one can modify the inner product so that $H$ becomes
Hermitian. The answer to this question is: ``No, in general.'' As we
will see in the sequel, there is a simple necessary and sufficient
condition on $H$ that ensures the existence of such an inner
product.
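For instance, no choice of inner product on $\C^2$ makes the matrix
$\mbox{\scriptsize$\left(\begin{array}{cc}0&1\\0&0\end{array}\right)$}$
Hermitian, because a Hermitian operator acting in a
finite-dimensional Hilbert space is necessarily diagonalizable,
whereas this matrix is not.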
\subsection{Unitary Operators and Unitary-Equivalence}
\label{sec-unitary}
Equations (\ref{expand-2}) and (\ref{norm}) may be employed to
derive one of the essential structural properties of separable
Hilbert spaces, namely that up to unitary-equivalence they are
uniquely determined by their dimension. To achieve this we first
explain how one compares inner product spaces. Two inner product
spaces ${\cal H}_1$ and ${\cal H}_2$ with inner products
$\br\cdot|\cdot\kt_{_1}$ and $\br\cdot|\cdot\kt_{_2}$ are said to be
\emph{unitary-equivalent}, if there is an everywhere-defined onto
linear operator $U:{\cal H}_1\to{\cal H}_2$ such that for every
$\phi_1,\psi_1$ in ${\cal H}_1$ we have
\be
\br U\phi_1|U\psi_1\kt_{_2}=\br\phi_1|\psi_1\kt_{_1}.
\label{isometry}
\ee
Such an operator is called a {\em unitary operator}. In view of
(\ref{adj}) and (\ref{isometry}), we have
\be
U^\dagger U=I_1,
\label{UU=I}
\ee
where $I_1$ denotes the identity operator acting on ${\cal H}_1$.
One can use (\ref{isometry}) and the ontoness property of $U$ to
show that $U$ is an invertible operator and its inverse $U^{-1}$,
which equals $U^\dagger$, is also unitary.
Unitary-equivalence is an equivalence relation.\footnote{This means
that every inner product space is unitary-equivalent to itself; if
${\cal H}_1$ is unitary-equivalent to ${\cal H}_2$, so is ${\cal
H}_2$ to ${\cal H}_1$; if ${\cal H}_1$ is unitary-equivalent to
${\cal H}_2$ and ${\cal H}_2$ is unitary-equivalent to ${\cal H}_3$,
then ${\cal H}_1$ is unitary-equivalent to ${\cal H}_3$.} Therefore
to establish the unitary-equivalence of all $N$-dimensional
separable Hilbert spaces it suffices to show that all of them are
unitary-equivalent to a chosen one. The most convenient choice for
the latter is the Hilbert space
\[{\cal H}_0^N=\left\{\begin{array}{ccc}
\C^N &{\rm for}& N\neq\infty\\
\ell^2 &{\rm for}& N=\infty,\end{array}\right.\]
where $\C^N$ is the set of $N$-dimensional complex column vectors
endowed with the standard Euclidean inner product $\br\vec w|\vec
z\kt:=\vec w^*\cdot\vec z$, a dot denoting the usual dot product,
$\ell^2$ is the set of square-summable sequences,
$\ell^2:=\left\{~\{c_n\}\left|~ c_n\in\C~,~
\sum_{n=1}^\infty|c_n|^2 <\infty\right.\right\}$,
equipped with the inner product:
$\br\,\{\tilde c_n\}\,|\,\{c_n\}\,\kt:=\sum_{n=1}^\infty
\tilde c_n^*c_n$, and $\{\tilde c_n\},\{c_n\}\in\ell^2$.
Now, let ${\cal H}$ be any $N$-dimensional separable Hilbert space
with inner product $(\cdot|\cdot)$, $\{\xi_n\}$ be an orthonormal
basis of ${\cal H}$, and $U:{\cal H}\to{\cal H}_0^N $ be defined by
$U(\psi):=\{\,(\xi_n|\psi)\,\}$ for all $\psi$ in ${\cal H}$. It is
not difficult to see that $U$ is an everywhere-defined and onto
linear map. Furthermore in view of (\ref{norm}) it satisfies, $\br
U\phi|U\psi\kt=\sum_{n=1}^N(\xi_n|\phi)^*(\xi_n|\psi)= (\phi|\psi)$,
for all $\phi,\psi\in{\cal H}$. Hence, (\ref{isometry}) holds, $U$
is a unitary operator, and ${\cal H}$ is unitary-equivalent to
${\cal H}_0^N$.
For a quantum system having $\R$ as its configuration space, one
usually uses the coordinate wave functions $\psi(x)$ to represent
the state-vectors. The latter are elements of the Hilbert space
$L^2(\R)$ of square-integrable complex-valued functions
$\psi:\R\to\C$. The inner product of $L^2(\R)$ has the form
\be
\br\phi|\psi\kt=\int_{-\infty}^\infty dx~\phi(x)^*\psi(x).
\label{L2-inn}
\ee
A concrete example for an orthonormal basis $\{\xi_n\}$ for
$L^2(\R)$ is the basis consisting of the standard normalized
eigenfunctions $\xi_n=\psi_{n-1}$ of the unit simple harmonic
oscillator Hamiltonian \cite{messiah}, or equivalently that of the
operator $-\frac{d^2}{dx^2}+x^2$, i.e., $\psi_m(x)=\pi^{-1/4}(m!
2^m)^{-1/2} e^{-x^2/2}H_m(x),$ where
$H_m(x)=e^{x^2/2}(x-\frac{d}{dx})^me^{-x^2/2}$ are Hermite
polynomials and $m\in\N$. For this example, switching from the
coordinate-representation of the state-vectors to their
representation in terms of the energy eigenbasis of the above
harmonic oscillator corresponds to effecting the unitary operator
$U$.
Unitary operators have a number of important properties that follow
from (\ref{isometry}). For example if $U:{\cal H}_1\to{\cal H}_2$ is
a unitary operator relating two separable Hilbert spaces ${\cal
H}_1$ and ${\cal H}_2$, then
\begin{enumerate}
\item $U$ is an everywhere-defined, bounded, and invertible
operator.
\item If $\{\xi_n\}$ is an orthonormal basis of
${\cal H}_1$, then $\{U\xi_n\}$ is an orthonormal basis of
${\cal H}_2$.\footnote{The converse is also true in the
sense that given an orthonormal basis $\{\xi'_n\}$ of ${\cal H}_2$
there is a unique unitary operator $U:{\cal H}_1\to{\cal H}_2$ such
that $\xi'_n=U\xi_n$ for all $n\in\{1,2,\cdots,N\}$.}
\item Let $A_1:{\cal H}_1\to{\cal H}_1$ be a Hermitian
operator with domain ${\cal D}(A_1)$. Then $U A_1 U^\dagger:
{\cal H}_2\to{\cal H}_2$ is a Hermitian operator with domain
$U(\cD(A_1))=
\{U\psi_1\in{\cal H}_2|\psi_1\in{\cal D}(A_1)\}$.
\end{enumerate}
A direct consequence of statement~1 above and the fact that the
inverse of a unitary operator is unitary is that unitary operators
are homeomorphisms\footnote{See footnote~\ref{homeo1} for the
definition.}. As a result, all $N$-dimensional separable Hilbert
spaces have identical topological properties. In particular, if two
separable Hilbert spaces ${\cal H}_1$ and ${\cal H}_2$ share an
underlying vector space, a sequence $\{x_n\}$ converges to $x$ in
${\cal H}_1$ if and only if it converges to $x$ in ${\cal H}_2$.
Next, we recall that every quantum system $s$ is uniquely determined
by a separable Hilbert space $\cH$ that determines the kinematic
structure of $s$ and a Hamiltonian operator $H:\cH\to\cH$ that gives
its dynamical structure via the Schr\"odinger equation. Let $s_1$
and $s_2$ be quantum systems corresponding to the Hilbert spaces
${\cal H}_1$, ${\cal H}_2$ and the Hamiltonians $H_1:{\cal
H}_1\to{\cal H}_1$, $H_2:{\cal H}_2\to{\cal H}_2$. By definition the
observables of the system $s_i$, with $i\in\{1,2\}$, are Hermitian
operators $O_i:{\cal H}_i\to{\cal H}_i$. $s_1$ and $s_2$ are
physically equivalent, if there is a one-to-one correspondence
between their states and observables in such a way that the physical
quantities associated with the corresponding states and observables
are identical. Such a one-to-one correspondence is mediated by a
unitary operator $U:{\cal H}_1\to{\cal H}_2$ according to
\be
\psi_1\to \psi_2:=U\psi_1~~~~~~O_1\to O_2:=U\:O_1U^\dagger.
\label{unitary-eq2}
\ee
In particular, if $\psi_i(t)$ is an evolving state-vector of the
system $s_i$, i.e., it is a solution of the Schr\"odinger equation
$i\hbar\frac{d}{dt}\psi_i(t)=H_i\psi_i(t)$, we have
$\psi_2(t)=U\psi_1(t)$. The necessary and sufficient condition for
the latter is $H_2=U\:H_1U^\dagger$. This observation motivates the
following theorem.
\begin{center}
\parbox{15.5cm}{\textbf{Theorem~1:} \emph{As physical systems,
$s_1=s_2$ if there is a unitary operator $U:\cH_1\to\cH_2$
satisfying $H_2=U\:H_1U^\dagger$.}}
\end{center}
To prove this assertion we recall that all physical quantities in QM
may be expressed as expectation values of observables. Suppose that
such a unitary operator exists, and let us prepare a state of $s_1$
that is represented by $\psi_1\in{\cal H}_1$ and measure the
observable $O_1$. The expectation value of this measurement is given
by
\be
\frac{\br \psi_1|O_1\psi_1\kt_1}{\br \psi_1|\psi_1\kt_1}=
\frac{\br U^\dagger\psi_2|U^\dagger
O_2U\psi_1\kt_1}{\br U^\dagger\psi_2|U^\dagger\psi_2\kt_1}=
\frac{\br \psi_2|
O_2\psi_2\kt_2}{\br \psi_2|\psi_2\kt_2},
\label{ex-va=}
\ee
where $\br\cdot|\cdot\kt_i$ is the inner product of ${\cal H}_i$,
and we have used (\ref{unitary-eq2}) and the fact that $U^\dagger$
is also a unitary operator. Eqs.~(\ref{ex-va=}) show that the above
measurement is identical with measuring $O_2$ in a state of $s_2$
represented by the state-vector $\psi_2$. This argument is also
valid for the case that $\psi_1$ is an evolving state-vector. It
shows that the existence of $U$ implies the physical equivalence of
$s_1$ and $s_2$.\footnote{The converse of Theorem~1 can be
formulated similarly to Wigner's symmetry theorem
\cite[p91]{weinberg}.}
If the hypothesis of Theorem~1 holds, i.e., there is a unitary
operator $U:\cH_1\to\cH_2$ satisfying $H_2=U\:H_1U^\dagger$, we say
that the pairs $(\cH_1,H_1)$ and $(\cH_2,H_2)$ and the quantum
systems they define are \emph{unitary-equivalent}.
In conventional QM one mostly considers unitary operators that act
in a single Hilbert space ${\cal H}$. These generate linear
transformations that leave the inner product of the state-vectors
invariant. They form the unitary group ${\cal U}({\cal H})$ of the
Hilbert space which includes the time-evolution operator
$e^{-itH/\hbar}$ as a one-parameter subgroup.\footnote{All possible
symmetry, dynamical, and kinematic groups are also subgroups of
${\cal U}({\cal H})$, \cite{nova}.} Given a quantum system $s$ with
Hamiltonian $H$, one may use the unitary operators $U\in{\cal
U}({\cal H})$ to generate unconventional unitary-equivalent systems
$s_{_U}$ having the same Hilbert space. If $x$ and $p$ are the
standard position and momentum operators of $s$, the position,
momentum, and Hamiltonian operators for $s_{_U}$ are respectively
given by $x_{_U}:=U\:x\:U^\dagger$, $p_{_U}:=U\:p\:U^\dagger$, and
$H_{_U}:=U\:H\:U^\dagger$. The transformation $x\to x_{_U}$, $p\to
p_{_U}$, and $H\to H_{_U}$ is the quantum analog of a classical
time-independent canonical transformation. Therefore, unitary
transformations generated by the elements of $\cU(\cH)$ play the
role of canonical transformations.
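A familiar example is provided by the translation operator
$U=e^{-ia\,p/\hbar}$ with $a\in\R$, for which $x_{_U}=x-a\,I$ and
$p_{_U}=p$; this is the quantum analog of the classical canonical
transformation $(x_c,p_c)\to(x_c-a,\,p_c)$.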
\subsection{Biorthonormal Systems}
\label{sec-biortho}
Let $\{\psi_n\}$ be a basis of an $N$-dimensional separable Hilbert
space ${\cal H}$, with $N\leq\infty$, and $\{\xi_n\}$ be the
orthonormal basis obtained by performing the Gram-Schmidt process on
$\{\psi_n\}$. Because $\{\psi_n\}$ is a basis, there are unique
complex numbers $B_{mn}\in\C$ such that for all
$m\in\{1,2,3,\cdots,N\}$
\be
\xi_m=\sum_{n=1}^N B_{nm}\psi_n,
\label{xi-zeta}
\ee
and because $\{\xi_n\}$ is an orthonormal basis,
\bea
\psi_n&=&\sum_{k=1}^N \br\xi_k|\psi_n\kt\:\xi_k,
\label{zeta-xi}
\eea
for all $n\in\{1,2,3,\cdots,N\}$. If we substitute (\ref{zeta-xi})
in (\ref{xi-zeta}) and use the orthonormality of $\xi_n$, we find
$\sum_{n=1}^N B_{nm}\br\xi_j|\psi_n\kt=\delta_{mj}$ for all
$m,j\in\{1,2,3,\cdots,N\}$. This shows that the $N\times N$ matrix
$\underbar{B}=(B_{mn})$ is invertible and the entries of $\underbar
B^{-1}$ are given by $B^{-1}_{mn}=\br\xi_m|\psi_n\kt$. We can use
this relation to express (\ref{zeta-xi}) in the form
\be
\psi_n=\sum_{k=1}^N B^{-1}_{kn}\:\xi_k.
\label{zeta-xi-2}
\ee
This equation suggests that the vectors $\phi_m$ defined by
\be
\phi_m:=\sum_{j=1}^N B_{mj}^*\:\xi_j
~~~~\mbox{for all $m\in\{1,2,3,\cdots,N\}$},
\label{phi-m=}
\ee
fulfil
\be
\br\phi_m|\psi_n\kt=\delta_{mn}~~~~
\mbox{for all $m,n\in\{1,2,3,\cdots,N\}$}.
\label{biortho}
\ee
Furthermore, employing (\ref{expand-1}), (\ref{zeta-xi-2}), and
(\ref{phi-m=}), we can show that for all $\psi\in{\cal H}$,
$\sum_{n=1}^N\br\phi_n|\psi\kt\,\psi_n=\psi$. We can use Dirac's bra-ket
notation to express this identity as
\be
\sum_{n=1}^N |\psi_n\kt\br\phi_n|=I.
\label{bi-complete}
\ee
This is a generalization of the more familiar completeness relation
(\ref{resolution-I}).
A sequence $\{(\psi_n,\phi_n)\}$ of ordered pairs of elements of
${\cal H}$ that satisfy (\ref{biortho}) is called a {\em
biorthonormal system} \cite{sadovnichii,schmeidler,young}. A
biorthonormal system satisfying (\ref{bi-complete}) is said to be
\emph{complete}.
Let $\{\psi_n\}$ be a basis of a separable Hilbert space ${\cal H}$,
and $\{\phi_n\}$ be a sequence in $\cH$ such that
$\{(\psi_n,\phi_n)\}$ is a complete biorthonormal system. Then, one
can show that $\{\phi_n\}$ is the unique sequence with this property
and that it is necessarily a basis of ${\cal H}$, \cite{young}.
$\{\phi_n\}$ is called the \emph{biorthonormal basis} associated
with $\{\psi_n\}$, and the biorthonormal system
$\{(\psi_n,\phi_n)\}$ is called a \emph{biorthonormal extension} of
$\{\psi_n\}$.
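As a simple finite-dimensional illustration, let ${\cal H}$ be $\C^2$
endowed with the Euclidean inner product, and let $\psi_1$ and
$\psi_2$ be the elements of the basis ${\cal B}$ of
Subsection~\ref{sec-Hermitian}, i.e.,
$\psi_1:=\mbox{\scriptsize$\left(\begin{array}{c}1\\0\end{array}\right)$}$
and
$\psi_2:=\mbox{\scriptsize$\left(\begin{array}{c}1\\1\end{array}\right)$}$.
Then the unique vectors satisfying (\ref{biortho}) are
\[\phi_1=\mbox{\scriptsize$\left(\begin{array}{c}1\\-1\end{array}\right)$},
~~~~~~
\phi_2=\mbox{\scriptsize$\left(\begin{array}{c}0\\1\end{array}\right)$},\]
and a simple calculation gives
$|\psi_1\kt\br\phi_1|+|\psi_2\kt\br\phi_2|=I$. Hence
$\{(\psi_n,\phi_n)\}$ is a complete biorthonormal system, and
$\{\phi_n\}$ is the biorthonormal basis associated with
$\{\psi_n\}$.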
If $N<\infty$, the right-hand side of (\ref{phi-m=}) is well-defined
and the above construction yields the biorthonormal basis
$\{\phi_n\}$ associated with every basis $\{\psi_n\}$ of ${\cal H}$.
If $N=\infty$, $\{\phi_m\}$ can be constructed provided that the
right-hand side of (\ref{phi-m=}) converges. This is the case if and
only if
\be
\sum_{j=1}^\infty |B_{mj}|^2<\infty~~~~
\mbox{for all $m\in\Z^+$}.
\label{bounded-B}
\ee
A theorem due to Bari \cite{gohberg-krein} states that given a basis
$\{\psi_n\}$, the biorthonormal system $\{(\psi_n,\phi_n)\}$ exists
and $\sum_{n=1}^\infty |\br\psi_n|\psi\kt|^2$ and $\sum_{n=1}^\infty
|\br\phi_n|\psi\kt|^2$ both converge for all $\psi\in{\cal H}$, if
and only if $\{\psi_n\}$ is a Riesz basis, i.e., there are an
orthonormal basis $\{\chi_n\}$ and an everywhere-defined bounded
invertible operator $A:{\cal H}\to{\cal H}$ such that
$\psi_n=A\chi_n$.\footnote{As any pair of orthonormal bases are
related by a unitary operator $U:{\cal H}\to{\cal H}$ which is an
everywhere-defined bounded invertible operator, one can take
$\chi_n=\xi_n$ without loss of generality. This allows for the
identification of the infinite matrix $\underbar B$ with the matrix
representation of $A^{-1}$ in the basis $\{\xi_n\}$, for we have
$B_{mn}=\br\xi_m|A^{-1}\xi_n\kt$.} In this case $\{\phi_n\}$ is also
a Riesz basis, and $\{(\psi_n,\phi_n)\}$ is the unique biorthonormal
extension of $\{\psi_n\}$, \cite{gohberg-krein,sadovnichii,young}.
The coefficients of the expansion of a vector $\psi\in{\cal H}$ in a
Riesz basis $\{\psi_n\}$ can be expressed in terms of its
biorthonormal basis $\{\phi_n\}$. Given $\psi=\sum_{n=1}^Nc_n\psi_n$
we have, in light of (\ref{biortho}), $c_n=\br\phi_n|\psi\kt$.
Hence, for all $\psi\in{\cal H}$,
\be
\psi=\sum_{n=1}^N\br\phi_n|\psi\kt\:\psi_n.
\label{biortho-exp}
\ee
Clearly the roles of $\{\phi_n\}$ and $\{\psi_n\}$ are
interchangeable. In particular, $\sum_{n=1}^N
|\phi_n\kt\br\psi_n|=I$ and
$\psi=\sum_{n=1}^N\br\psi_n|\psi\kt\:\phi_n$ for all $\psi\in{\cal
H}$.
Let $\{\psi_n\}$ be a Riesz basis and $\{(\psi_n,\phi_n)\}$ be its
biorthonormal extension. As we explained in
Subsection~\ref{sec-inn}, we can construct a unique inner product
$(\cdot|\cdot)$ on ${\cal H}$ that makes a Riesz basis orthonormal.
We can use (\ref{inn-basis}) and (\ref{biortho-exp}) to obtain the
following simplified expression for this inner product.
\be
(\psi|\phi):=\sum_{n=1}^N \br\psi|\phi_n\kt\br\phi_n|\phi\kt=
\br\psi|\eta_+\phi\kt,~~~~\mbox{for all $\psi,\phi\in
{\cal H}$},
\label{inn-basis-1}
\ee
where we have introduced the operator $\eta_+:{\cal H}\to{\cal H}$
according to
\be
\eta_+\psi:=\sum_{n=1}^N \br\phi_n|\psi\kt\:\phi_n.
\label{eta-plus=}
\ee
Using Dirac's bra-ket notation we can write it in the form
\be
\eta_+=\sum_{n=1}^N |\phi_n\kt\br\phi_n|.
\label{eta-plus=2}
\ee
Because $\{\psi_n\}$ is a Riesz basis, the inner
product~(\ref{inn-basis-1}) is defined for all $\psi,\phi\in{\cal
H}$. This shows that $\eta_+$ is everywhere-defined. Furthermore, it
is not difficult to see, by virtue of (\ref{biortho}), that it has
an everywhere-defined inverse given by
\be
\eta_+^{-1}:=\sum_{n=1}^N |\psi_n\kt\br\psi_n|.
\label{eta-plus-inverse=}
\ee
This shows that $\eta_+$ is a one-to-one onto linear operator. As
suggested by (\ref{eta-plus=2}) it is also Hermitian, which in
particular implies that both $\eta_+$ and $\eta_+^{-1}$ are bounded.
Finally, in view of (\ref{eta-plus=2}),
$\br\psi|\eta_+\psi\kt=\sum_{n=1}^N|\br\phi_n|\psi\kt|^2$, for all
$\psi\in\cH$. Therefore $\eta_+$ is a positive operator. Moreover
because it is an invertible operator, its spectrum is strictly
positive. A Hermitian operator with this property is called a
\emph{positive-definite operator}. The operator $\eta_+$ constructed
above is an everywhere-defined, bounded, positive-definite,
invertible operator. A linear operator with these properties is
called a \emph{metric operator}.
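For the two-dimensional example considered above, (\ref{eta-plus=2})
and (\ref{eta-plus-inverse=}) yield
\[\eta_+=|\phi_1\kt\br\phi_1|+|\phi_2\kt\br\phi_2|=
\mbox{\scriptsize$\left(\begin{array}{cc}1&-1\\-1&2\end{array}\right)$},
~~~~~~
\eta_+^{-1}=|\psi_1\kt\br\psi_1|+|\psi_2\kt\br\psi_2|=
\mbox{\scriptsize$\left(\begin{array}{cc}2&1\\1&1\end{array}\right)$},\]
and the inner product (\ref{inn-basis-1}) coincides with the inner
product $(\cdot|\cdot)$ that we used in
Subsection~\ref{sec-Hermitian} to render the basis ${\cal B}$
orthonormal.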
\subsection{Metric Operators and Conventional QM}
\label{sec-metric}
Consider a pair of separable Hilbert spaces ${\cal H}_1$ and ${\cal
H}_2$ that are identical as vector spaces but have different inner
products. We will denote the inner products of ${\cal H}_1$ and
${\cal H}_2$ by $\br\cdot|\cdot\kt_{_1}$ and
$\br\cdot|\cdot\kt_{_2}$, respectively, and view
$\br\cdot|\cdot\kt_{_2}$ as an alternative inner product on ${\cal
H}_1$. Our aim is to find a way to express $\br\cdot|\cdot\kt_{_2}$
in terms of $\br\cdot|\cdot\kt_{_1}$. We will first consider the
case that the underlying vector space ${\cal V}$ of both ${\cal
H}_1$ and ${\cal H}_2$ is finite-dimensional, i.e., $N<\infty$.
Let $\{\xi_n\}$ be an orthonormal basis of ${\cal H}_1$. As we
argued above, it satisfies the completeness relation:
$\sum_{n=1}^N|\xi_n\kt_{_1}\, _{_1}\!\br\xi_n|=I$, where $I$ is the
identity operator of ${\cal V}$ . In general, $\{\xi_n\}$ is not an
orthonormal basis of ${\cal H}_2$ and the operator
\be
\eta_+:=\sum_{n=1}^N|\xi_n\kt_{_2}\, _{_2}\!\br\xi_n|,
\label{eta-def-1}
\ee
does not coincide with $I$. Eq.~(\ref{eta-def-1}), which we can
express in the more conventional form:
\be
\eta_+\,\psi:=
\sum_{n=1}^N\br\xi_n|\psi\kt_{_2}~\xi_n,~~~~
\mbox{for all $\psi\in{\cal V}$},
\label{eta-def}
\ee
defines $\eta_+$ as a linear operator $\eta_+:{\cal V}\to{\cal V}$
having a full domain.
Now, let $\phi$ be an arbitrary element of ${\cal V}$. In view of
(\ref{expand-1}), we can express it as
\be
\phi=\sum_{m=1}^N\br\xi_m|\phi\kt_{_1}\,\xi_m.
\label{expand-3}
\ee
Using Eqs.~(\ref{eta-def}), (\ref{expand-3}) and the properties
(\emph{ii}) and (\emph{iii}) shared by both the inner products, we
can easily show that $\eta_+$ fulfils
\be
\br\phi|\psi\kt_{_2}=\br\phi|\eta_+\,\psi\kt_{_1}~~~~~~~
\mbox{for all $\psi,\phi\in{\cal V}$.}
\label{eta-prop1}
\ee
Employing properties (\emph{i}) and (\emph{ii}) of
$\br\cdot|\cdot\kt_2$, we can also verify that
\bea
&&\br\phi|\eta_+\,\psi\kt_{_1}^*=\br\psi|\eta_+\,\phi\kt_{_1}
~~~~\mbox{for all $\psi,\phi\in{\cal V}$,}
\label{eta-prop3}\\
&&\br\psi|\eta_+\,\psi\kt_{_1}>0~~~~~~~~~~~~\mbox{for all
nonzero $\psi\in{\cal V}$}.
\label{eta-prop2}
\eea
Eq.~(\ref{eta-prop3}) shows that $\eta_+$ is a Hermitian operator
acting in ${\cal H}_1$, in particular it has a real spectrum.
Eq.~(\ref{eta-prop2}) implies that indeed the spectrum of $\eta_+$
is strictly positive; $\eta_+$ is a positive-definite operator.
If ${\cal H}_1$ and ${\cal H}_2$ are infinite-dimensional, the sum
appearing in (\ref{eta-def}) stands for an infinite series and we
must address the issue of its convergence. The convergence of this
series is equivalent to the requirement that $\sum_{n=1}^\infty
|\br\xi_n|\psi\kt_{_2}|^2<\infty$ for all $\psi\in{\cal V}$.
According to the above-mentioned theorem of Bari this requirement is
fulfilled provided that $\{\xi_n\}$ is a Riesz basis of ${\cal
H}_2$. Under this assumption (\ref{eta-def}) defines a linear
operator $\eta_+$ acting in ${\cal H}_1$ and satisfying
(\ref{eta-prop1}). It is a positive-definite (in particular
Hermitian) operator with a full domain. Being Hermitian and
everywhere-defined it is also necessarily bounded.
Next, we exchange the roles of ${\cal H}_1$ and ${\cal H}_2$. This
gives rise to an everywhere-defined bounded positive-definite
operator $\eta_+'$ acting in ${\cal H}_2$ such that
$\br\phi|\psi\kt_{_1}=\br\phi|\eta_+'\,\psi\kt_{_2}$ for all
$\psi,\phi$ in ${\cal V}$. Combining this equation and
(\ref{eta-prop1}) yields
$\br\phi|\psi\kt_{_1}=\br\phi|\eta_+\eta_+'\,\psi\kt_{_1}$ for all
$\psi,\phi$ in ${\cal V}$. This in turn implies that
$\eta_+'\eta_+=I$.\footnote{The proof uses the argument given in
footnote~\ref{trick}.} Because $\eta_+'$ is a bounded operator with
a full domain, $\eta_+$ is an invertible operator with inverse
$\eta_+^{-1}=\eta_+'$.\footnote{If we use the prescription of
Subsection~\ref{sec-inn} to obtain the inner product $(\cdot|\cdot)$
on ${\cal H}_2$ that renders the Riesz basis $\{\xi_n\}$
orthonormal, we find by its uniqueness that
$(\cdot|\cdot)=\br\cdot|\cdot\kt_{_1}$. According to
(\ref{inn-basis-1}), there is an everywhere-defined bounded
invertible operator $\tilde\eta_+$ such that
$\br\cdot|\cdot\kt_{_1}=\br\cdot|\tilde\eta_+\cdot\kt_{_2}$. It is
given by $\tilde\eta_+=\eta_+^{-1}$.} Therefore $\eta_+$ is a metric
operator.
The above construction of the metric operator $\eta_+$ is based on
the choice of an orthonormal basis $\{\xi_n\}$ of ${\cal H}_1$. We
will next show that indeed $\eta_+$ is independent of this choice.
Let $\eta'_+:{\cal V}\to{\cal V}$ be an everywhere-defined linear
operator satisfying
\be
\br\phi|\psi\kt_{_2}=\br\phi|\eta_+'\,\psi\kt_{_1}~~~~~~~
\mbox{for all $\psi,\phi\in{\cal V}$.}
\label{eta-prop1-prime}
\ee
Eqs.~(\ref{eta-prop1}) and (\ref{eta-prop1-prime}) show that
$\br\phi|(\eta'_+-\eta_+)\psi\kt_{_1}=0$ for all $\psi,\phi\in{\cal
V}$. In view of the argument presented in footnote~\ref{trick}, this
implies $\eta'_+\psi=\eta_+\psi$ for all $\psi\in{\cal V}$, i.e.,
$\eta'_+=\eta_+$. This establishes the uniqueness of the metric
operator $\eta_+$ which in turn means that \emph{the inner products
that make a complex vector space ${\cal V}$ into a separable Hilbert
space are in one-to-one correspondence with the metric operators
$\eta_+$ acting in one of these Hilbert spaces}.
To employ the characterization of the inner products in terms of
metric operators we need to select a Hilbert space ${\cal H}$. We
will call this Hilbert space a \emph{reference Hilbert space}. We
will always fix a reference Hilbert space and use its inner product
$\br\cdot|\cdot\kt$ to determine if a given linear operator acting
in ${\cal V}$ is Hermitian or not. The following are some typical
examples.
\begin{itemize}
\item
For systems having a finite number ($N$) of linearly
independent state-vectors, i.e., when ${\cal V}=\C^N$,
${\cal H}$ is defined by the Euclidean inner product,
$\br\vec\phi|\vec\psi\kt:=\vec\phi^*\cdot\vec\psi$, for all
$\vec\phi,\vec\psi\in\C^N$.
\item For systems whose configuration space is a differentiable
manifold $M$ with an integral measure $d\mu(x)$, ${\cal V}$ is the
space $L^2(M)$ of all square-integrable functions $\psi:M\to\C$
and ${\cal H}$ is defined by the $L^2$-inner product:
$\br\phi|\psi\kt:=\int_{M}\phi(x)^*\psi(x)\,d\mu(x)$,
for all $\phi,\psi\in L^2(M)$. The systems whose configuration
space is a Euclidean space $(M=\R^d)$ or a complex contour
$(M=\Gamma)$ belong to this class. We will discuss the latter
systems in Section~\ref{sec-contour}.
\end{itemize}
In summary, given a separable Hilbert space ${\cal H}$ with inner
product $\br\cdot|\cdot\kt$, we can characterize every other inner
product on ${\cal H}$ by a metric operator $\eta_+:{\cal H}\to{\cal
H}$ according to
\be
\br\cdot|\cdot\kt_{_{\eta_+}}:=\br\cdot|\eta_+\,\cdot\kt.
\label{inn}
\ee
Each choice of $\eta_+$ defines a unique separable Hilbert space
${\cal H}_{_{\eta_+}}$. Because $\eta_+$ is a positive-definite
operator, we can use its spectral representation to construct its
positive square root $\rho:=\sqrt\eta_+$. As a linear operator
acting in ${\cal H}$, $\rho$ is a Hermitian operator satisfying
$\rho^2=\eta_+$. We can use this observation to establish
\be
\br\phi|\psi\kt_{\eta_+}=\br\phi|\eta_+\psi\kt=
\br\rho^\dagger\phi|\rho\,\psi\kt=
\br\rho\,\phi|\rho\,\psi\kt,~~~~~~\mbox{for all $\phi,\psi\in\cH$}.
\label{rho}
\ee
This relation shows that as a linear operator mapping ${\cal
H}_{_{\eta_+}}$ onto ${\cal H}$, $\rho$ is a unitary
operator.\footnote{Strictly speaking (\ref{rho}) shows that $\rho$
is an isometry. However, in view of the fact that $\eta_+$ is
invertible, so is $\rho$. This implies that $\rho:{\cal
H}_{_{\eta_+}}\to{\cal H}$ is a genuine unitary operator.} It
provides a realization of the unitary-equivalence of the Hilbert
spaces ${\cal H}_{_{\eta_+}}$ and ${\cal H}$.
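In finite dimensions, relations such as (\ref{rho}) are easy to
verify numerically. The following short script, which we include
merely as an illustration and which assumes the availability of the
NumPy library, generates a random metric operator on $\C^3$,
computes $\rho=\sqrt{\eta_+}$ using the spectral representation of
$\eta_+$, and checks (\ref{rho}) for a pair of randomly chosen
vectors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def dagger(M):
    return M.conj().T

# A randomly generated metric operator eta_+ on C^3:
# Hermitian, positive-definite, and invertible.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
eta = dagger(A) @ A + np.eye(3)

# rho = sqrt(eta_+) via the spectral representation of eta_+.
w, V = np.linalg.eigh(eta)
rho = V @ np.diag(np.sqrt(w)) @ dagger(V)

# Two arbitrary vectors phi and psi in C^3.
phi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)

lhs = np.vdot(phi, eta @ psi)           # <phi|psi>_{eta_+}
rhs = np.vdot(rho @ phi, rho @ psi)     # <rho phi|rho psi>
print(np.allclose(lhs, rhs))            # True
\end{verbatim}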
In ordinary QM one fixes the physical Hilbert space of the system to
be one of the reference Hilbert spaces listed above and develops a
theory based on this preassigned Hilbert space. The argument that
the unitary-equivalence of all separable Hilbert spaces justifies
this convention is not quite acceptable. For example, although for
all $d\in\Z^+$, $L^2(\R^d)$ is unitary-equivalent to $L^2(\R)$, we
never use $L^2(\R)$ to describe a system having more than one real
degree of freedom. \emph{The choice of the particular Hilbert space
should in principle be determined by physical considerations or left
as a freedom of the formulation of the theory.} In view of the lack of a
direct measurement of the inner product or the associated metric
operator, we propose to adopt the second option. We will see that
this does not lead to a genuine generalization of QM, but it reveals
a set of alternative and equally useful representations of QM which
could not be utilized within its conventional formulation.
Furthermore, the introduction of the freedom in choosing the metric
operator may be used as an interesting method of extending QM by
relaxing some of the restrictions put on the metric operator. Indeed
the indefinite-metric quantum theories are examples of such a
generalization.
\section{Pseudo-Hermitian QM: Ingredients and Formalism}
\label{sec-phqm}
\subsection{Quasi-Hermitian versus Pseudo-Hermitian QM}
\label{sec-quasi}
To the best of our knowledge, the first publication investigating
the consequences of the freedom in the choice of the metric operator
is the article by Scholtz, Geyer, and Hahne \cite{geyer} in which
the choice of the metric operator is linked to that of an
\emph{irreducible} set of linear operators. The latter is any
(minimal) set ${\cal S}$ of operators $O_\alpha$ acting in a vector
space ${\cal V}$ that do not leave any proper subspace of ${\cal V}$
invariant, i.e., the only subspace ${\cal V}'$ of ${\cal V}$ that
satisfies: ``~$\mbox{$O_\alpha\in{\cal S}$ and $\psi\in{\cal V}'$
imply $O_\alpha\psi\in{\cal V}'$,}$~'' is ${\cal V}$.
The approach pursued in \cite{geyer} involves using the physical
characteristics of a given system to determine an irreducible set of
operators (that are to be identified with the observables of the
theory) and employing the latter to fix a metric operator and the
associated inner product of the Hilbert space. We will call this
formalism \emph{Quasi-Hermitian Quantum Mechanics}. The main problem
with this formalism is that it is generally very difficult to
implement. This stems from the fact that the operators belonging to
an irreducible set must in addition be compatible, i.e., there must
exist an inner product with respect to which all the members of the
set are Hermitian. In order to use this formalism to determine the
inner product, one must in general employ a complicated iterative
scheme.
\begin{itemize}
\item Select a linear operator $O_1$ with a complete set of eigenvectors
and a real spectrum;
\item Find the set $\fU_1$ of all possible metric operators
that make $O_1$ Hermitian, and select a linear operator $O_2$,
linearly independent of $O_1$, from among the linear operators that
are Hermitian with respect to the inner product defined by some
$\eta_+\in\fU_1$;
\item Find the set $\fU_2$ of all possible metric
operators that make $O_2$ Hermitian, and select a linear operator
$O_3$, linearly independent of $O_1$ and $O_2$, from among the
linear operators that are Hermitian with respect to the inner
product defined by some $\eta_+\in\fU_1\cap \fU_2$, where $\cap$
stands for the intersection of sets;
\item Repeat this procedure until the inner product
(respectively metric operator $\eta_+$) is fixed up to a constant
coefficient.
\end{itemize}
As we see, in trying to employ Quasi-Hermitian QM, one needs a
procedure to compute the most general metric operator associated
with a given diagonalizable linear operator with a real spectrum.
This is the main technical tool developed within the framework of
Pseudo-Hermitian QM.
Pseudo-Hermitian QM differs from Quasi-Hermitian QM in that in the
former one chooses $O_1$ to be the Hamiltonian, finds $\fU_1$ and
leaves the choice of $O_2$ arbitrary, i.e., identifies all the
operators $O$ that are Hermitian with respect to the inner product
associated with some unspecified metric operator $\eta_+$ belonging
to $\fU_1$. The metric operator $\eta_+$ fixes a particular inner
product and defines the ``\emph{physical Hilbert space}'' ${\cal
H}_{\rm phys}$ of the system. The ``\emph{physical observables}''
are the Hermitian operators $O$ acting in ${\cal H}_{\rm phys}$. We
can use the unitary-equivalence of ${\cal H}_{\rm phys}$ and ${\cal
H}$ realized by $\rho:=\sqrt\eta_+$ to construct the physical
observables $O$ using those of the conventional QM, i.e., Hermitian
operators $o$ acting in the reference Hilbert space ${\cal H}$. This
is done according to \cite{critique,jpa-2004b}
\be
O=\rho^{-1}o\rho.
\label{O=ror}
\ee
Note that as an operator mapping ${\cal H}_{\rm phys}$ onto ${\cal
H}$, $\rho$ is a unitary operator. Hence $O$ is a Hermitian operator
acting in ${\cal H}_{\rm phys}$ if and only if $o$ is a Hermitian
operator acting in ${\cal H}$.
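This may also be seen directly: for all $\psi,\phi$ in the relevant
domains, (\ref{inn}), (\ref{O=ror}), $\rho^2=\eta_+$, and the
Hermiticity of $\rho$, $o$, and $\eta_+$ as operators acting in
${\cal H}$ imply
\[\br\psi|O\phi\kt_{_{\eta_+}}=\br\psi|\rho\, o\,\rho\,\phi\kt
=\br\rho\,\psi|o\,\rho\,\phi\kt=\br o\,\rho\,\psi|\rho\,\phi\kt
=\br\eta_+ O\psi|\phi\kt=\br O\psi|\phi\kt_{_{\eta_+}}.\]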
For instance, let ${\cal H}=L^2(\R^d)$ for some $d\in\Z^+$. Then we
can select the usual position $x_i$ and momentum $p_i$ operators to
substitute for $o$ in (\ref{O=ror}). This defines a set of basic
physical observables,
\be
X_i:=\rho^{-1}x_i\rho,~~~~~~~~P_i:=\rho^{-1}p_i\rho,
\label{X-P}
\ee
which we respectively call the \emph{$\eta_+$-pseudo-Hermitian
position and momentum operators}. They furnish an irreducible
unitary representation of the Weyl-Heisenberg algebra,
\be
[X_i,X_j]=[P_i,P_j]=0,~~~~~~[X_i,P_j]=i\hbar\,\delta_{ij}I,~~~~
\mbox{for all $i,j\in\{1,2,\cdots,d$\}}.
\label{W-H}
\ee
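These relations follow from (\ref{X-P}) and the fact that similarity
transformations preserve commutators; for instance,
\[ [X_i,P_j]=\rho^{-1}[x_i,p_j]\,\rho=
\rho^{-1}(i\hbar\,\delta_{ij}I)\,\rho=i\hbar\,\delta_{ij}I. \]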
In principle, we can express the Hamiltonian $H$ as a function of
$X_i$ and $P_i$ and attempt to associate a physical meaning to it by
devising a quantum-to-classical \emph{correspondence principle}. One
way of doing this is to define the underlying classical Hamiltonian
$H_c$ for the system as
\be
H_c(\vec x_c,\vec p_c):=\left.\lim_{\hbar\to 0}
H(\vec X,\vec P)\right|_{\mbox{\tiny$\begin{array}{c}
\vec X\to\vec x_c\\
\vec P\to\vec p_c\end{array}$}},
\label{class-H}
\ee
where $\vec w:=(w_1,w_2,\cdots,w_d)^T$ for $\vec w=\vec X,\vec
P,\vec x_c,\vec p_c$; and $\vec x_c$ and $\vec p_c$ stand for
classical position and momentum variables. Supposing that the
right-hand side of (\ref{class-H}) exists, one may reproduce the
quantum system described by the Hilbert space ${\cal H}_{\rm phys}$
and Hamiltonian $H$ by quantizing the classical system corresponding
to $H_c$ according to
\be
\vec x_c\to \vec X,~~~~~~~
\vec p_c\to \vec P,~~~~~~~
\{\cdot,\cdot\}_c \to -i\hbar^{-1}[\cdot,\cdot],
\label{ph-quantize}
\ee
where $\{\cdot,\cdot\}_c$ and $[\cdot,\cdot]$ stand for the Poisson
bracket and the commutator, respectively. This is called the
\emph{$\eta_+$-pseudo-Hermitian canonical quantization} scheme
\cite{jpa-2004b,jpa-2005b}.
The quantum system described by ${\cal H}_{\rm phys}$ and $H$ admits
a representation in conventional QM in which the Hilbert space is
${\cal H}=L^2(\R^d)$, the observables are Hermitian operators acting
in ${\cal H}$, and the Hamiltonian is given by
\be
h:=\rho H\rho^{-1}.
\label{h-Hermitian}
\ee
Because $\rho:{\cal H}_{\rm phys}\to{\cal H}$ is unitary, so is its
inverse $\rho^{-1}:{\cal H}\to{\cal H}_{\rm phys}$. This in turn
implies that $h$ is a Hermitian operator acting in ${\cal H}$. We
will call the representation of the quantum system that is based on
the Hermitian Hamiltonian $h$ the \emph{Hermitian representation}.
In this representation, we can proceed employing the usual
prescription for identifying the underlying classical Hamiltonian,
namely as
\be
H_c(\vec x_c,\vec p_c):=\left.\lim_{\hbar\to 0}
h(\vec x,\vec p)\right|_{\mbox{\tiny$\begin{array}{c}
\vec x\to\vec x_c\\
\vec p\to\vec p_c\end{array}$}}.
\label{class-H-2}
\ee
Note that this relation is consistent with (\ref{class-H}), because
in view of (\ref{X-P}) and (\ref{h-Hermitian}), $h=f(\vec x,\vec p)$
if and only if $H=f(\vec X,\vec P)$, where we suppose that
$f:\R^{2d}\to\C$ is a piecewise real-analytic function.
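For example, for a Hamiltonian of the standard kinetic-plus-potential
form (an illustrative assumption, with $V$ a real-analytic potential
carrying no explicit $\hbar$-dependence),
\be
H=\frac{\vec P^{\,2}}{2m}+V(\vec X)
\quad\Longleftrightarrow\quad
h=\frac{\vec p^{\,2}}{2m}+V(\vec x),
\ee
and both (\ref{class-H}) and (\ref{class-H-2}) yield
$H_c(\vec x_c,\vec p_c)=\vec p_c^{\;2}/2m+V(\vec x_c)$.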
Each choice of a metric operator $\eta_+\in\fU_1$ corresponds to a
particular quantum system with an associated Hermitian
representation. One can in principle confine one's attention to
this representation, which can be fully understood using
conventional QM. The main disadvantage of employing the Hermitian
representation is that in general the Hamiltonian $h$ is a terribly
complicated nonlocal operator. Therefore, the computation of the
energy levels and the description of the dynamics are more
conveniently carried out in the pseudo-Hermitian representation. In
contrast, it is the Hermitian representation that facilitates the
computation of the expectation values of the physical position and
momentum operators as well as that of the localized states in
physical position or momentum spaces. See
\cite{jpa-2004b,jpa-2005b,jmp-2005,jpa-2006a} for explicit examples.
\subsection{Pseudo-Hermitian and Pseudo-Metric Operators}
\label{sec-ph-metric}
\begin{center}
\parbox{15.5cm}{\textbf{Definition~1:} \emph{A linear operator $A:{\cal
H}\to{\cal H}$ acting in a separable Hilbert space ${\cal H}$ is
said to be {pseudo-Hermitian} if ${\cal D}(A)$ is a dense subset of
${\cal H}$, and there is an everywhere-defined invertible Hermitian
linear operator $\eta:{\cal H}\to{\cal H}$ such that
\be
A^\dagger=\eta A \eta^{-1}.
\label{ph-2}
\ee}}
\end{center}
We will refer to an operator $\eta$ satisfying (\ref{ph-2}) as a
\emph{pseudo-metric operator associated with the operator $A$} and
denote the set of all such operators by $\fM_A$. Clearly, $A$ is
pseudo-Hermitian if and only if $\fM_A$ is nonempty. Furthermore,
for every linear operator $A$, $\fM_A\subseteq \fM_I$ where $I$ is
the identity operator. We will call elements of $\fM_I$
\emph{pseudo-metric operators}. Because they are Hermitian and have
full domain, they are necessarily bounded.\footnote{The definition
of a pseudo-Hermitian operator given in \cite{p1} requires $\eta$ to
be a Hermitian automorphism. This is equivalent to the definition
given above because of the following. Firstly, an automorphism is,
by definition, everywhere-defined. Hence, if it is Hermitian, it
must be bounded. Secondly because the inverse of every Hermitian
automorphism is a Hermitian automorphism, $\eta^{-1}$ is
everywhere-defined and bounded, i.e., $\eta$ is invertible. The fact
that every everywhere-defined invertible operator is an automorphism
is obvious.}
Clearly if $\eta_1$ belongs to $\fM_A$, then so does
$\eta_r:=r\eta_1$, for every nonzero real number $r$. The scaling
$\eta_1\to\eta_r$ of the pseudo-metric operators has no physical
significance. It signifies a spurious symmetry that we will
eliminate by restricting our attention to pseudo-metric operators
$\eta$ whose spectrum $\sigma(\eta)$ is bounded above by $1$, i.e.,
max$[\sigma(\eta)]=1$.\footnote{For a given $\eta_1\in\fM_A$ there
always exists $r_\star\in\R$ such that the spectrum of
$\eta_{r_\star}$ is bounded above by $1$. This follows from the fact
that because $\eta_1$ is an invertible bounded self-adjoint
operator, its spectrum $\sigma(\eta_1)$ is a compact subset of $\R$
excluding zero, \cite{kolmogorov-fomin}. Let $\alpha_1$ and
$\alpha_2$ be respectively the minimum and maximum values of
$\sigma(\eta_1)$. If $\alpha_2>0$, we take $r_\star:=\alpha_2^{-1}$;
if $\alpha_2<0$, we take $r_\star:=\alpha_1^{-1}$.\label{scale}}
The latter form a subset of $\fM_A$ which we will denote by $\fU_A$.
In general, either $\fU_A$ is the empty set and $A$ is not
pseudo-Hermitian or $\fU_A$ is an infinite set\footnote{This is true
unless the Hilbert space is one-dimensional.}; \emph{the
pseudo-metric operator associated with a pseudo-Hermitian operator
is not unique.} To see this, let $\eta\in\fU_A$, $B:{\cal H}\to{\cal
H}$ be an everywhere-defined invertible bounded operator commuting
with $A$, and $\tilde\eta:=B^\dagger\eta B$. Then $B^\dagger$ is an
everywhere-defined bounded operator \cite{yosida} commuting with
$A^\dagger$. These in turn imply that $\tilde\eta$ is an
everywhere-defined invertible Hermitian operator which in view of
(\ref{ph-2}) satisfies $\tilde\eta A\tilde\eta^{-1}=B^\dagger\eta B
A B^{-1}\eta^{-1} B^{-1\dagger}=B^\dagger A^\dagger
B^{-1\dagger}=A^\dagger$, i.e., $\tilde\eta\in\fM_A$. Clearly there
is an infinity of choices for $B$ defining an infinite set of
pseudo-metric operators of the form $B^\dagger\eta B$. It is not
difficult to observe that upon making appropriate scaling of these
pseudo-metric operators one can construct an infinite set of
elements of $\fU_A$, i.e., $\fU_A$ is an infinite set.
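This non-uniqueness is easy to observe numerically. The following
minimal sketch (in Python/NumPy, for a purely illustrative $2\times 2$
matrix; it is not part of the general formalism) checks that
$B^\dagger\eta B$ is again a pseudo-metric operator associated with
$A$ whenever $B$ is invertible and commutes with $A$:

\begin{verbatim}
import numpy as np

# illustrative 2x2 non-Hermitian matrix and an associated pseudo-metric
A   = np.array([[2.5, 1.5], [-1.5, -2.5]])   # eigenvalues +2 and -2
eta = np.array([[1.0, 0.0], [0.0, -1.0]])    # here eta = sigma_3
assert np.allclose(eta @ A @ np.linalg.inv(eta), A.conj().T)

# any invertible B commuting with A yields another element of M_A
B = 0.7 * np.eye(2) + 0.3 * A        # a polynomial in A commutes with A
eta_new = B.conj().T @ eta @ B
assert np.allclose(eta_new @ A @ np.linalg.inv(eta_new), A.conj().T)
\end{verbatim}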
The non-uniqueness of pseudo-metric operators associated with a
pseudo-Hermitian operator motivates the following definition.
\begin{center}
\parbox{15.5cm}{
\textbf{Definition~2:} \emph{Let $A$ be a pseudo-Hermitian operator
acting in ${\cal H}$ and $\eta\in \fM_I$ be a pseudo-metric
operator. Then $A$ is said to be {$\eta$-pseudo-Hermitian} if
$\eta\in\fM_A$.}} \end{center} Clearly, in order to determine
whether a pseudo-Hermitian operator $A$ is $\eta$-pseudo-Hermitian
one must know both $A$ and $\eta$. It is quite possible that a
pseudo-Hermitian operator fails to be $\eta$-pseudo-Hermitian for a
given $\eta\in \fM_I$.\footnote{The term ``$\eta$-pseudo-Hermitian
operator'' is used to emphasize that one works with a particular
pseudo-metric operator. It coincides with the notion of a
``$J$-Hermitian operator'' used by mathematicians
\cite{pease,azizov} and the old notion of a ``pseudo-Hermitian
operator'' used occasionally in the context of indefinite-metric
quantum theories \cite{case-1954,sudarshan}.}
Given a pseudo-Hermitian operator $A$, the set $\fM_A$ may or may
not include a positive-definite element $\eta_+$. If such a
positive-definite element exists, we can use it to construct the
inner product
\be
\br\cdot|\cdot\kt_{\eta_+}:=\br\cdot|\eta_+\cdot\kt
\label{inn-eta2}
\ee
that renders $A$ Hermitian,
\be
\br\cdot|A\cdot\kt_{\eta_+}=\br A \cdot|\cdot\kt_{\eta_+}.
\label{eta-self-adjoint-1}
\ee
This means that if we endow the underlying vector space ${\cal V}$
of ${\cal H}$ with the inner product (\ref{inn-eta2}), we find a
separable Hilbert space ${\cal H}_{_{\eta_+}}$ such that $A:{\cal
H}_{_{\eta_+}}\to {\cal H}_{_{\eta_+}}$ is Hermitian. In particular
the spectrum $\sigma(A)$ of $A$ is real. If $\sigma(A)$ happens to
be discrete, we can construct an orthonormal basis $\{\psi_n\}$ of
${\cal H}_{_{\eta_+}}$ consisting of the eigenvectors of $A$. As a
sequence of elements of ${\cal H}$, $\{\psi_n\}$ is a Riesz basis.
Hence $A:{\cal H}\to{\cal H}$ is diagonalizable. This shows that for
a densely-defined operator having a discrete spectrum, the condition
that it is a diagonalizable operator having a real spectrum is
necessary for the existence of a metric operator $\eta_+$ among the
elements of $\fM_A$. The existence of $\eta_+$, in particular,
implies that $\fM_A$ is non-empty. Hence, $A$ is necessarily
pseudo-Hermitian.
It is not difficult to show that the same conditions are also
sufficient for the inclusion of a metric operator in $\fM_A$,
\cite{p2,p3}. Suppose that $A:{\cal H}\to{\cal H}$ is a
densely-defined diagonalizable operator having a real spectrum. Let
$\{\psi_n\}$ be a Riesz basis consisting of the eigenvectors of $A$
and $\{(\psi_n,\phi_n)\}$ be its biorthonormal extension. Then, in
view of the spectral representation of $A$, i.e., $A=\sum_{n=1}^N
a_n|\psi_n\kt\br\phi_n|$ where $a_n$ are eigenvalues of $A$, and the
basic properties of the biorthonormal system $\{(\psi_n,\phi_n)\}$,
we can easily show that $\eta_+$, as defined by
\be
\eta_+:=\sum_{n=1}^N|\phi_n\kt\br\phi_n|,
\label{eta=quasi}
\ee
is a positive-definite operator belonging to $\fM_A$. As we
explained in Subsection~\ref{sec-biortho}, this is the unique metric
operator whose inner product makes $\{\psi_n\}$ orthonormal.
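For finite-dimensional (or suitably truncated) problems, the
construction (\ref{eta=quasi}) can be carried out directly on a
computer. The following minimal Python/NumPy sketch, for an
illustrative $2\times 2$ matrix with a real nondegenerate spectrum,
builds $\eta_+$ from the biorthonormal system and checks
positive-definiteness together with (\ref{ph-2}):

\begin{verbatim}
import numpy as np

A = np.array([[2.5, 1.5], [-1.5, -2.5]])  # eigenvalues +2 and -2
evals, S = np.linalg.eig(A)               # columns of S: eigenvectors psi_n
Phi = np.linalg.inv(S).conj().T           # columns of Phi: biorthonormal phi_n
eta_plus = Phi @ Phi.conj().T             # eq. (eta=quasi): sum_n |phi_n><phi_n|

assert np.all(np.linalg.eigvalsh(eta_plus) > 0)   # positive-definite
assert np.allclose(eta_plus @ A @ np.linalg.inv(eta_plus),
                   A.conj().T)                    # eq. (ph-2)
\end{verbatim}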
Again if $\fM_A$ includes a metric operator $\eta_+$, then
$\tilde\eta_+=B^\dagger\eta_+ B$ for any everywhere-defined,
bounded, invertible operator $B$ commuting with $A$ is also a metric
operator belonging to $\fM_A$. This shows that the subset $\fM^+_A$
of $\fM_A$ that consists of metric operators is either empty or has
an infinity of elements, \cite{p4,jmp-2003}. The same holds for
$\fU^+_A:=\fM^+_A\cap \fU_A$. In summary, \emph{for a Hilbert space
with dimension $N>1$, either there is no metric operator $\eta_+$
satisfying (\ref{eta-self-adjoint-1}) or there is an infinite set of
such metric operators that in addition fulfil ${\rm
max}[\sigma(\eta_+)]=1$.}
One may generalize the notion of the inner product by replacing
Condition~(\emph{i}) of Subsection~\ref{sec-inn} by the following
weaker condition.
\begin{itemize}
\item[](\'{\em i}\,) $\br\cdot|\cdot\kt$ is \emph{nondegenerate},
i.e., given $\psi\in{\cal H}$ the condition ``$\br\phi|\psi\kt=0$
for all $\phi\in{\cal H}$'' implies ``$\psi=0$''.
\end{itemize}
A function $\pbr\cdot|\cdot\pkt:{\cal H}\times{\cal H}\to\C$ which
satisfies conditions (\'{\em i}\,), (\emph{ii}) and (\emph{iii}) is
called a \emph{pseudo-inner product}. Clearly every inner product on
${\cal H}$ is a pseudo-inner product. The converse is not true,
because in general there are pseudo-inner products
$\pbr\cdot|\cdot\pkt$ that fail to satisfy (\emph{i}). This means
that there may exist nonzero $\psi\in{\cal H}$ such that
$\pbr\psi|\psi\pkt\leq 0$. Such a pseudo-inner product is called an
\emph{indefinite inner product}. It is not difficult to see that
given a pseudo-metric operator $\eta\in\fM_I$, the following
relation defines a pseudo-inner product on ${\cal H}$.
\be
\pbr\cdot|\cdot\pkt=\br\cdot|\eta\cdot\kt=:
\br\cdot|\cdot\kt_{_\eta}.
\label{inn-3}
\ee
Let $A:\cH\to\cH$ be a densely-defined operator, $\eta:\cH\to\cH$ be
a pseudo-metric operator, and
$A_\eta^\dagger:=\eta^{-1}A^\dagger\eta$. Then, for
all $\psi_1\in{\cal D}(A)$ and $\psi_2\in{\cal D}(A_\eta^\dagger)$,
we have $\br\psi_1|A_\eta^\dagger\psi_2\kt_{_\eta}=\br
A\psi_1|\psi_2\kt_{_\eta}$. In particular, if $A$ is
$\eta$-pseudo-Hermitian, $A_\eta^\dagger=A$ and
\be
\br\psi_1|A\psi_2\kt_{_\eta}=\br A\psi_1|\psi_2\kt_{_\eta}.
\label{ph-3}
\ee
This means that every pseudo-Hermitian operator $A$ is Hermitian
with respect to the pseudo-inner product $\br\cdot|\cdot\kt_\eta$
defined by an arbitrary element $\eta$ of $\fM_A$. It is not
difficult to see that the converse is also true: If $\eta A$ and
$A^\dagger\eta$ have the same domains and $A$ satisfies (\ref{ph-3})
for some $\eta\in\fM_I$, then $A$ is pseudo-Hermitian and
$\eta\in\fM_A$.
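For completeness, the computation behind (\ref{ph-3}) and the more
general relation preceding it is a one-line application of
(\ref{inn-3}) and the definition of $A_\eta^\dagger$: for
$\psi_1\in{\cal D}(A)$ and $\psi_2\in{\cal D}(A_\eta^\dagger)$,
\be
\br\psi_1|A_\eta^\dagger\psi_2\kt_{_\eta}=
\br\psi_1|\eta\,\eta^{-1}A^\dagger\eta\,\psi_2\kt=
\br\psi_1|A^\dagger\eta\,\psi_2\kt=
\br A\psi_1|\eta\,\psi_2\kt=
\br A\psi_1|\psi_2\kt_{_\eta}.
\ee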
An \emph{indefinite-metric space} is a complex vector space ${\cal
V}$ endowed with a function $\pbr\cdot|\cdot\pkt:{\cal V}\times{\cal
V}\to\C$ satisfying (\'{\em i}\,), (\emph{ii}) and (\emph{iii}),
\cite{bognar,azizov}. One can turn a Hilbert space into an
indefinite-metric space by endowing the underlying vector space with
an indefinite inner product $\br\cdot|\cdot\kt_\eta$ whose
pseudo-metric operator $\eta$ is not positive-definite. The latter
is called an \emph{indefinite metric operator}. It is important to
realize that the study of general indefinite-metric spaces is not
the same as the study of consequences of endowing a given Hilbert
space with an indefinite inner product. The latter, which is known
as the $\eta$-formalism, avoids a host of subtle questions such as
the nature of the topology of indefinite-metric spaces
\cite{nagy,nakanishi}.
The indefinite-metric quantum theories
\cite{pauli-1943,sudarshan,nagy,nakanishi} involve the study of
particular indefinite-metric spaces having a fixed indefinite inner
product. In this sense they share the philosophy adopted in
conventional QM; \emph{the indefinite-inner product is fixed from
the outset and the theory is built upon this choice.} The situation
is just the opposite in pseudo-Hermitian QM where the space of
state-vectors is supposed to have the structure of a (separable)
Hilbert space with an inner product which is neither indefinite nor
fixed \emph{a priori}.\footnote{Failure to pay attention to this
point is responsible for confusing pseudo-Hermitian QM with
indefinite-metric quantum theories. See for example
\cite{kleefeld}.}
In pseudo-Hermitian QM, the physical Hilbert space is constructed
using the following prescription. First one endows the vector space
of state-vectors with a fixed auxiliary (positive-definite) inner
product. This defines the reference Hilbert space ${\cal H}$ in
which all the relevant operators act. Next, one chooses a
Hamiltonian operator that acts in ${\cal H}$, is diagonalizable, has
a real spectrum, but need not be Hermitian. Finally, one determines
the (positive-definite) inner products on ${\cal H}$ that render the
Hamiltonian Hermitian. Because there is an infinity of such inner
products, one obtains an infinite class of kinematically different
but dynamically equivalent quantum systems. The connection to
indefinite-metric theories is that for the specific ${\cal
PT}$-symmetric models whose study motivated the formulation of
pseudo-Hermitian QM, there is a simple and universal choice for an
indefinite inner product, namely the $\cP\cT$-inner product
(\ref{inn-PT}), which makes the Hamiltonian Hermitian. But this
indefinite inner product does not define the physical Hilbert space
of the theory.
Clearly the basic ingredient in both the indefinite-metric and
pseudo-Hermitian QM is the pseudo-metric operator. In general the
spectrum of a pseudo-metric operator need not be discrete. However,
for simplicity, we first consider a pseudo-metric operator
$\eta\in\fM_{I}$ that has a discrete spectrum. We can express it
using its spectral representation as
\be
\eta=\sum_{n=1}^N e_n\:
|\varepsilon_n\kt\br\varepsilon_n|.
\label{spec-rep-eta}
\ee
Because $\eta$ is a bounded invertible Hermitian operator, its
eigenvalues $e_n$ are nonzero and its eigenvectors
$|\varepsilon_n\kt$ form a complete orthonormal basis of the Hilbert
space ${\cal H}$. Therefore, we can define
\be
B:=\sum_{n=1}^N |e_n|^{-1/2}\:
|\varepsilon_n\kt\br\varepsilon_n|=|\eta|^{-1/2},
\label{B=}
\ee
and use it to obtain a new pseudo-metric operator, namely
\be
\tilde\eta:=B^\dagger\eta B=
\sum_{n=1}^N {\rm sgn}(e_n)\:
|\varepsilon_n\kt\br\varepsilon_n|.
\label{tilde-eta=}
\ee
The presence of a continuous part of the spectrum of $\eta$ does not
lead to any difficulty as far as the above construction is
concerned. Because $\eta$ is Hermitian, one can always define
$|\eta|:=\sqrt{\eta^2}$ and set $B:=|\eta|^{-1/2}$. Both of these
operators are bounded, positive-definite, and invertible. Hence
$\tilde\eta:=B^\dagger\eta B$ is an element of $\fM_I$ whose
spectrum is a subset of $\{-1,1\}$.
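A minimal numerical sketch of this step (Python/NumPy, for an
illustrative $2\times 2$ indefinite pseudo-metric operator) reads:

\begin{verbatim}
import numpy as np

eta = np.array([[0.0, 2.0], [2.0, 3.0]])   # Hermitian, invertible;
                                           # eigenvalues 4 and -1
e, V = np.linalg.eigh(eta)
B = V @ np.diag(np.abs(e) ** -0.5) @ V.conj().T   # B = |eta|^{-1/2}
eta_tilde = B.conj().T @ eta @ B
print(np.linalg.eigvalsh(eta_tilde))       # approximately [-1.  1.]
\end{verbatim}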
If we perform the transformation $\eta\to\tilde\eta$ on a
(positive-definite) metric operator $\eta_+$, we find $\tilde\eta=I$
and $\br\cdot|\cdot\kt_{_{\tilde\eta}}=\br\cdot|\cdot\kt$. This
observation is used by Pauli to argue that we would only gain
``something essentially new if we take into consideration indefinite
bilinear forms $\cdots$,'' \cite{pauli-1943}.\footnote{Pauli uses
the term ``bilinear form'' for what we call a ``pseudo-inner
product''.} To provide a precise justification for this assertion,
let $\eta_+$ be a (positive-definite) metric operator and ${\cal
H}_{\eta_+}$ be the Hilbert space having the inner product
$\br\cdot|\cdot\kt_{_{\eta_+}}$. Then, for all
$\psi_1,\psi_2\in{\cal H}$,
\be
\br B\psi_1|B\psi_2\kt_{_{\eta_+}}=
\br B\psi_1|\eta_+ B\psi_2\kt=\br \psi_1|B^\dagger
\eta_+ B\psi_2\kt=\br \psi_1|\tilde\eta_+\psi_2\kt=
\br \psi_1|\psi_2\kt,
\label{unitary-B}
\ee
where we have used the identities $B:=|\etap|^{-1/2}=\etap^{-1/2}$
and $\tilde\eta_+:=B^\dagger\eta_+ B=I$. Eq.~(\ref{unitary-B}) shows
that $B$ is a unitary operator mapping ${\cal H}$ onto ${\cal
H}_{\eta_+}$. As a result, the quantum system $s_{_{\eta_+}}$ whose
state-vectors belong to ${\cal H}_{\eta_+}$ is unitary-equivalent to
the quantum system $s_{_I}$ whose Hilbert space is the reference
Hilbert space ${\cal H}$. They describe the same physical system.
This is the conclusion reached by Pauli in 1943, \cite{pauli-1943}.
There is a simple objection to this argument. It ignores the
dynamical aspects of the theory. As we will see below, the
description of the Hamiltonian and the time-evolution operator can
be very complicated in the ``\emph{Hermitian representation}'' of
the physical system. Therefore, although considering ${\cal
H}_{\eta_+}$ with a (positive-definite) metric operator ${\eta_+}$
generally yields an equivalent ``\emph{pseudo-Hermitian
representation}'' of the conventional QM, a clever choice of
$\eta_+$ may be of practical significance in deriving the physical
properties of the system under investigation. As we discuss in
Subsection~\ref{rqm-cq-qft}, it turns out that indeed these new
representations play a key role in the resolution of one of the oldest
problems of modern physics, namely the problem of negative
probabilities in relativistic QM of Klein-Gordon fields
\cite{cqg,ap,ijmpa-2006,ap-2006a} and certain quantum field theories
\cite{bender-lee-model}.
We end this subsection with the following remarks.
\begin{itemize}
\item Strictly speaking, Pauli's above-mentioned argument does not
hold if one keeps $\etap$ positive-definite but does not
require it to be invertible or bounded \cite{kato,shubin}. For
example one might consider the case that $\etap^{-1}$ exists but is
unbounded. In this case, $\etap$ is not onto and $B$ fails to be a
unitary operator. This type of generalized metric operators and the
corresponding non-unitary transformations $B$ have found
applications in the description of resonances
\cite{lowdin,antoniou,reed-simon-4}. They also appear in the
application of pseudo-Hermitian quantum mechanics for typical
$\cP\cT$-symmetric and non-$\cP\cT$-symmetric models. For these
models the Hamiltonian operator is a second order differential
operator $H$ acting in an appropriate function space $\cF$ that
renders the eigenvalue problem for $H$ well-posed. As discussed in
great detail in \cite{cjp-2006}, if $H$ is to serve as the
Hamiltonian operator for a unitary quantum system, one must
construct an appropriate reference Hilbert space $\cH$ in which $H$
acts as a quasi-Hermitian operator. This in particular implies the
existence of an associated metric operator that satisfies the
boundedness and other defining conditions of the metric operators.
Suppose $H'$ is a differential operator acting in a function space
$\cF$ and having a discrete spectrum, i.e., there is a countable set
of linearly-independent eigenfunctions of $H'$ with isolated
non-degenerate or finitely degenerate eigenvalues. We can use $H'$
and $\cF$ to define a unitary quantum system as follows
\cite{jpa-2003,jpa-2004b}.
First, we introduce $\sF$ to be the subset of $\cF$ that contains
the eigenfunctions of $H'$ with real eigenvalues, and let $\cL$ be
the span of $\sF$, i.e., $\cL:=\left\{\:\sum_{m=1}^M
c_m\psi_m\:\big|\: M\in\Z^+, c_m\in\C,\psi_m\in\sF\:\right\}$. Next,
we endow $\cL$ with the inner product \cite{kretschmer-szymanowski}
\be
\br\: \sum_{j=1}^J c_j\psi_j\,\big|\,
\sum_{k=1}^K d_k\psi_k\:\kt:=\sum_{m=1}^{{\rm min}(J,K)}
c_m^*d_m,
\label{KS-inn}
\ee
and Cauchy-complete\footnote{Every separable inner product space $N$
can be extended to a separable Hilbert space $\cK$, called its
Cauchy completion, in such a way that $N$ is dense in $\cK$ and
there is no proper Hilbert subspace of $\cK$ with the same
properties, \cite{reed-simon}.} the resulting inner product space
into a Hilbert space $\cK$. We can then identify the restriction of
$H'$ onto $\cL$, that we denote by $H$, with the Hamiltonian
operator of a quantum system. It is a densely-defined operator
acting in $\cK$, because its domain has a subset $\cL$ that is dense
in $\cK$, \cite{reed-simon}. In fact, in view of (\ref{KS-inn}) and
the fact that $\cL$ is dense in $\cK$, the eigenvectors of $H$ form
an orthonormal basis of $\cK$. Moreover, $H:\cK\to\cK$ has, by
construction, a real spectrum. Therefore, it is a Hermitian
operator.
This construction is quite difficult to implement in practice.
Instead, one takes the reference Hilbert space $\cH$ to be an
$L^2$-space, ensures that the given differential operator that is
now denoted by $H$ has a real spectrum, and that the set of its
eigenfunctions $\sF$ is dense in $\cH$. Then, one constructs an
invertible positive operator $\eta_+$ satisfying the
pseudo-Hermiticity condition,
\be
H^\dagger=\eta_+H\,\eta_+^{-1},
\label{ph-new5}
\ee
and uses it to construct the physical Hilbert space and the
Hermitian representation of the system.
For most of the concrete models that have so far been studied the
obtained $\eta_+$ turns out not to be everywhere-defined or bounded.
But, these qualities are highly sensitive to the choice of the
reference Hilbert space that may also be considered as a degree of
freedom of the formalism. The mathematical data that have physical
content are the eigenfunctions of $H$ and their linear combinations,
i.e., elements of $\cL$. Therefore, the only physical restriction on
the reference Hilbert spaces ${\cal H}$ is that $\cL$ be a dense
subset of $\cH$. This means that the question of the existence of a
genuine metric operator associated with $H$ requires addressing the
problem of the existence of a (reference) Hilbert space $\cH$ such
that
\begin{enumerate}
\item $\cL$ is a dense subset of $\cH$, and
\item there is a metric operator
$\eta_+:\cH\to\cH$ satisfying (\ref{ph-new5}).
\end{enumerate}
It is also possible that given a Hilbert space $\cH$ fulfilling the
first of these conditions and an unbounded invertible positive
operator $\eta_+:\cH\to\cH$ satisfying (\ref{ph-new5}), one can
construct a genuine bounded metric operator fulfilling the latter
condition. These mathematical problems require a systematic study of
their own. Following physicists' tradition, we shall ignore
mathematical subtleties related to these problems when we deal with
specific models that allow for an explicit investigation.
\item Let $A$ be a densely-defined linear operator with a
non-empty $\fU^+_A$ and $\eta_+\in \fU^+_A$. Because
$B^{-1}:=\eta_+^{1/2}:{\cal H}_{_{\eta_+}}\to{\cal H}$ is a unitary
operator and $A:{\cal H}_{_{\eta_+}}\to{\cal H}_{_{\eta_+}}$ is
Hermitian, the operator {\large$a$}$:=B^{-1} A B$ is a Hermitian
operator acting in ${\cal H}$. This shows that $A:{\cal H}\to{\cal
H}$ is related to a Hermitian operator {\large$a$}$:{\cal H}\to{\cal
H}$ via a similarity transformation,
\be
A=B\,\mbox{\large$a$}\,B^{-1}.
\label{A=rar}
\ee
Such an operator is called {\em quasi-Hermitian},
\cite{geyer}.\footnote{A linear densely-defined operator $A:{\cal
H}\to{\cal H}$ acting in a Hilbert space ${\cal H}$ is said to be
\emph{quasi-Hermitian} if there exists an everywhere-defined,
bounded, invertible linear operator $B:{\cal H}\to{\cal H}$ and a
Hermitian operator {\large$a$}$:{\cal H}\to{\cal H}$ such that
$A=B\mbox{\large$a$}B^{-1}$. The above analysis shows that $A$ is
quasi-Hermitian if and only if it is pseudo-Hermitian and $\fU^+_A$
is nonempty. In mathematical literature, the term quasi-Hermitian is
used for bounded operators $A$ satisfying $A^\dagger T=TA$ for a
positive but possibly non-invertible linear operator $T$,
\cite{dieudonne}. These and their various generalizations and
special cases have been studied in the context of symmetrizable
operators \cite{reid-1951,lax-1954,silberstein,zannen,kharazov}. For
a more recent review see \cite{istratescu}.}
\item Let $B:=\eta_+^{-1/2}$, $B':{\cal H}\to{\cal H}_{_{\eta_+}}$
be an arbitrary unitary operator, and
$\mbox{\large$a$}':={B'}^{-1}AB'$. Then in view of (\ref{A=rar}),
$\mbox{\large$a$}'=U^{-1}\mbox{\large$a$}\,U$, where $U:{\cal
H}\to{\cal H}$ is defined by $U:=B^{-1}B'$. Because both $B$ and
$B'$ are unitary operators mapping ${\cal H}$ onto ${\cal
H}_{_{\eta_+}}$, $\mbox{\large$a$}'$ and $U$ are respectively
Hermitian and unitary operators acting in ${\cal H}$. Conversely,
for every unitary operator $U:{\cal H}\to{\cal H}$ the operator
$B':=BU$ is a unitary operator mapping ${\cal H}$ onto ${\cal
H}_{_{\eta_+}}$ and $\mbox{\large$a$}':={B'}^{-1}AB'$ is a Hermitian
operator acting in ${\cal H}$. These observations show that the most
general Hermitian operator $\mbox{\large$a$}':{\cal H}\to{\cal H}$
that is related to $A$ via a similarity transformation,
\be
A=B'\,\mbox{\large$a$}'\,{B'}^{-1},
\label{A=rar-gen}
\ee
is obtained by taking
\be
B'=BU=\eta_+^{-1/2}U,
\label{B-prime=}
\ee
where $U$ is an arbitrary unitary transformation acting in ${\cal
H}$, i.e., $U\in{\cal U}({\cal H})$. If we identify $A$ with the
Hamiltonian operator for a quantum system and employ the formalism
of pseudo-Hermitian QM, the metric operator $\eta_+$ defines the
physical Hilbert space as ${\cal H}_{\rm phys}:={\cal
H}_{_{\eta_+}}$. Being Hermitian operators acting in ${\cal H}_{\rm
phys}$, the observables $O$ can be constructed using the unitary
operator $B:{\cal H}\to{\cal H}_{_{\eta_+}}$ and Hermitian operators
$o:{\cal H}\to{\cal H}$ according to
\be
O=B\,o{B}^{-1}=\eta_+^{-1/2}o\,\eta_+^{1/2}.
\label{observables-construct}
\ee
One can use any other unitary operator $B':{\cal H}\to{\cal
H}_{_{\eta_+}}$ for this purpose. Different choices for $B'$
correspond to different one-to-one mappings of the observables $O$
to Hermitian operators $o$. According to (\ref{B-prime=}) if
$o=B^{-1}O\,B$, then $o':={B'}^{-1}O\,B'=U^{-1}o\,U$. Therefore
making different choices for $B'$ corresponds to performing quantum
canonical transformations in ${\cal H}$. This in turn means that,
without loss of generality, we can identify the physical observables
of the system in its pseudo-Hermitian representation using
(\ref{observables-construct}).
\end{itemize}
\subsection{Spectral Properties of Pseudo-Hermitian Operators}
\label{sec-spec-prop}
Consider a pseudo-Hermitian operator $A$ acting in an
$N$-dimensional separable Hilbert space ${\cal H}$, with
$N\leq\infty$, and let $\eta\in\fM_A$. The spectrum of $A$ is the
set $\sigma(A)$ of complex numbers $\lambda$ such that the operator
$A-\lambda I$ is not invertible. Let $\lambda\in \sigma(A)$. Then
$A-\lambda I$ is not invertible, and because $\eta$ is invertible,
$\eta(A-\lambda I)\eta^{-1}=A^\dagger-\lambda I$ is not invertible
either. This shows that $\lambda\in\sigma(A^\dagger)$. But the
spectrum of $A^\dagger$ is the complex-conjugate of the spectrum of
$A$, i.e., $\lambda\in\sigma(A^\dagger)$ if and only if
$\lambda^*\in\sigma(A)$, \cite{kato}. This argument shows that as a
subset of the complex plane $\C$, the spectrum of a pseudo-Hermitian
operator is symmetric under the reflection about the real axis,
\cite{azizov,bognar}. In particular, the eigenvalues $a_n$ of $A$
(for which $A-a_nI$ is not one-to-one) are either real or come in
complex-conjugate pairs, \cite{pauli-1943,p1}.
Let $A$ be a diagonalizable pseudo-Hermitian operator with a
discrete spectrum \cite{reed-simon}.\footnote{This in particular
implies that the eigenvalues of $A$ have finite multiplicities.}
Then, one can use a Riesz basis $\{\psi_n\}$ consisting of a set of
eigenvectors of $A$ and the associated biorthonormal basis
$\{\phi_n\}$ to yield the following spectral representation of $A$
and a pseudo-metric operator $\eta\in\fM_A$, \cite{p1,p4}.
\bea
A&=&\sum_{n_0=1}^{N_0} a_{n_0} |\psi_{n_0}\kt\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left(\alpha_{\nu} |\psi_{\nu}\kt\br\phi_{\nu}|+
\alpha_{\nu}^* |\psi_{-\nu}\kt\br\phi_{-\nu}|\right),
\label{ph-sp-decom}\\
\eta&:=&\sum_{n_0=1}^{N_0} \sigma_{n_0}\,|\phi_{n_0}\kt\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\phi_{\nu}\kt\br\phi_{-\nu}|+
|\phi_{-\nu}\kt\br\phi_{\nu}|\right),
\label{eta-sp-decom}
\eea
where $n_0$ labels the real eigenvalues $a_{n_0}$ (if any), $\nu$
labels the complex eigenvalues $\alpha_{\nu}$ with positive
imaginary part (if any), $-\nu$ labels the complex eigenvalues
$\alpha_{-\nu}=\alpha_{\nu}^*$ with negative imaginary part, the
eigenvalues with different spectral labels $n\in\{n_0,\nu,-\nu\}$
need not be distinct, $0\leq N_0,\cun\leq\infty$,
$\sigma_{n_0}\in\{-1,1\}$ are arbitrary, and we have
\bea
A|\psi_{n_0}\kt=a_{n_0}|\psi_{n_0}\kt,&&
A|\psi_{\pm\nu}\kt=\alpha_{\pm\nu}|\psi_{\pm\nu}\kt,
\label{eg-va-A}\\
\br\phi_{m_0}|\psi_{n_0}\kt=\delta_{m_0,n_0}&&
\mbox{for all $m_0,n_0\in\{1,2,3,\cdots, N_0\}$},
\label{biortho-A-1}\\
\br\phi_{\fg\mu}|\psi_{\fh\,\nu}\kt=
\delta_{\fg,\fh}\, \delta_{\mu,\nu}&&
\mbox{for all $\fg,\fh\in\{-,+\}$ and
$\mu,\nu\in\{1,2,3,\cdots,\cun\}$}.
\label{biortho-A-2}
\eea
Consider the set ${\cal L}(\{\psi_n\})$ of finite linear
combinations of $\psi_n$'s, as defined by (\ref{finite-span}).
According to (\ref{ph-sp-decom}), elements of ${\cal L}(\{\psi_n\})$
belong to the domain of $A$, i.e., ${\cal L}(\{\psi_n\})\subseteq
{\cal D}(A)$. But because $\{\psi_n\}$ is a basis, ${\cal
L}(\{\psi_n\})$ is a dense subset of ${\cal H}$. This implies that
${\cal D}(A)$ is a dense subset of ${\cal H}$.
Next, we show that the operator $\eta$ defined by
(\ref{eta-sp-decom}) does actually define a pseudo-metric operator,
i.e., it is an everywhere-defined, bounded, invertible, Hermitian
operator.
Let $\{\chi_n\}$ be an orthonormal basis of ${\cal H}$ and $B:{\cal
H}\to{\cal H}$ be the everywhere-defined, bounded, invertible
operator that maps $\{\chi_n\}$ onto the Riesz basis $\{\psi_n\}$,
i.e., $\psi_n=B\chi_n$ for all $n$. It is not difficult to see that
the biorthonormal basis $\{\phi_n\}$ may be mapped onto $\{\chi_n\}$
by $B^\dagger$, $\chi_n=B^\dagger\phi_n$ for all $n$. We can use
this relation and (\ref{eta-sp-decom}) to compute
\be
\tilde\eta:=B^\dagger\eta B=
\sum_{n_0=1}^{N_0} \sigma_{n_0}\,|\chi_{n_0}\kt\br\chi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\chi_{\nu}\kt\br\chi_{-\nu}|+
|\chi_{-\nu}\kt\br\chi_{\nu}|\right).
\label{tilde-eta-sp-decom}
\ee
It is not difficult to show that $\tilde\eta$ is a pseudo-metric
operator, $\tilde\eta\in\fM_I$. To see this, let $\psi\in{\cal H}$
be arbitrary. Then because $\{\chi_n\}$ is orthonormal,
\be
\psi=\sum_{n=1}^N\br\chi_n|\psi\kt\:\chi_n=
\sum_{n_0=1}^{N_0}\br\chi_{n_0}|\psi\kt\:\chi_{n_0}+
\sum_{\nu=1}^{\cun}\left(\br\chi_{\nu}|\psi\kt\:\chi_{\nu}+
\br\chi_{-\nu}|\psi\kt\:\chi_{-\nu}\right).
\label{psi-expand-1}
\ee
In view of (\ref{tilde-eta-sp-decom}) and (\ref{psi-expand-1}), we
have
\be
\tilde\eta\,\psi=\sum_{n_0=1}^{N_0}
\sigma_{n_0}\br\chi_{n_0}|\psi\kt\:\chi_{n_0}+
\sum_{\nu=1}^{\cun}\left(\br\chi_{-\nu}|\psi\kt\:\chi_{\nu}+
\br\chi_{\nu}|\psi\kt\:\chi_{-\nu}\right).
\label{tilde-eta-psi}
\ee
In particular,
\be
\parallel\tilde\eta\,\psi\parallel^2=
\sum_{n_0=1}^{N_0} |\br\chi_{n_0}|\psi\kt|^2+
\sum_{\nu=1}^{\cun}\left(|\br\chi_{-\nu}|\psi\kt|^2+
|\br\chi_{\nu}|\psi\kt|^2\right) = \:\parallel\psi\parallel^2.
\label{norm=norm}
\ee
This shows that $\tilde\eta$ is not only everywhere-defined but
also bounded. Indeed, we have $\parallel\tilde\eta\parallel=1$.
Because $\tilde\eta$ is a bounded everywhere-defined operator,
$\tilde\eta^\dagger$ is also everywhere-defined and as is obvious
from (\ref{tilde-eta-sp-decom}), it coincides with $\tilde\eta$,
i.e., $\tilde\eta$ is Hermitian. Finally, in view of
(\ref{tilde-eta-sp-decom}), we can easily show that
$\tilde\eta^2=I$. In particular, $\tilde\eta^{-1}=\tilde\eta$ is
bounded and $\tilde\eta$ is invertible. This completes the proof of
$\tilde\eta\in\fM_I$.
Next, we observe that $\tilde\eta\in\fM_I$ implies $\eta\in\fM_I$.
This is because according to (\ref{tilde-eta-sp-decom}),
$\eta={B^{\dagger}}^{-1}\tilde\eta B^{-1}$ and $B^{-1}$ and
${B^{\dagger}}^{-1}$ are bounded everywhere-defined invertible
operators. Therefore, $\eta$ as defined by (\ref{eta-sp-decom}) is a
pseudo-metric operator. Let us also note that the inverse of $\eta$
is given by
\be
\eta^{-1}=B\tilde\eta B^\dagger=\sum_{n_0=1}^{N_0} \sigma_{n_0}\,|\psi_{n_0}\kt\br\psi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\psi_{\nu}\kt\br\psi_{-\nu}|+
|\psi_{-\nu}\kt\br\psi_{\nu}|\right).
\label{eta-inverse-sp-decom}
\ee
We can easily show that $\eta$ belongs to $\fM_A$ by substituting
(\ref{ph-sp-decom}), (\ref{eta-sp-decom}), and
(\ref{eta-inverse-sp-decom}) in $\eta A\eta^{-1}$ and checking that
the result coincides with $A^\dagger$.
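For a finite-dimensional example this verification can also be done
numerically. The following Python/NumPy sketch (for an illustrative
$2\times 2$ matrix with a single complex-conjugate pair of
eigenvalues) builds $\eta$ according to (\ref{eta-sp-decom}) and
checks that it is a Hermitian element of $\fM_A$:

\begin{verbatim}
import numpy as np

A = np.array([[1.0, 2.0], [-3.0, 1.0]])   # eigenvalues 1 +/- i*sqrt(6)
evals, S = np.linalg.eig(A)               # columns: psi_nu, psi_{-nu}
Phi = np.linalg.inv(S).conj().T           # columns: phi_nu, phi_{-nu}

# eq. (eta-sp-decom) with no real eigenvalues present:
eta = (np.outer(Phi[:, 0], Phi[:, 1].conj())
       + np.outer(Phi[:, 1], Phi[:, 0].conj()))

assert np.allclose(eta, eta.conj().T)                        # Hermitian
assert np.allclose(eta @ A @ np.linalg.inv(eta), A.conj().T) # eta in M_A
\end{verbatim}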
In Ref.~\cite{p4}, it is shown that any element $\eta\in\fM_A$ can
be expressed in the form (\ref{eta-sp-decom}) where $\{\phi_n\}$ is
the biorthonormal basis associated with some (Riesz) basis
$\{\psi_n\}$ consisting of the eigenvectors of $A$. As any two Riesz
bases are related by a bounded everywhere-defined invertible
operator $L:{\cal H}\to{\cal H}$, one may conclude that the elements
of $\fM_A$ have the following general form.
\be
\eta'=L^\dagger \eta\, L,
\label{general-eta}
\ee
where $\eta$ is the pseudo-metric operator (\ref{eta-sp-decom}) that
is defined in terms of a fixed (but arbitrary) (Riesz) basis
$\{\psi_n\}$ consisting of the eigenvectors of $A$ and
$\sigma_{n_0}$ are a set of arbitrary signs. The operator $L$
appearing in (\ref{general-eta}) maps eigenvectors $\psi_n$ of $A$
to eigenvectors $L\psi_n$ of $A$ in such a way that $\psi_n$ and
$L\psi_n$ have the same eigenvalue, \cite{p4}. This in particular
implies that $L$ commutes with $A$.
Suppose $A$ has a complex-conjugate pair of nonreal eigenvalues
$\alpha_{\pm\nu}$, let $\psi_{\pm\nu}$ be a corresponding pair of
eigenvectors, $\eta'\in\fM_A$ be an arbitrary pseudo-metric operator
associated with $A$, $L$ be an everywhere-defined, bounded,
invertible operator commuting with $A$ and satisfying
(\ref{general-eta}), and $\xi:=L^{-1}\psi_\nu$. Then $\br
\xi|\eta'\xi\kt=\br\psi_\nu|\eta\,\psi_\nu\kt=0$. Because $L$ is
invertible, $\xi\neq 0$; this shows that $\eta'$ is not a
positive-definite operator. Similarly suppose that one of the signs
$\sigma_{n_0}$ appearing in (\ref{eta-sp-decom}) is negative and let
$\zeta:=L^{-1}\psi_{n_0}$. Then
$\br\zeta|\eta'\zeta\kt=\br\psi_{n_0}|\eta\,\psi_{n_0}\kt=
\sigma_{n_0}=-1$, and again $\eta'$ fails to be positive-definite.
These observations show that in order for $\eta'$ to be a
positive-definite operator the spectrum of $A$ must be real and all
the signs $\sigma_{n_0}$ appearing in (\ref{eta-sp-decom}) must be
positive. In this case, we have $\eta'=\eta'_+$, where
\be
\eta'_+:=L^\dagger\eta_+\,L,~~~~~~~~~~
\eta_+=\sum_{n=1}^N |\phi_n\kt\br\phi_n|.
\label{eta-old-positive}
\ee
The choice of the signs $\sigma_{n_0}$ is not dictated by the
operator $A$ itself. Therefore it is the reality of the spectrum of
$A$ that ensures the existence of positive-definite elements of
$\fM_A$. By definition, such elements are metric operators belonging
to $\fM^+_A$. Their general form is given by
(\ref{eta-old-positive}).
\subsection{Symmetries of Pseudo-Hermitian Hamiltonians}
\label{sec-sym}
Consider a pseudo-Hermitian operator $A:\cH\to\cH$ and let $\eta_1$
and $\eta_2$ be a pair of associated pseudo-metric operators;
$\eta_1 A \eta_1^{-1}=A^\dagger=\eta_2 A \eta_2^{-1}$. Then it is a
trivial exercise to show that the invertible linear operator
$S:=\eta_2^{-1}\eta_1$ commutes with $A$, \cite{p1}. If we identify
$A$ with the Hamiltonian of a quantum system, which we shall do in
what follows, $S$ represents a linear symmetry of $A$.
Next, consider a diagonalizable pseudo-Hermitian operator $A$ with a
discrete spectrum, let $\psi_n$ be eigenvectors of $A$, and let
$\{(\psi_n,\phi_n)\}$ be the complete biorthonormal extension of
$\{\psi_n\}$ so that $A$ admits a spectral representation of the
form (\ref{ph-sp-decom}):
\be
A=\sum_{n_0=1}^{N_0} a_{n_0} |\psi_{n_0}\kt\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left(\alpha_{\nu} |\psi_{\nu}\kt\br\phi_{\nu}|+
\alpha_{\nu}^* |\psi_{-\nu}\kt\br\phi_{-\nu}|\right).
\label{ph-sp-decom-2}
\ee
Moreover, for every sequence $\sigma=(\sigma_{n_0})$ of signs
($\sigma_{n_0}\in\{-1,+1\}$), let
\bea
\eta_{ \sigma }&:=&\sum_{n_0=1}^{N_0} \sigma_{n_0}\,|\phi_{n_0}\kt\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\phi_{\nu}\kt\br\phi_{-\nu}|+
|\phi_{-\nu}\kt\br\phi_{\nu}|\right),
\label{eta-sp-decom-2}\\
\cC_{ \sigma }&:=&\sum_{n_0=1}^{N_0} \sigma_{n_0} |\psi_{n_0}\kt\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\psi_{\nu}\kt\br\phi_{\nu}|+
|\psi_{-\nu}\kt\br\phi_{-\nu}|\right),
\label{gen-C=}
\eea
and $\eta_1,\fS:\cH\to\cH$ be defined by
\bea
\eta_1&:=&\sum_{n_0=1}^{N_0} |\phi_{n_0}\kt\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\phi_{\nu}\kt\br\phi_{-\nu}|+
|\phi_{-\nu}\kt\br\phi_{\nu}|\right),
\label{eta-1=}\\
\fS&:=&\sum_{n_0=1}^{N_0} |\psi_{n_0}\kt\star\br\phi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\psi_{\nu}\kt\star\br\phi_{\nu}|+
|\psi_{-\nu}\kt\star\br\phi_{-\nu}|\right),
\label{gen-S=}
\eea
where for every $\psi,\phi\in\cH$ the symbol $|\psi\kt\star\br\phi|$
denotes the following antilinear operator acting in $\cH$.
\be
|\psi\kt\star\br\phi|\:\zeta:=\br\zeta|\phi\kt\:\psi=
\br\phi|\zeta\kt^*\,\psi,~~~~~~~~
\mbox{for all $\zeta\in\cH$}.
\label{star=}
\ee
As we discussed in Section~\ref{sec-spec-prop}, $\eta_\sigma$ and
$\eta_1$ are pseudo-metric operators associated with $A$. The
operators $\cC_\sigma$ and $\fS$ have the following remarkable
properties \cite{jmp-2003}.
\begin{itemize}
\item In view of the fact that
$\eta_1^{-1}=\sum_{n_0=1}^{N_0} |\psi_{n_0}\kt\br\psi_{n_0}|+
\sum_{\nu=1}^{\cun} \left( |\psi_{\nu}\kt\br\psi_{-\nu}|+
|\psi_{-\nu}\kt\br\psi_{\nu}|\right)$, we have
\be
\cC_\sigma=\eta_1^{-1}\eta_\sigma.
\label{C-sigma}
\ee
Therefore, $\cC_\sigma$ is a linear invertible operator that
generates a symmetry of $A$:
\be
[\cC_{\sigma},A]=0.
\label{C-sigma-sym}
\ee
\item Using (\ref{star=}) and the biorthonormality and
completeness properties of $\{(\psi_n,\phi_n)\}$, we can check that
$\fS$ is an invertible antilinear operator that also commutes with
$A$,
\be
[\fS,A]=0.
\label{gen-pt-sym}
\ee
\item $\cC_\sigma$ and $\fS$ are commuting involutions, i.e.,
\be
[\cC_\sigma,\fS]=0,~~~~~~~~~~\cC_\sigma^2=\fS^2=I.
\label{involutions}
\ee
\end{itemize}
In summary, we have constructed an involutive antilinear symmetry
generator $\fS$ and a class of involutive linear symmetry generators
$\cC_\sigma$ that commute with $\fS$.
It turns out that if a given diagonalizable operator $A$ with a
discrete spectrum commutes with an invertible antilinear operator,
then $A$ is necessarily pseudo-Hermitian. Therefore, for such
operators pseudo-Hermiticity and the presence of (involutive)
antilinear symmetries are equivalent conditions
\cite{p3,solombrino-2002}. Furthermore, each of these conditions is
also equivalent to the \emph{pseudo-reality} of the spectrum of $A$.
The latter means that the complex-conjugate of every eigenvalue of
$A$ is an eigenvalue with the same multiplicity
\cite{p3,solombrino-2002}. These observations are the key for
understanding the role of ${\cal PT}$ symmetry in the context of our
study. They admit extensions for a certain class of
non-diagonalizable operators with discrete spectrum
\cite{jmp-2002d,scolarici-solombrino-2003,sac-jpa-2006,cgs-jpa-2007}
and some operators with continuous spectrum
\cite{cqg,jmp-2005,jpa-2006b}.
The reality of the spectrum of $A$ is the necessary and sufficient
condition for the existence of an associated metric operator and the
corresponding positive-definite inner product that renders $A$
Hermitian \cite{p2}. For the case that the spectrum of $A$ is real,
the expressions for $\eta_{\sigma},\eta_1,\cC_\sigma,$ and $\fS$
simplify:
\bea
\eta_{ \sigma }&:=&\sum_{n=1}^{N} \sigma_{n }\,
|\phi_{n }\kt\br\phi_{n}|,~~~~~~~~~
\eta_1=\sum_{n=1 }^{N } |\phi_{n }\kt\br\phi_{n }|=\eta_+,
\label{etas-real}\\
\cC_{ \sigma }&=&\sum_{n=1 }^{N } \sigma_{n }
|\psi_{n }\kt\br\phi_{n }|,~~~~~~~~~
\fS=\sum_{n=1 }^{N } |\psi_{n }\kt\star\br\phi_{n }|,
\label{gen-C-S-real}
\eea
and we find
\be
\cC_\sigma\psi_n=\sigma_n\psi_n,~~~~~~~~\fS\psi_n=\psi_n.
\label{sec-6-exact-sym}
\ee
Hence, $\cC_\sigma$ and $\fS$ generate exact symmetries of $A$.
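A minimal Python/NumPy sketch (for an illustrative $2\times 2$
matrix with real spectrum; only the linear generator $\cC_\sigma$ is
constructed, since $\fS$ is antilinear) confirms the defining
properties of $\cC_\sigma$:

\begin{verbatim}
import numpy as np

A = np.array([[2.5, 1.5], [-1.5, -2.5]])   # eigenvalues +2 and -2
evals, S = np.linalg.eig(A)
Phi = np.linalg.inv(S).conj().T            # biorthonormal phi_n
sigma = [1.0, -1.0]                        # an arbitrary choice of signs

C = sum(s * np.outer(S[:, n], Phi[:, n].conj())
        for n, s in enumerate(sigma))
assert np.allclose(C @ C, np.eye(2))       # C_sigma is an involution
assert np.allclose(C @ A, A @ C)           # C_sigma commutes with A
\end{verbatim}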
In order to make the meaning of $\cC_\sigma$ more transparent, we
use the basis expansion of an arbitrary $\psi\in\cH$, namely
$\psi=\sum_{n=1}^N c_n\psi_n$, to compute
\be
\cC_\sigma\psi=\sum_{n=1}^Nc_n~\cC_\sigma\psi_n=
\sum_{n=1}^N \sigma_n c_n\psi_n=\psi_+-\psi_-,
\label{c-sigma-action}
\ee
where $\psi_\pm:=\sum_{n\in\fN_\pm}c_n\psi_n$ and $
\fN_\pm:=\Big\{n\in\{1,2,3,\cdots,N\}~\big|~\sigma_n=\pm 1\Big\}$.
According to (\ref{c-sigma-action}), for every $\psi\in{\cal H}$
there are unique state-vectors $\psi_\pm$ belonging to the
eigenspaces ${\cal H}_\pm:=\{\psi\in{\cal
H}~|~\cC_\sigma\psi=\pm\psi~\}$ of $\cC_\sigma$ such that
$\psi=\psi_++\psi_-$. This identifies $\cC_\sigma$ with a ($\Z_2$-)
grading operator for the Hilbert space.
If $\sigma_n=\pm 1$ for all $n$, we find $\cC_\sigma=\pm I$. In the
following we consider the nontrivial cases: $\cC_\sigma\neq\pm I$. Then
$\cH_\pm$ are proper subspaces of $\cH$ satisfying
\be
\cH=\cH_+\oplus\cH_-,
\label{oplus}
\ee
$\cC_\sigma$ is a genuine grading operator, both $\eta_\sigma$ and
$-\eta_\sigma$ fail to be positive-definite, and
$\br\cdot|\cdot\kt_{\eta_\sigma}$ is indefinite. Furthermore, in
view of (\ref{c-sigma-action}), the operators
$
\Pi_\pm:=\frac{1}{2}~(I\pm\cC_\sigma),
$
satisfy $\Pi_\pm\psi=\psi_\pm$, i.e., they are projection operators
onto $\cH_\pm$.
Next, consider computing $\br\psi_\fg|\phi_\fh\kt_{\eta_\sigma}$ for
arbitrary $\fg,\fh\in\{-,+\}$ and $\psi_\pm,\phi_\pm\in\cH_\pm$.
Using the basis expansion of $\psi_\pm$ and $\phi_\pm$, i.e.,
$\phi_\pm=\sum_{n\in\fN_\pm}d_n\psi_n$ and
$\psi_\pm=\sum_{n\in\fN_\pm}c_n\psi_n$, and (\ref{etas-real}), we
have $\eta_\sigma\phi_\pm=\pm\sum_{n\in\fN_\pm}d_n\phi_n$ and
\be
\br\psi_\fg|\phi_\fh\kt_{\eta_\sigma}=
\br\psi_\fg|\eta_\sigma\phi_\fh\kt=
\fg\:\delta_{\fg,\fh}\sum_{n\in\fN_\fg}c_n^{*}d_n.
\label{eq-123z}
\ee
Therefore, with respect to the indefinite inner product
$\br\cdot|\cdot\kt_{\eta_\sigma}$, the subspaces $\cH_+$ and $\cH_-$
are orthogonal, and (\ref{oplus}) is an orthogonal direct sum
decomposition.
Another straightforward implication of (\ref{eq-123z}) is that, for
all $\psi,\phi\in{\cal H}$,
\be
\br\psi|\phi\kt_{\eta_\sigma}=\br\psi_+|\phi_+\kt_{\eta_\sigma}+
\br\psi_-|\phi_-\kt_{\eta_\sigma}=
\sum_{n\in\fN_+}c_n^{*}d_n
-\sum_{n\in\fN_-}c_n^{*}d_n=
\br\psi_+|\phi_+\kt_{\eta_+}-\br\psi_-|\phi_-\kt_{\eta_+},
\label{sec6-eq1}
\ee
where $\psi_\pm:=\Pi_\pm\psi$, $\phi_\pm:=\Pi_\pm\phi$, $c_n:=
\br\phi_n|\psi\kt$ and $d_n:= \br\phi_n|\phi\kt$. Similarly, we have
\be
\br\psi|\cC_\sigma\phi\kt_{\eta_\sigma}=
\br\psi_+|\phi_+\kt_{\eta_\sigma}-
\br\psi_-|\phi_-\kt_{\eta_\sigma}=
\sum_{n\in\fN_+}c_n^{*}d_n
+\sum_{n\in\fN_-}c_n^{*}d_n=
\sum_{n=1}^N c_n^{*}d_n=\br\psi|\phi\kt_{\eta_+}.
\label{sec6-eq2}
\ee
This calculation shows that the positive-definite inner product
$\br\cdot|\cdot\kt_{\eta_+}$ can be expressed in terms of the
indefinite inner product $\br\cdot|\cdot\kt_{\eta_\sigma}$ and the
grading operator $\cC_\sigma$ according to
\be
\br\cdot|\cdot\kt_{\eta_+}=\br\cdot|\cC_\sigma
\cdot\kt_{\eta_\sigma}.
\label{sec6-inn=inn}
\ee
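In operator form, (\ref{sec6-inn=inn}) amounts to the identity
$\eta_+=\eta_\sigma\,\cC_\sigma$, which follows directly from
(\ref{etas-real}), (\ref{gen-C-S-real}), and the biorthonormality of
$\{(\psi_n,\phi_n)\}$:
\be
\eta_\sigma\,\cC_\sigma=\sum_{m,n=1}^N\sigma_m\sigma_n\,
|\phi_m\kt\br\phi_m|\psi_n\kt\br\phi_n|=
\sum_{n=1}^N\sigma_n^2\,|\phi_n\kt\br\phi_n|=\eta_+.
\ee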
Conversely, one can use (\ref{sec6-inn=inn}) to define a
positive-definite inner product that makes $H$ Hermitian. The latter
scheme may be traced back to a similar construction developed in the
1950's in the context of indefinite-metric quantum theories
\cite{nevanlinna}. See \cite{nagy,nakanishi} for reviews. In the
context of ${\cal PT}$-symmetric quantum mechanics, it was proposed
(with a specific choice for the sequence $\sigma$) in \cite{bbj},
where the resulting inner product was called the ${\cal CPT}$-inner
product.
To see the connection with the treatment of \cite{bbj}, consider the
case that $A$, which is now viewed as the Hamiltonian operator for a
quantum system, is symmetric, i.e.,
$
A^T:={\cal T}A^\dagger{\cal T}=A,
$
where $\cT$ is the time-reversal operator.\footnote{Recall that
$\cT$ is an antilinear Hermitian (and unitary) involution
\cite{wigner-1960,messiah}.} Then given the spectral representation
of $A$ and $A^\dagger$, we can easily choose a biorthonormal system
$\{(\psi_n,\phi_n)\}$ such that
\be
\phi_n=\sigma_n\cT\psi_n.
\label{sec6-e11}
\ee
Using this relation together with the biorthonormality and
completeness properties of $\{(\psi_n,\phi_n)\}$, we can obtain the
following spectral representation of $\cT$.
\be
\cT=\sum_{n=1}^N\sigma_n|\phi_n\kt\star\br\phi_n|.
\label{sec6-e12}
\ee
In view of Eqs.~(\ref{etas-real}), (\ref{gen-C-S-real}),
(\ref{sec6-e12}), and $\cT^2=I$, we have
\bea
&&\br\psi_m|\psi_n\kt=\sigma_m\sigma_n\br\phi_n|\phi_m\kt,
\label{sec6-e13}\\
&&\fS=\cT\eta_\sigma.
\label{sec6-e14}
\eea
Clearly, we could use (\ref{sec6-e12}) to define an invertible
antilinear operator satisfying (\ref{sec6-e11}) for an arbitrary
possibly non-symmetric $A$, namely
\be
\cT_\sigma:=\sum_{n=1}^N\sigma_n|\phi_n\kt\star\br\phi_n|.
\label{sec6-e12n}
\ee
But in this more general case, (\ref{sec6-e13}) may not hold and
$\cT_\sigma$ may not be an involution.
In fact, condition (\ref{sec6-e13}) is not only a necessary
condition for $\cT_\sigma^2=I$ but it is also sufficient
\cite{jmp-2003}. An analogous necessary and sufficient condition for
$\eta_\sigma^2=I$ is \cite{jmp-2003}
\be
\br\psi_m|\psi_n\kt=\sigma_m\sigma_n\br\phi_m|\phi_n\kt,
\label{sec6-e15}
\ee
If both (\ref{sec6-e13}) and (\ref{sec6-e15}) hold,
\be
\cT_\sigma^2=\eta_\sigma^2=I,
\label{sec6-e16}
\ee
and as a result
\be
\fS=\cT_\sigma\eta_\sigma.
\label{S=TP}
\ee
By virtue of this relation and $\fS^2=I$,
\be
[\eta_\sigma,\cT_\sigma]=0.
\label{sec6-e17}
\ee
Therefore,
\be
\fS=\eta_\sigma \cT_\sigma.
\label{S=PT}
\ee
For the symmetric and $\cP\cT$-symmetric Hamiltonians
(\ref{pt-sym-nu}) that are considered in \cite{bbj}, we can find a
biorthonormal system $\{(\psi_n,\phi_n)\}$ satisfying
(\ref{sec6-e13}) -- (\ref{sec6-e15}), \cite{jmp-2003}. Moreover,
setting $\sigma_n:=(-1)^{n+1}$ for all $n\in\Z^+$, we have
\be
\eta_\sigma=\cP.
\label{sec6-e18}
\ee
Therefore, in light of (\ref{sec6-e14}) and (\ref{sec6-e17}), the
antilinear symmetry generator $\fS$ coincides with $\cP\cT$,
\be
\fS=\cP\cT,
\label{S=PTn}
\ee
and the linear symmetry generator $\cC_\sigma$ is the
``charge-conjugation'' operator $\cC$ of \cite{bbj}. In view of
(\ref{C-sigma}), (\ref{gen-pt-sym}), (\ref{etas-real}),
(\ref{involutions}), and (\ref{sec6-e18}), it satisfies
\be
\cC^2=I,~~~~~~[\cC,A]=[\cC,{\cal PT}]=0,
~~~~~~\cC=\eta_+^{-1}\cP.
\label{sec6-C-op}
\ee
Furthermore, because in the position representation of the state
vectors $(\cT\psi)(x)=\psi(x)^*$, the positive-definite inner
product (\ref{sec6-inn=inn}) coincides with the $\cC\cP\cT$-inner
product.
This completes the demonstration that the $\cC\cP\cT$-inner product
is an example of the positive-definite inner products
$\br\cdot|\cdot\kt_\etap$ that we explored earlier.
We conclude this subsection by noting that although in general we
can introduce a pair of \emph{generalized parity and time-reversal
operators}, $\eta_\sigma$ and $\cT_\sigma$, they may fail to be
involutions.
\subsection{A Two-Level Toy Model} \label{sec-two-level}
In this subsection we demonstrate the application of our general
results in the study of a simple two-level model which, as we will
see in Subsection~\ref{rqm-cq-qft}, admits physically important
infinite-dimensional generalizations~\cite{cqg,ijmpa-2006,ap}.
Let ${\cal H}$ be the (reference) Hilbert space obtained by endowing
$\C^2$ with the Euclidean inner product and $\{e_1,e_2\}$ be the
standard basis of $\C^2$, i.e., $e_1:=${\scriptsize$
\left(\begin{array}{c}1\\0\end{array}\right)$}, $e_2:=${\scriptsize$
\left(\begin{array}{c}0\\1\end{array}\right)$}. Then we can
represent every linear operator $K$ acting in ${\cal H}$ in the
basis $\{e_1,e_2\}$ by a $2\times 2$ matrix which we denote by
$\underline{K}$; the entries of $\underline{K}$ have the form
$\underline{K}_{ij}:=\br e_i|Ke_j\kt$ where $i,j\in\{1,2\}$.
Now, consider a linear operator $A:\C^2\to\C^2$ represented by
\be
\underbar A:=\frac{1}{2}\,\left(\begin{array}{cc}
D+1 & D-1\\
-D+1 & -D-1\end{array}\right),
\label{matrix-A=}
\ee
where $D$ is a real constant. $A:{\cal H}\to{\cal H}$ is a Hermitian
operator if and only if $D=1$. We can easily solve the eigenvalue
problem for $A$. Its eigenvalues $a_n$ and eigenvectors $\psi_n$
have the form
\be
a_1=-a_2= D^{1/2},~~~~~~~
\psi_1=c_1\left(\begin{array}{c}
1+D^{1/2}\\
1-D^{1/2}\end{array}\right),~~~~~
\psi_2=c_2\left(\begin{array}{c}
1-D^{1/2}\\
1+D^{1/2}\end{array}\right),
\label{eg-va-A=}
\ee
where $c_1,c_2$ are arbitrary nonzero complex numbers. Clearly, for
$D=0$, $a_1=a_2=0$, the eigenvectors become proportional, and $A$ is
not diagonalizable. For $D\neq 0$, $A$ has two distinct eigenvalues
and $\{\psi_1,\psi_2\}$ forms a basis of $\C^2$. This shows that
$D=0$ marks an exceptional spectral point \cite{kato,heiss}; for
$D>0$ the eigenvalues are real, and for $D<0$ they are imaginary.
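These spectral phases are easy to see numerically. A minimal
Python/NumPy sketch (with a few illustrative values of $D$) reads:

\begin{verbatim}
import numpy as np

def A_matrix(D):
    return 0.5 * np.array([[D + 1.0, D - 1.0], [-D + 1.0, -D - 1.0]])

for D in (4.0, -1.0, 0.0):
    print(D, np.sort_complex(np.linalg.eigvals(A_matrix(D))))
# D =  4.0: eigenvalues -2 and +2    (real, as expected for D > 0)
# D = -1.0: eigenvalues -i and +i    (imaginary, as expected for D < 0)
# D =  0.0: the single eigenvalue 0  (the exceptional point)
\end{verbatim}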
It is an easy exercise to show that $A$ is
$\Sigma_3$-pseudo-Hermitian for all $D\in\R$, where
$\Sigma_3:\C^2\to\C^2$ is the linear operator represented in the
standard basis by the Pauli matrix $\psigma_3:=${\scriptsize$
\left(\begin{array}{cc}1&0\\0&-1\end{array}\right)$}. Hence $\fM_A$
includes $\Sigma_3$. This is consistent with the fact that $\Sigma_3$ is not a
positive-definite operator, because otherwise $A$ could not have
imaginary eigenvalues. According to our general results, for $D>0$,
$\fM_A$ must include positive-definite operators. To construct
these, we first obtain the biorthonormal basis $\{\phi_1,\phi_2\}$
associated with $\{\psi_1,\psi_2\}$. For $D>0$, the basis vectors
$\phi_n$ are given by
\be
\phi_1=(4c_1^*)^{-1}\left(\begin{array}{c}
1+D^{-1/2}\\
1-D^{-1/2}\end{array}\right),~~~~~
\phi_2=(4c_2^*)^{-1}\left(\begin{array}{c}
1-D^{-1/2}\\
1+D^{-1/2}\end{array}\right).
\label{phi-eg-va-A=}
\ee
Inserting these relations in (\ref{eta-sp-decom}) we find the
following expression for the matrix representation of the most
general pseudo-metric operator $\eta\in\fM_A$.
\be
\mbox{\underbar{$\eta$}}=r_1\sigma_1
\left(\begin{array}{cc}
(1+D^{-1/2})^2 & 1-D^{-1}\\
1-D^{-1} & (1-D^{-1/2})^2\end{array}\right)+
r_2 \sigma_2
\left(\begin{array}{cc}
(1-D^{-1/2})^2 & 1-D^{-1}\\
1-D^{-1} & (1+D^{-1/2})^2\end{array}\right),
\label{gen-eta-example}
\ee
where $r_1:=|4c_1|^{-2}$ and $r_2:=|4c_2|^{-2}$ are arbitrary
positive real numbers and $\sigma_1,\sigma_2$ are arbitrary signs.
The choice $\sigma_1=-\sigma_2=1$ and $r_1=r_2=D^{1/2}/4$ yields
$\eta=\Sigma_3$. The choice $\sigma_1=\sigma_2=1$ yields the form of
the most general positive-definite element of $\fM_A$. A
particularly simple example of the latter is obtained by taking
$c_1=c_2=D^{-1/4}/2$ which implies $r_1=r_2=D^{1/2}/4$. It has the
form
\be
\mbox{\underbar{$\eta$}}_+=\frac{1}{2}\,
\left(\begin{array}{cc}
D^{1/2}+D^{-1/2} & D^{1/2}-D^{-1/2}\\
D^{1/2}-D^{-1/2} & D^{1/2}+D^{-1/2}\end{array}\right).
\label{eta-plus-example}
\ee
We can simplify this expression by introducing \footnote{This was
pointed out to me by Professor Haluk Beker.} $\theta:=\frac{1}{2}\ln
D$, and using the fact that the Pauli matrix
$\psigma_1:=${\scriptsize$
\left(\begin{array}{cc}0&1\\1&0\end{array}\right)$} squares to the
identity matrix $\underbar{\mbox{$I$}}$. This yields
\be
\mbox{\underbar{$\eta$}}_+=
\left(\begin{array}{cc}
\cosh\theta & \sinh\theta\\
\sinh\theta &
\cosh\theta\end{array}\right)=
\cosh\theta\, \underbar{\mbox{$I$}}+\sinh\theta\,\psigma_1=
e^{\theta\,\psigma_1}.
\label{eta-plus-example-2}
\ee
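For a fixed illustrative value of $D$, the defining properties of
this metric operator are readily confirmed numerically
(Python/NumPy):

\begin{verbatim}
import numpy as np

D = 4.0                                   # an illustrative value of D > 0
theta = 0.5 * np.log(D)
A = 0.5 * np.array([[D + 1, D - 1], [-D + 1, -D - 1]])
eta_plus = np.array([[np.cosh(theta), np.sinh(theta)],
                     [np.sinh(theta), np.cosh(theta)]])  # (eta-plus-example-2)

assert np.all(np.linalg.eigvalsh(eta_plus) > 0)          # positive-definite
assert np.allclose(eta_plus @ A @ np.linalg.inv(eta_plus), A.conj().T)
\end{verbatim}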
The metric operator represented by (\ref{eta-plus-example}) and
(\ref{eta-plus-example-2}) defines the following (positive-definite)
inner product on $\C^2$ with respect to which $A$ is a Hermitian
operator.
\be
\br\vec z|\vec w\kt_{_{\eta_+}}:=
\br\vec z|\eta_+\vec w\kt=
(z_1^*w_1+z_2^*w_2)\cosh\theta+(z_1^*w_2+z_2^*w_1)\sinh\theta,
\label{inn-prod-example}
\ee
where $\vec z=(z_1,z_2)^T,\vec w=(w_1,w_2)^T\in\C^2$ are arbitrary.
The inner product (\ref{inn-prod-example}) has a more complicated
form than both the reference (Euclidean) inner product, $\br\vec
z|\vec w\kt=z_1^*w_1+z_2^*w_2$, and the indefinite inner product
defined by $\Sigma_3$,
\be
\br\vec z|\vec w\kt_{_{\Sigma_3}}:=
\br\vec z|\Sigma_3\vec w\kt=z_1^*w_1-z_2^*w_2.
\label{sigam3-inn-prod}
\ee
Furthermore, unlike $\br\cdot|\cdot\kt$ and
$\br\cdot|\cdot\kt_{_{\Sigma_3}}$, the inner product $\br\cdot
|\cdot\kt_{_{\eta_+}}$ depends on $\theta$ and consequently $D$. In
particular, as $D\to 0$ it degenerates. In fact, a quick inspection
of Eq.~(\ref{gen-eta-example}) shows that every $D$-independent
pseudo-metric operator is proportional to $\Sigma_3$ and hence
necessarily indefinite; $\pm\Sigma_3$ are the only $D$-independent
elements of $\fU_A$.
Now, suppose that $A$ is the Hamiltonian operator of a two-level
quantum system. If we employ the prescription provided by the
indefinite-metric quantum theories, we should endow $\C^2$ with an
indefinite inner-product from the outset. The simplest choice that
is historically adopted and viewed, according to the above-mentioned
argument due to Pauli, as being the unique choice is
$\eta=\Sigma_3$. In this case the system has a physical state
corresponding to the state-vector $e_1$ and a hypothetical state or
ghost corresponding to $e_2$. Unfortunately, the subspace of
physical state-vectors, i.e., the span of $\{e_1\}$, is not invariant
under the action of $A$ unless $D=1$. Hence, for $D\neq 1$, such an
indefinite-metric quantum theory suffers from interpretational
problems and is inconsistent. In contrast, pseudo-Hermitian quantum
mechanics provides a consistent description of a unitary quantum
theory based on the Hamiltonian $A$. This is done by endowing $\C^2$
with the (positive-definite) inner product
$\br\cdot|\cdot\kt_{\eta_+'}$, where $\eta_+'$ is given by the
right-hand side of (\ref{gen-eta-example}) with
$\sigma_1=\sigma_2=1$. It involves the free parameters $r_1$ and
$r_2$ that can be fixed from the outset or left as degrees of
freedom of the formulation of the theory. We can represent this most
general metric operator by
\be
\mbox{\underbar{$\eta'$}}_+= r
\left(\begin{array}{cc}
\cosh\theta+s & \sinh\theta\\
\sinh\theta & \cosh\theta-s
\end{array}\right),
\label{gen-eta-plus-example}
\ee
where $r:=2(r_1+r_2)D^{-1/2}\in\R^+$ and
$s:=\frac{r_1-r_2}{r_1+r_2}\in (-1,1)$ are arbitrary. We can use
(\ref{gen-eta-plus-example}) to determine the most general inner
product on $\C^2$ that makes $A$ Hermitian. This is given by
\be
\br\vec z|\vec w\kt_{_{\eta'_+}}:=
\br\vec z|\eta'_+\vec w\kt=r\left[\br\vec z|\vec w\kt_{_{\eta_+}}
+s\, \br\vec z|\vec w\kt_{_{\Sigma_3}}\right],
\label{inn-prod-example-general}
\ee
where $\br\cdot|\cdot\kt_{_{\eta_+}}$ and
$\br\cdot|\cdot\kt_{_{\Sigma_3}}$ are respectively defined by
(\ref{inn-prod-example}) and (\ref{sigam3-inn-prod}).
We can relate $\eta_+'$ to $\eta_+$ using a linear operator
$L:\C^2\to\C^2$ commuting with $A$ via $\eta_+'=L^\dagger\eta_+L$.
This operator has the following general form $L=2
D^{-1/4}\left[\sqrt{r_1}e^{i\varphi_1}|\psi_1\kt\br\phi_1|+
\sqrt{r_2}e^{i\varphi_2}|\psi_2\kt\br\phi_2|\right]$,
where $\varphi_1,\varphi_2\in[0,2\pi)$ are arbitrary. We can
represent it by
\be
\mbox{\underbar{$L$}}=
\left(\begin{array}{cc}
\lambda_-\cosh\theta+\lambda_+ & \lambda_-\sinh\theta\\
-\lambda_-\sinh\theta & -\lambda_-\cosh\theta+\lambda_+
\end{array}\right),
\label{B-matrix-rep=}
\ee
where $\lambda_\pm:=D^{-1/4}(\sqrt r_1 e^{i\varphi_1}\pm \sqrt r_2
e^{i\varphi_2})$. With the help of (\ref{eta-plus-example-2}),
(\ref{gen-eta-plus-example}), and (\ref{B-matrix-rep=}), we have
checked that indeed
$\mbox{\underbar{$\eta'$}$_+=$\underbar{$L$}$^\dagger$
\underbar{$\eta$}$_+$\underbar{$L$}}$.
Next, we wish to establish the quasi-Hermiticity of $A$ for $D>0$,
i.e., show that it can be expressed as $A=B\mbox{\large$a$}B^{-1}$
for an invertible operator $B:\C^2\to\C^2$ and a Hermitian operator
$\mbox{\large$a$}:{\cal H}\to{\cal H}$. As we explained in
Section~\ref{sec-ph-metric}, we can identify $B$ with the inverse of
the positive square root of a metric operator belonging to
$\fM_A^+$. A convenient choice is $B=\eta_+^{-1/2}$, for in light of
(\ref{eta-plus-example-2}) we have
\[\mbox{\underbar{$B$}}^{\pm 1}=e^{\mp\frac{\theta}{2}\,
\psigma_1}=
\left(\begin{array}{cc}
\cosh\frac{\theta}{2} & \mp\sinh\frac{\theta}{2}\\
\mp\sinh\frac{\theta}{2} &
\cosh\frac{\theta}{2}\end{array}\right).\]
This leads to the following remarkably simple expression for the
matrix representation of {\large$a$}.
\be
\mbox{\large\underbar{$a$}}=
\mbox{\underbar{$B$}}^{-1}\,
\mbox{\underbar{$A$}}\,
\mbox{\underbar{$B$}}=
\left(\begin{array}{cc}
D^{1/2} & 0\\
0 & -D^{1/2}\end{array}\right)= D^{1/2}\psigma_3.
\label{Hermitian-A=}
\ee
Hence, $\mbox{\large$a$}=D^{1/2}\Sigma_3$.
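This diagonalization is again easy to confirm numerically for an
illustrative value of $D$ (Python/NumPy):

\begin{verbatim}
import numpy as np

D = 4.0
theta = 0.5 * np.log(D)
A = 0.5 * np.array([[D + 1, D - 1], [-D + 1, -D - 1]])
sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]])

B = np.cosh(theta / 2) * np.eye(2) - np.sinh(theta / 2) * sigma1
# B = eta_+^{-1/2}
a = np.linalg.inv(B) @ A @ B
print(np.round(a, 12))   # D^{1/2} sigma_3, i.e. diag(2, -2) for D = 4
\end{verbatim}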
Because every Hermitian operator $o:{\cal H}\to{\cal H}$ is a linear
combination, with real coefficients, of the identity operator $I$
and the operators $\Sigma_1$, $\Sigma_2$, and $\Sigma_3$ that are
respectively represented by Pauli matrices $\psigma_1$, $\psigma_2$,
and $\psigma_3$, we can express every physical observable $O:{\cal
H}_{_{\eta_+}}\to{\cal H}_{_{\eta_+}}$ as
$O=\fa_0 I+\sum_{j=1}^3 \fa_j\,S_j$,
where $\fa_0,\fa_1,\fa_2,\fa_3$ are some real numbers and
$S_j:=B\Sigma_jB^{-1}=\eta_+^{-1/2}\Sigma_j\,\eta_+^{1/2}$ for all
$j\in\{1,2,3\}$. The observables $S_j$ are represented by
$\mbox{\underbar{$S_1$}}=\psigma_1$, $
\mbox{\underbar{$S_2$}}=
\cosh\theta\,\psigma_2-i\sinh\theta\,\psigma_3$, and
$\mbox{\underbar{$S_3$}}=i\sinh\theta\,\psigma_2+
\cosh\theta\,\psigma_3$.
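A quick numerical check of (\ref{Hermitian-A=}) and of the above
representations of the observables $S_j$ can be performed along the
same lines. The sketch below assumes
$\mbox{\underbar{$A$}}=D^{1/2}(\cosh\theta\,\psigma_3+i\sinh\theta\,\psigma_2)$,
which is the matrix representation consistent with
(\ref{Hermitian-A=}) and the form of $\mbox{\underbar{$B$}}$ given
above; the numerical values of $\theta$ and $D$ are arbitrary.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

theta, D = 0.7, 2.3                      # arbitrary sample values
s1 = np.array([[0., 1.], [1., 0.]])
s2 = np.array([[0., -1j], [1j, 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

A = np.sqrt(D) * (np.cosh(theta)*s3 + 1j*np.sinh(theta)*s2)  # assumed rep. of A
B = expm(-0.5 * theta * s1)              # B = eta_+^{-1/2}
Binv = expm(0.5 * theta * s1)
eta_plus = expm(theta * s1)

assert np.allclose(A @ A, D * np.eye(2))                             # A^2 = D I
assert np.allclose(eta_plus @ A @ np.linalg.inv(eta_plus), A.conj().T)  # pseudo-Hermiticity
assert np.allclose(Binv @ A @ B, np.sqrt(D) * s3)                    # eq. (Hermitian-A=)

S = [B @ sj @ Binv for sj in (s1, s2, s3)]   # observables S_j = B Sigma_j B^{-1}
assert np.allclose(S[0], s1)
assert np.allclose(S[1], np.cosh(theta)*s2 - 1j*np.sinh(theta)*s3)
assert np.allclose(S[2], 1j*np.sinh(theta)*s2 + np.cosh(theta)*s3)
\end{verbatim}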
Next, we repeat the calculation of the Hermitian Hamiltonian and the
physical observables for the case that we choose the general metric
operator $\eta_+'$ to construct the physical Hilbert space, i.e.,
set ${\cal H}_{\rm phys}={\cal H}_{_{\eta'_+}}$. The matrix
representation of the Hermitian Hamiltonian {\large$a'$}:${\cal
H}\to{\cal H}$ is then given by
\be
\mbox{\large\underbar{$a'$}}=
\mbox{\underbar{$\eta_+'$}}^{1/2}\,
\mbox{\underbar{$A$}}\,
\mbox{\underbar{$\eta_+'$}}^{-1/2}=D^{1/2}
\left(\begin{array}{cc}
u(\theta,s) & v(\theta,s)\\
v(\theta,s) & -u(\theta,s)\end{array}\right)=
D^{1/2}[v(\theta,s)\,\psigma_1+u(\theta,s)\,\psigma_3],
\label{Hermitian-A-prime=}
\ee
where
$u(\theta,s):=\frac{\sqrt{1-s^2}\,\sinh^2\theta+s^2\cosh\theta}{
\sinh^2\theta+s^2}$ and $
v(\theta,s):=\frac{s(\cosh\theta-\sqrt{1-s^2})\sinh\theta}{
\sinh^2\theta+s^2}$.
We can similarly express the physical observables in the form
\be
O'=\fa_0 I+\sum_{j=1}^3 \fa_j\,S'_j,
\label{gen-obs-prime=}
\ee
where $S'_j:={\eta'_+}^{-1/2}\Sigma_j\,{\eta'_+}^{1/2}$.
As seen from (\ref{Hermitian-A-prime=}) the Hermitian Hamiltonian
{\large$a'$} describes the interaction of a spin $\frac{1}{2}$
particle with a magnetic field that is aligned along the unit vector
$(v,0,u)^T$ in $\R^3$. As one varies $s$ the magnetic field rotates
in the $x$-$z$ plane. It lies on the $z$-axis for $s=0$ which
corresponds to using $\eta_+$ to define the physical Hilbert space.
Clearly, there is no practical advantage of choosing $s\neq 0$.
Furthermore, for all $s\in(-1,1)$ and in particular for $s=0$, the
Hermitian representation of the physical system is actually less
complicated than its pseudo-Hermitian representations. This seems to
be a common feature of a large class of two-level systems
\cite{jpa-2003}.
Next, we compute the symmetry generator $\cC_\sigma$ for
$\sigma_1=-\sigma_2=1$. Denoting this operator by $\cC$ for
simplicity, realizing that $
\cC=|\psi_1\kt\br\phi_1|-|\psi_2\kt\br\phi_2|$, and using
(\ref{eg-va-A=}), (\ref{phi-eg-va-A=}), and $D=e^{2\theta}$, we find
\be
\underline{\cC}=\left(\begin{array}{cc}
\cosh\theta & \sinh\theta\\
-\sinh\theta & -\cosh\theta\end{array}\right).
\label{sec7-C=}
\ee
Observing that in view of (\ref{matrix-A=}), $A^2=D I$, and making
use of this relation and (\ref{sec7-C=}) we are led to the curious
relation \cite{ijmpa-2006}
\be
\cC=\frac{A}{\sqrt{A^2}}.
\label{sec7-C=AA}
\ee
It is important to note that in performing the above calculation we
have not fixed the normalization constants $c_1$ and $c_2$ appearing
in (\ref{eg-va-A=}) and (\ref{phi-eg-va-A=}). Therefore, up to an
unimportant sign, $\cC$ is unique.
As we mentioned above setting $c_1=c_2=D^{-1/4}/2$ and
$\sigma_1=-\sigma_2=1$, we find the pseudo-metric operator
$\eta_\sigma=|\phi_1\kt\br\phi_1|-|\phi_2\kt\br\phi_2|=\Sigma_3$.
Because $\Sigma_3$ is a linear involution we can identify it with
$\cP$. It is a simple exercise to show that $\cP,\cC$ and $\eta_+$
actually satisfy
\be
\cC=\eta_+^{-1}\cP.
\label{sec7-e31}
\ee
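The relations (\ref{sec7-C=}) -- (\ref{sec7-e31}) can be confirmed
numerically with the same matrix representations, again assuming
$\mbox{\underbar{$A$}}=D^{1/2}(\cosh\theta\,\psigma_3+i\sinh\theta\,\psigma_2)$
and $\mbox{\underbar{$\eta$}}_+=e^{\theta\psigma_1}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

theta, D = 0.7, 2.3                      # arbitrary sample values
s1 = np.array([[0., 1.], [1., 0.]])
s2 = np.array([[0., -1j], [1j, 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

A = np.sqrt(D) * (np.cosh(theta)*s3 + 1j*np.sinh(theta)*s2)
eta_plus = expm(theta * s1)
C = np.array([[np.cosh(theta), np.sinh(theta)],
              [-np.sinh(theta), -np.cosh(theta)]])   # eq. (sec7-C=)

assert np.allclose(C @ C, np.eye(2))                   # C is an involution
assert np.allclose(C, A / np.sqrt(D))                  # eq. (sec7-C=AA), since A^2 = D I
assert np.allclose(C, np.linalg.inv(eta_plus) @ s3)    # eq. (sec7-e31), with P = Sigma_3
\end{verbatim}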
We can similarly construct the antilinear symmetry generator $\fS$.
It turns out that unlike $\cC$, $\fS$ depends on (the phase of)
normalization constants $c_1$ and $c_2$ that appear in
(\ref{eg-va-A=}) and (\ref{phi-eg-va-A=}). Setting
$c_1=c_2=D^{-1/4}/2$, we find
\be
\fS=\cT,
\label{sec7-e32}
\ee
where $\cT$ denotes complex conjugation, $\cT\vec z=\vec z^*$. In
view of (\ref{sec7-e32}), the symmetry condition $[\fS,A]=0$
corresponds to the statement that $A$ is a real operator, i.e.,
$\underline{A}$ is a real matrix, which is a trivial observation.
Similarly, we can introduce an antilinear operator $\cT_\sigma$
according to (\ref{sec6-e12n}). This yields
$\cT_\sigma:=|\phi_1\kt\star\br\phi_1|-|\phi_2\kt\star\br\phi_2|$,
which in general depends on the choice of $c_1$ and $c_2$. For
$c_1=c_2=D^{-1/4}/2$, we have $\cT_\sigma=\cP\cT$. Combining this
relation with (\ref{sec7-e32}) yields $\fS=\cP\cT_\sigma$. It is not
difficult to see that indeed $\cT_\sigma$ is an involution. But it
differs from the usual time-reversal operator $\cT$. Let us also
point out that we could construct the pseudo-Hermitian quantum
system defined by $A$ without going through the computation of
$\cC$, $\cP$ and $\cT_\sigma$ operators. What is needed is a metric
operator that defines the inner product of the physical Hilbert
space.
The construction of the $\cC$ operator for two-level systems with a
symmetric Hamiltonian has been initially undertaken in
\cite{bender-jpa-2003}. The $\cC$ operator for general two-level
systems and its relation to the metric operators of pseudo-Hermitian
quantum mechanics are examined in \cite{jpa-2003}. See also
\cite{znojil-geyer-2006}. A comprehensive treatment of the most
general pseudo-Hermitian two-level system that avoids the use of the
$\cC$ operator is offered in \cite{tjp}.
\section{Calculation of Metric Operator}
A pseudo-Hermitian quantum system is defined by a (quasi-Hermitian)
Hamiltonian operator and an associated metric operator $\eta_+$.
This makes the construction of $\eta_+$ the central problem in
pseudo-Hermitian quantum mechanics. There are various methods of
calculating a metric operator. In this section, we examine some of
the more general and useful of these methods.
\subsection{Spectral Method}\label{sec-spectral}
The spectral method, which we employed in
Section~\ref{sec-two-level}, is based on the spectral representation
of the metric operator:
\be
\eta_+=\sum_{n=1}^N|\phi_n\kt\br\phi_n|.
\label{sec8-q1}
\ee
It involves the construction of a complete set of eigenvectors
$\phi_n$ of $A^\dagger$ and summing the series appearing in
(\ref{sec8-q1}) (or performing the integrals in case that the
spectrum is continuous).
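In finite dimensions the spectral method is straightforward to
implement numerically: one diagonalizes $A$, constructs the
biorthonormal eigenvectors $\phi_n$ of $A^\dagger$, and sums
(\ref{sec8-q1}). The following sketch does this for a randomly
generated diagonalizable matrix with a real spectrum; it is only a
toy illustration of the construction and does not reflect the
analytical difficulties encountered for the infinite-dimensional
models discussed below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 5
E = np.sort(rng.normal(size=N))                  # real, nondegenerate spectrum
V = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))  # columns: psi_n
A = V @ np.diag(E) @ np.linalg.inv(V)            # a quasi-Hermitian toy "Hamiltonian"

Phi = np.linalg.inv(V).conj().T                  # biorthonormal phi_n of A^dagger
assert np.allclose(A.conj().T @ Phi, Phi @ np.diag(E))

eta_plus = Phi @ Phi.conj().T                    # eq. (sec8-q1)
assert np.all(np.linalg.eigvalsh(eta_plus) > 0)            # positive definite
assert np.allclose(eta_plus @ A, A.conj().T @ eta_plus)    # eta_+ A = A^dagger eta_+
\end{verbatim}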
\subsubsection{$\cP\cT$-symmetric infinite square well}
\label{sec-pt-well}
The first pseudo-Hermitian and $\cP\cT$-symmetric model with an
infinite-dimensional Hilbert space that has been treated within the
framework of pseudo-Hermitian quantum mechanics is the one
corresponding to the $\cP\cT$-symmetric square well potential
\cite{znoji-well,bmq}:
\be
v(x)=\left\{\begin{array}{ccc}
-i\zeta ~{\rm sgn}(x)&{\rm for}&|x|<L/2,\\
\infty&{\rm for}&|x|\geq L/2,\end{array}\right.
\label{sec8-p2}
\ee
where $\zeta$ and $L$ are real parameters, $L$ is positive, and $x$
takes real values. This was achieved in \cite{jpa-2004b} using the
spectral method combined with a certain approximation scheme that
allowed for a reliable approximate evaluation of a metric operator
as well as the corresponding equivalent Hermitian Hamiltonian and
pseudo-Hermitian position and momentum operators. A more recent
treatment of this model that makes use of the spectral method and
obtains a perturbative expansion for a $\cC$ operator and the
corresponding metric operator $\eta_+$ in powers of $\zeta$ is given
in \cite{bender-well}.
\subsubsection{$\cP\cT$-symmetric barrier}
\label{sec-pt-barrier}
In \cite{jmp-2005}, the spectral method has been used for treating a
pseudo-Hermitian quantum system defined by the scattering potential:
\be
v(x)=\left\{\begin{array}{ccc}
-i\zeta ~{\rm sgn}(x)&{\rm for}&|x|<L/2,\\
0&{\rm for}&|x|\geq L/2,\end{array}\right.
\label{sec8-p3}
\ee
where again $\zeta,L,x\in\R$ and $L$ is positive. This potential was
originally used in \cite{ruschhaupt} as a phenomenological tool for
describing the propagation of electromagnetic waves in certain
dielectric wave guides.\footnote{The use of complex potentials in
constructing various phenomenological models and effective theories
has a long history. For a discussion that is relevant to complex
scattering potentials see the review article \cite{muga-2004} and
\cite{ahmed-pra-2001,dkr-1,cannata}.} It is the first example of a
$\cP\cT$-symmetric potential with a continuous spectrum that could
be studied thoroughly within the context of pseudo-Hermitian quantum
mechanics.
Application of the spectral method for this potential involves
replacing the sum in (\ref{sec8-q1}) with an integral over the
spectral parameter and taking into account the double degeneracy of
the energy levels. The extremely lengthy calculation of a metric
operator for this potential yields the following remarkably simple
expression \cite{jmp-2005}.
\be
\br x|\eta_{+}|y\kt=\delta(x-y)+
{\mbox{$\frac{imL^2\zeta}{16\hbar^2}$}}\,
(2L+2|x+y|-|x+y+L|-|x+y-L|)\,
{\rm sgn}(x-y)+{\cal O}(\zeta^2),
\label{sec8-eta=}
\ee
where $x,y\in\R$, and ${\cal O}(\zeta^2)$ stands for terms of order
$\zeta^2$ and higher.
An unexpected feature of the scattering potential (\ref{sec8-p3}) is
that the corresponding equivalent Hermitian Hamiltonian has an
effective interaction region that is three times larger than that of
the potential (\ref{sec8-p3}). In other words, in the physical
space, which is represented by the spectrum of the pseudo-Hermitian
position operator, the interaction takes place in the interval
$[-\frac{3L}{2},\frac{3L}{2}]$ rather than
$[-\frac{L}{2},\frac{L}{2}]$.
\subsubsection{Delta-function potential with a complex coupling}
Another complex scattering potential for which the spectral method
could be successfully applied is the delta-function potential
\cite{jpa-2006b}:
\be
v(x)=\fz\:\delta(x),
\label{delta-po}
\ee
where $\fz$ is a complex coupling constant with a non-vanishing real
part. For this system it has been possible to compute a metric
operator and show that it is actually a bounded operator up to and
including third order terms in the imaginary part of $\fz$. It is
given by
\be
\br x|\eta_{+}|y\kt=\delta(x-y)+
\frac{im\zeta}{2\hbar^2}\,\left[
\theta(xy)\;e^{-\kappa|x-y|}+
\theta(-xy)\;e^{-\kappa|x+y|}\right]\,
{\rm sgn}(y^2-x^2)+{\cal O}(\zeta^2),
\label{sec8-delta-metric}
\ee
where $\zeta:=\Im(\fz)$, $\kappa:=m\hbar^{-2}\Re(\fz)$, $\Re$ and
$\Im$ denote the real and imaginary parts of their arguments,
$\theta$ is the step function defined by $\theta(x):=[1+{\rm
sgn}(x)]/2$ for all $x\in\R$, and we have omitted the quadratic and
cubic terms for brevity.
In order to determine the physical meaning of the quantum system
defined by the potential (\ref{delta-po}) and the metric operator
(\ref{sec8-delta-metric}), we should examine the Hermitian
representation of the system. The equivalent Hermitian Hamiltonian
is given by \cite{jpa-2006b}
\be
h=\frac{p^2}{2m}+\Re(\fz)\,\delta(x)+\frac{m\zeta^2}{8\hbar^2}
\;h_2+
{\cal O}(\zeta^3),
\label{sec8-h=phys}
\ee
where
\be
(h_2\psi)(x):=\fa_\psi
\,e^{-\kappa|x|}+ \fb_\psi\delta(x),
\label{sec8-h2=phys}
\ee
$\psi\in L^2(\R)$ and $x\in\R$ are arbitrary, $\fa_\psi:=\psi(0)$,
and $\fb_\psi:=\int_{-\infty}^\infty e^{-\kappa|y|}\psi(y)\,dy$. As
seen from (\ref{sec8-h=phys}) and (\ref{sec8-h2=phys}) the nonlocal
character of the Hermitian Hamiltonian $h$ is manifested in the
$\psi$-dependence of the coefficients $\fa_\psi$ and $\fb_\psi$.
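As a minimal illustration of this nonlocality, one may evaluate the
coefficients $\fa_\psi$ and $\fb_\psi$ for a sample wave function on
a grid. The sketch below does this for a Gaussian $\psi$; the
$\delta(x)$ term in (\ref{sec8-h2=phys}) is a distribution and is
only noted in a comment, and the values of $m$, $\hbar$, and
$\Re(\fz)$ are arbitrary.
\begin{verbatim}
import numpy as np

m, hbar, Re_z = 1.0, 1.0, 0.5            # arbitrary sample values
kappa = m * Re_z / hbar**2

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2)                       # sample wave function

a_psi = np.interp(0.0, x, psi)                         # a_psi = psi(0)
b_psi = np.sum(np.exp(-kappa*np.abs(x)) * psi) * dx    # int e^{-kappa|y|} psi(y) dy

# smooth part of (h_2 psi)(x); the delta-function part contributes b_psi*delta(x)
h2_psi_smooth = a_psi * np.exp(-kappa*np.abs(x))
print(a_psi, b_psi)
\end{verbatim}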
A generalization of the delta-function potential (\ref{delta-po})
that allows for a similar analysis is the double-delta function
potential: $v(x)=\fz_-\delta(x+a)+\fz_+\delta(x-a)$, where $\fz_\pm$
and $a$ are complex and real parameters, respectively
\cite{jpa-2009,jpa-2010}. Depending on the values of the coupling
constants $\fz_\pm$, this potential may develop spectral
singularities \cite{naimark,ljance}. These are the points where the
eigenfunction expansion for the corresponding Hamiltonian breaks
down \cite{jpa-2009}. This is a well-known mathematical phenomenon
\cite{naimark,ljance} with an interesting and potentially useful
physical interpretation: A spectral singularity is a real energy
where both the reflection and transmission coefficients diverge.
Therefore it corresponds to a peculiar type of scattering states
that behave exactly like resonances: They are resonances with
zero-width \cite{prl-2009,pra2-2009}.\footnote{The single
delta-function potential (\ref{delta-po}) develops a spectral
singularity for imaginary values of $\fz$. For other examples of
complex potentials with a spectral singularity, see
\cite{samsonov,prl-2009,pra2-2009}.}
\subsubsection{Other Models}
The application of the spectral method for systems with an
infinite-dimensional Hilbert space is quite involved. If the system
has a discrete spectrum it requires summing complicated series, and
if the spectrum is continuous it involves evaluating difficult
integrals. This often makes the use of an approximation scheme
necessary and leads to approximate expressions for the metric
operator. An exception to this general situation is the quantum
system describing a free particle confined within a closed interval
on the real line and subject to a set of $\cP\cT$-symmetric Robin
boundary conditions \cite{KBZ}. For this system the spectral method
may be employed to yield a closed formula for a metric operator.
Other systems for which the spectral method could be employed to
give an explicit and exact expression for the metric operator are
the infinite-dimensional extensions of the two-level system
considered in Section~\ref{sec-two-level} where $D$ is identified
with a positive-definite operator acting in an infinite-dimensional
Hilbert space \cite{cqg,ap}. These quantum systems appear in a
certain two-component representation of the Klein-Gordon
\cite{jpa-1998} and (minisuperspace) Wheeler-DeWitt fields
\cite{jmp-1998}.
\subsection{Perturbation Theory}
\label{sec-pert}
The standard perturbation theory has been employed in the
determination of the spectrum of various complex potentials since
long ago \cite{caliceti-1980}.\footnote{For more recent
developments, see
\cite{bender-dunne-1999,caliceti-2005,caliceti-2006} and references
therein.} In the present discussion we use the term ``perturbation
theory'' to mean a particular perturbative method of constructing a
metric operator for a given quasi-Hermitian Hamiltonian operator.
This method involves the following steps.
\begin{enumerate}
\item Decompose the Hamiltonian $H$ in the form
\be
H=H_0+\epsilon H_1,
\label{sec8-q2}
\ee
where $\epsilon$ is a real (perturbation) parameter, and $H_0$ and
$H_1$ are respectively Hermitian and anti-Hermitian
$\epsilon$-independent operators.
\item Use the fact that $\eta_+$ (being a positive-definite
operator) has a unique Hermitian logarithm to introduce the
Hermitian operator $Q:=-\ln\eta_+$, so that
\be
\eta_+=e^{-Q},
\label{sec8-q3}
\ee
and express the pseudo-Hermiticity relation
$H^\dagger=\eta_+H\eta_+^{-1}$ in the form
\be
H^\dagger=e^{-Q}H\,e^{Q}.
\label{sec8-q3-ph}
\ee
In view of the Baker-Campbell-Hausdorff identity \cite{ryder},
\be
e^{-Q}H\,e^{Q}=H+\sum_{\ell=1}^\infty
\frac{1}{\ell!}[H,Q]_\ell=
H+[H,Q]+\frac{1}{2!}[[H,Q],Q]+\frac{1}{3!}[[[H,Q],Q],Q]+
\cdots,
\label{bch-identity}
\ee
where $[H,Q]_\ell:=[[\cdots[[H,Q],Q],\cdots],Q]$ and $\ell$ is the
number of copies of $Q$ appearing on the right-hand side of this
relation, (\ref{sec8-q3-ph}) yields
\be
H^\dagger=H+\sum_{\ell=1}^\infty
\frac{1}{\ell!}[H,Q]_\ell.
\label{ph-bch-identity}
\ee
\item Expand $Q$ in a power series in $\epsilon$ of the form
\be
Q=\sum_{j=1}^\infty Q_j\:\epsilon^j,
\label{sec8-q4}
\ee
where $Q_j$ are $\epsilon$-independent Hermitian operators.
\item Insert (\ref{sec8-q2}) and (\ref{sec8-q4}) in (\ref{ph-bch-identity})
and equate terms of the same order in powers
of $\epsilon$ that appear on both sides of this equation. This
leads to a set of operator equations for $Q_j$ which have the form
\cite{jpa-2006a}
\be
[H_0,Q_j]=R_j.
\label{sec8-q5}
\ee
Here $j\in\Z^+$ and $R_j$ is determined in terms of $H_1$ and
$Q_k$ with $k<j$ according to
\bea
&&R_j:=\left\{\begin{array}{ccc}
-2H_1&{\rm for}& j=1,\\
\sum_{k=2}^j q_k Z_{kj}&{\rm for}& j\geq 2,\end{array}
\right.~~~~~
q_k:=\sum_{m=1}^k\sum_{n=1}^m
\frac{(-1)^nn^k m!}{k!2^{m-1}n!(m-n)!}\,,~~
\label{sec8-q6}\\
&&Z_{kj}:=
\sum_{\stackrel{s_1,\cdots,s_k\in\Z^+}{s_1+\cdots+s_k=j}}
[[[\cdots[H_0,Q_{s_1}],Q_{s_2}],\cdots,],Q_{s_k}].
\label{sec8-q7}
\eea
More explicitly we have
{\small
\bea
\left[H_0,Q_1\right]&=&-2H_1,
\label{e1}\\
\left[H_0,Q_2\right]&=& 0,
\label{e2}\\
\left[H_0,Q_3\right]&=&-\frac{1}{6}[H_1,Q_1]_{_2},
\label{e3}\\
\left[H_0,Q_4\right]&=&
-\frac{1}{6}\left([[H_1,Q_1],Q_2]+[[H_1,Q_2],Q_1]\right),
\label{e4}\\
\left[H_0,Q_5\right]&=&
\frac{1}{360}\,[H_1,Q_1]_{_4}-\frac{1}{6}\,
\left([H_1,Q_2]_{_2}+[[H_1,Q_1],Q_3]+[[H_1,Q_3],Q_1]\right).
\label{e5}
\eea}%
\item Solve the above equations for $Q_j$ iteratively by
making an appropriate ansatz for their general form.
\end{enumerate}
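A minimal finite-dimensional illustration of the last step is the
following. In the eigenbasis of $H_0$ the commutator equations become
algebraic; e.g., (\ref{e1}) gives $(Q_1)_{mn}=-2(H_1)_{mn}/(E_m-E_n)$
for $m\neq n$, with solvability requiring the diagonal entries of
$H_1$ to vanish in this basis (as they do for parity-odd
perturbations). The sketch below solves (\ref{e1}) for a truncated
harmonic-oscillator $H_0$ and $H_1=i\epsilon\,x^3$; the truncation
size and the value of $\epsilon$ are arbitrary, and only the leading
term $Q_1$ is computed.
\begin{verbatim}
import numpy as np

N, eps = 30, 0.1                             # arbitrary truncation and coupling
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)             # truncated annihilation operator
x = (a + a.T) / np.sqrt(2.0)                 # hbar = m = omega = 1
H0 = np.diag(n + 0.5)                        # Hermitian part
H1 = 1j * eps * np.linalg.matrix_power(x, 3) # anti-Hermitian part, zero diagonal
E = np.diag(H0)

dE = E[:, None] - E[None, :]                 # E_m - E_n
Q1 = np.zeros((N, N), dtype=complex)
off = ~np.eye(N, dtype=bool)
Q1[off] = -2.0 * H1[off] / dE[off]

assert np.allclose(Q1, Q1.conj().T)              # Q_1 is Hermitian
assert np.allclose(H0 @ Q1 - Q1 @ H0, -2.0 * H1) # eq. (e1)
\end{verbatim}
The higher-order terms $Q_2,Q_3,\cdots$ follow in the same way from
(\ref{e2}) -- (\ref{e5}).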
A variation of this method was originally developed in
\cite{bender-prd-2004} to compute the $\cC$ operator for the
following $\cP\cT$-symmetric Hamiltonians and some of their
multidimensional and field-theoretical generalizations.
\bea
H&=&\frac{1}{2m}\,p^2+\frac{1}{2}\,\mu^2 x^2+i\epsilon\, x^3,
\label{sec8-pt-sym-3}\\
H&=&\frac{1}{2m}\,p^2+i\epsilon\, x^3,
\label{sec8-pt-cubic}
\eea
where $\mu$ and $\epsilon$ are nonzero real coupling constants.
\subsubsection{$\cP\cT$-symmetric cubic anharmonic oscillator}
\label{sec-cubic-osc}
A perturbative calculation of a metric operator and the
corresponding equivalent Hermitian Hamiltonian and pseudo-Hermitian
position and momentum operators for the Hamiltonian
(\ref{sec8-pt-sym-3}) has been carried out in
\cite{jpa-2005b,jones-2005}.
Following \cite{bender-prd-2004} one can satisfy the operator
equations for $Q_j$ by taking $Q_{2i}=0$ for all $i\in\Z^+$ and
adopting the ansatz
\be
Q_{2i+1}=\sum_{j,k=0}^{i+1} c_{ijk}~\{x^{2j},p^{2k+1}\},
\label{ansatz}
\ee
where $\{\cdot,\cdot\}$ stands for the anticommutator and $c_{ijk}$
are real constants. Inserting (\ref{ansatz}) in (\ref{sec8-q5}), one
can determine $c_{ijk}$ for small values of $i$ \cite{jpa-2005b}.
See also \cite{bender-prd-2004,jones-2005}.
Again, to determine the physical content of the system defined by
the Hamiltonian (\ref{sec8-pt-sym-3}) and the metric operator
$\eta_+=e^{-Q}$, we need to inspect the associated Hermitian
Hamiltonian operator \cite{jpa-2005b}:
\bea
h&=&\frac{p^2}{2m}+\frac{1}{2}\mu^2x^2+
\frac{1}{m\mu^4}\left(\{x^2,p^2\}+p\,x^2p+
\frac{3m\mu^2}{2}\,x^4\right)
\epsilon^2+\frac{2}{\mu^{12}}
\left(\frac{p^6}{m^3}-\frac{63\mu^2}{16m^2}\{x^2,p^4\}\right.
\nn\\
&&\left.
-\frac{81\mu^2}{8m^2}\,p^2x^2p^2-\frac{33\mu^4}{16m}
\{x^4,p^2\}-\frac{69\mu^4}{8m}\,x^2p^2x^2
-\frac{7\mu^6}{4}\,x^6
\right)\epsilon^4+
{\cal O}(\epsilon^6),
\label{h-pert=4b}
\eea
and the underlying classical Hamiltonian (\ref{class-H-2}):
\bea
H_c&=&\frac{p_c^2}{2m}+\frac{1}{2}\mu^2x_c^2+
\frac{3}{2\mu^4}\left(\frac{2}{m}\,x_c^2p_c^2+\mu^2x_c^4\right)
\epsilon^2+\nn\\
&&\frac{2}{\mu^{12}}
\left(\frac{p_c^6}{m^3}-\frac{18\mu^2}{m^2}\,x_c^2p_c^4-
\frac{51\mu^4}{4m}
\,x_c^4p_c^2-\frac{7\mu^6}{4}\,x_c^6\right)\epsilon^4+
{\cal O}(\epsilon^6).
\label{H-class=}
\eea
If we only consider the terms of order $\epsilon^2$ and lower, we
can express (\ref{H-class=}) in the form
\be
H_c=\frac{p_c^2}{2M(x_c)}+\frac{\mu^2}{2}\,x_c^2+
\frac{3\epsilon^2}{2\mu^2}\,x_c^4+{\cal O}(\epsilon^4),
\label{H-class=2}
\ee
where $ M(x_c):=m(1+3\mu^{-4}\epsilon^2\,x_c^2)^{-1}=
m(1-3\mu^{-4}\epsilon^2\,x_c^2)+{\cal O}(\epsilon^4)$.
This shows that for small values of $\epsilon$, the
$\cP\cT$-symmetric Hamiltonian (\ref{sec8-pt-sym-3}) describes a
position-dependent-mass quartic anharmonic oscillator
\cite{jpa-2005b}. This observation has motivated the use of
non-Hermitian constant-mass standard Hamiltonians,
$H=p^2/(2m)+v(x)$, in the perturbative description of a class of
position-dependent-mass standard Hamiltonians \cite{BQR}.
As seen from (\ref{h-pert=4b}), the 4-th order (in $\epsilon$)
contribution to the equivalent Hermitian Hamiltonian $h$ involves
$p^6$. It is not difficult to show that $h=\sum_{\ell=0}^\infty
h_\ell \:\epsilon^{2\ell}$, where $h_\ell$ is a polynomial in $p$
whose degree is an increasing function of $\ell$. Therefore, the
perturbative expansion of $h$ includes arbitrarily large powers of
$p$. This confirms the expectation that $h$ is a nonlocal operator.
The same holds for the pseudo-Hermitian position and momentum
operators \cite{jpa-2005b,jones-2005}.
\subsubsection{Imaginary cubic potential}
\label{sec-cubic-imag}
Ref.~\cite{jpa-2006a} gives a perturbative treatment of the
Hamiltonian
\be
H=\frac{p^2}{2m}+i\,\epsilon\, x^3,
\label{sec8-cubic-pot}
\ee
in which the operator equations (\ref{sec8-q5}) are turned into
certain differential equations and solved iteratively. This method
relies on the observation that for this Hamiltonian, $H_0=p^2/2m$.
Therefore,
\[\br x|[H_0,Q_j]|y\kt=
-\frac{\hbar^2}{2m}\left(\partial_x^2-\partial_y^2\right)\br
x|Q_j|y\kt.\]
In view of this identity and (\ref{sec8-q5}), we find
\be
(-\partial_x^2+\partial_y^2)\,\br x|Q_j|y\kt=\frac{2m}{\hbar^2}
\:\br x|R_j|y\kt.
\label{sec8-wave}
\ee
Because $R_j$ is given in terms of $H_1$ and $Q_i$ with $i<j$, one
can solve (\ref{sec8-wave}) iteratively for $\br x|Q_j|y\kt$. Note
also that this equation is a non-homogeneous $(1+1)$-dimensional
wave equation which is exactly solvable.
This approach has two important advantages over the earlier
perturbative calculation of the metric operator for the imaginary
cubic potential \cite{bender-prd-2004}. Firstly, it involves solving
a well-known differential equation rather than dealing with
difficult operator equations. Secondly, it is not restricted by the
choice of an ansatz, i.e., it yields the most general expression for
the metric operator. In particular, it reveals large classes of
$\cC\cP\cT$ and non-$\cC\cP\cT$-inner products that were missed in
an earlier calculation given in \cite{bender-prd-2004}. Here we give
the form of the equivalent Hermitian Hamiltonian associated with the
most general admissible metric operator:
\bea
h&=&\frac{p^2}{2m}+\frac{3m}{16}
\left(\{x^6,\frac{1}{p^2}\}
+22\hbar^2\{x^4,\frac{1}{p^4}\}+\alpha_2\,\hbar^4
\{x^2,\frac{1}{p^6}\}+\right.\nn\\
&&\left.
\hspace{.7cm}
\frac{(14\alpha_2+1680)\hbar^6}{p^8}
+\beta_2\,\hbar^3\{x^3,\frac{1}{p^5}\}\:{\cal P}\right)
\epsilon^2+\nn\\
&&\hspace{1.4cm}\hbar^6\left(
\alpha_3 (\hbar\{x^2,\frac{1}{p^{11}}\}
+\frac{44\hbar^3}{p^{13}} )+i\beta_3\,\{x^3,\frac{1}{p^{10}}\}\:
{\cal P}\right)\epsilon^3+
{\cal O}(\epsilon^4),
\label{h=man}
\eea
where $\alpha_2,\alpha_3,\beta_2,\beta_3$ are free real parameters
characterizing the nonuniqueness of the metric operator, and $\cP$
is the parity operator \cite{jpa-2006a}.
A remarkable feature of the Hamiltonian~(\ref{sec8-cubic-pot}) is
that the underlying classical Hamiltonian is independent of the
choice of the metric operator (to all orders of perturbation). Up to
terms of order $\epsilon^3$ it is given by the following simple
expression.
\be
H_c=\frac{p^2_c}{2m}+\frac{3}{8}\,m\epsilon^2\,
\frac{x_c^6}{p_c^2}
+{\cal O}(\epsilon^4).
\label{cubic-H-classical}
\ee
The presence of $p_c^2$ in the denominator of the second term is a
clear indication that the equivalent Hermitian Hamiltonian is a
nonlocal operator (and that this is the case regardless of the
choice of the metric operator.) Again the classical Hamiltonian
(\ref{cubic-H-classical}) clarifies the meaning of the imaginary
cubic potential $i\,\epsilon\,x^3$.
\subsubsection{Other Models}
The perturbation theory usually leads to an infinite series
expansion for the metric operator whose convergence behavior is
difficult to examine. There are however very special models for
which this method gives exact expressions for $Q$ and consequently
the metric operator $\eta_+=e^{-Q}$. Examples of such models are
given in \cite{bender-lee-model,BJR,jm,bender-mannheim}. The
simplest example is the free particle Hamiltonian studied in
\cite{jpa-2006a}.
For other examples of the perturbative calculation of a metric
operator and the corresponding equivalent Hermitian Hamiltonian, see
\cite{banerjee} and particularly \cite{FF}.
\subsection{Differential Representations of Pseudo-Hermiticity}
In the preceding subsection we show how one can turn the operator
equations appearing in the perturbative calculation of the metric
operator into certain differential equations. In this subsection we
outline a direct application of differential equations in the
computation of pseudo-metric operators for a large class of
pseudo-Hermitian Hamiltonian operators $H$ acting in the reference
Hilbert space $L^2(\R)$.
In the following we outline two different methods of identifying a
differential representation of the pseudo-Hermiticity condition,
\be
H^\dagger=\eta H\eta^{-1}.
\label{sec8-ph-pre}
\ee
\subsubsection{Field equation for the metric from
Moyal product}
Consider expressing (\ref{sec8-ph-pre}) in the form
\be
\eta H=H^\dagger\eta,
\label{sec8-ph}
\ee
and viewing $\eta$ and $H$ as complex-valued functions of $x$ and
$p$ that are composed by the Moyal $\mbox{\large$*$}$-product:
\be
f(x,p)\:\mbox{\large$*$}\:g(x,p):=f(x,p)\:
e^{i\hbar\stackrel{\leftarrow}{\partial_x}
\stackrel{\rightarrow}{\partial_p}}\:g(x,p)=
\sum_{k=0}^\infty \frac{(i\hbar)^k}{k!}\:[\partial^k_x f(x,p)]
\,\partial^k_p g(x,p).
\label{moyal}
\ee
This yields: $\eta(x,p)\:\mbox{\large$*$}\:H(x,p)=
H(x,p)^*\:\mbox{\large$*$}\:\eta(x,p)$, \cite{scholtz-geyer-2006}.
With the help of (\ref{moyal}) we can express this equation more
explicitly as
\be
\sum_{k=0}^\infty \frac{(i\hbar)^k}{k!}\:\Big\{
[\partial_p^k H(x,p)]\,\partial_x^k-
[\partial_x^k H(x,p)^*]\,\partial_p^k\Big\}\,\eta(x,p)=0.
\label{ph-moyal-eq}
\ee
This is a linear homogeneous partial differential equation of finite
order only if $H(x,p)$ is a polynomial in $x$ and $p$. For example,
for the imaginary cubic potential, i.e., the Hamiltonian
$H=\frac{p^2}{2m}+i\,\epsilon\, x^3$, it reads
\be
\left[\epsilon\,\hbar^3\partial_p^3
-3\,i\,\epsilon\,\hbar^2 x\,\partial_p^2-(2m)^{-1}\hbar^2\partial_x^2
-3\,\epsilon\,\hbar x^2\partial_p+im^{-1}\hbar\, p\,\partial_x+
2\,i\,\epsilon\, x^3\right]\eta(x,p)=0.
\label{sec8-moy-e1}
\ee
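Because the series in (\ref{ph-moyal-eq}) terminates at $k=3$ for
this Hamiltonian, (\ref{sec8-moy-e1}) follows by a short computation.
The following \texttt{sympy} sketch reproduces it symbolically; it is
merely a consistency check of the algebra.
\begin{verbatim}
import sympy as sp

x, p, hbar, m, eps = sp.symbols('x p hbar m epsilon', real=True)
eta = sp.Function('eta')(x, p)

H = p**2 / (2*m) + sp.I * eps * x**3
Hc = sp.conjugate(H)                         # H(x,p)^*

# left-hand side of (ph-moyal-eq); the sum terminates at k = 3
lhs = sum((sp.I*hbar)**k / sp.factorial(k) *
          (sp.diff(H, p, k) * sp.diff(eta, x, k) -
           sp.diff(Hc, x, k) * sp.diff(eta, p, k)) for k in range(4))

# right-hand side: the operator displayed in (sec8-moy-e1) applied to eta
rhs = (eps*hbar**3*sp.diff(eta, p, 3) - 3*sp.I*eps*hbar**2*x*sp.diff(eta, p, 2)
       - hbar**2/(2*m)*sp.diff(eta, x, 2) - 3*eps*hbar*x**2*sp.diff(eta, p)
       + sp.I*hbar*p/m*sp.diff(eta, x) + 2*sp.I*eps*x**3*eta)

assert sp.simplify(lhs - rhs) == 0
\end{verbatim}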
The presence of variable coefficients in this equation is an
indication that it is not exactly solvable. Particular perturbative
solutions can however be constructed. This applies more generally
for other polynomial Hamiltonians. Explicit examples are given in
\cite{scholtz-geyer-2006,FF-moyal,assis-fring}.
We should like to note however that not every solution of
(\ref{ph-moyal-eq}) defines a pseudo-metric (respectively metric)
operator. We need to find solutions that correspond to Hermitian
(respectively positive-definite) and invertible operators $\eta$.
Ref.~\cite{scholtz-geyer-2006} suggests ways to address this
problem.
The above-described method that is based on the Moyal product has
two important shortcomings.
\begin{enumerate}
\item If the Hamiltonian is not a polynomial of $x$ and $p$, then the
resulting equation (\ref{ph-moyal-eq}) is not a differential
equation with a finite order. This makes its solution extremely
difficult. This is true unless $H(x,p)$ has a particularly simple
form. A typical example is the exponential potential $e^{ix}$
treated in Ref.~\cite{curtright-jmp-2007}. This potential is
actually one of the oldest $\cP\cT$-symmetric potentials whose
spectral problem has been examined thoroughly \cite{gasymov}. For
$x\in\R$, its spectrum includes an infinity of spectral
singularities that prevent this potential from defining a genuine
unitary evolution.\footnote{For a discussion of biorthonormal
systems for this potential with $x$ taking values on a circle (a
closed interval with periodic boundary condition on the
eigenfunctions), see \cite{CM-jmp-2007}.}
\item For the polynomial Hamiltonians, for which (\ref{ph-moyal-eq})
is a differential equation, the general form and even the order of
this equation depends on the Hamiltonian. In particular, for the
standard Hamiltonians of the form
\be
H=\frac{p^2}{2m}+v(x),
\label{sec8-st-H}
\ee
they depend on the choice of the potential $v(x)$.
\end{enumerate}
We shall next discuss a differential representation of the
pseudo-Hermiticity that does not suffer from any of these
shortcomings.
\subsubsection{Universal field equation for the metric}
Consider a pseudo-Hermitian Hamiltonian of standard form
(\ref{sec8-st-H}). Substituting (\ref{sec8-st-H}) in the
pseudo-Hermiticity relation $\eta H=H^\dagger\eta$ and evaluating
the matrix elements of both sides of the resulting equation in the
coordinate basis $\{|x\kt\}$, we find \cite{jmp-2006a}
\be
\left[-\partial_x^2+\partial_y^2+\mu^2(x,y)\right]
\eta(x,y)=0,
\label{sec-8-diff-1}
\ee
where $\mu^2(x,y):=\frac{2m}{\hbar^2}\,[v(x)^*-v(y)]$,
and $\eta(x,y):=\br x|\eta|y\kt$.
Eq.~(\ref{sec-8-diff-1}) is actually a Klein-Gordon equation for
$\br x|\eta|y\kt$ (with a variable mass term.) As such, it is much
easier to handle than the equation obtained for the pseudo-metric in
the preceding subsection, i.e., (\ref{ph-moyal-eq}). Moreover, it
applies to arbitrary polynomial and non-polynomial potentials.
It turns out that one can actually obtain a formal series expansion
for the most general solution of (\ref{sec-8-diff-1}) that satisfies
the Hermiticity condition: $\eta(x,y)=\eta(y,x)^*$.
This solution has the form \cite{jmp-2006a}
\be
\eta(x,y)=\sum_{\ell=0}^\infty {\cal K}^\ell u(x,y),
\label{sec-8-per}
\ee
where ${\cal K}$ is the integral operator defined by
\be
{\cal K}\,f(x,y):=\frac{m}{\hbar^2}\,\left[
\int^ydr \int_{x-y+r}^{x+y-r}
ds~v(r)\,f(s,r)+
\int^xds \int_{-x+y+s}^{x+y-s}
dr~v(s)^*\,f(s,r)\right],
\label{sec8-K=}
\ee
$f:\R^2\to\C$ is an arbitrary test function, $u:\R^2\to\C$ is
defined by $u(x,y):=u_+(x-y)+u_-(x+y)$, and $u_\pm$ are arbitrary
complex-valued (piecewise) smooth (generalized) functions satisfying
$u_\pm(x)^*=u_\pm(\mp x)$.
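Note that the $\ell=0$ term of (\ref{sec-8-per}) is just the general
solution of the homogeneous equation
$(-\partial_x^2+\partial_y^2)\,\eta(x,y)=0$, and the condition
$u_\pm(x)^*=u_\pm(\mp x)$ is what guarantees
$\eta(x,y)=\eta(y,x)^*$ at this order. A small \texttt{sympy} sketch
checking both statements, the second one for an arbitrarily chosen
admissible pair $u_\pm$:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)

# lowest-order term: the general solution of the homogeneous wave equation
up, um = sp.Function('u_plus'), sp.Function('u_minus')
eta0 = up(x - y) + um(x + y)
assert sp.simplify(-sp.diff(eta0, x, 2) + sp.diff(eta0, y, 2)) == 0

# a concrete admissible choice (hypothetical): u_+(t)^* = u_+(-t), u_- real
u_plus = lambda t: (1 + sp.I*t) * sp.exp(-t**2)
u_minus = lambda t: sp.cos(t) * sp.exp(-t**2)
eta0c = u_plus(x - y) + u_minus(x + y)
eta0c_swapped = u_plus(y - x) + u_minus(y + x)   # eta(y, x)
assert sp.simplify(sp.expand(sp.conjugate(eta0c_swapped) - eta0c)) == 0
\end{verbatim}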
For imaginary potentials, the series solution (\ref{sec-8-per})
provides an extremely effective perturbative method for the
construction of the most general metric operator. For example, the
application of this method for the $\cP\cT$-symmetric square well
potential that we discussed in Subsection~\ref{sec-pt-well} yields,
after a page-long straightforward calculation, \cite{jmp-2006a}
\be
\eta(x,y)=\delta(x-y)+\zeta\left[w_+(x-y)+w_-(x+y)+
\frac{im}{2\hbar^2}\,|x+y|\,{\rm sgn}(x-y)\right]+{\cal
O}(\zeta^2),
\label{gen-eta-sw}
\ee
where $w_\pm:[-\frac{L}{2},\frac{L}{2}]\to\C$ are arbitrary
functions satisfying $w_\pm(x)^*=w_\pm(\mp x)$ and $w_\pm(\pm L)=0$.
The metric operator associated with the $\cC\cP\cT$-inner product
that is computed using the spectral method in \cite{bender-well}
turns out to correspond to a particular choice for $w_\pm$ in
(\ref{gen-eta-sw}).
Perhaps better evidence of the effectiveness of this method is
its application in the construction of a metric operator for the
$\cP\cT$-symmetric barrier potential that we examined in
Subsection~\ref{sec-pt-barrier}. Again, a two-pages-long calculation
yields \cite{jmp-2006a}
\bea
\eta(x,y)&=&\delta(x-y)+\zeta\big[~w_+(x-y)+w_-(x+y)+\nn\\
&&\hspace{.2cm}
\frac{im}{4\hbar^2}
\left(2|x+y|-|x+y+L|-|x+y-L|\right){\rm sgn}(x-y)\big]+
{\cal O}(\zeta^2).
\label{eta-pde-gen}
\eea
This expression reproduces the result obtained in \cite{jmp-2005}
using the spectral method (after over a hundred pages of
calculations), namely (\ref{sec8-eta=}), as a special case.
In Ref.~\cite{jmp-2006a} this differential representation of
pseudo-Hermiticity has been used to obtain a perturbative expression
for the metric operator associated with the imaginary delta-function
potentials of the form\footnote{The spectral properties of
$\cP\cT$-symmetric potentials of this form and their consequences
have been studied in
\cite{jones-1,ahmed-1,albaverio,dkr-1,demiralp2}. In particular, see
\cite{jpa-2009}.}
\be
v(x)=i\sum_{n=1}^N \zeta_n\,\delta(x-a_n),
\label{sec8-delta}
\ee
where $\zeta_n,a_n\in\R$. The result is
\be
\eta(x,y)=\delta(x-y)+\sum_{n=1}^N \frac{2m\zeta_n}{\hbar^2}\!\!
\left[w_{n+}(x-y)+w_{n-}(x+y)+\frac{i}{2}\theta(x+y-2a_n)\,
{\rm sgn}(y-x)\right]
+{\cal O}(\zeta_n^2).
\label{sec8-eta-delta-gen}
\ee
For the special case: $N=2$, $a_1=-a_2>0$ and $\zeta_1=-\zeta_2>0$,
where (\ref{sec8-delta}) is a $\cP\cT$-symmetric potential, a
careful application of the spectral method yields a
positive-definite perturbatively bounded metric operator
\cite{batal} that turns out to be a special case of
(\ref{sec8-eta-delta-gen}). The general $N=2$ case, which depending
on the choice of $\zeta_k$ may or may not possess $\cP\cT$-symmetry,
has been examined in \cite{jpa-2009,jpa-2010}.
The main difficulty with the approaches presented in this section
(and its subsections) is that they may lead to a ``metric'' operator
that is unbounded or non-invertible.\footnote{One must also restrict
the free functions appearing in the formula for $\eta(x,y)$ so that
the operator $\eta$ they define is at least densely-defined.} For
example setting $N=1$ in (\ref{sec8-delta}), one finds a delta
function potential with an imaginary coupling that gives rise to a
spectral singularity \cite{jpa-2006b}. Therefore, the corresponding
Hamiltonian is not quasi-Hermitian, and there is actually no genuine
(bounded, invertible, positive-definite) metric operator for this
potential. Yet, one can use (\ref{sec8-eta-delta-gen}) to obtain a
formula for a ``metric operator''! This observation suggests that
one must employ this method with extra care.
\subsection{Lie Algebraic Method}
In Subsection~\ref{sec-pert}, we described a perturbative scheme for
solving the pseudo-Hermiticity relation,
\be
H^\dagger=e^{-Q}H\, e^{Q},
\label{ph-Lie}
\ee
for the operator $Q$ that yields a metric operator upon
exponentiation, $\etap=e^{-Q}$. In this section we explore a class
of quasi-Hermitian Hamiltonians and corresponding metric operators
for which (\ref{ph-Lie}) reduces to a finite system of numerical
equations, although the Hilbert space is infinite-dimensional. The
key idea is the use of an underlying Lie algebra. In order to
describe this method we first recall some basic facts about Lie
algebras and their representations.
\subsubsection{Lie algebras and their representations}
Consider a matrix Lie group $G$, i.e., a subgroup of the general
linear group $GL(N,\C)$ for some $N\in\Z^+$, and let $\cG$ denote
its Lie algebra, \cite{elliott-dawber,isham}. A unitary
representation of $G$ is a mapping $\cU$ of $G$ into the group of
all unitary operators acting in a separable Hilbert space $\cH$ such
that the identity element of $G$ is mapped to the identity operator
acting in $\cH$ and for all $g_1,g_2\in G$,
$\cU(g_1g_2)=\cU(g_1)\cU(g_2)$. Such a unitary representation
induces a unitary representation for $\cG$, i.e., a linear mapping
$\fU$ of $\cG$ into the set of anti-Hermitian linear operators
acting in $\cH$ such that for all $X_1,X_2\in\cG$, $\fU([X_1,X_2])=
[\fU(X_1),\fU(X_2)]$, \cite{elliott-dawber,fell-doran}. The mappings
$\cU$ and $\fU$ are related according to: $\cU(e^X)=e^{\fU(X)}$, for
all $X\in\cG$. Furthermore, because for all $X\in\cG$, $\fU(X)$ is
an anti-Hermitian operator acting in $\cH$, there is a Hermitian
operator $K:\cH\to\cH$ such that $\fU(X)=iK$.
Let $\{K_1,K_2,\cdots, K_d\}$ be a set of Hermitian operators acting
in $\cH$ such that $\{iK_1,iK_2,\cdots,iK_d\}$ is a basis of
$\fU(\cG)$. Then $K_a$ with $a\in\{1,2,\cdots,d\}$ are called
\emph{generators} of $G$ in the representation $\cU$. If $\cU$ is a
faithful representation, i.e., it is a one-to-one mapping, the same
holds for $\fU$ and $d$ coincides with the dimension of $G$. In this
case, we refer to the matrices
\be
\underline{K_a}:=\fU^{-1}(K_a)
\label{sec4-generators=}
\ee
as generators of $G$ in its standard representation.
Next, consider the set of complex linear combinations of
$\underline{K_a}$, i.e., the \emph{complexification} of $\cG$:
$\cG_{_\C}:=\left\{\sum_{a=1}^d
\fc_a\,\underline{K_a}~|~\fc_a\in\C~\right\}$. We can extend the
domain of definition of $\fU$ to $\cG_{_\C}$ by linearity: For all
$\fc_a\in\C$, $\fU\left(\sum_{a=1}^d \fc_a\,\underline{K_a}\right):=
\sum_{a=1}^d \fc_a\:\fU(\underline{K_a})=
\sum_{a=1}^d \fc_a\, K_a$.
Similarly, we extend the definition of $\cU$ to the set of elements
of $GL(N,\C)$ that are obtained by exponentiation of those of
$\cG_{_\C}$. This is done according to
\be
\cU(e^X)=e^{\fU(X)},~~~~~~~\mbox{for all $X\in\cG_{_\C}$}.
\label{sec4-pre-cbh-id}
\ee
Next, we recall that according to the Baker-Campbell-Hausdorff identity
(\ref{bch-identity}), for all $X,Y\in\cG_\C$, $e^{-X}Y
e^{X}\in\cG_{_\C}$. Furthermore, (\ref{bch-identity}) and
(\ref{sec4-pre-cbh-id}) imply
\be
\cU(e^{-X}Y\,e^{X})=e^{-\fU(X)}\fU(Y)\,e^{\fU(X)}=\cU(e^{-X})\,
\fU(Y)\,\cU(e^{X})~~~~~\mbox{for all
$X,Y\in\cG_\C$.}
\label{sec4-main-id}
\ee
This completes our mathematical digression.
\subsubsection{General outline of the method}
Suppose that $H:\cH\to\cH$ can be expressed as a polynomial in the
Hermitian generators $K_a$ of $G$ in a faithful unitary
representation $\cU$,\footnote{In mathematical terms, one says that
$H$ is an element of the enveloping algebra of $\cG$ in the
representation $\cU$.} i.e.,
\be
H=\sum_{k=1}^n\sum_{a_1,a_2,\cdots,a_k=1}^d
\lambda_{a_1,a_2,\cdots a_k}~K_{a_1}\, K_{a_2}\cdots K_{a_k},
\label{sec4-Lie1}
\ee
where $n\in\Z^+$, $d$ is the dimension of $G$, and
$\lambda_{a_1,a_2,\cdots a_k}\in\C$. Demand that $H$ admits a metric
operator of the form $\etap=e^{-Q}$ with $Q$ given by
\be
Q=\sum_{a=1}^d r_a~ K_a,~~~~~\mbox{for some $r_a\in\R$}.
\label{sec4-Lie4-Q}
\ee
Then as we will show below the right-hand side of (\ref{ph-Lie}) can
be evaluated using the standard representation of $\cG$ and readily
expressed as a polynomial in $K_a$ with the same order as $H$. Upon
imposing (\ref{ph-Lie}), we therefore obtain a (finite) set of
numerical equations involving the coupling constants
$\lambda_{a_1,a_2,\cdots a_k}$ and the parameters $r_a$ that
determine the metric operator via
\be
\etap:=\exp\left(-\sum_{a=1}^d
r_a~ K_a\right).
\label{sec4-Lie4}
\ee
In order to demonstrate how this method works, we introduce the
matrix $\underline{\etap}:=\exp\left(-\sum_{a=1}^d
r_a~\underline{K_a}\right)$,
that belongs to $\exp(\cG_{_\C})$ and satisfies $
\etap=\cU(\underline{\etap})$.
This together with (\ref{sec4-generators=}) and (\ref{sec4-main-id})
imply
\bea
\etap H\,\etap^{-1}&=&\sum_{k=1}^n\sum_{a_1,a_2,\cdots,a_k=1}^d
\lambda_{a_1,a_2,\cdots a_k}~
(\etap K_{a_1}\etap^{-1}) (\etap K_{a_2}\etap^{-1})\cdots (\etap
K_{a_k}\etap^{-1})\nn\\
&=&\sum_{k=1}^n\sum_{a_1,a_2,\cdots,a_k=1}^d\lambda_{a_1,a_2,\cdots a_k}~
\fU(\underline{\etap}\:\underline{K_{a_1}}\:\underline{\etap}^{-1})\cdots
\fU(\underline{\etap}\:\underline{K_{a_k}}\:\underline{\etap}^{-1})
\label{sec4-ph-Lie-exp1}
\eea
Because for all $a\in\{1,2,\cdots,d\}$,
$\underline{\etap}\:\underline{K_{a}}\:\underline{\etap}^{-1}$
belongs to $\cG_{_\C}$, there are complex coefficients $\kappa_{ab}$
depending on the structure constants $C_{abc}$ of the Lie algebra
$\cG$ and the coefficients $r_a$ such that
\be
\underline{\etap}\:\underline{K_{a}}\:\underline{\etap}^{-1}=
\sum_{b=1}^d\kappa_{ab}\underline{K_b}.
\label{sec4-kappa-exp1}
\ee
As a result, $\fU(\underline{\etap}\:\underline{K_{a}}\:
\underline{\etap}^{-1})=\sum_{b=1}^d\kappa_{ab}\:\fU(\underline{K_b})=
\sum_{b=1}^d\kappa_{ab}\,K_b$.
Inserting this relation in (\ref{sec4-ph-Lie-exp1}), we find
\be
\etap H\,\etap^{-1}=
\sum_{k=1}^n\sum_{b_1,b_2,\cdots,b_k=1}^d
\tilde\lambda_{b_1b_2\cdots b_k}K_{b_1}K_{b_2}\cdots K_{b_k},
\label{sec4-ph-Lie-exp2}
\ee
where $\tilde\lambda_{b_1b_2\cdots
b_k}:=\sum_{a_1,a_2,\cdots,a_k=1}^d
\lambda_{a_1,a_2,\cdots a_k}\kappa_{a_1b_1}\kappa_{a_2b_2}
\cdots\kappa_{a_k b_k}$.
Note that the coefficients $\tilde\lambda_{b_1b_2\cdots b_k}$ depend
on the parameters $r_a$ of the metric operator (\ref{sec4-Lie4}).
In view of (\ref{sec4-Lie1}) and (\ref{sec4-ph-Lie-exp2}), the
pseudo-Hermiticity relation $H^\dagger=\etap\,H\etap^{-1}$ takes the
form
\be
\sum_{k=1}^n\sum_{c_1,c_2,\cdots,c_{k-1},c_k=1}^d
(\lambda^*_{c_k c_{k-1} \cdots c_2c_1}-\tilde\lambda_{c_1c_2\cdots
c_{k-1}c_k})~K_{c_1}K_{c_2}\cdots K_{c_k}=0.
\label{sec4-main-eq}
\ee
We can use the commutation relations for the generators $K_a$,
namely $[K_a,K_b]=i\sum_{c=1}^d C_{abc}K_c$,
to reorder the factors $K_{c_1}K_{c_2}\cdots K_{c_k}$ and express
the left-hand side of (\ref{sec4-main-eq}) as a sum of linearly
independent operators. Consequently, the coefficients of this sum
must identically vanish. This yields a system of equations for
$r_a$. In general this system is over-determined and a solution
might not exist. However, there is a class of Hamiltonians of the
form (\ref{sec4-Lie1}) for which this system has solutions. In this
case, each solution determines a metric operator.
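The coefficients $\kappa_{ab}$ appearing in (\ref{sec4-kappa-exp1})
are easily obtained numerically: one conjugates each generator by
$\underline{\etap}$ and expands the result in the basis
$\{\underline{K_b}\}$ by solving a small linear system. The sketch
below does this for the two-dimensional standard representation of
$su(1,1)$ used in the next subsection, with arbitrarily chosen
parameters $r_a$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# su(1,1) generators in the 2x2 standard representation (cf. eq. (su1-1-st))
K = [0.5j * np.array([[0, 1], [1, 0]]),
     0.5j * np.array([[0, -1j], [1j, 0]]),
     0.5 * np.array([[1, 0], [0, -1]])]

r = np.array([0.3, -0.5, 0.8])                     # arbitrary parameters r_a
eta = expm(-sum(ra * Ka for ra, Ka in zip(r, K)))  # matrix rep. of eta_+
eta_inv = np.linalg.inv(eta)

# expand eta K_a eta^{-1} = sum_b kappa_{ab} K_b (least squares on flattened matrices)
basis = np.column_stack([Ka.ravel() for Ka in K])
kappa = np.zeros((3, 3), dtype=complex)
for a in range(3):
    rhs = (eta @ K[a] @ eta_inv).ravel()
    sol, *_ = np.linalg.lstsq(basis, rhs, rcond=None)
    kappa[a] = sol
    assert np.allclose(basis @ kappa[a], rhs)      # expansion is exact
\end{verbatim}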
For the particular case that $n=1$, so that
\be
H=\sum_{a=1}^d\lambda_a K_a,
\label{sec4-H-linear}
\ee
we may employ a more direct method of deriving the system of
equations for $r_a$. This is based on the observation that in this
case we can obtain a representation of the pseudo-Hermiticity
relation $H^\dagger=\etap H\etap^{-1}$ in $\cG_\C$, namely
\be
\underline{H^\maltese}=\underline{\etap}\;\underline{H}\;
\underline{\etap}^{-1},
\label{sec4-ph-matrix-form}
\ee
where\footnote{Unless $G$ is a unitary group, $\fU$ is not a
$*$-representation \cite{fell-doran}, and
$\underline{H^\maltese}\neq \underline{H}^\dagger$.}
\be
\underline{H}:=\sum_{a=1}^d\lambda_a\underline{K_a},~~~~~~~~~~~
\underline{H^\maltese}:=\sum_{a=1}^d\lambda^*_a\underline{K_a}.
\label{sec4-H-linear-under}
\ee
The matrix equation~(\ref{sec4-ph-matrix-form}) is equivalent to a
system of $d$ complex equations for $d$ real variables $r_a$.
Therefore, it is generally over-determined.
We can use the above Lie algebraic method to compute the equivalent
Hermitian Hamiltonian $h$ for the quasi-Hermitian Hamiltonians of
the form (\ref{sec4-Lie1}). In view of the definitions: $h:=\rho
H\rho^{-1}$ and $\rho:=\sqrt{\etap}=\exp\left(\sum_{a=1}^d
\frac{r_a}{2}\,K_a\right),$ $h$ is given by the right-hand side of
(\ref{sec4-kappa-exp1}) provided that we use $\frac{r_a}{2}$ in
place of $r_a$.
An alternative Lie algebraic approach of determining metric operator
and the equivalent Hermitian Hamiltonian is the following. First, we
use the argument leading to (\ref{sec4-ph-Lie-exp1}) to obtain
\be
h=\rho H\rho^{-1}=\sum_{k=1}^n
\sum_{a_1,a_2,\cdots,a_k=1}^d\lambda_{a_1,a_2,\cdots a_k}~
\fU(\underline{\rho}\:\underline{K_{a_1}}\:\underline{\rho}^{-1})\:
\fU(\underline{\rho}\:\underline{K_{a_2}}\:\underline{\rho}^{-1})\cdots
\fU(\underline{\rho}\:\underline{K_{a_k}}\:\underline{\rho}^{-1}),
\label{sec4-Hermitian-h}
\ee
where $\underline{\rho}:=\exp\left(\sum_{a=1}^d
\frac{r_a}{2}\,\underline{K_a}\right)$. Then, we evaluate
$\underline{\rho}\:\underline{K_{a}}\:\underline{\rho}^{-1}$ and
express it as a linear combination of $\underline{K_{a}}$ with
$r_a$-dependent coefficients.\footnote{This is possible, because
$\underline{\rho}\in\exp(\cG_{_\C})$.} Substituting the result in
(\ref{sec4-Hermitian-h}) and using the linearity of $\fU$ and
$K_a=\fU(\underline{K_a})$ give
$h=\sum_{k=1}^n\sum_{a_1,a_2,\cdots,a_k=1}^d
\varepsilon_{a_1,a_2,\cdots a_k}~K_{a_1}\, K_{a_2}\cdots K_{a_k}$,
where $\varepsilon_{a_1,a_2,\cdots a_k}$ are $r_a$-dependent complex
coefficients. In this approach, we obtain the desired system of
equations for $r_a$ by demanding that $h=h^\dagger$. This is the
route taken in \cite{quesne-jpa-2007}, where the Lie algebraic method
was originally used for the construction of the metric operators and
equivalent Hermitian Hamiltonians for a class of quasi-Hermitian
Hamiltonians of the linear form~(\ref{sec4-H-linear}) with
underlying $su(1,1)$ algebra. For an application of this approach to
Hamiltonians that are quadratic polynomials in generators of
$SU(1,1)$, see \cite{assis-fring-2008}.
\subsubsection{Swanson Model: $\cG=su(1,1)$}
In this section we explore the application of the Lie algebraic
method to construct metric operators for Swanson's Hamiltonian
\cite{swanson}:
\be
H=\hbar\omega(a^\dagger
a+\frac{1}{2})+\alpha\:a^2+\beta\:{a^\dagger}^2,
\label{sec4-swanson}
\ee
where $a:=\frac{\rx+i\rp}{\sqrt 2}$,
$\rx:=\sqrt{\frac{m\omega}{\hbar}}~x$,
$\rp:=\frac{p}{\sqrt{m\hbar\omega}}$, $\alpha,\beta,\omega,m$ are
real parameters, $m>0$, $\omega>0$, and $
\hbar^2\omega^2>4\alpha\beta$.
The latter condition ensures the reality and discreteness of the
spectrum of (\ref{sec4-swanson}).
The problem of finding metric operators for the
Hamiltonian~(\ref{sec4-swanson}) is addressed in
\cite{jones-2005,scholtz-geyer-2006,mgh-jpa-2007}. The use of the
properties of Lie algebras for solving this problem was originally
proposed in \cite{quesne-jpa-2007}.
Swanson's Hamiltonian (\ref{sec4-swanson}) is an example of a rather
trivial class of quasi-Hermitian Hamiltonians of the standard form
\be
H=\frac{[p+A(x)]^2}{2M}+v(x),
\label{sec9-HN-3}
\ee
where $A$ and $v$ are respectively a complex-valued vector potential
and a real-valued scalar potential, and $M\in\R^+$ is the mass. It
is easy to see that these Hamiltonians admit the $x$-dependent
metric operator: $\etap=\exp\left(-\frac{2}{\hbar}\int
dx~\Im[A(x)]\right)$.
This in turn yields the equivalent Hermitian Hamiltonian:
$h=\frac{1}{2M}\left(p+\Re[A(x)]\right)^2+v(x)$.
The subclass of the Hamiltonians (\ref{sec9-HN-3}) corresponding to
imaginary vector potentials has been considered in
\cite{ahmed-pla-2002}. The Swanson Hamiltonian~(\ref{sec4-swanson})
is a special case of the latter. It corresponds to the choice:
$M=\frac{m}{1-\tilde\alpha-\tilde\beta}$,
$A(x)=i\left(\frac{m\omega(\tilde\alpha-
\tilde\beta)}{1-\tilde\alpha-\tilde\beta}\right)x$, and
$v(x)=\frac{1}{2}\left(\frac{1-4\tilde\alpha
\tilde\beta}{1-\tilde\alpha-\tilde\beta}\right)m\omega^2x^2$, where
\be
\tilde\alpha:=\frac{\alpha}{\hbar\omega},~~~~~~
\tilde\beta:=\frac{\beta}{\hbar\omega}.
\label{sec4-dimensionless-swanson}
\ee
As shown in \cite{jones-2005,mgh-jpa-2007}, the
Hamiltonian~(\ref{sec4-swanson}) admits other exactly constructible
metric operators. The Lie algebraic method considered in this
section offers a systematic approach for constructing metric
operators for this Hamiltonian. In order to describe the details of
this construction we begin by recalling that the operators $a$ and
$a^\dagger$ are the usual harmonic oscillator annihilation and
creation operators that satisfy
\be
[a,a^\dagger]=1.
\label{boson-algebra}
\ee
A well-known consequence of this relation is the possibility of
constructing a unitary representation of the Lie algebra $su(1,1)$
using quadratic polynomials in $a$ and $a^\dagger$. To see this,
consider the Hermitian operators:
$K_1:=\frac{1}{4}(a^2+{a^\dagger}^2)$,
$K_2:=\frac{i}{4}(a^2-{a^\dagger}^2)$, and
$K_3:=\frac{1}{4}(aa^\dagger+a^\dagger a)=\frac{1}{2}(a^\dagger
a+\frac{1}{2})$ that act in $\cH:=L^2(\R)$, \cite{nova}. In view of
(\ref{boson-algebra}), they satisfy the $su(1,1)$ algebra:
$[K_1,K_2]=-iK_3$, $[K_2,K_3]=iK_1$, $[K_3,K_1]=iK_2$.
Clearly the Hamiltonian~(\ref{sec4-swanson}) can be expressed as a
linear combination of $K_1,K_2$ and $K_3$:
\be
H=2\left[(\alpha+\beta)K_1+i(\alpha-\beta)K_2+\hbar\omega\,K_3\right].
\label{sec4-swanson-2}
\ee
This relation identifies the Hamiltonian (\ref{sec4-swanson-2}) as a
special case of the Hamiltonians of the form (\ref{sec4-H-linear})
with $G=SU(1,1)$, $d=3$, $\lambda_1=2(\alpha+\beta)$,
$\lambda_2=2i(\alpha-\beta)$, and $\lambda_3=2\hbar\omega$. We can
further simplify (\ref{sec4-swanson-2}) by introducing non-Hermitian
generators $K_\pm:=K_1\pm iK_2$. In terms of these, we have
$H=2\left(\alpha K_+ +\beta K_- +\hbar\omega\,K_3\right)$.
In order to apply the above method of constructing a metric operator
for (\ref{sec4-swanson-2}), we need to find a set of generators
$\underline{K_a}$ of $SU(1,1)$ in its standard representation and a
faithful unitary representation $\fU$ of the Lie algebra $su(1,1)$
such that $K_a=\fU(\underline{K_a})$ for all $a\in\{1,2,3\}$. A
simple choice is
\be
\underline{K_1}:=\frac{i}{2}\,\mbox{\large$\sigma_1$}=
\frac{1}{2}\left(\begin{array}{cc}
0&i\\
i& 0\end{array}\right),~~~
\underline{K_2}:=\frac{i}{2}\,\mbox{\large$\sigma_2$}=
\frac{1}{2}\left(\begin{array}{cc}
0&1\\
-1& 0\end{array}\right),~~~
\underline{K_3}:=\frac{1}{2}\,\mbox{\large$\sigma_3$}=
\frac{1}{2}\left(\begin{array}{cc}
1&0\\
0&-1\end{array}\right),
\label{su1-1-st}
\ee
where $\mbox{\large$\sigma_1$}$, $\mbox{\large$\sigma_2$}$, and
$\mbox{\large$\sigma_3$}$ are the Pauli matrices. We also have
\be
\underline{K_+}:=\underline{K_1}+i\underline{K_2}=
\left(\begin{array}{cc}
0&i\\
0&0\end{array}\right),~~~~~~
\underline{K_-}:=\underline{K_1}-i\underline{K_2}=
\left(\begin{array}{cc}
0&0\\
i&0\end{array}\right),
\label{sec4-Kpm=}
\ee
that fulfill $\fU(\underline{K_\pm})=K_\pm$.
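It is straightforward to confirm that the matrices (\ref{su1-1-st})
and (\ref{sec4-Kpm=}) obey the same algebra as the operators $K_a$
and $K_\pm$; a short \texttt{numpy} check:
\begin{verbatim}
import numpy as np

K1 = 0.5j * np.array([[0, 1], [1, 0]])
K2 = 0.5j * np.array([[0, -1j], [1j, 0]])
K3 = 0.5 * np.array([[1, 0], [0, -1]])
comm = lambda A, B: A @ B - B @ A

assert np.allclose(comm(K1, K2), -1j * K3)     # [K_1, K_2] = -i K_3
assert np.allclose(comm(K2, K3), 1j * K1)      # [K_2, K_3] =  i K_1
assert np.allclose(comm(K3, K1), 1j * K2)      # [K_3, K_1] =  i K_2
assert np.allclose(K1 + 1j * K2, [[0, 1j], [0, 0]])   # K_+
assert np.allclose(K1 - 1j * K2, [[0, 0], [1j, 0]])   # K_-
\end{verbatim}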
Comparing~(\ref{su1-1-st}) and (\ref{sec4-Kpm=}), we see that it is
more convenient to work with the generators $K_\pm$ and $K_3$ rather
than $K_a$ with $a\in\{1,2,3\}$. This in particular suggests the
following alternative parametrization of the metric operator
(\ref{sec4-Lie4}).
\be
\etap=\exp(z\,K_+)\:\exp(2rK_3)\:\exp(z^*\,K_-),~~~~~~\mbox{with
$z\in\C, r\in\R$}.
\label{sec4-rep-etap}
\ee
Clearly,
\bea
\underline{\etap}&=&\exp(z\,\underline{K_+})\;
\exp(2r\underline{K_3})\;\exp(z^*\,\underline{K_-})=
\left(\begin{array}{cc}
e^r-e^{-r}|z|^2 & i e^{-r}z\\
ie^{-r}z^* & e^{-r}\end{array}\right),
\label{sec4-rep-etap-under}\\
\underline{\etap}^{-1}&=&\exp(-z^*\,\underline{K_-})\;
\exp(-2r\underline{K_3})\;\exp(-z\,\underline{K_+})=
\left(\begin{array}{cc}
e^{-r} & -i e^{-r}z\\
-ie^{-r}z^* & e^r-e^{-r}|z|^2\end{array}\right),
\label{sec4-rep-etap-under-inv}
\eea
where we have made use of (\ref{su1-1-st}) and (\ref{sec4-Kpm=}).
Also in view of (\ref{sec4-H-linear-under}),
(\ref{sec4-dimensionless-swanson}), (\ref{sec4-swanson-2}), and
$K_\pm:=K_1\pm iK_2=K_\mp^\dagger$, we have
\bea
\underline{H}&=&2\hbar\omega[\tilde\alpha\underline{K_+}+\tilde\beta
\underline{K_-}+\underline{K_3}]=\hbar\omega\left(\begin{array}{cc}
1 & 2i\tilde\alpha\\
2i\tilde\beta & -1\end{array}\right),
\label{sec4-H-under-swanson}\\
\underline{H^\maltese}&=&2\hbar\omega[\tilde\alpha\underline{K_-}+\tilde\beta
\underline{K_+}+\underline{K_3}]=\hbar\omega\left(\begin{array}{cc}
1 & 2i\tilde\beta\\
2i\tilde\alpha & -1\end{array}\right).
\label{sec4-Hp-under-swanson}
\eea
Next, we insert (\ref{sec4-rep-etap-under}) --
(\ref{sec4-Hp-under-swanson}) in (\ref{sec4-ph-matrix-form}). This
yields the following three independent complex equations that are
more conveniently expressed in terms of $s:=e^r$ and $w:=e^{-r}z$.
\bea
&&s(\tilde\alpha s-w)+\tilde\beta(w^2-1)+
s|w|^2[w+\tilde\alpha s(|w|^2-2)]=0,
\label{sec4-e1=z}\\
&&\tilde\beta-\tilde\alpha s^2+sw^*(1+\tilde\alpha s w^*)=0,
\label{sec4-e2=z}\\
&&\tilde\beta w+s w^*[w+\tilde\alpha s(|w|^2-1)]=0.
\label{sec4-e3=z}
\eea
To solve these equations, we rewrite (\ref{sec4-e3=z}) as:
$w+\tilde\alpha s(|w|^2-2)=-\tilde\alpha s-\frac{\tilde\beta
w}{sw^*}$, and use this relation in (\ref{sec4-e1=z}) to obtain
\be
\tilde\alpha s(1-|w|^2)=\frac{\tilde\beta}{s}+w.
\label{sec4-4=z}
\ee
Substituting this equation back into (\ref{sec4-e3=z}), we find
$\tilde\beta(w-w^*)=0$. Therefore, either $\tilde\beta=0$ or
$w\in\R$. It is easy to show using (\ref{sec4-e2=z}) that the
condition $\tilde\beta=0$ implies $w\in\R$ as well. Hence, $w$ is
real and (\ref{sec4-4=z}) reduces to a quadratic equation
whose solution is
\be
w=\frac{-1\pm\sqrt{4\tilde\alpha^2s^2+1-4\tilde\alpha\tilde\beta}}{2\tilde\alpha
s}.
\label{sec4-w=sol}
\ee
It turns out that (\ref{sec4-e1=z}) -- (\ref{sec4-e3=z}) do not
impose any further restriction on $s$. Therefore, (\ref{sec4-w=sol})
is the solution of the system (\ref{sec4-e1=z}) --
(\ref{sec4-e3=z}). In terms of the original parameters $r$ and $z$,
it reads
$z=(-1\pm\sqrt{4\tilde\alpha^2e^{2r}+1-4\tilde\alpha\tilde\beta})/
(2\tilde\alpha)
=(-\hbar\omega\pm\sqrt{4\alpha^2e^{2r}+\hbar^2\omega^2-4\alpha\beta})/(
2\alpha)$.
Substituting this formula in (\ref{sec4-rep-etap}), we find two
one-parameter families of metric operators for Swanson's
Hamiltonian.
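These two families are easily verified numerically: one builds
$\underline{\etap}$ from (\ref{sec4-rep-etap-under}), $\underline{H}$
and $\underline{H^\maltese}$ from (\ref{sec4-H-under-swanson}) and
(\ref{sec4-Hp-under-swanson}), and checks
(\ref{sec4-ph-matrix-form}). A minimal sketch, with arbitrary
admissible values of $\tilde\alpha$, $\tilde\beta$, and $r$, and with
$\hbar\omega$ set to $1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Kp = np.array([[0, 1j], [0, 0]])
Km = np.array([[0, 0], [1j, 0]])
K3 = 0.5 * np.array([[1, 0], [0, -1]])

ta, tb, r = 0.3, 0.2, 0.4          # arbitrary values with 4*ta*tb < 1
s = np.exp(r)
for sign in (+1, -1):              # the two one-parameter families
    w = (-1 + sign*np.sqrt(4*ta**2*s**2 + 1 - 4*ta*tb)) / (2*ta*s)  # eq. (sec4-w=sol)
    z = s * w
    eta = expm(z*Kp) @ expm(2*r*K3) @ expm(np.conj(z)*Km)  # eq. (sec4-rep-etap-under)

    H  = np.array([[1, 2j*ta], [2j*tb, -1]])               # eq. (sec4-H-under-swanson)
    Hm = np.array([[1, 2j*tb], [2j*ta, -1]])               # eq. (sec4-Hp-under-swanson)
    assert np.allclose(Hm, eta @ H @ np.linalg.inv(eta))   # eq. (sec4-ph-matrix-form)
\end{verbatim}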
\section{Systems Defined on a Complex Contour}
\label{sec-contour}
\subsection{Spectral problems defined on a contour}
\label{sec9-spec}
Consider the Schr\"odinger operator $-\frac{d^2}{dx^2}+V(x)$,
where $V:\R\to\C$ is a complex-valued piecewise real-analytic
potential. The study of the spectral problem for this operator and
its complex generalization, $-\frac{d^2}{dz^2}+V(z)$,
with $z$ taking values along a contour\footnote{Here by the term
`contour' we mean (the graph of) a piecewise smooth simple curve
that needs not be closed.} $\Gamma$ in $\C$, predates the discovery
of quantum mechanics.\footnote{Hermann Weyl's dissertation of 1909
provides a systematic approach to this problem. For a detailed
discussion of Weyl's results, see \cite[\S 10]{hille}.} The case of
polynomial potentials has been studied thoroughly in \cite{sibuya}.
For a more recent discussion, see \cite{shin-jpa-2005}.
The spectrum of $-\frac{d^2}{dz^2}+V(z)$ depends on the choice of
the contour $\Gamma$. In the case that $\Gamma$ visits the point at
infinity, the spectrum is essentially determined by the boundary
condition imposed on the solutions $\Psi:\Gamma\to\C$ of the
following eigenvalue equation at infinity.
\be
\left[-\frac{d^2}{dz^2}+V(z)\right]\Psi(z)=E\,\Psi(z).
\label{sec9-eg-va}
\ee
For an extended contour $\Gamma$, which is obtained by a continuous
invertible deformation of the real axis in $\C$, the eigenvalue
problem (\ref{sec9-eg-va}) is well-posed provided that we demand
$\Psi(z)$ to decay exponentially as $|z|\to\infty$ along $\Gamma$.
To make this condition more explicit, we identify $\Gamma$ with the
graph of a parameterized curve $\zeta:\R\to\C$ in $\C$, i.e.,
\be
\Gamma=\{\zeta(s)~|~s\in\R~\}.
\label{sec9-param}
\ee
The assumption that $\Gamma$ is simple implies that $\zeta$ is a
one-to-one function, and we can express the above-mentioned boundary
condition as
\be
|\Psi(\zeta(s))|\to 0~~~{\rm exponentially~as}~~~
s\to\pm\infty.
\label{sec9-BC}
\ee
A simple consequence of this condition is
\be
\int_{-\infty}^\infty |\Psi(\zeta(s))|^2 ds<\infty.
\label{sec9-BC-2}
\ee
If we view $\C$ as a Riemannian manifold, namely $\R^2$ endowed with
the Euclidean metric tensor, and consider $\Gamma$ as a submanifold
of this manifold, we can use the embedding map $\zeta:\R\to\C$ to
induce a metric tensor $(\fg)$ on $\Gamma$. The corresponding line
element is given by
$d\ell:=\sqrt{\fg}\,ds=\sqrt{dx(s)^2+dy(s)^2}=|\zeta'(s)|\,ds$,
where $x(s):=\Re(\zeta(s))$ and $y(s):=\Im(\zeta(s))$. Therefore,
the integral measure defined by $\fg$ on $\Gamma$ is the arc-length
element $|\zeta'(s)|\,ds$. This in turn suggests the following
parametrization-invariant definition of the $L^2$-inner product on
$\Gamma$.
\be
\pbr\Psi|\Phi\pkt:=\int_{-\infty}^\infty
\Psi(\zeta(s))^*\,\Phi(\zeta(s))\, |\zeta'(s)|\, ds.
\label{sec9-L2-inn}
\ee
If we identify $s$ with the arc-length parameter, for which
$|\zeta'(s)|=1$, and let
$L^2(\Gamma):=\left\{\Psi:\Gamma\to\C~\Big|~\pbr\Psi|\Psi\pkt
<\infty~\right\}$,
we can express (\ref{sec9-BC-2}) as
\be
\Psi\in L^2(\Gamma).
\label{sec9=sq-int}
\ee
This shows that the boundary condition (\ref{sec9-BC}) implies the
square-integrability condition (\ref{sec9=sq-int}) along $\Gamma$.
The converse is not generally true; the spectrum defined by the
boundary condition (\ref{sec9-BC}) is a subset of the point
spectrum\footnote{By definition, the spectrum $\sigma(A)$ of an
operator $A$ acting in a Banach space is the set of complex numbers
$E$ for which the operator $A-E I$ is not invertible, i.e., one or
more of the following conditions hold: (1) $A-E I$ is not
one-to-one; (2) $A-E I$ is not onto; (3) $A-E I$ is one-to-one so
that it has an inverse, but the inverse is not a bounded operator
\cite{hislop-sigal}. The \emph{point spectrum} of $A$ is the subset
of $\sigma(A)$ consisting of the eigenvalues $E$ of $A$, i.e., the
numbers $E$ for which $A-E I$ is not one-to-one \cite{reed-simon}.}
of the operator $-\frac{d^2}{dz^2}+V(z)$ viewed as acting in the
Hilbert space $L^2(\Gamma)$. In the following we shall use the term
spectrum to mean the subset of the point spectrum that is defined by
the boundary condition~(\ref{sec9-BC}).
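For a concretely parameterized contour, the inner product
(\ref{sec9-L2-inn}) is easy to evaluate numerically. The sketch below
does this for the arbitrarily chosen deformation
$\zeta(s)=s+i\tanh(s)$ of the real axis and two sample functions; it
merely illustrates the role of the arc-length measure
$|\zeta'(s)|\,ds$.
\begin{verbatim}
import numpy as np

s = np.linspace(-15, 15, 6001)
zeta = s + 1j * np.tanh(s)                  # an arbitrary contour Gamma
dzeta = np.gradient(zeta, s)                # zeta'(s)

Psi = lambda z: np.exp(-z**2)               # sample functions on Gamma
Phi = lambda z: z * np.exp(-z**2 / 2)

integrand = np.conj(Psi(zeta)) * Phi(zeta) * np.abs(dzeta)
inner = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s))  # trapezoidal rule
print(inner)
\end{verbatim}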
In order to demonstrate the importance of the choice of the contour
in dealing with the spectral problem (\ref{sec9-eg-va}), consider
the imaginary cubic potential $V(z)=iz^3$. As we mentioned in
Section~\ref{sec1}, the spectrum defined by the boundary
condition~(\ref{sec9-BC}) along the real axis ($\zeta(s)=s$) is
discrete, real, and positive \cite{dorey,shin}. But, the spectrum
defined by the same boundary condition along the imaginary axis is
empty. To see this, we parameterize the imaginary axis according to
$z=\zeta(s)=is$ with $s\in\R$. Then the operator
$-\frac{d^2}{dz^2}+iz^3$ takes the form $\frac{d^2}{ds^2}+s^3$, and
we can respectively express the eigenvalue equation
(\ref{sec9-eg-va}) and the boundary condition (\ref{sec9-BC}) as
\be
\left[\frac{d^2}{ds^2}+s^3\right]\psi(s)=E\,\psi(s),
\label{sec9-eg-va-2}
\ee
and
\be
|\psi(s)|\to 0~~~{\rm exponentially~as}~~~s\to\pm\infty,
\label{sec9-eg-va-2-boundary}
\ee
where $\psi(s):=\Psi(is)$ for all $s\in\R$.\footnote{This was
pointed out to me by Prof.~Yavuz Nutku.} But, it is well-known that
(\ref{sec9-eg-va-2}) does not have any solution fulfilling
(\ref{sec9-eg-va-2-boundary}) for either real or complex values of
$E$.\footnote{The point spectrum of $\frac{d^2}{ds^2}+s^3$ is $\C$,
i.e., (\ref{sec9-eg-va-2}) admits square-integrable solutions for
all $E\in\C$ (This was pointed out to me by Prof.\ Patrick Dorey.)
These solutions do not however satisfy
(\ref{sec9-eg-va-2-boundary}). They do not represent physically
acceptable bound states, because they do not belong to the domain of
the observables such as position, momentum or some of their powers.}
The imaginary cubic potential belongs to the class of potentials of
the form
\be
V_\nu(x)=\lambda\:x^2(ix)^\nu,~~~~~~\nu\in\R,~\lambda\in\R^+.
\label{sec9-x-nu}
\ee
As shown in \cite{dorey,shin}, for $\nu\geq 0$ these potentials
share the spectral properties of the imaginary cubic potential, if
we impose the boundary condition (\ref{sec9-BC}) along a contour
$\Gamma_\nu$ that lies asymptotically in the union of the Stokes
wedges \cite{bender-prl}:
\be
S_\nu^\pm:=\Big\{r\,e^{-i(\theta_\nu^\pm+
\varphi)}~\Big|~r\in[0,\infty),~\varphi\in(
-\delta_\nu,\delta_\nu)\:
\Big\},
\label{sec9-stokes}
\ee
where
\be
\theta_\nu^+:=\frac{\pi\nu}{2(\nu+4)}=:\theta_\nu,~~~~~~
\theta_\nu^-:=\pi-\theta_\nu,~~~~~~
\delta_\nu:=\frac{\pi}{\nu+4}.
\label{sec9-thetas}
\ee
\begin{figure}
\vspace{0.0cm} \hspace{0.00cm}
\centerline{\includegraphics[scale=.75,clip]{fig1.eps}}
\centerline{\parbox{16cm}{\caption{A contour $\Gamma_\nu$ (the solid curve) lying asymptotically in the
union of the Stokes wedges (the grey region). The dashed lines
are the bisectors of the Stokes wedges $S^\pm_\nu$. The angle
$\theta_\nu$ between the positive real axis and the bisector of
$S^+_\nu$ is also depicted.}}}
\label{fig1}
\vspace{0.0cm}
\end{figure}%
Here by asymptotic inclusion of $\Gamma_\nu$ in $S_\nu^-\cup
S_\nu^+$, we mean that if $\Gamma_\nu=\{\zeta_\nu(s)|s\in\R\}$ for a
piecewise smooth one-to-one function $\zeta_\nu:\R\to\C$, then there
must exist a positive integer $M$ such that for all $s\in\R$ the
condition $\pm s>M$ implies $\zeta_\nu(s)\in S_\nu^\pm$. Figure~1 shows
the Stokes wedges and a typical contour lying in $S_\nu^-\cup S_\nu^+$
asymptotically.
For $\nu=2$, (\ref{sec9-x-nu}) gives the wrong-sign quartic
potential,
\be
V_2(x)=-\lambda\,x^4,~~~~~~~\lambda\in\R^+,
\label{sec9-quartic}
\ee
whose spectrum, defined by (\ref{sec9-BC}) along the real axis, is
known to be empty. Setting $\nu=2$ in (\ref{sec9-stokes}) and
(\ref{sec9-thetas}), we have
$S_2^-=\Big\{r\,e^{i\theta}~\Big|~r\in[0,\infty),~
\theta\in(-\pi,-\frac{2\pi}{3})\:\Big\}$ and
$S_2^+=\Big\{r\,e^{i\theta}~\Big|~r\in[0,\infty),~
\theta\in(-\frac{\pi}{3},0)\:\Big\}$. Therefore the condition that
$\Gamma_2$ must lie asymptotically inside $S_2^-\cup S_2^+$ excludes
the real axis as a possible choice for $\Gamma_2$. It is not
difficult to see that the same holds for all $\nu\geq 2$.
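
For concreteness, the opening angles appearing in (\ref{sec9-stokes})
and (\ref{sec9-thetas}) are easily tabulated. The following minimal
Python sketch (the sampled values of $\nu$ are illustrative) prints the
range of polar angles covered by $S_\nu^+$; its upper edge,
$-\theta_\nu+\delta_\nu=\frac{\pi(2-\nu)}{2(\nu+4)}$, vanishes at
$\nu=2$, which is another way of seeing that the positive real axis
drops out of $S_\nu^+$ for $\nu\geq 2$.
\begin{verbatim}
import numpy as np

# Polar-angle range covered by the Stokes wedge S_nu^+ of (sec9-stokes):
# (-theta_nu - delta_nu, -theta_nu + delta_nu).  Its upper edge equals
# pi*(2 - nu)/(2*(nu + 4)) and is non-positive for nu >= 2, so the
# positive real axis is then excluded from S_nu^+.
for nu in (0.0, 1.0, 2.0, 3.0):          # illustrative sample values of nu
    theta = np.pi*nu/(2*(nu + 4))
    delta = np.pi/(nu + 4)
    print(nu, np.degrees(-theta - delta), np.degrees(-theta + delta))
\end{verbatim}
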
\subsection{Equivalent spectral problems defined on $\R$}
\label{sec9-equiv}
The fact that the spectrum of the potentials (\ref{sec9-x-nu})
defined by the above mentioned boundary condition along $\Gamma_\nu$
is discrete, real and positive is by no means obvious, and its proof
is quite complicated \cite{dorey,shin}. In this subsection we will
outline a transformation scheme that maps the spectral problem for
these and similar potentials to an equivalent spectral problem that
is defined on the real line \cite{jpa-2005a}. This scheme provides
an intuitive understanding of the spectral properties of the
potentials (\ref{sec9-x-nu}) and in particular allows for a
straightforward treatment of the wrong-sign quartic potential
(\ref{sec9-quartic}) that we shall consider in the following
subsection.
Given an extended contour $\Gamma$, we can use $x:=\Re(z)$ to
parameterize it. We do this by setting $\Gamma=\{\zeta(x)|x\in\R\}$
where $\zeta(x):=x+if(x)$ for all $x\in\R$, and $f:\R\to\R$ is a
piecewise smooth function. This implies that along $\Gamma$,
$dz=d\zeta(x)=[1+if'(x)]dx$ and the eigenvalue
equation~(\ref{sec9-eg-va}) takes the form
\be
\left[-g(x)^2\frac{d^2}{dx^2}+ig(x)^3f''(x)
\frac{d}{dx}+\tilde v(x)\right]\tilde\psi(x)=E\,\tilde\psi(x),
\label{sec9-eg-va-x}
\ee
where for all $x\in\R$,
\be
g(x):=\frac{1}{1+if'(x)},~~~~\tilde{v}(x):=V(x+if(x)),~~~~
\tilde\psi(x):=\Psi(x+if(x)),
\label{sec9-trans-0}
\ee
and a prime stands for the derivative of the corresponding function.
Next, we examine the consequences of using the arc-length
parametrization of $\Gamma$. If we define $F:\R\to\R$ by $F(x):=
\int_0^x \sqrt{1+f'(u)^2}\,du$, we can express the arc-length
parameter along $\Gamma$, which we denote by $\rx$, as $\rx:=F(x)$.
Under the transformation $x\to\rx$, the eigenvalue equation
(\ref{sec9-eg-va-x}) takes the form
\be
e^{-2i\xi({\rm x})} \left[-\frac{d^2}{d\rx^2}+ia(\rx)
\frac{d}{d\rx}+v(\rx)\right]\psi(\rx)=E\,\psi(\rx),
\label{sec9-eg-va-X}
\ee
where\footnote{Because $F$ is a monotonically increasing function,
it is one-to-one. In particular, it has an inverse, which we denote by
$F^{-1}$.}
\bea
\xi(\rx)&:=&\tan^{-1}(f'(x))
\Big|_{x=F^{-1}({\rm x})},
~~~~~~~~
a(\rx):=\xi'(\rx)=\left. \frac{f''(x)}{[1+f'(x)^2]^{\frac{3}{2}}}
\:\right|_{x=F^{-1}({\rm x})},
\label{sec9-trans1}\\
v(\rx)&:=&e^{2i\xi({\rm x})}\tilde v(F^{-1}(\rx)),~~~~~~~~~~~
\psi(\rx):=\tilde\psi(F^{-1}(\rx)).
\label{sec9-trans}
\eea
The arc-length parametrization of $\Gamma$ is achieved by the
function $G:\R\to\Gamma$ defined by
$G(\rx):=F^{-1}(\rx)+if(F^{-1}(\rx))$ for all $\rx\in\R$.
This is the invertible function that maps the real line $\R$ onto
the contour $\Gamma$ in such a way that the (Euclidean) distance is
preserved, i.e., if $\rx_1,\rx_2\in\R$ are respectively mapped to
$z_1:=G(\rx_1)$ and $z_2:=G(\rx_2)$, then the length of the segment
of $\Gamma$ that lies between $z_1$ and $z_2$ is given by
$|\rx_1-\rx_2|$. In other words, $G:\R\to\Gamma$ is an isometry. In
light of (\ref{sec9-trans-0}) and (\ref{sec9-trans}), $G$ relates
the solutions $\Psi$ and $\psi$ of the eigenvalue equations
(\ref{sec9-eg-va}) and (\ref{sec9-eg-va-X}) according to
$\psi(\rx)=\Psi(G(\rx))$.
We can use $G$ to express $v$ in terms of the potential $V$
directly:
\be
v(\rx)=e^{2i\xi({\rm x})}V(G(\rx)).
\label{sec9-ceq0}
\ee
Furthermore, recalling that the arc-length parametrization of
$\Gamma$ corresponds to identifying $s$ and $\zeta(s)$ of
(\ref{sec9-param}) -- (\ref{sec9-BC-2}) respectively with $\rx$ and
$G(\rx)$, we have
\be
\pbr\Psi|\Psi\pkt=
\int_{-\infty}^\infty |\Psi(G(\rx))|^2 d\rx=
\int_{-\infty}^\infty |\psi(\rx)|^2 d\rx=\br\psi|\psi\kt.
\label{sec9-BC-4}
\ee
This observation has two important consequences. Firstly, it implies
that $\Psi\in L^2(\Gamma)$ if and only if $\psi\in L^2(\R)$.
Secondly, it allows for the introduction of an induced unitary
operator $G_*:L^2(\Gamma)\to L^2(\R)$, namely
$G_*(\Psi):=\psi$ if $\psi(\rx)=\Psi(G(\rx))$
for all $\rx\in\R$.\footnote{(\ref{sec9-BC-4}) shows that $G_*$ preserves the
norm. This is sufficient to conclude that it is a unitary operator,
for in an inner product space the norm uniquely determines the inner
product \cite[\S 6.1]{kato}.}
The above constructions show that a pseudo-Hermitian quantum system
that is defined by a Hamiltonian operator of the form
$-\frac{d^2}{dz^2}+V(z)$ acting in the reference Hilbert space
$L^2(\Gamma)$ is unitary-equivalent to the one defined by the
Hamiltonian operator
\be
H=e^{-2i\xi({\rm x})} \left[-\frac{d^2}{d\rx^2}+ia(\rx)
\frac{d}{d\rx}+v(\rx)\right]
\label{sec9-H-R}
\ee
that is defined in the reference Hilbert space $L^2(\R)$. In terms
of the unitary operator $G_*$, we have
$-\frac{d^2}{dz^2}+V(z)=G_*^{-1}H\,G_*$.
A particularly simple choice for a contour is a wedge-shaped
contour: $\Gamma^{(\theta)}:=\{ x+if(x)|x\in\R\}$ where $f(x):=-\tan
\theta \:|x|$ and $\theta\in[0,\mbox{$\frac{\pi}{2}$})$.
A typical example is the contour obtained by adjoining the bisectors
of the Stokes wedges (the dashed lines in Figure~1.) For such a
contour,
\bea
&&\rx=F(x)=\sec\theta \:x,~~~~~
x=F^{-1}(\rx)=\cos\theta\:\rx,
\label{sec9-ceq1}\\
&&G(\rx)=\cos\theta\:\rx-i\sin \theta\:|\rx|=\left\{
\begin{array}{ccc}
e^{i\theta}\,\rx&{\rm for}&\rx<0,\\
e^{-i\theta}\,\rx&{\rm for}&\rx\geq 0,\end{array}\right.
\label{sec9-ceq2}
\eea
and in view of (\ref{sec9-trans1}), (\ref{sec9-trans}), and
(\ref{sec9-ceq0}) the Hamiltonian operator (\ref{sec9-H-R}) takes
the form
\be
H=-e^{2i\theta\,\sg({\rm x})} \left[\frac{d^2}{d\rx^2}+2i\theta\,
\delta(\rx)\frac{d}{d\rx}\right]+
V(\cos\theta\:\rx-i\sin \theta\:|\rx|).
\label{sec9-H-we}
\ee
The presence of the delta-function in (\ref{sec9-H-we}) has its root in
the non-differentiability of $\Gamma^{(\theta)}$ at the origin. One
can smooth out $\Gamma^{(\theta)}$ in an arbitrarily small open
neighborhood of the origin and show that this delta-function
singularity amounts to the imposition of a particular matching
condition at $\rx=0$ for the solutions of the corresponding
eigenvalue problem. As shown in \cite{jpa-2005a}, these are given by
\be
\psi(0^+)=\psi(0)=\psi(0^-),~~~~~~~
e^{-2i\theta}\psi'(0^-)=e^{2i\theta}\psi'(0^+),
\label{sec9-match}
\ee
where for every function $\phi:\R\to\C$, $\phi(0^\pm):= \lim_{{\rm
x}\to 0^\pm}\phi(\rx)$.
In view of (\ref{sec9-H-we}), we can express the eigenvalue equation
for $H$ in the form
\be
H_\pm \psi_\pm(\rx)=E\psi_\pm(\rx),~~~~~{\rm for}~~~~~\rx\in\R^\pm,
\label{sec9-eg-va-pm}
\ee
where
\be
H_\pm:=-e^{\pm 2i\theta} \frac{d^2}{d\rx^2}+
V(e^{\mp i\theta}\,\rx),
\label{sec9-ceq4}
\ee
and $\psi_-:(-\infty,0]\to\C$, $\psi_+:[0,\infty)\to\C$ are defined
by $\psi_\pm(\rx):=\psi(\rx)$ for all $\rx\in\R^\pm$, $\psi_\pm(0):=
\psi(0^\pm)$, and $\psi'_\pm(0):=\psi'(0^\pm)$.
In summary, the eigenvalue problem for the Schr\"odinger operator
$-\frac{d^2}{dz^2}+V(z)$ that is defined by the boundary condition
(\ref{sec9-BC}) along $\Gamma^{(\theta)}$ is equivalent to finding a
pair of functions $\psi_\pm$ satisfying
\bea
&&-e^{\pm 2i\theta} \psi_\pm''(\rx)+
V(e^{\mp i\theta}\,\rx)\:\psi_\pm(\rx)=E\:\psi_\pm(\rx)
~~~~~{\rm for}~~~~~\rx\in\R^\pm,
\label{sec9-eg-va-1}\\
&&\psi_-(0)=\psi_+(0),~~~~~~~~~~~~~~~~~~
e^{-2i\theta}\psi_-'(0)=e^{2i\theta}\psi_+'(0),
\label{sec9-eg-va-3}\\
&& \psi_\pm(\rx)\to 0~~~{\rm
exponentially~as}~~~\rx\to\pm\infty.
\label{sec9-BS-pm}
\eea
To elucidate the practical advantage of this formulation, we explore
its application to the potentials $V_\nu(z)=\lambda\,z^2(iz)^\nu$
with $\lambda\in\R^+$.
As we explained in the preceding subsection, we need to choose a
contour that belongs to the union of the Stokes wedges $S^\pm_\nu$
asymptotically. We shall choose the wedge-shaped contour
$\Gamma^{(\theta_\nu)}$ that consists of the bisectors of
$S^\pm_\nu$. Setting $V=V_\nu$ and $\theta=\theta_\nu$ in
(\ref{sec9-ceq4}) and using (\ref{sec9-thetas}), we find the
following remarkable result.
\be
H_\pm=e^{\pm 2i\theta_\nu}
\left[-\frac{d^2}{d\rx^2}+\lambda\,|\rx|^{\nu+2}\right].
\label{sec9-ceq5}
\ee
Similarly, (\ref{sec9-eg-va-1}) becomes
\be
-\psi_\pm''(\rx)+\lambda\,|\rx|^{\nu+2}\psi_\pm(\rx)=
E\,e^{\mp 2i\theta_\nu}\psi_\pm(\rx)
~~~~~{\rm for}~~~~~\rx\in\R^\pm.
\label{sec9-ceq5-1}
\ee
The appearance of the real confining potential
$\lambda\,|\rx|^{\nu+2}$ in (\ref{sec9-ceq5}) and
(\ref{sec9-ceq5-1}) allows for an alternative proof of the
discreteness of the spectrum of the potentials $z^2(iz)^\nu$. See
\cite[Appendix]{jpa-2005a} for details.\footnote{It would be
interesting to see if this approach can lead to an alternative proof
of the reality of the spectrum.}
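
As a rough numerical illustration of this point, one may discretize a
Schr\"odinger operator with the confining potential
$\lambda\,|\rx|^{\nu+2}$ on a truncated grid. The following minimal
Python sketch (the box width, grid size, and the values $\nu=1$,
$\lambda=1$ are illustrative choices) produces a handful of
well-separated positive eigenvalues. It does not solve the full problem
(\ref{sec9-eg-va-1}) -- (\ref{sec9-BS-pm}), which also involves the
phases $e^{\pm 2i\theta_\nu}$ and the matching conditions
(\ref{sec9-eg-va-3}); it merely displays the confining character of the
potential that underlies the discreteness argument.
\begin{verbatim}
import numpy as np

nu, lam = 1.0, 1.0     # illustrative values; nu = 1 is the imaginary cubic case
L, N = 12.0, 2000      # half-width of the truncation box and number of grid points
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# Finite-difference matrix of -d^2/dx^2 + lam*|x|**(nu + 2) with Dirichlet walls
diag = 2.0/h**2 + lam*np.abs(x)**(nu + 2)
off  = -np.ones(N - 1)/h**2
ev = np.linalg.eigvalsh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
print(ev[:5])          # a few well-separated positive eigenvalues
\end{verbatim}
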
\subsection{Wrong-sign quartic potential}
\label{sec-quartic}
In the preceding section we showed how one can transform the
spectral problem for a potential defined along a complex contour to
one defined along $\R$. The form of the transformed Hamiltonian
operator depends on the choice of the contour and its
parametrization. This raises the natural question whether one can
choose an appropriate parameterized contour so that the transformed
Hamiltonian admits an easily constructible metric operator. The
wrong-sign quartic potential $V_2(z)=-\lambda\,z^4$, with
$\lambda\in\R^+$, is a remarkable example for which the answer to
this question is in the affirmative.
Let $\Gamma_2=\{\zeta(s)|s\in\R\}$ be the contour defined by
\cite{jm}
\be
\zeta(s):=-2i\sqrt{1+is},~~~~~~~\mbox{for all $s\in\R$}.
\label{sec9-zeta=}
\ee
If we parameterize $\Gamma_2$ using $x=\Re(\zeta(s))$, we find
$\Gamma_2=\{x+if(x)|x\in\R\}$ where $f$ is given by
$f(x):=-\sqrt{x^2+4}$ for all $x\in\R$.
This shows that $\Gamma_2$ is (a branch of) a hyperbola with asymptotes
$\ell_\pm:=\{ \pm r\, e^{\mp i\frac{\pi}{4}}|r\in\R^+\}$. Because
$\ell_\pm$ lies in the Stokes wedge $S_2^\pm$, (\ref{sec9-zeta=})
defines an admissible contour for the potential $V_2$.
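
These statements are easy to check numerically. The following Python
sketch (with arbitrarily chosen sample values of $s$) verifies that the
points $\zeta(s)$ satisfy $\Im\zeta=-\sqrt{(\Re\zeta)^2+4}$ and that
$\arg\zeta(s)$ tends to $-\pi/4$ and $-3\pi/4$ as $s\to+\infty$ and
$s\to-\infty$, respectively, so that $\Gamma_2$ indeed enters $S_2^\pm$
asymptotically.
\begin{verbatim}
import numpy as np

def zeta(s):
    return -2j*np.sqrt(1 + 1j*s)    # principal branch of the square root

s = np.array([-1e4, -10.0, -1.0, 0.0, 1.0, 10.0, 1e4])  # sample points
z = zeta(s)

# On the curve: Im(z) = -sqrt(Re(z)**2 + 4)
print(np.allclose(z.imag, -np.sqrt(z.real**2 + 4)))     # True

# Asymptotic directions: arg(z) -> -3*pi/4 as s -> -inf, -pi/4 as s -> +inf
print(np.degrees(np.angle(z[0])), np.degrees(np.angle(z[-1])))
\end{verbatim}
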
If we perform the change of variable $z\to\zeta(s)$, we can express
the eigenvalue equation:
$\left[-\frac{d^2}{dz^2}-\lambda\,z^4\right]\Psi(z)=E\Psi(z)$,
in the form
\be
\left[-(1+is)\frac{d^2}{ds^2}-\frac{i}{2}\,\frac{d}{ds}-16
\lambda\,(1+is)^2\right]\phi(s)=E\,\phi(s),
\label{sec-9-H-quartic-0}
\ee
where $\phi(s):=\Psi(\zeta(s))$. We can identify $s$ with a usual
(Hermitian) position operator, introduce the corresponding wave
number operator $\fK$ as $\br s|\fK:=-i\frac{d}{ds}\br s|$, and
express (\ref{sec-9-H-quartic-0}) as the eigenvalue equation for the
Hamiltonian
\be
H:=(1+is)\fK^2+\frac{\fK}{2}-16\lambda\,(1+is)^2
\label{sec-9-H-quartic-1}
\ee
that acts in $L^2(\R)$.
As shown in \cite{jm}, the application of the perturbative scheme of
Subsection~\ref{sec-pert} yields the following exact expressions for
a metric operator and the corresponding equivalent Hermitian
Hamiltonian, respectively.
\bea
\eta_+&=&\exp\left(\frac{ \fK^3}{48\lambda}-2
\fK\right),
\label{sec-9-metric=}\\
h&=&
\frac{ \fK^4}{64\lambda}-\frac{\fK}{2}+16\lambda\,s^2.
\label{sec-9-equi-h=}
\eea
We shall offer an alternative derivation of these formulas
momentarily.
Because $h$ is a Hermitian operator that is isospectral to $H$, the
spectrum of $H$ and consequently the operator
$-\frac{d^2}{dz^2}-\lambda\,z^4$ defined along $\Gamma_2$, is real.
It is also easy to show that the common spectrum of all these
operators is positive and discrete. To see this, we express $h$ in
its $\fK$-representation, where the eigenvalue equation $h\Phi=E\Phi$
reads
\be
\left[-16\lambda\,\frac{d^2}{d\fK^2}+
\frac{\fK^4}{64\lambda}-\frac{\fK}{2}\right]\tilde\Phi(\fK)=E
\,\tilde\Phi(\fK),
\label{sec9-fourier}
\ee
and
$\tilde\Phi(\fK):=\br\fK|\Phi\kt=(2\pi)^{-1/2}\int_{-\infty}^\infty
ds\: e^{-i\fK s}\Phi(s)$. The operator in the square bracket in
(\ref{sec9-fourier}) is a Schr\"odinger operator with a confining
quartic polynomial potential. Therefore its spectrum is positive and
discrete \cite{messiah}.
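
The spectral problem (\ref{sec9-fourier}) is also well suited for a
direct numerical treatment. The following minimal finite-difference
sketch in Python (with an illustrative value of $\lambda$ and crude
discretization parameters) approximates the low-lying eigenvalues of
the operator in the square bracket of (\ref{sec9-fourier}), and hence
of $-\frac{d^2}{dz^2}-\lambda\,z^4$ defined along $\Gamma_2$; the
printed values are positive and well separated.
\begin{verbatim}
import numpy as np

lam = 0.25             # illustrative value of lambda
L, N = 15.0, 1500      # truncation box and grid size for the K variable
K = np.linspace(-L, L, N)
h = K[1] - K[0]

V = K**4/(64*lam) - K/2
diag = 32*lam/h**2 + V   # finite-difference form of -16*lam*d^2/dK^2 + V(K)
off  = -16*lam*np.ones(N - 1)/h**2
ev = np.linalg.eigvalsh(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))
print(ev[:5])            # low-lying eigenvalues: positive and discrete
\end{verbatim}
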
We can also use the same approach to treat the quartic anharmonic
oscillator, $V(z)=\omega^2z^2-\lambda z^4$. Using the
parametrization (\ref{sec9-zeta=}), we find the following
generalizations of (\ref{sec-9-H-quartic-1}) --
(\ref{sec9-fourier}).
\bea
&&H:=(1+is)\fK^2+\frac{\fK}{2}-16\lambda\,(1+is)^2-4\omega^2(1+is)
\label{sec-9-H-quartic-1-an}\\
&&\eta_+=\exp\left[\frac{ \fK^3}{48\lambda}-(2+\frac{
\omega^2}{4\lambda})\fK\right],
\label{sec-9-metric=an}\\
&&h=
\frac{(\fK^2-4\omega^2)^2}{64\lambda}-\frac{\fK}{2}+16\lambda\,s^2.
\label{sec-9-equi-h=an}\\
&&\left[-16\lambda\,\frac{d^2}{d\fK^2}+
\frac{(\fK^2-4\omega^2)^2}{64\lambda}-
\frac{\fK}{2}\right]\tilde\Phi(\fK)=E
\,\tilde\Phi(\fK).
\label{sec9-fourier-an}
\eea
Therefore, by scaling the eigenvalues according to $E\to\gamma E$
where $\gamma:=1/(16\lambda)$, we can identify the spectrum of the
operator $-\frac{d^2}{dz^2}+\omega^2z^2-\lambda z^4$ (defined along
$\Gamma_2$) with that of the operator
\be
-\frac{d^2}{dx^2}+
\frac{\gamma}{4}\left[\gamma(x^2-4\omega^2)^2-2x\right]
\label{sec9-linear-4-pot}
\ee
that acts in $L^2(\R)$, \cite{jm}. This observation has previously
been made in \cite{buslaev-grecchi}. As discussed in
\cite{andrianov-2007}, the approach of \cite{andrianov-1982} also
leads to the same conclusion.
Next, we outline an alternative and more straightforward method of
constructing a metric operator for the Hamiltonian
(\ref{sec-9-H-quartic-1-an}).
If we separate the Hermitian and anti-Hermitian parts of $H$, we
find
\be
H=16\lambda\,s^2+\fK^2-\frac{\fK}{2}-(16\lambda+4\omega^2)+
\frac{i}{2}\big\{s\,,\,\fK^2-4\omega^2-32\lambda\big\}.
\label{sec9-H-find-eta0}
\ee
We can combine the first and last terms on the right-hand side of
this equation to express $H$ in the form
\be
H=16\lambda\left[s+
\frac{i(\fK^2-4\omega^2-32\lambda)}{32\lambda}\right]^2
+\frac{(\fK^2-4\omega^2)^2}{64\lambda}-\frac{\fK}{2}.
\label{sec9-H-find-eta1}
\ee
As seen from this relation, the term responsible for the
non-Hermiticity of $H$ may be removed by a translation of $s$,
namely
\be
s\to s-\frac{i(\fK^2-4\omega^2-32\lambda)}{32\lambda}.
\label{sec9-sim-zero}
\ee
It is not difficult to see that such a translation is effected by a
$\fK$-dependent similarity transformation of the form $s\to
e^{g(\fK)}s\,e^{-g(\fK)}$. Recalling that for any analytic function
$g:\R\to\R$,
\be
e^{g(\fK)}s\,e^{-g(\fK)}=s-ig'(\fK),
\label{sec9-translation}
\ee
and comparing this equation with (\ref{sec9-sim-zero}), we find
\be
g(\fK)=\frac{\fK^3}{96\lambda}-\left(1+\frac{\omega^2}{8\lambda}\right)\fK+c.
\label{sec9-f=}
\ee
Here $c$ is an integration constant that we can set to zero without
loss of generality. In view of (\ref{sec9-translation}), we can map
$H$ to a Hermitian Hamiltonian $h$ according to
\be
H\to h:=e^{g(\fK)}H\,e^{-g(\fK)}=
16\lambda\,s^2+
\frac{(\fK^2-4\omega^2)^2}{64\lambda}-\frac{\fK}{2}.
\label{sec9-h=derive}
\ee
This is precisely the equivalent Hermitian Hamiltonian given by
(\ref{sec-9-equi-h=an}). Moreover, comparing (\ref{sec9-h=derive})
with the defining relation for the equivalent Hermitian Hamiltonian,
namely $h:=\rho\,H\rho^{-1}$, and recalling that
$\rho:=\sqrt\eta_+$, where $\eta_+$ is a metric operator associated
with the Hamiltonian $H$, we find $\eta_+=e^{2g(\fK)}$. In light of
(\ref{sec9-f=}), this coincides with the metric operator given by
(\ref{sec-9-metric=an}).
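
The key algebraic ingredient of this construction, namely the
translation identity (\ref{sec9-translation}) with $g$ given by
(\ref{sec9-f=}), can also be checked symbolically. In the following
minimal sympy sketch we work in the $\fK$-representation, in which,
with the Fourier convention
$\tilde\Phi(\fK)=(2\pi)^{-1/2}\int_{-\infty}^\infty ds\,e^{-i\fK s}\Phi(s)$
used above, $s$ acts as $i\,d/d\fK$; the couplings are kept symbolic and
$c$ is set to zero.
\begin{verbatim}
import sympy as sp

K = sp.symbols('K', real=True)
lam, w = sp.symbols('lambda omega', positive=True)
phi = sp.Function('phi')(K)

# g(K) as in (sec9-f=), with the integration constant c set to zero
g = K**3/(96*lam) - (1 + w**2/(8*lam))*K

# In the K-representation, s acts as i d/dK
s_op = lambda f: sp.I*sp.diff(f, K)

lhs = sp.exp(g)*s_op(sp.exp(-g)*phi)        # e^{g(K)} s e^{-g(K)} acting on phi
rhs = s_op(phi) - sp.I*sp.diff(g, K)*phi    # (s - i g'(K)) acting on phi
print(sp.simplify(lhs - rhs))               # 0
\end{verbatim}
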
Next, we wish to explore the underlying classical system for the
pseudo-Hermitian quantum system defined by the potential
$V(z)=\omega^2z^2-\lambda z^4$ along $\Gamma_2$. If we view this
potential as an analytic continuation of $V(x)=\omega^2x^2-\lambda
x^4$, with $x$ denoting the Hermitian position operator, we should
identify the Hamiltonian operator for the system with
\be
H_{\Gamma_2}:=\frac{P_Z^2}{2m}+\Omega^2\,Z^2-\Lambda Z^4,
\label{sec9-H=class4}
\ee
where $\Omega\in\R$ and $\Lambda\in\R^+$ are dimensionful coupling
constants, and $Z$ and $P_Z$ are the dimensionful coordinate and
momentum operators along $\Gamma_2$. Using an arbitrary length scale
$\ell$, we can introduce the corresponding dimensionless quantities:
\be
\lambda:=\frac{2m\ell^6\Lambda}{\hbar^2},~~~
\omega:=\frac{\sqrt{2m}\,\ell^2\Omega}{\hbar},~~~
p_z:=\frac{\ell P_Z}{\hbar},~~~z:=\frac{Z}{\ell}.
\label{sec9-trans=11}
\ee
In terms of these the eigenvalue equation $H_{\Gamma_2}\Psi={\cal
E}\Psi$ takes the form
\be
\left[-\frac{d^2}{dz^2}+\omega^2z^2-\lambda\,z^4\right]\Psi(z)=
E\Psi(z),
\label{sec9-eg-va-dim}
\ee
where $E:=2m\ell^2{\cal E}/\hbar^2$.
Now, we are in a position to apply our earlier results. Setting
$z=\zeta(s):=-2i\sqrt{1+is}$, we can identify (\ref{sec9-eg-va-dim})
with the eigenvalue equation for the pseudo-Hermitian Hamiltonian
(\ref{sec-9-H-quartic-1-an}). One might argue that because $z$ and
$p_z$ represent dimensionless position and momentum operators, the
same should also hold for $s$ and $\fK$, respectively. This suggests
identifying the dimensionful position ($x$) and momentum ($p$)
operators as
\be
x=\alpha\,\ell\, s,~~~~~~~p=\frac{\hbar\fK}{\alpha\,\ell},
\label{sec9-trans=12}
\ee
where $\alpha\in\R^+$ is an arbitrary constant. In view of
(\ref{sec-9-equi-h=an}), (\ref{sec9-trans=11}),
(\ref{sec9-H-find-eta1}), and (\ref{sec9-trans=12}), we obtain the
following expressions for the dimensionful pseudo-Hermitian and
equivalent Hermitian Hamiltonians.
\bea
H'&:=&\frac{\hbar^2\,H}{2m\ell^2}=
\frac{(p^2-8m\,\tilde\ell^2\Omega^2)^2}{64\Lambda
\tilde\ell^4}-
\frac{\hbar\,p}{4m\tilde\ell}+16\Lambda\left[
\tilde\ell\,x+\frac{i(\tilde\ell^{-2}p^2-8m\Omega^2
-64m\ell^2\Lambda)}{64m\Lambda}\right]^2,
\label{sec9-H-ful}\\
h'&:=&\frac{\hbar^2\, h}{2m\ell^2}=
\frac{(p^2-8m\,\tilde\ell^2\Omega^2)^2}{64\Lambda
\tilde\ell^4}-
\frac{\hbar\,p}{4m\tilde\ell}+16\,\Lambda\,\tilde\ell^2 x^2,
\label{sec9-eq-h=}
\eea
where $\tilde\ell:=\ell/\alpha$. Note that, unlike $h'$, which only
involves the length scale $\tilde\ell$, $H'$ depends on both $\ell$
and $\tilde\ell$. The same is true for the metric operator $\eta_+$.
This shows that the pseudo-Hermitian quantum systems defined by the
Hamiltonian $H'$ and the metric operator $\eta_+$ with different
values of the parameter $\alpha$ are unitary-equivalent to a
Hermitian quantum system that depends on a single length scale
($\tilde\ell$). The latter is not, however, fixed by the Hamiltonian
(\ref{sec9-H=class4}) and the contour $\Gamma_2$ along which it is
defined.\footnote{If $\Omega\neq 0$, we can choose $\alpha$ such
that $\tilde\ell=\Omega/\sqrt\Lambda$.}
Supposing that $\tilde\ell$ is independent of $\hbar$, we can take
$\hbar\to 0$, $x\to x_c$, and $p\to p_c$ in (\ref{sec9-eq-h=}) to
obtain the underlying classical Hamiltonian. The result is
\be
H'_c=\frac{(p_c^2-8m\,\tilde\ell^2\Omega^2)^2}{64\Lambda
\tilde\ell^4}+16\,\tilde\ell^2\Lambda\, x_c^2.
\label{sec9-eq-class-h=}
\ee
We can also introduce the pseudo-Hermitian position and momentum
operators (\ref{X-P}):
\be
X:=\eta_+^{-1/2}x\,\eta_+^{1/2}=x+
\frac{i(\tilde\ell^{-2}p^2-8m\Omega^2
-64m\ell^2\Lambda)}{64m\Lambda\tilde\ell},~~~~~
P:=\eta_+^{-1/2}p\,\eta_+^{1/2}=p.
\label{sec9-X-P}
\ee
As expected, in terms of $X$ and $P$, the Hamiltonian $H'$ takes the
form
\be
H'=\frac{(P^2-8m\,\tilde\ell^2\Omega^2)^2}{64\Lambda
\tilde\ell^4}-
\frac{\hbar\,P}{4m\tilde\ell}+16\,\Lambda\,\tilde\ell^2 X^2.
\label{sec9-H=XP}
\ee
Furthermore, the $\eta_+$-pseudo-Hermitian canonical quantization of
the classical Hamiltonian (\ref{sec9-eq-class-h=}) yields
(\ref{sec9-H=XP}) except for the linear term in $P$.
The fact that the pseudo-Hermitian quantum systems defined by the
Hamiltonian (\ref{sec9-H=class4}) depend on an arbitrary length
scale has its root in our identification of $s$ with the relevant
dimensionless position operator. We will next consider an
alternative approach that incorporates the spectral equivalence of
the Hamiltonians (\ref{sec9-H=class4}) and
(\ref{sec9-linear-4-pot}). It is based on treating $\fK$ as the
appropriate dimensionless position operator. More specifically, it
involves replacing (\ref{sec9-trans=12}) with $x=\beta\,\ell \fK$
and $p=-\hbar\,s/(\beta\,\ell)$,
where $\beta$ is an arbitrary dimensionless real parameter. If we
set $\beta:=1/(4\sqrt\lambda)=(32 m\Lambda)^{-1/2}\ell^{-3}\hbar$,
we find the following $\ell$-independent expressions for the
equivalent Hermitian Hamiltonian $h'$ and the underlying classical
Hamiltonian $H'_c$: \footnote{Eq.~(\ref{xzxz1}) was previously
obtained in \cite{bbcjmo-prd}, where the term linear in $x$ appearing in (\ref{xzxz1}) is
attributed to an anomaly in a path-integral quantization of the
complex classical Hamiltonian
$\frac{p_c^2}{2m}+\Omega^2z_c^2-\Lambda z_c^4$ along the contour
$\Gamma_2$. For a more careful treatment of this problem, see
\cite{jmr-prd}.}
\bea
h'&=&\frac{p^2}{2m}+
4\Lambda\left(x^2-\frac{\Omega^2}{4\Lambda}\right)^2-
\hbar\sqrt{\frac{2\Lambda}{m}}\,x,
\label{xzxz1}\\
H'_c&=&\frac{p_c^2}{2m}+
4\Lambda\left(x_c^2-\frac{\Omega^2}{4\Lambda}\right)^2.
\label{sec9-H-class=}
\eea
Note, however, that $H'$ still depends on $\ell$:
\be
H'=\frac{1}{2m}\left[p-i\sqrt{\frac{m}{2}}\left(
4\sqrt\Lambda\,x^2-\frac{\Omega^2}{\sqrt\Lambda}-
8\sqrt\Lambda\,\ell^2\right)\right]^2+
4\Lambda\left(x^2-\frac{\Omega^2}{4\Lambda}\right)^2-
\hbar\sqrt{\frac{2\Lambda}{m}}\,x.
\label{sec9-H-L-dep}
\ee
This is also true for $\eta_+$. Again the quantum systems determined
by $H'$ and $\eta_+$ with different values of $\ell$ are
unitary-equivalent to a system that is independent of $\ell$.
Similarly to our earlier analysis, we can obtain an
$\ell$-independent expression for $H'$ in terms of the
pseudo-Hermitian position and momentum operators: $X=x$ and
$P=p-i\sqrt{\frac{m}{2}}\left(4\sqrt\Lambda\,x^2-
\frac{\Omega^2}{\sqrt\Lambda}-8\sqrt\Lambda\,\ell^2\right)$.
The result is $H'=\frac{P^2}{2m}+
4\Lambda\left(X^2-\frac{\Omega^2}{4\Lambda}\right)^2-
\hbar\sqrt{\frac{2\Lambda}{m}}\,X$.
\section{Complex Classical Mechanics versus Pseudo-Hermitian QM}
\label{sec6-Classical}
\subsection{Classical-Quantum Correspondence and Observables}
\label{sec6-CQ}
In Subsection~\ref{sec-quasi} we outlined a procedure that assigns
an underlying classical system for a given pseudo-Hermitian quantum
system with reference Hilbert space $L^2(\R^d)$. According to this
procedure, that we employed in Subsections~\ref{sec-cubic-osc},
\ref{sec-cubic-imag} and \ref{sec-quartic}, the classical
Hamiltonian $H_c$ may be computed using (\ref{class-H-2}). In other
words, to obtain $H_c$, we replace the standard position and
momentum operators with their classical counterparts in the
expression for the equivalent Hermitian Hamiltonian $h$ and evaluate
its $(\hbar\to 0)$-limit. We can quantize $H_c$ to yield $h$, if we
employ the standard canonical quantization scheme. We obtain the
pseudo-Hermitian Hamiltonian $H$, if we use the pseudo-Hermitian
canonical quantization scheme (\ref{ph-quantize}).\footnote{Clearly
this is true up to factor-ordering ambiguities/terms proportional to
positive powers of $\hbar$.}
By definition, a classical observable $O_c$ is a real-valued
function of the classical states $(\vec x_c,\vec p_c)$, i.e., points
of the phase space $\R^{2d}$. If we apply the usual (Hermitian)
canonical quantization program, the operator associated with a
classical observable $O_c(\vec x_c,\vec p_c)$ is given by
$o:=O_c(\vec x,\vec p)$ where $\vec x$ and $\vec p$ are the usual
Hermitian position and momentum operators. If we apply the
pseudo-Hermitian quantization, we find instead $O:=O_c(\vec X,\vec
P)$ where $\vec X$ and $\vec P$ are the pseudo-Hermitian position
and momentum operators, respectively (\ref{X-P}). The common feature
of both these quantization schemes is that they replace the usual
classical Poisson bracket,
$\{A_c,B_c\}_{\rm PB}:=\sum_{j=1}^d \left(
\frac{\partial A_c}{\partial{x_{cj}}}
\frac{\partial B_c}{\partial{p_{cj}}}-
\frac{\partial B_c}{\partial{x_{cj}}}
\frac{\partial A_c}{\partial{p_{cj}}}\right)$,
of any pair of classical observables $A_c$ and $B_c$ with
$(i\hbar)^{-1}$ times the commutator of the corresponding operators
$A$ and $B$, $
\{A_c,B_c\}_{\rm PB}\longrightarrow(i\hbar)^{-1}[A,B]$.
Let us also recall that given a pseudo-Hermitian quantum system
specified by the reference Hilbert space $\cH$, a quasi-Hermitian
Hamiltonian operator $H:\cH\to\cH$, and an associated metric
operator $\eta_+:\cH\to\cH$, the observables of the system are by
definition $\eta_+$-pseudo-Hermitian operators $O$ acting in $\cH$,
\be
O^\dagger=\eta_+O\,\eta_+^{-1}.
\label{sec6-ph}
\ee
It is absolutely essential to note that such an operator acquires
its physical meaning through the classical-to-quantum
correspondence:
\be
O_c\longrightarrow O,
\label{sec6-quantize2}
\ee
where $O_c$ is the classical observable corresponding to the
operator $O$. For example in conventional quantum mechanics, we
identify the operator $p:L^2(\R)\to L^2(\R)$ defined by $p\psi
=-i\hbar\psi'$, with the momentum of a particle moving on $\R$,
because $p$ corresponds to the classical momentum $p_c$ of the
underlying classical system. Without this correspondence, $p$ is
void of a physical meaning. It is merely a constant multiple of the
derivative operator acting in a function space.
The situation is not different in pseudo-Hermitian quantum
mechanics. Again the ($\eta_+$-pseudo-Hermitian) operators that
represent observables derive their physical meaning from their
classical counterparts through the pseudo-Hermitian version of the
classical-to-quantum correspondence. This is also of the form
(\ref{sec6-quantize2}), but the operator $O$ is now selected from
among the $\eta_+$-pseudo-Hermitian operators. As we discussed in
Subsection~\ref{sec-quasi}, if $o:\cH\to\cH$ denotes the Hermitian
observable associated with a classical observable $O_c$, the
corresponding pseudo-Hermitian observable is given by
Eq.~(\ref{O=ror}), i.e., $O:=\rho^{-1} o\, \rho$ where
$\rho:=\sqrt\eta_+$.
Whenever one deals with a symmetric, $\cP\cT$-symmetric,
diagonalizable Hamiltonian $H$ with a real spectrum, one can define
the observables as operators $O$ fulfilling the condition
\cite{bbj-erratum}
\be
O^T=\cC\cP\cT\,O\,\cC\cP\cT,
\label{sec6-revised}
\ee
where
\be
O^T:=\cT\,O^\dagger\cT
\label{sec6-transpose}
\ee
stands for the transpose of $O$, and $\cC$, $\cP$, $\cT$ are
respectively the charge, parity, and time-reversal operators that
are assumed to satisfy
\be
\cC^2=\cP^2=\cT^2=I,~~~~[\cC,\cP\cT]=[\cP,\cT]=[\cC,H]=0.
\label{sec6-cpt}
\ee
As we explained in Subsection~\ref{sec-sym}, we can relate $\cC$ to
an associated metric operator $\eta_+$ according to
\be
\cC=\eta_+^{-1}\cP.
\label{sec6-c=1}
\ee
Because $\cC^2=\cP^2=I$, we also have
\be
\cC=\cP\eta_+.
\label{sec6-c=2}
\ee
Inserting (\ref{sec6-transpose}) in (\ref{sec6-revised}) and making
use of (\ref{sec6-cpt}), we obtain
$\cT\,O^\dagger\cT=\cC\cP\cT\,O\,\cC\cP\cT=\cP\cT\cC\,O\,
\cC\cP\cT=\cT\cP\cC\,O\,\cC\cP\cT.$ In view of (\ref{sec6-cpt}),
(\ref{sec6-c=1}), and (\ref{sec6-c=2}), this relation is equivalent
to $O^\dagger=\cP\cC\,O\,\cC\cP=\cP(\cP\eta_+)\,O\,
(\eta_+^{-1}\cP)\cP=\eta_+ O\,\eta_+^{-1}$. Therefore
(\ref{sec6-revised}) implies the $\eta_+$-pseudo-Hermiticity of $O$.
The converse is also true for the cases that (\ref{sec6-revised})
can be applied consistently. This is actually not always the case.
For example, the application of (\ref{sec6-revised}) for the
Hamiltonian operator that commutes with $\cC\cP\cT$ gives $H^T=H$.
Therefore, unlike the $\eta_+$-pseudo-Hermiticity conditions
(\ref{sec6-ph}), (\ref{sec6-revised}) cannot be employed for
non-symmetric Hamiltonians. This shows that (\ref{sec6-revised}) has
a smaller domain of application than (\ref{sec6-ph}).
Another advantage of the requirement of $\eta_+$-pseudo-Hermiticity
(\ref{sec6-ph}) over the condition (\ref{sec6-revised}) is that it
makes the dynamical consistency of the definition of observables
more transparent. Recall that the main motivation for the
introduction of (\ref{sec6-revised}) in \cite{bbj-erratum} is that
its original variant \cite{bbj,bbj-ajp}, namely the requirement of
$\cC\cP\cT$-symmetry of $O$, i.e., $O=\cC\cP\cT\,O\,\cC\cP\cT$,
conflicts with the Schr\"odinger time-evolution in the Heisenberg
picture; in general the Heisenberg-picture operators
$O_H(t):=e^{-itH/\hbar}O e^{itH/\hbar}$ do not commute with
$\cC\cP\cT$ for $t\neq 0$, even if they do for $t=0$,
\cite{comment}. The reason why (\ref{sec6-revised}) does not suffer
from this problem is that it is a restatement of the
$\eta_+$-pseudo-Hermiticity of $O$. To see why the
Heisenberg-picture operators satisfy the latter condition for all
$t$, we first recall that because $H$ is $\eta_+$-pseudo-Hermitian,
it is a Hermitian operator acting in the physical Hilbert space
$\cH_{\rm phys}$ defined by the inner product
$\br\cdot|\cdot\kt_{_{\eta_+}}$. This implies that the
time-evolution operator $e^{-itH/\hbar}$ is a unitary operator
acting in $\cH_{\rm phys}$. Therefore, $e^{-itH/\hbar}O
e^{itH/\hbar}$ acts in $\cH_{\rm phys}$ as a Hermitian operator for
all $t$, i.e., it is $\eta_+$-pseudo-Hermitian for all $t$.
Alternatively, we could argue that because $H$ is
$\eta_+$-pseudo-Hermitian, $e^{-itH/\hbar}$ is
$\eta_+$-pseudo-unitary
\cite{ahmed-jain-pre,ahmed-jain-jpa,jmp-2005} and $e^{-itH/\hbar}O
e^{itH/\hbar}$ is $\eta_+$-pseudo-Hermitian.
\subsection{Complex Classical Systems and Compatible Poisson Brackets}
In their pioneering article \cite{bender-prl}, Bender and Boettcher
perform an asymptotic analysis of the spectral properties of the
complex potentials $V_\nu(z)=z^2(iz)^\nu$ that makes use of the
complex WKB-approximation. This involves the study of a certain type
of complex classical dynamical system $\cS_{BBM}$ that Bender,
Boettcher, and Meisinger (BBM) \cite{bender-jmp} identify with the
underlying classical system for the quantum system defined by
$V_\nu$. This approach, which has been the focus of attention in a
number of publications
\cite{nanayakkara,bcdm-2006,bhh-2007a,bhh-2007b}, is fundamentally
different from the prescription we used in
Subsections~\ref{sec-quasi} and \ref{sec6-CQ} to associate a
classical system ${\cal S}$ with a pseudo-Hermitian quantum system
$S$. In this subsection, we examine the structure of the complex
classical system $\cS_{BBM}$. For simplicity we consider complex
potentials $V$ that depend on a single complex variable
$\fz$.\footnote{The use of complex phase-space variables in standard
classical mechanics is an old idea \cite{marsden-ratiu}. See also
\cite{strocchi}. What is considered here, in contrast, is the use of
complex configuration variables.}
According to \cite{bender-jmp} the dynamics of $\cS_{BBM}$ is
determined by the Newton's equation
\be
m\ddot \fz=-V'(\fz),
\label{sec6-newton}
\ee
where $m\in\R^+$, each overdot stands for a time-derivative, and a
prime denotes differentiation with respect to $\fz$. We can
express (\ref{sec6-newton}) as a pair of first order differential
equations:
\be
m\dot\fz=\fp,~~~~~~~\dot\fp=-V'(\fz).
\label{sec6-first-order}
\ee
This is the Hamiltonian formulation of the dynamics of $\cS_{BBM}$;
introducing the complex Hamilton function
$\bfh:=\frac{\fp^2}{2m}+V(\fz)$,
we can express (\ref{sec6-first-order}) as the following pair of
Hamilton equations.
\be
\dot\fz=\frac{\partial\bfh}{\partial\fp},~~~~~~~~~~~
\dot\fp=-\frac{\partial\bfh}{\partial\fz}.
\label{sec6-H-eqs}
\ee
The variables $\fz$ and $\fp$ are the coordinates of the phase space
$\fP$ of the system, which, as a set, is identical to $\C^2=\R^4$.
This observation suggests that, similarly to the quantum
systems defined along a complex contour, the complex classical
system $\cS_{BBM}$ might admit a formulation that involves real
variables. This is actually quite straightforward. We can define the
real variables
\bea
&&x:=\Re(\fz),~~~~~~y:=\Im(\fz),~~~~~~p:=\Re(\fp),~~~~~~q:=\Im(\fp),
~~~~~~V_r:=\Re(V),~~~~~~V_i :=\Im(V),
\label{sec6-real-var}\\
&&H_r:=\Re(\bfh)=\frac{p^2-q^2}{2m}+V_r(x,y),~~~~~~~~~~~
H_i :=\Im(\bfh)=\frac{pq}{m}+V_i(x,y),
\label{sec6-real-HH}
\eea
and use the well-known relations
$\frac{\partial}{\partial\fz}=\frac{1}{2}\,\big(
\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\big)$ and
$\frac{\partial}{\partial\fp}=\frac{1}{2}\,\big(
\frac{\partial}{\partial p}-i\frac{\partial}{\partial q}\big)$,
to turn the complex Hamilton equations (\ref{sec6-H-eqs}) to a
system of four real equations.
It turns out that the resulting system of equations and consequently
the complex Hamilton equations (\ref{sec6-H-eqs}) are not consistent
with the standard symplectic structure (Poisson bracket) on the
phase space $\C^2=\R^4$, \cite{CM-jmp-2007,pla-2006}. To see this,
let us also introduce $w_1:=x$, $w_2:=p$, $w_3:=y$, and $w_4:=q$.
Then the standard Poisson bracket on $\R^4$ takes the form
\be
\{A,B\}_{PB}=\sum_{j,k=1}^4 J^{(\rm st)}_{jk}\:
\frac{\partial A}{\partial w_j}\,\frac{\partial B}{\partial w_k},
\label{sec6-PB}
\ee
where $J^{(\rm st)}_{jk}$ are the entries of the standard symplectic
matrix
\be
J^{(\rm st)}:=\left(\begin{array}{cccc}
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & -1 & 0\end{array}\right),
\label{sec6-J-standard}
\ee
and $A$ and $B$ are a pair of classical observables (real-valued
functions of $w_j$).\footnote{$\Omega:=\frac{1}{2}\sum_{j,k=1}^4
J^{(\rm st)}_{jk}\,dw_j\wedge dw_k$ is the standard symplectic form on $\R^4$,
\cite{marsden-ratiu}.} Recall that given a Hamilton function $H$ on
the four-dimensional phase space obtained by endowing $\R^4$ with
the symplectic structure corresponding to the standard Poisson
bracket (\ref{sec6-PB}), we can express the Hamilton equations in
the form $\dot w_j=\{w_j,H\}_{\rm PB}$. If we express
(\ref{sec6-PB}) in terms of the complex variables $\fz$ and $\fp$,
we find that $\{\fz,\bfh\}_{\rm PB}=\{\fp,\bfh\}_{\rm PB}=0$,
\cite{pla-2006}. Therefore, it is impossible to formulate the
dynamics defined by (\ref{sec6-H-eqs}) using the standard symplectic
structure on $\C^2$.
This observation raises the problems of the existence, uniqueness,
and classification of the symplectic structures on $\C^2=\R^4$ that
are compatible with the dynamical equations (\ref{sec6-H-eqs}).
Ref.~\cite{CM-jmp-2007} gives a family of dynamically compatible
symplectic structures. Ref.~\cite{pla-2006} offers a complete
classification of such structures. The most general compatible
symplectic structure is defined by the following non-standard
Poisson bracket
\be
\lpb A,B\rpb=\sum_{j,k=1}^4 J_{jk}\:
\frac{\partial A}{\partial w_j}\,\frac{\partial B}{\partial w_k},
\label{sec6-gen-PB}
\ee
where $J_{ij}$ are the entries of the symplectic matrix
\be
J:=\frac{1}{2}\left(\begin{array}{cccc}
0 & 1+c & -a & -d\\
-(1+c) & 0 & -d & -b\\
a & d & 0 & -1+c\\
d & b & 1-c & 0\end{array}\right),
\label{rj=}
\ee
and $a,b,c,d$ are arbitrary real parameters satisfying
$c^2+d^2-ab\neq 1$. Regardless of the choice of these parameters, we
have $\dot\fz=\lpb\fz,\bfh\rpb$ and $\dot\fp=\lpb\fp,\bfh\rpb$. A
particularly simple choice is $a=b=c=d=0$, which yields
\be
J=J_0:=
\frac{1}{2}\left(\begin{array}{cccc}
0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0\\
0 & 0 & 0 & -1\\
0 & 0 & 1 & 0\end{array}\right).
\label{rj=zero}
\ee
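
These statements can be verified directly. The following sympy sketch
(using the illustrative analytic potential $V(\fz)=i\fz^3$) evaluates
the brackets of $\fz$ and $\fp$ with $\bfh$ both for the standard
symplectic matrix $J^{(\rm st)}$ and for the choice $J_0$ of
(\ref{rj=zero}); the former gives zero, while the latter reproduces the
Hamilton equations (\ref{sec6-H-eqs}).
\begin{verbatim}
import sympy as sp

x, p, y, q = sp.symbols('x p y q', real=True)
m = sp.symbols('m', positive=True)
w = [x, p, y, q]                       # w_1, w_2, w_3, w_4

fz, fp = x + sp.I*y, p + sp.I*q
V = sp.I*fz**3                         # illustrative analytic potential
bfh = fp**2/(2*m) + V                  # complex Hamilton function

def bracket(A, B, J):
    return sum(J[j, k]*sp.diff(A, w[j])*sp.diff(B, w[k])
               for j in range(4) for k in range(4))

J_st = sp.Matrix([[0, 1, 0, 0], [-1, 0, 0, 0],
                  [0, 0, 0, 1], [0, 0, -1, 0]])
J_0 = sp.Rational(1, 2)*sp.Matrix([[0, 1, 0, 0], [-1, 0, 0, 0],
                                   [0, 0, 0, -1], [0, 0, 1, 0]])

print(sp.simplify(bracket(fz, bfh, J_st)))                 # 0
print(sp.simplify(bracket(fz, bfh, J_0) - fp/m))           # 0
print(sp.simplify(bracket(fp, bfh, J_0) + sp.diff(V, x)))  # 0, since dV/dz = dV/dx
\end{verbatim}
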
\subsection{Real Description of a Complex Classical System}
Among the basic results of classical mechanics is the uniqueness
theorem for symplectic structures on the phase space $\R^{2d}$,
\cite{arnold}. In order to explain the content of this theorem,
first we recall that a symplectic structure on $\R^{2d}$ is
determined by the corresponding Poisson bracket. Choosing a system
of coordinates $w_j$, we can express the latter in the form $\{
A,B\}_\cJ=\sum_{j,k=1}^{2d} \cJ_{jk}\frac{\partial A}{\partial
w_j}\,\frac{\partial B}{\partial w_k}$, where $\cJ$ is a real,
antisymmetric, nonsingular $2d\times 2d$ matrix. The above-mentioned
theorem states that there is always a system of (so-called Darboux)
coordinates in which $\{ A,B\}_\cJ$ takes the form of the standard
Poisson bracket. Application of this theorem for the Poisson bracket
(\ref{sec6-gen-PB}) yields a description of the dynamics defined by
the complex Hamiltonian $\bfh$ in terms of a real Hamiltonian $K$.
The construction of the Darboux coordinates associated with the most
general symplectic matrix (\ref{rj=}) is described in
\cite{pla-2006}. These coordinates take the following particularly
simple form for $a=b=c=d=0$.
\be
x_1:=\sqrt 2\, w_1=\sqrt 2\, x,~~~
p_1:=\sqrt 2\, w_2=\sqrt 2\, p,~~~
x_2:=\sqrt 2\, w_4=\sqrt 2\, q,~~~
p_2:=\sqrt 2\, w_3=\sqrt 2\, y.
\label{sec6-darboux}
\ee
These are precisely the phase space coordinates used in \cite{xa} to
study the complex trajectories appearing in the semiclassical
treatment of the propagator for a quartic anharmonic oscillator.
They are subsequently employed in the description of
$\cP\cT$-symmetric models \cite{kaushal-singh}.
The use of the coordinates (\ref{sec6-darboux}) together with the
assumption that the potential $V$ is an analytic function, i.e., the
Cauchy-Riemann conditions, $\frac{\partial V_r}{\partial x}-
\frac{\partial V_i}{\partial y}=\frac{\partial V_r}{\partial y}+
\frac{\partial V_i}{\partial x}=0$, hold, lead to the following
remarkable observations.
\begin{itemize}
\item The equivalent real Hamiltonian $K$ that describes the
dynamics is twice the real part of the complex Hamiltonian
$\bfh$, \cite{xa,pla-2006},
\be
K=\frac{p_1^2-x_2^2}{2m}+2\, V_r\big(\mbox{$\frac{x_1}{\sqrt 2},
\frac{p_2}{\sqrt 2}$}\big)=2H_r.
\label{sec6-K=}
\ee
\item The imaginary part of $\bfh$, i.e.,
\be
H_i=\frac{x_2p_1}{2m}+V_i\big(\mbox{$\frac{x_1}{\sqrt 2},
\frac{p_2}{\sqrt 2}$}\big)
\label{sec6-int-mov}
\ee
is an integral of motion; $\{H_i,K\}_{\rm PB}=0$, \cite{pla-2006}.
\end{itemize}
This implies that the dynamical system defined by the Hamilton
equations~(\ref{sec6-H-eqs}) is equivalent to a classical
Hamiltonian system with phase space $\R^4$ and the Hamiltonian $K=2
H_r$. Furthermore, in view of Liouville's theorem on integrable
systems, because the phase space is four-dimensional and $H_i$ is an
integral of motion that is functionally independent of $K$, this
system is completely integrable \cite{arnold,vilasi}.
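
As a quick consistency check of these observations, the following
sympy sketch (again with the illustrative potential $V(\fz)=i\fz^3$)
constructs $K$ and $H_i$ in the Darboux coordinates
(\ref{sec6-darboux}) and verifies that their standard Poisson bracket
vanishes.
\begin{verbatim}
import sympy as sp

x1, p1, x2, p2 = sp.symbols('x1 p1 x2 p2', real=True)
m = sp.symbols('m', positive=True)
r2 = sp.sqrt(2)

# Darboux coordinates (sec6-darboux): x = x1/sqrt(2), y = p2/sqrt(2),
# p = p1/sqrt(2), q = x2/sqrt(2); illustrative analytic potential V = i z^3
z = x1/r2 + sp.I*p2/r2
V = sp.expand(sp.I*z**3)
Vr, Vi = sp.re(V), sp.im(V)

K  = (p1**2 - x2**2)/(2*m) + 2*Vr      # equivalent real Hamiltonian, K = 2 H_r
Hi = x2*p1/(2*m) + Vi                  # imaginary part of the complex Hamiltonian

def pb(A, B):                          # standard Poisson bracket in Darboux coordinates
    return (sp.diff(A, x1)*sp.diff(B, p1) - sp.diff(A, p1)*sp.diff(B, x1)
            + sp.diff(A, x2)*sp.diff(B, p2) - sp.diff(A, p2)*sp.diff(B, x2))

print(sp.simplify(pb(Hi, K)))          # 0: H_i is an integral of motion
\end{verbatim}
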
The integral of motion $H_i$ generates a certain class of
transformations in the phase space that leave the dynamics
invariant. The infinitesimal form of these symmetry transformations
is given by \cite{pla-2006}:
\bea
x_1\to x_1+\delta x_1, && ~~~~~\delta x_1:=
\xi \{x_1,H_i\}_{\rm PB}=\frac{\xi\,x_2}{2m},
\label{sym1}\\
x_2\to x_2+\delta x_2, && ~~~~~\delta x_2:=
\xi \{x_2,H_i\}_{\rm PB}=\xi~\frac{\partial}
{\partial{x_1}}\, V_r\big(\mbox{$\frac{x_1}{\sqrt 2},
\frac{p_2}{\sqrt 2}$}\big),
\label{sym2}\\
p_1\to p_1+\delta p_1, && ~~~~~\delta p_1:=
\xi \{p_1,H_i\}_{\rm PB}=\xi~\frac{\partial}
{\partial p_2}\, V_r\big(\mbox{$\frac{x_1}{\sqrt 2},
\frac{p_2}{\sqrt 2}$}\big),
\label{sym3}\\
p_2\to p_2+\delta p_2, && ~~~~~\delta p_2:=
\xi \{p_2,H_i\}_{\rm PB}=-\frac{\xi\,p_1}{2m},
\label{sym4}
\eea
where $\xi$ is an infinitesimal real parameter.
The existence of these symmetry transformations is related to the
fact that the system involves a first class constraint. Choosing a
particular value $C$ for $H_i$, i.e., imposing the constraint
$\Phi:=H_i-C=0$, and modding out the above symmetry transformations
by identifying each orbit of these transformations with a single
point, we can construct a reduced dynamical system $\fS$ that has a
two-dimensional real phase space \cite{pla-2006}. This procedure has
been implemented for a class of monomial potentials in
\cite{smilga1} where the above symmetry transformations have been
examined in the Lagrangian formulation and the difficult issue of
the quantization of these systems has been addressed in some
detail.\footnote{The application of this approach for some
multi-dimensional systems is studied in \cite{ghosh-majhi}.} In
general the resulting quantum system depends on whether one imposes
the constraint before or after the quantization. In the former case
the prescription used to obtain the reduced classical system also
affects the resulting quantum system. For the imaginary cubic
potential that allows for a fairly detailed analysis, one obtains a
variety of quantum systems \cite{smilga1}. But none of these
coincides with the pseudo-Hermitian quantum system we studied in
Subsection~\ref{sec-cubic-imag}.
In summary the identification of the complex classical system
$\cS_{\rm BBM}$ with the classical limit of the pseudo-Hermitian
quantum system $S$ that is defined by a complex potential $V$ meets
two serious difficulties. Firstly, while $S$ has a single real
degree of freedom (one-dimensional real configuration space and
two-dimensional real phase space), $\cS_{\rm BBM}$ has two real
degrees of freedom (a four-dimensional real phase space). Secondly,
the Hamilton equations (\ref{sec6-H-eqs}) that determine the
dynamics of $\cS_{\rm BBM}$ are not consistent with the standard
symplectic structure on the phase space of $\cS_{\rm BBM}$. This in
particular means that under the naive correspondence $\cS_{\rm
BBM}\rightarrow S$, the Hamilton equations (\ref{sec6-H-eqs}) are
not mapped to the Heisenberg equations for $S$. These problems
persist regardless of whether the Hamiltonian operator for $S$ is
defined on the real line or a complex contour. In fact, the study of
the systems defined on a complex contour reveals another difficulty
with the naive correspondence $\cS_{\rm BBM}\rightarrow S$, namely
that while the definition of $S$ requires making a proper choice for
a contour, $\cS_{\rm BBM}$ is independent of such a
choice.\footnote{The phase-space path integral formulation of the
wrong-sign quartic potential defined on the contour
(\ref{sec9-zeta=}) is consistent with the standard Hilbert-space
formulation, if one restricts the complex classical Hamiltonian
$\bfh$ to the contour (\ref{sec9-zeta=}), \cite{bbcjmo-prd,jmr-prd}.
Whether imposing this restriction would lead to a particular reduced
classical system that is identical with the classical system $\cS$
defined by the Hamiltonian~(\ref{sec9-H-class=}) is worthy of
investigation.}
A proper treatment of the complex dynamical systems $\cS_{\rm BBM}$
requires the investigation of a dynamically compatible symplectic
structure. Once such a structure is selected one can adopt a
corresponding set of Darboux coordinates and use them to obtain a
standard (real) description of $\cS_{\rm BBM}$. The use of the real
description reveals the curious fact that this system admits an
integral of motion that is functionally independent of the
Hamiltonian. This has two important consequences:
\begin{enumerate}
\item The system is completely integrable;
\item The system has a first class constraint.
\end{enumerate}
The choice $H_i=0$ that is adopted in
\cite{bcdm-2006,bhh-2007a,bhh-2007b} is just one way of imposing the
constraint. It corresponds to restricting the dynamics to a
three-dimensional subspace of the phase space. On this subspace
there act the symmetry transformations (\ref{sym1}) -- (\ref{sym4})
that leave the dynamics invariant. Modding out these transformations,
one finds a reduced dynamical system $\fS$ with a two-dimensional
real phase space. It is the latter that can, in principle, be
related to the quantum system $S$. So far the existence and nature
of such a relationship could not be ascertained. For the simple
polynomial potentials that could be studied carefully, the various
known ways of constructing and quantizing $\fS$ lead to quantum
systems that differ from $S$.
Another important issue is that the above procedure of constructing
complex dynamical systems $\cS_{\rm BBM}$ and the corresponding
reduced systems $\fS$ may be carried through for any complex
analytic potential. But, not every such potential defines a unitary
pseudo-Hermitian quantum system. A typical example is the
exponential potential $V(x)=e^{i\kappa x}$ that is defined on $\R$.
It is well-known that the spectrum of the Hamiltonian operator
$H=p^2/(2m)+\epsilon\, e^{i\kappa x}$, with $m\in\R^+$ and
$\epsilon,\kappa\in\R-\{0\}$, includes spectral singularities
\cite{gasymov}. This shows that this operator cannot be mapped to a
Hermitian operator by a similarity transformation, i.e., it is not
quasi-Hermitian. Hence $H$ is not capable of defining a unitary
quantum system. Although the potential $V(x)=e^{i\kappa x}$ defines
a complex dynamical system $\cS_{\rm BBM}$, it cannot be related to
a unitary quantum system.\footnote{For a study of the classical
dynamics generated by this exponential potential, see
\cite{CM-jmp-2007}.}
We conclude this section by underlining a rather interesting
parallelism between the quantum and classical mechanics of complex
(analytic) potentials. Supposing that a complex (analytic) potential
$V$ defines a unitary pseudo-Hermitian quantum system, we showed in
Section~\ref{sec-phqm} that this system admits an equivalent
Hermitian representation. The above discussion reveals a classical
analogue of this equivalence; the complex dynamical system defined
by the complex Hamiltonian $H=p^2/(2m)+V$ admits an equivalent
description involving a real Hamiltonian. What differentiates the
pseudo-Hermitian and Hermitian representations of quantum mechanics
is the choice of the inner product (equivalently a metric operator)
on the space of state-vectors. What differentiates the complex and
real representations of the classical mechanics is the choice of the
symplectic structure (equivalently Poisson bracket) on the phase
(state) space. Mathematically the equivalence of the
pseudo-Hermitian and Hermitian representations of quantum mechanics
stems from the uniqueness theorem for separable Hilbert spaces. The
classical counterpart of this theorem that is responsible for the
above-mentioned equivalence of the complex and real representations
of the classical mechanics is the uniqueness theorem for symplectic
manifolds diffeomorphic to $\R^{2d}$.
\section{Time-Dependent Hamiltonians and Path-Integral Formulation}
\subsection{Time-dependent Quasi-Hermitian Hamiltonians}
\label{sec-time-dep}
Time-dependent Hamiltonian operators arise in a variety of
applications of conventional quantum mechanics. Their
time-dependence does not cause any difficulties, except that in the
cases where the eigenvectors of the Hamiltonian are time-dependent,
the time-evolution operator takes the form of a time-ordered
exponential involving the Hamiltonian \cite{gp-book}.\footnote{A
rather common misconception in dealing with time-dependent
Hamiltonians is to think that the time-reversal operator $\cT$
changes the sign of the time variable $t$ in the Hamiltonian, i.e.,
$H(t)\to \cT H(t)\cT=H(-t)$, which, in view of the definition of
$\cT$, namely $(\cT\psi)(x):=\psi(x)^*$ for all $\psi\in L^2(\R)$,
is generally false. See for example \cite{yuce}, where the author
considers a trivial non-Hermitian time-dependent Hamiltonian that is
obtained from a constant $\cP\cT$-symmetric Hamiltonian through a
time-dependent point transformation and a time-reparametrization.}
The situation is quite different when one deals with time-dependent
quasi-Hermitian Hamiltonians.\footnote{Time-dependent
quasi-Hermitian Hamiltonians arise naturally in the application of
pseudo-Hermitian quantum mechanics in quantum cosmology
\cite{cqg,ap}. See also \cite{pla-2004,FF}.} As the following no-go
theorem shows, the observability of the Hamiltonian and the unitarity
of the time-evolution put a severe restriction on the way a
quasi-Hermitian Hamiltonian can depend on time \cite{plb-2007}.
\begin{center}
\parbox{15.5cm}{\textbf{Theorem~2:} \emph{Let $T\in\R^+$ and
for all $t\in[0,T]$, $H(t)$ be a time-dependent quasi-Hermitian
operator acting in a reference Hilbert space $\cH$. Suppose that
$H(t)$ serves as the Hamiltonian operator for a pseudo-Hermitian
quantum system with physical Hilbert space $\cH_{\rm phys}$.
If the time-evolution of the system, that is determined by
the Schr\"odinger equation: $i\hbar\dot\psi(t)=H(t)\psi(t)$,
is unitary and $H(t)$ is an observable for
all $t\in[0,T]$, then the metric operator defining
$\cH_{\rm phys}$ does not depend on time, i.e., there must exist
a time-independent metric operator $\eta_+$ such that $H(t)$
is $\eta_+$-pseudo-Hermitian for all $t\in[0,T]$.}}
\end{center}
Following \cite{plb-2007} we call a time-dependent quasi- (pseudo-)
Hermitian Hamiltonian admitting a time-independent metric
(pseudo-metric) operator \emph{quasi-stationary}. Theorem~2 states
that in pseudo-Hermitian quantum mechanics we are bound to use
quasi-stationary Hamiltonians. To demonstrate the severity of this
restriction, consider two-level quantum systems where the
Hamiltonian $H(t)$ may be represented by a $2\times 2$ complex
matrix $\underline{H(t)}$ with possibly time-dependent entries. The
requirement that $H(t)$ is quasi-Hermitian implies that
$\underline{H(t)}$ involves 6 independent real-valued functions
(because its eigenvalues are real). The additional requirement that
$H(t)$ is quasi-stationary reduces this number to 4.\footnote{This
can be easily inferred from the results of \cite{tjp}.} This is also
the same as the maximum number of independent real-valued functions
that a general time-dependent Hermitian Hamiltonian can include.
A simple implication of Theorem~2 is that the inner product of the
physical Hilbert space cannot depend on time, unless one defines the
dynamics of the quantum system by an operator that is not observable
or allows for nonunitary time-evolutions. In other words, insisting
on observability of the Hamiltonian operator and requiring unitarity
prohibit scenarios involving switching Hilbert spaces as proposed in
\cite{bbj-2007}.
\subsection{Path-Integral Formulation of Pseudo-Hermitian QM}
\label{path-integral}
Among the original motivations to consider $\cP\cT$-symmetric
quantum mechanical models are the potential applications of their
relativistic and field theoretical generalizations
\cite{bbj,bbj-ajp} in elementary particle physics. A necessary first
step in trying to explore the relativistic and field theoretical
generalizations of $\cP\cT$-symmetric or more generally
pseudo-Hermitian QM is a careful examination of its path-integral
formulation. In this section we use the approach of \cite{prd-2007}
to elucidate the role of the metric operator in the path-integral
formulation of pseudo-Hermitian QM and demonstrate the equivalence
of the latter with the path-integral formulation of the conventional
QM.
We shall first review the emergence of path integrals in dealing
with a simple conventional (Hermitian) quantum system. This requires
a brief discussion of the trace of a linear operator.
Let $L:\cH\to\cH$ be a linear operator acting in a separable Hilbert
space $\cH$ with inner product $\br\cdot|\cdot\kt$. Then the
\emph{trace} of $L$ is defined by
\be
{\rm tr}(L):=\sum_{n=1}^N \br\xi_n|L\xi_n\kt,
\label{trace=0}
\ee
where $\{\xi_n\}_{n=1}^N$ is an arbitrary orthonormal basis of $\cH$ and
$N$ denotes the dimension of $\cH$, \cite{reed-simon}. Obviously, for $N=\infty$, the right-hand side of
(\ref{trace=0}) may not converge, and $\tr(L)$ may not exist.
Suppose that $K$ and $L$ are linear operators for which ${\rm
tr}(KL)<\infty$. Then invoking the completeness relation for $\xi_n$
and using Dirac's bra-ket notation, we can show that
\bea
{\rm tr}(LK)&=&
\sum_{m,n=1}^N \br\xi_n|L|\xi_m\kt\br\xi_m|K|\xi_n\kt
=\sum_{m,n=1}^N \br\xi_m|K|\xi_n\kt\br\xi_n|L|\xi_m\kt
=\sum_{m=1}^N \br\xi_m|KL|\xi_m\kt={\rm tr}(KL).
\label{trace-id}
\eea
A simple implication of this identity is that the right-hand side of
(\ref{trace=0}) is independent of the choice of the orthonormal
basis $\{\xi_n\}$.\footnote{To see this, let $\{\zeta_n\}$ be
another orthonormal basis of $\cH$. Then as we described in
Section~\ref{sec-unitary}, $\zeta_n$ are related to $\xi_n$ by a
unitary operator $U:\cH\to\cH$, $\zeta_n=U\xi_n$. This in turn
implies $\sum_{n=1}^N \br\zeta_n|L\zeta_n\kt= \sum_{n=1}^N \br
U\xi_n|L U\xi_n\kt= \sum_{n=1}^N \br\xi_n|U^\dagger (L U)\xi_n\kt=
\sum_{n=1}^N \br\xi_n|(LU)U^\dagger\xi_n\kt= \sum_{n=1}^N
\br\xi_n|L\xi_n\kt,$ where we have used (\ref{trace-id}) and
$UU^\dagger=I$.}
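
In the finite-dimensional case, both of these properties are immediate
to confirm numerically. The following Python sketch (with randomly
generated matrices of an arbitrarily chosen size) checks the identity
${\rm tr}(LK)={\rm tr}(KL)$ and the invariance of the trace under a
change of orthonormal basis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 6                                  # an arbitrarily chosen finite dimension
L = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
K = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))

# tr(LK) = tr(KL)
print(np.allclose(np.trace(L @ K), np.trace(K @ L)))           # True

# Basis independence: conjugation by a random unitary preserves the trace
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))
print(np.allclose(np.trace(U.conj().T @ L @ U), np.trace(L)))  # True
\end{verbatim}
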
For a linear operator $L$ acting in $L^2(\R)$, we can use the
position basis $\{|x\kt\}$ to compute $\tr(L)$. To demonstrate how
this is done, let $\{\xi_n\}$ be an orthonormal basis of $L^2(\R)$.
Using (\ref{trace=0}), the completeness relation for $|x\kt$ and
$\xi_n$, and Dirac's bra-ket notation, we have
\bea
\tr(L)&=&
\sum_{n=0}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty
dx\,dx' \br\xi_n|x\kt\br x|L
|x'\kt\br x'|\xi_n\kt=\int_{-\infty}^\infty \int_{-\infty}^\infty dx\,dx'
\br x|L|x'\kt\sum_{n=0}^\infty \br x'|\xi_n\kt\br\xi_n|x\kt
\nn\\
&=&
\int_{-\infty}^\infty \int_{-\infty}^\infty dx\,dx'\:
\br x|L|x'\kt~\br x'|x\kt
=\int_{-\infty}^\infty dx \:\br x|L|x\kt.
\label{trace-x-basis}
\eea
Now, consider a quantum system defined by the Hilbert space
$\cH=L^2(\R)$ and a Hermitian Hamiltonian $H$ that is an analytic
(or piecewise analytic) function of the usual (Hermitian) position
operator ($x$), momentum operator ($p$), and possibly time ($t$).
The generating functional (also called partition function) for the
$n$-point (correlation) functions of the system is given by
\be
Z[J]=\tr\left(
\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-Jx]\right\}\right),
\label{partition-fn}
\ee
where ``$\fT\exp$'' denotes the time-ordered exponential, $t_1$ and
$t_2$ are respectively the initial and final times for the evolution
of the system that are taken to be $-\infty$ and $\infty$ in the
scattering setups used particularly in quantum field theory, and $J$
stands for the (possibly time-dependent) coupling constant for the
source terms $Jx$, \cite{das,weinberg}. The latter is by definition
an observable \cite{bryce-qft}. In view of the fact that $x$ is also
an observable, this implies that $J$ must be real-valued.
One can easily justify the condition of the observability of the
source term by noting that $Z[J]$ is used to compute the $n$-point
functions according to \cite{das}:
$\big\langle \fT [x(\tau_1)x(\tau_2)\cdots x(\tau_n)]\big\rangle=\left.
\frac{(-i\hbar)^n}{Z[J]}\,\frac{\delta^n Z[J]}{\delta J(\tau_1)\delta J(\tau_2)
\cdots\delta J(\tau_n)}\right|_{J=0}$,
where $\tau_1,\tau_2,\cdots,\tau_n\in[t_1,t_2]$, $x(\tau)$ denotes
the position operator in the Heisenberg picture, i.e., $x(\tau):=
U_{_J}(\tau,t_1)\, x\, U_{_J}(t_1,\tau)$, and $U_{_J}$ is the
time-evolution operator associated with the interacting system; for
all $\ft_1,\ft_2\in\R$,
\be
U_{_J}(\ft_1,\ft_2):=\fT\exp\left\{-\frac{i}{\hbar}\int_{\ft_1}^{\ft_2}
dt\:[H-Jx]\right\}.
\label{sec7-evolution-op}
\ee
The $n$-point functions are essentially the expectation values of
the observables $\fT [x(\tau_1)\cdots x(\tau_n)]$ (in the ground
state of the system if $t_2=-t_1\to\infty$.) Therefore, the
observability of the source term $Jx$ is linked to the observability
of the Heisenberg-picture position operators $x(\tau_i)$. The latter
is ensured by the unitarity of the time-evolution operator
$U_{_J}(\ft_1,\ft_2)$ and the observability of the
Schr\"odinger-picture position operator $x$.
In view of (\ref{trace-x-basis}), we can express the partition
function (\ref{partition-fn}) in the form
\be
Z[J]=\int_{-\infty}^\infty dx\,\br x|
\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-Jx]\right\}|x\kt=
\int_{-\infty}^\infty dx\, \br x,t_1|x,t_2\kt,
\label{partition-fn2}
\ee
where
\be
|x,t\kt:=U_{_J}(0,t)^\dagger|x\kt,
\label{sec7-xt-ket}
\ee
are the (generalized) eigenfunctions of the Heisenberg-picture
position operator $x(t)$. In light of (\ref{sec7-evolution-op}) and
(\ref{sec7-xt-ket}), we also have, for all $x_1,x_2\in\R$, $\br
x_1,t_1|x_2,t_2\kt=\br x_1|U_{_J}(t_1,t_2)|x_2\kt=\br x_1|
\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-Jx]\right\}|x_2\kt$.
Computing this quantity as a phase-space path integral and
substituting the result in (\ref{partition-fn2}), we find the
following phase-space path integral expression for the generating
functional.
\be
Z[J]=\int\int\cD(x)\,\cD(p)\;
e^{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\,
[H(x,p;t)-J(t)x]}.
\label{partition-fn3}
\ee
If $H$ is a quadratic polynomial in $p$, we can perform the momentum
path integral in (\ref{partition-fn3}) and convert it into a
configuration-space (Lagrangian) path integral. This yields
\be
Z[J]=\int\cD(x)\:
e^{\frac{i}{\hbar}\int_{t_1}^{t_2}dt\,L_{_J}(x,\dot x;t)},
\label{partition-fn4}
\ee
where $L_{_J}(x,\dot x;t):=\dot x\,p-H(x,p;t)+J(t)x$,
and $p$ is to be identified with its expression obtained by solving
$\dot x=\partial H(x,p;t)/\partial p$ for $p$ as a function of $x$,
$\dot x$, and $t$.
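The passage from (\ref{partition-fn3}) to (\ref{partition-fn4}) is a
Legendre transformation carried out inside the functional integral.
The following symbolic sketch (a minimal illustration in
Python/SymPy, assuming the standard kinetic-plus-potential form
$H=p^2/2m+V(x)$, which is quadratic in $p$) reproduces the resulting
$L_{_J}$.
\begin{verbatim}
import sympy as sp

x, xdot, p, m, J = sp.symbols('x xdot p m J', real=True)
V = sp.Function('V')

# a Hamiltonian that is quadratic in p (illustrative standard form)
H = p**2/(2*m) + V(x)

# solve xdot = dH/dp for p and substitute into L_J = xdot*p - H + J*x
p_of_xdot = sp.solve(sp.Eq(xdot, sp.diff(H, p)), p)[0]
L_J = (xdot*p - H + J*x).subs(p, p_of_xdot)
print(sp.simplify(L_J))          # m*xdot**2/2 - V(x) + J*x
\end{verbatim}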
Next, we consider the extension of the above constructions to a
system defined by the reference Hilbert space $\cH=L^2(\R)$, a
metric operator $\etap:L^2(\R)\to L^2(\R)$, and an
$\etap$-pseudo-Hermitian Hamiltonian $H:L^2(\R)\to L^2(\R)$ that is
again a (piecewise) analytic function of $x$, $p$, and possibly $t$.
According to Theorem~2, in order for $H$ to be an observable
that generates a unitary time-evolution, it must be
quasi-stationary. As discussed in \cite{plb-2007}, this puts a
severe restriction on the form of allowed time-dependent
Hamiltonians.\footnote{In relativistic field theories, $H$ is
obtained by integrating the Hamiltonian density over a space-like
hypersurface. This makes the time-dependence of $H$ quite arbitrary
and renders the imposition of the condition of quasi-stationarity of
$H$ an extremely difficult task.}
We can certainly work in the Hermitian representation $(\cH,h)$ of
the system where $h:=\rho\, H\,\rho^{-1}$ (with $\rho:=\sqrt\etap$)
is the equivalent Hermitian Hamiltonian (\ref{h-Hermitian}). In this
representation the generating functional has the form
\be
Z[J]=\tr\left(
\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[h-Jx]\right\}\right).
\label{partition-fn-H}
\ee
As we showed above this quantity admits a phase-space path integral
expression. But even for the case that $H$ is a quadratic polynomial
in $p$, the equivalent Hermitian Hamiltonian $h$ does not share this
property, and we cannot convert the right-hand side of
(\ref{partition-fn-H}) into a Lagrangian path integral in general.
This provides a concrete motivation for the derivation of the
path-integral expression for the generating functional in the
pseudo-Hermitian representation of the system, i.e., $(\pH,H)$.
In \cite{bcm-2006} the authors use the
expression~(\ref{partition-fn4}) (with $t_2=-t_1\to\infty$) to
perform a perturbative calculation of the generating functional and
the one-point function for the $\cP\cT$-symmetric cubic anharmonic
oscillator (\ref{sec8-pt-sym-3}).\footnote{See also
\cite{bbmw-2002}.} As a result they find an imaginary value for the
one-point function. This is simply because the one-point function
they calculate corresponds to the ground state expectation value of
the usual position operator that is indeed not an observable of the
system. The physically meaningful generating functional is
\cite{prd-2007}
\be
Z[J]=\tr_\etap\!\left(
\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-JX]\right\}\right),
\label{partition-fn-PH-etap}
\ee
where for every linear operator $K:\cH\to\cH$,
\be
{\rm tr}_{\etap}(K):=\sum_{n=1}^N\br\psi_n| K\psi_n\kt_\etap=
\sum_{n=1}^N\br\psi_n|{\etap}K\psi_n\kt,
\label{ph-trace}
\ee
$\{\psi_n\}$ is an arbitrary orthonormal basis of $\pH$, and $X$ is
the $\etap$-pseudo-Hermitian position operator (\ref{X-P}). The
$n$-point functions generated by (\ref{partition-fn-PH-etap})
correspond to the expectation value of time-ordered products of the
physical position operators and the resulting numerical values are
necessarily real.
It is not difficult to show that $\tr_\etap=\tr$. To do this, first
we recall that because $\rho:=\sqrt\etap:\pH\to\cH$ is a unitary
operator, it maps orthonormal bases of $\pH$ onto orthonormal bases
of $\cH$. In particular, $\xi_n:=\rho\,\psi_n$ form an orthonormal
basis of $\cH$. This together with $\rho^2={\etap}$,
$\rho^\dagger=\rho$, (\ref{ph-trace}), (\ref{trace=0}), and
(\ref{trace-id}) imply
\be
{\rm tr}_{\eta_+}(K)=
\sum_{n=1}^N\br\psi_n|\rho^2K\psi_n\kt=
\sum_{n=1}^N\br\rho\,\psi_n|\rho K\psi_n\kt=
\sum_{n=1}^N\br\xi_n|\rho K\rho^{-1}\xi_n\kt=
{\rm tr}(\rho K\rho^{-1})={\rm tr}(K).
\label{tr=tr-identity}
\ee
This relation allows us to express (\ref{partition-fn-PH-etap}) in
the form
\be
Z[J]=\tr\left(
\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-JX]\right\}\right).
\label{partition-fn-PH}
\ee
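The identity $\tr_\etap=\tr$ used in arriving at
(\ref{partition-fn-PH}) is also easy to confirm numerically. The
following sketch (with a randomly generated positive-definite metric
operator, chosen only for illustration) evaluates (\ref{ph-trace}) in
an orthonormal basis of $\pH$ and compares the result with the
ordinary trace.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 4

# a randomly generated (hence purely illustrative) positive-definite metric operator
A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
eta = A.conj().T @ A + N*np.eye(N)

# rho = sqrt(eta) via the spectral theorem
w, V = np.linalg.eigh(eta)
rho = V @ np.diag(np.sqrt(w)) @ V.conj().T

K = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))

# {psi_n = rho^{-1} xi_n} is an orthonormal basis of the physical Hilbert space
psi = np.linalg.inv(rho)                   # columns are the psi_n
tr_eta = sum(np.vdot(psi[:, n], eta @ K @ psi[:, n]) for n in range(N))  # Eq. (ph-trace)

print(np.isclose(tr_eta, np.trace(K)))     # True, Eq. (tr=tr-identity)
\end{verbatim}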
Next, we employ the definitions of $h$ and $X$, namely $h:=\rho\,
H\,\rho^{-1}$ and $X:=\rho^{-1}x\,\rho$, and the fact that $\etap$
and consequently $\rho$ do not depend on time, to establish
$\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[h-Jx]\right\}=
\rho\;\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-JX]\right\}
\rho^{-1}$.
In view of this relation and (\ref{trace-id}) the right-hand sides
of (\ref{partition-fn-H}) and (\ref{partition-fn-PH}) coincide. This
is another manifestation of the physical equivalence of Hermitian
and pseudo-Hermitian representations of the system.
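In a finite-dimensional setting this equivalence can be made
completely explicit. The following sketch (illustrative only; the
matrices, the source $J(t)=\sin 3t$, and the time slicing are
arbitrary choices, and \texttt{scipy.linalg.expm} is used for the
short-time propagators) confirms that the two expressions for $Z[J]$
agree.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
N, steps, T, hbar = 4, 200, 1.0, 1.0
dt = T/steps

# illustrative finite-dimensional stand-ins: a positive-definite metric eta_+,
# rho = sqrt(eta_+), a Hermitian h, the equivalent pseudo-Hermitian H = rho^{-1} h rho,
# and the pseudo-Hermitian position-type observable X = rho^{-1} x rho
A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
eta = A.conj().T @ A + N*np.eye(N)
w, V = np.linalg.eigh(eta)
rho = V @ np.diag(np.sqrt(w)) @ V.conj().T
rho_inv = np.linalg.inv(rho)

B = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
h = B + B.conj().T                           # Hermitian Hamiltonian
xop = np.diag(np.arange(N, dtype=float))     # a Hermitian "position" operator
H = rho_inv @ h @ rho                        # eta_+-pseudo-Hermitian Hamiltonian
X = rho_inv @ xop @ rho                      # pseudo-Hermitian position

J = lambda t: np.sin(3*t)                    # an arbitrary real-valued source

# time-ordered exponentials approximated by time slicing (later times to the left)
U_h = np.eye(N, dtype=complex)
U_H = np.eye(N, dtype=complex)
for k in range(steps):
    t = (k + 0.5)*dt
    U_h = expm(-1j*dt*(h - J(t)*xop)/hbar) @ U_h
    U_H = expm(-1j*dt*(H - J(t)*X)/hbar) @ U_H

Z_herm   = np.trace(U_h)                     # Eq. (partition-fn-H)
Z_pseudo = np.trace(rho @ U_H @ rho_inv)     # tr_eta of Eq. (partition-fn-PH-etap)
print(np.isclose(Z_herm, Z_pseudo))          # True
\end{verbatim}
The agreement is exact up to rounding errors, because
$\rho\,[H-J(t)X]\,\rho^{-1}=h-J(t)x$ holds slice by slice.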
As we emphasized in the preceding sections the metric operator plays
a fundamental role in the operator formulation of pseudo-Hermitian
quantum mechanics. The same is true about the path-integral
formulation
of this theory. To elucidate this point we examine the nature of the
dependence of the generating functional on the choice of a metric
operator $\etap$.
A simple consequence of (\ref{trace-x-basis}) and
(\ref{partition-fn-PH}) is
\be
Z[J]=\int_{-\infty}^\infty dx\:\br
x|\fT\exp\left\{-\frac{i}{\hbar}\int_{t_1}^{t_2}dt\:[H-JX]\right\}|x\kt.
\label{partition-fn-PH-x-basis}
\ee
Clearly, $Z[0]$ does not depend on $\etap$, \cite{prd-2007}. This
explains the results of \cite{jakubsky-2007} pertaining to the
metric-independence of thermodynamical quantities associated with
non-interacting pseudo-Hermitian statistical mechanical models.
However, in contrast to the view expressed in \cite{jr-2007}, the
metric-independence of $Z[0]$ does not extend to $Z[J]$ with $J\neq
0$. This is actually to be expected because the knowledge of $Z[J]$
allows for the calculation of the $n$-point functions that are
expectation values of the time-ordered products of the
Heisenberg-picture $\etap$-pseudo-Hermitian position operators
$X(\tau_i)$.
The dependence of $Z[J]$ on the choice of $\etap$ is rather
implicit. In the Hermitian representation, $\etap$ enters the
expression for $Z[J]$ through the equivalent Hermitian Hamiltonian
$h$. In the pseudo-Hermitian representation, this is done through
the source term $JX$. The presence of $X$ in
(\ref{partition-fn-PH-x-basis}) prevents one from obtaining a
Lagrangian path integral for $Z[J]$ even for the cases that $H$ is a
quadratic polynomial in $p$. Therefore, contrary to claims made in
\cite{bcm-2006}, in general, the pseudo-Hermitian representation is
not practically superior to the Hermitian representation. There are
certain calculations that are performed more easily in the
pseudo-Hermitian representation, and there are others that are more
straightforward to carry out in the Hermitian representation
\cite{cjp-2004b}.
\section{Geometry of the State-Space and the Quantum Brachistochrone}
\subsection{State-Space and Its Geometry in Conventional QM}
\label{geom-conv-QM}
In conventional quantum mechanics the states are not elements of the
Hilbert space $\cH$, but the rays (one-dimensional subspaces) of the
Hilbert space.\footnote{Throughout this article the word ``state''
is used to mean ``pure state''.} The space of all rays that is
usually called the \emph{projective Hilbert space} and denoted by
$\cP(\cH)$ has the structure of a manifold. For an $N$-dimensional
Hilbert space $\cH$, $\cP(\cH)$ is the complex projective space $\C
P^{N-1}$. This is a compact manifold for finite $N$ and a well-known
infinite-dimensional manifold with very special and useful
mathematical properties for infinite $N$, \cite{gp-book}.
The projective Hilbert space $\cP(\cH)$ is usually endowed with a
natural geometric structure that is of direct relevance to physical
phenomena such as geometric phases \cite{page} and optimal-speed
unitary evolutions in quantum mechanics \cite{anandan-aharonov}. To
describe this structure, we need an appropriate representation of
the elements of $\cP(\cH)$. This is provided by the projection
operators associated with the states.
Consider a state $\lambda_\psi$ represented by a state-vector
$\psi\in\cH-\{0\}$, i.e., $\lambda_\psi=\{c\psi| c\in\C\}$, and the
projection operator
\be
\Lambda_\psi:=\frac{|\psi\kt\br\psi|}{\br\psi|\psi\kt}.
\label{state=}
\ee
Clearly the relation between states $\lambda_\psi$ and state-vectors
$c\psi$ is one to (infinitely) many. But the relation between the
states $\lambda_\psi$ and the projection operators $\Lambda_\psi$ is
one-to-one. This suggests using the latter to identify the elements
of the projective Hilbert space $\cP(\cH)$. This parametrization of
$\cP(\cH)$ has the advantage of allowing us to use the algebraic
properties of the projection operators (\ref{state=}) in the study
of states.
An important property of (\ref{state=}) is that it is a positive
operator having a unit trace. The positivity of $\Lambda_\psi$ is a
simple consequence of the identities
\be
\Lambda_\psi^\dagger=\Lambda_\psi,~~~~~~~~~
\Lambda_\psi^2=\Lambda_\psi.
\label{proj-ids}
\ee
We recall from Subsection~\ref{path-integral} that the trace of a
linear operator $J:\cH\to\cH$ is defined by
\be
{\rm tr}(J):=\sum_{n=1}^N \br\xi_n|J\xi_n\kt,
\label{trace=}
\ee
where $\{\xi_n\}$ is an arbitrary orthonormal basis of $\cH$,
\cite{reed-simon}. If $L:\cH\to\cH$ is a linear operator such that
${\rm tr}(L^\dagger L)<\infty$, $L$ is said to be a
\emph{Hilbert-Schmidt operator}. In view of (\ref{state=}),
(\ref{proj-ids}), (\ref{trace=}) and (\ref{norm2}), we have
\be
{\rm tr}(\Lambda_\psi^\dagger\Lambda_\psi)=
{\rm tr}(\Lambda_\psi^2)={\rm tr}(\Lambda_\psi)=
\sum_{n=1}^N
\frac{\br\xi_n|\psi\kt\br\psi|\xi_n\kt}{\br\psi|\psi\kt}
=\frac{\sum_{n=1}^N |\br\xi_n|\psi\kt|^2}{\br\psi|\psi\kt}
=\frac{\parallel\psi\parallel^2}{\br\psi|\psi\kt}=1.
\label{unit-trace}
\ee
Therefore, $\Lambda_\psi$ is a Hilbert-Schmidt operator with unit
trace.
The set $\fB_2(\cH)$ of Hilbert-Schmidt operators forms a subspace
of the vector space of bounded linear operators acting in $\cH$. We
can use ``${\rm tr}$'' to define the following inner product on
$\fB_2(\cH)$.
\be
( L|J ):={\rm tr}(L^\dagger J)~~~~~~~~\mbox{for all}
~~~~L,J\in\fB_2(\cH).
\label{trace-inn}
\ee
This is called the \emph{Frobenius} or \emph{Hilbert-Schmidt inner
product} \cite{horn-johnson2,reed-simon}. It has the appealing
property that given an orthonormal set $\{\chi_n\}$ of state-vectors
the corresponding set of projection operators $\{\Lambda_{\chi_n}\}$
forms an orthonormal subset of $\fB_2(\cH)$;
$\br\chi_m|\chi_n\kt=\delta_{mn}$ implies
$(\Lambda_{\chi_n}|\Lambda_{\chi_m})=\delta_{mn}$.
The set $\fH_2(\cH)$ of Hermitian Hilbert-Schmidt operators to which
the projection operators $\Lambda_\psi$ belong is a subset of
$\fB_2(\cH)$ that forms a real vector space with the usual addition
of linear operators and their scalar multiplication. It is not
difficult to see, with the help of (\ref{trace-id}), that
(\ref{trace-inn}) reduces to a real inner product on $\fH_2(\cH)$,
namely
\be
( L|J ):={\rm tr}(L J)~~~~~~~~\mbox{for all}
~~~~L,J\in\fH_2(\cH).
\label{trace-inn2}
\ee
Therefore endowing $\fH_2(\cH)$ with this inner product produces a
real inner product space.
By identifying states $\lambda_\psi$ with the projection operators
$\Lambda_\psi$ we can view the projective Hilbert space $\cP(\cH)$
as a subset of $\fH_2(\cH)$ and use the inner product
(\ref{trace-inn2}) to endow $\cP(\cH)$ with a natural metric tensor.
The corresponding line element $ds$ at $\Lambda_\psi$ is given by
\be
ds^2(\Lambda_\psi):= \frac{1}{2}\,( d\Lambda_\psi|d\Lambda_\psi )=
\frac{\br\psi|\psi\kt\br d\psi|d\psi\kt-|
\br\psi|d\psi\kt|^2}{\br\psi|\psi\kt^2},
\label{Fubini-Study=}
\ee
where we have inserted a factor of $\frac{1}{2}$ to respect a
mathematical convention and used (\ref{state=}), (\ref{trace=}),
(\ref{trace-inn2}), and the fact that $\{\xi_n\}$ is an orthonormal
basis of $\cH$.
For $N<\infty$, we can identify $\psi$ with a nonzero complex column
vector $\vec\fz$ with components $\fz_1,\fz_2,\cdots,\fz_N$. In
terms of these we can express (\ref{Fubini-Study=}) in the form
$ds^2=\sum_{a,b=1}^N g_{ab^*}d\fz_ad\fz_b^*$ where
\be
g_{ab^*}:=\frac{|\vec\fz|^2\delta_{ab}-\fz_a^*\fz_b}{
|\vec\fz|^4}.
\label{metric-global}
\ee
This is precisely the Fubini-Study metric on the complex projective
space $\C P^{N-1}$, \cite{egh}.
As a concrete example, consider the case that $\cH$ is
two-dimensional, i.e., $N=2$. Then using the standard basis
representation of operators acting in $\C^2$, and noting that in
this case all operators are Hilbert-Schmidt, we can infer that
$\fH_2(\cH)$ is equivalent to the set of all Hermitian matrices.
This is a four-dimensional real vector space which we can identify
with $\R^4$. Specifically, we can represent each $J\in\fH_2(\cH)$
using its standard matrix representation:
\be
\underline J=\left(\begin{array}{cc}
z & x-iy\\
x+iy & w\end{array}\right),
\label{R4=}
\ee
and observe that these matrices are in one-to-one correspondence
with $(x,y,z,w)\in\R^4$.
The projective Hilbert space $\cP(\cH)$ is a two-dimensional subset
of the four-dimensional real vector space $\fH_2(\cH)$. To see this
let us choose an arbitrary state-vector $\psi\in\C^2-\{\vec
0\}$. Then $\psi=\mbox{\scriptsize$\left(\begin{array}{c} \fz_1\\
\fz_2\end{array}\right)$}$ for some $\fz_1,\fz_2\in\C$ such that
$|\fz_1|^2+|\fz_2|^2\neq 0$, and in view of (\ref{state=}) we can
represent $\Lambda_\psi$ by
\be
\underline{\Lambda_\psi}=\frac{1}{|\fz_1|^2+|\fz_2|^2}
\left(\begin{array}{cc}
|\fz_1|^2 & \fz_1 \fz_2^*\\
\fz_1^* \fz_2 & |\fz_2|^2\end{array}\right).
\label{state-rep=}
\ee
Using the parametrization (\ref{R4=}), we find that for
$J=\Lambda_\psi$,
\be
x=\frac{\fz_1 \fz_2^*+\fz_1^* \fz_2}{
2(|\fz_1|^2+|\fz_2|^2)},~~~~
y=\frac{i(\fz_1 \fz_2^*-\fz_1^* \fz_2)}{
2(|\fz_1|^2+|\fz_2|^2)},~~~~
z=\frac{|\fz_1|^2}{|\fz_1|^2+|\fz_2|^2},~~~~
w=\frac{|\fz_2|^2}{|\fz_1|^2+|\fz_2|^2}.
\label{xyzw=}
\ee
Therefore, as expected $w=1-z$, so that
\be
\underline{\Lambda_\psi}=
\left(\begin{array}{cc}
z & x-iy\\
x+iy & 1-z\end{array}\right),
\label{state-rep=xyz}
\ee
and the condition $\Lambda_\psi^2=\Lambda_\psi$ takes the form
\be
x^2+y^2+(z-\frac{1}{2})^2=\frac{1}{4}.
\label{two-sphere-1}
\ee
This defines a two-dimensional sphere $S^2$ that we can use to
represent $\cP(\cH)$.
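The following short sketch (for an arbitrary randomly chosen
state-vector) computes the projection operator (\ref{state=}), reads
off the coordinates of (\ref{R4=}), and confirms
(\ref{two-sphere-1}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
# an arbitrary nonzero state-vector in C^2 (the particular numbers are irrelevant)
psi = rng.normal(size=2) + 1j*rng.normal(size=2)

Lam = np.outer(psi, psi.conj())/np.vdot(psi, psi).real    # Eq. (state=)
print(np.allclose(Lam @ Lam, Lam))                        # idempotent, Eq. (proj-ids)

# read off the parametrization (R4=): the (2,1) entry is x+iy, the diagonal is (z, w)
x, y = Lam[1, 0].real, Lam[1, 0].imag
z, w = Lam[0, 0].real, Lam[1, 1].real
print(np.isclose(w, 1 - z))                                # True
print(np.isclose(x**2 + y**2 + (z - 0.5)**2, 0.25))        # True, Eq. (two-sphere-1)
\end{verbatim}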
If we endow $\R^3$, that is parameterized by the Cartesian
coordinates $(x,y,z)$, with the Euclidean metric, we can identify
$S^2$ with a round sphere of unit diameter. We will next obtain an
expression for the metric induced on $S^2$ by the embedding
Euclidean space $\R^3$.
Let $\mathbf{N}$ and $\mathbf{S}$ respectively denote the north and
south poles of $S^2$, i.e., $\mathbf{N}:=(x=0,y=0,z=1)$ and
$\mathbf{S}:=(x=0,y=0,z=0)$, and consider the stereographic
projection of $S^2$ onto the tangent plane $\Pi_\mathbf{N}$ at
$\mathbf{N}$, as depicted in Figure~2.
\begin{figure}
\vspace{0.0cm} \hspace{0.00cm}
\centerline{\includegraphics[scale=.75,clip]{fig2.eps}}
\centerline{\parbox{16cm}{\caption{Stereographic projection of the sphere $S^2$ defined by
$x^2+y^2+(z-\frac{1}{2})^2=\frac{1}{4}$: $\mathbf{N}$ and
$\mathbf{S}$ are respectively the north and south poles of
$S^2$. $\Pi_{\mathbf{N}}$ is the tangent plane at $\mathbf{N}$.
$p$ is a point on $S^2-\{\mathbf{S}\}$, and $\rp$ is its
stereographic projection on $\Pi_{\mathbf{N}}$.}}}
\label{fig2}
\vspace{0.0cm}
\end{figure}
The line connecting $\mathbf{S}$ to an arbitrary point $p=(x,y,z)$
on $S^2-\{\mathbf{S}\}$ intersects $\Pi_\mathbf{N}$ at a point
$\rp$. If we set up a Cartesian coordinate system in
$\Pi_\mathbf{N}$ with $\mathbf{N}$ as its origin and axes parallel
to the $x$- and $y$-axes, and denote by $(\rx,\ry)$ the coordinates
of $\rp$ in this coordinate system, we can uniquely identify the
points $p\in S^2-\{\mathbf{S}\}$ with $(\rx,\ry)\in\R^2$. Using
simple methods of analytic geometry, we can easily verify that
\be
\rx=\frac{x}{z},~~~~\ry=\frac{y}{z},~~~~
x=\frac{\rx}{1+\rx^2+\ry^2},~~~~
y=\frac{\ry}{1+\rx^2+\ry^2}, ~~~~
z=\frac{1}{1+\rx^2+\ry^2}.
\label{xyz-to-xy}
\ee
We can employ the last three of these relations to compute the line
element over the sphere in the $(\rx,\ry)$-coordinates. A rather
lengthy but straightforward calculation yields
\be
ds^2=dx^2+dy^2+dz^2=\frac{d\rx^2+d\ry^2}{(1+\rx^2+\ry^2)^2}.
\label{FS=derive1}
\ee
This relation together with $ds^2=\sum_{i,j=1}^2 g^{(FS)}_{ij}
d\rx^id\rx^j$, where $\rx^1:=\rx$ and $\rx^2:=\ry$, gives the
following local coordinate expression for the Fubini-Study metric
tensor \cite{egh},
\be
g^{(FS)}_{ij}=\frac{\delta_{ij}}{\left[1+(\rx^1)^2+
(\rx^2)^2\right]^2}.
\label{FS-metric=}
\ee
Expressing $\rx$ and $\ry$ in terms of the spherical coordinates,
\be
\varphi:=\tan^{-1}\big(\mbox{$\frac{y}{x}$}\big),~~~~~~~~
\theta:=\cos^{-1}(2z-1),
\label{spherical-coor}
\ee
of $S^2$, we find
\be
\rx=\tan\left(\mbox{$\frac{\theta}{2}$}\right)\cos\varphi,
~~~~~\ry=\tan\left(\mbox{$\frac{\theta}{2}$}\right)\sin\varphi.
\label{xy=tf}
\ee
Substituting these in (\ref{FS=derive1}) leads to the following
familiar relation for the line element of a sphere of unit diameter.
\be
ds^2=\frac{1}{4}\,\left(d\theta^2+\sin^2\theta\,
d\varphi^2\right).
\label{FS-explicit}
\ee
In order to make the relationship between (\ref{Fubini-Study=}) and
(\ref{FS=derive1}) more transparent, we recall that the south pole
$\mathbf{S}$ of $S^2$ corresponds to the projection operator
$\Lambda_{e_2}$ represented by
\be
\underline{\Lambda_{e_2}}=\left(\begin{array}{cc}
0 & 0\\
0 & 1\end{array}\right)
\label{state-minus}
\ee
and the state $\lambda_{e_2}:=\{c\, e_2\,|\,c\in\C\}$, where
$e_2:=${\scriptsize$\left(\begin{array}{c}0\\1\end{array}\right)$}.
The coordinates $(\rx,\ry)$ parameterize all the states except
$\lambda_{e_2}$. These are represented by the state-vectors
\be
\psi=\left(\begin{array}{c} \fz_1\\
\fz_2\end{array}\right)~~~{\rm with}~~~\fz_1\neq 0.
\label{psi=FS}
\ee
Introducing $\rz:=\frac{\fz_2}{\fz_1}$, we can simplify the
expression (\ref{state-rep=}) for the corresponding projection
operator. In terms of $\rz$, the coordinates $x$, $y$, $z$ appearing
in (\ref{state-rep=xyz}) take the form
$x=\frac{\Re(\rz)}{1+|\rz|^2}$, $y=\frac{\Im(\rz)}{1+|\rz|^2}$,
$z=\frac{1}{1+|\rz|^2}$.
Comparing these with (\ref{xyz-to-xy}) reveals
\be
\rx=\Re(\rz),~~~~~~~~\ry=\Im(\rz).
\label{x-y=re-im-z}
\ee
Now, we are in a position to compute the line element
(\ref{Fubini-Study=}). Inserting (\ref{psi=FS}) in
(\ref{Fubini-Study=}), setting $\fz_2=\fz_1 \rz$, and using
(\ref{x-y=re-im-z}) and (\ref{xy=tf}), we find
\be
ds^2=\frac{d\rz^* d\rz}{(1+|\rz|^2)^2}=
\frac{d\rx^2+d\ry^2}{(1+\rx^2+\ry^2)^2}=
\frac{1}{4}\,\left(d\theta^2+\sin^2\theta\,
d\varphi^2\right).
\label{twice-FS=}
\ee
Therefore, as a Riemannian manifold the state-space $\cP(\cH)$ is
identical to a two-dimensional (round) sphere of unit
diameter.\footnote{The above calculation of the metric tensor on
$S^2$ makes use of the stereographic projection of
$S^2-\{\mathbf{S}\}$ onto the plane $\Pi_\mathbf{N}$ which is a copy
of $\R^2=\C$. We could also consider the stereographic projection of
$S^2-\{\mathbf{N}\}$ onto the tangent plane $\Pi_\mathbf{S}$ at
$\mathbf{S}$. Using both the projections we are able to describe all
the points on $S^2$. This is a manifestation of the manifold
structure of $S^2$.}
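The agreement between the abstract line element
(\ref{Fubini-Study=}) and its coordinate expression
(\ref{FS=derive1}) may also be checked numerically. The following
sketch (which uses a small but finite displacement, so the two sides
agree only approximately) evaluates both expressions for the
state-vectors (\ref{psi=FS}) with $\fz_1=1$ and $\rz=\rx+i\ry$.
\begin{verbatim}
import numpy as np

def psi(rx, ry):
    # state-vectors (1, rz) with rz = rx + i*ry, cf. Eq. (psi=FS) with fz1 = 1
    return np.array([1.0, rx + 1j*ry])

def ds2_definition(rx, ry, drx, dry):
    # Eq. (Fubini-Study=) evaluated on a small but finite displacement
    p, dp = psi(rx, ry), psi(rx + drx, ry + dry) - psi(rx, ry)
    nn = np.vdot(p, p).real
    return (nn*np.vdot(dp, dp).real - abs(np.vdot(p, dp))**2)/nn**2

rx, ry, drx, dry = 0.7, -0.3, 1e-5, 2e-5          # arbitrary illustrative values
lhs = ds2_definition(rx, ry, drx, dry)
rhs = (drx**2 + dry**2)/(1 + rx**2 + ry**2)**2    # Eq. (FS=derive1)
print(np.isclose(lhs, rhs, rtol=1e-3))            # True
\end{verbatim}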
\subsection{State-Space and Its Geometry in Pseudo-Hermitian QM}
\label{geom-conv-PHQM}
The construction of the space of states in pseudo-Hermitian quantum
mechanics is similar to that in conventional quantum mechanics.
Again, the states are rays in the (reference) Hilbert space $\cH$
which are identical with the rays in the physical Hilbert space
$\cH_{\rm phys}$. The only difference is in the way one associates
projection operators to states and defines an appropriate notion of
distance (metric tensor) on the state-space.
In the following we shall use $\cP(\cH_{\rm phys})$ to denote the
state-space of a pseudo-Hermitian quantum system with physical
Hilbert space $\cH_{\rm phys}$. The latter is obtained by endowing
the underlying vector space of $\cH$ with the inner product
$\br\cdot|\cdot\kt_{\eta_+}$, where ${\etap}:\cH\to\cH$ is a given
metric operator rendering the Hamiltonian of the system
${\etap}$-pseudo-Hermitian. We shall also introduce the following
notation that will simplify some of the calculations: For all
$\xi,\zeta\in\cH$, $|\zeta\pkt:=|\zeta\kt=\zeta$,
$\pbr\zeta|:=\br\zeta|{\etap}$,
$\pbr\xi|\zeta\pkt:=\br\xi|{\etap}|\zeta\kt=\br\xi|{\etap}\zeta\kt=
\br\xi|\zeta\kt_{\eta_+}$.
First we define, for each pair of linear operators $L,J:\cH\to\cH$,
\be
(L|J)_{\eta_+}:={\rm tr}_{\eta_+}(L^\sharp J)=\tr(L^\sharp J),
\label{ph-HS-inn-prod-1}
\ee
where $L^\sharp$ stands for the ${\etap}$-pseudo-adjoint of $L$:
\be
L^\sharp:={\etap}^{-1}L^\dagger{\etap},
\label{eta-pseudo-adj}
\ee
$\tr_\etap$ is defined by (\ref{ph-trace}), and we have used
(\ref{tr=tr-identity}). The linear operators $A:\pH\to\pH$ for which
$(A|A)_{\eta_+}$ is finite together with (\ref{ph-HS-inn-prod-1})
form the inner product space $\fB_2(\pH)$ of Hilbert-Schmidt
operators acting in $\cH_{\rm phys}$. Substituting
(\ref{eta-pseudo-adj}) in (\ref{ph-HS-inn-prod-1}) and using
$\rho^2={\etap}$ and $\rho^\dagger=\rho$, we also have
\be
(L|J)_{\eta_+}={\rm tr}({\etap}^{-1}
L^\dagger{\etap} J)={\rm tr}(\rho^{-1}
L^\dagger\rho^2 J \rho^{-1})={\rm tr}((\rho
L\rho^{-1})^\dagger\rho J \rho^{-1})=(\rho
L\rho^{-1}|\rho J \rho^{-1}).~~~
\label{ph-HS-inn-prod-2}
\ee
These calculations show that $\rho:\pH\to\cH$ induces a unitary
operator $U_\rho:\fB_2(\pH)\to\fB_2(\cH)$ defined by
\be
U_\rho(L):=\rho\, L\, \rho^{-1},~~~~~~\mbox{for all $L\in
\fB_2(\pH)$}.
\label{induced-unitary}
\ee
Now, consider a state $\lambda_\psi:=\{c\psi|c\in\C\}$ for some
$\psi\in\cH_{\rm phys}-\{0\}$. Because $\pbr\cdot|\cdot\pkt$ is the
inner product of $\cH_{\rm phys}$, the orthogonal projection
operator onto $\lambda_\psi$ is given by
\be
\Lambda^{(\eta_+)}_\psi:=\frac{|\psi\pkt\pbr\psi|}{
\pbr\psi|\psi\pkt}=\frac{|\psi\kt\br\psi|{\etap}}{
\br\psi|{\etap}\psi\kt}=\frac{\br\psi|\psi\kt\,\Lambda_\psi\,
{\etap}}{\br\psi|{\etap}\psi\kt}.
\label{ph-proj-op}
\ee
A quick calculation shows that
${\Lambda^{({\etap})}_\psi}^2=\Lambda^{({\etap})}_\psi$.
Furthermore, using the arguments leading to (\ref{unit-trace}), we
have
\[{\rm
tr}_{\eta_+}(\Lambda^{({\etap})\sharp}_\psi\Lambda^{({\etap})}_\psi)=
{\rm tr}_{\eta_+}(\Lambda^{({\etap})}_\psi)=\frac{\sum_{n=1}^N
\pbr\psi_n|\psi\pkt\pbr\psi|\psi_n\pkt}{\pbr\psi|\psi\pkt}
=\frac{\sum_{n=1}^N
|\pbr\psi_n|\psi\pkt|^2}{\pbr\psi|\psi\pkt}=1.\]
This shows that $\Lambda^{({\etap})}_\psi\in \fB_2(\pH)$, and
also, because ${\rm tr}_{\eta_+}={\rm tr}$,
\be
{\rm tr}(\Lambda^{({\etap})}_\psi)=1.
\label{ph-unit-trace2}
\ee
Another direct implication of (\ref{ph-proj-op}) is
$\Lambda^{({\etap})\dagger}_\psi ={\etap}
\Lambda^{({\etap})}_\psi{\etap}^{-1}$. Hence
$\Lambda^{({\etap})}_\psi$ belongs to the subset $\fH_2(\pH)$ of
${\etap}$-pseudo-Hermitian elements of $\fB_2(\pH)$. We can view
this as a real vector space. (\ref{ph-HS-inn-prod-1}) defines a real
inner product on this space, and the restriction of
(\ref{induced-unitary}) onto $\fH_2(\pH)$ that we also denote by
$U_\rho$ yields a unitary operator that maps $\fH_2(\pH)$ onto
$\fH_2(\cH)$. In fact $\fH_2(\pH)$ and $\fH_2(\cH)$ are real
separable Hilbert spaces and the existence of the unitary operator
$U_\rho:\fH_2(\pH)\to\fH_2(\cH)$ is a manifestation of the fact that
real separable Hilbert spaces of the same dimension are
unitary-equivalent.
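The following sketch (with a randomly generated metric operator,
chosen only for illustration) constructs the projection operator
(\ref{ph-proj-op}) and confirms its idempotency, its unit trace
(\ref{ph-unit-trace2}), and its $\etap$-pseudo-Hermiticity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N = 3

# a randomly generated positive-definite metric operator (illustrative choice)
A = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
eta = A.conj().T @ A + N*np.eye(N)

psi = rng.normal(size=N) + 1j*rng.normal(size=N)

# eta_+-orthogonal projection onto the ray through psi, Eq. (ph-proj-op)
Lam = np.outer(psi, psi.conj()) @ eta / np.vdot(psi, eta @ psi).real

print(np.allclose(Lam @ Lam, Lam))                                   # idempotent
print(np.isclose(np.trace(Lam), 1))                                  # Eq. (ph-unit-trace2)
print(np.allclose(Lam.conj().T, eta @ Lam @ np.linalg.inv(eta)))     # pseudo-Hermitian
\end{verbatim}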
The set of the projection operators (\ref{ph-proj-op}) that is in
one-to-one correspondence with the projective Hilbert space
$\cP(\pH)$ is a proper subset of $\fH_2(\pH)$. Similarly to the case
of conventional quantum mechanics, we can define a natural metric on
this space whose line element has the form
\bea
ds_{\eta_+}^2(\Lambda^{({\etap})}_\psi):=\frac{1}{2}
(d\Lambda^{({\etap})}_\psi|d\Lambda^{({\etap})}_\psi )_{{\etap}}
&=&\frac{\pbr\psi|\psi\pkt\pbr d\psi|d\psi\pkt-|
\pbr\psi|d\psi\pkt|^2}{\pbr\psi|\psi\pkt^2}\nn\\
&=&
\frac{\br\psi|{\etap}\psi\kt\br d\psi|{\etap} d\psi\kt-|
\br\psi|{\etap}d\psi\kt|^2}{\br\psi|{\etap}\psi\kt^2}.
\label{ph-Fubini-Study=}
\eea
It is important to note that as smooth manifolds $\cP(\cH)$ and
$\cP(\pH)$ are the same, but as Riemannian manifolds they are
different. While $\cP(\cH)$ is endowed with the Fubini-Study metric,
$\cP(\pH)$ is endowed with the metric corresponding to
(\ref{ph-Fubini-Study=}). For $N<\infty$ we can obtain a global
expression for the latter in terms of the coordinates
$\fz_1,\fz_2,\cdots,\fz_N$ of the state-vectors
$\psi=\vec\fz\in\C^N$. This yields the following generalization of
(\ref{metric-global}) that satisfies $ds_{\eta_+}^2=\sum_{a,b=1}^N
g^{({\etap})}_{ab^*}\,d\fz_a\,d\fz_b^*$.
\be
g^{({\etap})}_{ab^*}:=
\frac{\sum_{c,d=1}^N(\eta_{_+cd}\:\eta_{_+ba}-
\eta_{_+ca}\:\eta_{_+bd})\,\fz^*_c\fz_d}{
\left(\sum_{r,s=1}^N\eta_{_+rs}\:\fz_r^*\fz_s\right)^2}.
\label{ph-metric-global}
\ee
Here $\eta_{_+ab}$ are the entries of the standard representation
$\underline{{\etap}}$ of ${\etap}$, \cite{prl-2007}.
For two-level systems where $N=2$, we can easily obtain an explicit
expression for the line element (\ref{ph-Fubini-Study=}). In
general the metric operator ${\etap}$ is represented by
\be
\underline{{\etap}}=\left(\begin{array}{cc}
a & b_1-ib_2\\
b_1+i b_2 & c\end{array}\right),
\label{underline-eta=}
\ee
where $a,b_1,b_2,c\in\R$ are such that
\be
a+c={\rm tr}(\underline{{\etap}})>0,~~~~~~~~~
\rd:=ac-(b_1^2+b_2^2)=\det(\underline{{\etap}})>0.
\label{constants=}
\ee
In view of (\ref{ph-proj-op}) we can again parameterize
$\underline{\Lambda^{({\etap})}_\psi}$ using the Cartesian
coordinates $x,y,z$ of the sphere $S^2$ defined by
(\ref{two-sphere-1}). For the states differing from $\lambda_{e_2}$,
we can alternatively choose the coordinates $\rx$ and $\ry$ of
(\ref{x-y=re-im-z}) and show using (\ref{psi=FS}),
(\ref{ph-proj-op}), and (\ref{underline-eta=}) that
\be
ds_{{\etap}}^2=\frac{\rd(d\rx^2+d\ry^2)}{\left[a+2
(b_1\rx+b_2\ry)+c(\rx^2+\ry^2)\right]^2}.
\label{prl-8-cor}
\ee
It is easy to see that for ${\etap}=I$ where $a=c=1$ and
$b_1=b_2=0$, (\ref{prl-8-cor}) reproduces (\ref{twice-FS=}).
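As a numerical consistency check, the following sketch (for an
arbitrary admissible choice of the parameters $a$, $b_1$, $b_2$, $c$
and a small finite displacement) compares the defining expression
(\ref{ph-Fubini-Study=}) with the closed form (\ref{prl-8-cor}).
\begin{verbatim}
import numpy as np

a, b1, b2, c = 2.0, 0.3, -0.4, 1.0           # illustrative values obeying (constants=)
eta = np.array([[a, b1 - 1j*b2], [b1 + 1j*b2, c]])    # Eq. (underline-eta=)
d = a*c - (b1**2 + b2**2)

def psi(rx, ry):
    return np.array([1.0, rx + 1j*ry])       # Eq. (psi=FS) with fz1 = 1

def ds2_eta(rx, ry, drx, dry):
    # Eq. (ph-Fubini-Study=) on a small finite displacement
    p, dp = psi(rx, ry), psi(rx + drx, ry + dry) - psi(rx, ry)
    nn = np.vdot(p, eta @ p).real
    return (nn*np.vdot(dp, eta @ dp).real - abs(np.vdot(p, eta @ dp))**2)/nn**2

rx, ry, drx, dry = 0.2, 0.5, 1e-5, -1e-5
lhs = ds2_eta(rx, ry, drx, dry)
rhs = d*(drx**2 + dry**2)/(a + 2*(b1*rx + b2*ry) + c*(rx**2 + ry**2))**2  # (prl-8-cor)
print(np.isclose(lhs, rhs, rtol=1e-3))       # True
\end{verbatim}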
To gain a more intuitive understanding of (\ref{prl-8-cor}), we next
express its right-hand side in terms of the spherical coordinates
(\ref{spherical-coor}). Inserting (\ref{xy=tf}) in (\ref{prl-8-cor})
and carrying out the necessary calculations, we find
\be
ds_{{\etap}}^2=\frac{k_1(d\theta^2+\sin^2\theta\,
d\varphi^2)}{\left[1+k_2\cos\theta+g(\varphi)
\sin\theta\right]^2}=
\frac{k_1\:(d\theta^2+\sin^2\theta\,
d\tilde\varphi^2)}{\left[1+k_2\cos\theta+
k_3\cos\tilde\varphi\,\sin\theta\right]^2}
\label{prl-8-cor-sph}
\ee
where we have introduced
\bea
&&k_1:=\frac{\rd}{(a+c)^2}=
\frac{{\det(\underline{{\etap}})}}{
{\rm tr}(\underline{{\etap}})^2},~~~~~~~~
k_2:=\frac{a-c}{a+c},~~~~~~~~
k_3:=\frac{2\sqrt{b_1^2+b_2^2}}{a+c},
\\
&&g(\varphi):=\frac{2(b_1\cos\varphi+b_2\sin\varphi)}{a+c},~~~~~~~~
\tilde\varphi:=\varphi-\beta,~~~~~~~~
\beta:=\tan^{-1}\big(\frac{b_2}{b_1}\big).
\eea
Note that the change of coordinate $\varphi\to\tilde\varphi$
corresponds to a constant rotation about the $z$-axis, and because
of (\ref{constants=}) we have $k_1>0$, $-1<k_2<1$ and $0\leq k_3<1$.
The projective Hilbert space $\cP(\pH)$ is the Riemannian manifold
obtained by endowing the sphere $S^2$ with the metric
$\mathbf{g^{({\etap})}}$ corresponding to the line
element~(\ref{prl-8-cor-sph}).
Next, consider the general case where $N$ need not be two. In order
to compare the geometric structure of $\cP(\pH)$ and $\cP(\cH)$, we
recall that $\cP(\pH)\subset\fB_2(\pH)$,
$\cP(\cH)\subset\fB_2(\cH)$, and that the linear operator $U_\rho$
of Eq.~(\ref{induced-unitary}) maps $\fB_2(\pH)$ onto $\fB_2(\cH)$.
It is easy to check that the restriction of $U_\rho$ on $\cP(\pH)$,
i.e., the function $u_\rho:\cP(\pH)\to\cP(\cH)$ defined by
\be
u_\rho(\Lambda^{({\etap})}_\psi):=\rho\, \Lambda^{({\etap})}_\psi
\rho^{-1},~~~~~~~~~\mbox{for all $\Lambda^{({\etap})}_\psi\in
\cP(\pH)$,}
\label{u-rho-restrict}
\ee
is a diffeomorphism. Furthermore, in view of (\ref{state=}),
(\ref{ph-proj-op}), and $\rho^\dagger=\rho=\sqrt{\etap}$, we have
\be
u_\rho(\Lambda^{({\etap})}_\psi)=
\frac{\rho|\psi\kt\br\psi|{\etap}\rho^{-1}}{\br\psi|\rho^2\psi\kt}
=\frac{\rho|\psi\kt\br\psi|\rho}{\br\rho\,\psi|\rho\,\psi\kt}=
\frac{|\Psi\kt\br\Psi|}{\br\Psi|\Psi\kt}=\Lambda_\Psi,
\label{u-rho-restrict2}
\ee
where $\Psi:=\rho\,\psi\in\cH$. A straightforward consequence of
(\ref{Fubini-Study=}), (\ref{trace-inn2}), (\ref{ph-HS-inn-prod-2}),
(\ref{ph-Fubini-Study=}), (\ref{u-rho-restrict}) and
(\ref{u-rho-restrict2}) is
\bea
ds^2(u_\rho(\Lambda^{({\etap})}_\psi))&=&
ds^2(\Lambda_\Psi)=\frac{1}{2}\,(d\Lambda_\Psi|d\Lambda_\Psi)=
\frac{1}{2}\,(\rho\, d\Lambda^{({\etap})}_\psi\rho^{-1}|
\rho\, d\Lambda^{({\etap})}_\psi\rho^{-1})\nn\\
&=&\frac{1}{2}\,(d\Lambda^{({\etap})}_\psi|d\Lambda^{({\etap})}_\psi)_{{\etap}}=
ds_{{\etap}}^2(\Lambda^{({\etap})}_\psi).
\eea
This shows that $u_\rho:\cP(\pH)\to\cP(\cH)$ is an \emph{isometry},
i.e., it leaves the distances invariant. Therefore, $\cP(\pH)$ and
$\cP(\cH)$ have the same geometric structure. In particular, for
$N=2$, we can identify $\cP(\pH)$ with a sphere of unit diameter
embedded in $\R^3$ with its standard geometry.
This result is another manifestation of the fact that
pseudo-Hermitian quantum mechanics is merely an alternative
representation of the conventional quantum mechanics. Because the
geometry of the state-space may be related to physical quantities
such as geometric phases, we should not have expected to obtain
different geometric structures for $\cP(\pH)$ and $\cP(\cH)$.
\subsection{Optimal-Speed Evolutions}
Let $H:\cH\to\cH$ be a possibly time-dependent Hermitian Hamiltonian
with a discrete spectrum. Suppose that we wish to use $H$ to evolve
an initial state $\lambda_{\psi_I}\in\cP(\cH)$ into a final state
$\lambda_{\psi_F}\in\cP(\cH)$ in some time $\tau$. Then the evolving
state-vector $\psi(t)\in\cH$ satisfies
\be
H\psi(t)=i\hbar\dot\psi(t),~~~~~
\psi(0)=\psi_I,~~~~~\psi(\tau)=\psi_F,
\label{sch-eq-QBP}
\ee
and the corresponding state $\lambda_{\psi(t)}$ traverses a curve in
the projective Hilbert space $\cP(\cH)$.
The instantaneous speed for the motion of $\lambda_{\psi(t)}$ in
$\cP(\cH)$ is
\be
v_\psi:=\frac{ds}{dt},
\label{speed1=}
\ee
where $ds$ is the line element given by (\ref{Fubini-Study=}). In
view of this equation,
\be
v_\psi^2=\frac{\br\psi(t)|\psi(t)\kt\br\dot\psi(t)|\dot\psi(t)\kt-
|\br\psi(t)|\dot\psi(t)\kt|^2}{\br\psi(t)|\psi(t)\kt^2}
=\frac{\Delta E_\psi(t)^2}{\hbar^2},
\label{speed2=}
\ee
where
\be
\Delta E_\psi(t)^2:=
\frac{\br\psi(t)|H^2\psi(t)\kt}{\br\psi(t)|\psi(t)\kt}
-\frac{|\br\psi(t)|H\psi(t)\kt|^2}{\br\psi(t)|\psi(t)\kt^2},
\label{uncertainty=}
\ee
is the square of the energy uncertainty, and we have used
(\ref{sch-eq-QBP}) and the Hermiticity of $H$. We can employ
(\ref{speed1=}) and (\ref{speed2=}) to express the length of the
curve traced by $\lambda_{\psi(t)}$ in $\cP(\cH)$ in the form
\cite{anandan-aharonov}:
\be
s=\frac{1}{\hbar}\int_0^\tau \Delta E_\psi(t)\, dt.
\label{length=uncertainty}
\ee
Because $\Delta E_\psi$ is non-negative, $s$ is a monotonically
increasing function of $\tau$. This makes $\tau$ a monotonically
increasing function of $s$. Therefore, the shortest travel time is
achieved for the paths of the shortest length, i.e., the geodesics
of $\cP(\cH)$.
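The identification of the speed with $\Delta E_\psi/\hbar$ can be
checked directly. The following sketch (for a randomly chosen
Hermitian Hamiltonian and initial state-vector; the step size is an
arbitrary assumption) compares a finite-difference evaluation of
$ds/dt$ along the evolving state with (\ref{speed2=}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
hbar, N = 1.0, 4
B = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
H = B + B.conj().T                               # a random Hermitian Hamiltonian
psi0 = rng.normal(size=N) + 1j*rng.normal(size=N)

w, V = np.linalg.eigh(H)
def evolve(t):                                   # psi(t) = exp(-i t H / hbar) psi(0)
    return V @ (np.exp(-1j*w*t/hbar) * (V.conj().T @ psi0))

def fs_ds(p, dp):                                # line element (Fubini-Study=)
    nn = np.vdot(p, p).real
    return np.sqrt(nn*np.vdot(dp, dp).real - abs(np.vdot(p, dp))**2)/nn

t, dt = 0.3, 1e-5
p = evolve(t)
speed = fs_ds(p, evolve(t + dt) - p)/dt

nn = np.vdot(psi0, psi0).real
dE2 = np.vdot(psi0, H @ H @ psi0).real/nn - (np.vdot(psi0, H @ psi0).real/nn)**2
print(np.isclose(speed, np.sqrt(dE2)/hbar, rtol=1e-3))     # True, Eq. (speed2=)
\end{verbatim}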
For a time-independent Hamiltonian $H$ we have
$\psi(t)=e^{-itH/\hbar}\psi_I$ and as seen from
(\ref{uncertainty=}), $\Delta E_\psi$ is also time-independent. In
this case, (\ref{length=uncertainty}) yields
\be
\tau=\frac{\hbar s}{\Delta E_\psi},
\label{tau=QBP}
\ee
and the minimum possible travel time is achieved when $s$ is
identified with the geodesic distance between $\psi_I$ and $\psi_F$.
Because of the particular structure of $\cP(\cH)$ the geodesic(s)
connecting any two states $\lambda_{\psi_I}$ and $\lambda_{\psi_F}$
lie entirely on a two-dimensional submanifold of $\cP(\cH)$ that is
actually the projective Hilbert space $\cP(\cH_{I,F})$ for the
subspace $\cH_{I,F}$ of $\cH$ spanned by $\psi_I$ and $\psi_F$. If
the time evolution generated by the Hamiltonian minimizes the travel
time the evolving state $\lambda_{\psi_I}$ should stay in
$\cP(\cH_{I,F})$ during the evolution. This means that the problem
of determining minimum-travel-time evolutions
\cite{fleming,Vaidman,Margolus,brody,carlini1} reduces to the case
that $\cH$ is two-dimensional \cite{brody-hook}. As we saw in
Subsection~\ref{geom-conv-QM}, in this case $\cP(\cH)$ is a round
sphere of unit diameter and the geodesics are the great circles on
this sphere.
Next, we study, without loss of generality, the case $N=2$. It is
easy to show the existence of a time-independent Hamiltonian that
evolves $\lambda_{\psi_I}$ to $\lambda_{\psi_F}$ along a geodesic.
We will next construct such a Hamiltonian.
Consider a time-independent Hamiltonian $H$ acting in a
two-dimensional Hilbert space. We can always assume that $H$ has a
vanishing trace so that its eigenvalues have opposite sign,
$E_2=-E_1=:E$.\footnote{This is true for general possibly
time-dependent Hamiltonians $H(t)$. Under the gauge transformation
$\psi(t)\to \psi'(t):=e^{i\alpha(t)/\hbar}\psi(t)$ with
$\alpha(t)=N^{-1}\int_0^t {\rm tr}[H(s)]ds$, the Hamiltonian $H(t)$
transforms into the traceless Hamiltonian $H'(t):=H(t)-N^{-1}{\rm
tr}[H(t)]I$.} Because $\Delta E_\psi$ is time-independent we can
compute it at $t=0$. Expanding $\psi(0)=\psi_I$ in an orthonormal
basis $\{\psi_1,\psi_2\}$ consisting of a pair of eigenvectors of
$H$, i.e., writing it in the form
\be
\psi_I=c_1\psi_1+c_2\psi_2,~~~~~c_1,c_2\in\C,
\label{psi-expand-ini}
\ee
we find \cite{p87}
\be
\Delta E_\psi=E \sqrt{1-\left(\frac{|c_1|^2-|c_2|^2}{
|c_1|^2+|c_2|^2}\right)^2}\leq E.
\label{delta-E=}
\ee
Therefore, the travel time $\tau$ satisfies
\be
\tau\geq \tau_{\rm min}:=\frac{\hbar s}{E},
\label{bound}
\ee
where $s$ is the geodesic distance between $\lambda_{\psi_I}$ and
$\lambda_{\psi_F}$ in $\cP(\cH)$. (\ref{bound}) identifies
$\tau_{\rm min}$ with a lower bound on the travel time. Next, we
shall construct a Hamiltonian $H_\star$ with eigenvalues $\pm E$ for
which $\tau=\tau_{\rm min}$. This will, in particular, identify
$\tau_{\rm min}$ with the minimum travel time.
In order to determine $H_\star$ we only need to construct a pair of
its linearly independent eigenvectors $\psi_1$ and $\psi_2$ and use
its spectral resolution:
\be
H_\star=E\big(-|\psi_1\kt\br\psi_1|+|\psi_2\kt\br\psi_2|\big).
\label{H-star=1}
\ee
As seen from (\ref{delta-E=}), to saturate the lower bound on
$\tau$, we must have $|c_1|=|c_2|$. In view of time-independence of
$\Delta E_\psi$ we could also use $\psi_F$ to compute this quantity.
If we expand the latter as
\be
\psi_F=d_1\psi_1+d_2\psi_2,~~~~~d_1,d_2\in\C,
\label{psi-expand-fin}
\ee
we find that $\Delta E_\psi$ satisfies (\ref{delta-E=}) with
$(c_1,c_2)$ replaced by $(d_1,d_2)$. Therefore, by the same argument
we find $|d_1|=|d_2|$. Hence there must exist
$\beta_I,\beta_{F}\in\R$ such that
\be
c_2=e^{i\beta_I}c_1,~~~~~~~d_2=e^{i\beta_F}d_1.
\label{cc-dd}
\ee
Inserting these relations in (\ref{psi-expand-ini}) and
(\ref{psi-expand-fin}) gives
\be
\psi_1+e^{i\beta_I}\psi_2=c_1^{-1}\psi_I,~~~~~~
\psi_1+e^{i\beta_F}\psi_2=d_1^{-1}\psi_F.
\label{psi-eqns}
\ee
Solving these for $\psi_1$ and $\psi_2$, we obtain
\be
\psi_1=\frac{\sqrt 2\big(\hat\psi_I-
e^{\frac{i\vartheta}{2}}\hat\psi_F\big)}{1-e^{i\vartheta}},
~~~~~~~~~~~
\psi_2=\frac{\sqrt 2\:e^{-i\beta_F}\big(-\hat\psi_I+
e^{\frac{-i\vartheta}{2}}\hat\psi_F\big)}{1-e^{i\vartheta}},
\label{psi12=QBP}
\ee
where we have introduced
\be
\vartheta:=\beta_I-\beta_F,~~~~~~~
\hat\psi_I:=\frac{\psi_I}{\sqrt 2\, c_1},~~~~~~~
\hat\psi_F:=\frac{e^{\frac{i\vartheta}{2}}\psi_F}{
\sqrt 2\, d_1}.
\label{vartheta=}
\ee
Clearly $\hat\psi_n$ determines the same state as $\psi_n$ for
$n\in\{1,2\}$. Also in view of (\ref{H-star=1}), the presence of
$\beta_F$ in (\ref{psi12=QBP}) does not affect the expression for
$H_\star$. The only parameter that has physical significance is the
angle $\vartheta$. The orthonormality of $\psi_1$ and $\psi_2$
implies that $\hat\psi_I$ and $\hat\psi_F$ have unit norm and more
importantly that $\vartheta$ fulfils
\be
\br\hat\psi_I|\hat\psi_F\kt=\cos(\mbox{$\frac{\vartheta}{2}$}).
\label{angle-beta1}
\ee
In terms of $\psi_I$ and $\psi_F$ this equation takes the form
\be
\br \psi_I| \psi_F\kt=c_1^*d_1\big(1+
e^{-i\vartheta}\big).
\label{angle-beta11}
\ee
Note that because $\hat\psi_I$ and $\hat\psi_F$ have unit norm,
according to (\ref{vartheta=}), $\br\psi_I|\psi_I\kt=2|c_1|^2$ and
$\br\psi_F|\psi_F\kt=2|d_1|^2$. These relations together with
(\ref{vartheta=}) and (\ref{angle-beta1}) imply
\be
\cos^2(\mbox{$\frac{\vartheta}{2}$})=
\frac{|\br\psi_I|\psi_F\kt|^2}{\br\psi_I|\psi_I\kt\,
\br\psi_F|\psi_F\kt}.
\label{cosvartheta=}
\ee
Therefore whenever $\psi_I$ and $\psi_F$ are orthogonal,
$\vartheta=\pi$. Furthermore, as discussed in
\cite{anandan-aharonov}, (\ref{cosvartheta=}) shows that $\vartheta$
is related to the geodesic distance $s$ between $\lambda_{\psi_I}$
and $\lambda_{\psi_F}$ according to\footnote{Note that the metric on
$\cP(\cH)$ that is used in \cite{anandan-aharonov} differs from our
metric by a factor of $\sqrt 2$.}
\be
\vartheta=2s.
\label{angle-beta4}
\ee
In the case that $\psi_I$ and $\psi_F$ are orthogonal,
$\lambda_{\psi_I}$ and $\lambda_{\psi_F}$ are \emph{antipodal}
points on $\cP(\cH)$. Therefore their geodesic distance $s$ is half
of the perimeter of a great circle. Because $\cP(\cH)$ is a round
sphere of unit diameter, we have $s=\pi/2$, which is consistent with
(\ref{angle-beta4}).
Having calculated the eigenvectors $\psi_1$ and $\psi_2$, we can use
(\ref{H-star=1}) to obtain an explicit expression for the
Hamiltonian $H_\star$ that evolves $\lambda_{\psi_I}$ into
$\lambda_{\psi_F}$ in time $\tau_{\rm min}$. Substituting
(\ref{psi12=QBP}) in (\ref{H-star=1}) and using (\ref{vartheta=}),
(\ref{angle-beta1}) and (\ref{angle-beta4}), we find
\cite{carlini1,brody-hook,p87}
\be
H_\star=
\frac{iE\big(|\hat\psi_F\kt\br\hat\psi_I|-
|\hat\psi_I\kt\br\hat\psi_F|\big)}{4\sin(\frac{\vartheta}{2})}=
\frac{iE\cot(s)}{4}
\left(\frac{|\psi_F\kt\br\psi_I|}{\br\psi_I|\psi_F\kt}-
\frac{|\psi_I\kt\br\psi_F|}{\br\psi_F|\psi_I\kt}\right).
\label{H-star=expl}
\ee
The last equation shows that $H_\star$ depends only on the states
$\lambda_{\psi_I}$ and $\lambda_{\psi_F}$ and not on the particular
state-vectors one uses to represent these states. Note also that
this equation is valid generally; it applies for quantum systems
with an arbitrary finite- or infinite-dimensional $\cH$.
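The above construction is easy to implement numerically. The
following sketch (for a two-level system with an illustrative choice
of $\psi_I$ and $\psi_F$) uses the spectral form (\ref{H-star=1})
together with (\ref{psi12=QBP}), fixing the physically irrelevant
phases by hand, and verifies that the resulting Hamiltonian evolves
$\lambda_{\psi_I}$ into $\lambda_{\psi_F}$ in the time $\tau_{\rm
min}$ of (\ref{bound}).
\begin{verbatim}
import numpy as np

hbar, E = 1.0, 1.0
psi_I = np.array([1.0, 0.0], dtype=complex)
psi_F = np.array([0.6, 0.8], dtype=complex)        # an illustrative (non-orthogonal) target

# normalize and fix the relative phase so that <psi_I|psi_F> is real and positive
psi_I /= np.linalg.norm(psi_I)
psi_F /= np.linalg.norm(psi_F)
psi_F *= np.exp(-1j*np.angle(np.vdot(psi_I, psi_F)))

s = np.arccos(abs(np.vdot(psi_I, psi_F)))   # geodesic distance, Eqs. (cosvartheta=), (angle-beta4)
theta = 2*s

# eigenvectors (psi12=QBP) with the physically irrelevant choices beta_F = 0,
# c_1 = 1/sqrt(2), d_1 = exp(i theta/2)/sqrt(2), so that hat(psi)_I = psi_I, hat(psi)_F = psi_F
denom = 1 - np.exp(1j*theta)
psi1 = np.sqrt(2)*(psi_I - np.exp(1j*theta/2)*psi_F)/denom
psi2 = np.sqrt(2)*(-psi_I + np.exp(-1j*theta/2)*psi_F)/denom

H_star = E*(-np.outer(psi1, psi1.conj()) + np.outer(psi2, psi2.conj()))   # Eq. (H-star=1)

tau_min = hbar*s/E                                  # Eq. (bound)
w, V = np.linalg.eigh(H_star)
psi_tau = V @ (np.exp(-1j*w*tau_min/hbar)*(V.conj().T @ psi_I))

print(np.isclose(abs(np.vdot(psi_F, psi_tau)), 1.0))   # True
\end{verbatim}
The printed overlap is unity up to rounding errors, confirming that
the bound (\ref{bound}) is saturated by this Hamiltonian.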
This completes our discussion of the quantum Brachistochrone problem
in conventional quantum mechanics. We can use the same approach to
address this problem within the framework of pseudo-Hermitian
quantum mechanics \cite{prl-2007}. This amounts to making the
following substitutions in the above analysis: $|\psi_n\kt \to
|\psi_n\pkt$, $\br\psi_n| \to \:\pbr\psi_n|$, and $s\to
s_{{\etap}}$. In particular, the minimum travel time is given by
\be
\tau_{\rm min}^{({\etap})}=\frac{\hbar s_{{\etap}}}{E},
\label{ph-bound}
\ee
and the ${\etap}$-pseudo-Hermitian Hamiltonian that generates
minimal-travel-time evolution between $\lambda_{\psi_I}$ and
$\lambda_{\psi_F}$ has the form
\be
H^{({\etap})}_\star=
\frac{iE\cot(s_{{\etap}})}{4}
\left(\frac{|\psi_F\pkt\pbr\psi_I|}{\pbr\psi_I|\psi_F\pkt}-
\frac{|\psi_I\pkt\pbr\psi_F|}{\pbr\psi_F|\psi_I\pkt}\right),
\label{ph-H-star=expl}
\ee
where
\be
\cos^2(s_{{\etap}})=
\frac{|\pbr\psi_I|\psi_F\pkt|^2}{\pbr\psi_I|\psi_I\pkt\,
\pbr\psi_F|\psi_F\pkt}.
\label{ph-vartheta=}
\ee
Eq.~(\ref{ph-H-star=expl}) gives the expression for the most general
time-independent optimal-speed quasi-Hermitian Hamiltonian operator
that evolves $\psi_I$ into $\psi_F$. Similarly to its Hermitian
counterpart, it applies irrespective of the dimensionality of the
Hilbert space.
In \cite{bbj-2007}, the authors show using a class of
quasi-Hermitian Hamiltonians that one can evolve an initial state
$\lambda_{\psi_I}$ into a final state $\lambda_{\psi_F}$ in a time
$\tau$ that violates the condition $\tau\geq\tau_{\rm min}$. They
actually show that by appropriately choosing the form of the
quasi-Hermitian Hamiltonian one can make $\tau$ arbitrarily small.
This phenomenon can be easily explained using the above treatment of
the problem. The minimum travel time for an
${\etap}$-pseudo-Hermitian Hamiltonian is given by (\ref{ph-bound}).
In view of (\ref{prl-8-cor-sph}), depending on the value of
$k_1=\det(\underline{{\etap}})/{\rm tr}(\underline{{\etap}})^2$ one
can make $s_{{\etap}}$ and consequently $\tau_{\rm min}^{({\etap})}$
as small as one wishes. This observation does not, however, seem to
have any physically significant implications, because a physical
process that involves evolving $\lambda_{\psi_I}$ into
$\lambda_{\psi_F}$ using an ${\etap}$-pseudo-Hermitian Hamiltonian
$H$ can be described equally well by considering the evolution of
$\lambda_{\rho\psi_I}$ into $\lambda_{\rho\psi_F}$ using the
Hermitian Hamiltonian $h:=\rho\, H\,\rho^{-1}$. In light of the
existence of the isometry $u_\rho:\cP(\pH)\to\cP(\cH)$, the distance
between $\lambda_{\psi_I}$ and $\lambda_{\psi_F}$ in $\cP(\pH)$ is
equal to the distance between $\lambda_{\rho\psi_I}$ and
$\lambda_{\rho\psi_F}$ in $\cP(\cH)$. It is also easy to show that
the travel time for both the evolutions are identical. Therefore as
far as the evolution speed is concerned there is no advantage of
using the ${\etap}$-pseudo-Hermitian Hamiltonian $H$ over the
equivalent Hermitian Hamiltonian $h$, \cite{prl-2007}.
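The following sketch illustrates this point: for a fixed pair of
state-vectors (an antipodal-looking choice made here only for
illustration) and a one-parameter family of metric operators
approaching a singular limit, the distance $s_{{\etap}}$ of
(\ref{ph-vartheta=}), and with it $\tau_{\rm min}^{({\etap})}$,
becomes arbitrarily small.
\begin{verbatim}
import numpy as np

# an illustrative pair of state-vectors (antipodal with respect to the standard metric)
psi_I = np.array([1.0, 0.0], dtype=complex)
psi_F = np.array([0.0, 1.0], dtype=complex)
hbar, E = 1.0, 1.0

def s_eta(eta):
    # geodesic distance in P(H_phys) according to Eq. (ph-vartheta=)
    num = abs(np.vdot(psi_I, eta @ psi_F))**2
    den = np.vdot(psi_I, eta @ psi_I).real*np.vdot(psi_F, eta @ psi_F).real
    return np.arccos(np.sqrt(num/den))

for eps in [1.0, 0.1, 0.01, 0.001]:
    eta = np.array([[1.0, 1.0 - eps], [1.0 - eps, 1.0]])   # positive definite for 0 < eps < 2
    k1 = np.linalg.det(eta)/np.trace(eta)**2
    tau_min = hbar*s_eta(eta)/E                            # Eq. (ph-bound)
    print(f"eps={eps:7.3f}  k1={k1:7.5f}  s_eta={s_eta(eta):7.5f}  tau_min={tau_min:7.5f}")
\end{verbatim}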
We wish to emphasize that the existence of a lower bound on travel
time is significant, because it limits the speed with which one can
perform unitary transformations dynamically. Such transformations
play a central role in quantum computation. For example the
construction of efficient NOT-gates involves unitary transformations
that map a state into its antipodal state. The distance between
antipodal states in $\cP(\pH)$ is the same as in $\cP(\cH)$. Hence,
for such states $\tau^{({\etap})}_{\rm min}=\tau_{\rm min}$.
The situation is quite different if we consider the evolution
generated by the ${\etap}$-pseudo-Hermitian Hamiltonian
$H_\star^{(\etap)}$ in the standard projective Hilbert space
$\cP(\cH)$. In this case, we can indeed obtain arbitrarily fast
evolutions, but they will not be unitary \cite{fring-jpa-2007}. The
possibility of infinitely fast non-unitary evolutions is actually
not surprising. What is rather surprising is that one can achieve
such evolutions using quasi-Hermitian Hamiltonians; \emph{there are
arbitrarily fast quasi-unitary evolutions} \cite{p87}.
A scenario that is also considered in \cite{bbj-2007} is to use both
Hermitian and quasi-Hermitian Hamiltonians to produce an arbitrarily
fast evolution of $\lambda_{\psi_I}$ into $\lambda_{\psi_F}$. This
is done in three stages. First, one evolves the initial state
$\lambda_{\psi_I}$ into an auxiliary state $\lambda_{\psi'_I}$ using
a Hermitian Hamiltonian $h_1$ in time $\tau_1$, then one evolves
$\lambda_{\psi'_I}$ into another auxiliary state $\lambda_{\psi'_F}$
using a quasi-Hermitian Hamiltonian $H$ in time $\tau'$, and finally
one evolves $\lambda_{\psi'_F}$ into the desired final state
$\lambda_{\psi_F}$ using another Hermitian Hamiltonian $h_2$ in time
$\tau_2$. By choosing the intermediate states $\lambda_{\psi'_I}$
and $\lambda_{\psi'_F}$ appropriately one can make $\tau_1$ and
$\tau_2$ as small as one wishes. By choosing $H$ to be an
${\etap}$-pseudo-Hermitian operator of the form
(\ref{ph-H-star=expl}) with the parameter $k_1$ of ${\etap}$
sufficiently small one can make the total travel time
$\tau:=\tau_1+\tau'+\tau_2$ smaller than $\tau_{\rm min}$. In this
scenario both the initial and final states belong to $\cP(\cH)$, but
to maintain unitarity of the evolution one is bound to switch (the
defining metric of) the physical Hilbert space at $t=\tau_1$ and
$t=\tau_1+\tau'$. Therefore, this scheme involves a physical Hilbert
space with a time-dependent inner product. As discussed in
Subsection~\ref{sec-time-dep}, the latter violates the condition
that the Hamiltonian is an observable. Therefore, \emph{there seems
to be no legitimate way of lowering the bound on travel time between
two states of a given distance except allowing for non-unitary
(possibly quasi-unitary) evolutions.}
\section{Physical Applications}
Since its inception in the form of $\cP\cT$-symmetric models in the
late 1990's and later as a consistent quantum mechanical scheme
\cite{jpa-2004b} pseudo-Hermitian QM has been the subject of
extensive research. The vast majority of the publications on the
subject deal with issues related to formalism or various (quantum
mechanical as well as field theoretical) toy models with mostly
obscure physical meaning. There are however a number of exceptions
to this general situation where concrete problems are solved using
the methods developed within the framework of pseudo-Hermitian QM.
In this section we outline the basic ideas upon which these recent
developments rest. Before engaging in a discussion of these,
however, we wish to list some of the applications that predate the
recent activities in the field.
\subsection{Earlier Applications}
\subsubsection{Dyson Boson Mapping}
Among the earliest manifestations of pseudo-Hermitian operators is
the one appearing in the context of the Dyson mapping of Hermitian
Fermionic Hamiltonians to equivalent quasi-Hermitian bosonic
Hamiltonians \cite{dyson-1956}. Dyson mapping has subsequently found
applications in nuclear physics \cite{janssen-1971} and provided the
basic idea for the formulation of quasi-Hermitian QM \cite{geyer}.
For a brief review of the Dyson mapping method see
\cite{geyer-cjp-2004}.
\subsubsection{Complex Scaling and Resonances}
Consider the one-parameter family of operators:
$\ru_\alpha=\exp\left(\frac{i\alpha}{2\hbar}\:\{x,p\}\right)$ with
$\alpha\in\C$, that act in the Hilbert space $L^2(\R)$. We can
easily use the Baker-Campbell-Hausdorff
identity~(\ref{bch-identity}) together with the canonical
computation relation $[x,p]=i\hbar$ to show that $\ru_\alpha$
induces a scaling of the position and momentum operators: $x\to
\ru_\alpha\; x\;\ru_\alpha^{-1}=e^\alpha x$ and $p\to \ru_\alpha\;
p\;\ru_\alpha^{-1}=e^{-\alpha} p$. For $\alpha\in\R$, $\ru_\alpha$
is a unitary transformation, and one can use the latter property to
show that
$(\ru_\alpha\psi)(x)=e^{\frac{\alpha}{2}}\psi(e^{\alpha}x)$ for all
$\psi\in L^2(\R)$.
For $\alpha\in\C-\R$, the transformation $\psi\to\ru_\alpha\psi$ is
called a \emph{complex scaling} transformation. In this case,
$\ru_\alpha$ is no longer a unitary operator. In fact, neither
$\ru_\alpha$ nor its inverse is bounded. This implies that its
action on a Hermitian Hamiltonian $H$, namely $H\to H'=\ru_\alpha
H\ru_\alpha^{-1}$, that (neglecting the unboundedness of
$\ru_\alpha$ and $\ru_\alpha^{-1}$) maps $H$ into a quasi-Hermitian
Hamiltonian $H'$, can have dramatic effects on the nature of its
continuous spectrum. This observation has applications in the
treatment of resonances (where one replaces $x$ with the radial
spherical coordinate in $\R^3$). The main idea is to perform an
appropriate complex scaling transformation so that the
non-square-integrable wave functions representing resonant states of
$H$ are mapped to square-integrable eigenfunctions of $H'$. For
details, see \cite{simon,lowdin,antoniou} and references therein.
\subsubsection{Vortex Pinning in Superconductors}
Consider the Hamiltonian: $H_g=\frac{(p+ig)^2}{2m}+v(x)$, where $g$
is a real constant and $v$ is a real-valued potential. This
Hamiltonian can be mapped to the Hermitian Hamiltonian $H_0$ by the
similarity transformation:
\be
H_g\to e^{-gx/\hbar}\,H_g\,e^{gx/\hbar}=H_0.
\label{sec9-HN-2}
\ee
This in turn implies that $H_g$ is $\etap$-pseudo-Hermitian for the
metric operator $\etap:=e^{-2gx/\hbar}$. This is to be expected,
because (\ref{sec9-HN-2}) is an example of the quasi-Hermitian
Hamiltonians of the form (\ref{sec9-HN-3}) that admit $x$-dependent
metric operators. In \cite{hn-prl-1996,hn-prb-1997-8} the
Hamiltonian $H_g$ (with random potential $v$) is used in modeling a
delocalization phenomenon relevant for the vortex pinning in
superconductors. A review of the ensuing developments is provided in
\cite{hatano-pa-1998}.
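The similarity transformation (\ref{sec9-HN-2}) may be verified
symbolically. The following Python/SymPy sketch (acting on a generic
test function; the symbol names are of course arbitrary) confirms
that $e^{-gx/\hbar}\,H_g\,e^{gx/\hbar}$ reduces to
$H_0=p^2/(2m)+v(x)$.
\begin{verbatim}
import sympy as sp

x, hbar, g, m = sp.symbols('x hbar g m', positive=True)
psi = sp.Function('psi')
v = sp.Function('v')

def p(f):              # momentum operator p = -i*hbar*d/dx acting on a function of x
    return -sp.I*hbar*sp.diff(f, x)

def H_g(f):            # H_g = (p + i g)^2/(2 m) + v(x)
    return (p(p(f) + sp.I*g*f) + sp.I*g*(p(f) + sp.I*g*f))/(2*m) + v(x)*f

def H_0(f):            # H_0 = p^2/(2 m) + v(x)
    return p(p(f))/(2*m) + v(x)*f

lhs = sp.exp(-g*x/hbar)*H_g(sp.exp(g*x/hbar)*psi(x))
print(sp.simplify(lhs - H_0(psi(x))))        # 0, i.e. Eq. (sec9-HN-2)
\end{verbatim}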
\subsection{Relativistic QM, Quantum Cosmology, and QFT}
\label{rqm-cq-qft}
The issue of constructing an appropriate inner product for the
defining Hilbert space of a quantum mechanical system, which we
shall refer to as the \emph{Hilbert-space problem}, is almost as old
as quantum mechanics itself. Probably the first serious encounter
with this problem is Dirac's attempts to obtain a probabilistic
interpretation of the (first-quantized) Klein-Gordon fields in the
late 1920's. The same problem arises in the study of other bosonic
fields and particularly in the application of Dirac's method of
constrained quantization for systems with first class constraints
\cite{dirac-book}. This method defines the ``physical space'' $\cV$
of the state-vectors as the common null space (kernel) of the
constraints, but it does not specify the inner product necessary to
make $\cV$ into a Hilbert space. Often the inner product induced
from the auxiliary Hilbert space of the unconstrained system is not
physically admissible, and one must find an alternative method of
constructing an appropriate inner product.
In trying to deal with the Hilbert-space problem for Klein-Gordon
fields, Dirac was led to the discovery of the wave equation for
massive spin-half particles and the antimatter that earned him the
1933 Nobel prize in physics. Another major historical development
that has its root in attempts to address the Hilbert-space problem
is the discovery of the method of second quantization and eventually
relativistic quantum field theories. These developments did not
bring a definitive resolution for the original problem, but
diminished the interest in its solution considerably.
In the 1960's the discovery of the Hamiltonian formulation of the
General Theory of Relativity \cite{ADM} provided the necessary means
to apply Dirac's method of constrained quantization to gravity. This
led to the formulation of canonical quantum gravity and quantum
cosmology \cite{dewitt-1967,wheeler-1968} and brought the
Hilbert-space problem to forefront of research in fundamental
theoretical physics for the second time. In this context it emerges
as the problem of finding an appropriate inner product on the space
of solutions of the Wheeler-DeWitt equation. Without such an inner
product these solutions, that are often called the
``wave functions of the universe,'' are void of a physical meaning. The
lack of a satisfactory solution to this problem has been one of the
major obstacles in transforming canonical quantum gravity and
quantum cosmology into genuine physical theories
\cite{kuchar-1992,isham-1993}.
A widely used approach in dealing with the Hilbert-space problem for
Klein-Gordon and Proca fields is to use the ideas of
indefinite-metric quantum theories. These fields admit a conserved
current density whose integral over space-like hypersurfaces yields
a conserved scalar charge. This is however not positive-definite and
as a result cannot be used to define a positive-definite inner
product and make the space of all fields $\cV$ into a genuine
Hilbert space directly. This makes one pursue the following
well-known scheme \cite{wald}. First, one uses the conserved charge
to define an indefinite inner product on $\cV$, and then restricts
this indefinite inner product to the (so-called positive-energy)
subspace of $\cV$ where the inner product is positive-definite. The
common practice is to label this subspace as ``physical'' and define
the Hilbert space using the fields belonging to this ``physical
space.''
This approach is not quite satisfactory, because even for physical
fields the above-mentioned conserved current density can take
negative values \cite{bk-2003}. Therefore it cannot be identified
with a probability density. There are also other problems related
with the observables that mix ``physical fields'' with ``unphysical
fields'' or ``ghosts.''
The application of pseudo-Hermitian QM in dealing with the Hilbert
space problem in relativistic QM and quantum cosmology
\cite{cqg,ap,ijmpa-2006,ap-2006a,js-2006,zm-2008}, and the removal
of ghosts in certain quantum field theories
\cite{bender-lee-model,jones-prd-2008} relies on the construction of
an appropriate (positive-definite) inner product on the space of
solutions of the relevant field equation.\footnote{This should be
distinguished from the treatment of the Pais-Uhlenbeck oscillator
proposed in \cite{bender-mannheim}, because the latter involves
changing the boundary conditions on the field equation which in turn
changes the vector space of fields.}
The basic idea behind the application of pseudo-Hermitian QM in
dealing with the Hilbert-space problem in relativistic QM and
quantum cosmology is that the relevant field equations whose
solutions constitute the state-vectors of the desired quantum theory
are second order differential equations in a ``time''
variable.\footnote{This is the physical time variable in an inertial
frame in relativistic QM or a fictitious evolution parameter in
quantum cosmology which may not be physically admissible
\cite{ap}.} These equations have the following general form.
\be
\frac{d^2}{dt^2}\,\psi(t)+D\psi(t)=0,
\label{sec9-f-eq}
\ee
where $t$ denotes a dimensionless time variable, $\psi:\R\to\cL$ is
a function taking values in some separable Hilbert space $\cL$, and
$D:\cL\to\cL$ is a positive-definite operator that may depend on
$t$.
We can express (\ref{sec9-f-eq}) as a two-component Schr\"odinger
equation \cite{feshbach-villars},
\be
i\frac{d}{dt}\Psi(t)=H\Psi(t),
\label{sec9-2-comp}
\ee
where $\Psi:\R\to\cL^2$ and $H:\cL^2\to\cL^2$ are defined by
\cite{jpa-1998,jmp-1998}
\be
\Psi(t):=\left(\begin{array}{c}
\psi(t)+i\dot\psi(t)\\
\psi(t)-i\dot\psi(t)\end{array}\right),~~~~~
H:=\frac{1}{2}
\left(\begin{array}{cc}
D+1 & D-1\\
-D+1 & -D-1\end{array}\right),
\label{sec9-2-comp2}
\ee
$\cL^2$ stands for the Hilbert space $\cL\oplus\cL$, and a dot
denotes a $t$-derivative. The Hamiltonian (\ref{sec9-2-comp2}) can
be easily shown to be quasi-Hermitian \cite{cqg}.
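As a quick illustration (not part of the original treatment), the following Python sketch takes $\cL=\C$ and $D$ to be multiplication by an arbitrarily chosen positive number $d$, and verifies numerically that the resulting $2\times 2$ Hamiltonian has the real spectrum $\{\pm\sqrt{d}\,\}$ and is quasi-Hermitian; the metric operator is assembled from the eigenvectors of $H^\dagger$ as in the spectral method, and all numerical values are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Toy model: L = C, D = multiplication by an arbitrary positive number d.
d = 2.7
H = 0.5 * np.array([[d + 1.0, d - 1.0],
                    [1.0 - d, -d - 1.0]])

# The spectrum is real and equal to {+sqrt(d), -sqrt(d)} (oscillator frequencies).
evals = np.linalg.eigvals(H)
assert np.allclose(np.sort(evals.real), [-np.sqrt(d), np.sqrt(d)])
assert np.allclose(evals.imag, 0.0)

# Metric operator eta = sum_n |phi_n><phi_n| built from eigenvectors of H^dagger.
_, phis = np.linalg.eig(H.conj().T)
eta = sum(np.outer(phis[:, n], phis[:, n].conj()) for n in range(2))

assert np.all(np.linalg.eigvalsh(eta) > 0)       # eta is positive-definite
assert np.allclose(H.conj().T @ eta, eta @ H)    # pseudo-Hermiticity relation
\end{verbatim}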
In Subsection~\ref{sec-two-level}, we examined in detail the quantum
system defined by the Hamiltonian (\ref{sec9-2-comp2}) for the case
that $\cL$ is $\C$ with the usual Euclidean inner product and $D$ is
multiplication by a positive number. In this case, the field
equation~(\ref{sec9-f-eq}) is the classical equation of motion for a
(complex) harmonic oscillator with frequency $\sqrt D$. It turns out
that most of the practical and conceptual difficulties of addressing
the Hilbert-space problem for Klein-Gordon, Proca, and
Wheeler-DeWitt fields can be reduced to and dealt with in the
context of this simple oscillator.\footnote{For a discussion of this
particular quantization of the classical harmonic oscillator, see
\cite{jmp-2005}.} In particular, the cases in which $D$ is
$t$-dependent (which arise in quantum cosmological models) require a
more careful examination. We will not deal with these cases here.
Instead, we refer the interested reader to \cite{ap} where a
comprehensive discussion of these issues and their ramifications is
provided.
Following the approach taken in Subsection~\ref{sec-two-level}, one
can construct a metric operator $\eta_+:\cL^2\to\cL^2$ and a new
inner product $\br\cdot|\cdot\kt_\etap$ on $\cL^2$ that renders $H$
Hermitian. This defines a physical Hilbert space $\cK$ of
two-component fields $\Psi(t)$. Because $H:\cK\to\cK$ is Hermitian,
it generates a unitary time-evolution in $\cK$. In particular, for
every initial time $t_0\in\R$, every pair $\Psi_1$ and $\Psi_2$ of
solutions of the Schr\"odinger equation (\ref{sec9-2-comp}), and all
$t\in\R$,
\be
\br\Psi_1(t)|\Psi_2(t)\kt_\etap=
\br\Psi_1(t_0)|\Psi_2(t_0)\kt_\etap.
\label{sec9-unitary}
\ee
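The following continuation of the same toy model (again purely illustrative, with arbitrarily chosen initial data) checks this conservation law numerically by evolving two states with $\exp(-iHt)$ and evaluating their $\etap$ inner product at several times.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

d = 2.7
H = 0.5 * np.array([[d + 1.0, d - 1.0],
                    [1.0 - d, -d - 1.0]])
_, phis = np.linalg.eig(H.conj().T)
eta = sum(np.outer(phis[:, n], phis[:, n].conj()) for n in range(2))

inner = lambda a, b: a.conj() @ (eta @ b)        # the eta_+ inner product

Psi1 = np.array([1.0 + 0.3j, 0.2 + 0.0j])        # arbitrary initial two-component states
Psi2 = np.array([0.5 + 0.0j, -1.0j])
for t in (0.0, 0.7, 3.1):
    U = expm(-1j * H * t)                        # Psi(t) = U(t) Psi(t_0)
    assert np.isclose(inner(U @ Psi1, U @ Psi2), inner(Psi1, Psi2))
\end{verbatim}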
As vector spaces, $\cK$ (and $\cL^2$) are isomorphic to the space of
solutions of the single-component field equation (\ref{sec9-f-eq}),
i.e., $\cV:=\{\psi:\R\to\cL~|~\ddot\psi(t)+D\psi(t)=0~{\rm
for~all}~t\in\R~\}$. We can obtain an explicit realization of this
isomorphism as follows. Let $t_0$ be an initial time and
$\cU_{t_0}:\cV\to\cK$ be defined by
\be
\cU_{t_0}(\psi):=\Psi(t_0).
\label{sec9-U=}
\ee
According to (\ref{sec9-2-comp2}) and (\ref{sec9-U=}), the effect of
$\cU_{t_0}$ on solutions $\psi$ of the field equation
(\ref{sec9-f-eq}) is to map them to the corresponding initial
conditions $\psi(t_0)$ and $\dot\psi(t_0)$. Because the field
equation is linear and second order, this mapping is a linear
bijection. Therefore, $\cU_{t_0}$ is a vector space isomorphism.
This is an important observation, because it allows us to use
$\cU_{t_0}$ to induce a positive-definite inner product
$(\cdot,\cdot)_\etap$ on $\cV$ from the inner product
$\br\cdot|\cdot\kt_\etap$ on $\cK$. The induced inner product is
defined by
\be
(\psi_1,\psi_2)_\etap:=\br\cU_{t_0}(\psi_1)|\cU_{t_0}(\psi_2)\kt_\etap,
~~~~{\rm for~all}~~~~\psi_1,\psi_2\in\cV.
\label{sec9-inn-prod}
\ee
Note that in view of (\ref{sec9-U=}) and (\ref{sec9-unitary}), the
right-hand side of (\ref{sec9-inn-prod}) is independent of the value
of $t_0$. This makes $(\cdot,\cdot)_\etap$ into a well-defined inner
product on $\cV$ and gives it the structure of a Hilbert
space.\footnote{Strictly speaking one must also perform a Cauchy
completion of the inner product space obtained by endowing $\cV$
with $(\cdot,\cdot)_\etap$.}
In principle different choices for $\etap$ give rise to different
inner products $(\cdot,\cdot)_\etap$, but the resulting Hilbert
spaces $\cH_\etap:=(\cV,(\cdot,\cdot)_\etap)$ are
unitary-equivalent. The arbitrariness in the choice of $\etap$ can
be restricted by imposing additional physical conditions. For
example in the case of Klein-Gordon and Proca fields, the
requirement that $(\cdot,\cdot)_\etap$ be Lorentz-invariant reduces
the enormous freedom in the choice of $\etap$ to a finite number of
free numerical parameters \cite{cqg,zm-2008}.
The most general Lorentz-invariant and positive-definite inner
product on the space of (real or complex) Klein-Gordon fields has
the following form \cite{ap-2006a}
\be
(\psi_1,\psi_2)_\etap:=-\frac{i\hbar\kappa}{2mc}\int_{\Sigma}
d\sigma^\mu\left[\psi_1(x)^*
\stackrel{\leftrightarrow}{\partial_\mu}\cC\psi_2(x)+
a\,\psi_1(x)^*\stackrel{\leftrightarrow}{\partial_\mu}\psi_2(x)\right],
\label{kg-inn=}
\ee
where $\Sigma$ is a spacelike Cauchy hypersurface in the Minkowski
spacetime, $x:=(x^0,x^1,x^2,x^3)$ are the spacetime coordinates in
an inertial frame, $\psi_1$ and $\psi_2$ are a pair of solutions of
the Klein-Gordon equation:
$\hbar^2\left[-\partial_0^2+\nabla^2\right]\psi(x)=m^2c^2\psi(x)$,
such that for all $x^0\in\R$, $\psi(x^0,\vec x)$ and
$\partial_0\psi(x^0,\vec x)$ define square-integrable functions of
$\vec x:=(x^1,x^2,x^3)$, $\partial_\mu:=\partial/
\partial x^\mu$, $\nabla^2:=\partial_1^2+\partial_2^2+\partial_3^2$,
for any pair of differentiable functions $f$ and $g$,
$f\!\stackrel{\leftrightarrow}{\partial_\mu}\!g:= f\partial_\mu
g-g\partial_\mu f$, $\kappa\in\R^+$ and $a\in(-1,1)$ are arbitrary
dimensionless free parameters demonstrating the arbitrariness in the
choice of $\etap$, and $\cC$ is the grading operator defined by
\be
(\cC\psi)(x):=i\left(-\nabla^2+\frac{m^2c^2}{\hbar^2}\right)^{\!-1/2}\!\!
\psi(x)=\int_{\R^3}\int_{\R^3} dk^3 dy^3\;
\frac{e^{i\vec k\cdot(\vec x-\vec y)}\psi(x^0,\vec y)}{
\sqrt{\vec k^2+\frac{m^2c^2}{\hbar^2}}}.
\label{sec9-charge}
\ee
Note that $-\nabla^2+m^2c^2/\hbar^2$ is a positive operator acting
in $\cL=L^2(\R^3)$ and that $\cC$ is Lorentz-invariant
\cite{ap-2006a}.
According to (\ref{sec9-inn-prod}), as a linear operator mapping
$\cH_\etap$ to $\cK$, $\cU_{t_0}$ is a unitary operator. Similarly
$\rho:=\sqrt\etap$ is a unitary operator mapping $\cK$ to $\cL^2$.
Therefore $\rho\:\cU_{t_0}:\cH_\etap\to\cL^2$ is also unitary.
Usually $\cL$ is an $L^2$-space with well-known self-adjoint
operators. This allows for a simple characterization of the
self-adjoint operators $o$ acting in $\cL^2$. We can use these
operators and the unitary operator $\rho\:\cU_{t_0}$ to construct
the self-adjoint operators $O:\cH_\etap\to\cH_\etap$ that serve as
the observables of the desired quantum theory. This is done using
\be
O=(\rho\:\cU_{t_0})^{-1}\,o\:\rho\:\cU_{t_0}.
\label{sec9-obs}
\ee
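A finite-dimensional sketch of this pull-back (with an arbitrary invertible matrix $S$ standing in for $\rho\,\cU_{t_0}$ and an arbitrary Hermitian matrix $o$; all choices are illustrative assumptions) shows that $O=S^{-1}oS$ is indeed Hermitian with respect to the induced inner product defined by $\eta=S^\dagger S$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 4
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # stand-in for rho U_{t_0}
o = rng.standard_normal((n, n))
o = o + o.T                                  # a Hermitian "observable" acting in L^2
O = np.linalg.inv(S) @ o @ S                 # the pulled-back observable
eta = S.conj().T @ S                         # induced metric operator

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
inner = lambda a, b: a.conj() @ (eta @ b)
assert np.isclose(inner(O @ x, y), inner(x, O @ y))   # O is eta-Hermitian
\end{verbatim}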
The application of this construction for Klein-Gordon
\cite{ijmpa-2006,ap-2006a} and Proca \cite{zm-2008} fields yields
explicit expressions for the corresponding relativistic position
operators and localized states, a problem that has been a subject of
ongoing research since the 1940's \cite{pryce-1948,newton-wigner}. A
natural consequence of these developments is the construction of a
set of genuine relativistic coherent states for Klein-Gordon fields
interacting with a constant magnetic field \cite{ap-2006b}.
\subsection{Electromagnetic Wave Propagation}
An interesting application of pseudo-Hermitian QM is its role in
dealing with the centuries-old problem of the propagation of
electromagnetic waves in linear dielectric media \cite{epl}. Unlike
the applications we discussed in the preceding section, here it is
the spectral properties of quasi-Hermitian operators and their
similarity to Hermitian operators that play a key role.
Consider the propagation of the electromagnetic waves inside a
source-free dispersionless (linear) dielectric medium with
dielectric and permeability tensors $\Ep=\Ep(\vec x)$ and
$\MU=\MU(\vec x)$ that may depend on space $\vec x\in\R^3$ but not
on time $t\in\R$. Maxwell's equations in such a medium read
\cite{jackson}
\bea
&&\vec\nabla\cdot\vec D=0,~~~~~~~~~~~~~~\vec\nabla\cdot\vec B=0,
\label{sec9-max-1}\\
&&\dot{\vec B}+\vec\nabla\times\vec E=0,~~~~~~~
\dot{\vec D}-\vec\nabla\times\vec H=0,
\label{sec9-max-2}
\eea
where $\vec E$ and $\vec B$ are the electric and magnetic fields, a
dot means a time-derivative, and
\be
\vec D:=\Ep\,\vec E,~~~~~~~~~~~~~ \vec H:=\MU^{-1}\vec B.
\label{sec9-D-H}
\ee
Eqs.~(\ref{sec9-max-1}) and (\ref{sec9-max-2}) are respectively
called the constraint and dynamical equations. The former may be
viewed as conditions on the initial values of the electromagnetic
field, because once they are satisfied for some initial time the
dynamical equations ensure their validity for all time.
Similarly to the Klein-Gordon equation, we can express the dynamical
Maxwell equations (\ref{sec9-max-2}) as first order ordinary
differential equations for state-vectors belonging to a separable
Hilbert space. To achieve this we introduce the complex vector space
$\cV$ of vector fields $\vec F:\R^3\to\C^3$ and endow it with the
inner product $\br\vec F_1|\vec F_2\kt:=\int_{\R^3}d^3x\:\vec
F_1(\vec x)^*\cdot\vec F_2(\vec x),~ {\rm for~all}~ \vec F_1,\vec
F_2\in\cV$, to define the Hilbert space of square-integrable vector
fields: $\cH:=\{\vec F:\R^3\to\C^3~|~\br\vec F|\vec F\kt<\infty~\}$.
The operation of computing the curl of the (differentiable) elements
of this Hilbert space turns out to define a linear Hermitian
operator $\fD:\cH\to\cH$ according to $(\fD\vec F)(\vec
x):=\vec\nabla\times\vec F(\vec x)$. We can use $\fD$ to write
(\ref{sec9-max-2}) in the form: $\dot{\vec B}(t)+\fD\vec E(t)=0$ and
$\dot{\vec D}(t)-\fD\vec H(t)=0$. Evaluating the time-derivative of
both sides of the second of these equations and using the first of
these equations and (\ref{sec9-D-H}), we find
\be
\ddot{\vec E}(t)+\Omega^2\vec E(t)=0,
\label{sec9-max-4}
\ee
where $\Omega^2:\cH\to\cH$ is defined by
$\Omega^2:=\Ep^{-1}\fD\MU^{-1}\fD$.
In view of the fact that $\Ep$, $\MU$, and consequently $\Omega^2$
are time-independent, we can integrate (\ref{sec9-max-4}) to obtain
the following formal solution
\be
\vec E(t)=\cos(\Omega t)\vec E_0+
\Omega^{-1}\sin(\Omega t)\dot{\vec E}_0,
\label{sec9-E=}
\ee
where $\vec E_0:=\vec E(0)$, $\dot{\vec E}_0:=\dot{\vec E}(0)=
\Ep^{-1}\fD\MU^{-1}\vec B(0)$, and
\be
\cos(\Omega t):=\sum_{n=0}^\infty
\frac{(-1)^n}{(2n)!}\;(t^2\Omega^2)^n,
~~~~~~~~~
\Omega^{-1}\sin(\Omega t):=t\sum_{n=0}^\infty
\frac{(-1)^n}{(2n+1)!}\;(t^2\Omega^2)^n.
\label{sec9-cos-sin}
\ee
Given initial values of the electric and magnetic fields $\vec E(0)$
and $\vec B(0)$, we can use (\ref{sec9-E=}) and (\ref{sec9-cos-sin})
to obtain a series expansion for the evolving electric field. One
can select specific initial fields so that this expansion involves a
finite number of nonzero terms, but these are of little physical
significance. In general the resulting solution is an infinite
derivative series expansion that is extremely difficult to sum or
provide reliable estimates for. A crucial observation which
nonetheless makes this expansion useful is that, for the cases that
$\Ep$ and $\MU$ are Hermitian, the operator $\Omega^2:\cH\to\cH$ is
$\Ep$-pseudo-Hermitian: ${\Omega^2}^\dagger=
\Ep\:\Omega^2\:\Ep^{-1}$. In particular, for lossless material where
$\Ep$ is a positive $\vec x$-dependent matrix, $\Omega^2$ is a
quasi-Hermitian operator\footnote{As an operator acting in $\cH$ the
dielectric tensor $\Ep$ plays the role of a metric operator. This is
one of the rare occasions where a metric operator has a concrete
physical meaning.} that can be mapped to a Hermitian operator $h$ by
a similarity transformation, namely $h=\rho\:\Omega^2\:\rho^{-1}$
where $\rho:=\Ep^{\frac{1}{2}}$.
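The following finite-dimensional sketch (with a symmetric matrix standing in for the curl operator $\fD$ and positive diagonal matrices standing in for $\Ep$ and $\MU$; all of these are illustrative assumptions rather than a discretization of the actual operators) verifies the $\Ep$-pseudo-Hermiticity of $\Omega^2$ and its similarity to the Hermitian operator $h$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
fD = rng.standard_normal((n, n))
fD = fD + fD.T                                    # stand-in for the curl operator
Ep = np.diag(rng.uniform(1.0, 2.0, n))            # positive "dielectric tensor"
Mu = np.diag(rng.uniform(1.0, 2.0, n))            # positive "permeability tensor"

Omega2 = np.linalg.inv(Ep) @ fD @ np.linalg.inv(Mu) @ fD

# Ep-pseudo-Hermiticity: (Omega^2)^dagger = Ep Omega^2 Ep^{-1}
assert np.allclose(Omega2.T, Ep @ Omega2 @ np.linalg.inv(Ep))

rho = np.sqrt(Ep)                                 # Ep^{1/2}, entrywise since Ep is diagonal
h = rho @ Omega2 @ np.linalg.inv(rho)
assert np.allclose(h, h.T)                        # h is Hermitian (real symmetric here)
assert np.allclose(np.sort(np.linalg.eigvals(Omega2).real),
                   np.sort(np.linalg.eigvalsh(h)))
\end{verbatim}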
In terms of $h$ the solution~(\ref{sec9-E=}) takes the form: $
\vec E(t)=\rho^{-1}[\cos(h^{\frac{1}{2}}t)\rho\:\vec E_0+
h^{-\frac{1}{2}}\sin(h^{\frac{1}{2}} t)\rho\:\dot{\vec E}_0]$.
Therefore,
\be
\vec E(\vec x,t)=\br\vec x|\vec E(t)\kt=
\rho^{-1}(\vec x)\int_{\R^3}d^3y~
\left[\stackrel{\leftrightarrow}{C}\!\!(\vec x,\vec y;t)
\rho(\vec y)\vec E_0(\vec y)+
\stackrel{\leftrightarrow}{S}\!\!(\vec x,\vec y;t)
\rho(\vec y)\dot{\vec E}_0(\vec y)\right],
\label{sec9-E=h2}
\ee
where
\be
\stackrel{\leftrightarrow}{C}\!\!(\vec x,\vec y;t):=
\br\vec x|\cos(h^{\frac{1}{2}}t)|\vec y\kt,~~~~~
\stackrel{\leftrightarrow}{S}\!\!(\vec x,\vec y;t):=
\br\vec x|h^{-\frac{1}{2}}\sin(h^{\frac{1}{2}} t)|\vec y\kt.
\label{sec9-kernel}
\ee
The fact that $h$ is a Hermitian operator acting in $\cH$ makes it
possible to compute the kernels
$\stackrel{\leftrightarrow}{C}\!\!(\vec x,\vec y;t)$ and
$\stackrel{\leftrightarrow}{S}\!\!(\vec x,\vec y;t)$ of the
operators $\cos(h^{\frac{1}{2}}t)$ and
$h^{-\frac{1}{2}}\sin(h^{\frac{1}{2}} t)$ using the spectral
representation of $h$:
\be
h=\sum_{n=1}^N \sum_a E_n~|\psi_{n,a}\kt\br\psi_{n,a}|,
\label{sec9-spec-res}
\ee
where the sum over the spectral label $n$ should be identified with
an integral or a sum together with an integral whenever the spectrum
of $h$ has a continuous part, $E_n$ and $|\psi_{n,a}\kt$ denote the
eigenvalues and eigenvectors of $h$ respectively, and $a$ is a
degeneracy label. In view of (\ref{sec9-spec-res}), for every
analytic function $F$ of $h$, such as $\cos(h^{\frac{1}{2}}t)$ and
$h^{-\frac{1}{2}}\sin(h^{\frac{1}{2}} t)$, we have $\br\vec
x|F(h)|\vec y\kt=\sum_{n=1}^N\sum_a F(E_n)~\psi_{n,a}(\vec
x)\psi_{n,a}(\vec y)^*$. In the scattering setups where $\Ep$ and
$\MU$ tend to constant values, $h$ has a continuous spectrum and one
finds integral representations for the kernels (\ref{sec9-kernel})
that reduce the solution~(\ref{sec9-E=h2}) of Maxwell's equations
to performing certain integrals (after solving the eigenvalue
problem for $h$).
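In finite dimensions this recipe is immediate to implement; the sketch below (with a small arbitrary positive-definite matrix standing in for $h$, an illustrative assumption) evaluates the analogues of the kernels (\ref{sec9-kernel}) from the spectral resolution and checks that the resulting field solves the analogue of (\ref{sec9-max-4}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
h = A @ A.T + 5.0 * np.eye(5)                # positive-definite stand-in for h
E, psi = np.linalg.eigh(h)                   # h = sum_n E_n |psi_n><psi_n|
t = 0.9

F_of_h = lambda F: psi @ np.diag(F(E)) @ psi.T
C = F_of_h(lambda e: np.cos(np.sqrt(e) * t))               # cos(h^{1/2} t)
S = F_of_h(lambda e: np.sin(np.sqrt(e) * t) / np.sqrt(e))  # h^{-1/2} sin(h^{1/2} t)

# Consistency: x(t) = C x0 + S x0' solves x'' + h x = 0, the analogue of the field equation.
x0, dx0 = rng.standard_normal(5), rng.standard_normal(5)
x_t = C @ x0 + S @ dx0
x = lambda s: (F_of_h(lambda e: np.cos(np.sqrt(e) * s)) @ x0 +
               F_of_h(lambda e: np.sin(np.sqrt(e) * s) / np.sqrt(e)) @ dx0)
dt = 1e-4
second_derivative = (x(t + dt) - 2.0 * x_t + x(t - dt)) / dt**2
assert np.allclose(second_derivative, -h @ x_t, atol=1e-4)
\end{verbatim}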
Ref.~\cite{epl} outlines the application of this method for the
cases that the medium is isotropic, the initial fields as well as
the dielectric and permeability constants change only along the
$z$-direction, and the WKB approximation is applicable in dealing
with the eigenvalue problem for $h$. Under these conditions, one can
compute the kernels (\ref{sec9-kernel}) analytically. This allows
for the derivation of the following closed form expression for the
propagating electromagnetic field in terms of the initial fields
$\vec E_0$, $\dot{\vec E}_0$, and the $z$-dependent dielectric and
permeability constants $\varepsilon(z)$ and $\mu(z)$.
\bea
\vec E(z,t)&=&\frac{1}{2}
\left[\frac{\mu(z)}{\varepsilon(z)}\right]^{\frac{1}{4}}
\left\{\left[\frac{\varepsilon(w_-(z,t))}{\mu(w_-(z,t))}
\right]^{\frac{1}{4}}\!\!
\vec E_0(w_-(z,t))+
\left[\frac{\varepsilon(w_+(z,t))}{\mu(w_+(z,t))}
\right]^{\frac{1}{4}}\!\!
\vec E_0(w_+(z,t))+\right.\nn\\
&&\left.\int_{w_-(z,t)}^{w_+(z,t)} dw~
\mu(w)^{\frac{1}{4}}\varepsilon(w)^{\frac{3}{4}}
\dot{\vec E}_0(w)\right\},
\nn
\eea
where $w_\pm(z,t):=u^{-1}(u(z)\pm t)$, $u(z):=\int_0^z
d\fz\:\sqrt{\varepsilon(\fz)\mu(\fz)}$, and $u^{-1}$ stands for the
inverse of the function $u$ \cite{epl}.
The possibility of the inclusion of dispersion effects in the above
approach of solving Maxwell's equations is considered in
\cite{pla-2010a}.
\subsection{Other Applications and Physical Manifestations}
The following are some other areas where pseudo-Hermitian operators
arise and/or the methods of pseudo-Hermitian QM are used in dealing
with specific physics problems.
\subsubsection{Atomic Physics and Quantum Optics}
Effective quasi-Hermitian scattering Hamiltonians arise in the study
of the bound-state scattering from spherically symmetric short range
potentials. As shown by Matzkin in \cite{matzkin}, the use of the
machinery of pseudo-Hermitian QM in the study of these Hamiltonians
leads to a more reliable quantitative description of the scattering
problem. It also provides a better understanding of the
approximation schemes used in this context in the past and allows
for their improvement.
The relevance of pseudo-Hermitian operators to two-level atomic and
optical systems has been noted in
\cite{ben-aryeh-jpa-2004,ben-aryeh-jmo-2008,ss-jpa-2008}, and their
application in describing squeezed states is elucidated in
\cite{deb-epjd-2005,ben-aryeh-job-2005}. The optical systems provide
an important arena for manufacturing non-Hermitian and in particular
pseudo- and quasi-Hermitian effective Hamiltonians. Recent
experimental studies of $\cP\cT$-symmetric periodic potentials that
make use of $\cP\cT$-symmetric optical lattices are based on this
observation \cite{mec-prl-2008}. See also \cite{berry-2008}.
\subsubsection{Open Quantum Systems}
The emergence of non-Hermitian effective Hamiltonians in the
description of resonant states, which is based on Feshbach's
projection scheme \cite{feshback-1962}, is a very well-known
phenomenon \cite{muga-2004}. The application of a similar idea, which
replaces the projection scheme with an averaging scheme, to open
quantum systems also leads to a class of non-Hermitian effective
Hamiltonians (usually called Liouvillian or Liouville's super
operator) \cite{breur-petruccione}. These Hamiltonians, which
determine the dynamics of the reduced density operators, can, under
certain conditions, be pseudo-Hermitian or even quasi-Hermitian. In
\cite{stenholm-ap-2002,jakob-stenholm-pra-2004,stenholm-jakob-ap-2004},
Stenholm and Jakob explore the application of the properties of
pseudo- and quasi-Hermitian operators in the study of open quantum
systems. The key development reported in these articles is the
construction of a metric operator, that uses the spectral method we
discussed in Subsection~\ref{sec-spectral}, and the identification
of the corresponding norm with a viable candidate for a generalized
notion of entropy.
\subsubsection{Magnetohydrodynamics}
Pseudo-Hermitian effective Hamiltonians arise in the study of the
dynamo effect in magnetohydrodynamics
\cite{gs-jmp-2003,gsz-jmp-2005}. These Hamiltonians are typically
non-quasi-Hermitian and involve exceptional points. Therefore, they
can only be treated in the framework of indefinite-metric theories
and using the properties of Krein spaces \cite{azizov}.
\subsubsection{Quantum Chaos and Statistical Mechanics}
In \cite{djm-pra-1995}, Date et al study the spectrum of the
Hamiltonian operator: $H=\frac{1}{2}\left(p_x+\frac{\alpha
y}{r^2}\right)^2+\frac{1}{2}\left(p_y-\frac{\alpha
x}{r^2}\right)^2$, where $\alpha$ is a real coupling constant and
$r:=\sqrt{x^2+y^2}$. This Hamiltonian, which is Hermitian and
$\cP\cT$-symmetric, describes a rectangular Aharonov-Bohm billiard.
Note that the spectrum is obtained by imposing Dirichlet
boundary conditions on the boundary of the rectangular configuration
space, that is defined by $|x|\leq a$ and $|y|\leq b$ for some
$a,b\in\R^+$, and also at the location of the flux line, namely
$x=y=0$. The main result of \cite{djm-pra-1995} is that the nearest
neighbor spacing distribution for this system has a transition that
interpolates between the Poisson (level clustering) and Wigner
(level repulsion) distributions. In an attempt to obtain a random
matrix model with this kind of behavior, Ahmed and Jain constructed
and studied certain pseudo-Hermitian random matrix models in
\cite{ahmed-jain-pre,ahmed-jain-jpa}.
\subsubsection{Biophysics}
In \cite{ee-jcp-2008}, Eslami-Moossallam and Ejtehadi have
introduced the following effective Hamiltonian for the description
of the dynamics of an anisotropic DNA molecule.
\be
H:=\frac{J_1^2}{2A_1}+\frac{J_2^2}{2A_2}+\frac{J_3^2}{2C}+
i\omega_0J_3-\tilde f\cos\beta,
\label{sec9-DNA}
\ee
where $A_1,A_2,C,\omega_0,$ and $\tilde f$ are real coupling
constants, $\alpha,\beta,\gamma$ are Euler angles, and
$J_1,J_2,J_3$, that satisfy the commutation relations for angular
momentum operators, are defined by
$J_1:=-i\left(-\frac{\cos\gamma}{\sin\beta}\:\frac{\partial}{\partial\alpha}
+\sin\gamma\:\frac{\partial}{\partial\beta}+\cot\beta\cos\gamma\:
\frac{\partial}{\partial\gamma}\right)$,
$J_2:=-i\left(\frac{\sin\gamma}{\sin\beta}\:\frac{\partial}{\partial\alpha}
+\cos\gamma\:\frac{\partial}{\partial\beta}-\cot\beta\sin\gamma\:
\frac{\partial}{\partial\gamma}\right)$, and
$J_3:=-i\:\frac{\partial}{\partial\gamma}$.
Clearly the Hamiltonian (\ref{sec9-DNA}) is non-Hermitian, but it is
at the same time real, i.e., it commutes with the time-reversal
operator $\cT$. In light of the fact that $\cT$ is an antilinear
operator, this implies that $H$ is a pseudo-Hermitian operator
\cite{p3}. It would be interesting to see if this observation has
any physically interesting implications, besides the restriction it
puts on the spectrum of $H$.
We show that the fundamental group of the complex $Z$ constructed in Section \ref{subsec_label-preserving action} is irreducible. We will follow \cite[Chapter II.3.2]{wise1996non} very closely.
Let $X,Y,f_1,f_2$ be as in Section \ref{subsec_label-preserving action}. Note that there is a cubical map $r:X\times [0,1]\to Z$. An edge in $X\times [0,1]$ is \textit{vertical} if it is parallel to the $X$ factor, otherwise it is \textit{horizontal}. An edge in $Z$ is called \textit{vertical} or \textit{horizontal} if it is the $r$-image of a vertical or horizontal edge in $X\times [0,1]$. The \textit{vertical} or \textit{horizontal 1-skeleton} of $Z$, denoted by $Z^{(1)}_v$ or $Z^{(1)}_h$, is the union of all vertical edges or horizontal edges in $Z$. We identify the vertical 1-skeleton $Z_v^{(1)}$ with $Y$, hence edges in $Z_v^{(1)}$ inherit orientation and labelling from $Y$. The two vertices of $Z$ are in $Z_v^{(1)}$, and we also denote them by $u$ and $v$.
Recall that $X$ has 4 vertices $\{u,v,u',v'\}$ (see Figure \ref{the example}), which give 4 horizontal edges in $X\times [0,1]$. Their $r$-images are 4 horizontal loops in $Z$, which we denote by $e_u,e_v,e_{u'},e_{v'}$ respectively. The horizontal 1-skeleton of $Z$ has two connected components, one of the form $e_u\cup e_{u'}$, and the other of the form $e_v\cup e_{v'}$.
Then $Z$ has a graph of spaces decomposition, where the underlying graph is a circle with one vertex, the edge space is $X$, the vertex space is $Y$, and the two boundary morphisms are $f_1$ and $f_2$ respectively.
Note that if $Z$ has a finite cover which is a product of two graphs, then every vertical edge loop $\ell_1$ (i.e.\ an edge loop made of vertical edges) and every horizontal edge loop $\ell_2$ must virtually commute in $\pi_1(Z)$, i.e.\ $\ell^{n}_1$ and $\ell^{m}_2$ commute for some non-zero integers $n$ and $m$. So it suffices to find a vertical loop and a horizontal loop which do not virtually commute.
We assume $\ell_1$ and $\ell_2$ are locally geodesic. Pick a lift $\tilde{v}$ of $v$ in the universal cover $\tilde{Z}$ of $Z$. For $i=1,2$, we lift the path $\ell^{\infty}_i$ (which is the concatenation of countably infinitely many copies of $\ell_i$) to a geodesic ray $\tilde{\ell}_i$ in $\tilde{Z}$ emanating from $\tilde{v}$. Then $\tilde{\ell}_1$ and $\tilde{\ell}_2$ span a quarter-plane. We identify this quarter-plane with $[0,\infty)\times [0,\infty)$ such that $\tilde{\ell}_1=\{0\}\times [0,\infty)$. For integers $n,m\ge 0$, we define $\varphi_{n}(\ell^{m}_1)=p(\{n\}\times [0,m])$ and $\varphi_{n}(\ell^{\infty}_1)=p(\{n\}\times [0,\infty))$, where $p:\tilde{Z}\to Z$ is the covering map.
Let $\omega\subset Y$ be an edge path based at $v$. There are two lifts of $\omega$ with respect to $f_1$, based at $v$ and $v'$ respectively. We map them back to $Y$ via $f_2$ and denote the resulting edge paths by $\phi_1(\omega)$ and $\phi_2(\omega)$ respectively. If $\omega=\ell_1$ is a locally geodesic loop and $\ell_2=e_v$, then for any $n,m\ge 0$,
\begin{equation}
\label{conjugation appendix}
(\phi_1)^{n}(\omega^{m})=\varphi_n(\omega^{m}).
\end{equation}
Each $\omega$ gives rise to a word on $\{a^{\pm},b^{\pm},c^{\pm}\}$ (however, an arbitrary word may not give rise to an edge path based at $v$). Let $\sharp_{a}(\omega)$ be the number of $a$'s in $\omega$ (counted with sign). We define $\sharp_{b}(\omega)$ and $\sharp_{c}(\omega)$ in a similar way.
Let $\iota$ be the automorphism of $Y$ that flips the $a$-edge and the $c$-edge, and fixes the $b$-edge. For $i=1,2$, let $G_i$ be the subgroup of $\pi_1(Y,v)$ induced by $f_i:X\to Y$. We record the following observations.
\begin{lem}
\label{calculation}
Let $\sigma$ be an edge loop in $Y$ based at $v$.
\begin{enumerate}
\item $\sigma\in G_1\Leftrightarrow \sharp_{b}(\sigma)+\sharp_{c}(\sigma)\equiv_2 0$
\item $\phi_2=\iota\circ\phi_1$
\item $\sigma\notin G_1\Rightarrow \phi_1(\sigma^2)=\phi_1(\sigma)\cdot\phi_2(\sigma)$
\end{enumerate}
\end{lem}
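The parity criterion in Lemma \ref{calculation} (1) is elementary bookkeeping and can be checked mechanically; the following Python sketch (purely illustrative, with inverse letters written as capital letters) implements the signed counts and the resulting membership test, and confirms the behaviour of the loop $\sigma=ba$ and its powers used below.
\begin{verbatim}
# Signed letter counts #_a, #_b, #_c for a word in a, b, c and their inverses
# (capitals denote inverses). Membership in G_1 is detected by the parity of #_b + #_c.
def signed_count(word, letter):
    return word.count(letter) - word.count(letter.upper())

def in_G1(word):
    return (signed_count(word, 'b') + signed_count(word, 'c')) % 2 == 0

sigma = "ba"                     # the vertical loop sigma = ba used below
assert not in_G1(sigma)          # sigma is not in G_1
assert in_G1(sigma * 2)          # sigma^2 lies in G_1
assert not in_G1(sigma * 3)      # odd powers of sigma stay outside G_1
\end{verbatim}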
\begin{lem}
\label{double}
Let $\sigma$ be an edge loop based at $v$. If $\sigma\notin G_1$, then $\phi_1(\sigma^{2})\notin G_1$.
\end{lem}
\begin{proof}
By Lemma \ref{calculation} (1), it suffices to prove $\sharp_{b}(\phi_1(\sigma^{2}))+\sharp_{c}(\phi_1(\sigma^{2}))\equiv_2 1$.
\begin{align*}
&\sharp_{b}(\phi_1(\sigma^{2}))+\sharp_{c}(\phi_1(\sigma^{2}))\equiv_2\sharp_{a}(\phi_1(\sigma^{2}))=\\
&\sharp_{a}(\phi_1(\sigma)\cdot\phi_2(\sigma))=\sharp_{a}(\phi_1(\sigma))+\sharp_{a}(\iota\circ\phi_1(\sigma))=\\
&\sharp_{a}(\phi_1(\sigma))-\sharp_{c}(\phi_1(\sigma))\equiv_2\sharp_{a}(\phi_1(\sigma))+\sharp_{c}(\phi_1(\sigma))\equiv_2\\
&\sharp_{b}(\sigma)+\sharp_{c}(\sigma)\equiv_2 1.
\end{align*}
The first equality holds since there are even number of edges in $\phi_1(\sigma^{2})$. The second equality follows from Lemma \ref{calculation} (3). The third equality follows from Lemma \ref{calculation} (2). The fourth equality follows from the definition of $\iota$. Note that for any edge $e\subset\sigma$, the label of $e$ belongs to $\{b,c,b^{-1},c^{-1}\}$ if and only if the label of $\phi_1(e)$ in $\phi_1(\sigma)$ belongs to $\{a,c,a^{-1},c^{-1}\}$. Thus the sixth equality holds. The last equality follows from Lemma \ref{calculation} (1).
\end{proof}
In the rest of this section, we always pick $\ell_2=e_v$ in the definition of $\varphi_n$, and let $\sigma$ be the vertical loop based at $v$ of the form $ba$ (see Figure \ref{the example}). We claim $\varphi_n(\sigma^{2^{n}})\notin G_1$ for any integer $n\ge 1$. By (\ref{conjugation appendix}), it suffices to prove $\phi^{n}_1(\sigma^{2^n})\notin G_1$. Since $\sigma\notin G_1$, the case $n=1$ follows from Lemma \ref{double}. Note that when $n>1$, $\phi^{n-1}_1(\sigma^{2^n})=(\phi^{n-1}_1(\sigma^{2^{n-1}}))^{2}$. To see this, we start with $\phi_1(\sigma^{2^n})=(\phi_1(\sigma^{2}))^{2^{n-1}}$ (since $\sigma^2\in G_1$) and iterate. Thus the claim follows by iterating Lemma \ref{double}. The claim remains true if we replace $\sigma$ by $\sigma^{k}$ with $k$ odd, since $\sigma^{k}\notin G_1$ in this case.
It suffices to show that there do not exist non-zero integers $n,m$ such that $\sigma^{m}$ and $e^{n}_{v}$ commute in $\pi_1(Z,v)$. Suppose $\sigma^{m}$ and $e^{n}_{v}$ commute. We can assume $m,n>0$; moreover, replacing $m$ by $2m$ if necessary (which preserves the commutation relation), we may write $m=k\cdot 2^{l}$ with $k$ odd and $l\ge 1$. Then $\varphi_{2nl}((\sigma^{k})^{2^{2nl}}) \notin G_1$. Note that $\varphi_{2nl}((\sigma^{k})^{2^{2nl}})=\varphi_{2nl}(\sigma^{2^{2nl-l}\cdot m})$. Since $\sigma^{m}$ and $e^{n}_{v}$ commute, $\varphi_{2nl}(\sigma^{2^{2nl-l}\cdot m})=\sigma^{2^{2nl-l}\cdot m}\in G_1$ (note that $2^{2nl-l}\cdot m$ is even), which is a contradiction.
\section{Introduction}
\subsection{Background and overview}
It is well-known that if a finitely generated group is quasi-isometric to a free group, then it is commensurable to a free group. In this paper, we seek a higher dimensional version of this fact in the class of right-angled Artin groups. Recall that given a finite simplicial graph $\Ga$, the \textit{right-angled Artin group} (RAAG) $G(\Ga)$ with defining graph $\Ga$ is given by the following presentation:
\begin{center}
\{$v_i\in\textmd{Vertex}(\Ga)\ |\ [v_i,v_j]=1$ if $v_{i}$ and $v_{j}$ are joined by an edge\}.
\end{center}
This class of groups has deep connections with many other objects. Wise has proposed a program for embedding certain groups virtually into RAAG's (\cite[Section~6]{wise2009research}). One highlight in this direction is the recent solution of the virtual Haken conjecture, which relies on the embedding of fundamental groups of 3-dimensional hyperbolic manifolds into RAAG's \cite{agol2013virtual,wisestructure}. Other examples include Coxeter groups \cite{haglund2010coxeter}, certain random groups \cite{ollivier2011cubulating} and small cancellation groups \cite{wise2004cubulating}, certain classes of 3-manifold groups \cite{przytycki2012mixed,przytycki2013graph,liu2013virtual,hagen2013cocompactly}, hyperbolic free-by-cyclic groups \cite{hagen2013cubulating,hagen2014cubulating}, one-relator groups with torsion and limit groups \cite{wisestructure} etc.
A theory of special cube complexes has been developed along the way \cite{wise2002residual,haglund2008finite,MR2377497,haglund2012combination}, which can be viewed as a higher-dimensional analogue of earlier work of Stallings \cite{stallings1983topology} and Scott \cite{scott1978subgroups}. In particular, it offers a geometric way to study finite index subgroups of virtually special groups.
This paper is motivated by the following question, which falls into Gromov's program of classifying groups and metric spaces up to quasi-isometry.
\begin{que}
\label{motivating question}
Let $H$ be a finitely generated group quasi-isometric to $G(\Gamma)$. When is $H$ commensurable to $G(\Ga)$? Which $G(\Ga)$ is rigid in the sense that any such $H$ quasi-isometric to $G(\Gamma)$ is commensurable to $G(\Ga)$?
\end{que}
This question naturally leads to studying finite index subgroups of $G(\Ga)$. The aforementioned theory of special cube complexes turns out to be one of our main ingredients. We will give a class of quasi-isometrically rigid RAAG's, based upon a series of previous works \cite{bestvina2008quasiflats,bks2,huang_quasiflat,huang2014quasi,cubulation}.
Before going into a detailed discussion about RAAG's, we would like to compare RAAG's with other objects studied under Gromov's program. Previous quasi-isometry classification and rigidity results for spaces and groups can be very roughly divided into the case where various notions of non-positive curvature conditions are satisfied (like relative hyperbolicity, $CAT(0)$, coarse median etc), and the case where the non-positive curvature condition is absent (like certain solvable groups and graphs of groups etc). The non-positively curved world can be further divided into the hyperbolic case (including relatively hyperbolic with respect to flats), and the higher rank case, where there are a lot of higher dimensional flats in the space, and they have non-trivial intersection patterns. Most RAAG's belong to the last case. Other higher rank cases include (1) higher rank lattices in semisimple Lie groups \cite{kleiner1997rigidity,eskin1997quasi,eskin1998quasi}; (2) mapping class groups and related spaces \cite{eskin2015rigidity,bowditch2015large2,bowditch2015large1,bowditch2015large,behrstock2012geometry,hamenstaedt2005geometry}; (3) certain 3-manifold groups and their generalizations \cite{kapovich1997quasi,behrstock2008quasi,MR2727658,frigerio2011rigidity}. The rigidity of these spaces often depends on the complexity of the intersection pattern of flats inside the space. The classes of RAAG's studied in this paper will be more \textquotedblleft rigid\textquotedblright\ than (3), but more \textquotedblleft flexible\textquotedblright\ than (1) and (2); see \cite{bks2,huang2014quasi,cubulation} for a detailed discussion.
In general, quasi-isometries of $G(\Ga)$ may not be of bounded distance from isometries, even if they are equivariant with respect to a cobounded group action. However, it is shown in \cite{bks2,huang2014quasi} that when the outer automorphism group $\out(G(\Ga))$ is finite, every quasi-isometry is of bounded distance from a canonical representative which preserves standard flats of $G(\Ga)$. Based on this, we show in \cite{cubulation} that if $H$ is quasi-isometric to $G(\Ga)$ with $\out(G(\Ga))$ finite, then $H$ acts properly and cocompactly on a $CAT(0)$ cube complex $X$ which is closely related to the universal cover of the Salvetti complex (Section \ref{Salvetti complex}) of $G(\Ga)$.
Thus the problem is related to the commensurability classification of uniform lattices in $\aut(X)$ for a $CAT(0)$ cube complex $X$. This is well-understood when $X$ is a tree \cite{bass1990uniform}. In the case of right-angled buildings, Haglund observed a relation between commensurability of lattices and separability of certain subgroups \cite{haglund2006commensurability}; this together with \cite{agol2013virtual} gives commensurability results for some classes of uniform lattices in hyperbolic right-angled buildings. However, when $X$ is not hyperbolic, there exist uniform lattices with very exotic behaviour \cite{wise1996non,burger_mozes}. The cases we are interested in are not (relatively) hyperbolic, and the above literature will serve as a guideline for putting suitable conditions on $\Ga$.
We close this section with a summary of previously known examples and non-examples of classes of RAAG's which satisfy commensurability rigidity in the sense of Question \ref{motivating question}. The rigid ones include
\begin{itemize}
\item The free group $F_m$ of $m$-generators \cite{stallings1968torsion,dunwoody1985accessibility,karrass1973finite}, \cite[1.C]{gromov1996geometric}.
\item The free Abelian groups $\Z^{n}$ \cite{gromov1981groups,bass1972degree}.
\item $F_{m}\times \Z^{n}$ \cite{whyte2010coarse}.
\item Free products of free groups and free Abelian groups \cite{behrstock2009commensurability}.
\end{itemize}
The non-rigid classes of RAAG's include
\begin{itemize}
\item $F_m\times F_{\ell}$ with $m,\ell\ge 2$ \cite{wise1996non,burger_mozes}.
\item $G(\Ga)$ with $\Ga$ being a tree of diameter $\ge 3$ \cite{behrstock2008quasi}.
\end{itemize}
\subsection{Main results}
All graphs in this section are simple. A finite graph $\Gamma$ is \textit{star-rigid} if for any automorphism $\alpha$ of $\Gamma$, $\alpha$ is the identity whenever it fixes the closed star of some vertex $v\in\Gamma$ point-wise. Two groups $H$ and $G$ are \textit{commensurable} if there exist finite index subgroups $H'\le H$, $G'\le G$ such that $H'$ and $G'$ are isomorphic.
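For small graphs, star-rigidity can be tested by brute force; the following Python sketch (purely illustrative, and only practical for graphs with few vertices) enumerates all graph automorphisms and checks the condition, confirming for instance that the pentagon is star-rigid.
\begin{verbatim}
from itertools import permutations

def is_star_rigid(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    edge_set = {frozenset(e) for e in edges}
    for p in permutations(vertices):
        perm = dict(zip(vertices, p))
        if {frozenset((perm[u], perm[v])) for u, v in edges} != edge_set:
            continue                                  # not a graph automorphism
        fixes_a_star = any(all(perm[x] == x for x in {v} | adj[v]) for v in vertices)
        nontrivial = any(perm[x] != x for x in vertices)
        if fixes_a_star and nontrivial:
            return False
    return True

pentagon = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
assert is_star_rigid(*pentagon)
\end{verbatim}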
\begin{thm}
\label{rigidity}
Suppose $\Ga$ is a graph such that
\begin{enumerate}
\item $\Ga$ is star-rigid;
\item $\Ga$ does not contain an induced 4-cycle;
\item $\out(G(\Ga))$ is finite.
\end{enumerate}
Then any finitely generated group quasi-isometric to $G(\Ga)$ is commensurable to $G(\Ga)$.
\end{thm}
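Condition (2) is likewise a finite check on $\Ga$; the sketch below (again purely illustrative) decides whether a graph given by vertex and edge lists contains an induced 4-cycle.
\begin{verbatim}
from itertools import combinations

def has_induced_4_cycle(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    for quad in combinations(vertices, 4):
        sub = [(u, v) for u, v in combinations(quad, 2) if v in adj[u]]
        degs = sorted(sum(1 for e in sub if x in e) for x in quad)
        # an induced 4-cycle has exactly 4 edges, each vertex of degree 2
        if len(sub) == 4 and degs == [2, 2, 2, 2]:
            return True
    return False

pentagon = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
square   = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
assert not has_induced_4_cycle(*pentagon)
assert has_induced_4_cycle(*square)
\end{verbatim}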
One may find it helpful to think about the case when $\Ga$ is a pentagon. Condition (2) is sharp in the following sense.
\begin{thm}
\label{non-rigid RAAG's}
Suppose $\Ga$ is a graph which contains an induced 4-cycle. Then there exists a finitely generated group $H$ which is quasi-isometric to $G(\Ga)$ but not commensurable to $G(\Ga)$.
\end{thm}
It follows that there are plenty of RAAG's with finite outer automorphism group which are not rigid in the sense of Question \ref{motivating question}. This is quite different from the conclusion in the internal quasi-isometry classification of RAAG's \cite{bks2,huang2014quasi}.
\begin{remark}
It is natural to ask how large a portion of the RAAG's which are rigid in the sense of Question \ref{motivating question} is characterized by Theorem \ref{rigidity}. This is related to a graph theoretic question as follows. There is a 1-1 correspondence between finite simplicial graphs up to graph isomorphisms and RAAG's up to group isomorphisms \cite{droms1987isomorphisms}. Thus it makes sense to talk about random RAAG's by considering random graphs. It was shown in \cite{charney2012random,day2012finiteness} that a random RAAG satisfies (3). Moreover, since a finite random graph is asymmetric, it satisfies (1). However, we ask whether (1) and (3) are also generic properties among finite simplicial graphs which do not have induced 4-cycles. A positive answer to this question, together with Theorem \ref{rigidity} and Theorem \ref{non-rigid RAAG's}, would mean that almost all RAAG's which are rigid in the sense of Question \ref{motivating question} are characterized by Theorem \ref{rigidity}.
\end{remark}
Condition (3) of Theorem \ref{rigidity} is motivated by the study of asymptotic geometry of RAAG's. It turns out that $G(\Ga)$ satisfies a form of vertex rigidity (\cite[Theorem 1.6]{bks2} and \cite[Theorem 4.15]{huang2014quasi}) if and only if $\out(G(\Ga))$ is finite. The motivation for condition (1) and condition (2) is explained after Corollary \ref{arithmeticity}.
Let $S(\Ga)$ be the Salvetti complex of $G(\Ga)$ (Section \ref{Salvetti complex}) and let $X(\Ga)$ be the universal cover of $S(\Ga)$. The following is a simpler version of Theorem \ref{rigidity} in the case of lattices in $\aut(X(\Ga))$.
\begin{thm}
\label{lattice}
Suppose $\Ga$ is a graph such that
\begin{enumerate}
\item $\Ga$ is star-rigid;
\item $\Ga$ does not contain an induced 4-cycle.
\end{enumerate}
Let $H$ be a group acting geometrically on $X(\Ga)$ by automorphisms. Then $H$ and $G(\Ga)$ are commensurable. Moreover, let $H_1, H_2\le\aut(X(\Ga))$ be two uniform lattices. Then there exists $g\in\aut(X(\Ga))$ such that $gH_1 g^{-1}\cap H_2$ is of finite index in both $gH_1 g^{-1}$ and $H_2$.
\end{thm}
Let $H\le \aut(X(\Ga))$. Recall that the \textit{commensurator} of $H$ in $\aut(X(\Ga))$ is
$\{g\in\aut(X(\Ga))\mid gHg^{-1}\cap H\ \textmd{is of finite index in}\ H\ \textmd{and}\ gHg^{-1}\}$. Then Theorem \ref{lattice} and \cite[Theorem I]{haglund2008finite} imply the following arithmeticity result.
\begin{cor}
\label{arithmeticity}
Let $\Ga$ be as in Theorem \ref{lattice}. Then the commensurator of any uniform lattice in $\aut(X(\Ga))$ is dense in $\aut(X(\Ga))$.
\end{cor}
Recall that edges in $X(\Ga)$ can be labelled by vertices of $\Ga$ (see Section \ref{Salvetti complex}). Condition (1) is to guarantee that every $H$ has a label-preserving finite index subgroup. We caution the reader that this condition is not equivalent to the \textquotedblleft type preserving\textquotedblright\ condition introduced in \cite{haglund2006commensurability} in the study of right-angled buildings, though these two conditions have a similar flavour.
Theorem \ref{lattice} is a consequence of the following result.
\begin{thm}
\label{label-preserving rigidity}
Suppose $\Ga$ is a graph which does not contain an induced 4-cycle. Let $H$ be a group acting geometrically on $X(\Ga)$ by label-preserving automorphisms. Then $H$ is commensurable to $G(\Ga)$.
\end{thm}
For a more general version of Theorem \ref{label-preserving rigidity}, see Theorem \ref{good cover}.
The no induced 4-cycle condition is motivated by the examples of exotic groups acting geometrically on the product of two trees \cite{wise1996non,burger_mozes}. It is not hard to see that $X(\Ga)$ contains an isometrically embedded product of two trees with infinitely many ends if and only if $\Ga$ contains an induced 4-cycle. It turns out that one can modify these examples such that the exotic action on the product of two trees extends to an exotic action on $X(\Ga)$, which gives a converse to Theorem \ref{label-preserving rigidity}.
\begin{thm}
\label{label-preserving non-rigid RAAG's}
Let $\Ga$ be a graph which contains an induced 4-cycle. Then there exists a torsion free group $H$ acting geometrically on $X(\Ga)$ by label-preserving automorphisms such that $H$ is not residually finite. In particular, $H$ and $G(\Ga)$ are not commensurable.
\end{thm}
Note that even in the case where $\Ga$ is a 4-cycle, the above theorem is not completely trivial, since the examples mentioned above in \cite{wise1996non,burger_mozes} are not label-preserving. Also Theorem \ref{non-rigid RAAG's} follows from Theorem \ref{label-preserving non-rigid RAAG's}.
\subsection{Sketch of proof and organization of the paper}
We refer to Section \ref{Salvetti complex} for our notations. Let $G(\Ga)$ be a RAAG with finite outer automorphism group and let $H$ be a group quasi-isometric to $G(\Ga)$. For simplicity we assume $H$ is torsion free and $H$ acts geometrically on $X(\Ga)$. In general, $H$ admits a geometric model which is quite similar to $X(\Ga)$, which is discussed in Section \ref{sec_blow-up building} and Section \ref{sec_geometric model}.
\textit{Step 1:} We orient edges in $S(\Ga)$ and label them by vertices of $\Ga$. This lifts to a $G(\Ga)$-equivariant orientation and labelling of edges in $X(\Ga)$. We are done if $H$ happens to preserve both the orientation and the labelling. By a simple observation (Lemma \ref{label-preserving}), we can assume $H$ acts on $X(\Gamma)$ in a label-preserving way by passing to a finite index subgroup. However, the issue with orientation is more serious.
We induct on the complexity of the underlying graph, and assume Theorem \ref{label-preserving rigidity} holds for any proper subgraph of $\Ga$. Pick a vertex $v\in\Ga$ and let $\Gamma'\subset\Gamma$ be the induced subgraph spanned by vertices in $\Ga\setminus\{v\}$. Then there is a canonical embedding $S(\Ga')\to S(\Ga)$. Note that $G(\Gamma)$ is an HNN-extension of $G(\Gamma')$ along the subgroup $G(lk(v))\subset G(\Ga')$. Let $T$ be the associated Bass-Serre tree. Alternatively, $T$ is obtained from $X(\Ga)$ by collapsing edges which are in the lifts of $S(\Ga')$.
Since $H$ is label-preserving, there is an induced action $H\curvearrowright T$. Up to passing to a subgroup of index 2, we assume $H$ acts on $T$ without inversion. This induces a graph of groups decomposition of $H$ and a graph of spaces decomposition of $K=X(\Gamma)/H$. Each vertex group acts geometrically on $X(\Ga')$ by label-preserving automorphisms (since the universal cover of each vertex space is isometric to $X(\Gamma')$), hence it is commensurable to $G(\Gamma')$ by the induction assumption. It follows that each vertex space has a finite sheet cover which is a special cube complex. At this point, the reader is not required to know what exactly a special cube complex is. One may perceive it as a cube complex with some nice combinatorial features, and we will explain the relevant properties later.
Each cover of $K$ has an induced graph of spaces structure. We claim there exists a finite sheet cover $\bar{K}\to K$ such that each vertex space of $\bar{K}$ is a special cube complex. It is not hard to deduce the above theorem from this claim. The relation between specialness and commensurability is discussed in Section \ref{subsec_special and commensurability}.
Any edge group of $T/H$ acts geometrically on $X(lk(v))$. In general they could be very complicated, however, their intersections are controlled as follows.
\begin{lem}
\label{intersection}
Suppose $\Ga$ has no induced 4-cycle. Then the largest intersection of edge groups and their conjugates in a vertex group is virtually a free Abelian group.
\end{lem}
\textit{Step 2:} For simplicity, we assume $K$ only has two vertex spaces $L$ and $R$, and one edge space $E\subset K$ such that the two ends of $E\times[0,1]$ are attached isometrically to locally convex subcomplexes $E_L\subset L$ and $E_R\subset R$ respectively. Let $L_1=L\cup (E\times [0,1/2])$ and $R_1=R\cup (E\times [1/2,1])$.
It follows from the work of Haglund and Wise \cite{haglund2012combination} that if $\pi_{1}(K)$ is hyperbolic, and if $\pi_{1}(E_L)$ and $\pi_{1}(E_R)$ are malnormal in $\pi_{1}(L)$ and $\pi_{1}(R)$ respectively, then $K$ has the desired finite sheet cover. Later the malnormality assumption was dropped in \cite{wisestructure} by using cubical small cancellation theory, and hyperbolicity was relaxed to relative hyperbolicity with respect to Abelian subgroups. However, in general it is easy to give counterexamples without any hyperbolicity assumption. Moreover, these works did not take quasi-isometry invariance into consideration.
In most of our cases, both relative hyperbolicity and malnormality fail. However, $K$ has more structure than a generic special cube complex and malnormality does not fail in a terrible way (Lemma \ref{intersection}). Due to lack of hyperbolicity, we will get around the cubical small cancellation or Dehn filling argument, and use a different collapsing argument based on the blow-up building construction in \cite{cubulation}, which also applies to group quasi-isometric to $G(\Ga)$.
Let $L'\to L$ be a finite sheet special cover. It also induces a finite sheet cover $L'_1\to L_1$. Similarly we define $R'$ and $R'_1$. Usually there is more than one lift of $E_L$ in $L'$; each lift gives rise to a half-tube in $L'_1$. It suffices to match half-tubes in $L'_1$ with half-tubes in $R'_1$. Suppose there are bad half-tubes in $L'_1$ which do not match up with half-tubes in $R'_1$. It is natural to ask whether it is possible to pass to finite sheet covers of $L'_1$ such that bad half-tubes are modified in the cover while good half-tubes remain unchanged.
One ideal situation is the following. Suppose $A_{1},A_{2}\subset L'$ are elevations of $E_L$ and suppose there exists a retraction $r:L'\to A_{1}$ such that $r_{\ast}(\pi_{1}(A_{2}))$ is trivial. Then there exists a cover $L''\to L'$ which realizes any further cover of $A_{1}$ without changing $A_{2}$. We want to achieve at least some weaker version of this ideal situation. Since $L'$ is a special cube complex, $L'$ has a finite sheet cover which retracts onto $A_{1}$. This is constructed in \cite{MR2377497}, in which it is called the \textit{canonical retraction}. Then we need to look at the images of the lifts of $A_2$ under this retraction. Due to the failure of malnormality, $r_{\ast}(\pi_{1}(A_{2}))$ is in general non-trivial; however, Lemma \ref{intersection} suggests that it is reasonable to control the retraction image such that it is not more complicated than a torus.
The rough idea is that we first collapse all the tori in $K$. Then the tubes become collapsed tubes, and the previous statement is equivalent to saying that the projection of a collapsed tube to another collapsed tube is contractible. The reader may notice that if we collapse all the tori in $K$, then the resulting space is a point. In order to make this idea work, we first explode $K$ with respect to the intersection pattern of the tori in $K$, and then collapse the tori. The resulting space $K'$ encodes the intersection pattern of tori in $K$. This is done in the setting of the blow-up building construction developed in \cite{cubulation}. Moreover, this also works in the case when $H$ is a group quasi-isometric to $X(\Ga)$.
By killing certain holonomy in $K$ (see Section \ref{sec_branched complexes with trivial holonomy}), $K'$ becomes a special cube complex. Moreover, one deduces that $\pi_1(K')$ is Gromov-hyperbolic from the no induced 4-cycle condition (see Section \ref{subsec_wall projection}). Now we are in a position to apply the work of Haglund and Wise on hyperbolic special cube complexes.
\textit{Step 3:} While matching the tubes, we also need to keep track of finer information about these retraction tori, such as the lengths of the circles in the tori, and how other circles retract onto a particular circle. It turns out that the construction in \cite{MR2377497} does not quite preserve this information since the canonical completion is too large in some sense (Remark \ref{larger circle}). We need a modified version of completion and retraction, which is discussed in Section \ref{subsec_modified completeions and retractions}.
\textit{Step 4:} Given that the retraction images are tori, and the retraction preserves finer combinatorial information of tori in $K$, we construct the desired cover of $K$ in Section \ref{subsec_matching}. The argument is a modified version of \cite[Section 6]{haglund2012combination}.
\textit{Step 5:} We show how to drop the finite outer automorphism group condition in the case of groups acting geometrically on $X(\Ga)$. We will explode $K$ in a different way and decompose it into suitable vertex spaces and edge spaces. See Section \ref{sec_uniform lattice}.
\textit{Step 6:} When there is an induced 4-cycle in $\Ga$, the largest intersection of edge groups and their conjugates in a vertex group may contain a free group of rank 2. In this case it is impossible for $K$ to be virtually special in general. The counterexamples are given in Section \ref{sec_failure of commensurability}.
\subsection{Acknowledgement} This paper would not be possible without the helpful discussions with B. Kleiner. In particular, he pointed out a serious gap in the author's previous attempt to prove a special case of the main theorem. Also the author learned Lemma \ref{normal subgroup} from him. The author thanks D. T. Wise for pointing out the reference \cite{haglund2006commensurability} and X. Xie for helpful comments and clarifications.
\section{Preliminaries}
\subsection{Right-angled Artin groups and Salvetti complexes}
\label{Salvetti complex}
We refer to \cite{charney2007introduction} for background on right-angled Artin groups. Throughout this section, $\Ga$ will be a finite simplicial graph.
\begin{definition}[Salvetti complex]
Denote the vertex set of $\Gamma$ by $\{v_{i}\}_{i=1}^{m}$. We associate each $v_i$ with a standard circle $\Bbb S^{1}_{v_{i}}$ and choose a base point $p_{i}\in \Bbb S^{1}_{v_{i}}$. Let $\Bbb T^{m}=\Pi_{i=1}^{m} \Bbb S^{1}_{v_{i}}$. Then $\Bbb T^{m}$ has a natural cube complex structure. Each clique (i.e.\ complete subgraph) $\Delta\subset\Gamma$ gives rise to a subcomplex $T_{\Delta}=\Pi_{v_{i}\in v(\Delta)}\Bbb S^{1}_{v_{i}}\times \Pi_{v_{i}\notin v(\Delta)}\{p_{i}\}$. Then $S(\Gamma)$ is defined to be the subcomplex of $\Bbb T^{m}$ which is the union of all $T_{\Delta}$'s with $\Delta$ ranging over all clique subgraphs of $\Gamma$.
\end{definition}
$S(\Gamma)$ is a non-positively curved cube complex whose 2-skeleton is the presentation complex of $G(\Gamma)$, so $\pi_{1}(S(\Gamma))\cong G(\Gamma)$. The closure of each $k$-cell in $S(\Gamma)$ is a $k$-torus, which is called a \textit{standard torus}. A standard torus of dimension 1 is also called a \textit{standard circle}. The \textit{dimension} of $G(\Gamma)$ is the dimension of $S(\Gamma)$. Recall that $\Ga'\subset\Ga$ is an \textit{induced subgraph} if vertices of $\Ga'$ are adjacent in $\Ga'$ if and only if they are adjacent in $\Ga$. Each induced subgraph $\Ga'\subset\Ga$ gives rise to an isometric embedding $S(\Ga')\to S(\Ga)$. The universal cover of $S(\Gamma)$ is a $CAT(0)$ cube complex, which we denote by $X(\Ga)$. We label standard circles of $S(\Ga)$ by vertices of $\Gamma$, and this lifts to a $G(\Ga)$-invariant edge labelling of $X(\Gamma)$.
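Since the standard tori of $S(\Ga)$ correspond to the cliques of $\Ga$, the cell structure of $S(\Ga)$ can be listed mechanically for small graphs; the following Python sketch (purely illustrative) enumerates the cliques of a graph given by vertex and edge lists, so that for the pentagon one obtains the base vertex, 5 standard circles and 5 standard 2-tori.
\begin{verbatim}
from itertools import combinations

def cliques(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    out = [()]                     # the empty clique corresponds to the base vertex
    for k in range(1, len(vertices) + 1):
        for sub in combinations(vertices, k):
            if all(v in adj[u] for u, v in combinations(sub, 2)):
                out.append(sub)
    return out

pentagon = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(cliques(*pentagon))          # base vertex, 5 circles, 5 two-dimensional tori
\end{verbatim}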
\begin{definition}($\Ga'$-components)
\label{components}
Let $K$ be a cube complex. Suppose edges of $K$ are labelled by vertices of $\Ga$. Pick an induced subgraph $\Ga'\subset\Ga$. A \textit{$\Ga'$-component} $L\subset K$ is a subcomplex such that
\begin{enumerate}
\item $L$ is connected.
\item Each edge in $L$ is labelled by a vertex in $\Ga'$. Moreover, for each vertex $v\in\Ga'$, there exists an edge in $L$ which is labelled by $v$.
\item $L$ is maximal with respect to (1) and (2).
\end{enumerate}
Here $\Ga'$ is called the \textit{support} of $L$.
\end{definition}
Note that an $\emptyset$-component is a vertex in $K$. If $\Ga'$ is a complete graph on $k$ vertices, $\Ga'$-components of $X(\Ga)$ are isometrically embedded $k$-dimensional Euclidean spaces. They are called \textit{standard flats}. When $k=1$, we also call them \textit{standard geodesics}. Vertices in $X(\Ga)$ are understood to be $0$-dimensional standard flats. Also note that in order to define $\Ga'$-components in $S(\Ga)$ and $X(\Ga)$, the second part of Definition \ref{components} (2) is not necessary, since for each vertex $x$ in $S(\Ga)$ or $X(\Ga)$, and for each vertex $v\in\Ga$, there exists a $v$-labelled edge containing $x$. However, we will encounter other complexes later where this property fails.
\begin{lem}
\label{convexity of components}
Suppose $K$ is non-positively curved and parallel edges of $K$ are labelled by the same vertex. Then each $\Ga'$-component $L$ is locally convex. In particular, if $K$ is $CAT(0)$, then $L$ is $CAT(0)$.
\end{lem}
\begin{proof}
It suffices to check that for each vertex $x\in L$, if a collection of edges $\{e_i\}_{i=1}^{n}$ emanating from $x$ spans a cube in the ambient complex, then this cube is in $L$. This follows from the fact that parallel edges have the same label, together with the maximality of $L$.
\end{proof}
The following object was first introduced in \cite{kim2013embedability}.
\begin{definition}[extension complex]
\label{extension complex}
The \textit{extension complex} $\P(\Ga)$ of a finite simplicial graph $\Ga$ is defined as follows. The vertices of $\mathcal{P}(\Gamma)$ are in 1-1 correspondence with the parallel classes of standard geodesics in $X(\Gamma)$. Two distinct vertices $v_{1},v_{2}\in\mathcal{P}(\Gamma)$ are connected by an edge if and only if there is a standard geodesic $l_{i}$ in the parallel class associated with $v_{i}$ ($i=1,2$) such that $l_{1}$ and $l_{2}$ span a standard 2-flat. Then $\mathcal{P}(\Gamma)$ is defined to be the flag complex of its 1-skeleton, namely we build $\mathcal{P}(\Gamma)$ inductively from its 1-skeleton by filling a $k$-simplex whenever we see the $(k-1)$-skeleton of a $k$-simplex.
Since each complete subgraph in the 1-skeleton of $\mathcal{P}(\Gamma)$ gives rise to a collection of mutually orthogonal standard geodesics lines in $X(\Ga)$, there is a 1-1 correspondence between $k$-simplexes in $\mathcal{P}(\Gamma)$ and parallel classes of standard $(k+1)$-flats in $X(\Gamma)$. For standard flat $F\subset X(\Gamma)$, we denote the simplex in $\mathcal{P}(\Gamma)$ which corresponds to standard flats parallel to $F$ by $\Delta(F)$.
\end{definition}
Each vertex $v\in\P(\Ga)$ is labelled by a vertex of $\Ga$ in the following way. Pick a standard geodesic $\ell\subset X(\Ga)$ such that $\Delta(\ell)=v$ and pick an edge $e\subset\ell$. We label $v$ by the label of $e$. Note that this does not depend on the choice of $\ell$ in the parallel class, nor on the choice of the edge inside $\ell$. This labelling induces a map from the 0-skeleton of $\P(\Ga)$ to the vertices of $\Ga$, which can be extended to a simplicial map $\P(\Ga)\to F(\Ga)$, where $F(\Ga)$ is the flag complex of $\Ga$.
We refer to \cite[Section 2.3.3]{kleiner1997rigidity} for the definition and properties of the \textit{parallel set} of a convex subset in a $CAT(0)$ space.
\begin{definition}
\label{v-parallel set}
For vertex $v\in\P(\Ga)$, the \textit{$v$-parallel set}, which we denote by $P_v$, is the parallel set of a standard geodesic $\ell\subset X(\Ga)$ such that $\Delta(\ell)=v$. Note that $P_v$ does not depend on the choice of the standard geodesic $\ell$ in the parallel class.
\end{definition}
\begin{definition}
\label{definition of links}
Pick induced subgraph $\Ga'\subset\Ga$. The \textit{link} of $\Ga'$, denoted by $lk(\Ga')$, is the induced subgraph spanned by vertices which are adjacent to every vertex in $\Ga'$. The \textit{closed star} of $\Ga'$, denoted by $St(\Ga')$, is the induced subgraph spanned by vertices in $\Ga'$ and $lk(\Ga')$.
Let $K$ be a polyhedral complex and pick $x\in K$. The \textit{geometrical link} of $x$ in $K$, denoted by $Lk(x,K)$, is the object defined in \cite[Chapter I.7.15]{bridson1999metric}.
\end{definition}
In general, these two notions of links do not agree on graphs.
\begin{lem}
\label{parallel set}
$($\cite[Lemma 3.4]{huang2014quasi}$)$
Let $K$ be a $\Ga'$-component in $X(\Ga)$. Then the parallel set $P_{K}$ of $K$ is exactly the $St(\Ga')$-component containing $K$. Moreover, $P_K$ admits a splitting $P_K=K\times K^{\perp}$, where $K^{\perp}$ is isomorphic to a $lk(\Ga')$-component.
\end{lem}
In particular, for vertex $v\in\P(\Ga)$, $P_v$ is the $St(\bar{v})$-component that contains $\ell$, where $\ell$ is a standard geodesic with $\Delta(\ell)=v$ and $\bar{v}\in\Ga$ is the label of edges in $\ell$.
\subsection{Special cube complex}
We refer to Section 2.A and Section 2.B of \cite{haglund2012combination} for background about cube complexes and hyperplanes. An edge is \textit{dual} to a hyperplane if it has nonempty intersection with the hyperplane. A hyperplane $h$ is \textit{2-sided} if it has a small neighbourhood which is a trivial bundle over $h$. Two edges are \textit{parallel} if they are dual to the same hyperplane.
\label{subsec_special cube complex}
Let $X$ be a cube complex and pick a two-sided hyperplane $h\subset X$. Then parallelism induces a well-defined orientation on the edges dual to $h$. For each vertex $v\in X$, let $g_{v}$ be the graph made of all dual edges of $h$ that contain $v$ ($g_v$ could be empty). Then the geometric link $Lk(v,g_{v})$ is a discrete graph and each of its vertices is either incoming or outgoing depending on the orientations of the edges.
We say $h$ \textit{self-osculates} if
\begin{enumerate}
\item $h$ is two-sided and embedded;
\item there exists a vertex $v\in X$ such that $Lk(v,g_{v})$ has more than one point.
\end{enumerate}
In this case, $h$ \textit{directly self-osculates} if there exists a vertex $v\in X$ such that $Lk(v,g_{v})$ has at least two vertices which are both incoming or both outgoing; otherwise, $h$ \textit{indirectly self-osculates}. For example, a circle with one edge has an indirectly self-osculating hyperplane.
Let $h_{1}$ and $h_{2}$ be a pair of embedded two-sided hyperplanes. Then they \textit{interosculate} if there exist edges $a_{i},b_{i}$ dual to $h_{i}$ $(i=1,2)$, and vertices $v_{a}\in a_{1}\cap a_{2}$, $v_{b}\in b_{1}\cap b_{2}$ such that
\begin{enumerate}
\item there exist a vertex in $Lk(v_{a},a_{1})$ and a vertex in $Lk(v_{a}, a_{2})$ which are adjacent in $Lk(v_{a},X)$ (note that if $a_1$ is not embedded, then $Lk(v_{a},a_{1})$ has two points);
\item there exist a vertex in $Lk(v_{b},b_{1})$ and a vertex in $Lk(v_{b}, b_{2})$ which are not adjacent in $Lk(v_{b},X)$.
\end{enumerate}
\begin{definition}
\label{special}
A non-positively curved cube complex $X$ is \textit{special} if
\begin{enumerate}
\item Each hyperplane is 2-sided and embedded.
\item No hyperplane directly self-osculates.
\item No two hyperplanes interosculate.
\end{enumerate}
$X$ is \textit{directly special} if $X$ is special and no hyperplane of $X$ self-osculates.
\end{definition}
\begin{thm}
\cite[Lemma 2.6]{haglund2012combination}
Let $X$ be a non-positively curved cube complex. Then $X$ is special if and only if there exists a local isometry $X\to S(\Gamma)$ for some Salvetti complex $S(\Gamma)$.
\end{thm}
\begin{lem}
\label{directly special}
\cite{haglund2012combination} Let $X$ be a compact special cube complex. Then $X$ has a finite cover which is directly special. Moreover, being directly special is preserved under passing to any further cover.
\end{lem}
\subsection{Canonical completions and retractions}
\label{subset_the canonical comletion}
We follow \cite[Section 3]{haglund2012combination}.
Given compact special cube complexes $A$ and $X$, and a local isometry $A\to X$, one can construct a finite cover of $X$ that contains a copy of $A$. This finite cover is called the \textit{canonical completion} of the pair $(A,X)$ and is denoted by $\mathsf{C}(A,X)$. Moreover, there is a \textit{canonical retraction} map $r:\mathsf{C}(A,X)\to A$. We now describe the construction of $\mathsf{C}(A,X)$ and the retraction map $r$.
\textit{Case 1:} $X$ is a circle with a single edge. Then each connected component of $A$ is either a point, a circle, or a path. We attach an extra edge to each component which is not a circle to make it a circle. The resulting space is denoted by $\mathsf{C}(A,X)$. It is clear that $\mathsf{C}(A,X)$ is a finite cover of $X$ and $A\subset\mathsf{C}(A,X)$. Moreover, there is a retraction $\mathsf{C}(A,X)\to A$ which sends each extra edge to the component of $A$ it was attached along.
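For instance, if $A$ is the disjoint union of a single vertex and a single edge, then $\mathsf{C}(A,X)$ is the disjoint union of a circle of length 1 and a circle of length 2, which covers $X$ with total degree 3, and the retraction sends each added edge back to the component of $A$ it was attached to.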
\textit{Case 2:} $X$ is a wedge of finitely many circles $\{c_{i}\}_{i=1}^{n}$. Let $A_{i}$ be the inverse image of $c_{i}$ under $A\to X$. We define $\mathsf{C}(A,X)$ to be the union of all the $\mathsf{C}(A_{i},c_{i})$'s identified along their vertex sets, and define the canonical retraction $r:\mathsf{C}(A,X)\to A$ to be the map induced by the $\mathsf{C}(A_{i},c_{i})\to A_{i}$'s. It is still true that $\mathsf{C}(A,X)\to X$ is a finite-sheeted covering map and $A\subset\mathsf{C}(A,X)$.
\textit{Case 3:} $X$ is the Salvetti complex of some RAAG. Then the 1-skeleton $X^{1}$ of $X$ is a wedge of circles. It turns out that there is a natural way to attach higher dimensional cells to $\mathsf{C}(A^{1},X^{1})$ to obtain a non-positively curved cube complex $\mathsf{C}(A,X)$ (see \cite[Theorem 2.6]{bou2015residual}). Moreover, the covering map $\mathsf{C}(A^{1},X^{1})\to X^{1}$ and the retraction $\mathsf{C}(A^{1},X^{1})\to A^{1}$ extend to a covering map $\mathsf{C}(A,X)\to X$ and a retraction $\mathsf{C}(A,X)\to A$. The following is immediate from our construction.
\begin{lem}
\label{product and canonical completion}
For $i=1,2$, let $B_i\to X_i$ be a local isometry from $B_i$ to a Salvetti complex. Then the canonical completion $\C(B_1\times B_2,X_1\times X_2)$ with respect to the product of these two local isometries is naturally isomorphic (as cube complexes) to $\C(B_1,X_1)\times\C(B_2,X_2)$.
\end{lem}
\textit{Case 4:} $X$ is any compact special cube complex. Let $\Gamma$ be the graph whose vertex set corresponds to the hyperplanes in $X$, with two vertices adjacent if and only if the corresponding hyperplanes cross each other. Such a graph is called the \textit{intersection graph} of $X$. Suppose $R=S(\Gamma)$. Since $X$ is special, there is a local isometry $X\to R$, which induces a local isometry $A\to R$. We define $\mathsf{C}(A,X)$ to be the pull-back of the covering map $\mathsf{C}(A,R)\to R$ which fits into the following commuting diagram:
\begin{center}
$\begin{CD}
@. \mathsf{C}(A,X) @>>> \mathsf{C}(A,R)\\
@. @VVV @VVV\\
A @>>>X @>>> R
\end{CD}$
\end{center}
Recall that an $i$-cube in $\mathsf{C}(A,X)$ can be represented by a pair of $i$-cubes in $\mathsf{C}(A,R)$ and $X$ that are mapped to the same $i$-cube in $S(\Gamma)$. Thus there exists a naturally embedded copy of $A$ in $\mathsf{C}(A,X)$ by considering $A\to \mathsf{C}(A,R)$ and $A\to X\to R$. The component of $\mathsf{C}(A,X)$ which contains this copy of $A$ is called the \textit{main component}. The canonical retraction is defined to be the composition $\mathsf{C}(A,X)\to\mathsf{C}(A,R)\to A$.
We recall the following notion from \cite[Section 2.1]{caprace2011rank}. It is also called a projection-like map in \cite{haglund2012combination}.
\begin{definition}
\label{cubical map}
A cellular map between cube complexes is \textit{cubical} if its restriction $\sigma\to\tau$ between cubes factors as $\sigma\to\eta\to\tau$, where the first map $\sigma\to\eta$ is a natural projection onto a face of $\sigma$ and the second map $\eta\to\tau$ is an isometry.
\end{definition}
Note that if the inverse image of each edge in $S(\Gamma)$ under $A\to S(\Gamma)$ is a disjoint union of vertices and single edges, then the retraction $\mathsf{C}(A^{1},R^{1})\to A^{1}$ is cubical. Hence the canonical retraction $\mathsf{C}(A,X)\to A$ is cubical. For example, the assumption is satisfied when $X$ is directly special.
\begin{definition}
\cite[Definition 3.14]{haglund2012combination} Let $X$ denote a cube complex. Let $A$ and $B$ be subcomplexes of $X$. We define $\wpj_{X} (A\to B)$, the \textit{wall projection} of $A$ onto $B$ in $X$, to be the union of $B^{0}$ together with all cubes of $B$ whose edges are all parallel to edges of $A$.
\end{definition}
\begin{lem}
\label{wpj}
\cite[Lemma 3.16]{haglund2012combination} Let $A$ and $D$ be locally convex subcomplexes of a directly special cube complex $B$. Let $\hat{D}$ denote the preimage of $D$ in $\mathsf{C}(A,B)$, and let $r:\mathsf{C}(A,B)\to A$ be the canonical retraction map. Then $r(\hat{D})\subset\wpj_{B}(D\to A)$.
\end{lem}
Let $A$ be a locally convex subcomplex in a special cube complex $X$. It is natural to ask what the inverse image of $A$ is under the covering $\mathsf{C}(A,X)\to X$. This usually depends on how $A$ sits inside $X$.
\begin{definition}
\cite[Definition 3.10]{haglund2012combination}
Let $D\to C$ be a combinatorial map between cube complexes. Then a hyperplane of $D$ is mapped to a hyperplane of $C$. Hence there is an induced map $V_{D}\to V_{C}$ between the sets of hyperplanes in $D$ and $C$ respectively. The map $D\to C$ is \textit{wall-injective} if $V_{D}\to V_{C}$ is injective.
\end{definition}
\begin{lem}
\cite[Lemma 3.13]{haglund2012combination}
Let $X$ be a special cube complex and let $A\subset X$ be a wall-injective locally convex subcomplex. Then the preimage of $A$ in $\mathsf{C}(A,X)$ is canonically isomorphic to $\mathsf{C}(A,A)$.
\end{lem}
\begin{remark}
\label{larger circle}
Let $A$ be a special cube complex. Usually $\mathsf{C}(A,A)$ is much larger than $A$. For example, let $A$ be a circle made of $n$ edges for $n\ge 3$. Then $\mathsf{C}(A,A)$ is the disjoint union of a circle of length $n$ and a circle of length $n(n-1)$.
\end{remark}
\begin{lem}
\label{wall-injective}
\cite[Corollary 3.11]{haglund2012combination}
Let $A\to X$ be a local isometry. If the canonical retraction $\C(A,X)\to A$ is cubical, then $A$ is wall-injective in $\C(A,X)$. In particular, $A$ is wall-injective in $\C(A,X)$ when $X$ is directly special.
\end{lem}
\begin{definition}
\label{elevation}
\cite[Definition 3.17]{haglund2012combination} Let $\pi:\bar{X}\to X$ denote a covering map. Let $A\subset X$ denote a connected subspace. An \textit{elevation} of $A$ to $\bar{X}$ is a connected component of the preimage of $A$ under $\bar{X}\to X$. If we choose base points $p\in A$ and $\bar{p}\in\bar{X}$ with $\pi(\bar{p})=p$, then the \textit{based elevation} of $A$ is the component of the preimage of $A$ that contains $\bar{p}$.
Now suppose $f:A\to X$ is any continuous map with $A$ connected. We define an elevation $\bar{A}$ of $A$ to be the cover of $A$ corresponding to the subgroup $f^{-1}_{\ast}(\pi_{1}(\bar{X}))$ (this depends on our choice of base points in $A$, $X$ and $\bar{X}$). Then $\bar{A}$ fits into the following commuting diagram:
\begin{center}
$\begin{CD}
\bar{A} @>>> \bar{X}\\
@VVV @VVV\\
A @>>> X
\end{CD}$
\end{center}
\end{definition}
\begin{remark}
The above two notions of elevation are consistent in the following sense. Suppose $f:A\to X$ is continuous and let $C$ be the mapping cylinder of $f$. Since $C$ is homotopy equivalent to $X$, the covering map $\bar{X}\to X$ induces a covering map $\bar{C}\to C$. Then elevations of $A\to X$ correspond to components of the inverse image of $A$ under $\bar{C}\to C$.
Alternatively, let $\tilde{A}\to A$ be the pull-back of $\bar{X}\to X$. Then an elevation of $A\to X$ corresponds to a connected component of $\tilde{A}$.
\end{remark}
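For instance, if $A\subset X$ is an embedded circle representing an element $g\in\pi_1(X)$ and $\bar{X}\to X$ is a finite cover, then every elevation of $A$ is a circle covering $A$, and the degree of the elevation through a lift $\bar{p}$ of a base point $p\in A$ is the smallest $n>0$ such that the corresponding conjugate of $g^{n}$ lies in $\pi_{1}(\bar{X})$.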
\section{The structure of blow-up building}
\label{sec_blow-up building}
\subsection{The blow-up building} We briefly review the Davis realization of right-angled buildings and the construction of blow-up buildings in \cite{cubulation}. The reader can find detailed proofs of the statements mentioned below in \cite{cubulation}.
Let $\P$ be the poset of standard flats in $X(\Ga)$. Recall that an \textit{interval} of $\P$ is a subset of the form $I_{a,b}=\{x\in \P\mid a\le x\le b\}$ for some $a,b\in\P$. Since every interval of $\P$ is a Boolean lattice of finite rank, one can construct a cube complex $|\B|$ whose poset of cubes is isomorphic to the poset of intervals in $\P$ (\cite[Proposition A.38]{abramenko2008buildings}). $|\B|$ is called the \textit{Davis realization of the right-angled building associated with $G(\Ga)$} and it is a $CAT(0)$ cube complex \cite{davis1994buildings}.
More precisely, there is a 1-1 correspondence between $k$-cubes in $|\B|$ and intervals of form $I_{F_1,F_2}$ where $F_1\subset F_2$ are two standard flats in $X(\Ga)$ with $\dim(F_2)-\dim(F_1)=k$. In particular, vertices of $|\B|$ correspond to standard flats in $X(\Ga)$. We label each vertex of $|\B|$ by the support (Definition \ref{components}) of the corresponding standard flat (if a vertex of $|\B|$ corresponds to a $0$-dimensional standard flat, then it is labelled by the empty set), and the \textit{rank} of this vertex is defined to be the number of vertices in its label. Moreover, the vertex set of $|\B|$ inherits a partial order from $\P$.
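For instance, when $\Ga$ is a single vertex, the standard flats of $X(\Ga)\cong\R$ are its vertices and $X(\Ga)$ itself, so $|\B|$ is an infinite tree of diameter 2: a single rank 1 vertex joined by an edge to each rank 0 vertex.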
We label each edge of $|\B|$ by the unique vertex of $\Ga$ in the symmetric difference of the labels of its two endpoints. Note that two parallel edges are labelled by the same vertex of $\Ga$, hence there is an induced labelling of hyperplanes in $|\B|$. If two hyperplanes cross, then their labels are adjacent in $\Ga$.
Let $K\subset X(\Ga)$ be a $\Ga'$-component and let $\P_K\subset \P$ be the sub-poset of standard flats inside $K$. Then $\P_K$ gives rise to a convex subcomplex $|\B|_{K}\subset |\B|$. By definition, $|\B|_K$ is isomorphic to the Davis realization of the right-angled building associated with $G(\Ga')$.
There is an induced action $G(\Ga)\acts|\B|$ which preserves the labellings of vertices and edges. The action is cocompact, but not proper: the stabilizer of a cube is isomorphic to $\Z^{n}$, where $n$ is the rank of the minimal vertex in this cube. The following construction is motivated by the attempt to blow up $|\B|$ with respect to this data of stabilizers (in a possibly non-equivariant way). See Theorem \ref{inverse image are flats} for a precise statement.
\begin{definition}[branched lines and flats]
A \textit{branched line} is constructed from a copy of $\Bbb R$ with finitely many edges of length 1 attached to each integer point. We also require that the valence of each vertex in a branched line is bounded above by a uniform constant. This space has a natural simplicial structure and is endowed with the path metric. A \textit{branched flat} is a product of finitely many branched lines.
Let $\beta$ be a branched line. We call those vertices with valence $=1$ in $\beta$ the \textit{tips} of $\beta$, and the collection of all tips is denoted by $t(\beta)$. The copy of $\Bbb R$ in $\beta$ is called the \textit{core} of $\beta$. For a branched flat $F$, we define $t(F)$ to be the product of the tips of its factors, and the \textit{core} of $F$ to be the product of the cores of its factors.
\end{definition}
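For instance, attaching exactly one edge at each integer point of $\Bbb R$ yields a branched line $\beta$ whose tips $t(\beta)$ are in natural bijection with $\Z$ and whose core is the copy of $\Bbb R$; the product of two such branched lines is a branched flat whose core is a copy of $\E^{2}$.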
\begin{definition}[blow-up building]
\label{construction}
The following construction is a special case of \cite[Section 5.2, Section 5.3]{cubulation}. Let $\P(\Ga)$ be the extension complex. Pick a vertex $v\in\P(\Ga)$; we associate to $v$ a branched line $\beta_{v}$. Moreover, for each standard geodesic line $\ell$ with $\Delta(\ell)=v$ (see Definition \ref{extension complex}), we associate a bijection $f_{v,\ell}$ from the 0-skeleton $\ell^{(0)}$ of $\ell$ to $t(\beta_{v})$ such that
if two standard lines $\ell,\ell'$ are parallel, then $f_{v,\ell'}=f_{v,\ell}\circ p$, where $p:\ell'^{(0)}\to \ell^{(0)}$ is the map induced by parallelism. These $f_{v,\ell}$'s are called \textit{blow-up data}.
We associate each standard flat $F\subset X(\Ga)$ with a space $\beta_F$ as follows. If $F$ is a 0-dimensional standard flat (i.e. $\Delta(F)=\emptyset$), then $\beta_F$ is a point. Suppose $\Delta(F)\neq\emptyset$. Let $F=\prod_{v\in\Delta(F)}\ell_{v}$ be a product decomposition of $F$, where $v$ is a vertex in $\Delta(F)$ and $\ell_{v}\subset F$ is a standard geodesic line with $\Delta(\ell_{v})=v$. Then $\beta_{F}=\prod_{v\in\Delta(F)}\beta_{v}$.
For standard flats $F'\subset F$, suppose $F'=\prod_{v\in\Delta(F')}\ell_{v}\times\prod_{v\notin\Delta(F')}\{x_{v}\}$ ($x_{v}$ is a vertex in $\ell_{v}$). Then we define an isometric embedding $\beta_{F'}\hookrightarrow \beta_F$ as follows:
\begin{center}
$\beta_{F'}=\prod_{v\in\Delta(F')}\beta_v\cong \prod_{v\in\Delta(F')}\beta_{v}\times\prod_{v\notin\Delta(F')}\{f_{v,\ell_{v}}(x_{v})\}\hookrightarrow \prod_{v\in\Delta(F)}\beta_{v}=\beta_F$.
\end{center}
One readily verifies that the above construction gives a functor from the poset $\P$ to the category of branched flats with isometric embeddings as morphisms. Let $Y(\Ga)$ be the space obtained by gluing the collection of all $\beta_F$'s according to the isometric embeddings defined above. The following properties follow from the functoriality of $F\to \beta_F$.
\begin{enumerate}
\item Each $\beta_F$ is embedded in $Y(\Ga)$. It is called a \textit{standard branched flat}. The core of a standard branched flat is called a \textit{standard flat}. There is a 1-1 correspondence between standard flats in $X(\Ga)$ and standard flats in $Y(\Ga)$.
\item $\beta_{F_1}\cap \beta_{F_2}=\beta_{F_{1}\cap F_{2}}$ \cite[Lemma 8.1]{cubulation}. Thus if the cores of $\beta_{F_1}$ and $\beta_{F_2}$ have nontrivial intersection, then $\beta_{F_1}=\beta_{F_2}$. In particular, different standard flats in $Y(\Ga)$ are disjoint.
\item We can glue all the $f_{v,\ell}$'s together to obtain an injective map $f:X(\Ga)^{(0)}\to Y(\Ga)$ such that $f$ maps the vertex set of each standard flat bijectively to the tips of a standard branched flat. The image of $f$ is exactly the collection of $0$-dimensional standard flats in $Y(\Ga)$.
\end{enumerate}
$Y(\Ga)$ is called the \textit{blow-up building of type $\Ga$} and it is a $CAT(0)$ cube complex (\cite{cubulation}). We claim that for each vertex $x\in Y(\Ga)$, there is a unique standard flat containing $x$. This induces a map $\L_{1}:Y(\Ga)^{(0)}\to\P$ (recall that $\P$ is the poset of standard flats in $X(\Ga)$). The claim is clear when $\Ga$ is a clique (in this case $Y(\Ga)$ is a product of branched lines). In general, we can choose a standard branched flat containing $x$ and find a standard flat inside it which contains $x$. Since different standard flats in $Y(\Ga)$ are disjoint, the uniqueness follows. We label each vertex $x\in Y(\Ga)$ by the support of $\L_1(x)$. The \textit{rank} of $x$ is the number of vertices in this clique.
\end{definition}
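For instance, when $\Ga$ is a single vertex $v$, the only standard flats of $X(\Ga)\cong\R$ are its vertices and $X(\Ga)$ itself, so $Y(\Ga)$ is simply the branched line $\beta_{v}$: the map $f$ identifies the vertex set of $X(\Ga)$ with the tips $t(\beta_{v})$, and $\L_1$ sends each core vertex to the maximal standard flat and each tip to the corresponding $0$-dimensional standard flat.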
We record the following simple observation.
\begin{lem}
\label{label and core}
Pick a standard flat $F\subset X(\Ga)$. Then for any vertex $v$ in the core of $\beta_F$, we have $\L_1(v)=F$. Conversely, if a vertex $v\in Y(\Ga)$ satisfies $\L_1(v)=F$, then $v$ is in the core of $\beta_{F}$.
\end{lem}
\begin{definition}[edge labelling and hyperplane labelling of $Y(\Ga)$]
\label{edge and hyperplane labelling}
We label each edge of $Y(\Ga)$ by a vertex of $\Ga$ as follows. First we define a map $\L_2$ from the collection of edges of $Y(\Ga)$ to the vertices of $\P(\Ga)$. Pick an edge $e\subset Y(\Ga)$; then $e$ is contained in a standard branched flat $\beta_F$. There is a unique product factor $\beta_v$ of $\beta_F$ such that $e$ is parallel to some edge in $\beta_v$. We define $\L_2(e)=v$. There may be several $\beta_F$'s that contain $e$; however, they all give rise to the same $v$. $\L_2$ actually induces an edge-labelling of $Y(\Ga)$ by vertices of $\Ga$, since vertices of $\P(\Ga)$ are labelled by the vertices of $\Ga$ (see Section \ref{Salvetti complex}).
Since every cube of $Y(\Ga)$ is contained in some $\beta_F$, the opposite edges of a 2-cube are mapped to the same vertex under $\L_2$. Thus $\L_2(e_1)=\L_2(e_{2})$ if $e_{1}$ is parallel to $e_{2}$. This induces a labelling of hyperplanes in $Y(\Ga)$ by vertices of $\Ga$. Note that if two hyperplanes in $Y(\Ga)$ cross, then their labels are adjacent in $\Ga$.
\end{definition}
It follows from Lemma \ref{convexity of components} that each $\Ga'$-component in $Y(\Ga)$ or $|\B|$ is a convex subcomplex.
\begin{example}
\label{label of products}
Let $\Ga$ be a clique and $\{v_{i}\}_{i=1}^{n}$ be its vertex set. Then we can identify vertices of $\P(\Ga)$ with vertices of $\Ga$. In this case, $Y(\Ga)\cong\prod_{i=1}^{n}\beta_{v_i}$ where $\beta_{v_i}$ is a branched line. Let $p_i:Y(\Ga)\to \beta_{v_i}$ be the projection map. For a vertex $x\in Y(\Ga)$, the vertex $v_i$ is in the label of $x$ if and only if $p_i(x)$ is in the core of $\beta_{v_i}$. For an edge $e\subset Y(\Ga)$, the label of $e$ is the unique $v_i$ such that $p_i(e)$ is an edge.
\end{example}
Let $e\subset Y(\Ga)$ be an edge and $v_1,v_2$ be its endpoints. If $v_1$ and $v_2$ have the same rank, then $\L_1(v_1)=\L_1(v_2)$, hence they are labelled by the same clique in $\Ga$, and the label of $e$ belongs to this clique. Otherwise, one of $\L_1(v_1)$ and $\L_1(v_2)$ is a codimension one flat in the other. Thus $v_1$ and $v_2$ are labelled by two cliques such that one is contained in the other. The label of $e$ is the unique vertex in the difference of these two cliques. Again, to prove these statements, it suffices to consider the case when $\Ga$ is a clique; the general case reduces to this special case by considering a standard branched flat containing $e$. Moreover, we have the following consequence.
\begin{lem}\
\label{labelling diference}
\begin{enumerate}
\item Pick an edge $e$ in $Y(\Ga)$ $($or $|\B|)$. Then the label of $e$ is contained in the label of at least one of its endpoints.
\item Pick an edge path $\omega$ connecting vertices $x$ and $y$ in $Y(\Ga)$ $($or $|\B|)$. Then for every vertex in the symmetric difference of the labellings of $x$ and $y$, there exists an edge in $\omega$ labelled by this vertex.
\end{enumerate}
\end{lem}
\begin{definition}
\label{def_beta_K}
Pick a $\Ga'$-component $K\subset X(\Ga)$. We define $\beta_{K}=\cup_{F}\beta_F$ with $F$ ranging over standard flats in $K$.
\end{definition}
We recall the following properties of $\beta_{K}$, where (1) follows from \cite[Lemma 5.18]{cubulation} and (2) follows from \cite[Corollary 5.16]{cubulation}.
\begin{lem}\
\label{product decomposition of standard components}
\begin{enumerate}
\item The subcomplex $\beta_{K}$ is convex in $Y(\Ga)$, and it is the convex hull of $f(K^{(0)})$ ($f$ is introduced in Definition \ref{construction}).
\item Suppose $\Ga'$ admits a join decomposition $\Ga'=\Ga'_1\circ\Ga'_2$. For $i=1,2$, pick $\Ga'_{i}$-component $K_i\subset K$ and let $\pi_i:\beta_K\to \beta_{K_i}$ be the $CAT(0)$ projection (\cite[Proposition II.2.4]{MR1744486}). Then $\pi_1\times \pi_2:\beta_{K}\to \beta_{K_1}\times\beta_{K_2}$ is a cubical isomorphism. Similarly, $|\B|_{K}\cong|\B|_{K_1}\times|\B|_{K_2}$.
\end{enumerate}
\end{lem}
\begin{lem}\
\label{parallel set in blow-up building}
Pick a vertex $v\in\P(\Ga)$. Let $P_v$ be as in Definition \ref{v-parallel set}.
\begin{enumerate}
\item The subcomplex $\beta_{P_v}$ admits a natural $CAT(0)$ cubical product decomposition
\begin{equation}\label{branched product decomposition}
\beta_{P_v}\cong \beta_v\times \beta^{\perp}_v.
\end{equation}
\item Let $e\subset Y(\Ga)$ be an edge such that $v=\L_2(e)$. Then the hyperplane $h_e$ dual to $e$ can be identified as the product factor $\beta^{\perp}_v$ in (\ref{branched product decomposition}).
\end{enumerate}
\end{lem}
\begin{proof}
Suppose $v$ is labelled by $\bar{v}\in\Ga$. Note that $P_v$ is a $St(\bar{v})$-component in $X(\Ga)$. By construction, any standard $\{\bar{v}\}$-component in $\beta_{P_v}$ can be naturally identified with $\beta_{v}$. Let $\beta_{P_v}\cong \beta_v\times \beta^{\perp}_v$ be the product decomposition induced by the join decomposition $St(\bar{v})=\{\bar{v}\}\circ lk(\bar{v})$ as in Lemma \ref{product decomposition of standard components} (2).
To see (2), recall that if $e'$ is parallel to $e$, then $\L_2(e')=\L_2(e)$. Thus it suffices to prove that if an edge $e'\subset Y(\Ga)$ satisfies $\L_2(e')=v$, then $e'\subset\beta_{P_v}$. To see this, note that by the definition of $\L_2$, there exists a standard flat $F \subset X(\Ga)$ such that $e' \subset\beta_{F}$ and $v\in\Delta(F)$. Thus $F\subset P_v$ and $e'\subset \beta_{P_v}$.
\end{proof}
\subsection{Canonical projection and standard components}
If we collapse each standard flat in $Y(\Ga)$, then the resulting space is $|\B|$. More precisely, we define the \textit{canonical projection} $\pi:Y(\Ga)\to |\B|$ as follows. Let $\L_1:Y(\Ga)^{(0)}\to \P\cong |\B|^{(0)}$ be as in Definition \ref{construction}. We claim $\L_1$ can be extended to a cubical map, which defines $\pi$ (such an extension, if it exists, must be unique). When $\Ga$ is a clique, $Y(\Ga)$ is a product of branched lines, and it is clear that $\L_1$ can be extended to a map which collapses the core of each factor. In general, for each standard flat $F\subset X(\Ga)$, we can extend $\L_1|_{\beta^{(0)}_F}$ to a map $\pi_{F}:\beta_F\to |\B|_F$. The uniqueness of the extension implies that we can piece together these $\pi_F$'s to define $\pi$. By definition, $\pi$ preserves the rank and labelling of vertices, and it induces a bijection between vertices of rank 0 in $Y(\Ga)$ and $|\B|$.
Pick an edge $e\subset Y(\Ga)$. We call $e$ \textit{vertical} if $\pi(e)$ is a point; otherwise $e$ is \textit{horizontal}. In the latter case, $e$ and $\pi(e)$ have the same label. Note that $e$ is vertical if and only if its two endpoints have the same rank. Edges parallel to a vertical (or horizontal) edge are also vertical (or horizontal), thus it makes sense to talk about vertical (or horizontal) hyperplanes. It is shown in \cite{cubulation} that $\pi$ is a restriction quotient in the sense that if one starts with the wall space associated with $Y(\Ga)$ and passes to a new wall space by forgetting all the vertical hyperplanes, then the resulting dual cube complex is $|\B|$. The following is a consequence of Theorem 4.4, Corollary 5.15 and the discussion in the beginning of Section 5.2 of \cite{cubulation}.
\begin{thm}
\label{inverse image are flats}
For any cube $\sigma\subset|\B|$ and an interior point $x\in\sigma$, $\pi^{-1}(x)$ is isomorphic to $\E^{n}$ where $n$ is the rank of the minimal vertex in $\sigma$. Conversely, if $Y$ is any $CAT(0)$ cube complex such that there exists a cubical map $\pi:Y\to |\B|$ which satisfies the property in the previous sentence, then $Y$ can be constructed via Definition \ref{construction}.
\end{thm}
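For instance, if $x$ is a rank 1 vertex of $|\B|$ corresponding to a standard geodesic $\ell\subset X(\Ga)$, then Lemma \ref{label and core} implies $\pi^{-1}(x)$ is the core of the standard branched line $\beta_{\ell}$, which is a copy of $\E^{1}$.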
\begin{definition}
\label{standard components}
A \textit{standard $\Ga'$-component} of $Y(\Ga)$ (or $|\B|$) is a $\Ga'$-component which contains a vertex of rank $0$.
\end{definition}
\begin{lem}
\label{standard and label of vertices}
Let $K$ be a $\Ga'$-component in $|\B|$ $($or $Y(\Ga))$. Then the following are equivalent.
\begin{enumerate}
\item $K$ is standard.
\item There exists one vertex in $K$ whose label is contained in $\Ga'$.
\item The label of each vertex in $K$ is contained in $\Ga'$.
\end{enumerate}
\end{lem}
\begin{proof}
$(1)\Rightarrow (2)$ is trivial ($K$ contains a vertex of rank 0). $(2)\Rightarrow(3)$ follows from Lemma \ref{labelling diference} (2) and the connectedness of $K$. For $(3)\Rightarrow(1)$, pick a vertex $x\in K$ labelled by $\Ga_x\subset\Ga'$ and let $\beta_{F}$ be the minimal standard branched flat containing $x$. By minimality, $x$ is in the core of $\beta_{F}$. The discussion in Example \ref{label of products} then implies that the label of each edge of $\beta_{F}$ is in $\Ga_x$. Thus $\beta_{F}\subset K$ and in particular $K$ contains a rank 0 vertex.
\end{proof}
\begin{lem}
\label{correspondence of standard components}
The map $\pi$ induces a one to one correspondence between standard $\Ga'$-components in $|\B|$ and $Y(\Ga)$ in the following sense.
\begin{enumerate}
\item If $K\subset|\B|$ is a standard $\Ga'$-component, then $\pi^{-1}(K)$ is a standard $\Ga'$-component.
\item If $L\subset Y(\Ga)$ is a standard $\Ga'$-component, then $\pi(L)$ is a standard $\Ga'$-component.
\end{enumerate}
\end{lem}
\begin{proof}
We prove (1). It follows from \cite[Theorem 4.4]{cubulation} that $\pi^{-1}(K)$ is convex, in particular connected. Now we verify Definition \ref{components} (2). Suppose there exists an edge in $\pi^{-1}(K)$ labelled by $v\notin\Ga'$. Then there is an endpoint of this edge labelled by a clique $C\subset\Ga$ such that $v\in C$. Let $x\in K$ be the $\pi$-image of this endpoint. Then $x$ is also labelled by $C$. Pick a rank 0 vertex $y\in K$. Then the connectedness of $K$ implies we can join $x$ and $y$ by an edge path $\omega\subset K$. Since $y$ is labelled by the empty set, Lemma \ref{labelling diference} (2) implies at least one edge in $\omega$ is labelled by $v$, which is a contradiction. The second part of Definition \ref{components} (2) is clear. The maximality of $\pi^{-1}(K)$ follows from the maximality of $K$.
To see (2), note that each edge of $\pi(L)$ is labelled by a vertex in $\Ga'$, and $\pi(L)$ contains a rank 0 vertex. Let $K'$ be the standard $\Ga'$-component containing $\pi(L)$. By (1), $\pi^{-1}(K')$ is a standard $\Ga'$-component containing $L$, thus $L=\pi^{-1}(K')$ and $K'=\pi(L)$.
\end{proof}
\begin{lem}
\label{characterization of standard components}
For each $\Ga'$-component $K\subset X(\Ga)$, $|\B|_{K}$ and $\beta_K$ are standard $\Ga'$-components. Conversely, each standard $\Ga'$-component in $|\B|$ or $Y(\Ga)$ must arise in such a way. In particular, each vertex of rank 0 is contained in some standard $\Ga'$-component.
\end{lem}
\begin{proof}
We prove the above statement in $Y(\Ga)$; the case of $|\B|$ is similar. The definition of $\beta_{K}$ implies each edge of $\beta_{K}$ is labelled by a vertex in $\Ga'$, thus there exists a standard $\Ga'$-component $L\subset Y(\Ga)$ containing $\beta_{K}$. Let $\omega\subset L$ be an edge path from a vertex in $\beta_{K}$ to an arbitrary vertex in $L$ and let $\{v_{i}\}_{i=1}^{n}$ be the consecutive vertices in $\omega$. Let $F_i=\L_1(v_i)$ (see Definition \ref{construction} for the map $\L_1$). Then $F_{i}\cap F_{i+1}\neq\emptyset$ and the support of each $F_i$ is in $\Ga'$ (by Lemma \ref{standard and label of vertices}). So all the $F_i$'s are in the same $\Ga'$-component of $X(\Ga)$. Thus for each vertex $v\in L$, $F_v=\L_1(v)\subset K$. It follows from Lemma \ref{label and core} that $v\in\beta_{F_v}\subset\beta_K$. Thus $\beta_{K}=L$. To see the converse, pick a rank 0 vertex $x$ in an arbitrary standard $\Ga'$-component $L'$. Let $K'\subset X(\Ga)$ be the $\Ga'$-component containing $\L_1(x)$. Then $\beta_{K'}\cap L'\neq\emptyset$. Since $\beta_{K'}$ and $L'$ are both standard $\Ga'$-components, they are the same complex.
\end{proof}
\begin{lem}
\label{intersection of standard components}
For $i=1,2$, let $L_i$ be a standard $\Ga_i$-component in $|\B|$ $($or $Y(\Ga))$ and let $K_i\subset X(\Ga)$ be the associated $\Ga_i$-component. Then $L_1\cap L_2\neq\emptyset$ if and only if $K_1\cap K_2\neq\emptyset$. In this case, $L_1\cap L_2$ is a standard $\Ga_1\cap\Ga_2$-component.
\end{lem}
\begin{proof}
We prove the lemma in $Y(\Ga)$; the case of $|\B|$ is similar. It is clear from the definitions that $K_1\cap K_2\neq\emptyset$ implies $L_1\cap L_2\neq\emptyset$. To see the converse, pick a vertex $x\in L_1\cap L_2$; then the argument in Lemma \ref{standard and label of vertices} implies that the minimal standard branched flat containing $x$ is inside both $L_1$ and $L_2$. In particular, $L_1\cap L_2$ contains a vertex of rank 0, which implies $K_1\cap K_2\neq\emptyset$. By Lemma \ref{characterization of standard components}, there exists a standard $\Ga_1\cap \Ga_2$-component $N$ containing this rank 0 vertex. It is clear that $N\subset L_1\cap L_2$ and $L_1\cap L_2\subset N$.
\end{proof}
Let $L$ be a (not necessarily standard) $\Ga_L$-component in $Y(\Ga)$ and let $x\in L$ be a vertex. Suppose $\Ga^{\perp}_{L}$ is the clique spanned by vertices in $\Ga_x\setminus \Ga_L$, where $\Ga_x$ is the label of $x$. Then Lemma \ref{labelling diference} (2) implies $\Ga^{\perp}_{L}$ is contained in the label of each vertex of $L$, and Lemma \ref{labelling diference} (1) implies $\Ga$ contains the graph join $\Ga_L\circ \Ga^{\perp}_{L}$.
The proof of Lemma \ref{standard and label of vertices} implies $x$ is connected to a rank $0$ vertex $x_0$ via an edge-path such that the label of each edge in this path is contained in $\Ga^{\perp}_{L}$. Let $M$ be the $\Ga_L\circ \Ga^{\perp}_{L}$-component containing $x_0$ and let $M=N\times N^{\perp}$ be the product decomposition in Lemma \ref{product decomposition of standard components} (2). Note that $L\subset M$ and $L$ has trivial projection to $N^{\perp}$. Thus $L$ must be of the form $\{y\}\times N$ for some vertex $y\in N^{\perp}$, and $L$ is standard if and only if $y$ is of rank 0. We can characterize standard $\Ga'$-components in $|\B|$ in a similar way.
\begin{lem}\
\label{Ga'-component}
\begin{enumerate}
\item Any $\Ga'$-component in $Y(\Ga)$ or $|\B|$ is parallel to (hence isomorphic to) a standard $\Ga'$-component.
\item If $\Ga'$ is a subgraph satisfying $\Ga'\nsubseteq lk(v)$ for any vertex $v\in\Ga\setminus\Ga'$, then each $\Ga'$-component is standard.
\item If $e_1$ and $e_2$ are two edges in $Y(\Ga)$ or $|\B|$ emanating from the same vertex, then they span a square if and only if their labels are adjacent in $\Ga$.
\end{enumerate}
\end{lem}
\begin{proof}
It remains to prove the if direction of (3). Let $L$ be the $\Ga'$-component containing $e_1$ and $e_2$, where $\Ga'$ is the edge spanned by the labels of $e_1$ and $e_2$. Then (1) implies $L$ is a product of two branched lines in the case of $Y(\Ga)$, and is a product of two infinite trees of diameter 2 in the case of $|\B|$, thus (3) follows. Alternatively, one can also prove (3) directly by looking at the links of vertices, see \cite[Section 5.4]{cubulation}.
\end{proof}
\section{A geometric model for groups q.i. to RAAGs}
\label{sec_geometric model}
In \cite{cubulation}, it was shown that if a group $H$ is quasi-isometric to $G(\Ga)$ such that $\out(G(\Ga))$ is finite, then $H$ acts geometrically on a $CAT(0)$ cube complex which is very similar to $X(\Ga)$. In this section, we briefly recall the construction in \cite{cubulation}, and prove several further properties of this construction.
\subsection{Quasi-action on RAAG's}
\label{subsec_quasi action}
We pick an identification between $G(\Ga)$ and the 0-skeleton $X^{(0)}(\Ga)$ of $X(\Ga)$. It follows from \cite{huang2014quasi} that quasi-isometries between RAAG's with finite outer-automorphism group are at bounded distance from a canonical representative that respects standard flats, in the following sense.
\begin{definition}
\label{def_flat_preserving}
A quasi-isometry $\phi:X^{(0)}(\Ga)\ra X^{(0)}(\Ga)$ is {\em flat-preserving} if it is a bijection and for every standard flat $F\subset X(\Ga)$ there is a standard flat $F'\subset X(\Ga)$ such that $\phi$ maps the $0$-skeleton of $F$ bijectively onto the $0$-skeleton of $F'$. The standard flat $F'$ is uniquely determined, and we denote it by $\phi_*(F)$.
\end{definition}
\begin{thm}[Vertex rigidity \cite{huang2014quasi,bks2}]
\label{thm_intro_vertex_rigidity}
Suppose $\out(G(\Ga))$ is finite and $G(\Ga)\not\simeq\Z$. Let $\phi:X^{(0)}(\Ga)\ra X^{(0)}(\Ga)$ be an $(L,A)$-quasi-isometry. Then there is a unique flat-preserving quasi-isometry $\bar\phi:X^{(0)}(\Ga)\ra X^{(0)}(\Ga)$ at finite distance from $\phi$, and moreover
$$
d(\bar\phi,\phi)=\sup_{v\in X^{(0)}(\Ga)}d(\bar\phi(v),\phi(v))<D=D(L,A)\,.
$$
\end{thm}
Given any quasi-action $H\stackrel{\rho}{\acts}X^{(0)}(\Ga)$, we may apply Theorem \ref{thm_intro_vertex_rigidity} to the associated family of quasi-isometries $\{\rho(h):X^{(0)}(\Ga)\ra X^{(0)}(\Ga)\}_{h\in H}$ to obtain a new quasi-action $H\stackrel{\bar\rho}{\acts}X^{(0)}(\Ga)$ by flat-preserving quasi-isometries. Due to the uniqueness assertion in Theorem \ref{thm_intro_vertex_rigidity}, the quasi-action $\bar\rho$ is actually an action, i.e. the composition law $\bar\rho(hh')=\bar\rho(h)\circ\bar\rho(h')$ holds exactly, not just up to bounded error.
Pick a vertex $v\in\P(\Ga)$ and let $P_{v}$ be the $v$-parallel set (Definition \ref{v-parallel set}). Note that there is a cubical product decomposition $P_{v}\simeq \R_v\times Q_v$, where $\R_v$ is parallel to some standard geodesic $\ell$ with $\Delta(\ell)=v$. Likewise, there is a product decomposition of the $0$-skeleton
\begin{equation}
\label{eqn_0_skeleton_product_decomposition}
P_{v}^{(0)}
\simeq \Z_{v}\times Q_{v}^{(0)}
\end{equation}
where $\Z_{v}$ is the $0$-skeleton of $\R_{v}$ equipped with the induced metric.
Now let $\rho:H\acts X^{(0)}(\Ga)$ be an action by flat-preserving $(L,A)$-quasi-isometries. It follows readily from Definition \ref{def_flat_preserving} that the action respects parallelism of standard geodesics and standard flats, and this implies there is an induced action of $H$ on $\P(\Ga)$. For each vertex $v\in\P(\Ga)$, let $H_v\le H$ be the stabilizer of $v$. The action $H_{v}\acts P_{v}^{(0)}$ preserves the product decomposition (\ref{eqn_0_skeleton_product_decomposition}), and therefore gives rise to a factor action $\rho_{v}:H_{v}\acts \Z_{v}$ by $(L,A)$-quasi-isometries.
\begin{thm}\label{conjugate to left translation}
\cite[Corollary 6.17]{cubulation} Let $\rho:H\acts G(\Gamma)$ be an action by flat-preserving bijections. Suppose
\begin{enumerate}
\item the induced action $H\acts\P(\Ga)$ preserves the label of vertices in $\P(\Ga)$;
\item for each vertex $v\in\mathcal{P}(\Gamma)$, the action $\rho_{v}:H_{v}\acts \Z_v$ is conjugate to an action by translations.
\end{enumerate}
Then the action $\rho$ is conjugate to an action $H\acts G(\Gamma)$ by left translations via a flat-preserving bijection.
\end{thm}
Now we impose a condition which guarantees assumption (1) of Theorem \ref{conjugate to left translation}.
\begin{definition}
A finite simplicial graph $\Gamma$ is \textit{star-rigid} if for any automorphism $\alpha$ of $\Gamma$, $\alpha$ is the identity whenever it fixes the closed star of some vertex $v\in\Gamma$ point-wise.
\end{definition}
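For instance, a 5-cycle is star-rigid: an automorphism fixing the closed star of a vertex fixes three consecutive vertices, hence is the identity. On the other hand, a star with at least three leaves is not star-rigid, since swapping two leaves fixes the closed star of a third leaf point-wise.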
\begin{lem}
\label{label-preserving}
Suppose $\Ga$ is star-rigid, and $\out(G(\Ga))$ is finite. If $H$ acts on $\P(\Ga)$ by simplicial isomorphisms, then there exists a finite index subgroup $H'\le H$ such that the induced action $H'\acts\P(\Ga)$ is label-preserving.
\end{lem}
\begin{proof}
Since $\out(G(\Ga))$ is finite, there is an induced action $H\acts G(\Ga)$ by flat-preserving maps. Let $F(\Ga)$ be the flag complex of $\Ga$. We define a homomorphism $\alpha:H\to \aut(F(\Ga))$ as follows.
Each vertex $x\in X(\Ga)$ gives rise to a simplicial embedding $i_{x}:F(\Ga)\to\P(\Ga)$ by considering the collection of standard flats passing through $x$. Pick $h\in H$ and a vertex $x\in X(\Ga)$; we define $\alpha(h,x)=i^{-1}_{h(x)}\circ h\circ i_{x}$. We claim $\alpha(h,x)$ does not depend on the choice of $x$. It suffices to show $\alpha(h,x')=\alpha(h,y')$ for any two vertices $x'$ and $y'$ in the same standard geodesic $\ell$. Let $v=i^{-1}_{x'}(\Delta(\ell))$. Then $\alpha(h,x')$ and $\alpha(h,y')$ coincide on the closed star of $v$, hence they are equal. Now there is a well-defined map $\alpha:H\to \aut(F(\Ga))$. One readily verifies that $\alpha$ is a group homomorphism, and we take $H'=\ker(\alpha)$.
\end{proof}
Assumption (2) of Theorem \ref{conjugate to left translation} is not true in general, see Proposition \ref{prop_intro_semiconjugacy}. However, the following weaker version holds under additional conditions. The proof is postponed to the next section.
\begin{lem}\label{conjugate to action by translations}
Let $\rho:H\acts G(\Gamma)$ be an action by flat-preserving bijections which are $(L,A)$-quasi-isometries. Suppose the action is cobounded and discrete. Then for each vertex $v\in\P(\Ga)$, there exists a finite index subgroup $H'_{v}\le H_{v}$ such that the restriction of the action $\rho_{v}:H_v\acts\Z_v$ to $H'_{v}$ is conjugate to an action by translations.
\end{lem}
The following is the key to understanding the factor actions $\rho_{v}:H_v\acts\Z_v$.
\begin{prop}
\label{prop_intro_semiconjugacy}
\cite[Proposition 6.2]{cubulation}
If $U\stackrel{\rho_0}{\acts} \Z$ is an action by $(L,A)$-quasi-isometries,
then there exist a branched line $\beta$ without valence 2 vertices, an action $U\stackrel{\rho_1}{\acts}\beta$ by simplicial isomorphisms and a bijective equivariant $(L',A')$-quasi-isometry (here $L'$ and $A'$ depend only on $L$ and $A$)
$$
U\stackrel{\rho_0}{\acts} \Z\lra
U\stackrel{\rho_1}{\acts}t(\beta).
$$
\end{prop}
Note that the valence of each vertex in $\beta$ is uniformly bounded above by some constant depending only on $L$ and $A$.
\begin{definition} (an equivariant construction, \cite[Section 5.6 and Section 6.1]{cubulation})
\label{an equivariant construction}
Let $\rho:H\acts G(\Ga)$ be an action by flat-preserving bijections which are also $(L,A)$-quasi-isometries. We will use Definition \ref{construction} to construct an isometric action $\rho':H\acts Y(\Ga)$ from $\rho$. First we need to choose the blow-up data $f_{v,\ell}$'s which are compatible with the $H$-action. Actually, it suffices to associate a branched line $\beta_v$ and a map $f_v:\Z_v\to t(\beta_v)$ to each vertex $v\in\P(\Ga)$, since for each standard geodesic $\ell$ such that $\Delta(\ell)=v$, we can identify vertices of $\ell$ with $\Z_v$ via (\ref{eqn_0_skeleton_product_decomposition}), which induces the map $f_{v,\ell}$. To define the $f_v$'s, we start with the factor actions $\rho_{v}:H_{v}\acts \Z_{v}$. Note that there exist $L_1,A_1>0$ such that each $\rho_v$ is an action by $(L_1,A_1)$-quasi-isometries.
Consider the induced action $H\acts \P(\Ga)$ and pick a representative from each $H$-orbit of vertices of $\P(\Ga)$. The resulting set is denoted by $V$. For each $v\in V$, by Proposition \ref{prop_intro_semiconjugacy}, there exist a branched line $\beta_v$, an isometric action $H_v\acts \beta_v$ and an $H_v$-equivariant bijection $f_v: H_v\acts\Z_v\lra H_v\acts t(\beta_v)$. Moreover, the valence of vertices in each $\beta_v$ is uniformly bounded from above in terms of $L_1$ and $A_1$. For a vertex $v\notin V$, we pick an element $h\in H$ such that $h(v)\in V$. Note that $h$ induces a bijection $h':\Z_v\to \Z_{h(v)}$. We define $\beta_v=\beta_{h(v)}$ and $f_{v}=f_{h(v)}\circ h':\Z_v\to t(\beta_v)$.
Let $Y(\Ga)$ be the associated blow-up building and let $f:G(\Ga)\to Y(\Ga)$ be the map as in Definition \ref{construction}. The following are true (see \cite[Theorem 6.4]{cubulation}).
\begin{enumerate}
\item The complex $Y(\Ga)$ is uniformly locally finite (this essentially follows from the fact that the complexity of the $\beta_v$'s is uniformly bounded from above).
\item The bijections $f_{v,\ell}$'s and the action $\rho$ induce an action $\rho':H\acts Y(\Ga)$ by cubical isomorphisms. For each element $h\in H$, $\rho'(h)$ maps standard branched flats to standard branched flats. Hence $\rho'(h)$ also preserves the collection of $\beta_{P_v}$'s (see Lemma \ref{parallel set in blow-up building} and Definition \ref{def_beta_K} for $\beta_{P_v}$).
\item The map $f$ is an $H$-equivariant quasi-isometry.
\item If $\rho$ is discrete and cobounded, then $\rho'$ is geometric, i.e. the action of $\rho'$ on $Y(\Ga)$ is proper and cocompact.
\end{enumerate}
\end{definition}
\subsection{Stabilizer of parallel sets} In this section we restrict ourselves to the case when $\rho:H\acts G(\Ga)$ is a proper and cobounded action by $(L,A)$-quasi-isometries which are also flat-preserving bijections. In this case, the action $\rho':H\acts Y(\Ga)$ constructed in Definition \ref{an equivariant construction} is geometric. Since the collection $\{\beta_{P_v}\}_{v\in \textmd{Vertex}(\P(\Ga))}$ is $H$-invariant and locally finite, the action $\stab_{H}(\beta_{P_v})\acts \beta_{P_v}$ is cocompact. Moreover, $H_v\le \stab_{H}(\beta_{P_v})$ is of finite index. Thus the action $H_v\acts \beta_{P_v}$ is geometric. Note that $H_v\acts \beta_{P_v}$ respects the product decomposition $\beta_{P_v}\simeq \beta_v\times \beta^{\perp}_v$ (Lemma \ref{parallel set in blow-up building}), hence there is an induced factor action $H_v\acts \beta_v$, which is cobounded. If $H_v$ contains an element that flips the two ends of $\beta_v$, then $\beta_v/H_v$ is a tree. Otherwise $H_v$ acts by translations on the core of $\beta_v$, which gives rise to a homomorphism $\phi:H_v\to \Z$ by associating to each element of $H_v$ its translation length. Let $K_v$ be the kernel of $\phi$. Then we have a split exact sequence $1\to K_v\to H_v\to \Z\to 1$. Let $\alpha\in H_v$ be a lift of a generator of $\Z$.
\begin{lem}
\label{subgroup with no twist}
Suppose $\rho:H\acts G(\Ga)$ is discrete and cobounded and suppose $H_v$ does not flip the two ends of $\beta_v$. Then the conjugation map $K_v\to \alpha K_v\alpha^{-1}$ gives rise to an element of finite order in $\out(K_v)$. Thus there exists an integer $n>0$ such that $\phi^{-1}(n\Z)$ is isomorphic to $K_v\oplus n\Z$. Moreover, up to passing to a possibly larger $n$, we can choose the $n\Z$ factor such that it acts trivially on $\beta^{\perp}_v$.
\end{lem}
\begin{proof}
Since $H_v$ preserves the product structure of $\beta_{P_v}$, we have a homomorphism $h:H_v\to \aut(\beta^{\perp}_v)$. Moreover, the following diagram commutes:
\begin{center}
$\begin{CD}
K_v @>C_{\alpha}>> K_v\\
@VhVV @VhVV\\
h(K_v) @>C_{h(\alpha)}>> h(K_v)
\end{CD}$
\end{center}
Here $C_{\alpha}$ and $C_{h(\alpha)}$ denote conjugation by $\alpha$ and $h(\alpha)$ respectively. Note that the action $h(K_v)\acts \beta^{\perp}_v$ is geometric. We apply Lemma \ref{normal subgroup} below to the vertex set of $\beta^{\perp}_v$ to deduce that $h(K_v)$ is of finite index in $h(H_v)$. Thus $C_{h(\alpha)}$ gives rise to a finite order element in $\out(h(K_v))$. Since $h:K_v\to h(K_v)$ has finite kernel, $C_{\alpha}$ is of finite order in $\out(K_v)$. Then we can find an integer $n>0$ such that $\phi^{-1}(n\Z)=K_v\oplus\langle n\alpha\rangle$. Applying Lemma~\ref{normal subgroup} to the vertex set of $\beta^{\perp}_v$ again, we see that there exists an integer $m>0$ such that $h(mn\alpha)\in h(K_v)$. Suppose $h(mn\alpha)=h(\beta)$ for some $\beta\in K_v$. Then $\phi^{-1}(mn\Z)=K_v\oplus\langle (mn\alpha)\beta^{-1}\rangle$, where $(mn\alpha)\beta^{-1}$ acts trivially on $\beta^{\perp}_v$.
\end{proof}
\begin{lem}[B. Kleiner]\label{normal subgroup}
Suppose $Z$ is a metric space such that every $r$-ball contains at most $N=N(r)$ elements. Assume that $A \acts Z$ is a faithful action by $(L,A)$-quasi-isometries, and $B$ is a normal subgroup of $A$ that acts discretely cocompactly on $Z$. Then $B$ is of finite index in $A$.
\end{lem}
\begin{proof}
Suppose $B$ is of infinite index in $A$. Let $\{\alpha_i\}_{i\in I}\subset A$ be a collection of elements such that different $\alpha_i$'s are in different $B$-cosets. Pick a base point $p\in Z$. Since the $B$-action is cocompact, we can assume there exists $D>0$ such that $d(\alpha_i(p),p)<D$ for all $i\in I$.
Note that $B$ is finitely generated by our assumption. Let $\{b_{\lambda}\}_{\lambda\in\Lambda}$ be a finite generating set for $B$. For each $b_{\lambda}$, there exists $D'>0$ such that $d(\alpha_ib_{\lambda}\alpha^{-1}_i(p),p)<D'$ for all $i\in I$. Since $\alpha_ib_{\lambda}\alpha^{-1}_i\in B$, by the discreteness of the $B$-action, we can assume without loss of generality that $\alpha_ib_{\lambda}\alpha^{-1}_i=\alpha_jb_{\lambda}\alpha^{-1}_j$ for any $i\neq j$. Since there are only finitely many $b_{\lambda}$'s, we can also assume the equality holds for all $b_{\lambda}$'s. Thus $\alpha_ib\alpha^{-1}_i=\alpha_jb\alpha^{-1}_j$ for any $b\in B$ and $i,j\in I$. We can further assume without loss of generality that $\alpha_ib\alpha^{-1}_i=b$ for any $i\in I$ and $b\in B$.
Since $\alpha_i$ commutes with each element of $B$, by the cocompactness of the $B$-action, there exists $R>0$ such that $\alpha_i$ is completely determined by its behavior on the $R$-ball centred at $p$. However, $d(\alpha_i(p),p)<D$ for all $i\in I$, so there are only finitely many ways to define $\alpha_i$. Thus there exists a pair $i\neq j$ such that $\alpha_i=\alpha_j$, which yields a contradiction.
\end{proof}
\begin{proof}[Proof of Lemma \ref{conjugate to action by translations}]
Up to passing to a subgroup of index 2, we assume $H_v$ does not flip the ends of $\beta_v$. Let $h:H_v\to\isom(\beta_v)$ be the homomorphism induced by the action $H_v\acts\beta_v$ and let $\bar{H}_v=\textmd{Im\ }h$. We claim the action $\bar{H}_{v}\acts\beta_v$ is geometric. Assuming the claim, we deduce that $\bar{H}_v$ has a finite index subgroup $A$ isomorphic to $\Z$. Define $H'_{v}=h^{-1}(A)$ and the lemma follows.
It suffices to show the subgroup of $\bar{H}_v$ which stabilizes a tip of $\beta_v$ is finite. Pick $x\in t(\beta_v)$. Then the subgroup $K\le H_v$ which stabilizes $\{x\}\times \beta^{\perp}_v$ acts geometrically on $\{x\}\times \beta^{\perp}_v$, thus $K$ is finitely generated. Since $H_v$ does not flip the ends of $\beta_v$, $K$ acts trivially on the core of $\beta_v$. Thus $h(K)$ is a finitely generated subgroup of $\prod_{i\in I}G_{i}$, where each $G_{i}$ is the permutation group of $n_{i}$ points and there is a uniform upper bound for all the $n_{i}$'s. Hence $h(K)$ is finite.
\end{proof}
\subsection{Special cube complexes and blow-up buildings}
\label{subsec_special and commensurability}
Suppose $\Ga$ is star-rigid and $\out(G(\Ga))$ is finite. Let $H$ be a finitely generated group quasi-isometric to $G(\Ga)$ and let $\rho:H\acts G(\Ga)$ be the associated action by flat-preserving maps. By Lemma \ref{label-preserving}, we can assume the induced action $H\acts\P(\Ga)$ preserves the labelling of vertices (up to passing to a finite index subgroup). Then Definition \ref{an equivariant construction} gives rise to a geometric action of $H$ on a blow-up building $Y(\Ga)$, moreover, this action preserves the labelling of edges.
\begin{lem}
\label{orientation and specialness}
Suppose there exists a torsion free subgroup $H'\le H$ of finite index. Then the following are equivalent:
\begin{enumerate}
\item For each vertex $v\in\mathcal{P}(\Gamma)$, the restricted action $\rho_{v}:H'_{v}\acts \Z_v$ is conjugate to an action by translations ($H'_v\le H'$ is the stabilizer of $v$).
\item $Y(\Ga)/H'$ is a special cube complex.
\end{enumerate}
If either (1) or (2) is true, then $H'$ is isomorphic to a finite index subgroup of $G(\Ga)$, hence $H$ is commensurable to $G(\Ga)$.
\end{lem}
\begin{proof}
(1)$\Rightarrow$(2): Let $H'_{v}\acts\beta_v$ be the restricted action on the associated branched line; then (1) implies that the image of $H'_v\to \isom(\beta_v)$ is isomorphic to $\Z$. Thus there exists an $H'_v$-invariant orientation of edges such that for any two adjacent edges of $\beta_v$ which are in the same $H'_v$-orbit, the orientation on them avoids the following two configurations:
\begin{center}
\includegraphics[scale=0.5]{1.png}
\end{center}
For each $H'$-orbit of vertices in $\P(\Ga)$, we pick a representative $v$ and assign an $H'_v$-equivariant orientation of edges in $\beta_v$ which satisfies the above condition. This induces an orientation of the edges in $\beta_{P_v}$ which are parallel to the $\beta_v$ factor (see (\ref{branched product decomposition})). The $H'$-orbits of these oriented edges cover the 1-skeleton of $Y(\Ga)$ (see (2) of Lemma \ref{parallel set in blow-up building}), thus we have an $H'$-invariant edge orientation of $Y(\Ga)$. This orientation respects parallelism of edges, thus each hyperplane of $Y(\Ga)/H'$ is 2-sided. Since $H'$ preserves the edge-labelling of $Y(\Ga)$, hyperplanes of $Y(\Ga)/H'$ do not self-intersect. Inter-osculation is ruled out by Lemma \ref{Ga'-component} (3). The above condition on the orientation of edges of $\beta_v$ implies that hyperplanes of $Y(\Ga)/H'$ do not directly self-osculate.
(2)$\Rightarrow$(1): Let $\bar{H}'_v$ be the image of $H'_v\to \isom(\beta_v)$. It suffices to show $\bar{H}'_v$ is isomorphic to $\Z$. The proof of Lemma \ref{conjugate to action by translations} implies $\bar{H}'_v$ has a finite index subgroup isomorphic to $\Z$. Thus it suffices to show $\bar{H}'_v$ is torsion-free. Suppose the contrary. Then either there is an element of $\bar{H}'_v$ that flips the two endpoints of an edge in $\beta_v$, which gives rise to a 1-sided hyperplane in $Y(\Ga)/H'$, or there exists an element of $\bar{H}'_v$ that permutes edges in the closed star of some vertex of $\beta_v$ in a non-trivial way, which gives rise to a directly self-osculating hyperplane in $Y(\Ga)/H'$.
The last statement of the lemma follows from Theorem \ref{conjugate to left translation}.
\end{proof}
\section{Branched complexes with trivial holonomy}
\label{sec_branched complexes with trivial holonomy}
In this section, we look at more properties of the quotient of a blow-up building by a proper and cocompact group action. In particular, we introduce a notion of trivial holonomy for such a quotient, under which one can collapse tori in the quotient to obtain another special cube complex.
\subsection{The branched complex}
\label{subsec_branched complex basics}
Let $Y(\Ga)$ be a blow-up building as in Definition \ref{construction} and let $|\B|$ be the Davis realization of the right-angled building associated with $G(\Ga)$. In light of Section \ref{subsec_special and commensurability}, we assume $H$ acts geometrically on $Y(\Ga)$ by label-preserving cubical isomorphisms. A \textit{branched complex of type $\Ga$} is the orbifold $K(\Ga)=Y(\Ga)/H$. It is \textit{torsion free} if $H$ is torsion free (in this case the action $H\acts Y(\Ga)$ is free), and it is \textit{special} if $H$ is torsion free and $K(\Ga)$ is a special cube complex. If $H$ is torsion free, and $\Ga$ is a clique or a point, then the corresponding branched complex is also called a \textit{branched torus} or a \textit{branched circle}. The \textit{core} of a branched torus is the quotient of the core of $Y(\Ga)$.
\begin{lem}
\label{rank and label preserving}
$H$ preserves the rank and label of vertices in $Y(\Ga)$.
\end{lem}
\begin{proof}
The discussion before Lemma \ref{labelling diference} implies that for any edge $e\subset Y(\Ga)$ with endpoints $v_1$ and $v_2$, one can deduce the label of $v_2$ from the label of $v_1$, the label of $e$, and the ranks of $v_1$ and $v_2$. Thus it suffices to prove $H$ preserves the rank of vertices in $Y(\Ga)$.
Let $\Ga=\Ga_1\circ\Ga_2\circ\cdots\circ\Ga_n$ be the join decomposition of $\Ga$ and pick a standard $\Ga_{i}$-component $Y_i$ in $Y(\Ga)$. Then Lemma \ref{product decomposition of standard components} (2) implies that $\prod_{i=1}^{n}p_{i}:Y(\Ga)\to\prod_{i=1}^{n} Y_i$ is a cubical isomorphism, where $p_i:Y(\Ga)\to Y_i$ is the $CAT(0)$ projection. For each $i$, all $\Ga_i$-components are parallel to each other (see the discussion before Lemma \ref{Ga'-component}), and $H$ permutes these $\Ga_i$-components (since $H$ is label-preserving). Thus $H$ respects the product decomposition $Y(\Ga)\cong\prod_{i=1}^{n} Y_i$ and induces the trivial permutation of the factors. Note that the label of a vertex $y\in Y(\Ga)$ is the clique spanned by the union of the labels of $p_{i}(y)$ for $1\le i\le n$. Thus it suffices to prove the lemma when $n=1$. If $\Ga$ is discrete, then $Y(\Ga)$ is a tree. The label-preserving condition implies $H$ preserves rank 0 vertices. If $\Ga$ is not discrete, then \cite[Theorem 5.24]{cubulation} implies $H$ preserves the rank of all vertices in $Y(\Ga)$.
\end{proof}
It follows from the above lemma that the action $H\acts Y(\Ga)$ descends to an action $H\acts|\B|$ through the canonical projection $\pi:Y(\Ga)\to |\B|$. Moreover, the action $H\acts|\B|$ preserves the labelling and rank of vertices.
\begin{lem}
\label{finite index subgroup without inversion}
There is a finite index subgroup $H'\le H$ such that $H'\acts Y(\Ga)$ is an action without inversion in the sense that if $h\in H'$ fixes a cube in $Y(\Ga)$, then it fixes the cube pointwise.
\end{lem}
\begin{proof}
Pick a vertex $u\in\Ga$. Let $\h_u$ be the collection of hyperplanes in $Y(\Ga)$ that are labelled by $u$ (see Definition~\ref{edge and hyperplane labelling}). Then distinct elements in $\h_u$ have empty intersection. It follows that the dual cube complex to the wall space $(Y(\Ga),\h_u)$ is a tree, which we denote by $T$. In other words, $T$ has a vertex for each connected component of $Y(\Ga)\setminus \h_u$, and an edge when one travels from one component to another by crossing a hyperplane.
Since $H$ is label-preserving, it permutes elements in $\h_u$. Thus there is an induced action $H\acts T$. Up to passing to an index 2 subgroup, we assume $H$ acts on $T$ without inversion. Thus $H\acts Y(\Ga)$ does not flip any edge in $Y(\Ga)$ labelled by $u$. By repeating the above argument for each vertex in $\Ga$, there exists a finite index subgroup $H'\le H$ which does not flip any edge in $Y(\Ga)$. Since $H'$ is label-preserving, it acts on $Y(\Ga)$ without inversion.
\end{proof}
For comparison, we look at the action $H\acts|\B|$. It preserves the rank of vertices and the label of edges, hence it is already an action without inversion.
From now on, we assume $H$ acts on $Y(\Ga)$ without inversion. Then the cube complex structure on $Y(\Ga)$ descends to $K(\Ga)$, and there is a well-defined labelling of edges of $Y(\Ga)/H$ by vertices of $\Ga$. Moreover, each vertex of $K(\Ga)$ has a well-defined rank and label by Lemma \ref{rank and label preserving}. Similarly, we can define a cube complex structure on $|\B|/H$, as well as a labelling of vertices and edges and a rank of vertices.
As cube complexes, $K(\Ga)$ and $|\B|/H$ may not be non-positively curved. For example, let $\Ga$ be an edge; then $X(\Ga)\cong \E^2$. Suppose $H$ acts on $\E^{2}$ by the translations generated by $(1,1)$ and $(1,-1)$. Then we can obtain $|\B|/H$ by taking two unit squares and identifying them along two consecutive edges.
\begin{lem}
\label{inverse image of Ga'-component}
Let $L$ be a $\Ga'$-component in $|\B|/H$ and let $K$ be a connected component of $p^{-1}(L)$, where $p:|\B|\to |\B|/H$. Let $H_K\le H$ be the stabilizer of $K$. Then the natural map $i:K/H_K\to L$ is a cubical isomorphism. In particular, $p$ maps $\Ga'$-components to $\Ga'$-components. A similar statement holds for $\Ga'$-components in $K(\Ga)$.
\end{lem}
\begin{proof}
Note that each connected component of $p^{-1}(L)$ is a $\Ga'$-component. The cube complex structure of $K$ descends to $K/H_K$, and $i$ is a cubical map which maps cubes in $K/H_K$ to cubes in $L$ of the same dimension. To show $i$ is surjective, it suffices to show $p:p^{-1}(L)\to L$ is surjective. Since the action of $H$ respects cubical structure, for each point $x\in |\B|/H$ and $y\in p^{-1}(x)$, we can lift each piecewise linear path starting at $x$ to a path starting at $y$. Thus the surjectivity follows. For the injectivity of $i$, it suffices to show for any $h\in H$, if $h\cdot K\cap K\neq\emptyset$, then $h\in H_K$. This is true because $H$ is label-preserving. The $K(\Ga)$ case is similar.
\end{proof}
We define \textit{standard $\Ga'$-components} in $K(\Ga)$ (or $|\B|/H$) to be those $\Ga'$-components which contain a rank 0 vertex. Since the canonical projection $\pi:Y(\Ga)\to |\B|$ is $H$-equivariant, it induces a cubical map $\pi_{H}:K(\Ga)\to |\B|/H$.
\begin{remark}
\label{surjection on fundamental groups}
We define a natural map $h_{\ast}:H\to \pi_1(|\B|/H)$ as follows. Pick a base point $\star\in|\B|/H$ and one of its lifts $x\in|\B|$. For each $h\in H$, we pick a path $\omega_h$ connecting $x$ and $h\cdot x$, and map $h$ to the element in $\pi_1(|\B|/H,\star)$ represented by the image of $\omega_h$ in $|\B|/H$. Since $|\B|$ is simply connected, $h_{\ast}$ is well-defined and is a group homomorphism. Moreover, since we can lift piecewise linear paths from $|\B|/H$ to $|\B|$, $h_{\ast}$ is surjective.
When $H$ is torsion free, $h_{\ast}$ can be defined alternatively as follows. We can pick a base point $\bar{\star}\in K(\Ga)$ such that $\pi_{H}(\bar{\star})=\star$ and identify $\pi_1(K(\Ga),\bar{\star})$ with $H$ by choosing a lift of $\bar{\star}$ in $Y(\Ga)$. Then the map $(\pi_{H})_{\ast}:\pi_1(K(\Ga),\bar{\star})\to\pi_1(|\B|/H,\star)$ coincides with $h_{\ast}$ up to conjugation.
\end{remark}
\begin{lem}\
\label{correspondence between standard components downstairs}
\begin{enumerate}
\item The inverse image of a standard $\Ga'$-component in $|\B|/H$ under $\pi_H$ is a standard $\Ga'$-component.
\item The $\pi_H$-image of a standard $\Ga'$-component is a standard $\Ga'$-component.
\end{enumerate}
Thus $\pi_H$ induces a 1-1 correspondence between standard $\Ga'$-components in $K(\Ga)$ and standard $\Ga'$-components in $|\B|/H$.
\end{lem}
\begin{proof}
Note that for any edge $e\subset K(\Ga)$, either $\pi_{H}(e)$ is a point, or $\pi_{H}(e)$ is an edge with the same label as $e$. Moreover, $\pi_H$ induces a bijection between the rank 0 vertices of $K(\Ga)$ and those of $|\B|/H$ (since this is true for the canonical projection $\pi:Y(\Ga)\to |\B|$). Thus (2) follows from (1). Now we prove (1). Consider the following commuting diagram.
\begin{center}
$\begin{CD}
Y(\Ga) @>\pi>> |\B|\\
@VVqV @VVpV\\
K(\Ga) @>\pi_H>> |\B|/H
\end{CD}$
\end{center}
Pick a standard $\Ga'$-component $L\subset|\B|/H$. Then $p^{-1}(L)$ is a disjoint union of standard $\Ga'$-components. Lemma \ref{inverse image of Ga'-component} implies $H$ acts transitively on the components of $p^{-1}(L)$. By Lemma \ref{correspondence of standard components}, $\pi^{-1}(p^{-1}(L))$ is also a disjoint union of standard $\Ga'$-components, and $H$ acts transitively on these components. Thus $q(\pi^{-1}(p^{-1}(L)))=\pi^{-1}_{H}(L)$ is a standard $\Ga'$-component.
\end{proof}
The following is a consequence of Lemma \ref{intersection of standard components}.
\begin{lem}
\label{intersection of standard components downastairs}
For $1\le i\le n$, let $L_i$ be a standard $\Ga_i$-component, where the $L_i$ all lie in $K(\Ga)$ or all lie in $|\B|/H$. If $\cap_{i=1}^{n}L_i\neq\emptyset$, then it is a disjoint union of standard $\cap_{i=1}^{n}\Ga_i$-components.
\end{lem}
\subsection{Trivial holonomy and specialness}
\label{subsec_trivial holonomy and specialness}
For each vertex $v\in \P(\Ga)$, let $P_v$ be as in Definition \ref{v-parallel set} and let $\beta_{P_v}$ be as in Definition \ref{def_beta_K}. Suppose $H_v\le H$ is the stabilizer of $\beta_{P_v}$ and let $\beta_{P_v}=\beta_v\times \beta^{\perp}_v$ be as in Lemma \ref{parallel set in blow-up building}. Since $H_v$ preserves this decomposition, there is a factor action $\rho_v:H_v\acts \beta_{v}$. The action $H_v\acts \beta_{P_v}$ is \textit{reducible} if there is a decomposition $H_v=L_v\oplus\Z$ such that $L_v$ acts trivially on the $\beta_{v}$ factor and $\Z$ acts trivially on the $\beta^{\perp}_v$ factor.
\begin{lem}
\label{intersection and normal subgroup for reducible action}
Let $H'_v$ be a finite index torsion free normal subgroup of $H_v$ and let $H_1$ and $H_2$ be two finite index subgroups of $H'_v$. If the induced action $H_i\acts \beta_{P_v}$ is reducible for $i=1,2$, then
\begin{enumerate}
\item the induced action of $H_1\cap H_2$ is also reducible;
\item the induced action of the largest normal subgroup of $H_v$ contained in $H_1$ is reducible.
\end{enumerate}
\end{lem}
\begin{proof}
Since $H'_v$ is torsion free, the action $H'_v\acts \beta_{P_v}$ is faithful. Then (1) follows readily from the definition. Note that for any $g\in H_v$, $g H_1g^{-1}\subset H'_v$. Moreover, $gH_1g^{-1}$ is reducible. Thus (2) follows from (1).
\end{proof}
We caution the reader that Lemma \ref{intersection and normal subgroup for reducible action} is not true if we drop the first sentence of the lemma. For example, one can take $H_v=\Z\oplus\Z \oplus\Z/2\Z$ acting on $\Bbb R^2$, where the first and second $\Z$ factors act by translation along the $x$-axis and $y$-axis respectively, and the $\Z/2\Z$ factor acts trivially. Take $H_1=\langle (1,0,0)\rangle\oplus \langle (0,1,0)\rangle$ and $H_2=\langle (1,0,1)\rangle\oplus \langle (0,1,1)\rangle$. It is easy to check that $H_1$ and $H_2$ are reducible, but $H_1\cap H_2$ is not reducible.
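For concreteness, here is the computation behind the last claim, taking $\beta_{P_v}=\Bbb R^2$ with $\beta_v$ the $x$-axis and $\beta^{\perp}_v$ the $y$-axis (the roles of the two axes are symmetric). An element $m\cdot(1,0,1)+n\cdot(0,1,1)=(m,n,m+n \bmod 2)$ of $H_2$ lies in $H_1$ exactly when $m+n$ is even, so
\[
H_1\cap H_2=\{(m,n,0)\mid m+n\in 2\Z\}=\langle (1,1,0)\rangle\oplus\langle (1,-1,0)\rangle.
\]
The elements of $H_1\cap H_2$ acting trivially on the $x$-axis form $\langle (0,2,0)\rangle$, and those acting trivially on the $y$-axis form $\langle (2,0,0)\rangle$; any decomposition $L\oplus\Z$ as in the definition of a reducible action would be contained in the sum of these two subgroups, which does not contain $(1,1,0)$, so $H_1\cap H_2$ is not reducible. On the other hand, $H_1$ is visibly reducible, and $H_2=\langle (0,1,1)\rangle\oplus\langle (1,0,1)\rangle$ is reducible since the first factor acts trivially on the $x$-axis and the second acts trivially on the $y$-axis.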
Pick a vertex $v\in\Ga$. It follows from Lemma \ref{Ga'-component} (2) and Lemma \ref{characterization of standard components} that each $St(v)$-component in $Y(\Ga)$ is of the form $\beta_{P_w}=\beta_w\times \beta^{\perp}_w$ for some vertex $w\in\P(\Ga)$, hence each $St(v)$-component $L\subset K(\Ga)$ is of the form $\beta_{P_w}/H_w$ by Lemma \ref{inverse image of Ga'-component}. If $K(\Ga)$ is special (recall that we require $H$ to be torsion free in our definition of specialness), then Lemma \ref{orientation and specialness} implies that $L$ is a fibre bundle over a branched circle, where the fibres come from the $\beta^{\perp}_w$ factor in $\beta_{P_w}$. The \textit{$v$-holonomy} of $L$ is defined to be the holonomy group of the connection on this fibre bundle induced by the cube complex structure on $L$. Note that $L$ has trivial $v$-holonomy if and only if $L$ is isomorphic (as a cube complex) to the product of a $v$-component in $L$ and a $lk(v)$-component in $L$; such a splitting is called a \textit{$v$-splitting}. Since the holonomy group is finite, $L$ has a finite sheet cover with trivial $v$-holonomy. The \textit{$v$-holonomy} of $K(\Ga)$ is the collection of the $v$-holonomies of all of its $St(v)$-components.
\begin{definition}
$K(\Ga)$ \textit{has trivial holonomy} if $K(\Ga)$ is special and has trivial $v$-holonomy for each vertex $v\in\Ga$. Similarly, if $S'(\Ga)$ is a finite cover of the Salvetti complex $S(\Ga)$, we define the $v$-holonomy of its $St(v)$-components in the same way. Again, if all such holonomies are trivial, then $S'(\Ga)$ \textit{has trivial holonomy}.
\end{definition}
\begin{remark}
We caution the reader that the notion of trivial holonomy defined here is different from Haglund's notion \cite[Definition 5.6]{haglund2006commensurability} in the study of right-angled buildings. In particular, our notion is not stable under passing to subgroups, while Haglund's notion is.
\end{remark}
Suppose $K(\Ga)$ is special. Then $K(\Ga)$ has trivial holonomy if and only if $H_w\acts \beta_{P_w}$ is reducible for each vertex $w\in\P(\Ga)$. This, together with Lemma \ref{intersection and normal subgroup for reducible action}, implies the following result.
\begin{lem}
\label{normal subgroup and trivial holonomy}
Let $H'$ be a finite index torsion free normal subgroup of $H$. Suppose there exist finite index subgroups $H_1,H_2\le H'$ such that $Y(\Ga)/H_i$ has trivial holonomy for $i=1,2$. Then $Y(\Ga)/(H_1\cap H_2)$ has trivial holonomy. In particular, if $N\vartriangleleft H$ is the largest normal subgroup of $H$ contained in $H_1$, then $Y(\Ga)/N$ has trivial holonomy.
\end{lem}
\begin{lem}
\label{product decomposition of actions in building}
Let $L$ be a $\Ga'$-component in $|\B|$. Suppose $\Ga'$ admits a join decomposition $\Ga'=\Ga_1\circ \Ga_2$ where $\Ga_1$ is a clique with its vertices denoted by $\{v_i\}_{i=1}^{n}$. Let $L=\prod_{i=1}^{n}L_i\times L_{n+1}$ be the product decomposition such that $L_{i}$ corresponds to $v_i$ for $1\le i\le n$ and $L_{n+1}$ corresponds to $\Ga_2$. Let $H_{L}\le H$ be the stabilizer of $L$. If $K(\Ga)$ has trivial holonomy, then $H_L=\oplus_{i=1}^{n+1}H_i$ such that $H_i$ acts trivially on $L_j$ if $i\neq j$, and $H_i\cong \Z$ for $1\le i\le n$.
\end{lem}
\begin{proof}
First we look at the case $\Ga_1=\{v\}$ and $\Ga_2=lk(v)$. Then $L$ is a standard component by Lemma \ref{Ga'-component} (2) and $\pi^{-1}(L)=\beta_{P_w}$ for some vertex $w\in\P(\Ga)$ by Lemma \ref{correspondence of standard components} and Lemma \ref{characterization of standard components} (here $\pi$ is the canonical projection). Moreover, the stabilizer of $\pi^{-1}(L)$ is exactly $H_{L}$. Since $K(\Ga)$ has trivial holonomy, the action $H_{L}\acts \beta_{P_w}$ is reducible. Since $\pi$ respects the product decomposition of $\beta_{P_w}$, the action $H_{L}\acts L$ has the required decomposition. Next we look at the case where $\Ga_1=\{v\}$ and $\Ga_2$ is any induced subgraph of $lk(v)$; this follows from the previous case by considering the $St(v)$-component that contains $L$. In general, for each vertex $v_i\in\Ga_1$, $\Ga'$ can be written as a join of $\{v_i\}$ and an induced subgraph of $lk(v_i)$. Thus we can induct on the number of vertices in $\Ga_1$ and apply the second case to reduce the number of vertices.
\end{proof}
\begin{lem}
\label{trivial holonomy and special}
If $K(\Ga)$ has trivial holonomy, then $|\B|/H$ is a compact special cube complex.
\end{lem}
\begin{proof}
First we show $|\B|/H$ is a non-positively curved cube complex. Let $x\in |\B|$ be a vertex of rank $n$ and let $p:|\B|\to |\B|/H$ be the projection map. It suffices to show the link of $p(x)$ in $|\B|/H$ is flag.
Suppose $x$ is labelled by $\Delta_x$ with its vertex set denoted by $\{v_i\}_{i=1}^{n}$. Let $\Delta^{\perp}_x\subset\Ga$ be the induced subgraph spanned by the vertices of $\Ga$ which are adjacent to every vertex in $\Delta_x$, and let $L$ be the $\Delta_{x}\circ\Delta^{\perp}_x$-component containing $x$. For each $1\le i\le n$, let $L_i$ be the $v_i$-component that contains $x$. Then each $L_i$ is an infinite tree of diameter 2. Let $L_{n+1}$ be the $\Delta^{\perp}_x$-component that contains $x$. Then $L$ admits a product decomposition $L=\prod_{i=1}^{n}L_i\times L_{n+1}$. By the construction of $|\B|$, each edge in $|\B|$ which contains $x$ is labelled by a vertex in $\Delta_{x}\circ\Delta^{\perp}_x$, thus a small neighbourhood of $x$ in $|\B|$ is contained in $L$. Let $H_L\le H$ be the stabilizer of $L$. By Lemma \ref{inverse image of Ga'-component}, it suffices to show the link of $p(x)$ in $L/H_L$ is flag.
By Lemma \ref{product decomposition of actions in building}, $H_L=\oplus_{i=1}^{n+1}H_i$ such that $H_i$ acts trivially on $L_j$ if $i\neq j$, and $H_i\cong \Z$ for $1\le i\le n$. Then $L/H_L$ and $\prod_{i=1}^{n+1} L_i/H_i$ are isomorphic as cube complexes. We represent $x$ as $(x_1,x_2,\cdots,x_{n+1})$ with respect to the product decomposition of $L$. Note that distinct edges in $L_{n+1}$ that contain $x_{n+1}$ have distinct labels (in fact they are in 1-1 correspondence with the vertices of $\Delta^{\perp}_x$). Since the action $H_{n+1}\acts L_{n+1}$ is label-preserving, any element in $H_{n+1}$ which fixes $x_{n+1}$ also fixes each edge emanating from $x_{n+1}$, hence fixes a small neighbourhood of $x_{n+1}$. Thus the link of $p(x_{n+1})$ in $L_{n+1}/H_{n+1}$ is isomorphic to the link of $x_{n+1}$ in $L_{n+1}$, hence is flag. On the other hand, $L_i/H_i$ is a tree for each $1\le i\le n$. Thus the link of $p(x)$ in $L/H_L$ is flag.
Now we show $|\B|/H$ is special. We can orient each edge of $|\B|$ from the vertex of lower rank to the vertex of higher rank. This orientation is compatible with parallelism. Since $H$ preserves the rank of vertices, it preserves this orientation, hence hyperplanes in $|\B|/H$ are two-sided. They do not self-intersect since the action $H\acts|\B|$ preserves the labelling of hyperplanes. Inter-osculation is ruled out by Lemma \ref{Ga'-component} (3).
It remains to show hyperplanes in $|\B|/H$ do not self-osculate. Pick a hyperplane $h\subset |\B|/H$ and let $v$ be the label of an edge dual to $h$ (all edges dual to $h$ have the same label). Then Lemma \ref{Ga'-component} (3) implies there is a $St(v)$-component $L'\subset |\B|/H$ such that $L'$ contains each cube which intersects $h$. It suffices to show $h$ does not self-osculate in $L'$. Lemma \ref{product decomposition of actions in building} implies $L'$ admits a product decomposition $L'\cong L'_1\times L'_2$ induced by the join decomposition $St(v)=\{v\}\circ lk(v)$. Note that $L'_1$ is a finite tree of diameter 2 and $h$ is dual to some edge in the $L'_1$ factor; thus $h$ does not self-osculate.
\end{proof}
\begin{definition}
\label{reduction}
For each special $K(\Ga)$ we associate a cube complex, which is called the \textit{reduction} of $K(\Ga)$ and is denoted by $S_K(\Ga)$, as follows. Let $f:X(\Ga)^{(0)}\to Y(\Ga)$ be the bijection between $X(\Ga)^{(0)}$ and rank 0 vertices of $Y(\Ga)$ in Definition \ref{construction}. Recall that we have identified $X(\Ga)^{(0)}$ with $G(\Ga)$, thus $f$ induces an action $H\acts G(\Ga)$ by flat-preserving bijections. Lemma \ref{orientation and specialness} and Theorem \ref{conjugate to left translation} imply that up to pre-composing $f$ with a flat-preserving bijection, we can assume $H\acts G(\Ga)$ is an action by left translations. Using the identification $X(\Ga)^{(0)}\cong G(\Ga)$ again, we have an isometric action $H\acts X(\Ga)$. Define $S_K(\Ga)=X(\Ga)/H$. Note that $S_K(\Ga)$ is a finite sheet cover of the Salvetti complex $S(\Ga)$.
\end{definition}
Pick a vertex $w\in\P(\Ga)$. Note that $f:P_{w}^{(0)}\cong \Z_{w}\times Q_{w}^{(0)}\to \beta_{P_w}\cong\beta_w\times \beta^{\perp}_{w}$ respects the product decompositions. Thus $H_w\acts \beta_{P_w}$ is reducible if and only if $H_w\acts P_{w}^{(0)}$ is reducible. Hence $K(\Ga)$ has trivial holonomy if and only if its reduction $S_K(\Ga)$ has trivial holonomy.
Let $\mathcal{C}_K$ be the category whose objects are standard $\Ga'$-components in finite covers of $K(\Ga)$ ($\Ga'$ can be any induced subgraph of $\Ga$), and morphisms are label and rank preserving local isometries. Let $\mathcal{C}_S$ be the category whose objects are $\Ga'$-components in finite covers of the reduction $S_K(\Ga)$, and morphisms are label-preserving local isometries. Note that $f$ is $H$-equivariant, and it maps the $0$-skeleton of a $\Ga'$-component in $X(\Ga)$ to vertices of rank 0 in a standard $\Ga'$-component of $Y(\Ga)$. Thus the following holds.
\begin{lem}
\label{trivial holonomy of reduction}
The map $f$ induces a functor $\Phi_f$ from $\mathcal{C}_S$ to $\mathcal{C}_K$. Moreover, an object in $\mathcal{C}_S$ has trivial holonomy if and only if its image in $\mathcal{C}_K$ has trivial holonomy.
\end{lem}
\subsection{Coverings of complexes with trivial holonomy}
\begin{lem}
\label{trivial holonomy and completion}
Suppose $K(\Ga)$ has trivial holonomy. Let $L\subset K(\Ga)$ be a $\Ga'$-component and let $L'\to L$ be a finite cover with trivial holonomy. Then the canonical completion $\C(L',K(\Ga))$ also has trivial holonomy. Moreover, $L'$ is wall-injective in $\C(L',K(\Ga))$.
\end{lem}
\begin{proof}
Let $\Lambda$ be the intersection graph of $K(\Ga)$. Recall that $\C(L',K(\Ga))$ is the pull-back defined as follows (recall that we require $K(\Ga)$ to be special in our definition of trivial holonomy).
\begin{center}
$\begin{CD}
\C(L',K(\Ga)) @>>> \C(L',S(\Lambda))\\
@VcVV @VbVV\\
K(\Ga) @>a>> S(\Lambda)
\end{CD}$
\end{center}
Pick a $St(v)$-component $N\subset K(\Ga)$. Since $K(\Ga)$ has trivial holonomy, let $N=N_1\times N_2$ be the $v$-splitting of $N$, where $N_1$ is a $\{v\}$-component in $N$. Note that $a(N)=a(N_1)\times a(N_2)$ and $a|_{N}$ splits as a product of two maps. Let $N'\subset \C(L',S(\Lambda))$ be a lift of $a(N)\subset S(\Lambda)$. We claim $N'$ has a cubical splitting similar to that of $a(N)$ and that $b|_{N'}$ splits as a product of two maps. One then deduces from this claim and the definition of the pull-back that each component of $c^{-1}(N)$ admits a similar cubical splitting, which implies $\C(L',K(\Ga))$ has trivial holonomy.
Now we prove the claim. It follows from the definition of $\C(L',S(\Lambda))$ that $N'$ is of the form $\C(N'\cap L',a(N))$, where the map $N'\cap L'\to a(N)$, which we denote by $i$, is the restriction of the map $L'\to L\stackrel{a}{\to} S(\Lambda)$ to $N'\cap L'$. By Lemma \ref{product and canonical completion}, it suffices to show $i$ has a splitting which is compatible with $a(N)=a(N_1)\times a(N_2)$. Let $p:L'\to L$ be the covering map. Note that $p(N'\cap L')$ is a connected component in the wall projection $\wpj_{K(\Ga)} (N\to L)$. If no edge of $p(N'\cap L')$ is parallel to an edge in $N_1$, then the image of $i$ is contained in an $a(N_2)$-slice of $a(N)$ and $i$ splits trivially. If there is an edge $e\subset p(N'\cap L')$ parallel to an edge in $N_1$, then $e\subset N$, and we deduce that $p(N'\cap L')\subset N$. Thus $N'\cap L'$ is contained in some $St(v,\Ga')$-component of $L'$. Since $L'$ has trivial holonomy, this component admits a $v$-splitting. Since $N'\cap L'$ is locally convex, it inherits a splitting, and $i$ also splits as required.
Since each $St(v)$-component of $K(\Ga)$ has a $v$-splitting, the assumption of Lemma \ref{wall-injective} is satisfied. Thus $L'$ is wall-injective in $\C(L',K(\Ga))$.
\end{proof}
\begin{lem}
\label{finite cover with trivial holonomy}
Suppose $K(\Ga)$ is special. Then $K(\Ga)$ has a finite cover which has trivial holonomy.
\end{lem}
\begin{proof}
By Lemma \ref{trivial holonomy of reduction}, we can assume $K(\Ga)$ is a finite cover of $S(\Ga)$. We induct on the number of vertices in $\Ga$. The case where $\Ga$ has only one vertex is trivial. If there exists a vertex $v\in\Ga$ such that $\Ga=St(v)$, then up to passing to a finite sheet cover we can assume $K(\Ga)$ is isomorphic to a product of a circle and a $lk(v)$-component. By induction, the $lk(v)$-component has a finite cover with trivial holonomy, thus $K(\Ga)$ has the required cover. Now we assume $St(v)\subsetneq\Ga$ for every vertex $v\in\Ga$.
By induction, there exists a finite sheet cover $A_v$ of the Salvetti complex $S(St(v))$ such that $A_v$ has trivial holonomy and $A_v$ factors through each $St(v)$-component of $K(\Ga)$. By Lemma \ref{normal subgroup and trivial holonomy}, we can assume $A_v$ is a regular cover. Let $K_v$ be the pull-back defined as follows.
\begin{center}
$\begin{CD}
K_v @>>> \C(A_v,S(\Ga))\\
@VVV @VVV\\
K(\Ga) @>>> S(\Ga)
\end{CD}$
\end{center}
Then each $St(v)$-component of $K_v$ has trivial $v$-holonomy.
We claim that if we already know $K(\Ga)$ has trivial $v'$-holonomy for a vertex $v'\neq v$, then $K_v$ also has trivial $v'$-holonomy. The lemma then follows by repeating the above argument for each vertex in $\Ga$. The claim itself follows from the fact that $\C(A_v,S(\Ga))$ has trivial holonomy (the proof is identical to that of Lemma \ref{trivial holonomy and completion}) together with Lemma \ref{intersection and normal subgroup for reducible action}.
\end{proof}
\section{Control of retraction image}
\subsection{Wall projections in branched complexes}
\label{subsec_wall projection}
Let $K(\Ga)$, $Y(\Ga)$, $H$ and $|\B|$ be as in Section \ref{sec_branched complexes with trivial holonomy}. In this section we study wall projections in $K(\Ga)$. Under a hyperbolicity assumption, a powerful tool for this purpose is the following connected intersection theorem proved by Haglund and Wise.
\begin{thm}
\label{haglund-wise connected intersection}
$($\cite[Theorem 4.23]{haglund2012combination}$)$ Let $X$ be a compact special cube complex whose universal cover is Gromov-hyperbolic. Let $(B_0,...,B_n,A)$ be connected locally convex subcomplexes containing the basepoint of $X$. Suppose that $A\subset\cap_{j=0}^{n}B_j$.
Then there is a based finite cover $\bar{X}$ and base elevations $\bar{B}_0,...,\bar{B}_n$ with $\bar{A}\cong A$, such that $\cap_{j\in J}\bar{B}_{j}$ is connected for each $J\subset\{0,...,n\}$.
\end{thm}
However, this result does not apply to $K(\Ga)$ directly since $Y(\Ga)$ is not hyperbolic in general. We first deal with this issue.
For a $CAT(0)$ cube complex $X$, we can endow each cube with the $l^{1}$-metric, which gives rise to an $l^{1}$-metric on $X$. An \textit{$l^{1}$-flat} in $X$ is an isometric embedding $\E^{n}\to X$ with respect to the $l^{1}$-metric such that its image is a subcomplex of $X$. An \textit{$l^{1}$-geodesic} is a 1-dimensional $l^{1}$-flat. The following is a version of Eberlein's theorem (see \cite[Theorem II.9.33]{bridson1999metric}) in the cubical setting.
\begin{lem}
\label{Eberlin}
Let $X$ be a proper and cocompact $CAT(0)$ cube complex. Then $X$ is Gromov-hyperbolic if and only if $X$ does not contain a 2-dimensional $l^1$-flat.
\end{lem}
\begin{proof}
It suffices to show the only if direction. Suppose $X$ is not Gromov-hyperbolic. Then \cite[Theorem C]{kleiner1999local} implies $X$ contains a 2-flat $F$ (with respect to the $CAT(0)$ metric). Let $L$ be the smallest convex subcomplex of $X$ that contains $F$. Then \cite[Corollary 3.5]{huang2015cocompactly} implies $L$ admits a splitting $L=K_1\times K_2\times\cdots\times K_{n}\times K$ such that $n\ge 2$ and each $K_i$ contains a geodesic line. Since each $K_i$ is uniformly locally finite, it is not hard to approximate a geodesic line in $K_i$ by an $l^{1}$-geodesic line. Hence $X$ contains a 2-dimensional $l^{1}$-flat, which is a contradiction.
\end{proof}
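To illustrate the difference between the two metrics and the approximation step above, consider the standard square tiling of $\E^2$, where the $l^1$-metric is given by $d((x_1,y_1),(x_2,y_2))=|x_1-x_2|+|y_1-y_2|$. The $CAT(0)$ geodesic line $y=x$ is not an $l^1$-geodesic in the above sense, since it is not a subcomplex; on the other hand, the $x$-axis and, more generally, any monotone staircase path along edges is an $l^1$-geodesic, because for two points on such a path the $l^1$-distance equals the length of the subpath joining them. In particular, the diagonal $CAT(0)$ geodesic lies within bounded distance of a staircase $l^1$-geodesic.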
\begin{lem}
\label{hyperbolicity of building quotient}
Suppose $K(\Ga)$ has trivial holonomy. If $\Ga$ does not contain any induced 4-cycle, then the universal cover $X$ of $|\B|/H$ does not contain a 2-dimensional $l^1$-flat, hence it is hyperbolic.
\end{lem}
\begin{proof}
$X$ has an induced edge-labelling from $|\B|/H$. Let $\ell\subset X$ be an $l^{1}$-geodesic line and let $\Ga_{\ell}\subset\Ga$ be the set of labels of edges of $\ell$. We claim the diameter of $\Ga_{\ell}$ is $\ge 2$. Suppose the contrary is true. Then there exists a clique $\Delta\subset\Ga$ which contains $\Ga_{\ell}$. It follows from Lemma \ref{inverse image of Ga'-component} and Lemma \ref{product decomposition of actions in building} that each $\Delta$-component in $|\B|/H$ is a product of trees of diameter 2, hence contractible. Thus each $\Delta$-component in $X$ is bounded, which contradicts the fact that one of them contains the $l^{1}$-geodesic line $\ell$.
Since $K(\Ga)$ has trivial holonomy, $|\B|/H$ is non-positively curved by Lemma \ref{trivial holonomy and special}. Thus by Lemma \ref{Eberlin}, it suffices to show $X$ cannot contain any 2-dimensional $l^{1}$-flat. Suppose the contrary is true and let $\ell_1$ and $\ell_2$ be two $l^{1}$-geodesic lines which span a 2-dimensional $l^1$-flat. Note that Lemma \ref{Ga'-component} (3) is true for $|\B|/H$, hence for $X$. Thus each vertex in $\Ga_{\ell_1}$ is adjacent to every vertex in $\Ga_{\ell_2}$. By the previous claim, we can find a pair of non-adjacent vertices in $\Ga_{\ell_1}$ and a pair of non-adjacent vertices in $\Ga_{\ell_2}$, which together give rise to an induced 4-cycle in $\Ga$.
\end{proof}
\begin{cor}
\label{connected intersection}
Suppose $K(\Ga)$ has trivial holonomy and $\Ga$ does not contain an induced 4-cycle. Let $(B_0,...,B_n,A)$ be a collection of standard components containing the basepoint of $K(\Ga)$. Suppose that $A\subset\cap_{j=0}^{n}B_j$.
Then there is a based finite cover $\bar{K}(\Ga)$ and base elevations $\bar{B}_0,...,\bar{B}_n$ with $\bar{A}\cong A$, such that $\cap_{j\in J}\bar{B}_{j}$ is connected for each $J\subset\{0,...,n\}$.
\end{cor}
\begin{proof}
Let $\pi_{H}:K(\Ga)\to |\B|/H$ be the map induced by the canonical projection. Suppose $\pi_{H}(B_i)=D_i$ and $\pi_{H}(A)=C$. Then $(D_0,...,D_n,C)$ are standard components in $|\B|/H$ by Lemma \ref{correspondence between standard components downstairs}; in particular, they are locally convex by Lemma \ref{convexity of components}. It follows from Lemma \ref{trivial holonomy and special} and Lemma \ref{hyperbolicity of building quotient} that $|\B|/H$ satisfies the assumption of Theorem \ref{haglund-wise connected intersection}. Thus there is a based finite cover $\bar{L}\to |\B|/H$ and based elevations $\bar{D}_0,...,\bar{D}_n$ with $\bar{C}\cong C$, such that $\cap_{j\in J}\bar{D}_{j}$ is connected for each $J\subset\{0,...,n\}$. Let $\bar{K}(\Ga)$ be the based cover of $K(\Ga)$ corresponding to the subgroup $H'=(\pi_{H})^{-1}_{\ast}(\pi_1(\bar{L},\star))$. This gives rise to the following commuting diagram between based spaces.
\begin{center}
$\begin{CD}
(\bar{K}(\Ga),\star) @>>\pi'> (\bar{L},\star)\\
@VVV @VVV\\
(K(\Ga),\star) @>>\pi_H> (|\B|/H,\star)
\end{CD}$
\end{center}
Let $(\bar{B}_0,...,\bar{B}_n,\bar{A})$ be the based lifts in $\bar{K}(\Ga)$. Then they are also standard components, $\pi'(\bar{B}_i)=\bar{D}_i$ and $\pi'(\bar{A})=\bar{C}$. Thus $\bar{A}\cong A$. Recall that $\pi_H$ induces a surjection on fundamental groups; hence $\pi'$ is exactly the $H'$-equivariant quotient of the canonical projection $\pi$ (see Remark \ref{surjection on fundamental groups}). The connectedness of $\cap_{j\in J}\bar{B}_{j}$ then follows from Lemma \ref{intersection of standard components downastairs} and Lemma \ref{correspondence between standard components downstairs}.
\end{proof}
\begin{cor}
\label{projections are circles}
Let $\Ga'$ be a graph without induced 4-cycles. Pick a vertex $v\in\Ga'$ and let $\Ga\subset\Ga'$ be the induced subgraph spanned by the vertices in $\Ga'\setminus\{v\}$. Let $\Ga_0=lk(v,\Ga')\subset\Ga$. Suppose $K=K(\Ga)$ is special and pick a base point $p\in K$. Let $i:A\to K$ be a local isometry such that
\begin{enumerate}
\item the image of $A$ is a standard $\Ga_0$-component which contains the base point $p$;
\item the map $i:A\to i(A)$ is a covering map of finite degree.
\end{enumerate}
Then there exists a finite cover $A_{0}\to A$ such that for any further finite cover $\bar{A}\to A_{0}$ with trivial holonomy, there exists a finite directly special cover $\bar{K}\to K$ and an embedding $\bar{A}\to \bar{K}$ such that they fit into the following commutative diagram:
\begin{center}
$\begin{CD}
\bar{A} @>>> \bar{K}\\
@VVV @VVV\\
A @>>> K
\end{CD}$
\end{center}
Moreover, the following statements are true.
\begin{enumerate}
\item All elevations of $A\to K$ to $\bar{K}$ are embedded.
\item $\bar{A}$ is wall-injective in $\bar{K}$.
\item Let $\bar{B}\subset\bar{K}$ be any $\Ga_0$-component distinct from $\bar{A}$. Then the wall projection $\wpj_{\bar{K}}(\bar{B}\to\bar{A})$ is a disjoint union of branched tori of various dimensions and isolated points.
\end{enumerate}
\end{cor}
In the following proof, for any vertex $v\in \Ga$, we always consider its link $lk(v)$ and its closed star $St(v)$ inside $\Ga$ rather than $\Ga'$.
\begin{proof}
By Lemma \ref{directly special}, we assume $K$ is directly special. Let $K_{1}$ be the canonical completion $\mathsf{C}(A,K)$. Take $K_{2}\to K$ to be the smallest regular cover factoring through each component of $K_{1}$ (more precisely, each component of $K_1$ gives rise to a finite index subgroup of $\pi_1(K,p)$, and $K_2$ is the regular cover corresponding to the intersection of all these subgroups, as well as their conjugates). It is clear that all elevations of $A\to K$ to $K_{2}$ are injective. Let $K_{3}\to K_2$ be a finite cover which has trivial holonomy (Lemma \ref{finite cover with trivial holonomy}). We can assume $K_3\to K$ is regular by Lemma \ref{normal subgroup and trivial holonomy}. Let $A_{0}\to K_{3}$ be an elevation of $A\to K$. We claim $A_{0}$ satisfies our condition.
Let $\C(\bar{A},K_3)$ be the canonical completion with respect to the map $\bar{A}\to A_0\to K_3$ and let $\hat{K}$ be its main component. Then $\hat{K}$ has trivial holonomy and $\bar{A}$ is wall injective in $\hat{K}$ by Lemma~\ref{trivial holonomy and completion}.
Suppose $\{v_{i}\}_{i=1}^{n}$ are the vertices in $\Ga_0$. For each $i$, let $\Ga_i\subset\Ga$ be the induced subgraph spanned by $St(v_i)\cup\Ga_0$. We claim $\Ga_i\cap \Ga_j=\Ga_0$ if $v_i$ and $v_j$ are not adjacent. Indeed, if there existed a vertex $u\in (\Ga_i\cap \Ga_j)\setminus\Ga_0$, then $u,v_i,v_j$ and $v$ would form an induced $4$-cycle in $\Ga'$, which yields a contradiction.
By Corollary \ref{connected intersection}, there exists a finite cover $\bar{K}\to\hat{K}$ in which we can find an elevation of $\bar{A}$ isomorphic to $\bar{A}$ (we also denote this elevation by $\bar{A}$) such that the standard $\Ga_i$-components $\bar{K}_i\subset\bar{K}$ containing $\bar{A}$ satisfy that $\cap_{i\in I}\bar{K}_i$ is connected for any $I\subset\{1,...,n\}$. Thus by Lemma \ref{intersection of standard components downastairs} and the previous paragraph, $\bar{K}_i\cap\bar{K}_j=\bar{A}$ if $v_i$ and $v_j$ are not adjacent. We claim $\bar{A}$ and $\bar{K}$ satisfy the requirements of the corollary.
Since the covering map $\bar{K}\to K$ factors through $K_2$, every elevation of $A$ to $\bar{K}$ is embedded. Since $\bar{K}$ covers $K$, it is directly special. Since $\bar{A}$ is wall-injective in $\hat{K}$ (Lemma \ref{wall-injective}), it is wall-injective in $\bar{K}$. Let $\bar{B}\neq\bar{A}$ be another $\Ga_0$-component. Pick two distinct edges $e_i$ and $e_j$ in $\wpj_{\bar{K}}(\bar{B}\to\bar{A})$ and suppose they are labelled by $v_i$ and $v_j$ in $\Ga_0$ respectively. Then $\bar{B}\subset\bar{K}_i$ and $\bar{B}\subset\bar{K}_j$. Thus $v_i=v_j$ or they are adjacent, for otherwise we would have $\bar{K}_i\cap\bar{K}_j=\bar{A}$, which contradicts $\bar{B}\neq\bar{A}$. Thus the labels of all edges in $\wpj_{\bar{K}}(\bar{B}\to\bar{A})$ are contained in a clique of $\Ga$. On the other hand, by the definition of wall projection, if each edge in a corner of some cube is contained in $\wpj_{\bar{K}}(\bar{B}\to\bar{A})$, then this cube and the smallest branched torus containing this cube are contained in $\wpj_{\bar{K}}(\bar{B}\to\bar{A})$. Thus (3) follows.
\end{proof}
\subsection{The modified completions and retractions}
\label{subsec_modified completeions and retractions}
Let $K=K(\Ga)=Y(\Ga)/H$ be a branched complex which is directly special. Recall that parallel edges have the same label; thus there is a well-defined labelling of the hyperplanes of $K$. Note that each edge $e\subset K$ is contained in a $v$-component ($v\in\Ga$ is the label of $e$), which is a branched circle. If $e$ is contained in the core of this branched circle, then $e$ is called a \textit{core} edge. Since any edge parallel to a core edge is itself a core edge, it makes sense to talk about \textit{core hyperplanes} in $K$. We orient each edge in $K$ such that
\begin{enumerate}
\item the orientation respects parallelism between edges;
\item for any circle in $K$ made of core edges, the orientations of the edges in the circle fit together to give an orientation of the circle.
\end{enumerate}
Condition (2) is possible by Lemma~\ref{orientation and specialness}. Namely, for each standard branched line in $Y(\Ga)$ we can choose an orientation of its core, and we can require that this choice is compatible with parallelism and is $H$-equivariant; it then descends to orientations of the corresponding circles in $K$.
Two hyperplanes of $K$ are \textit{equivalent} if and only if they are both core hyperplanes and they are dual to the same $v$-component for some vertex $v\in\Ga$. We claim this is indeed an equivalence relation. Suppose for $i=1,2$, $h$ and $h_i$ are core hyperplanes which are dual to the same $v_i$-component. Then $v_1=v_2$, and $h$ and $h_i$ are in the same $St(v_i)$-component. Thus $h,h_1$ and $h_2$ are in the same $St(v_1)$-component. Recall that each $St(v_1)$-component is of the form $\beta_{P_{\bar{v}}}/H_{\bar{v}}$ for some vertex $\bar{v}\in\P(\Ga)$ (Lemma \ref{Ga'-component} (2), Lemma \ref{characterization of standard components} and Lemma \ref{inverse image of Ga'-component}). Then $h_1$ and $h_2$ are equivalent by Lemma \ref{parallel set in blow-up building}.
For two equivalence classes of hyperplanes $\mathcal{C}_{1}$ and $\mathcal{C}_{2}$, if an element of $\mathcal{C}_{1}$ crosses an element of $\mathcal{C}_{2}$, then each element of $\mathcal{C}_{1}$ crosses every element of $\mathcal{C}_{2}$. Let $\Gamma_{K}$ be the graph on the equivalence classes of hyperplanes in $K$ in which two vertices are adjacent if and only if the corresponding equivalence classes cross.
For each class of hyperplanes $\mathcal{C}$, we define whether $\mathcal{C}$ \textit{$($directly or indirectly$)$ self-osculates} in the same way as at the beginning of Section~\ref{subsec_special cube complex}, except that we change the definition of $g_v$ there to be the graph made of all edges which are dual to some hyperplane in $\mathcal{C}$ and contain $v$. Similarly, the notion of \textit{inter-osculation} is well-defined for two classes of hyperplanes. It follows from our definition of the equivalence relation, as well as our choice of the edge orientations of $K$, that no equivalence class of hyperplanes directly self-osculates, and no two classes inter-osculate (note that Lemma \ref{Ga'-component} (3) is also true for $K(\Ga)$, and this excludes the inter-osculation of two classes).
There is a natural map $K\to S(\Gamma_{K})$ which sends an oriented edge $\vec{a}$ of $K$ to the oriented edge of $S(\Ga_K)$ corresponding to the equivalence class of the hyperplane dual to $\vec{a}$. The above discussion implies that this map is a local isometry. Suppose $A\subset K$ is a $\Ga'$-component for $\Ga'\subset\Ga$, and let $A\to K\to S(\Gamma_{K})$ be the composition map. Let $\mathsf{C'}(A,K)$ be the pullback which fits into the following diagram:
\begin{center}
$\begin{CD}
\mathsf{C'}(A, K) @>>> \mathsf{C}(A, S(\Gamma_{K}))\\
@VVV @VVV\\
K @>>> S(\Gamma_{K})
\end{CD}$
\end{center}
Here $\mathsf{C}(A, S(\Gamma_{K}))$ is the canonical completion defined in Section~\ref{subset_the canonical comletion}. An edge of $S(\Ga_K)$ is a \textit{core edge} if it arises from an equivalence class of core hyperplanes. Inverse images of core edges of $S(\Ga_K)$ are defined to be \textit{core edges} in $\mathsf{C}(A, S(\Gamma_{K}))$.
There is a natural copy of $A$ inside $\mathsf{C'}(A, K)$. Let $r:\mathsf{C}(A, S(\Ga_K))\to A$ be the canonical retraction. Note that if $e\subset S(\Ga_K)$ is a core edge, then its inverse image under the map $A\to S(\Ga_K)$ is a disjoint union of circles or isolated points; if $e$ is not a core edge, then its inverse image is a disjoint union of edges or points (since $K(\Ga)$ is directly special). It follows that $r$ is a cubical map. Moreover,
\begin{lem}
\label{retraction}
Let $e\subset \mathsf{C}(A, S(\Ga_K))$ be a core edge. If $r(e)$ is also an edge, then $r(e)=e$.
\end{lem}
Let $r'$ be the composition $\mathsf{C'}(A, K)\to \mathsf{C}(A,S(\Ga_K))\to A$ which is also a retraction. Note that $\mathsf{C'}(A,K)$ is different from $\mathsf{C}(A,K)$ in general. $\mathsf{C'}(A,K)$ is called the \textit{modified completion} and $r'$ is called the \textit{modified retraction}.
\begin{lem}
\label{modified wpj}
Let $D\subset K$ be a $\Lambda$-component for an induced subgraph $\Lambda\subset\Ga$ and let $\hat{D}$ denote the preimage of $D$ in $\mathsf{C'}(A,K)$. Then $r'(\hat{D})\subset\wpj_{K}(D\to A)$.
\end{lem}
\begin{proof}
By the definition of the pull-back, edges of $\mathsf{C'}(A,K)$ are of the form $(b_{1},b_{2})$, where $b_{1}\subset K$ and $b_{2}\subset \mathsf{C}(A,S(\Ga_K))$ are sent to the same edge of $S(\Ga_K)$. Moreover, $r'(b_{1},b_{2})=r(b_{2})$, where $r:\mathsf{C}(A,S(\Gamma_{K}))\to A$ is the canonical retraction. Suppose $b_1\subset D$ and $r(b_{2})=b'_{2}$ is an edge.
\textit{Case 1:} $b_2$ is not a core edge. Then $b_1$ is not a core edge. In this case, $b_2$ and $b'_2$ are mapped to the same edge in $S(\Ga_K)$, hence so are $b_1$ and $b'_2$. Since this edge is not a core edge, the corresponding equivalence class consists of a single hyperplane, so $b_1$ and $b'_2$ are parallel and $b'_2\subset\wpj_{K}(D\to A)$.
\textit{Case 2:} $b_2$ is a core edge. Then $b_1$ is also a core edge. Moreover $b_{2}=b'_{2}\subset A$ by Lemma \ref{retraction}. Thus the hyperplane dual to $b_{1}$ and the hyperplane dual to $b_{2}$ are in the same equivalence class. For $i=1,2$, let $C_i$ be the circle made of core edges that contains $b_i$. Then $\wpj_{K}(C_1\to C_2)=C_2$. Moreover, $C_2\subset A$ and $C_1\subset D$. Thus $b'_2\subset \wpj_{K}(D\to A)$.
\end{proof}
\begin{remark}
\label{core edge}
It follows from the above proof that if $e$ is an edge of $\hat{D}$ such that $r'(e)$ is a core edge in $A$, then $e$ has to be a core edge.
\end{remark}
\begin{lem}
\label{length of circle}
Let $C\subset K$ be a circle made of core edges. Then the inverse image of $C$ under $\mathsf{C'}(A,K)\to K$ is a disjoint union of circles whose combinatorial lengths are equal to the length of $C$.
\end{lem}
\begin{proof}
Suppose $\mathrm{length}(C)=l$. Let $e$ be the image of $C$ under $K\to S(\Ga_K)$ ($e$ is a single edge). We claim the inverse image of $e$ under $\mathsf{C}(A, S(\Ga_K))\to S(\Ga_K)$ consists of circles of length $1$ or $l$. It suffices to show the inverse image of $e$ under $A\to S(\Ga_K)$ consists of circles of length $l$ or isolated points. Suppose $C'\subset A$ is a circle mapped to $e$. Then hyperplanes dual to edges in $C'$ are equivalent to hyperplanes dual to edges in $C$, hence edges of $C$ and $C'$ are labelled by the same vertex $v\in\Ga$; moreover, $C$ and $C'$ are in the same $St(v)$-component. Thus they have the same length. Now the lemma follows from the claim and the construction of $\mathsf{C'}(A,K)$.
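In more detail, the inverse image of $C$ in $\mathsf{C'}(A,K)$ is the fibre product over $e$ of the $l$-fold covering $C\to e$ with the inverse image of $e$ in $\mathsf{C}(A,S(\Ga_K))$, which by the claim is a disjoint union of circles covering $e$ with degree $1$ or $l$. Since the fibre product of an $l$-fold cover and a $k$-fold cover of a circle is a disjoint union of $\mathrm{lcm}(k,l)$-fold covers, and $\mathrm{lcm}(1,l)=\mathrm{lcm}(l,l)=l$, every component of this inverse image is a circle of length $l$.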
\end{proof}
One can compare the above result with Remark~\ref{larger circle}.
\begin{cor}
\label{circle isomorphism}
Let $C\subset\mathsf{C'}(A,K)$ be a circle made of core edges and let $r':\mathsf{C'}(A,K)\to A$ be the modified retraction. Then either $r'(C)$ is a point, or $r'|_{C}$ is an isomorphism. More precisely, let $C_1\subset K$ be a circle made of core edges.
\begin{enumerate}
\item If $\wpj_{K}(C_1\to A)$ does not contain any edge, then $r'$ sends each component in the inverse image of $C_1$ under $\mathsf{C'}(A,K)\to K$ to a point.
\item If $\wpj_{K}(C_1\to A)$ contains at least one edge, then $r'$ maps each component as above either to a point or isometrically into $A$, and there exists at least one component for which the latter case occurs.
\end{enumerate}
\end{cor}
\begin{proof}
Suppose edges in $C_1$ are labelled by $v$. Then (1) follows by applying Lemma \ref{modified wpj} to the $v$-component that contains $C_1$. Under the assumption in (2), there exists a circle made of core edges $C_2\subset A$ such that $\wpj_{K}(C_1\to C_2)=C_2$. The images of $C_1$ and $C_2$ under $K\to S(\Ga_K)$ give rise to the same edge $e\subset S(\Ga_K)$. Thus the pair $(C_1,C_2)$ represents a circle in $\mathsf{C'}(A,K)$ which is mapped isometrically to $C_2$ under the map $r'$. The rest of (2) follows from the proof of Lemma~\ref{length of circle}.
\end{proof}
Let $A$ be as above. We now temporarily forget that $A$ sits inside the larger space $K$. We define two hyperplanes of $A$ to be \textit{equivalent} if and only if there exists a vertex $v\in\Ga'$ such that these two hyperplanes are dual to core edges of the same $v$-component. Let $\Gamma_{A}$ be the graph on the equivalence classes of hyperplanes in $A$ defined in the same way as $\Ga_K$. We define $\mathsf{C}'(A,A)$ to be the pullback that fits into the following diagram:
\begin{center}
$\begin{CD}
\mathsf{C'}(A,A) @>>> \mathsf{C}(A,S(\Gamma_{A}))\\
@VVV @VVV\\
A @>>> S(\Gamma_{A})
\end{CD}$
\end{center}
\begin{lem}
\label{inverse image}
If the inclusion $A\to K$ induces an injective map on equivalence classes of hyperplanes, then there is a canonical isomorphism $\phi$ from $\mathsf{C'}(A,A)$ to the inverse image of $A$ under the covering map $\mathsf{C'}(A,K)\to K$. Moreover, if $r'_{1}$ and $r'_{2}$ denote the modified retractions, then the following diagrams commute:
\begin{diagram}
\mathsf{C'}(A,A) &\rTo^{\phi} &\mathsf{C'}(A,K) &\ \ \ \ \ \ \ \ \ &\mathsf{C'}(A,A) &\rTo^{\phi} &\mathsf{C'}(A,K)\\
\dTo & &\dTo &\ \ \ \ \ \ \ \ \ & &\rdTo_{r'_{1}} &\dTo_{r'_{2}}\\
A &\rTo &K &\ \ \ \ \ \ \ \ \ & & &A
\end{diagram}
\end{lem}
\begin{proof}
The proof is similar to \cite[Lemma 3.13]{haglund2012combination}. By assumption, there is a natural embedding $S(\Gamma_{A})\to S(\Ga_K)$ (which may not be a local isometry). This induces an embedding $\mathsf{C}(A,S(\Gamma_{A}))\to \mathsf{C}(A,S(\Ga_K))$ such that the following diagram commutes ($r,r'$ are the canonical retractions):
\begin{diagram}
\mathsf{C}(A,S(\Gamma_{A})) &\rTo &\mathsf{C}(A,S(\Ga_K))\\
&\rdTo_{r'} &\dTo_{r}\\
& &A
\end{diagram}
Moreover, $\mathsf{C}(A,S(\Gamma_{A}))$ is the inverse image of $S(\Gamma_{A})$ under $\mathsf{C}(A,S(\Ga_K))\to S(\Ga_K)$. Now the lemma follows from the definition of pullback and modified retraction.
\end{proof}
\begin{lem}
The assumption of Lemma~\ref{inverse image} is satisfied if $A$ is wall-injective in $K$.
\end{lem}
\begin{proof}
Suppose there are two different classes of core hyperplanes $\mathcal{C}_1$ and $\mathcal{C}_2$ of $A$ which are mapped to the same class $\mathcal{C}$ of hyperplanes of $K$. Then there is a circle made of core edges $C\subset A$ such that $\mathcal{C}_1$ and $\mathcal{C}$ are made of hyperplanes dual to edges in $C$. Since $A$ is wall-injective, there is a 1-1 correspondence between elements of $\mathcal{C}_1$ and elements of $\mathcal{C}$. Thus there exist a hyperplane in $\mathcal{C}_1$ and a hyperplane in $\mathcal{C}_2$ which are mapped to the same hyperplane in $\mathcal{C}$. This yields a contradiction.
\end{proof}
\begin{remark}
\label{label-preserving isomorphism}
Let $A\to A'$ be a label-preserving isomorphism. Then it induces an isomorphism $S(\Gamma_{A})\to S(\Gamma_{A'})$ which fits into the diagram below on the left. This induces the commuting diagram in the middle, which gives rise to an isomorphism $\mathsf{C}'(A, A)\to \mathsf{C}'(A',A')$ that fits into the commuting diagram on the right, whose vertical maps can be taken to be either both covering maps or both modified retractions.
\begin{diagram}
A &\rTo &A' &\ \ \ &\C(A,S(\Ga_A)) &\rTo &\C(A',S(\Ga_{A'})) &\ \ \ &\mathsf{C'}(A,A) &\rTo &\mathsf{C'}(A',A')\\
\dTo & &\dTo &\ \ \ &\dTo & &\dTo &\ \ \ &\dTo & &\dTo\\
S(\Ga_A) &\rTo &S(\Ga_{A'}) &\ \ \ &S(\Ga_A) &\rTo &S(\Ga_{A'}) &\ \ \ &A &\rTo &A'
\end{diagram}
Actually, there is an injective homomorphism from the group of label-preserving automorphisms of $A$ to the group of label-preserving automorphisms of $\C'(A,A)$.
\end{remark}
\section{Construction of the finite cover}
\label{sec_construction of the finite cover}
\subsection{A graph of spaces}
Throughout this section, $\Ga$ will be a finite simplicial graph without induced 4-cycles, and $H$ will be a group acting on the blow-up building $Y(\Ga)$ geometrically by label-preserving automorphisms. Our goal is to show $H$ has a finite index torsion free subgroup $H'\le H$ such that $Y(\Ga)/H'$ is a special cube complex. This can be reduced to the following claim.
\begin{claim}
\label{induction claim}
For every vertex $u\in\Ga$, there exists a finite index torsion free subgroup $\bar{H}\le H$ such that for any vertex $\bar{u}\in\P(\Ga)$ labelled by $u$, the factor action (see Section~\ref{subsec_quasi action}) $\rho_{\bar{u}}:\bar{H}_{\bar{u}}\acts\Z_{\bar{u}}$ is conjugate to an action by translations.
\end{claim}
For each vertex of $\Ga$, we find a finite index subgroup of $H$ as in the above claim. Let $H'$ be the intersection of all these subgroups. Then $H'$ satisfies condition (1) of Lemma~\ref{orientation and specialness}, hence $Y(\Ga)/H'$ is a special cube complex.
We will induct on the number of vertices in $\Ga$, and assume the above claim is true for graphs with at most $n-1$ vertices. Let $\Ga$ be a graph with $n$ vertices and without induced $4$-cycles. Note that no induced subgraph of $\Ga$ contains an induced 4-cycle.
Pick a vertex $u\in\Ga$. Let $\Lambda\subset\Ga$ be the induced subgraph spanned by the vertices in $\Ga\setminus\{u\}$ and let $\Lambda_u\subset\Lambda$ be the induced subgraph spanned by the vertices adjacent to $u$ (it is possible that $\Lambda_u=\emptyset$ or $\Lambda_u=\Lambda$). By Lemma \ref{finite index subgroup without inversion}, we may assume $H$ acts on $Y(\Ga)$ without inversions. Moreover, let $\h_u$, $T$ and the action $H\acts T$ be as in Lemma \ref{finite index subgroup without inversion}. Let $K=Y(\Ga)/H$. We label the vertices and edges of $K$ as in Section \ref{subsec_branched complex basics}.
There is a restriction quotient map $q:Y(\Ga)\to T$ (see \cite[Section 2.3]{caprace2011rank}). In other words, $q$ is the cubical map that collapses every edge of $Y(\Ga)$ which is not labelled by $u$ to a point. Note that $q$ is $H$-equivariant. Moreover, $q$ maps each $u$-component isometrically into $T$; the image of a $u$-component is called a \textit{$u$-component} of $T$. The collection of $u$-components in $T$ covers $T$, and these $u$-components are permuted by the $H$-action. We define the \textit{tips} of $T$ to be the union of the tips of the $u$-components in $T$ (note that each $u$-component is a branched line). The set of tips of $T$ is $H$-invariant. If $x\in T$ is not a tip, then there is a unique $u$-component that contains $x$. Moreover, $q^{-1}(x)$ is isometric to some $\Lambda_u$-component of $Y(\Ga)$. If $x\in T$ is a tip, then $q^{-1}(x)$ is a standard $\Lambda$-component of $Y(\Ga)$.
Let $\G=T/H$ be the quotient graph. We define \textit{tips} and \textit{$u$-components} in $\G$ to be the images of tips and $u$-components in $T$. Here is an alternative characterization of the $u$-components of $\G$. Pick a vertex $\bar{u}\in\P(\Ga)$ labelled by $u$ and let $\beta_{P_{\bar{u}}}=\beta_{\bar{u}}\times\beta^{\perp}_{\bar{u}}$ be as in Lemma~\ref{parallel set in blow-up building}. We identify $\beta_{\bar{u}}$ with its image in $T$ under $Y(\Ga)\to T$. Note that if $h\in H$ is such that $h(\beta_{\bar{u}})\cap\beta_{\bar{u}}$ contains a point which is not a tip of $\beta_{\bar{u}}$, then $h(\beta_{\bar{u}})=\beta_{\bar{u}}$. Thus there is an embedding $\beta_{\bar{u}}/\stab(\beta_{\bar{u}})\to \G$, whose image is a $u$-component. Note that $\stab(\beta_{\bar{u}})=\stab(\beta_{P_{\bar{u}}})$.
Let $E$ be the universal cover of an Eilenberg–MacLane space for $H$, so that $E$ is contractible and carries a free $H$-action. Then the diagonal action $H\acts Y(\Ga)\times E$ is free. Moreover, there is an $H$-equivariant projection $Y(\Ga)\times E\to Y(\Ga)$, which descends to a projection $p:\K=(Y(\Ga)\times E)/H\to K$. Note that for any $x\in K$ and any $y\in Y(\Ga)$ which is mapped to $x$ under $Y(\Ga)\to K$, $\pi_1(p^{-1}(x))$ is isomorphic to the stabilizer of $y$. We can view $K$ as a developable complex of groups and $\K$ as the corresponding complex of spaces (see \cite[Chapter III.$\mathcal{C}$]{bridson1999metric}).
The above map $q:Y(\Ga)\to T$ induces maps $\pi:K\to \G$ and $\bar{\pi}:\K\to \G$. These maps induce graph of spaces decompositions
\begin{equation*}
K=\bigg(\bigsqcup_{v\in \textmd{Vertex}(\G)} K_{v}\sqcup\bigsqcup_{e\in \textmd{Edge}(\G)}(N_{e}\times [0,1])\bigg) /\sim
\end{equation*}
and
\begin{equation*}
\K=\bigg(\bigsqcup_{v\in \textmd{Vertex}(\G)} \K_{v}\sqcup\bigsqcup_{e\in \textmd{Edge}(\G)}(\n_{e}\times [0,1])\bigg) /\sim.
\end{equation*}
Note that $\K_v=p^{-1}(K_v)$ and $\n_{e}=p^{-1}(N_e)$. There is a 1-1 correspondence between covers of $\K$ and orbifold covers of $K$, and these covers have induced graph of spaces structures.
Let $\{A(v,i)\}_{i\in I_v}$ be the collection of images of the boundary morphisms inside $K_{v}$. Then each $A(v,i)$ is a $\Lambda_u$-component in $K$. This component is standard if and only if $v$ is a tip. Let $N(v,i)$ and $\partial_{v,i}:N(v,i)\to A(v,i)$ be the associated edge space and boundary morphism. It is possible that $N(v,i)\neq N(v,j)$ but $A(v,i)=A(v,j)$. However, this cannot happen when $v$ is a tip, since in that case each vertex of $A(v,i)$ is contained in exactly one edge labelled by $u$. Note that each $\partial_{v,i}$ preserves edge-labellings. We define $\{\A(v,i)\}_{i\in I_v}$ and $\bar{\partial}_{v,i}:\n(v,i)\to \A(v,i)$ in a similar way. Each $\bar{\partial}_{v,i}$ is a covering map of finite degree. If $v$ is a tip, then $\bar{\partial}_{v,i}$ is a homeomorphism; if $v$ is not a tip, then $A(v,i)=K_v$ and $\A(v,i)=\K_v$.
For a subgraph $\Lambda_1\subset\Lambda$, a \textit{$\Lambda_1$-component} $\K_1\subset \K$ is the inverse image of a $\Lambda_1$-component of $K$ under $p:\K\to K$. The \textit{parallel set} of $\K_1$ is the unique $\Lambda_2$-component that contains $\K_1$, where $\Lambda_2=St(\Lambda_1)$ (Definition \ref{definition of links}). Two $\Lambda_1$-components of $\K$ are \textit{parallel} if they are in the same $\Lambda_2$-component of $\K$. We define parallel sets and parallelism between $\Lambda_1$-components of $K$ in a similar way.
The image of every $St(u)$-component in $\K$ under the map $\bar{\pi}:\K\to \G$ is a $u$-component in $\G$. Thus $\bar{\pi}$ induces a 1-1 correspondence between $St(u)$-components in $\K$ and $u$-components in $\G$. If a $u$-component in $\G$ contains a circle $C$, then $\bar{\pi}^{-1}(C)$ is a bundle over $C$, whose fibres are homeomorphic to some $\Lambda_u$-component of $\K$. The map $\bar{\pi}$ also induces a 1-1 correspondence between $\Lambda$-components in $\K$ and tips in $\G$.
Since $\Ga$ contains no induced 4-cycle, when $\Lambda_{u}$ is not a clique, every $St(u)$-component in $\K$ is the parallel set of a $\Lambda_u$-component in $\K$. We divide the proof of Claim~\ref{induction claim} into the case when $\Lambda_u$ is a clique and the case when $\Lambda_u$ is not a clique. We also make the additional assumption that $\Ga\neq St(u)$, since the $\Ga=St(u)$ case of the claim follows from Lemma~\ref{subgroup with no twist} and Lemma~\ref{conjugate to action by translations}. When $\Ga\neq St(u)$, all $St(u)$-components and $\Lambda$-components are standard (Lemma \ref{Ga'-component} (2)) and they intersect along standard $\Lambda_u$-components (Lemma \ref{intersection of standard components downastairs}).
Recall that by the induction hypothesis, if $\Ga'\subset\Ga$ is an induced subgraph that does not contain all vertices of $\Ga$, then each $\Ga'$-component has a finite sheet torsion free special cover. This in particular applies to $\Lambda$, $\Lambda_u$ and $St(u)$.
To simplify notation, we will view $K$ and $\Ga'$-components of $K$ ($\Ga'\subset\Ga$ is a subgraph) as developable complexes of groups, and work with their orbifold covers. This is equivalent to working with $\K$ and $\Ga'$-components of $\K$, and using the usual covering space theory. The fundamental group of a $\Ga'$-component of $K$ is understood to be the orbifold fundamental group of this component, which is isomorphic to the usual fundamental group of the corresponding $\Ga'$-component in $\K$.
For a vertex $v\in\Ga$, a \textit{$v$-edge} in $K$ is an edge labelled by $v$, and a \textit{$v$-circle} in $K$ is a core circle made of $v$-edges. We record the following observation.
\begin{lem}
\label{length of circle0}
Suppose $\{K_i\}_{i=1}^{n}$ are finite covers of $K$ for which there exists a torsion free regular cover $\bar{K}\to K$ such that each $K_i$ factors through $\bar{K}$. Let $K'$ be the smallest regular cover of $K$ which factors through each $K_i$. Pick a vertex $w\in\Ga$. Suppose $\ell$ is an integer such that for each $i$ and each $w$-circle in $K_i$, its length divides $\ell$. Then the length of each $w$-circle in $K'$ divides $\ell$.
\end{lem}
\begin{proof}
Let $H_i$ and $\bar{H}$ be the subgroups of $H$ corresponding to $K_i$ and $\bar{K}$. Note that the action $\bar{H}\acts Y(\Ga)$ is free, and $hH_ih^{-1}\le \bar{H}$ for each $i$ and $h\in H$. Thus it suffices to show for each line $L\subset Y(\Ga)$ made of $w$-edges, each $i$ and each $h\in H$, the translation length of the generator of $\stab_{hH_ih^{-1}}(L)$ divides $\ell$. But this follows from the assumption.
\end{proof}
Again, the above lemma may not hold if we do not assume that each $K_i$ factors through a common torsion free regular cover $\bar{K}$. See the example after Lemma \ref{intersection and normal subgroup for reducible action}.
\subsection{Matching finite covers of vertex spaces and edge spaces}
\label{subsec_matching}
The space $A(v,i)$ is called a \textit{gate} if $v$ is a tip. Our strategy is to construct suitable finite sheet covers for each $\Lambda$-component and $St(u)$-component of $K$, and glue them together along the elevations of gates.
\begin{lem}
\label{collection of covers}
There exists a collection $\mathcal{C}$ of finite sheet regular special covers with trivial holonomy, one for each $\Lambda$-component and each $St(u)$-component in $K$, such that for each gate $A(v,i)$ there exists a torsion free regular cover $A_r(v,i)$ of $A(v,i)$ such that each possible elevation of $A(v,i)$ to an element of $\mathcal{C}$ factors through $A_r(v,i)$.
\end{lem}
\begin{proof}
For each $\Lambda$-component and $St(u)$-component of $K$, we find a finite sheet special cover which is regular and has trivial holonomy. This is possible by Lemma \ref{finite cover with trivial holonomy} and Lemma \ref{normal subgroup and trivial holonomy}. We denote the resulting collection of covers by $\mathcal{C}'$. For each gate $A(v,i)$, let $A_r(v,i)$ be the smallest regular cover of $A(v,i)$ which factors through each possible elevation of $A(v,i)$ to elements in $\mathcal{C}'$. Pick an element $K'_v\in\mathcal{C}'$ which covers a $\Lambda$-component $K_v\subset K$. Let $K''_{v}$ be the smallest regular cover of $K_v$ such that for each gate $A(v,i)$ in $K_v$, $K''_v$ factors through each component of the canonical completion $\C(A_r(v,i),K'_v)$. We replace $K'_v$ by $K''_v$ and replace other elements of $\mathcal{C}'$ in a similar way to obtain a collection $\mathcal{C}''$. Moreover, we replace each element in $\mathcal{C}''$ by a further cover which is regular and has trivial holonomy, using Lemma \ref{finite cover with trivial holonomy} and Lemma \ref{normal subgroup and trivial holonomy}. The resulting collection has the required properties.
\end{proof}
\subsubsection{Case 1: $\Lambda_u$ is a clique}
\label{one clique}
We first look at the case $\Lambda_u=\emptyset$. It follows from the induction hypothesis that each $\Lambda$-component of $K$ has a finite torsion free cover. Moreover, each $St(u)$-component (or equivalently, $u$-component) of $K$ has a finite torsion free cover whose fundamental group is $\Z$. One can glue together suitably many copies of these covers to form a finite cover of $K$. This cover satisfies the requirement in Claim~\ref{induction claim}. It is torsion free, since each of its vertex spaces and edge spaces is torsion free.
Now we assume $\Lambda_u$ is a non-empty clique. Let $\mathcal{C}$ be the collection of covers as in Lemma \ref{collection of covers} and let $\ell$ be a positive integer such that the length of each core circle (i.e. each circle made of core edges) in every element of $\mathcal{C}$ divides $\ell$. It follows from Lemma~\ref{equal torus} below that there exist suitable further finite covers of the elements of $\mathcal{C}$ such that we can glue together suitably many copies of them along standard $\Lambda_u$-components to form a finite cover of $K$ which satisfies Claim~\ref{induction claim} (note that two covers of a $\Lambda_u$-component $L\subset K$ may not be isomorphic as covering spaces of $L$ even when they are the same $\ell$-branched torus; however, this is true if they factor through a common torsion free regular cover of $L$).
\begin{lem}
\label{equal torus}
Suppose $K(\Ga)$ has a finite regular cover $\bar{K}(\Ga)$ which is special. Let $\ell$ be a positive integer such that for each vertex $u\in\Ga$, the length of each $u$-circle in $K(\Ga)$ divides $\ell$. Then $K(\Ga)$ has a finite index regular cover $K'$ such that for each clique $\Delta\subset\Ga$, each standard $\Delta$-component of $K'$ is an $\ell$-branched torus, i.e. it is isomorphic as a cube complex to a product of branched circles whose core circles have length $\ell$.
\end{lem}
\begin{proof}
By Lemma \ref{trivial holonomy of reduction}, we can assume $\bar{K}(\Ga)$ is a finite cover of the Salvetti complex $S(\Ga)$ (note that all $u$-circles in $\bar{K}(\Ga)$ have length $\ell$ if and only if the same is true for the reduction of $\bar{K}(\Ga)$). For a clique $\Delta\subset\Ga$, let $T_{\Delta}\to S(\Delta)$ be the $\ell$-branched torus which covers $S(\Delta)$. Define $\mathring{K}(\Ga)$ to be the pull-back as follows.
\begin{center}
$\begin{CD}
\mathring{K}(\Ga) @>>> \C(T_{\Delta},S(\Ga))\\
@VVV @VVV\\
\bar{K}(\Ga) @>>> S(\Ga)
\end{CD}$
\end{center}
Since $T_{\Delta}$ factors through each $\Delta$-component of $\bar{K}(\Ga)$, each $\Delta$-component of $\mathring{K}(\Ga)$ is isomorphic to $T_{\Delta}$. We claim that (1) for each $u\in\Ga$, the length of each $u$-circle in $\mathring{K}(\Ga)$ divides $\ell$; (2) if we already know there is a clique $\Delta'\subset\Ga$ such that each $\Delta'$-component of $\bar{K}(\Ga)$ is isomorphic to $T_{\Delta'}$, then the same is true for the $\Delta'$-components of $\mathring{K}(\Ga)$. To see (1), note that each $u$-circle in $S(\Ga)$ has length $1$, and a $u$-circle in $\C(T_{\Delta},S(\Ga))$ has length equal to either $\ell$ or $1$. (2) follows from (1) and the fact that $\C(T_{\Delta},S(\Ga))$ has trivial holonomy (Lemma \ref{trivial holonomy and completion}).
Now we can repeat the above process for each clique in $\Ga$ to obtain a finite cover $\hat{K}(\Ga)$ of $\bar{K}(\Ga)$ such that for each clique $\Delta\subset\Ga$, each $\Delta$-component in $\hat{K}(\Ga)$ is isomorphic to $T_{\Delta}$. Let $K'$ be the smallest regular cover of $K(\Ga)$ that factors through $\hat{K}(\Ga)$. Then $K'$ has the required property by Lemma \ref{length of circle0}.
\end{proof}
\subsubsection{Case 2: $\Lambda_u$ is not a clique} Pick a $St(u)$-component $L\subset K$. Then $L$ can be viewed as a graph of spaces over a $u$-component $\G_L\subset\G$ whose boundary morphisms are covering maps of finite degree. A finite sheet cover $L'$ of $L$ is \textit{admissible} if
\begin{enumerate}
\item there exists a torsion free regular cover $\bar{L}\to L$ such that each component of $L'$ factors through $\bar{L}$;
\item each component of $L'$ is a trivial bundle over a branched circle (we only require the bundle to be topologically trivial; it may have non-trivial $u$-holonomy).
\end{enumerate}
The existence of such a cover follows from Lemma \ref{subgroup with no twist} and the induction hypothesis. It follows from Lemma \ref{orientation and specialness} that to show Claim \ref{induction claim}, it suffices to find a finite torsion free cover of $K$ such that each of its $St(u)$-components is admissible.
\begin{ob}
\label{admissible}
Let $M$ be an admissible cover of $L$ with trivial $u$-holonomy.
\begin{enumerate}
\item The smallest regular cover of $L$ which factors through each component of $M$ also has trivial $u$-holonomy.
\item Each component of $\C'(M,M)$ has trivial $u$-holonomy.
\item Let $M'\subset M$ be a vertex space or edge space. Then the inverse image of $M'$ under the covering $\C'(M,M)\to M$ is naturally isomorphic to $\C(M',M')$.
\end{enumerate}
\end{ob}
Here (1) follows from Lemma \ref{intersection and normal subgroup for reducible action}. (2) follows from the argument in Lemma \ref{trivial holonomy and completion}. Since $M$ has trivial $u$-holonomy, it is a product. Thus $M'$ is wall-injective in $M$ and (3) follows from Lemma \ref{inverse image}.
A collection $\Sigma$ of (not necessarily connected) finite sheet covers, one for each gate, is called \textit{admissible} if there exists a collection $\Phi$ of admissible covers, one for each $St(u)$-component in $K$, such that for each gate, its inverse image in the element of $\Phi$ that covers this gate is a disjoint union of spaces, each of which is isomorphic to the element in $\Sigma$ that covers this gate, in the sense of covering spaces. $\Sigma$ \textit{has trivial $u$-holonomy} if we can choose the collection $\Phi$ such that each of its elements has trivial $u$-holonomy. $\Sigma$ is \textit{regular admissible} if each element in $\Sigma$ is connected and each element in $\Phi$ is a regular cover. In general, being a regular admissible collection is stronger than being an admissible collection of regular coverings.
In the rest of this section, we will use a modified version of the argument in \cite[Section 6]{haglund2012combination}. The reader could also consult \cite[Page 51-52]{wise2012riches} for a pictorial illustration of the strategy in the malnormal case.
\begin{lem}
\label{first step}
For each tip $v\in\G$ and each gate $A(v,i)$ in $K_v$, we can find a finite sheet base pointed cover $K(v,i)\to K_v$ (the base point of $K_v$ is in $A(v,i)$, and we allow the base point to change for different gates in $K_v$) such that the following properties hold.
\begin{enumerate}
\item Each $K(v,i)$ is torsion free and special.
\item The based elevation $A'(v,i)\to K(v,i)$ of $A(v,i)\to K_v$ is wall-injective.
\item Each $A'(v,i)$ has trivial holonomy. The collection of all such coverings $A'(v,i)\to A(v,i)$ is a regular admissible collection with trivial $u$-holonomy.
\item For each gate $A(v,i)$, there exists a torsion free regular cover $A_r(v,i)$ such that each elevation of $A(v,i)$ to $K(v,j)$ (it is possible that $j\neq i$) factors through $A_r(v,i)$.
\item $\wpj_{K(v,i)}(B\to A'(v,i))$ is a disjoint union of branched tori and isolated points for any $\Lambda_u$-component $B\subset K(v,i)$ different from $A'(v,i)$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\mathcal{C}$ be the collection in Lemma \ref{collection of covers} and let $L'\in \mathcal{C}$ be an element which covers a $St(u)$-component $L\subset K$. Pick a gate $A(v,i)\subset L$. Let $A_1(v,i)\to A(v,i)$ be a finite cover which factors through each elevation of $A(v,i)$ to $L'$ and $K'_v$ ($K'_v$ is the element of $\mathcal{C}$ that covers $K_v$). We apply Corollary~\ref{projections are circles} to $A_1(v,i)\to K'_v$ to obtain a finite cover $A_2(v,i)\to A_1(v,i)$ such that any further finite cover of $A_2(v,i)$ with trivial holonomy will satisfy the conclusion of Corollary~\ref{projections are circles}. Let $L''\to L$ be a regular special cover with trivial holonomy such that for each gate $A(v,i)\subset L$, $L''$ factors through each component of the canonical completion $\C(A_2(v,i),L')$. We choose $A'(v,i)$ to be an elevation of $A(v,i)$ to $L''$ (since $L''$ is regular, all elevations are the same). (3) is clear since $L''$ has trivial holonomy. Since $A'(v,i)$ factors through $A_2(v,i)$, by Corollary~\ref{projections are circles}, we can find a finite cover $K(v,i)\to K'_v$ which satisfies (1), (2) and (5). (4) follows from Lemma \ref{collection of covers}.
\end{proof}
For each $A'(v,i)$, we form the modified completions $\mathsf{C}'(A'(v,i),A'(v,i))$ and $\mathsf{C}'(A'(v,i),K(v,i))$, and denote them by $\C'_{v,i}(A',A')$ and $\C'_{v,i}(A',K)$ for simplicity. By Lemma \ref{inverse image}, there is a canonical inclusion $\C'_{v,i}(A',A')\to \C'_{v,i}(A',K)$. Moreover, it follows from Observation \ref{admissible} (3) that the collection of all $\C'_{v,i}(A',A')$'s is admissible.
For vertex $a\in \Ga$, we define $\ell_{a}$ to be the least common multiple of the lengths of all $a$-circles in the collection $\{\C'_{v,i}(A',K)\}_{v\in \textmd{Tip}(\G),i\in I_v}$.
Let $\pi:\C'_{v,i}(A',K)\to K(v,i)$ be the covering map and let $r':\C'_{v,i}(A',K)\to A'(v,i)$ be the modified retraction. Pick a $\Lambda_u$-component $A'\subset\C'_{v,i}(A',K)$. By Corollary~\ref{projections are circles} and Lemma \ref{modified wpj}, if $\pi(A')\neq A'(v,i)$, then $r'(A')$ is contained in a branched torus. Let $T$ be the smallest branched torus containing $r'(A')$ and let $\{v_i\}_{i=1}^{n}$ be the labels of the edges in $T$. Since $A'(v,i)$ has trivial holonomy, $T$ has a cubical product decomposition $T=\prod_{i=1}^{n}C_i$ where each $C_i$ is a $v_i$-component in $T$. Let $C'_i$ be a cover of $C_i$ of degree $=\ell_{v_i}/n_i$ where $n_i$ is the length of the core of $C_i$, and let $T'=\prod_{i=1}^{n}C'_i$. We define the \textit{shadow} of $A'$ (when $\pi(A')\neq A'(v,i)$) to be one connected component of the pull-back $A'_S$ as follows.
\begin{center}
$\begin{CD}
A'_{S} @>>> T'\\
@VVV @VVV\\
A' @>r'>> T
\end{CD}$
\end{center}
It is possible that both $T'$ and $T$ are one point, in which case $A'_S=A'$. The definition of shadow does not depend on the choice of components in $A'_S$, since all of them give the same regular cover of $A'$. We record the following two observations.
\begin{remark}\
\label{circle in shadow}
(1) Let $T_1\to T$ be any cover such that for each $i$, the length of the core of $v_i$-components in $T_1$ divides $\ell_{v_i}$. Then $T'$ factors through $T_1$.
(2) Let $C'$ be a circle made of core edges in $A'$. By Corollary \ref{circle isomorphism}, either $r'(C')$ is a point, in which case all inverse images of $C'$ in $A'_{S}$ are circles which have the same length as $C'$; or $r'|_{C'}$ is an isomorphism, in which case the inverse image of $C'$ in $A'_{S}$ is a circle of length $\ell_a$, where $a$ is the label of the edges in $C'$.
\end{remark}
Let $A(v,i)$ be a gate. Let $\dot{A}(v,i)\to A(v,i)$ be the smallest regular cover that factors through each elevation of $A(v,i)$ to $\C'_{v,j}(A',K)$ and the shadow of this elevation (if it exists), where $(v,j)$ ranges over all pairs such that $A(v,j)\subset K_{v}$.
Let $L$ be the $St(u)$-component that contains $A(v,i)$ and let $L'$ be the cover of $L$ induced by the admissible collection of $A'(v,i)$'s. Recall that $L'$ has trivial $u$-holonomy, hence it is isomorphic to a product of $A'(v,i)$ with a $u$-component in $L'$. Then $\dot{A}(v,i)\to A'(v,i)$ induces a cover of $L'$, which we denote by $\dot{L}(v,i)$. Let $\bar{L}$ be the smallest regular cover of $L$ which factors through each $\dot{L}(v',i')$, where $(v',i')$ is a pair such that $A(v',i')$ is a gate in $L$. Then $\bar{L}$ has trivial $u$-holonomy by Lemma \ref{intersection and normal subgroup for reducible action}. Let $\bar{A}(v,i)\to A(v,i)$ be the regular cover induced by $\bar{L}\to L$. Then the collection of all $\bar{A}(v,i)$'s is regular admissible with trivial $u$-holonomy.
It follows from Remark~\ref{circle in shadow} (2) that the length of each $a$-circle in the spaces involved in the construction of $\bar{A}(v,i)$ divides $\ell_a$; thus the lemma below follows from Lemma \ref{length of circle0} and Lemma \ref{first step} (4).
\begin{lem}
\label{length divide}
Let $a\in\Lambda_u$ be a vertex. Then the length of any $a$-circle in $\bar{A}(v,i)$ divides $\ell_a$.
\end{lem}
For each pair $(v,i)$, we define the pull-back $\overline{\C'_{v,i}(A',K)}$ and $\overline{\C'_{v,i}(A',A')}$ as follows.
\begin{center}
$\begin{CD}
\overline{\C'_{v,i}(A',K)} @>>> \bar{A}(v,i)@.\ \ \ \ \ \overline{\C'_{v,i}(A',A') } @>>> \bar{A}(v,i)\\
@VVV @VVV @VVV @VVV\\
\C'_{v,i}(A',K) @>>> A'(v,i)@.\ \ \ \ \ \C'_{v,i}(A',A') @>>> A'(v,i)
\end{CD}$
\end{center}
Then one deduces from Lemma \ref{inverse image} that there is a naturally defined embedding $i_1:\overline{\C'_{v,i}(A',A')}\to \overline{\C'_{v,i}(A',K)}$ which fits into the following commuting diagram. Moreover, $f^{-1}(i_{2}(\C'_{v,i}(A',A')))=i_1(\overline{\C'_{v,i}(A',A')})$.
\begin{center}
$\begin{CD}
\overline{\C'_{v,i}(A',A')} @>i_1>> \overline{\C'_{v,i}(A',K)}\\
@VVV @VVfV\\
\C'_{v,i}(A',A') @>i_2>> \C'_{v,i}(A',K)
\end{CD}$
\end{center}
We claim the collection of $\overline{\C'_{v,i}(A',A')}$'s is admissible. Pick a $St(u)$-component, and let $M_1,M_2$ and $M_3$ be the covers of this component induced by the collection of $\C'_{v,i}(A',A')$'s, $A'(v,i)$'s and $\bar{A}(v,i)$'s respectively. Recall that $M_2$ and $M_3$ have trivial $u$-holonomy, and the same is true for $M_1$ by Observation \ref{admissible} (2). By Observation \ref{admissible} (3), the modified retraction $M_1\to M_2$ is compatible with the retraction $\C'_{v,i}(A',A')\to A'(v,i)$. Then the claim follows by considering the pull-back of the covering $M_3\to M_2$ under the retraction $M_1\to M_2$.
\begin{lem}
\label{factor through}
Let $A(v,i)$ be a gate. Suppose $\bar{A}$ is either (1) an elevation of $A(v,i)$ to $\overline{\C'_{v,i}(A',K)}$ such that the image of $\bar{A}$ under the covering map $\overline{\C'_{v,i}(A',K)}\to K(v,i)$ is distinct from $A'(v,i)$; or (2) an elevation of $A(v,i)$ to $\overline{\C'_{v,j}(A',K)}$ with $i\neq j$. Then $\bar{A}(v,i)$ factors through $\bar{A}$.
\end{lem}
\begin{proof}
We prove case (2). The proof of case (1) is similar. The following diagram indicates the history of $\bar{A}$:
\begin{center}
$\begin{CD}
\bar{A} @>>> A_{2} @>>> A_{1} @>>> A(v,i)\\
@VVV @VVV @VVV @VVV\\
\overline{\C'_{v,j}(A',K)} @>>> \C'_{v,j}(A',K) @>>> K(v,j) @>>> K_{v}
\end{CD}$
\end{center}
Since $i\neq j$, $A_{1}\neq A'(v,j)$. By Corollary \ref{projections are circles} and Lemma \ref{modified wpj}, $r'(A_{2})$ is contained in a branched torus. Let $T$ be the smallest such branched torus. The case where $T$ is a point is clear. Otherwise, let $\bar{T}$ be any lift of $T$ in $\bar{A}(v,j)$. It follows from Lemma \ref{length divide} that the length of any $a$-circle in $\bar{T}$ divides $\ell_a$. It follows from Remark \ref{circle in shadow}(1) that the shadow of $A_{2}$ factors through $\bar{A}$, thus $\bar{A}(v,i)$ factors through $\bar{A}$.
\end{proof}
For each tip $v\in\G$, let $\mathring{K}(v,i)$ be the smallest regular cover of $K_{v}$ factoring through each component of $\overline{\C'_{v,i}(A',K)}$ and let $\mathring{K}_{v}$ be the smallest regular cover of $K_{v}$ factoring through each $\mathring{K}(v,i)$. Let $\mathring{A}(v,i)$ be the smallest regular cover of $A(v,i)$ factoring through each component of $\overline{\C'_{v,i}(A',A')}$.
\begin{lem}\
\label{lift}
\begin{enumerate}
\item The collection $\{\mathring{A}(v,i)\}_{v\in \textmd{Tip}(\G),i\in I_v}$ is regular admissible.
\item Each elevation of $A(v,i)$ to $\mathring{K}_{v}$ is isomorphic to $\mathring{A}(v,i)$ as covering spaces of $A(v,i)$.
\end{enumerate}
\end{lem}
\begin{proof}
Since the collection of $\overline{ \C'_{v,i}(A',A')}$'s is admissible, (1) follows. Let $\mathring{A}$ be a lift of $A(v,i)$ to $\mathring{K}_{v}$. By Lemma \ref{inverse image}, the lift of $A(v,i)$ to $\C'_{v,i}(A',K)$ could be any component of $\C'_{v,i}(A',A')$, hence the lift of $A(v,i)$ to $\overline{\C'_{v,j}(A',K)}$ could be any component of $\overline{\C'_{v,j}(A',A')}$. Thus $\mathring{A}$ factors through $\mathring{A}(v,i)$. On the other hand, Lemma \ref{factor through} implies that $\mathring{A}(v,i)$ factors through each elevation of $A(v,i)$ to $\overline{ \C'_{v,j}(A',K)}$ for each $j$. Thus $\mathring{A}(v,i)$ factors through $\mathring{A}$.
\end{proof}
The collection in Lemma~\ref{lift} (1) induces a finite sheet cover for each $St(u)$-component in $K$. Lemma~\ref{lift} (2) enables us to glue suitably many copies of these covers together with copies of $\mathring{K}_{v}$'s to form a finite sheet cover of $K$, which satisfies Claim~\ref{induction claim}. Thus we have proved the following result.
\begin{cor}
\label{good cover}
Suppose $\Ga$ is a graph without an induced $4$-cycle. Let $H$ be a group acting geometrically by label-preserving automorphisms on a blow-up building $Y(\Ga)$. Then $H$ has a finite index subgroup $H'\le H$ such that $Y(\Ga)/H'$ is a special cube complex.
\end{cor}
The next result follows from Definition \ref{an equivariant construction}, Lemma~\ref{label-preserving} and Corollary~$\ref{good cover}$:
\begin{thm}
Suppose $\Ga$ is a graph such that
\begin{enumerate}
\item $\Ga$ is star-rigid;
\item $\Ga$ does not contain an induced 4-cycle;
\item $\out(G(\Ga))$ is finite.
\end{enumerate}
Then any finitely generated group quasi-isometric to $G(\Ga)$ is commensurable to $G(\Ga)$.
\end{thm}
\section{Uniform lattices in $\aut(X(\Ga))$}
\label{sec_uniform lattice}
In this section we study groups acting geometrically on $X(\Ga)$.
\subsection{Admissible edge labellings of $X(\Ga)$}
Recall that we label circles in the 1-skeleton of the Salvetti complex $S(\Gamma)$ by vertices in $\Gamma$. This induces a $G(\Ga)$-invariant edge-labelling on the universal cover $X(\Gamma)$. Let $x$ be a vertex in $S(\Gamma)$ or $X(\Gamma)$. Then each vertex in the link of $x$ comes from an edge, hence inherits a label. Let $F(\Ga)$ be the flag complex of $\Ga$. Then there is a projection map which preserves the label of vertices.
\begin{equation}
\label{projection based at x}
p_x:Lk(x,X(\Ga))\to F(\Ga)
\end{equation}
See Definition \ref{definition of links} for $Lk(x,X(\Ga))$. Note that for each vertex $v\in F(\Ga)$, $p^{-1}_{x}(v)$ is a pair of vertices. We also pick an orientation for each edge in $S(\Ga)$, and lift these to a $G(\Ga)$-invariant orientation of the edges in $X(\Ga)$. We fix a $G(\Ga)$-invariant labelling and a $G(\Ga)$-invariant orientation of edges in $X(\Ga)$, and call them the \textit{reference labelling} and \textit{reference orientation}.
Throughout this section, $\Ga$ will be a graph without an induced 4-cycle. A vertex $v\in\Ga$ is of \textit{type I} if there exists a vertex $w\in\Ga$ not adjacent to $v$ such that $lk(v)\subset St(w)$. Otherwise $v$ is of \textit{type II}. Two vertices $v_1$ and $v_2$ of type I are \textit{equivalent} if they are adjacent. Now we verify this is indeed an equivalence relation. Since $\Ga$ has no induced 4-cycle, the link of each vertex of type I has to be a (possibly empty) clique, thus $v_1$ and $v_2$ are adjacent if and only if $St(v_1)=St(v_2)$.
Given a vertex $v$ of type I, let $[v]$ be the clique in $\Ga$ spanned by type I vertices which are equivalent to $v$ and let $lk([v])$ be the clique spanned by vertices in $St(v)\setminus [v]$. The definition of $lk([v])$ does not depend on the choice of representative in $[v]$. Any vertex in $lk([v])$ is of type II. Since the closed star of each vertex in $[v]$ is a clique, a vertex $w\in\Ga\setminus [v]$ is in $lk([v])$ if and only if $w$ is adjacent to one vertex in $[v]$ if and only if $w$ is adjacent to each vertex in $[v]$.
Let $\Ga_i$ be the induced subgraph of $\Ga$ spanned by vertices of type $i$. Then $\Ga_1$ is a disjoint union of cliques, one for each equivalence class of vertices of type I. Let $\Delta\subset\Ga_2$ be a clique. Let $\Ga_{1,\Delta}\subset\Ga_1$ be the union of all $[v]$'s such that $lk([v])=\Delta$. We allow $\Delta=\emptyset$. Note that $\Ga_{1,\emptyset}$ is always a (possibly empty) discrete graph. Also it follows from the definition that for cliques $\Delta_1,\Delta_2\subset \Ga_2$, if $\Delta_1\neq\Delta_2$, then $\Ga_{1,\Delta_1}\cap \Ga_{1,\Delta_2}=\emptyset$.
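As a simple illustration of these definitions, let $\Ga$ be a path with consecutive vertices $a,b,c$ (which has no induced 4-cycle). Then $lk(a)=\{b\}\subset St(c)$ with $c$ not adjacent to $a$, so $a$ is of type I, and similarly $c$ is of type I, while $b$ is of type II, since every other vertex of $\Ga$ is adjacent to $b$. As $a$ and $c$ are not adjacent, $[a]=\{a\}$, $[c]=\{c\}$ and $lk([a])=lk([c])=\{b\}$; hence $\Ga_1=\{a,c\}$, $\Ga_2=\{b\}$, $\Ga_{1,\{b\}}=\{a,c\}$ and $\Ga_{1,\emptyset}=\emptyset$.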
\begin{lem}
\label{standard geodesic line}
Pick $\alpha\in\aut(X(\Ga))$ and pick vertex $v\in\Ga_2$.
\begin{enumerate}
\item For any $v$-component $\ell\subset X(\Ga)$, $\alpha(\ell)$ is a $v'$-component for some vertex $v'\in\Ga_2$.
\item Let $\Delta\subset\Ga_2$ be a clique. Suppose $\alpha$ sends a $\Delta$-component to a $\Delta'$-component for some $\Delta'\subset\Ga_2$ (this follows from (1)). Then $\Ga_{1,\Delta}$ is isomorphic to $\Ga_{1,\Delta'}$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $P_{\ell}=\ell\times \ell^{\perp}$ be the $St(v)$-component containing $\ell$ with its natural splitting. Then $\alpha(P_{\ell})=\alpha(\ell)\times\alpha(\ell^{\perp})$. Let $L_1$ and $L_2$ be the collection of labels of edges in $\alpha(\ell)$ and $\alpha(\ell^{\perp})$ respectively. To prove (1), it suffices to show $L_1$ is made of one vertex. Suppose the contrary is true. Let $K$ be the $L_1$-component that contains $\alpha(\ell)$. Since each vertex in $L_1$ is adjacent to every vertex in $L_2$, there exists a copy of $K\times \alpha(\ell^{\perp})$ in $X(\Ga)$ which contains $\alpha(P_{\ell})$. Then $\ell\times \ell^{\perp}\subset \alpha^{-1}(K)\times\ell^{\perp}$. Thus the label of each edge in $\alpha^{-1}(K)$ is adjacent to every vertex in $lk(v)$. Since $\alpha^{-1}(K)$ is not isometric to a line, it has an edge whose label (denoted by $w$) is different from $v$. Thus $lk(v)\subset St(w)$ and $w\notin lk(v)$, which contradicts that $v$ is of type II.
Suppose edges of $\alpha(\ell)$ are labelled by $v'$. Now we show $v'\in\Ga_2$. If the contrary is true, then there is a vertex $w'\in\Ga\setminus St(v')$ such that $lk(v')\subset lk(w')$. Let $K$ be the $\{v',w'\}$-component containing $\alpha(\ell)$. Since $L_2\subset lk(v')$, $\alpha(\ell)\times\alpha(\ell^{\perp})\subset K\times\alpha(\ell^{\perp})\subset X(\Ga)$. Now we can reach a contradiction as in the previous paragraph.
Now we prove (2). The case $\Delta=\Delta'=\emptyset$ is trivial. We assume $\Delta\neq\emptyset$. Let $F$ and $F'=\alpha(F)$ be the $\Delta$-component and $\Delta'$-component as in (2). Pick vertex $x\in F$ and let $x'=\alpha(x)$. Note that $\alpha$ induces an isomorphism $\alpha_{x}:Lk(x,X(\Ga))\to Lk(x',X(\Ga'))$. Let $p_x$ and $p_{x'}$ be the maps defined as in (\ref{projection based at x}). A vertex $v$ in $Lk(x,X(\Ga))$ (or $Lk(x',X(\Ga'))$) is of \textit{type I} if $p_x(v)$ (or $p_{x'}(v)$) is of type I, otherwise $v$ is of \textit{type II}. It follows from (1) that $\alpha_x$ induces a bijection between vertices of type II in $Lk(x,X(\Ga))$ and $Lk(x',X(\Ga))$. Thus if we replace type II by type I in the previous sentence, it still holds.
Pick a vertex $u\in Lk(x,X(\Ga))$ such that $p_x(u)\in \Ga_{1,\Delta}$. Let $e_u$ be the edge containing $x$ which gives rise to $u$ in the link of $x$. Note that $u'=\alpha_x(u)$ is a vertex of type I. Moreover, since $e_u$ is orthogonal to $F$, $e_{u'}=\alpha(e_u)$ is orthogonal to $F'$. Thus $p_{x'}(u')\in\Ga_{1,\Delta'_1}$ for some clique $\Delta'_1$ which contains $\Delta'$. We claim $\Delta'=\Delta'_1$. If $\Delta'\subsetneq\Delta'_1$, let $F'_1$ be the standard flat such that its support is $\Delta'_1$ and it contains $F'$. Let $F_1=\alpha^{-1}(F'_1)$. Then $F_1$ is a standard flat (this follows from (1) since $\Delta'_1\subset\Ga_2$), and $F\subsetneq F_1$. Since $e_{u'}$ is orthogonal to $F'_1$, $e_u$ is orthogonal to $F_1$. Thus $p_x(u)\in \Ga_{1,\Delta_1}$ for some $\Delta_1$ containing the support of $F_1$. This contradicts $p_x(u)\in \Ga_{1,\Delta}$ since $\Delta_1\neq \Delta$. Thus the claim holds. Let $V$ (or $V'$) be the set of vertices in $Lk(x,X(\Ga))$ (or $Lk(x',X(\Ga))$) whose $p_x$-images (or $p_{x'}$-images) are in $\Ga_{1,\Delta}$ (or $\Ga_{1,\Delta'}$). Then the claim implies $\alpha_x$ induces a bijection between $V$ and $V'$, hence it also induces an isomorphism between the full subcomplexes of $Lk(x,X(\Ga))$ and $Lk(x',X(\Ga))$ spanned by $V$ and $V'$ respectively. However, these two full subcomplexes are isomorphic to the links of the base points in the Salvetti complexes $S(\Ga_{1,\Delta})$ and $S(\Ga_{1,\Delta'})$ respectively. Thus $\Ga_{1,\Delta}$ and $\Ga_{1,\Delta'}$ are isomorphic (however, the isomorphism between them may not be induced by $\alpha_x$).
\end{proof}
\begin{lem}
\label{edge-labelling}
Let $X$ be a $CAT(0)$ cube complex. Suppose it is possible to label the edges of $X$ by vertices of $\Gamma$ such that for each vertex $x\in X$, the link of $x$ in $X$ is isomorphic to the link of the base point in $S(\Gamma)$ in a label-preserving way. Then there is a label-preserving isomorphism $X(\Ga)\cong X$.
\end{lem}
\begin{proof}
Pick a vertex $a\in \Ga$. It follows from the assumption that each $a$-component in $X$ is a line, which is called an $a$-line. Moreover, the following are true:
\begin{enumerate}
\item Each $a$-line is a convex subcomplex of $X$.
\item Two $a$-lines $l_{1}$ and $l_{2}$ are parallel if and only if there exist edges $e_{1}\subset l_{1}$ and $e_{2}\subset l_{2}$ such that $e_{1}$ and $e_{2}$ are parallel.
\end{enumerate}
We orient each edge of $X$ as follows. For each vertex $a\in\Ga$, we group the collection of $a$-lines into parallel classes. In each parallel class, we choose an orientation for one $a$-line, and extend it to the other $a$-lines in the parallel class by parallelism. It follows from (2) of the previous paragraph that if two edges are parallel, then their orientations are compatible with the parallelism.
Pick base vertices $x\in X(\Gamma)$ and $x'\in X$. By looking at the oriented labelling on the edges of $X$ and $X(\Gamma)$, every word in $G(\Ga)$ corresponds to a unique edge path in $X(\Ga)$ (or $X$) starting at $x$ (or at $x'$), and vice versa. We define a map $f:X(\Gamma)^{(0)}\to X^{(0)}$ as follows. Pick vertex $y\in X(\Ga)$ and an edge path $\omega$ from $x$ to $y$; then there is an edge path $\omega'\subset X$ from $x'$ to $y'$ such that $\omega$ and $\omega'$ correspond to the same word. We define $f(y)=y'$. This definition does not depend on the choice of $\omega$. Actually if we pick two words $w_{1}$ and $w_{2}$ whose corresponding edge paths end at $y$, then we can obtain $w_{2}$ from $w_{1}$ by performing the following two basic moves:
\begin{enumerate}
\item $waa^{-1}w'\to ww'$.
\item $wabw'\to wbaw'$ when $a$ and $b$ commute.
\end{enumerate}
However, these two moves do not affect the position of $y'$. Note that $f$ is surjective and can be extended to a local isometry from $X(\Ga)$ to $X$. Thus $X(\Ga)\cong X$.
\end{proof}
\begin{definition}
A labelling of edges in $X(\Ga)$ is \textit{admissible} if it satisfies the assumption of Lemma~\ref{edge-labelling}.
\end{definition}
\begin{cor}
\label{conjugation and admissible labelling}
Suppose $H$ acts on $X(\Ga)$ by automorphisms such that it preserves an admissible labelling of $X(\Ga)$. Then there exists $g\in\aut(X(\Ga))$ such that $gHg^{-1}$ preserves the reference labelling of $X(\Ga)$.
\end{cor}
\begin{lem}
\label{conjugation and admissible orientation}
Suppose $H$ acts on $X(\Ga)$ geometrically by automorphisms such that it preserves the reference labelling of $X(\Ga)$. If $\Ga$ does not have induced $4$-cycle, then there exist a torsion free finite index subgroup $H'\le H$ and an element $g\in \aut(X(\Ga))$ such that $gH'g^{-1}$ preserves both the reference labelling and the reference orientation. Hence $gH'g^{-1}\le G(\Ga)$.
\end{lem}
\begin{proof}
By Corollary~\ref{good cover}, there is a finite index torsion free subgroup $H'\le H$ such that $X(\Ga)/H'$ is special. Lemma~\ref{orientation and specialness} implies that we can orient each standard geodesic line in a way which is compatible with parallelism and $H'$-action. The argument in Lemma~\ref{edge-labelling} implies that there exists a label-preserving automorphism $g\in\aut(X(\Ga))$ which maps this orientation to the reference orientation. Thus the lemma follows.
\end{proof}
We recall the following result from \cite{bass1990uniform}.
\begin{thm}
\label{tree lattice}
Let $T$ be a uniform tree and let $H_1,H_2\le\aut(T)$ be two uniform lattices (i.e. they act geometrically on $T$). Then there exists $g\in \aut(T)$ such that $gH_1g^{-1}\cap H_2$ is of finite index in both $H_2$ and $gH_1g^{-1}$.
\end{thm}
\begin{cor}
\label{existence of admissible labelling on tree}
Suppose $\Ga$ is a disjoint union of cliques. Let $H$ be a group acting geometrically on $X(\Ga)$ by automorphisms. Then there exist a finite index torsion free subgroup $H'\le H$ and an admissible labelling of $X(\Ga)$ which is invariant under $H'$.
\end{cor}
Actually in this case $H$ has a finite index subgroup which is isomorphic to a free product of finitely generated free Abelian groups \cite[Theorem 2]{behrstock2009commensurability}. However, the corollary does not follow directly from this fact.
\begin{proof}
We form a partition $\Ga=\sqcup_{i=1}^{k}\Ga_i$ such that each $\Ga_i$ is made of cliques of the same size, and cliques in different $\Ga_i$'s have different sizes. We claim it suffices to prove the corollary for each $\Ga_i$. To see this, suppose $k\ge 2$. Let $t_{k}$ be the tree of diameter 2 with $k$ vertices of valence one. For each $i$, we glue $S(\Ga_i)$ to an endpoint of $t_k$ along the unique vertex in $S(\Ga_i)$ such that different $S(\Ga_i)$'s are glued to different endpoints of $t_k$. Then the resulting space $\bar{S}(\Ga)$ is homotopy equivalent to $S(\Ga)$. We pass to the universal cover $\bar{X}(\Ga)$, and collapse each elevation of $S(\Ga_i)$ ($1\le i\le k$) in $\bar{X}(\Ga)$. The resulting space is a tree, which we denote by $T$.
Since the action $H\acts X(\Ga)$ maps $\Ga_i$-components to $\Ga_i$-components, there is an induced action $H\acts\bar{X}(\Ga)$, moreover, $H$ permutes the elevations of $S(\Ga_i)$'s. Thus there is an induced action $H\acts T$. By passing to a finite index subgroup, we assume $H$ acts on $T$ without inversion. Then there is a graph of spaces decomposition of the orbifold $\bar{X}(\Ga)/H$ along the graph $T/H$. The fundamental group of each edge orbifold is finite, and the fundamental group of each vertex orbifold is either finite, or isomorphic to a group acting geometrically on $X(\Ga_i)$. If the corollary is true for each $\Ga_i$, then we can argue as in Section \ref{sec_construction of the finite cover} to pass to suitable torsion free finite sheet covers of each edge space and vertex space, and glue them together to form a finite sheet cover of $\bar{X}(\Ga)/H$, which gives the required subgroup of $H$.
Now we look at the case when $\Ga$ is a disjoint union of $p$ copies of $n$-cliques. First we assume $n\ge 2$. By the argument of the previous paragraph (we consider the space obtained by attaching $p$ tori to the valence one vertices of $t_p$, then $H$ acts on the universal cover of this space), we can assume $H$ is torsion free by passing to a finite index subgroup. Moreover, we can assume $K=X(\Ga)/H$ is a union of tori (i.e. $K$ does not contain Klein bottles; however, we do not require these tori to be embedded). It suffices to show $K$ has a finite sheet cover which covers $S(\Ga)$.
We pick a copy of $\Bbb S^{3}$ with a free $\Z/p$ action. Pick a $\Z/p$-orbit of $\Bbb S^3$ and glue an $n$-torus to $\Bbb S^3$ along each point in the orbit. The resulting space has the same fundamental group as $S(\Ga)$, and admits a free $\Z/p$ action. Denote the quotient space by $S_p(\Ga)$. We modify the complex $K$ in a similar way. Namely, for each vertex $x\in K$, there are $p$ tori containing $x$. We replace $x$ by a copy of $\Bbb S^3$ and re-glue those tori along a $\Z/p$-orbit in $\Bbb S^3$. The resulting complex $K'$ has the same fundamental group as $K$. Since $K'$ does not contain Klein bottles, there exists a finite sheet covering map $K'\to S_p(\Ga)$, which implies $K$ has a finite sheet cover which covers $S(\Ga)$. When $n=1$, it follows from \cite[Theorem 2]{behrstock2009commensurability} that $H$ has a finite index torsion free subgroup. The rest follows from Theorem \ref{tree lattice}. Alternatively, it is not hard to modify the above argument such that it also works for $n=1$.
\end{proof}
\subsection{The conjugation theorem}
\begin{thm}
\label{conjugation}
Suppose $\Ga$ is a simplicial graph such that
\begin{enumerate}
\item $\Ga$ is star-rigid;
\item $\Ga$ does not contain an induced 4-cycle.
\end{enumerate}
Let $H$ be a group acting geometrically on $X(\Ga)$ by automorphisms. Then $H$ and $G(\Ga)$ are commensurable. Moreover, let $H_1, H_2\le\aut(X(\Ga))$ be two uniform lattices. Then there exists $g\in\aut(X(\Ga))$ such that $gH_1 g^{-1}\cap H_2$ is of finite index in both $gH_1 g^{-1}$ and $H_2$.
\end{thm}
\begin{proof}
If $\Ga_2=\emptyset$, then $\Ga$ is a discrete set and the theorem follows from Corollary \ref{existence of admissible labelling on tree}. Now we assume $\Ga_2\neq\emptyset$ and pick an element $\alpha\in H$. It follows from Lemma \ref{standard geodesic line} (1) that $\alpha$ sends each standard geodesic in $X(\Ga)$ labelled by a vertex in $\Ga_2$ to another standard geodesic labelled by a (possibly different) vertex in $\Ga_2$. Thus the collection of all $\Ga_2$-components in $X(\Ga)$ is $H$-invariant. It follows that the stabilizer of each $\Ga_2$-component acts cocompactly on it.
Pick vertex $x\in X(\Ga)$. Then $\alpha$ induces an isomorphism between the links of $x$ and $\alpha(x)$, which gives rise to an automorphism $\alpha_x:\Ga_2\to\Ga_2$ by Lemma \ref{standard geodesic line} (1). We claim that $\alpha_x$ can be extended to an automorphism of $\Ga$. Recall that $\Ga_1$ is a disjoint union of $\Ga_{1,\Delta}$ with $\Delta$ ranging over cliques (including the empty clique) of $\Ga_2$. By Lemma~\ref{standard geodesic line} (2), we can specify an isomorphism $\Ga_{1,\Delta}\to\Ga_{1,\alpha_x(\Delta)}$ for each $\Delta$, and use them to define the extension of $\alpha_x$ as required.
Pick another vertex $y\in X(\Ga)$ which is adjacent to $x$ along an edge $e$. We claim $\alpha_x=\alpha_y$. Let $v_e\in\Ga$ be the label of $e$. Then $\alpha_x(v)=\alpha_y(v)$ for any $v\in\Ga_2$ which is adjacent to $v_e$. Suppose $v_e\notin\Ga_2$. Then $\alpha_x(lk([v_e]))=\alpha_y(lk([v_e]))$. We can extend $\alpha_x$ and $\alpha_y$ to be automorphisms of $\Ga$ which agree on $St(v_e)$ (recall that $St(v_e)$ is a clique spanned by $[v_e]$ and $lk([v_e])$). Thus $\alpha_x=\alpha_y$ by condition (1) of the theorem. Suppose $v_e\in\Ga_2$ and let $v\in\Ga_1$ be a vertex adjacent to $v_e$. Then $lk([v])\subset \Ga_2\cap St(v_e)$. Thus $\alpha_x$ and $\alpha_y$ agree on $lk([v])$. Again we can extend $\alpha_x$ and $\alpha_y$ to automorphisms of $\Ga$ which agree on $St(v_e)$ and deduce $\alpha_x=\alpha_y$.
The above claim implies $\alpha$ determines a well-defined element in $\aut(\Ga_2)$. Thus we have a homomorphism $H\to \aut(\Ga_2)$. By replacing $H$ by the kernel of this homomorphism, we assume $H$ maps $v$-components to $v$-components for $v\in\Ga_2$. Thus for each clique $\Delta\subset\Ga_2$, $H$ preserves $\Delta$-components. Then it follows from the proof of Lemma \ref{standard geodesic line} (2) that for each edge $e\in X(\Ga)$ labelled by a vertex in $\Ga_{1,\Delta}$, the label of $\alpha(e)$ is also inside $\Ga_{1,\Delta}$. Thus $H$ preserves $\Ga_{1,\Delta}$-components.
Let $\{\Delta_i\}_{i=1}^{k}$ be the collection of cliques in $\Ga_2$ such that $\Ga_{1,\Delta_i}\neq\emptyset$. Let $\Lambda_i$ be the induced subgraph spanned by $\Delta_i$ and $\Ga_{1,\Delta_i}$. Now we define a new cube complex
\begin{center}
$\bar{S}(\Ga)=(S(\Ga_2)\sqcup (\sqcup_{i=1}^{k}S(\Lambda_i)) \sqcup (\sqcup_{i=1}^{k}C_i))/\sim$.
\end{center}
Here $C_i=S(\Delta_i)\times [0,1]$ (if $\Delta_i=\emptyset$, then $S(\Delta_i)$ is one point), and we glue one end of $C_i$ to $S(\Delta_i)\subset S(\Ga_2)$, and another end to $S(\Delta_i)\subset S(\Lambda_i)$. There is a homotopy equivalence $\bar{S}(\Ga)\to S(\Ga)$ by collapsing the $[0,1]$ factor of each $C_i$, which lifts to a map between their universal covers $r:\bar{X}(\Ga)\to X(\Ga)$. If an edge of $\bar{X}(\Ga)$ is not degenerate under $r$, then it inherits a label from its image, otherwise we leave it unlabelled. $\bar{S}(\Ga)$ is non-positively curved (\cite[Proposition II.11.13]{bridson1999metric}), hence $\bar{X}(\Ga)$ is a $CAT(0)$ cube complex. Since the action of $H$ on $X(\Ga)$ preserves $\Lambda_i$-components and $\Ga_2$-components, there is an action $H\acts \bar{X}(\Ga)$ by automorphisms such that $r$ is $H$-equivariant.
Let $\h$ be the collection of hyperplanes in $\bar{X}(\Ga)$ which comes from hyperplanes in $C_i$ dual to the $[0,1]$ factor. Note that different elements in $\h$ do not intersect. Let $T$ be the dual tree to the wallspace $(\bar{X}(\Ga),\h)$, i.e. each vertex of $T$ corresponds to a component of $\bar{X}(\Ga)\setminus \h$, and two vertices are adjacent if the corresponding two components are adjacent along an element of $\h$. Then there is an induced action $H\acts T$. Moreover, we have an $H$-equivariant cubical map $q:\bar{X}(\Ga)\to T$ by collapsing edges which are not dual to hyperplanes in $\h$. Pick point $s\in T$. If $s$ is a vertex, then $q^{-1}(s)$ is either a $\Lambda_i$-component or a $\Ga_2$-component. If $s$ is not a vertex, then $q^{-1}(s)$ is isometric to a Euclidean space.
Up to passing to a subgroup of index 2, we assume $H$ acts on $T$ without inversion. Let $\G=T/H$. Then the natural map $\bar{X}(\Ga)/H=K\to \G$ induces a graph of spaces decomposition of the orbifold $K$. Moreover, any cover of $K$ has an induced graph of spaces decomposition. Since the action $H\acts \bar{X}(\Ga)$ maps $\Lambda_i$-components to $\Lambda_i$-components for each $i$, and maps $v$-edges to $v$-edges for each vertex $v\in\Ga_2$, it makes sense to talk about $\Lambda_i$-components and $\Ga_2$-components in $K$.
Let $K_v\subset K$ be a vertex space. A finite sheet cover $K'_v$ of $K_v$ is \textit{good} if the following holds. If $K_v$ is a $\Ga_2$-component, then $K'_v$ is good if it is torsion free and special. Corollary~\ref{good cover} implies such cover exists. Suppose $K_v$ is a $\Lambda_i$-component. Then $K_v=X_v/G_v$, where $X_v$ is a $\Lambda_i$-component in $\bar{X}(\Ga)$ and $G_v\le H$ is the stabilizer of $X_v$. $X_{v}$ has a product decomposition $X_v=X(\Delta_i)\times X(\Ga_{1,\Delta_i})$. Note that $G_v$ maps $w$-edges to $w$-edges for each vertex $w\in\Delta_i$, however, this may not be true if $w\in\Ga_{1,\Delta_i}$. $K'_v$ is good if it corresponds to a subgroup $G'_v\le G_v$ such that (1) $G'_v=\Z^{n}\times L'_v$ where $\Z^{n}$ acts trivially on the $X(\Ga_{1,\Delta_i})$ and $L'_v$ acts trivially on the $X(\Delta_i)$ factor, here $n$ is the number of vertices in $\Delta_i$; (2) there exists a $L'_v$-invariant admissible labelling of $X(\Ga_{1,\Delta_i})$. Since $X(\Delta_i)\cong \E^{n}$, we can apply Lemma~\ref{subgroup with no twist} $n$ times to deduce the existence of cover satisfying (1). Since $\Ga_{1,\Delta_i}$ is a disjoint union of cliques, Corollary~\ref{existence of admissible labelling on tree} implies that there exists a good cover of $K_v$.
Using Lemma~\ref{equal torus} and the argument in Section~\ref{one clique}, we can construct a finite cover $K'\to K$ such that each vertex space of $K'$ is good. Then it is possible to put an admissible labelling on each $\Lambda_i$-component of $\bar{X}(\Ga)$ in a way which is $\pi_1(K')$-invariant. By applying the $H$-equivariant retraction map $r:\bar{X}(\Ga)\to X(\Ga)$, we obtain a $\pi_1(K')$-invariant admissible labelling of $X(\Ga)$. Up to conjugation, we can assume $\pi_1(K')$ preserves the reference labelling (Corollary \ref{conjugation and admissible labelling}). Then the theorem follows from Lemma \ref{conjugation and admissible orientation}.
\end{proof}
\section{Induced 4-cycle and failure of commensurability}
\label{sec_failure of commensurability}
\subsection{Label-preserving action on product of two trees}
\label{subsec_label-preserving action}
Pick two trees $T$ and $T'$, and let $H$ be a torsion free group acting geometrically on $T\times T'$ by automorphisms. $H$ is \textit{reducible} if it has a finite index subgroup which is a product of free groups, or equivalently, $(T\times T')/H$ has a finite sheet cover which is isomorphic to a product of two graphs. Otherwise $H$ is \textit{irreducible}.
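For instance, $F_2\times F_2$, acting on the product of the Cayley trees of its two free factors, is reducible by definition; constructing irreducible examples is the non-trivial task addressed in this section.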
Up to passing to a subgroup of index 2, we assume $H$ does not flip the two tree factors. Then there are factor actions $H\acts T$ and $H\acts T'$.
\begin{thm}$($\cite{burger2000lattices}$)$
\label{non-discrete}
Let $h_1:G\to \aut(T)$ and $h_2: G\to \aut(T')$ be the homomorphisms induced by the factor actions. Then the following are equivalent.
\begin{enumerate}
\item The image of $h_1$ is a discrete subgroup of $\aut (T)$.
\item The image of $h_2$ is a discrete subgroup of $\aut (T')$.
\item $G$ is reducible.
\end{enumerate}
\end{thm}
Let $T_n$ be the $n$-valent tree. When $\Ga$ is a 4-gon, $X(\Ga)\cong T_4\times T_4$. Thus we can label edges of $T_4\times T_4$ by vertices of $\Ga$. It is known that there is an irreducible group acting on $T_4\times T_4$ (\cite{janzen2009smallest}), however, this example is not label-preserving. In this section, we will construct an irreducible group acting on $T_4\times T_4$ by label-preserving automorphisms. Given a label-preserving action of $H$ on $T_4\times T_4$, we can pass to an action of $H$ on $T_3\times T_3$ by modifying each $T_4$ as follows.
\begin{center}
\includegraphics[scale=0.50]{3.png}
\end{center}
The other direction is given as follows.
\begin{lem}
\label{T_3}
Suppose there exists an irreducible group $H$ acting geometrically on $T_{3}\times T_3$. Then there exists an irreducible group $H'$ acting geometrically on $T_4\times T_4$ in a label-preserving way.
\end{lem}
\begin{proof}
We modify the first factor $T_3$ to obtain a 4-valent graph $\G$ as follows.
\begin{center}
\includegraphics[scale=0.50]{5.png}
\end{center}
More precisely, we replace each vertex by a 3-cycle with its edges labelled by $a$, and double each edge to obtain two edges labelled by $b$. We also orient each $b$-edge of $\G$ such that the orientations on two consecutive $b$-edges are consistent. Then each automorphism of $T_3$ induces a unique label-preserving automorphism of $\G$ which respects the orientation of $b$-edges. Thus the factor action $H\acts T_3$ induces a label-preserving action $H\acts \G$. In particular, we obtain an action $H\acts \G\times T_3$. This action is geometric, since there is an $H$-equivariant map $\G\times T_3\to T_3\times T_3$ induced by the natural map $\G\to T_3$. Let $G=\pi_1((\G\times T_3)/H)$. Then $G$ acts geometrically on $T_4\times T_3$, and there is an exact sequence $1\to \pi_1(\G\times T_3)\to G\to H\to 1$. By comparing the action of $H$ and $G$ on the second tree factor, we deduce the irreducibility of $G$ from the irreducibility of $H$ and Theorem \ref{non-discrete}. We can modify the second tree factor in a similar way.
\end{proof}
Now we construct an irreducible group acting on $T_3\times T_3$. First we consider the following modification of the main example in \cite{wise1996non}. Let $X$ be the graph in the top of Figure \ref{the example} below. The top left and top right pictures indicate two different ways of labelling and orienting edges of $X$. Let $Y$ be the graph in the bottom. Then there are two different covering maps $f_1:X\to Y$ and $f_2:X\to Y$ induced by the edge labelling and orientation in the top left and top right respectively.
\begin{figure}[h!]
\includegraphics[scale=0.5]{2.png}
\caption{}
\label{the example}
\end{figure}
Let $Z'=(X\times[0,1]\sqcup Y\times[0,1])/\sim$, where for $i=0,1$, we identify $X\times\{i\}$ with $Y\times\{i\}$ via the covering map $f_{i+1}$ (i.e. $x$ and $f_{i+1}(x)$ are identified). One readily verifies that with the natural cube complex structure on $Z'$, the universal cover of $Z'$ is isomorphic to $T_3\times T_3$. Now we collapse the $[0,1]$ factor in $Y\times [0,1]$, which gives a homotopy equivalence $Z'\to Z$. A minor adjustment of the argument in \cite[Chapter II.3.2]{wise1996non} implies $\pi_1(Z)$ is irreducible, hence $\pi_1(Z')$ is also irreducible. For the convenience of the reader, we give a detailed proof of the irreducibility of $\pi_1(Z)$ in the appendix.
\begin{thm}
\label{irreducible group exists}
There exists a torsion free irreducible group acting geometrically on $T_3\times T_3$. Hence there exists a torsion free irreducible group acting geometrically on $T_4\times T_4$ by label-preserving automorphisms.
\end{thm}
\subsection{The general case}
Let $G$ be a group. Recall that the \textit{profinite topology} on $G$ is the topology generated by cosets of its finite index subgroups. A subset $K\subset G$ is \textit{separable} if it is closed in the profinite topology. Equivalently, $K$ is separable if for any $g\notin K$, there exists a finite index normal subgroup $N\vartriangleleft G$ such that $\phi(g)\notin \phi(K)$ for $\phi:G\to G/N$.
$G$ is \textit{residually finite} if the identity element is closed in the profinite topology. Being residually finite is a commensurability invariant. If $G$ is residually finite, then the solution set of any equation in $G$ is separable, since the group multiplication is continuous with respect to the profinite topology. In particular, the centralizer of every element in $G$ is separable.
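For instance, in $G=\Z$ the profinite topology is generated by the cosets $a+n\Z$ with $n\ge 1$; each such coset is also closed, since its complement is a finite union of cosets of $n\Z$, so every subgroup $n\Z$ is separable and $\{0\}=\bigcap_{n\ge 1}n\Z$ is closed, i.e. $\Z$ is residually finite.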
\begin{lem}
Let $\Ga$ be a 4-gon and let $H$ be an irreducible group acting geometrically on $X(\Ga)$ by label-preserving automorphisms. Then $H$ is not residually finite.
\end{lem}
\begin{proof}
We argue by contradiction and assume $H$ is residually finite. For each vertex $v\in\P(\Ga)$, let $P_v$ be the $v$-parallel set defined in Definition \ref{v-parallel set} and let $H_v\le H$ be the stabilizer of $P_v$. Since $H$ is label-preserving, $H_v$ acts on $P_v$ cocompactly. Let $P_{v}=\ell_v\times \ell_{v}^{\perp}$ be the product decomposition of $P_{v}$, where $\ell_v$ is a standard geodesic line with $\Delta(\ell_v)=v$.
Note that there exists a finite index subgroup $H'_v\le H_v$ such that $H'_v=\Z\oplus L_v$ with $\Z$ acting trivially on the $\ell_{v}^{\perp}$ factor and $L_v$ acting trivially on the $\ell_v$ factor. Let $g$ be a generator of the $\Z$ factor of $H'_v$. Then the centralizer $C_{H}(g)$ of $g$ in $H$ is of finite index in $H_v$. Since $H$ is residually finite, $C_{H}(g)$ is separable in $H$, hence there exists a finite index subgroup $H_1\le H$ such that $H_1\cap H_v=C_{H}(g)$. Note that the factor action $C_H(g)\acts \ell_v$ does not flip the two ends of $\ell_v$.
Since there are only finitely many $H$-orbits of parallel sets of standard lines, we can repeat the above argument finitely many times to obtain a finite index subgroup $H'\le H$ such that for each vertex $v\in \P(\Ga)$, the factor action $H'_v\acts\ell_v$ does not flip the two ends of $\ell_v$. In this case, we can orient each standard line in $X(\Ga)$ in an $H'$-invariant way such that the orientations are compatible with parallelism. Thus there exists $k\in\aut(X(\Ga))$ such that $kH'k^{-1}$ preserves both the reference labelling and the reference orientation, which implies $H'$ is reducible. This contradicts the irreducibility of $H$, since $H'$ has finite index in $H$.
\end{proof}
\begin{thm}
Let $\Ga$ be any finite simplicial graph which contains an induced 4-cycle. Then there exists a group $H$ acting geometrically on $X(\Ga)$ by label-preserving automorphisms such that it is not residually finite. In particular, $H$ is not commensurable to $G(\Ga)$.
\end{thm}
\begin{proof}
Let $\Ga'\subset\Ga$ be an induced $4$-cycle and pick a $\Ga'$-component $Y\subset X(\Ga)$. The covering map $X(\Ga)\to S(\Ga)$ induces a local isometry $Y\to S(\Ga)$. Let $Z$ be the canonical completion of $Y\to S(\Ga)$. Then the universal cover of $Z$ is isomorphic to $X(\Ga)$. Let $H$ be a torsion free irreducible group acting on $Y$ in a label-preserving way (Theorem \ref{irreducible group exists}). The action $H\acts Y$ extends to a label-preserving free action $H\acts Z$ such that the inclusion $Y\to Z$ is $H$-equivariant. Let $G=\pi_1(Z/H)$. Then $G$ acts geometrically on $X(\Ga)$ by label-preserving automorphisms. Moreover, the inclusion $Y\to X(\Ga)$ induces an embedding $H\to G$. Since $H$ is not residually finite, neither is $G$. However, each RAAG is residually finite \cite{haglund2008finite}, thus $G$ and $G(\Ga)$ are not commensurable.
\end{proof}
\begin{cor}
Let $\Ga$ be any finite simplicial graph which contains an induced 4-cycle. Then there exists a group $H$ quasi-isometric to $G(\Ga)$ such that it is not commensurable to $G(\Ga)$.
\end{cor}
\section{Introduction}
Let $\mathbb F_{2^{s}}$ be a Galois field over $\mathbb F_2$, and $\chi$ be the cubic character, namely $\chi$
is a mapping from $\mathbb F_{2^{s}}^*$ into the complex numbers defined as
$$ \chi(\alpha^h \theta^j) = e^{\frac{2i\pi }{3}h} \dot{=} ~\omega^h ~~~~h=0,1,2~~, $$
where $\alpha$ is primitive and $\theta$ is a cube in $\mathbb F_{2^{s}}^*$, furthermore we set $\chi(0)=0$ by definition.
Let $\Tr_s(x)=\sum_{j=0}^{s-1} x^{2^j}$ be the trace function over $\mathbb F_{2^s}$, and $\Tr_{s/r}(x)=\sum_{j=0}^{s/r-1} x^{2^{rj}}$ be the relative trace function over $\mathbb F_{2^s}$
relatively to $\mathbb F_{2^r}$, with $r|s$ \cite{lidl}. \\
A Gauss sum of a character $\chi$ over $\mathbb F_{2^{s}}$ is defined as \cite{berndt}
$$ G_s(\beta, \chi) = \sum_{y \in \mathbb F_{2^s}} \chi(y) e^{\pi i \Tr_s(\beta y)} =
\bar \chi(\beta) G_s(1, \chi) ~~~~\forall \beta \in \mathbb F_{2^{s}} ~~. $$
The values of the Gauss sums of a cubic character over $\mathbb F_{2^{s}}$ can be found by computing the Gauss sum over $GF(4)$ and applying Davenport-Hasse's theorem on the lifting of characters (\cite{berndt,Jungnickel,lidl}) for $s$ even (and by computing the Gauss sum over $GF(2)$ and then trivially lifting for $s$ odd). However it is possible to use a more elementary approach, and this is the topic of the present work.
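For instance, for $s=2m$ even the Davenport--Hasse relation for the lifted cubic character reads $-G_{2m}(1,\chi)=\bigl(-G_{2}(1,\chi)\bigr)^{m}$, so that the single value $G_{2}(1,\chi)=2$ already yields $G_{2m}(1,\chi)=-(-2)^{m}$; the elementary approach developed below recovers the same value without invoking the lifting theorem.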
If $s$ is odd then the cubic character is trivial because every element $\beta$ in $\mathbb F_{2^s}$ is a
cube as the following chain of equalities shows
$$ \beta \cdot 1 = \beta \cdot (\beta^{2^s-1})^2= \beta \beta^{2^{s+1}-2} = \beta^{2^{s+1}-1} =(\beta^{\frac{2^{s+1}-1}{3}})^3 ~~, $$
since $\beta^{2^s-1}=1$, and $s+1$ is even, so that $2^{s+1}-1$ is divisible by $3$. In this case we have
$$ G_s(1, \chi) = \sum_{y \in \mathbb F_{2^s}} \chi(y) e^{\pi i \Tr_s( y)} =
\sum_{y \in \mathbb F_{2^s}^*} e^{\pi i \Tr_s( y)} =-1 ~~, $$
since the number of elements with trace $1$ is equal to the number of elements with trace $0$ ($\Tr_s(x)\in\mathbb F_{2}$; moreover $\Tr_s(x)=1$ and $\Tr_s(x)=0$ are two equations of degree $2^{s-1}$), and
$e^{\pi i \cdot 0} =1$ while $e^{\pi i \cdot 1} =-1$; hence the sum over all of $\mathbb F_{2^s}$ vanishes, and removing the term $y=0$, which contributes $+1$, leaves $-1$.
If $s=2m$ is even, the cubic character is nontrivial, and the computation of the Gauss sums requires some more effort; before we show how they can be computed with an elementary approach, we need some preparatory lemmas.
\section{Preliminary facts}
First of all we recall that, for any nontrivial character $\chi$ over $\mathbb F_{q}$, $\sum_{x \in \mathbb F_{q}} \chi(x) = 0 $.
This is used to prove a property of a sum of characters, already known to Kummer \cite{winter}, which can be
formulated in the following form:
\begin{lemma}
\label{lemma1}
Let $\chi$ be a nontrivial character and $\beta$ any element of $\mathbb F_{q}$; then
$$ \sum_{x \in \mathbb F_{q}} \chi(x) \bar \chi(x+\beta) = \left\{ \begin{array}{lcl}
q-1 &~~& \mbox{if} ~~\beta=0 \\
-1 &~~& \mbox{if} ~~\beta\neq 0
\end{array} \right. ~~. $$
\end{lemma}
\noindent
{\sc Proof}.
If $\beta=0$, the summand is $\chi(x) \bar \chi(x) =1$, unless $x=0$, in which case it is $0$; the conclusion is then immediate. \\
When $\beta\neq 0$, we can again exclude the term with $x=0$, as $\chi(0)=0$, so that $x$ is invertible,
and the summand can be written as
$$\chi(x) \bar \chi(x+\beta)= \chi(x) \bar \chi(x) \bar \chi(1+\beta x^{-1}) = \bar \chi(1+\beta x^{-1}) ~~. $$
With the substitution $y=1+\beta x^{-1}$, the summation becomes
$$ \sum_{\stackrel{y \in \mathbb F_{q}}{y \neq 1}} \chi(y) = -1 + \sum_{y \in \mathbb F_{q}} \chi(y) =-1~~, $$
as $\chi(y)=1$ for $y=1$.
\begin{flushright} $\Box$ \end{flushright}
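As a quick check over $\mathbb F_{4}=\{0,1,\alpha,\alpha^2\}$, with $\alpha^2=\alpha+1$ and $\chi$ the cubic character, take $\beta=1$: the terms with $x=0$ and $x=1$ vanish, while $\chi(\alpha)\bar\chi(\alpha+1)=\chi(\alpha)\bar\chi(\alpha^2)=\omega\cdot\omega=\omega^2$ and $\chi(\alpha^2)\bar\chi(\alpha^2+1)=\chi(\alpha^2)\bar\chi(\alpha)=\omega^2\cdot\omega^2=\omega$, so the sum equals $\omega+\omega^2=-1$, as predicted by Lemma \ref{lemma1}.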
We are now interested in the sum $\sum_{x \in \mathbb F_{q}} \chi(x) \chi(x+1)$. Note that for the Gauss sums over $\mathbb F_{2^{s}}$ we have
\begin{equation}
\label{gauss01}
G_{s}(1, \chi) = \sum_{\stackrel{y \in \mathbb F_{2^{s}}}{\Tr_{s}(y)=0}} \chi(y) -\sum_{\stackrel{y \in \mathbb F_{2^{s}}}{\Tr_{s}(y)=1}} \chi(y)~~.
\end{equation}
It follows that, if $\chi$ is a nontrivial character, then the Gauss sum over $\mathbb F_{2^{s}}$ satisfies the following:
$$G_{s}(1,\chi) =2\sum_{\stackrel{y \in \mathbb F_{2^{s}}}{\Tr_{s}(y)=0}} \chi(y).$$
In fact
half of the field elements have trace $0$ and the other half $1$, so that
$$\sum_{\stackrel{y \in \mathbb F_{2^{s}}}{\Tr(y)=0}} \chi(y) =-\sum_{\stackrel{y \in \mathbb F_{2^{s}}}{\Tr(y)=1}} \chi(y) $$
as the sum over all field elements is zero, since $\chi$ is nontrivial.
\begin{lemma}
\label{lemma2}
If $\chi$ is a nontrivial character over $\mathbb F_{2^{s}}$, then
$$\sum_{x \in \mathbb F_{2^{s}}} \chi(x) \chi(x+1) = G_{s}(1,\chi)~~. $$
\end{lemma}
\noindent
{\sc Proof}.
The sum $\sum_{x \in \mathbb F_{2^{s}}} \chi(x) \chi(x+1)$ can be written as $\sum_{x \in \mathbb F_{2^{s}}} \chi(x(x+1))$, since the character is a
multiplicative function, now the function $f(x)=x(x+1)$ is a mapping from $\mathbb F_{2^{s}}$ onto the subset of elements with trace $0$, as $\Tr_s(x)=\Tr_s(x^2)$ for any $s$, and each image comes exactly from two elements, $x$ and $x+1$.
It follows that
\begin{equation}
\label{gausssum}
\sum_{x \in \mathbb F_{2^{s}}} \chi(x) \chi(x+1) = 2 \sum_{\stackrel{y \in \mathbb F_{2^{s}}}{\Tr_{s}(y)=0}} \chi(y) = G_{s}(1,\chi) ~~.
\end{equation}
\begin{flushright} $\Box$ \end{flushright}
\noindent
\begin{lemma}
\label{lemma3}
Let $\chi$ be a nontrivial character of order $2^r+1$. Then the Gauss sum $G_{s}(1,\chi)$ is a real number.
\end{lemma}
\noindent
{\sc Proof}.
Using (\ref{gausssum})
we have
$$ \bar G_{s}(1,\chi)= \sum_{x \in \mathbb F_{2^{s}}} \bar \chi(x) \bar \chi(x+1) = \sum_{x \in \mathbb F_{2^{s}}} \chi(x^{2^r}) \chi(x^{2^r}+1) = \sum_{x \in \mathbb F_{2^{s}}} \chi(x) \chi(x+1) = G_{s}(1,\chi) ~~,$$
as $\bar \chi(x)=\chi(x)^{2^r}=\chi(x^{2^r})$ and $x\to x^{2^r}$ is a field automorphism, so it just permutes the elements of the field. \hfill $\Box$
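In particular, the cubic character has order $3=2^{1}+1$, so Lemma \ref{lemma3} applies with $r=1$: the Gauss sum of a nontrivial cubic character over $\mathbb F_{2^{s}}$ (which exists precisely when $3$ divides $2^{s}-1$, i.e. when $s$ is even) is always real.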
\vspace{5mm}
\noindent
\section{Main results}
The absolute value of $G_{s}(1,\chi)$ can be evaluated using elementary standard techniques going back to Gauss (see e.g. \cite{berndt}),
while its argument requires a more subtle analysis. The main theorems of this section derive in an elementary way the exact value of the Gauss sum for the cubic character $\chi$ over $\mathbb F_{2^{2m}}$ (the case of $s$ odd is trivial, as shown above). Before we proceed, we show in a standard way what its absolute value is.
Since $ G_{2m}(\beta, \chi)= \bar\chi(\beta) G_{2m}(1,\chi)$ , on one hand, we have
\begin{equation}
\label{sum1}
\begin{array}{lcl}
\displaystyle \sum_{\beta \in \mathbb F_{2^{2m}}} G_{2m}(\beta, \chi) \bar G_{2m}(\beta, \chi) &=& \displaystyle
\sum_{\beta \in \mathbb F_{2^{2m}}}\bar \chi(\beta) \chi(\beta) G_{2m}(1, \chi) \bar G_{2m}(1, \chi) \\
&=& \displaystyle \sum_{\beta \in \mathbb F_{2^{2m}}^*} G_{2m}(1, \chi) \bar G_{2m}(1, \chi) =
(2^{2m}-1) G_{2m}(1, \chi) \bar G_{2m}(1, \chi)~~. \end{array}
\end{equation}
On the other hand, by the definition of Gauss sum, we have
$$ \begin{array}{l}
\displaystyle \sum_{\beta \in \mathbb F_{2^{2m}}} G_{2m}(\beta, \chi) \bar G_{2m}(\beta, \chi) = \sum_{\beta \in \mathbb F_{2^{2m}}}\sum_{\alpha \in \mathbb F_{2^{2m}}}\sum_{\gamma \in \mathbb F_{2^{2m}}} \bar \chi(\alpha)
e^{\pi i \Tr_{2m}(\beta \alpha)}
\chi(\gamma) e^{-\pi i \Tr_{2m}(\gamma \beta)}
\end{array} ~~,
$$
and substituting $\alpha=\gamma+\theta$ in the last sum, we have
\begin{equation}
\label{sum2}
\sum_{\beta \in \mathbb F_{2^{2m}}} G_{2m}(\beta, \chi) \bar G_{2m}(\beta, \chi) = \sum_{\gamma \in \mathbb F_{2^{2m}}}\sum_{\theta \in \mathbb F_{2^{2m}}} \bar \chi(\gamma+\theta)
\chi(\gamma)\sum_{\beta \in \mathbb F_{2^{2m}}} e^{\pi i \Tr_{2m}(\beta \theta)} = 2^{2m}(2^{2m}-1) ~~,
\end{equation}
as the sum over $\beta$ is $2^{2m}$ if $\theta=0$ and is $0$ otherwise, since the values of the trace are equally distributed, as said above; consequently only the term $\theta=0$ survives, and the remaining sum over
$\gamma$ equals $2^{2m}-1$, as $\chi(0)=0$, which gives the stated value $2^{2m}(2^{2m}-1)$.
From the comparison of (\ref{sum1}) with (\ref{sum2}) we get $ G_{2m}(1, \chi) \bar G_{2m}(1, \chi)=2^{2m}$, then $|G_{2m}(1, \chi)|=2^{m}$.
A few initial values are $G_2(1,\chi)=2$, $G_4(1,\chi)=-4$, $G_6(1,\chi)=8$, $G_8(1,\chi)=-16$, and $G_{10}(1,\chi)=32$, so a reasonable guess is $G_{2m}(1,\chi)=-(-2)^{m}$.
This guess is correct as proved by the following theorems.
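For instance, the first of these values can be checked directly: in $\mathbb F_{4}=\{0,1,\alpha,\alpha^2\}$ with $\alpha^2=\alpha+1$ we have $\Tr_2(1)=1+1=0$ and $\Tr_2(\alpha)=\Tr_2(\alpha^2)=1$, so that
$$ G_{2}(1,\chi)=\chi(1)-\chi(\alpha)-\chi(\alpha^2)=1-\omega-\omega^2=2=-(-2)^{1} ~~. $$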
\begin{theorem}
\label{signodd}
If $m$ is odd, the value of the Gauss sum $G_{2m}(1,\chi)$ is $2^m$.
\end{theorem}
\noindent
{\sc Proof}.
Let $\alpha$ be a primitive cubic root of unity in $\mathbb F_{2^{2m}}$; then it is a root of $x^2+x+1$. In other words, a root
$\alpha$ of $x^2+x+1$, which does not belong to $\mathbb F_{2^{m}}$, as $m$ is odd, can be used to define a quadratic
extension of this field, i.e. $\mathbb F_{2^{2m}}$, and the elements of this extension can be represented
in the form $x+ \alpha y$, with $x,y \in \mathbb F_{2^{m}}$. Furthermore, the two roots $\alpha$ and $1+\alpha$
of $x^2+x+1$ are either fixed or exchanged by any Frobenius automorphism; in particular the automorphism
$\sigma^m(x) = x^{2^m}$ necessarily exchanges the two roots as it fixes precisely all the elements of
$\mathbb F_{2^{m}}$, while $\alpha$ does not belong to this field, so that $\sigma^m(\alpha)\neq \alpha$.
Now, a Gauss sum $G_{2m}(1,\chi)$ can be written as
\begin{equation}
\label{signodd2}
G_{2m}(1,\chi)= 2 \sum_{\stackrel{z \in \mathbb F_{2^{2m}}}{\Tr_{2m}(z)=0}} \chi(z) = 2 \sum_{\stackrel{x,y \in \mathbb F_{2^{m}}}{\Tr_{2m}(x+\alpha y)=0}} \chi(x+\alpha y) = 2 \sum_{\stackrel{x,y \in \mathbb F_{2^{m}}}{\Tr_{m}(y)=0}} \chi(x+\alpha y) ~~,
\end{equation}
where we used the trace property
$$\Tr_{2m}(x+\alpha y)=\Tr_{2m}(x)+\Tr_{2m}(\alpha y)=\Tr_{m}(x)+\Tr_{m}(x^{2^m})+\Tr_{2m}(\alpha y)=\Tr_{2m}(\alpha y),$$
and the fact that
$$ \begin{array}{lcl}
\Tr_{2m}(\alpha y) &=& \Tr_{m}(\alpha y)+\Tr_{m}(\alpha y)^{2^m}=\Tr_{m}(\alpha y)+\Tr_{m}((\alpha y)^{2^m}) \\
&=& \Tr_{m}(\alpha y)+\Tr_{m}(\alpha^{2^m} y) = \Tr_{m}(\alpha y)+\Tr_{m}((\alpha+1)y)=
\Tr_{m}(y)~~, \end{array} $$
since $\alpha^{2^m}=\alpha+1$ as previously shown.
The last summation in (\ref{signodd2}) can be split into three sums by separating the cases $x=0$ and $y=0$
$$ 2 \sum_{\stackrel{x,y \in \mathbb F_{2^{m}}}{\Tr_{m}(y)=0}} \chi(x+\alpha y) =
2 \sum_{\stackrel{y \in \mathbb F_{2^{m}}}{\Tr_{m}(y)=0}} \chi(\alpha y) +
2 \sum_{x \in \mathbb F_{2^{m}}} \chi(x) +
2 \sum_{\stackrel{x,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}(y)=0}} \chi(x+\alpha y) ~~. $$
Considering the three sums separately, we have:
$$\sum_{x \in \mathbb F_{2^{m}}} \chi(x)=2^{m}-1 ~~,$$
as $\chi(x)=1$ unless $x=0$ since $m$ is odd;
$$\sum_{\stackrel{y \in \mathbb F_{2^{m}}}{\Tr_{m}(y)=0}} \chi(\alpha y)= \chi(\alpha) (2^{m-1}-1) ~~,$$
as the character is multiplicative, $\chi(y)=1$ unless $y=0$, and only the $0$-trace elements (which are $2^{m-1}-1$) should be counted;
$$ \sum_{\stackrel{x,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}(y)=0}} \chi(x+\alpha y) =
\sum_{\stackrel{x,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}(y)=0}} \chi(y) \chi(xy^{-1}+\alpha ) =
\sum_{\stackrel{ z,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}(y)=0}} \chi(z+\alpha ) =
(2^{m-1}-1) \sum_{z \in \mathbb F_{2^{m}}^*} \chi(z+\alpha ) ~~. $$
Here $y$ is invertible, $\chi(y)=1$ since $m$ is odd, $z$ has been substituted for $xy^{-1}$,
and the sum we get in the end, being independent of $y$, is simply multiplied by the number of values assumed by
$y$. Altogether we have
$$ G_{2m}(1,\chi)= 2^{m+1}-2+\chi(\alpha) (2^m-2) + (2^m-2) \sum_{z \in \mathbb F_{2^{m}}^*} \chi(z+\alpha ) =
2^{m+1}-2 + (2^m-2) \sum_{z \in \mathbb F_{2^{m}}} \chi(z+\alpha ) ~~, $$
and, for later use, we define $A(\alpha) = \sum_{z \in \mathbb F_{2^{m}}} \chi(z+\alpha )$.
In order to evaluate $A(\alpha)$, we consider the sum of $A(\beta)$,
for every $\beta \in \mathbb F_{2^{2m}}$, and observe that $A(\beta)=2^m-1$ if $\beta \in \mathbb F_{2^{m}}$, while,
if $\beta \not \in \mathbb F_{2^{m}}$ all sums assume the same value $A(\alpha)$, which is shown as follows: set $\beta=u+\alpha v$ with $v \neq 0$, then
$$ \sum_{z \in \mathbb F_{2^{m}}} \chi(z+u+ \alpha v ) = \sum_{z \in \mathbb F_{2^{m}}} \chi(v) \chi((z+u)v^{-1}+ \alpha )
= \sum_{z' \in \mathbb F_{2^{m}}} \chi(z'+ \alpha )~~. $$
Therefore, the sum $ \sum_{\beta \in \mathbb F_{2^{2m}}} A(\beta) =\sum_{\beta}\sum_z \chi(z+\beta)=\sum_z\sum_{\beta} \chi(z+\beta)=0$ yields
$$ 2^m(2^m-1)+(2^{2m}-2^m) A(\alpha) =0 $$
which implies $A(\alpha) =-1$, and finally
$$ G_{2m}(1,\chi)= 2^{m+1}-2 - (2^m-2) =2^m ~~. $$ \hfill $\Box$
\paragraph{Remark 1.}
The above theorem can also be proved using a theorem by Stickelberger (\cite[Theorem 5.16]{lidl}).
\begin{theorem}
\label{signeven}
If $m$ is even, the Gauss sum $G_{2m}(1,\chi)$ is equal to $(-2)^{m/2} G_{m}(1,\chi)$.
\end{theorem}
\noindent
{\sc Proof}.
The relative trace of the elements of $\mathbb F_{2^{2m}}$ over $\mathbb F_{2^{m}}$, which is
$$ \Tr_{(2m/m)}(x)= x+x^{2^m} ~~, $$
introduces the polynomial $x+x^{2^m}$ which defines a mapping from $\mathbb F_{2^{2m}}$ onto $\mathbb F_{2^{m}}$ with kernel the subfield $\mathbb F_{2^{m}}$ (\cite{lidl}). The equation $x^{2^m}+x=y$ has in fact exactly $2^m$ roots in $\mathbb F_{2^{2m}}$ for every $y \in \mathbb F_{2^{m}}$. \\
By definition we have
$$ G_{2m}(1,\chi)= 2\sum_{\stackrel{z \in \mathbb F_{2^{2m}}}{\Tr_{2m}(z)=0}} \chi(z) =
2 \sum_{\stackrel{x,y \in \mathbb F_{2^{m}}}{\Tr_{2m}(x+\alpha y)=0}}\chi(x+\alpha y) ~~, $$
where $\alpha$ is a root of an irreducible quadratic polynomial $x^2+x+b$ over $\mathbb F_{2^{m}}$, i.e. $\Tr_{m}(b)=1$ (\cite[Corollary 3.79]{lidl}) and $ \Tr_{(2m/m)}(\alpha)= 1$, which can be seen from the coefficient of $x$ of the polynomial.
Now
$$\Tr_{2m}(x+\alpha y)= \Tr_{2m}(x)+\Tr_{2m}(\alpha y)=\Tr_{2m}(\alpha y)= \Tr_{m}(\alpha y)+\Tr_{m}(\alpha^{2^m} y)~~, $$
but $\alpha^{2^m}=1+\alpha$, so that $\Tr_{2m}(x+\alpha y)= \Tr_{m}(y)$, and we have
$$ G_{2m}(1,\chi)= 2\sum_{\stackrel{x,y \in \mathbb F_{2^{m}}}{\Tr_{m}(y)=0}}\chi(x+\alpha y) =
2\sum_{{x \in \mathbb F_{2^{m}}}}\chi(x) + 2\sum_{\stackrel{y \in \mathbb F_{2^{m}}^*}{\Tr_{m}(y)=0}}\chi(\alpha y)+
2\sum_{\stackrel{x,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}(y)=0}}\chi(x+\alpha y)
~~, $$
where the first summation has been split into the sum of three summations, by separating the cases $y=0$ and $x=0$.
We observe that, since the character over $\mathbb F_{2^{m}}$ is not trivial, the first sum is $0$
and the second is $\chi(\alpha) G_{m}(1,\chi)$, while the third sum can be
written as follows
$$ 2\sum_{\stackrel{x,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}( y)=0}}\chi(x+\alpha y)=
2\sum_{\stackrel{x,y \in \mathbb F_{2^{m}}^*}{\Tr_{m}( y)=0}} \chi(y) \chi(xy^{-1}+\alpha) =
2\sum_{\stackrel{y \in \mathbb F_{2^{m}}^*}{\Tr_{m}( y)=0}} \chi(y)
\sum_{z \in \mathbb F_{2^{m}}^*} \chi(z+\alpha)
~~. $$
Putting all together, we obtain
$$ G_{2m}(1,\chi)= G_{m}(1,\chi) \sum_{z \in \mathbb F_{2^{m}}} \chi(z+\alpha) = G_{m}(1,\chi) A_m(\alpha) ~~, $$
which shows that $| A_m(\alpha)| = 2^{m/2}$ and that $A_m(\alpha)$ is real, as both $G_{2m}(1,\chi)$ and $G_{m}(1,\chi)$ are real. Note that this holds for any $\alpha$ with $\Tr_{(2m/m)}(\alpha)=1$. \\
We will show now that
$A_m(\alpha) = (-2)^{m/2}$.
Consider the sum of $A_m(\gamma)$ over all $\gamma$ with relative trace equal to $1$, which is, on the one hand, $2^m A_m(\alpha)$, as the equation $x^{2^m}+x=1$ has exactly $2^m$ roots in $\mathbb F_{2^{2m}}$, and, on the other hand, explicitly we have
$$ \sum_{\stackrel{\gamma \in \mathbb F_{2^{2m}}^*}{\Tr_{2m/m}(\gamma)=1}} A_m(\gamma) =
\sum_{z \in \mathbb F_{2^{m}}} \sum_{\stackrel{\gamma \in \mathbb F_{2^{2m}}^*}{\Tr_{2m/m}(\gamma)=1}} \chi(z+\gamma) =
\sum_{z \in \mathbb F_{2^{m}}} \sum_{\stackrel{\gamma' \in \mathbb F_{2^{2m}}^*}{\Tr_{2m/m}(\gamma')=1}} \chi(\gamma')=
2^m \sum_{\stackrel{\gamma' \in \mathbb F_{2^{2m}}^*}{\Tr_{2m/m}(\gamma')=1}} \chi(\gamma')~~, $$
where the summation order has been exchanged, and $\Tr_{2m/m}(\gamma)=\Tr_{2m/m}(\gamma')$ as $\Tr_{2m/m}(z)=0$ for any $z\in\mathbb F_{2^{m}}$.
Comparing the two results, we have
$$ A_m(\alpha)= \sum_{\stackrel{\gamma' \in \mathbb F_{2^{2m}}^*}{\Tr_{2m/m}(\gamma')=1}} \chi(\gamma')=
M_0+M_1 \omega+ M_2 \omega^2~~, $$
where $M_0$ is the number of $\gamma'$ with $\Tr_{2m/m}(\gamma')=1$ that are cubic residues, i.e. they have character
$\chi(\gamma')$ equal to $1$, $M_1$ is the number of $\gamma'$ with $\Tr_{2m/m}(\gamma')=1$ that have character
$\omega$, and $M_2$ is the number of $\gamma'$ with $\Tr_{2m/m}(\gamma')=1$ that have character $\omega^2$, then
$M_0+M_1+M_2=2^m$, and $M_1=M_2$ since $A_m(\alpha)$ is real. Therefore, we have $A_m(\alpha)= M_0-M_1$, and so we consider two equations for $M_0$ and $M_1$
$$ \left\{ \begin{array}{l}
M_0 + 2 M_1 = 2^m \\
M_0-M_1 = \pm 2^{m/2} \\
\end{array} \right.
$$
solving for $M_1$ we have $M_1= \frac{1}{3}(2^m \mp 2^{m/2})$. Since $M_1$ must be an integer and $2^m \mp 2^{m/2}\equiv 1\mp(-1)^{m/2} \pmod{3}$, only one sign is admissible for each parity of $m/2$, and we have
$$ \left\{ \begin{array}{lcl}
M_0 - M_1 = 2^{m/2} &~~& \mbox{if $m/2$ is even}\\
M_0 - M_1 = -2^{m/2}&~~& \mbox{if $m/2$ is odd}. \\
\end{array} \right. ~~
$$
\begin{flushright} $\Box$ \end{flushright}
\begin{corollary}
If $m$ is even, the value of the Gauss sum $G_{2m}(1,\chi)$ is $-2^m$.
\end{corollary}
\noindent
{\sc Proof}.
It is a direct consequence of the two theorems above.
\begin{flushright} $\Box$ \end{flushright}
\section*{Acknowledgment}
The research was supported in part by the Swiss National Science
Foundation under grant No. 126948.
Deep neural networks (DNNs) have been actively researched as a powerful technique to approximate functions and solutions of PDEs for scientific computing applications \cite{yu2018deep,raissi2018hidden,raissi2019physics,peng2023non,liu2021vpvnet}. However, recent work \cite{xu2020frequency} has indicated that common fully connected DNNs demonstrate a spectral bias when learning functions with a wide range of frequency content, namely, the lower frequency content of the function can be learned quickly while the convergence to the higher frequency content may follow, but only after a large amount of training of the networks. Such a phenomenon differs from the convergence behavior of traditional multigrid methods for finding approximate solutions to PDEs, where the higher frequency error is eliminated first. In order to remedy the spectral bias of DNNs, a multiscale DNN (MscaleDNN) method was proposed \cite{liu2020multi,wang2020multi}, which consists of a series of parallel common fully connected sub-neural networks. Each of the sub-networks receives a scaled version of the input, and their outputs are then combined to produce the output of the MscaleDNN (refer to Fig. \ref{net}). The individual sub-network of the MscaleDNN with a scaled input is designed to approximate a segment of the frequency content of the targeted function, and the effect of the scaling is to convert a specific range of high frequency content to a lower one so that the learning can be accomplished much more quickly.
In the design of the MscaleDNN, in order to produce the scale separation and identification capability of the network, activation functions with a localized frequency profile are used instead of the usual activation functions, e.g., ReLU, tanh, etc. This is similar to the idea of compactly supported mother scaling and wavelet functions from wavelet theory \cite{daubechies1992ten}.
Although the MscaleDNN has been used in computing oscillatory solutions of Navier-Stokes flows and other PDEs with oscillatory solutions \cite{liu2020multi,wang2020multi}, there is no rigorous mathematical analysis which explains the reason for its much improved performance in approximating highly oscillatory functions compared with plain fully connected DNNs. In this paper, we will use the neural tangent kernel (NTK) framework to derive a diffusion model for the dynamics of the error of the MscaleDNN learning in the frequency domain, in the limiting case where the learning rate goes to zero and the width of the network goes to infinity. As more scales are used in the MscaleDNN, the diffusion coefficients of the diffusion equations for the error manifest their effect over larger frequency ranges, while those for the normal fully connected DNN are concentrated near zero frequency. As a result, the derived diffusion model provides clear evidence of why the MscaleDNN can learn quickly over a wider range of frequencies when more scales are employed.
The rest of the paper is organized as follows. In Section 2, a brief review of the MscaleDNN is given. Section 3 derives the diffusion model of the learning error for fitting a function and for finding the solution of a boundary value problem of an ODE. Numerical methods and solutions for the error diffusion equations are presented in Section 4, which clearly show the improved convergence of the MscaleDNN over a wider range of frequencies as the number of scales used in the MscaleDNN is increased. Finally, Section 5 gives the conclusion and future work.
\section{Brief review of MscaleDNN}
\begin{figure}[htbp]
\centering
\subfigure[Fully connected DNN]{\includegraphics[scale=0.8]{FCDNN}}\qquad
\subfigure[Multi-scale DNN]{\includegraphics[scale=0.4]{MscaleNet}}
\caption{Illustration of fully connected DNN and Multi-scale DNN.}
\label{net}
\end{figure}
Fig. \ref{net} shows the schematics of a MscaleDNN consisting of $n$ networks. Each scaled input passing through a sub-network can be expressed in the following formula
\begin{equation}
f_{\vtheta}(\vx) = \vW^{[L-1]} \sigma\circ(\cdots (\mW^{[1]} \sigma\circ(\mW^{[0]} (\vx) + \vb^{[0]} ) + \vb^{[1]} )\cdots)+\vb^{[L-1]}, \label{mscalednn}
\end{equation}
where $W^{[0]}$ to $W^{[L-1]}$ and $b^{[0]}$ to $b^{[L-1]}$ are the weight matrices and bias unknowns, respectively, to be optimized via the training, and $\sigma(x)$ is the activation function. In this work, the following plane wave activation function will be used for its localized frequency property \cite{liu2020multi},
\begin{equation}
\sigma(x)=\sin(x).
\end{equation}
For the input scales, we could select the scale for the $i$-th sub-network to be $i$ (as shown in Fig. \ref{net}) or $2^{i-1}$.
Mathematically, a MscaleDNN solution $f(\vx)$ is represented by the following sum of sub-networks $f_{\theta^{n_{i}}}$ with network parameters denoted by $\theta^{n_{i}}$ (i.e., weight matrices and bias)
\begin{equation}
f(\vx)\sim
{\displaystyle\sum\limits_{i=1}^{M}}
f_{\theta^{n_{i}}}(\alpha_{i}\vx),\label{f_app}%
\end{equation}
where $\alpha_i$ is the chosen scale for the $i$-th sub-network in Fig. \ref{net}. For more details on the design and discussion of the MscaleDNN, please refer to \cite{liu2020multi,wang2020multi}.
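As an illustration of \eqref{f_app}, the following NumPy sketch assembles an MscaleDNN from several fully connected sub-networks with the activation $\sigma(x)=\sin(x)$: each sub-network receives a scaled copy $\alpha_{i}\vx$ of the input and the outputs are summed. The layer sizes, the choice $\alpha_i=2^{i-1}$ and the random initialization are illustrative assumptions only, not prescriptions of the method.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def init_subnet(layers):
    """Random parameters (W, b) for one fully connected sub-network."""
    return [(rng.standard_normal((nout, nin)) / np.sqrt(nin),
             rng.standard_normal(nout))
            for nin, nout in zip(layers[:-1], layers[1:])]

def subnet_forward(params, x):
    """Fully connected network with sin activation on the hidden layers."""
    h = x
    for i, (W, b) in enumerate(params):
        h = W @ h + b
        if i < len(params) - 1:
            h = np.sin(h)
    return h

def mscale_forward(nets, scales, x):
    """MscaleDNN output: sum of the sub-networks applied to scaled inputs."""
    return sum(subnet_forward(p, a * x) for p, a in zip(nets, scales))

layers = [2, 64, 64, 1]                  # input dimension 2, scalar output
scales = [2**i for i in range(4)]        # alpha_i = 1, 2, 4, 8
nets = [init_subnet(layers) for _ in scales]
print(mscale_forward(nets, scales, np.array([0.3, -0.7])))
\end{verbatim}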
For comparison studies in this paper, we will refer to a ``{\bf normal}'' network as a single fully connected DNN with the same total number of neurons as the MscaleDNN, but without the multi-scale features. We will perform extensive numerical experiments to examine the effectiveness of different settings and select efficient ones to solve complex problems.
\section{Error diffusion model of MscaleDNN}
In this section, the convergence of the machine learning algorithm for fitting and PDE approximation problems with multi-scale neural networks is analyzed. In both scenarios, we show that the evolution of the error can be modeled by a diffusion equation in the Fourier frequency domain as the width of the network goes to infinity and the learning rate approaches zero.
\subsection{Error analysis in high dimensional fitting problem}
We first consider the problem of fitting an objective function $y=f(\bm x)$ defined in the $d$-dimensional domain $\Omega=[-1, 1]^d\subset\mathbb R^d$ with a neural network. The mean square loss
\begin{equation}\label{meansquareloss}
L(\bm\theta)=\int_{\Omega}|\mathcal N(\bm x,{\bm \theta})-f(\bm x)|^2d\bm x,
\end{equation}
for a neural network function $\mathcal N(\bm x, \bm \theta)$ is adopted in the following analysis.
The gradient descent dynamics based on the corresponding loss functional \eqref{meansquareloss} is
\begin{equation}\label{graddescentscheme}
\bm\theta^{(k+1)}=\bm\theta^{(k)}-\tau\nabla L(\bm\theta^{(k)}),
\end{equation}
where $\tau$ is the learning rate. By regarding $\tau$ as the time step size, the continuum limit dynamics for $\tau\rightarrow 0$ is
\begin{equation}\label{graddescentdynamics}
\frac{{\rm d}\bm\theta(t)}{{\rm d} t}=-\nabla L(\bm\theta(t)).
\end{equation}
With the mean square loss function \eqref{meansquareloss}, we obtain
\begin{equation}\label{DNNfundynamics}
\begin{split}
\partial_t\mathcal N(\bm x,\bm\theta)=&[\nabla_{\bm\theta}\mathcal N(\bm x,\bm\theta)]^{\rm T} \frac{{\rm d}\bm\theta}{{\rm d} t}
=-\int_{\Omega}(\nabla_{\theta}\mathcal N(\bm x,\bm\theta))^{\rm T}\nabla_{\bm \theta}\mathcal N(\bm x',\theta)(\mathcal N(\bm x',\theta)-f(\bm x'))d\bm x'\\
:=&-\int_{\Omega}\Theta(\bm x, \bm x';\bm\theta)(\mathcal N(\bm x',\bm \theta)-f(\bm x'))d\bm x',
\end{split}
\end{equation}
for the dynamics of the network function $\mathcal N(\bm x,\theta)$, where
\begin{equation}\label{NTKdef}
\Theta(\bm x, \bm x';\bm\theta)=(\nabla_{\theta}\mathcal N(\bm x,\bm\theta))^{\rm T}\nabla_{\bm \theta}\mathcal N(\bm x',\theta),
\end{equation}
is the neural tangent kernel (NTK) \cite{jacot2018neural}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\linewidth]{MSDNNOneLayer}
\caption{An illustration of an one layer multiscale DNN.}
\label{MsDNNOneLayernet}
\end{figure}
A multi-scale neural network with one hidden layer (see Fig. \ref{MsDNNOneLayernet}) is defined by
\begin{equation}\label{MsDNNfun}
{\mathcal N}_s(\bm x,\bm\theta)=\frac{1}{\sqrt{m}}\sum\limits_{p=0}^{s}\sum\limits_{k=1}^{q}\sigma(\bm\theta_{pq+k}^{\rm T}2^p\bm x+b_{pq+k}),\quad \bm x\in \Omega:=[-1, 1]^d,
\end{equation}
where $s+1$ is the number of scales, $q$ is the number of neurons for each scale, and $m=(s+1)q$ is the total number of neurons in the hidden layer. Apparently, the network reduces to a fully connected neural network with one hidden layer if $s=0$ (see Fig. \ref{FDNNOneLayernet}). For this simple multi-scale neural network, direct calculation gives its NTK
\begin{equation}\label{MsDNNNTK}
{\Theta}_s(\bm x, \bm x';\bm\theta)=\frac{1}{m}\sum\limits_{p=0}^{s}(4^p\bm x^{\rm T}\bm x'+1)\sum\limits_{k=1}^{q}\sigma'(\bm\theta_{pq+k}^{\rm T}2^p\bm x+b_{pq+k})\sigma'(\bm\theta_{pq+k}^{\rm T}2^p\bm x'+b_{pq+k}).
\end{equation}
Set the activation function $\sigma(x)=\sin(x)$ and assume all the parameters $\{\theta_{p,k}\}$ in $\bm\theta_p=(\theta_{p,1}, \theta_{p,2},\cdots, \theta_{p,d})^{\rm T}$, $\{b_p\}$ are independent standard normal random variables. Then, by the law of large numbers and the identity
$$\int_{-\infty}^{\infty}\cos(x\theta+y)e^{-\frac{\theta^2}{2}}d\theta=\sqrt{2\pi}e^{-\frac{x^2}{2}}\cos y,$$ we have
\begin{equation*}
\begin{split}
\lim\limits_{q\rightarrow\infty}{\Theta}_s(\bm x, \bm x';\bm\theta)&=\lim\limits_{q\rightarrow\infty}\frac{1}{m}\sum\limits_{p=0}^{s}(4^p\bm x\bm x'+1)\sum\limits_{k=1}^{q}\cos(\bm\theta_{pq+k}^{\rm T}2^p\bm x+b_{pq+k})\cos(\bm\theta_{pq+k}^{\rm T}2^p\bm x'+b_{pq+k})\\
&=\frac{1}{s+1}\sum\limits_{p=0}^{s}(4^p\bm x\bm x'+1)\mathbb E(\cos(\bm\theta_{0}^{\rm T}2^p\bm x+b_{0})\cos(\bm\theta_{0}^{\rm T}2^p\bm x'+b_{0}))\\
&=\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}(4^p\bm x\bm x'+1)\Big[e^{-2}e^{-\frac{4^{p}|\bm x+\bm x'|^2}{2}}+e^{-\frac{4^{p}|\bm x-\bm x'|^2}{2}}\Big].
\end{split}
\end{equation*}
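This limit can be verified numerically: sampling the weights and biases as independent standard normal variables and averaging over $q$ neurons per scale reproduces the closed-form kernel as $q$ grows. The sketch below is only a sanity check of the formula; the test points, the number of scales and the random seed are arbitrary choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def empirical_ntk(x, xp, s, q):
    """Finite-width NTK of the one-layer MscaleDNN with sin activation."""
    d = x.size
    total = 0.0
    for p in range(s + 1):
        th = rng.standard_normal((q, d))      # theta_{pq+k}
        b = rng.standard_normal(q)            # b_{pq+k}
        u = th @ (2**p * x) + b
        v = th @ (2**p * xp) + b
        total += (4**p * (x @ xp) + 1) * np.sum(np.cos(u) * np.cos(v))
    return total / ((s + 1) * q)              # m = (s + 1) q

def limit_ntk(x, xp, s):
    """Infinite-width limit derived above."""
    total = 0.0
    for p in range(s + 1):
        total += (4**p * (x @ xp) + 1) * (
            np.exp(-2.0) * np.exp(-4**p * np.sum((x + xp)**2) / 2)
            + np.exp(-4**p * np.sum((x - xp)**2) / 2))
    return total / (2 * (s + 1))

x, xp = np.array([0.3, -0.5]), np.array([-0.1, 0.7])
for q in (100, 10000, 1000000):
    print(q, empirical_ntk(x, xp, s=3, q=q), limit_ntk(x, xp, s=3))
\end{verbatim}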
As the width of the network goes to infinity, the dynamics of the gradient descent learning \eqref{DNNfundynamics} tends to
\begin{equation}\label{limitdynamicMsDNNbiastrue}
\begin{split}
&\partial_t(\mathcal N_s(\bm x,\bm\theta)-f(\bm x))\\
=&-\frac{\bm x^{\rm T}}{2(s+1)}\sum\limits_{p=0}^{s}4^p\int_{\Omega}\Big[e^{-2}\mathcal G_p(\bm x+\bm x')+\mathcal G_p(\bm x-\bm x')\Big]\bm x'(\mathcal N_s(\bm x',\bm\theta)-f(\bm x')){\rm d}\bm x'\\
&-\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}\int_{\Omega}\Big[e^{-2}\mathcal G_p(\bm x+\bm x')+\mathcal G_p(\bm x-\bm x')\Big](\mathcal N_s(\bm x',\bm\theta)-f(\bm x')){\rm d}\bm x',
\end{split}
\end{equation}
where
\begin{equation}\label{Gaussion}
\mathcal G_p(\bm x):=e^{-4^p|\bm x|^2/2},\quad \bm x\in\mathbb R^d,
\end{equation}
is the scaled Gaussian function. Define the zero extension of the error function by
\begin{equation}
\eta(\bm x,\bm \theta)=\begin{cases}
0, & \bm x\notin\Omega,\\
\mathcal N_s(\bm x,\bm\theta)-f(\bm x), & \bm x\in\Omega.
\end{cases}
\end{equation}
Then, the dynamic system \eqref{limitdynamicMsDNNbiastrue} can be rewritten as
\begin{equation}\label{limitdynamicMsDNNbiastruere}
\begin{split}
\partial_t\eta(\bm x,\bm\theta)
=&-\frac{\bm x^{\rm T}}{2(s+1)}\sum\limits_{p=0}^{s}4^p\int_{\mathbb R^d}\Big[e^{-2}\mathcal G_p(\bm x+\bm x')+\mathcal G_p(\bm x-\bm x')\Big]\bm x'\eta(\bm x',\bm\theta){\rm d}\bm x'\\
&-\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}\int_{\mathbb R^d}\Big[e^{-2}\mathcal G_p(\bm x+\bm x')+\mathcal G_p(\bm x-\bm x')\Big]\eta(\bm x',\bm\theta){\rm d}\bm x'.
\end{split}
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.5\linewidth]{FDNNOneLayer}
\caption{An illustration of an one layer fully connected DNN.}
\label{FDNNOneLayernet}
\end{figure}
Given any $g(\bm x)\in L^1(\mathbb R^d)$, define its Fourier transform
\begin{equation}\label{Fouriertransform}
\hat g(\bm\xi):=\mathcal F[g](\bm\xi)=\int_{\mathbb R^d}g(\bm x)e^{-{\rm i} 2\pi\bm\xi^{\rm T} \bm x}{\rm d}\bm x.
\end{equation}
Then, we have identities
\begin{equation}\label{derivativefourier}
\mathcal F[\nabla g](\bm\xi)=2\pi{\rm i}\bm\xi\mathcal F[g](\bm\xi),\quad \nabla \hat g(\bm\xi)=-2\pi{\rm i} \mathcal F[\bm xg(\bm x)](\bm\xi),\quad \forall \bm xg(\bm x)\in (L^1(\mathbb R^d))^d,
\end{equation}
and
\begin{equation}\label{gaussianfourier}
\mathcal F[e^{-|\bm x|^2}](\xi)=\pi^{\frac{d}{2}}e^{-\pi^2|\bm\xi|^2},\quad \mathcal F[g(a\bm x)](\bm\xi)=\Big(\frac{1}{|a|}\Big)^d\mathcal F[g]\Big(\frac{\bm\xi}{a}\Big).
\end{equation}
Given any functions $h(\bm x)$, $g(\bm x)$, their cross-correlation and convolution are given by
\begin{equation}
h\star g:=\int_{\mathbb R^d}\overline{h(\bm x')}g(\bm x+\bm x'){\rm d}\bm x', \quad h* g:=\int_{\mathbb R^d}h(\bm x')g(\bm x-\bm x'){\rm d}\bm x',
\end{equation}
Then, there holds the identities
\begin{equation}\label{ccorrconvtheorem}
\widehat{h\star g}(\bm \xi)=\overline{\hat h(\bm\xi)}\hat g(\bm\xi), \quad \widehat{h* g}(\bm\xi)=\hat h(\bm\xi)\hat g(\bm\xi).
\end{equation}
Taking the Fourier transform \eqref{Fouriertransform} on both sides of \eqref{limitdynamicMsDNNbiastruere} with respect to $\bm x$ and then applying \eqref{derivativefourier}-\eqref{ccorrconvtheorem} to rearrange the terms gives the partial differential equation
\begin{equation}\label{MsDNNfourierfreqdynamicsbaistrue}
\begin{split}
\frac{\partial \hat{\eta}(\bm\xi,\bm\theta(t))}{\partial t}=&\frac{1}{8\pi^2(s+1)}\nabla_{\bm\xi}\cdot\Big[\sum\limits_{p=0}^{s}4^{p}\widehat{\mathcal G}_p(\bm \xi)\Big(\nabla_{\bm \xi}\hat{ \eta}(\bm\xi,\bm\theta(t))-e^{-2}\nabla_{\bm \xi}\overline{\hat{ \eta}(\bm\xi,\bm\theta(t))}\Big)\Big]\\
&-\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}\widehat{\mathcal G}_p(\bm \xi)[e^{-2}\bar{\hat{ \eta}}(\bm\xi,\bm\theta(t))+\hat{ \eta}(\bm\xi,\bm\theta(t))].
\end{split}
\end{equation}
where
\begin{equation}\label{FourierGauss}
\widehat{\mathcal G}_p(\bm\xi)=(2\pi)^{\frac{d}{2}}2^{-pd}e^{-\frac{2\pi^2|\bm \xi|^2}{4^p}}
\end{equation}
Define
\begin{equation}\label{DNNcoefficients}
\begin{split}
A_s^{\pm}(\xi)=\frac{1\pm e^{-2}}{8\pi^2(s+1)}\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\bm\xi),\quad B_s^{\pm}(\xi)=\frac{1\pm e^{-2}}{2(s+1)}\sum\limits_{p=0}^{s}\widehat{\mathcal G}_p(\bm\xi),
\end{split}
\end{equation}
and denote $\hat{\eta}(\bm\xi,\bm\theta(t))=\hat{\eta}_{\rm re}(\bm\xi,\bm\theta(t))+{\rm i} \hat{\eta}_{\rm im}(\bm\xi,\bm\theta(t))$. The diffusion equation \eqref{MsDNNfourierfreqdynamicsbaistrue} can be rewritten as two independent equations
\begin{equation}\label{realimagdynamicsbaistrueMsDNN}
\begin{split}
\frac{\partial \hat{\eta}_{\rm re}(\bm\xi,\bm\theta(t))}{\partial t}=&\nabla_{\bm\xi}\cdot\Big[A_s^-(\xi)\nabla_{\bm \xi}\hat{\eta}_{\rm re}(\bm\xi,\bm\theta(t))\Big]-B_s^+(\xi)\hat{ \eta}_{\rm re}(\bm\xi,\bm\theta(t)),\\
\frac{\partial \hat{\eta}_{\rm im}(\bm\xi,\bm\theta(t))}{\partial t}=&\nabla_{\bm\xi}\cdot\Big[A_s^+(\xi)\nabla_{\bm \xi}\hat{\eta}_{\rm im}(\bm\xi,\bm\theta(t))\Big]-B_s^-(\xi)\hat{ \eta}_{\rm im}(\bm\xi,\bm\theta(t)).
\end{split}
\end{equation}
with respect to the real and imaginary parts of $\hat{\eta}(\bm\xi,\bm\theta(t))$.
A simpler diffusion equation can be derived if the biases are set to zero in the network. In fact, a function represented by the network without bias has the form
\begin{equation}\label{msdnnwithoutbias}
{\mathcal N}_s(\bm x,\bm\theta)=\frac{1}{\sqrt{m}}\sum\limits_{p=0}^{s}\sum\limits_{k=1}^{q}\sigma(\bm\theta_{pq+k}^{\rm T}2^p\bm x),\quad \bm x\in \Omega:=[-1, 1]^d,
\end{equation}
and the neural tangent kernel is given by
\begin{equation}
{\Theta}_s(\bm x, \bm x';\bm\theta)=\frac{\bm x^{\rm T} \bm x'}{m}\sum\limits_{p=0}^{s}4^p\sum\limits_{k=1}^{q}\sigma'(\bm\theta_{pq+k}^{\rm T}2^p\bm x)\sigma'(\bm\theta_{pq+k}^{\rm T}2^p\bm x').
\end{equation}
Set the activation function $\sigma(x)=\sin(x)$, and assume all the parameters $\{\theta_p\}$ are independent standard normal random variables. Then, by the law of large numbers, we have
\begin{equation}
\begin{split}
\lim\limits_{q\rightarrow\infty}{\Theta}_s(\bm x, \bm x';\bm\theta)&=\lim\limits_{q\rightarrow\infty}\frac{\bm x^{\rm T} \bm x'}{m}\sum\limits_{p=0}^{s}4^p\sum\limits_{k=1}^{q}\cos(\bm\theta_{pq+k}^{\rm T}2^p\bm x)\cos(\bm\theta_{pq+k}^{\rm T}2^p\bm x')\\
&=\frac{\bm x^{\rm T} \bm x'}{2(s+1)}\sum\limits_{p=0}^{s}4^p\mathbb E(\cos(\bm\theta_{0}^{\rm T}2^p\bm x)\cos(\bm\theta_{0}^{\rm T}2^p\bm x'))\\
&=\frac{\bm x^{\rm T} \bm x'}{2(s+1)}\sum\limits_{p=0}^{s}4^p\Big[\mathcal G_p(\bm x+\bm x')+\mathcal G_p(\bm x-\bm x')\Big].
\end{split}
\end{equation}
As the width of the network goes to infinity, the dynamics of the gradient descent learning tends to
\begin{equation}\label{limitdynamicMsDNN}
\partial_t\eta(\bm x,\theta)
=-\frac{\bm x^{\rm T}}{2(s+1)}\int_{\Omega}\sum\limits_{p=0}^{s}4^p\Big[\mathcal G_p(\bm x+\bm x')+\mathcal G_p(\bm x-\bm x')\Big]\bm x'\eta(\bm x',\bm\theta){\rm d}\bm x'.
\end{equation}
Mimicking the derivation of \eqref{MsDNNfourierfreqdynamicsbaistrue}, we obtain from \eqref{limitdynamicMsDNN} that
\begin{equation}
\begin{split}
\frac{\partial \hat{\eta}(\bm\xi,\bm\theta(t))}{\partial t}=&\frac{1}{8\pi^2(s+1)}\nabla_{\bm\xi}\cdot\Big[\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\bm\xi)\Big(\nabla_{\bm\xi}\hat{ \eta}(\bm\xi,\bm\theta(t))-\nabla_{\bm\xi}\overline{\hat{\eta}(\bm\xi,\bm\theta(t))}\Big)\Big]\\
=&\frac{{\rm i}}{4\pi^2(s+1)}\nabla_{\bm\xi}\cdot\Big[\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\bm\xi)\nabla_{\bm \xi}\hat{\eta}_{\rm im}(\bm\xi,\bm\theta(t))\Big],
\end{split}
\end{equation}
where $\hat{\eta}_{\rm im}(\bm\xi,\bm\theta(t)):=\mathfrak{Im}\Big\{\hat{\eta}(\bm\xi,\bm\theta(t))\Big\}$. The dynamic system \eqref{limitdynamicMsDNN} in the Fourier frequency domain implies that only the imaginary part of the error evolves during the gradient descent training if a one layer multi-scale neural network without bias and with activation function $\sigma(x)=\sin(x)$ is used. This conclusion is consistent with the fact that the network function \eqref{msdnnwithoutbias} can only be used to fit odd functions.
\subsection{Error analysis for the approximation of differential equation}
It is well known that linear differential equations are equivalent to fitting problems in the Fourier spectral domain. Here, we consider the machine learning algorithm for the two-point boundary value problem:
\begin{equation}
\begin{cases}
\displaystyle -v''(x)=f(x), \quad -1<x<1,\\
\displaystyle v(-1)=a,\quad \ v(1)=b.
\end{cases}
\end{equation}
The mean square loss
\begin{equation}\label{meansquarelossDE}
L(\bm\theta)=\int_{-1}^1|\partial_{xx}w( x;{\bm \theta})+f(x)|^2{\rm d} x=\int_{-1}^1|\partial_{xx}w( x;{\bm \theta})-v''(x)|^2{\rm d} x,
\end{equation}
is adopted in the following analysis. In order to impose the Dirichlet boundary condition, we take the approximate function
\begin{equation}\label{NNapproximation}
w(x, \bm \theta)=\mathcal N(x,\bm\theta)+[a-\mathcal N(-1,\bm\theta)]\frac{1-x}{2}+[b-\mathcal N(1,\bm\theta)]\frac{1+x}{2},
\end{equation}
to be a modification of the neural network function $\mathcal N(x;\bm\theta)$. Apparently, we have
\begin{equation}\label{NNFunDeri}
\partial_{xx}w( x, {\bm \theta})-v''(x)=\partial_{xx}\mathcal N( x,{\bm \theta})-v''(x).
\end{equation}
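Since the two correction terms in \eqref{NNapproximation} are linear in $x$, the modification enforces $w(-1,\bm\theta)=a$ and $w(1,\bm\theta)=b$ exactly without affecting the second derivative, which is precisely \eqref{NNFunDeri}. A minimal sketch of the construction (the stand-in network $N$ is arbitrary and serves only to illustrate the boundary correction):
\begin{verbatim}
def w(x, N, a, b):
    """Modified approximation: w(-1) = a and w(1) = b hold exactly for any N."""
    return N(x) + (a - N(-1.0)) * (1 - x) / 2 + (b - N(1.0)) * (1 + x) / 2

N = lambda x: x**3 + 0.5            # arbitrary stand-in for N(x, theta)
print(w(-1.0, N, a=2.0, b=-1.0))    # -> 2.0
print(w(1.0, N, a=2.0, b=-1.0))     # -> -1.0
\end{verbatim}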
The gradient descent dynamics for loss functional $L(\bm\theta)$ is given by \eqref{graddescentscheme} and its continuum limit for $\tau\rightarrow 0$ is given by \eqref{graddescentdynamics}.
From \eqref{graddescentdynamics}, \eqref{meansquarelossDE} and \eqref{NNFunDeri}, we can calculate that
\begin{equation}\label{DElearningdynamics}
\begin{split}
\partial_t(\mathcal N(x;\bm\theta(t))-v(x))=&-\nabla_{\bm\theta}\mathcal N(x;\bm\theta(t))\nabla L(\bm\theta(t))\\
=&-\int_{-1}^1\partial_{x'x'}\Theta(x, x',\bm\theta(t))\partial_{x'x'}(\mathcal N(x';\bm\theta(t))-v(x')){\rm d}x',
\end{split}
\end{equation}
where $\Theta(x, x',\bm\theta(t))$ is the NTK.
Suppose the multi-scale neural network sketched in Fig. \ref{MsDNNOneLayernet} is adopted, i.e.,
\begin{equation}
\mathcal N_s( x,\bm\theta)=\frac{1}{\sqrt{m}}\sum\limits_{p=0}^{s}\sum\limits_{k=1}^{q}\sigma(\theta_{pq+k}2^px+b_{pq+k}),\quad x\in[-1, 1],
\end{equation}
which is the 1-dimensional special case of \eqref{MsDNNfun}. According to \eqref{MsDNNNTK}, the neural tangent kernel for this 1-dimensional neural network is given by
\begin{equation}
{\Theta}_s( x, x';\bm\theta)=\frac{1}{m}\sum\limits_{p=0}^{s}(4^p x x'+1)\sum\limits_{k=1}^{q}\sigma'(\theta_{pq+k}2^p x+b_{pq+k})\sigma'(\theta_{pq+k}2^p x'+b_{pq+k}),
\end{equation}
for all $x, x'\in [-1, 1]$. Set the activation function $\sigma(x)=\sin(x)$ and assume all the parameters $\{\theta_{p}\}$, $\{b_p\}$ are independent standard normal random variables. Letting the width of each sub-network in the MscaleDNN go to infinity, we obtain
\begin{equation*}
{\Theta}_s^{\infty}( x,  x';\bm\theta):=\lim\limits_{q\rightarrow\infty}{\Theta}_s( x,  x';\bm\theta)=\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}(4^p x x'+1)\Big[e^{-2}\mathcal G_p(x+x')+\mathcal G_p(x-x')\Big].
\end{equation*}
Here, $\mathcal G_p(x)$ is just the one-dimensional special case of the multi-dimensional Gaussian defined in \eqref{Gaussion}.
Denote the zero extension of the error function by
\begin{equation}
\eta(x,\bm\theta(t))=\begin{cases}
0, & |x|>1,\\
\mathcal N_s(x,\bm\theta(t))-v(x), & |x|\leq 1,
\end{cases}
\end{equation}
Then, the dynamics \eqref{DElearningdynamics} tends to
\begin{equation}\label{DElearningdynamicslimit}
\begin{split}
\partial_t\eta(x;\bm\theta(t))=-\int_{-\infty}^{+\infty}\partial_{x'x'}\Theta_s^{\infty}(x, x')\partial_{x'x'}\eta(x',\bm\theta(t)){\rm d}x',
\end{split}
\end{equation}
as the width of the neural network goes to infinity. Note that
\begin{equation}\label{derivatNTKMsDNN}
\begin{split}
\partial_{x'x'}\Theta_s^{\infty}(x, x',\bm\theta)=&\frac{x}{s+1}\sum\limits_{p=0}^{s}4^p\Big[e^{-2}\mathcal G_p'(x+x')-\mathcal G_p'(x-x')\Big]\\
&+\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}(4^pxx'+1)\Big[e^{-2}\mathcal G_p''(x+x')+\mathcal G_p''(x-x')\Big].
\end{split}
\end{equation}
Substituting \eqref{derivatNTKMsDNN} into \eqref{DElearningdynamicslimit}, taking the 1-dimensional Fourier transform \eqref{Fouriertransform} on both sides, and then applying the 1-dimensional version of \eqref{derivativefourier}-\eqref{ccorrconvtheorem} to rearrange the terms gives the partial differential equation
\begin{equation}
\begin{split}
\partial_t\hat\eta(\xi,\bm\theta(t))=&\partial_{\xi}\Big[\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\xi)\xi^2\partial_{\xi}\Big(e^{-2}\overline{\widehat{\partial_{xx}\eta}(\xi,\bm\theta)}-\widehat{\partial_{xx}\eta}(\xi,\bm\theta)\Big)\Big]\\
&+\partial_{\xi}\Big[\frac{1}{s+1}\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\xi)\xi\Big(e^{-2}\overline{\widehat{\partial_{xx}\eta}(\xi,\bm\theta)}-\widehat{\partial_{xx}\eta}(\xi,\bm\theta)\Big)\Big]\\
&+\frac{2\pi^2}{s+1}\sum\limits_{p=0}^{s}\widehat{\mathcal G}_p(\xi)\xi^2\Big(e^{-2}\overline{\widehat{\partial_{xx}\eta}(\xi,\bm\theta)}+\widehat{\partial_{xx}\eta}(\xi,\bm\theta)\Big)\\
=&\partial_{\xi}\Big[\frac{1}{2(s+1)}\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\xi)\partial_{\xi}\Big(e^{-2}\xi^2\overline{\widehat{\partial_{xx}\eta}(\xi,\bm\theta)}-\xi^2\widehat{\partial_{xx}\eta}(\xi,\bm\theta)\Big)\Big]\\
&+\frac{2\pi^2}{s+1}\sum\limits_{p=0}^{s}\widehat{\mathcal G}_p(\xi)\xi^2\Big(e^{-2}\overline{\widehat{\partial_{xx}\eta}(\xi,\bm\theta)}+\widehat{\partial_{xx}\eta}(\xi,\bm\theta)\Big),
\end{split}
\end{equation}
where
\begin{equation}
\widehat{\mathcal G}_p(\xi)=\sqrt{2\pi}2^{-p}e^{-\frac{2\pi^2\xi^2}{4^p}}.
\end{equation}
Define coefficients
\begin{equation}
A_s^{\pm}(\xi)=\frac{1\pm e^{-2}}{2(s+1)}\sum\limits_{p=0}^{s}4^p\widehat{\mathcal G}_p(\xi),\quad B_s^{\pm}(\xi)=\frac{2\pi^2(1\pm e^{-2})}{s+1}\sum\limits_{p=0}^{s}\widehat{\mathcal G}_p(\xi).
\end{equation}
The real and imaginary parts of $\hat\eta(\xi,\bm\theta(t))$ satisfy diffusion equations
\begin{equation}\label{DEdiffusion}
\begin{split}
\partial_t\hat\eta_{\rm re}(\xi,\bm\theta(t))=&-\partial_{\xi}\Big[A_s^-(\xi)\partial_{\xi}\Big(\xi^2\widehat{\partial_{xx}\eta}_{\rm re}(\xi,\bm\theta)\Big)\Big]+B_s^+(\xi)\xi^2\widehat{\partial_{xx}\eta}_{\rm re}(\xi,\bm\theta),\\
\partial_t\hat\eta_{\rm im}(\xi,\bm\theta(t))=&-\partial_{\xi}\Big[A_s^+(\xi)\partial_{\xi}\Big(\xi^2\widehat{\partial_{xx}\eta}_{\rm im}(\xi,\bm\theta)\Big)\Big]+B_s^-(\xi)\xi^2\widehat{\partial_{xx}\eta}_{\rm im}(\xi,\bm\theta).
\end{split}
\end{equation}
By introducing weighted error function $\omega(\xi,\bm\theta(t))=\xi^2\widehat{\partial_{xx}\eta}(\xi,\bm\theta(t))$, the diffusion equation \eqref{DEdiffusion} can be rewritten as
\begin{equation}
\begin{split}
\frac{1}{4\pi^2\xi^4}\partial_t\omega_{\rm re}(\xi,\bm\theta(t))=&\partial_{\xi}\Big[A_s^-(\xi)\partial_{\xi}\omega_{\rm re}(\xi,\bm\theta)\Big]-B_s^+(\xi)\omega_{\rm re}(\xi,\bm\theta),\\
\frac{1}{4\pi^2\xi^4}\partial_t\omega_{\rm im}(\xi,\bm\theta(t))=&\partial_{\xi}\Big[A_s^+(\xi)\partial_{\xi}\omega_{\rm im}(\xi,\bm\theta)\Big]-B_s^-(\xi)\omega_{\rm im}(\xi,\bm\theta).
\end{split}
\end{equation}
\section{Numerical analysis of the diffusion equations}
According to the analysis in the above subsection, the dynamics of the gradient descent learning tends to the diffusion equations \eqref{realimagdynamicsbaistrueMsDNN} in the Fourier spectral domain as the width of the network goes to infinity. In general, the diffusion equations for the real and imaginary parts can be written as
\begin{equation}\label{generaldiffusioneq}
\frac{\partial u(\bm\xi,t)}{\partial t}=\nabla_{\bm\xi}\cdot\Big[A_s^{\mp}(\xi)\nabla_{\bm\xi} u(\bm\xi,t)\Big]-B_s^{\pm}(\xi)u(\bm\xi,t),\quad \bm\xi\in\mathbb R^d,
\end{equation}
where the functions $A_s^{\pm}(\bm\xi), B^{\mp}_s(\bm\xi)$ defined in \eqref{DNNcoefficients} are linear combinations of scaled Gaussian functions. Apparently, both $A_s^{\pm}(\bm\xi), B^{\mp}_s(\bm\xi)$ are positive functions in $\mathbb R^d$. Therefore, the solution of \eqref{generaldiffusioneq} satisfies the energy identity
\begin{equation}\label{energyequality}
\frac{\rm d}{{\rm d} t}\int_{\mathbb R^d}|u(\bm\xi, t)|^2{\rm d}\bm\xi=-2\int_{\mathbb R^d}\Big[A^{\mp}_s(\bm\xi)\Big|\nabla_{\bm\xi} u(\bm\xi, t)\Big|^2+B^{\pm}_s(\bm\xi)|u(\bm\xi, t)|^2\Big]{\rm d}\bm\xi,
\end{equation}
which implies that the solution $u(\bm\xi, t)\rightarrow 0$ for any $\bm\xi\in\mathbb R^d$ as $t\rightarrow\infty$. That means the gradient descent learning for a fitting problem with a one layer neural network is convergent provided the learning rate is sufficiently small and the width of the neural network is sufficiently large. Apparently, the diffusion coefficients $\{A_s^{\pm}(\xi), B_s^{\mp}(\xi)\}$ play a key role in the convergence speed. Some examples of $\{A_s^{-}(\xi), B^{+}_s(\xi)\}$ are plotted in Fig. \ref{coefficientsplot}. We can see that both $A_s^{-}(\xi)$ and $B^{+}_s(\xi)$ have larger support and larger values as $s$ increases.
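For $d=1$ these coefficients can be evaluated directly from \eqref{DNNcoefficients} and \eqref{FourierGauss}. The short script below, used here only to illustrate how the support of the coefficients widens with $s$ (the frequency grid and the values of $s$ are arbitrary), tabulates $A_s^-(\xi)$ and $B_s^+(\xi)$.
\begin{verbatim}
import numpy as np

def G_hat(p, xi):
    """Fourier transform of the scaled Gaussian G_p for d = 1."""
    return np.sqrt(2 * np.pi) * 2.0**(-p) * np.exp(-2 * np.pi**2 * xi**2 / 4.0**p)

def A_minus(s, xi):
    c = (1 - np.exp(-2.0)) / (8 * np.pi**2 * (s + 1))
    return c * sum(4.0**p * G_hat(p, xi) for p in range(s + 1))

def B_plus(s, xi):
    c = (1 + np.exp(-2.0)) / (2 * (s + 1))
    return c * sum(G_hat(p, xi) for p in range(s + 1))

xi = np.array([0.0, 1.0, 5.0, 20.0, 40.0])
for s in (0, 3, 6):
    print("s =", s)
    print("  A_s^-:", np.round(A_minus(s, xi), 8))
    print("  B_s^+:", np.round(B_plus(s, xi), 8))
\end{verbatim}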
\begin{figure}[ht!]
\center
\includegraphics[scale=0.35]{Aminus}
\includegraphics[scale=0.35]{Bplus}
\caption{Diffusion coefficients $A^-_s(\xi)$ and $B^+_s(\xi)$ with $s=0, 3, 4$.}%
\label{coefficientsplot}%
\end{figure}
\subsection{Hermite spectral method for 1-dimensional case}
As analytic solutions of \eqref{generaldiffusioneq} are not available, we resort to numerical methods to analyze the solutions of \eqref{generaldiffusioneq} for different numbers of scales. We will employ a Hermite spectral method for the spatial discretization of equation \eqref{generaldiffusioneq}, due to the unbounded computational domain and the Gaussian functions involved. For this purpose, we introduce the Hermite functions (cf. \cite{ShenTaoWang2011}) defined by
\begin{equation}
\widehat H_n(x)=\frac{1}{\pi^{1/4}\sqrt{2^nn!}}e^{-x^2/2}H_n(x),\quad n\geq 0, \;\; x\in\mathbb R,
\end{equation}
where $H_n(x)$ are Hermite polynomials. The Hermite functions $\widehat H_n(x)$ are orthonormal functions in $L^2(\mathbb R)$, i.e.,
\begin{equation}\label{Hermiteorthorgonal}
\int_{-\infty}^{\infty}\widehat H_n(x)\widehat H_m(x)dx=\delta_{mn},
\end{equation}
where $\delta_{mn}$ is the Kronecker delta. We will also use the following recurrence formulas (cf. \cite{ShenTaoWang2011})
\begin{equation}\label{H_nDS}
\begin{split}
\widehat H_{n+1}(x)=x\sqrt{\frac{2}{n+1}}\widehat H_n(x)-\sqrt{\frac{n}{n+1}}\widehat H_{n-1}(x),\quad n\geq 1,\\
\widehat H_0(x)=\pi^{-1/4}e^{-x^2/2},\quad \widehat H_1(x)=\sqrt{2}\pi^{-1/4}xe^{-x^2/2},
\end{split}
\end{equation}
and
\begin{equation}\label{H_nDS1}
\widehat H'_n(x)=\sqrt{2n}\widehat H_{n-1}(x)-x\widehat H_n(x)=\sqrt{\frac{n}{2}}\widehat H_{n-1}(x)-\sqrt{\frac{n+1}{2}}\widehat H_{n+1}(x),
\end{equation}
in the derivation of the Hermite spectral discretization.
We discretize the computational time interval $[0, T]$ into equi-spaced intervals $I_k:=[k\Delta t, (k+1)\Delta t]$ for $k=0, 1, \cdots, N-1$, where $\Delta t=T/N$. Then, the Hermite spectral method together with the backward Euler time discretization is defined as: find the numerical approximation
\begin{equation}\label{Hermiteapprox}
u_p^{m}(\xi)=\sum_{k=0}^{p}u_{k}^m\widehat H_k(\lambda\xi),
\end{equation}
for $u(\xi, t)$ at time $t_m=m\Delta t$ such that
\begin{equation}\label{hermitespectralscheme}
\Big(\frac{u_p^m(\xi)-u_p^{m-1}(\xi)}{\Delta t},\widehat H_n(\lambda\xi)\Big)=-\Big(A_s^{\mp}(\xi)\frac{d u^m_p(\xi)}{d\xi}, \frac{d\widehat H_n(\lambda\xi)}{d\xi}\Big)-(B_s^{\pm}(\xi)u^m_p(\xi), \widehat H_n(\lambda\xi)),
\end{equation}
for $n=0, 1, \cdots, p$. Here, $\lambda$ is a scaling parameter to tune the interval of effective resolution, and the $L^2$ inner product is defined as
\begin{equation}\label{FC1}
(f(\xi), g(\xi))=\int_{-\infty}^{\infty}f(\xi)g(\xi)d\xi.
\end{equation}
Denoting the unknown vector by $\bs U_p^m=(u_0^m, u_1^m, \cdots, u_p^m)^{\rm T}$, the numerical scheme \eqref{hermitespectralscheme} leads to the linear system
\begin{equation}
\mathbb D\frac{\bs U_p^m-\bs U_p^{m-1}}{\Delta t}=(\mathbb K^{\mp}+\mathbb M^{\pm})\bs U_p^m,
\end{equation}
where $\mathbb D=(D_{nk})$, $\mathbb K^{\pm}=(K_{nk}^{\pm})$, $\mathbb M^{\pm}=(M_{nk}^{\pm})$ are matrices with entries given by
\begin{equation}
\begin{split}
D_{nk}&=(\widehat H_k(\lambda \xi), \widehat H_n(\lambda \xi))=\frac{1}{\lambda}\delta_{nk},\quad K_{nk}^{\pm}=-\lambda^2\Big(A_s^{\pm}(\xi)\widehat H'_k(\lambda\xi), \widehat H'_n(\lambda\xi) \Big),\\
M_{nk}^{\pm}&=-(B^{\pm}_s(\xi)\widehat H_k(\lambda\xi), \widehat H_n(\lambda\xi)).
\end{split}
\end{equation}
By the derivative formula \eqref{H_nDS1}, we have
\begin{equation}
\begin{aligned}
\widehat H'_k(x)\widehat H'_n(x)&=\left[\sqrt{\frac{k}{2}}\widehat H_{k-1}(x)-\sqrt{\frac{k+1}{2}}\widehat H_{k+1}(x)\right]\left[\sqrt{\frac{n}{2}}\widehat H_{n-1}(x)-\sqrt{\frac{n+1}{2}}\widehat H_{n+1}(x)\right]\\
&=\frac{\sqrt{nk}}{2}\widehat H_{k-1}(x)\widehat H_{n-1}(x)-\frac{\sqrt{(n+1)k}}{2}\widehat H_{k-1}(x)\widehat H_{n+1}(x)\\
&-\frac{\sqrt{n(k+1)}}{2}\widehat H_{k+1}(x)\widehat H_{n-1}(x)+\frac{\sqrt{(n+1)(k+1)}}{2}\widehat H_{k+1}(x)\widehat H_{n+1}(x),
\end{aligned}
\end{equation}
for all $n, k\geq 1$. Therefore,
\begin{equation}
\begin{split}
K_{nk}^{\pm}=&\frac{\sqrt{nk}}{2}C^{\pm}_{n-1,k-1}-\frac{\sqrt{(n+1)k}}{2}C^{\pm}_{n+1,k-1}\\
&-\frac{\sqrt{n(k+1)}}{2}C^{\pm}_{n-1,k+1}+\frac{\sqrt{(n+1)(k+1)}}{2}C^{\pm}_{n+1,k+1},
\end{split}
\end{equation}
for all $k\geq 1$, where
$$C^{\pm}_{nk}=-\lambda^2\int_{-\infty}^{\infty} A^{\pm}_s(\xi)\widehat H_k(\lambda\xi) \widehat H_n(\lambda\xi)d\xi.$$
Noting that
\begin{equation}
M^{\pm}_{nk}=-\int_{-\infty}^{\infty}B_s^{\pm}(\xi)\widehat H_k(\lambda\xi)\widehat H_n(\lambda\xi)d\xi,
\end{equation}
and $A_s^{\pm}(\xi)$, $B_s^{\pm}(\xi)$ are linear combinations of Gaussian functions as presented in \eqref{DNNcoefficients}, the computation of $C^{\pm}_{nk}$ and $M^{\pm}_{nk}$ can be reduced to computing the weighted inner products
\begin{equation}
I_{nk}(\tau)=\int_{-\infty}^{\infty}\widehat H_n(x)\widehat H_k(x)e^{-\tau x^2}dx=\frac{1}{\sqrt{\tau+1}}\int_{-\infty}^{\infty}\widetilde H_n\Big(\frac{y}{\sqrt{\tau+1}}\Big)\widetilde H_k\Big(\frac{y}{\sqrt{\tau+1}}\Big)e^{-y^2}dy.
\end{equation}
where $\widetilde H_n(x)$ is the normalized Hermite polynomial defined by $\widetilde H_n(x)=e^{x^2/2}\widehat H_n(x)$. In fact, for $A_s^{\pm}(\xi)$, $B_s^{\pm}(\xi)$ given in \eqref{DNNcoefficients}, we have
\begin{equation}
\begin{split}
C_{nk}^{\pm}=-\frac{(1\pm e^{-2})\lambda}{2(2\pi)^{\frac{3}{2}}(s+1)}\sum\limits_{p=0}^s2^pI_{nk}\Big(\frac{2\pi^2}{4^p\lambda^2}\Big),\quad
M_{nk}^{\pm}=-\sqrt{\frac{\pi}{2}}\frac{1\pm e^{-2}}{(s+1)\lambda}\sum\limits_{p=0}^s2^{-p}I_{nk}\Big(\frac{2\pi^2}{4^p\lambda^2}\Big).
\end{split}
\end{equation}
Next, we present formulas for the calculation of the integrals $I_{nk}(\tau)$. Given any scaling factor $a$, the scaled Hermite polynomial $\widetilde H_n(ay)$ has the expansion
\begin{equation}\label{H_n(ty)_HZK}
\widetilde H_n(ay)=\sum_{k=0}^{n}h_{n,k}(a)\widetilde H_k(y),
\end{equation}
where the coefficients $\{h_{n,k}(a)\}$ can be calculated via recurrence formulas \eqref{h_{k,m}_DTGS}. Therefore,
\begin{equation}
\begin{aligned}
I_{nk}(\tau)&=\frac{1}{\sqrt{\tau+1}}\int_{-\infty}^{\infty}\widetilde H_n\Big(\frac{y}{\sqrt{\tau+1}}\Big)\widetilde H_k\Big(\frac{y}{\sqrt{\tau+1}}\Big)e^{-y^2}dy\\
&=\frac{1}{\sqrt{\tau+1}}\sum_{i=0}^{n}\sum_{j=0}^{k}h_{n,i}\Big(\frac{1}{\sqrt{\tau+1}}\Big)h_{k,j}\Big(\frac{1}{\sqrt{\tau+1}}\Big)\int_{-\infty}^{\infty}\widetilde H_i(y)\widetilde H_j(y)e^{-y^2}dy\\
&=\frac{1}{\sqrt{\tau+1}}\sum_{i=0}^{\min\{n,k\}}h_{n,i}\Big(\frac{1}{\sqrt{\tau+1}}\Big)h_{k,i}\Big(\frac{1}{\sqrt{\tau+1}}\Big).
\end{aligned}
\end{equation}
In order to derive recurrence formulas for the coefficients $\{h_{n,k}(a)\}$, we recall the recurrence formula \eqref{H_nDS} to obtain
\begin{equation}\label{H_nDS3}
\sqrt{2(n+1)}\widetilde H_{n+1}(ay)=2ay\widetilde H_n(ay)-\sqrt{2n}\widetilde H_{n-1}(ay),\quad n\ge 1.
\end{equation}
Substituting the expansion \eqref{H_n(ty)_HZK} into \eqref{H_nDS3} gives
\begin{equation}\label{exprecurrence}
\sqrt{2(n+1)}\sum_{k=0}^{n+1}h_{n+1,k}(a)\widetilde H_k(y)=2ay\sum_{k=0}^{n}h_{n,k}(a)\widetilde H_k(y)-\sqrt{2n}\sum_{k=0}^{n-1}h_{n-1,k}(a)\widetilde H_k(y),
\end{equation}
for $n\geq 1$. Noting that
\begin{equation}
\widetilde H_{1}(y)=\sqrt{2}y\widetilde H_0(y),\quad 2y\widetilde H_k(y)=\sqrt{2(k+1)}\widetilde H_{k+1}(y)+\sqrt{2k}\widetilde H_{k-1}(y)\ ,\quad k\ge 1,
\end{equation}
direct calculation gives
\begin{equation*}
\begin{split}
2ay\sum_{k=0}^{n}h_{n,k}(a)\widetilde H_k(y)=&a\sum_{k=1}^{n}h_{n,k}(a)\left[\sqrt{2(k+1)}\widetilde H_{k+1}(y)+\sqrt{2k}\widetilde H_{k-1}(y)\right]+2ayh_{n,0}(a)\widetilde H_0(y)\\
=&a\sum_{k=0}^{n}\sqrt{2(k+1)}h_{n,k}(a)\widetilde H_{k+1}(y)+a\sum_{k=1}^{n}\sqrt{2k}h_{n,k}(a)\widetilde H_{k-1}(y)\\
=&a\sum_{k=1}^{n+1}\sqrt{2k}h_{n,k-1}\widetilde H_{k}(y)+a\sum_{k=0}^{n-1}\sqrt{2(k+1)}h_{n,k+1}\widetilde H_{k}(y).
\end{split}
\end{equation*}
Therefore, \eqref{exprecurrence} can be rearranged into
\begin{equation}
\begin{aligned}
&\sqrt{2(n+1)}\sum_{k=0}^{n+1}h_{n+1,k}\widetilde H_k(y)\\
=&a\sum_{k=1}^{n+1}\sqrt{2k}h_{n,k-1}\widetilde H_{k}(y)+a\sum_{k=0}^{n-1}\sqrt{2(k+1)}h_{n,k+1}\widetilde H_{k}(y)-\sqrt{2n}\sum_{k=0}^{n-1}h_{n-1,k}\widetilde H_k(y)\\
=&[\sqrt{2}ah_{n,1}-\sqrt{2n}h_{n-1,0}]\widetilde H_{0}(y)+a\sqrt{2n}h_{n,n-1}\widetilde H_n(y)+a\sqrt{2(n+1)}h_{n,n}\widetilde H_{n+1}(y)\\
&+\sum_{k=1}^{n-1}[a\sqrt{2k}h_{n,k-1}+a\sqrt{2(k+1)}h_{n,k+1}-\sqrt{2n}h_{n-1,k}]\widetilde H_k(y).\\
\end{aligned}
\end{equation}
Matching the coefficients in both sides of the above equation gives us
\begin{equation}\label{h_{k,m}_DTGS}
\begin{aligned}
h_{n+1,0}(a)&=\sqrt{\frac{1}{n+1}}ah_{n,1}(a)-\sqrt{\frac{n}{n+1}}h_{n-1,0}(a),\\
h_{n+1,k}(a)&=a\sqrt{\frac{k}{n+1}}h_{n,k-1}(a)+a\sqrt{\frac{k+1}{n+1}}h_{n,k+1}(a)-\sqrt{\frac{n}{n+1}}h_{n-1,k}(a),\quad k=1,2,\cdots,n-1,\\
h_{n+1,k}(a)&=a\sqrt{\frac{k}{n+1}}h_{n, k-1}(a),\quad k=n,n+1,
\end{aligned}
\end{equation}
for all $n\ge1$, while the initial values are given by
\begin{equation}\label{h_{k,m}_DTCZ}
h_{0,0}(a)=1,\quad h_{1,0}(a)=0,\quad h_{1,1}(a)=a.
\end{equation}
\begin{remark}
By induction, $h_{n,k}(a)$ has explicit formula
\begin{equation}
h_{n,k}(a)=\begin{cases}
0, & \ n-k=2s+1,\\
\displaystyle \sqrt{\frac{n!}{2^{n-k}k!}}\frac{1}{s!}a^k(a^2-1)^s, & \ n-k=2s,
\end{cases}
\end{equation}
for all $k=0,1,\cdots,n$.
\end{remark}
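The recurrences \eqref{h_{k,m}_DTGS}-\eqref{h_{k,m}_DTCZ}, the closed form of the remark, and the resulting finite sum for $I_{nk}(\tau)$ are straightforward to implement. The following sketch (the values of $a$, $\tau$ and the truncation order are arbitrary and serve only as a consistency check) builds the table $h_{n,k}(a)$ from the recurrences and compares it with the explicit formula.
\begin{verbatim}
import math
import numpy as np

def h_table(nmax, a):
    """h_{n,k}(a) from the recurrences (rows n = 0..nmax, nmax >= 1)."""
    h = np.zeros((nmax + 1, nmax + 1))
    h[0, 0] = 1.0
    h[1, 0], h[1, 1] = 0.0, a
    for n in range(1, nmax):
        h[n + 1, 0] = a * h[n, 1] / math.sqrt(n + 1) - math.sqrt(n / (n + 1)) * h[n - 1, 0]
        for k in range(1, n):
            h[n + 1, k] = (a * math.sqrt(k / (n + 1)) * h[n, k - 1]
                           + a * math.sqrt((k + 1) / (n + 1)) * h[n, k + 1]
                           - math.sqrt(n / (n + 1)) * h[n - 1, k])
        for k in (n, n + 1):
            h[n + 1, k] = a * math.sqrt(k / (n + 1)) * h[n, k - 1]
    return h

def h_explicit(n, k, a):
    """Closed form from the remark above."""
    if (n - k) % 2:
        return 0.0
    s = (n - k) // 2
    return (math.sqrt(math.factorial(n) / (2.0**(n - k) * math.factorial(k)))
            * a**k * (a**2 - 1)**s / math.factorial(s))

def I_nk(n, k, tau, h):
    """I_{nk}(tau) via the finite sum; h must be the table at a = 1/sqrt(tau+1)."""
    return sum(h[n, i] * h[k, i] for i in range(min(n, k) + 1)) / math.sqrt(tau + 1)

a, nmax = 0.7, 10
h = h_table(nmax, a)
err = max(abs(h[n, k] - h_explicit(n, k, a))
          for n in range(nmax + 1) for k in range(n + 1))
print("max |recurrence - closed form| =", err)    # about machine precision
tau = 1 / a**2 - 1                                # so that a = 1/sqrt(tau+1)
print("I_00(tau) =", I_nk(0, 0, tau, h), " expected 1/sqrt(tau+1) =", a)
\end{verbatim}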
\subsection{Numerical results}
{\bf Example 1.} We first give a numerical example for the diffusion problem \eqref{generaldiffusioneq} to show the decay of the solution with respect to $t$. The initial function is given by
\begin{equation}
u(\xi, 0)=\begin{cases}
1, & |\xi|\leq 3,\\
0, & |\xi|>3,
\end{cases}
\end{equation}
\begin{figure}[ht!]
\center
\subfigure[$A_0^-(\xi)$, $B_0^+(\xi)$]{\includegraphics[scale=0.24]{realdiffusions0}}
\subfigure[$A_0^+(\xi)$, $B_0^-(\xi)$]{\includegraphics[scale=0.24]{imagdiffusions0}}
\subfigure[$A_3^-(\xi)$, $B_3^+(\xi)$]{\includegraphics[scale=0.24]{realdiffusions3}}
\subfigure[$A_3^+(\xi)$, $B_3^-(\xi)$]{\includegraphics[scale=0.24]{imagdiffusions3}}
\subfigure[$A_6^-(\xi)$, $B_6^+(\xi)$]{\includegraphics[scale=0.24]{realdiffusions6}}
\subfigure[$A_6^+(\xi)$, $B_6^-(\xi)$]{\includegraphics[scale=0.24]{imagdiffusions6}}
\caption{Numerical solution of \eqref{generaldiffusioneq} with three groups of diffusion coefficients.}%
\label{Ex1_1}%
\end{figure}
and three groups of diffusion coefficients $\{A^{\mp}_s(\xi), B_s^{\pm}(\xi)\}, s=0, 3, 6$, are tested. For the numerical discretization of the PDE, we take $p=100$, $\Delta t=1.0e-3$. The numerical solutions at different times $t$ are plotted in Fig. \ref{Ex1_1}. The numerical results clearly show that the initial function decays faster and faster as $s$ increases. It is worth emphasizing that the diffusion coefficients $\{A^{\mp}_0(\xi), B_0^{\pm}(\xi)\}$ only produce fast decay in a small neighborhood of the origin. These observations are consistent with the performance of the multi-scale neural network, which has faster convergence especially in the approximation of highly oscillatory functions.
\noindent{\bf Example 2.} In this example, we will compare the gradient descent learning in the physical domain with the corresponding diffusion process in the Fourier spectral domain. We test a fitting problem with objective function
$$f(x)=\sin a\pi x+\cos b\pi x,$$
on the interval $[-1, 1]$. The Fourier transform of $f(x)$ with zero extension outside $[-1, 1]$ is
\begin{equation*}
\hat f(\xi)=\frac{\sin[(b+2\xi)\pi]}{(b+2\xi)\pi}+\frac{\sin[(b-2\xi)\pi]}{(b-2\xi)\pi}+{\rm i}\left[\frac{\sin[(a+2\xi)\pi]}{(a+2\xi)\pi}-\frac{\sin[(a-2\xi)\pi]}{(a-2\xi)\pi}\right].
\end{equation*}
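This closed form can be cross-checked against a direct quadrature of the truncated Fourier integral; the snippet below does so at a few frequencies chosen away from the removable singularities at $2\xi=\pm a,\pm b$ (the values $a=4.2$, $b=5.8$ match the experiment below, but the check itself is only illustrative).
\begin{verbatim}
import numpy as np

a, b = 4.2, 5.8

def f_hat_closed(xi):
    """Closed-form transform above (xi away from the removable singularities)."""
    s = lambda t: np.sin(t * np.pi) / (t * np.pi)
    return s(b + 2 * xi) + s(b - 2 * xi) + 1j * (s(a + 2 * xi) - s(a - 2 * xi))

def f_hat_quad(xi, n=200001):
    """Direct quadrature of the Fourier integral of f restricted to [-1, 1]."""
    x = np.linspace(-1.0, 1.0, n)
    fx = np.sin(a * np.pi * x) + np.cos(b * np.pi * x)
    return np.trapz(fx * np.exp(-2j * np.pi * xi * x), x)

for xi in (0.3, 1.7, 4.0):
    print(xi, f_hat_closed(xi), f_hat_quad(xi))
\end{verbatim}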
For the one layer multi-scale neural network, the Fourier transform of $\mathcal N_s(x,\theta)$ can be calculated as
\begin{equation*}
\begin{split}
\widehat{\mathcal N}_s(\xi,\theta)=&\frac{1}{\sqrt{m}}\sum\limits_{p=0}^{s}\sum\limits_{k=1}^{q}\frac{-2\pi{\rm i}\xi(e^{2\pi{\rm i}\xi}\sin(2^p\theta_{pq+k}-b_{pq+k})+e^{-2\pi{\rm i}\xi}\sin(2^p\theta_{pq+k}+b_{pq+k}))}{4^p\theta_{pq+k}^2-4\pi^2\xi^2}\\
+&\frac{1}{\sqrt{m}}\sum\limits_{p=0}^{s}\sum\limits_{k=1}^{q}\frac{2^p\theta_{pq+k}(e^{2\pi{\rm i}\xi}\cos(2^p\theta_{pq+k}-b_{pq+k})-e^{-2\pi{\rm i}\xi}\cos(2^p\theta_{pq+k}+b_{pq+k})) }{4^p\theta_{pq+k}^2-4\pi^2\xi^2}.
\end{split}
\end{equation*}
\begin{figure}[ht!]
\center
\subfigure{\includegraphics[scale=0.23]{errcomps3t0}}
\subfigure{\includegraphics[scale=0.23]{errcomps3t20}}
\subfigure{\includegraphics[scale=0.23]{errcomps3t50}}
\subfigure{\includegraphics[scale=0.23]{errcomps3t99}}
\caption{Error evolution comparison with $m=8000$.}%
\label{Ex2_5}%
\end{figure}
\begin{figure}[ht!]
\center
\subfigure{\includegraphics[scale=0.23]{errcomps2t0}}
\subfigure{\includegraphics[scale=0.23]{errcomps2t20}}
\subfigure{\includegraphics[scale=0.23]{errcomps2t50}}
\subfigure{\includegraphics[scale=0.23]{errcomps2t100}}
\caption{Error evolution comparison with $m=80000$.}%
\label{Ex2_6}%
\end{figure}
We first validate that the error evolution under the gradient descent learning tends to the diffusion process \eqref{generaldiffusioneq} of the error $\hat\eta_{NN}(\xi, \theta)=\widehat{\mathcal N}_s(\xi,\theta)-\hat f(\xi)$ in the Fourier spectral domain as the width of the neural network goes to infinity. We take $a=4.2$, $b=5.8$, and the initial errors are given by $\eta_{NN}(x, \theta_0)={\mathcal N}_s(x,\theta_0)-f(x)$ with parameters initialized by sampling from independent standard normal random variables. In the gradient descent learning, the training data set consists of $1000$ uniformly distributed points in $[-1, 1]$ and the learning rate $\tau=1.0e-3$ is adopted. In the Fourier spectral domain, the diffusion equation \eqref{generaldiffusioneq} with initial function $\hat\eta(\xi, \theta_0)=\widehat{\mathcal N}_s(\xi,\theta_0)-\hat f(\xi)$ is solved by using the Hermite spectral method introduced above. We take $p=100$ and $\Delta t=\tau$ in the discretization. Noting that \cite{gradshteyn2014table,li2022efficient}
\begin{equation}
\mathcal F^{-1}[\widehat H_k(\xi)](x)=\int_{-\infty}^{+\infty}\widehat H_k(\xi)e^{2{\rm i}\pi\xi x}{\rm d}\xi=\sqrt{2\pi}{\rm i}^k\widehat H_k(2\pi x),
\end{equation}
the $p$-th order Hermite spectral approximation
\begin{equation}
\hat\eta_p(\xi,\bm \theta(t_m))=\sum_{k=0}^{p}\hat\eta_{k}^m\widehat H_k(\lambda\xi),
\end{equation}
can be analytically transformed back to physical domain as
\begin{equation}
\eta_p(x,\bm \theta(t_m))=\sum_{k=0}^{p}\hat\eta_{k}^m\int_{-\infty}^{+\infty}\widehat H_k(\lambda\xi)e^{2{\rm i}\pi x\xi}{\rm d}\xi=\frac{\sqrt{2\pi}}{\lambda}\sum_{k=0}^{p}\hat\eta_{k}^m{\rm i}^k\widehat H_k\Big(\frac{2\pi x}{\lambda}\Big).
\end{equation}
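In practice this back-transform only requires evaluating Hermite functions, for which NumPy's Hermite-polynomial utilities can be used. The sketch below is illustrative only: the coefficient vector and the value of $\lambda$ are placeholders rather than data from the experiments.
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_function(k, x):
    """Hermite function: H_k(x) exp(-x^2/2) / (pi^(1/4) sqrt(2^k k!))."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / (pi**0.25 * sqrt(2.0**k * factorial(k)))

def eta_physical(x, coeffs, lam):
    """Back-transform of sum_k coeffs[k] * Hhat_k(lam * xi) to the physical domain."""
    return sqrt(2 * pi) / lam * sum(
        c * 1j**k * hermite_function(k, 2 * pi * x / lam)
        for k, c in enumerate(coeffs))

coeffs = np.array([0.5, -0.2 + 0.1j, 0.05])   # placeholder Hermite coefficients
print(eta_physical(np.linspace(-1, 1, 5), coeffs, lam=10.0))
\end{verbatim}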
Then, the errors $\eta_{NN}(x,\theta(t_m))$ and $\eta_p(x,\bm \theta(t_m))$ evolved by the gradient descent method and by the diffusion equation are compared in Figs. \ref{Ex2_5}-\ref{Ex2_6}. We provide numerical results for different widths of the neural network, $m=8000, 80000$, while the number of scales is fixed to $s=3$. Although the initial errors are different for neural networks with different numbers of neurons, the evolution of the error matches quite well.
\begin{figure}[ht!]
\center
\subfigure[real part]{\includegraphics[scale=0.23]{FCDNNerrrediffusion}}
\subfigure[imaginary part]{\includegraphics[scale=0.23]{FCDNNerrimdiffusion}}
\caption{Numerical solution of \eqref{generaldiffusioneq} with coefficients $\{A_0^{\pm}(\xi), B_0^{\mp}\}$.}%
\label{Ex2_1}%
\end{figure}
\begin{figure}[ht!]
\center
\subfigure[real part]{\includegraphics[scale=0.23]{MSDNNerrrediffusion}}
\subfigure[imaginary part]{\includegraphics[scale=0.23]{MSDNNerrimdiffusion}}
\caption{Numerical solution of \eqref{generaldiffusioneq} with coefficients $\{A_3^{\pm}(\xi), B_3^{\mp}\}$.}%
\label{Ex2_2}%
\end{figure}
Now, we are able to use the numerical solution of the diffusion equation \eqref{generaldiffusioneq} to show the advantages of using a multi-scale neural network.
We set $m=8000$, $a=4.2$, $b=5.8$, and the initial errors are given by $\hat\eta(\xi, \theta_0)=\widehat{\mathcal N}_s(\xi,\theta_0)-\hat f(\xi)$ with parameters initialized by sampling from independent standard normal random variables. In the discretization of the diffusion equation \eqref{generaldiffusioneq}, we take $p=100$ and $\Delta t=1.0e-3$. The numerical solutions of the diffusion equations with coefficients $\{A_0^{\pm}(\xi), B_0^{\mp}(\xi)\}$ and $\{A_3^{\pm}(\xi), B_3^{\mp}(\xi)\}$ at different times $t$ are plotted in Figs. \ref{Ex2_1}-\ref{Ex2_2}. We can see that the diffusion coefficients $\{A_0^{\pm}(\xi), B_0^{\mp}(\xi)\}$ only produce decay in a very small neighborhood of the origin, while the coefficients $\{A_3^{\pm}(\xi), B_3^{\mp}(\xi)\}$ produce much faster decay in a large interval. Although the initial errors are different, the numerical results still verify that multi-scale neural networks have better performance than fully connected ones.
\section{Conclusion and future work}
In this paper, we investigated the convergence of the machine learning algorithm using multi-scale neural networks by deriving diffusion models for the error of the MscaleDNN in either approximating oscillatory functions or solving boundary value problems of ODEs. When the sine function is selected as the activation function, we show that the gradient descent learning leads to a diffusion process of the initial error in the Fourier frequency domain when the width of the neural network goes to infinity. At the same time, we show that the MscaleDNN with different scales leads to the same diffusion equation but with coefficients of wider support in the frequency domain. To study the quantitative behavior of the diffusion models, a Hermite spectral method is employed to produce highly accurate numerical solutions of the diffusion equation. Numerical results show that the diffusion coefficients corresponding to the multi-scale neural network generate a faster decay speed and a wider decay range of the initial function as the number of scales increases. This is consistent with the performance of the multi-scale neural network, which has faster convergence especially in the approximation of highly oscillatory functions. Moreover, we also numerically show that the derived diffusion equation can be used to predict the convergence of the machine learning algorithm even when the width of the neural network is not very large.
This work is just the beginning of the theoretical analysis of the multi-scale neural network, with many challenging problems remaining in this direction. The analysis of the multi-scale deep neural network with other popular activation functions, e.g., ReLU, Sigmoid, etc., is the most important future work.
\section*{Acknowledgments}
B. Wang acknowledges the financial support provided by NSFC (grant 12022104) and the Construct Program of the Key Discipline in Hunan Province. W. Z. Zhang acknowledges the financial support provided by NSFC (grant 12201603).
\bibliographystyle{plain}
\label{sec:intro}
In accreting neutron stars, the accreted hydrogen and helium burns a few hours after arriving on the star via the rp-process \citep{Wallace1981,Bildsten1998}. The resulting ashes consist of a complex mixture of heavy elements beyond the iron group \citep{Schatz1998}. These heavy element ashes initially form the liquid ocean of the star but upon further compression freeze to form the outer crust. The composition of the outer crust is an important quantity to understand because it sets the thermal and electrical conductivity, which determines thermal and magnetic evolution (e.g.~\citealt{Brown1998}), and determines the distribution of nuclear heat sources (in particular, in the outer crust, electron captures onto nuclei; \citealt{Haensel2003,Gupta2007}).
\cite{Horowitz2007} showed that chemical separation would occur on freezing of the rp-process ashes. They used molecular dynamics simulations to compute the evolution of a representative 17-component mixture from \cite{Gupta2007}, finding that the lighter nuclei preferentially remain in the liquid phase. \cite{Medin2010} developed a semi-analytic method that was able to closely reproduce these results, building on previous work on one-, two- and three-component plasmas, in particular the work of \cite{Ogata1993} for C--O--Ne mixtures. A comparison of the semi-analytic method with molecular dynamics was also carried out for mixtures of C, O and Ne with application to white dwarf interiors \citep{Hughto2012}.
Chemical separation in neutron star oceans has several observational implications. It can change the concentration and distribution of carbon in the ocean \citep{Horowitz2007}, which is believed to be the fuel for the energetic thermonuclear flashes known as superbursts \citep{Strohmayer2002}. The release of light elements at the base of the ocean will drive convection \citep{Medin2011,Medin2015}, and can change the early lightcurve of cooling neutron stars in transiently-accreting systems that go into quiescence \citep{Medin2014}. Chemical separation can simplify the mixture of elements that are present in the solid outer crust. The degree of scattering of electrons by impurities in the lattice is determined by the impurity parameter $Q_{\rm imp}=\sum_j x_j (Z_j-\langle Z\rangle)^2$, where $Z_j$ is the nuclear charge of species $j$, $x_j$ is the number fraction of species $j$ and the sum is over all species in the mixture. The mean charge is $\langle Z\rangle = \sum_j x_j Z_j$. The impurity parameter can be as large as $\sim 100$ in the ashes of rp-process H/He burning \citep{Schatz1999}. Whether it is significantly reduced by chemical separation at the ocean floor is an important question to resolve.
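For a given mixture, the mean charge $\langle Z\rangle$ and the impurity parameter $Q_{\rm imp}$ are simple moments of the number fractions; the snippet below computes both for a purely hypothetical five-species mixture (the charges and fractions are illustrative and not taken from any of the burning models discussed here).
\begin{verbatim}
import numpy as np

# hypothetical mixture: charges Z_j and number fractions x_j (illustrative only)
Z = np.array([8.0, 12.0, 26.0, 30.0, 34.0])
x = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
x = x / x.sum()                       # number fractions normalized to 1

Z_mean = np.sum(x * Z)                # <Z>
Q_imp = np.sum(x * (Z - Z_mean)**2)   # impurity parameter
print(Z_mean, Q_imp)
\end{verbatim}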
\cite{Horowitz2007} calculated chemical separation for one particular rp-process ash mixture. However, the rp-process can produce a variety of different compositions depending on burning conditions \citep{Schatz2003}. An open question is to what extent chemical separation occurs for these different mixtures of nuclei. The semi-analytic method of \cite{Medin2010} enables the phase diagram of a mixture to be calculated much more rapidly than a molecular dynamics simulation. We take advantage of this here to calculate chemical separation for a variety of different rp-process ashes. Rather than considering arbitrary mixtures, we use a range of heavy element mixtures resulting from calculations of hydrogen and helium burning involving the rp-process. In \S 2, we calculate the liquid and solid compositions in equilibrium for different initial mixtures. In \S 3, we discuss the implications of our results for observations.
\section{Calculation of the liquid--solid equilibrium for a variety of mixtures}
\subsection{Input mixtures}
We adopt realistic mixtures that result from calculations of rp-process burning under different conditions. An important factor is whether the hydrogen and helium burning is thermally stable or unstable. \cite{Schatz1999} calculated the ashes of stable burning, and showed that the composition for accretion rates $\gtrsim \dot m_{\rm Edd}$ has a range of mean nuclear charge from $\langle Z\rangle\approx 26$--$50$. We focus on the nuclear charge $Z$ rather than mass $A$ because that is the relevant quantity for freezing of a Coulomb liquid. The composition becomes heavier at larger accretion rates, up to a limit at $\approx 50 \dot m_{\rm Edd}$ (where $\dot m_{\rm Edd}$ is the local Eddington accretion rate) beyond which the mixture reaches nuclear statistical equilibrium, driving the mixture back to iron group (this effect can be seen in Figs.~7 and 10 of \citealt{Schatz1999}). \cite{Schatz2003} extended these calculations to lower accretion rates below $\dot m_{\rm Edd}$ and showed that the composition would be much lighter, with $\langle Z\rangle\approx 10$ at $\dot m\approx 0.1\ \dot m_{\rm Edd}$. Unstable burning gives a heavier composition than stable burning because it occurs at a significantly larger temperature, resulting in heavy ashes beyond the iron group with $A\approx 60$--$100$ \citep{Schatz2003,Woosley2004}.
\cite{Stevens2014} recently calculated the rp-process ashes for a large number of steady-state burning models with different accretion rates, base fluxes, and helium fraction in the accreted material. Here we study four models from that paper which have helium-rich accreting material with helium mass fraction $Y$=0.55 and accretion rates $\dot m/\dot m_{\rm Edd}=0.1,0.5,1.0,2.0$ and then a series of models with $Y$=0.2752 and $\dot m/\dot m_{\rm Edd}$ from 0.1 to 40. We also look at the composition used by \cite{Horowitz2007}, taken from \cite{Gupta2008}, which was the result of unstable H/He burning, and an additional composition resulting from unstable burning. These compositions span a nuclear charge range of $\langle Z\rangle=8$--$35$.
The rp-process ashes are produced well above the ocean floor, at densities $\sim 10^5$--$10^6\ {\rm g\ cm^{-3}}$. We allow for beta decays and electron captures as the ashes move to higher densities by finding the $Z$ for each mass chain $A$ that is in beta equilibrium at an electron Fermi energy $E_F=4\ {\rm MeV}$, which is a typical Fermi energy at the freezing depth (for a one-component plasma, the Fermi energy at the freezing depth is $E_F=1.7\ {\rm MeV}\ T_8 (Z/26)^{-5/3}$, where $T_8=T/10^8\ {\rm K}$). We have checked that our results are not sensitive to the exact choice of $E_F$.
Some of the compositions contain substantial amounts of the light elements helium and carbon, which are likely to burn before reaching the base of the ocean where chemical separation occurs. At lower accretion rates the validity of this assumption is much more uncertain, and carbon could potentially survive to the base of the ocean, possibly fueling superbursts \citep{Strohmayer2002}. Recent multi-zone modelling of superbursts finds that additional crustal heating ($Q_b$) is necessary for the transition from unstable to stable burning at low accretion rates \citep{Keek2011}. This, in conjunction with rp-process ashes with large carbon mass fractions, makes the assumption of complete and stable burning of carbon even more uncertain at low accretion rates. The interaction of chemical separation with rp-process ashes at these low accretion rates will be discussed further in \S 3. Unless otherwise stated, we assume that light element burning converts the He and C into Mg (Z=12), and so set the He and C number fractions to zero, and add the sum of the original He and C number fractions to the Mg number fraction\footnote{This is an approximation for complete burning of C and He to Mg. In reality, complete burning conserves baryon number rather than ion number, so that in terms of number fractions the Mg abundance should be incremented as $x_{Mg}^{new}=x_{Mg}^{old}+x_{C}/2+x_{He}/6$. The difference between these two prescriptions has been studied and is insignificant for the general features of chemical separation considered in this paper.}.
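The prescription for folding the light elements into Mg can be written compactly. The sketch below is illustrative only (it assumes the composition arrays contain a single Mg entry and strictly positive fractions) and implements both the simple prescription used in the text and the mass-conserving variant discussed in the footnote:
\begin{verbatim}
import numpy as np

def burn_light_elements(x, Z, mass_conserving=False):
    # Fold the He (Z=2) and C (Z=6) number fractions into Mg (Z=12).
    # mass_conserving=False: x_Mg += x_He + x_C   (prescription in the text)
    # mass_conserving=True:  x_Mg += x_C/2 + x_He/6   (footnote variant)
    x = np.asarray(x, dtype=float).copy()
    Z = np.asarray(Z)
    he, c, mg = (Z == 2), (Z == 6), (Z == 12)
    if mass_conserving:
        x[mg] += x[c].sum() / 2.0 + x[he].sum() / 6.0
    else:
        x[mg] += x[c].sum() + x[he].sum()
    x[he] = 0.0
    x[c] = 0.0
    return x / x.sum()   # renormalize the number fractions
\end{verbatim}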
\begin{deluxetable*}{cccccccccccccc}
\tablecaption{Properties of the liquid--solid equilibrium compositions\label{tab:results}}
\tablehead{\colhead{$\dot{m} / \dot{m}_{\rm Edd}$} & \colhead{$\langle Z\rangle_{i}$} & \colhead{$\langle Z\rangle_{l}$} & \colhead{$\langle Z\rangle_{s}$} & \colhead{$Q_i$} & \colhead{$Q_l$} & \colhead{$Q_s$} & \colhead{$Y_{e,i}$} & \colhead{$Y_{e,l}$} & \colhead{$Y_{e,s}$} & \colhead{$Y_{e,l} - Y_{e,i}$} & \colhead{$\Gamma_{i}$} & \colhead{$\Gamma_{l}$} & \colhead{$\Gamma_{s}$}}
\startdata
& & & & & & Y=0.2752 & & & & & &\\
\hline
0.1 & 11.4 & 11.6 & 11.2 & 8.3 & 13.5 & 3.1 & 0.4600 & 0.4592 & 0.4609 & -0.0008 & 286.5 & 299.5 & 273.5\\
0.2 & 16.4 & 13.2 & 19.7 & 41.2 & 23.1 & 38.4 & 0.4483 & 0.4581 & 0.4419 & 0.0098 & 509.3 & 348.4 & 670.2 \\
0.3 & 18.8 & 12.8 & 24.8 & 48.1 & 19.1 & 4.6 & 0.4460 & 0.4655 & 0.4367 & 0.0194 & 556.7 & 288.8 & 824.5\\
0.4 & 21.6 & 16.9 & 26.3 & 51.2 & 50.1 & 8.2 & 0.4434 & 0.4582 & 0.4343& 0.0148 & 351.5 & 240.6 & 462.2\\
0.5 & 22.5 & 18.3 & 26.8 & 49.8 & 55.3 & 8.7 & 0.4405 & 0.4535 & 0.4319 & 0.0130 & 318.4 & 232.8 & 404.0\\
0.6 & 23.1 & 19.1 & 27.1 & 49.7 & 58.3 & 8.8 & 0.4388 & 0.4507 & 0.4307 & 0.0119 & 305.3 & 229.4 & 381.2\\
0.7 & 23.4 & 19.4 & 27.5 & 50.6 & 60.4 & 7.9 & 0.4380 & 0.4496 & 0.4302 & 0.0116 & 303.1 & 228.4 & 377.8\\
0.8 & 23.7 & 19.6 & 27.7 & 51.8 & 62.3 & 8.0 & 0.4376 & 0.4490 & 0.4300 & 0.0114 & 301.4 & 227.4 & 375.3\\
0.9 & 23.8 & 19.8 & 27.9 & 52.7 & 64.0 & 8.2 & 0.4372 & 0.4484 & 0.4297 & 0.0111 & 298.5 & 226.2 & 370.9\\
1.0 & 24.0 & 20.0 & 28.0 & 52.8 & 65.1 & 8.5 & 0.4368 & 0.4475 & 0.4295 & 0.0107 & 294.6 & 225.1 & 364.0\\
1.26 & 24.4 & 20.5 & 28.3 & 53.1 & 66.9 & 9.3 & 0.4363 & 0.4462 & 0.4294 & 0.0099 & 289.0 & 224.3 & 353.6\\
1.58 & 24.7 & 21.0 & 28.5 & 52.6 & 67.6 & 9.4 & 0.4363 & 0.4455 & 0.4297 & 0.0092 & 285.3 & 224.3 & 346.2\\
2.0 & 25.1 & 21.6 & 28.6 & 51.1 & 67.2 & 10.8 & 0.4363 & 0.4443 & 0.4304 & 0.0081 & 278.5 & 224.2 & 332.7\\
2.51 & 25.5 & 22.3 & 28.7 & 48.4 & 65.1 & 11.3 & 0.4362 & 0.4433 & 0.4309 & 0.0070 & 272.3 & 224.2 & 320.3\\
3.16 & 26.0 & 23.2 & 28.7 & 44.1 & 61.2 & 12.0 & 0.4357 & 0.4414 & 0.4312 & 0.0057 & 260.0 & 221.3 & 298.6\\
4.0 & 26.7 & 24.5 & 28.8 & 38.1 & 54.0 & 13.0 & 0.4351 & 0.4394 & 0.4316 & 0.0042 & 245.6 & 217.6 & 273.6 \\
5.0 & 27.5 & 25.9 & 29.2 & 34.1 & 48.6 & 14.1 & 0.4334 & 0.4357 & 0.4315 & 0.0022 & 234.1 & 214.2 & 253.9 \\
6.31 & 28.7 & 27.4 & 30.0 & 32.4 & 45.6 & 16.0 & 0.4311 & 0.4320 & 0.4304 & 0.0008 & 253.8 & 247.0 & 260.5\\
8.0 & 30.1 & 29.1 & 31.0 & 32.1 & 42.3 & 20.1 & 0.4300 & 0.4306 & 0.4294 & 0.0006 & 232.2 & 222.0 & 242.4\\
10.0 & 31.3 & 30.5 & 32.1 & 35.5 & 42.0 & 27.7 & 0.4292 & 0.4300 & 0.4285 & 0.0007 & 241.2 & 232.1 & 250.3\\
12.59 & 32.5 & 31.6 & 33.4 & 43.5 & 47.3 & 37.9 & 0.4278 & 0.4288 & 0.4268 & 0.0010 & 252.1 & 240.9 & 263.4\\
15.85 & 33.8 & 32.8 & 34.7 & 52.1 & 53.3 & 49.0 & 0.4263 & 0.4275 & 0.4253 & 0.0011 & 254.9 & 243.4 & 266.4\\
20.0 & 34.9 & 33.9 & 36.0 & 45.9 & 47.5 & 42.0 & 0.4242 & 0.4252 & 0.4233 & 0.0009 & 254.0 & 242.1 & 265.8\\
25.0 & 36.6 & 35.2 & 37.9 & 42.8 & 44.4 & 37.4 & 0.4214 & 0.4223 & 0.4206 & 0.0009 & 251.2 & 236.1 & 266.3\\
30.0 & 25.4 & 22.3 & 28.6 & 48.4 & 65.3 & 11.5 & 0.4366 & 0.4434 & 0.4314 & 0.0068 & 270.1 & 223.0 & 317.1\\
40.0 & 26.0 & 23.4 & 28.6 & 43.3 & 60.7 & 12.3 & 0.4360 & 0.4413 & 0.4318 & 0.0053 & 255.2 & 219.2 & 291.2\\
\hline
& & & & & & Y=0.55 & & & & & & \\
\hline
0.1 & 11.5 & 11.3 & 11.6 & 1.4 & 1.8 & 1.0 & 0.4620 & 0.4623 & 0.4618 & 0.0003 & 199.1 & 195.2 & 203.0\\
0.5 & 13.0 & 13.4 & 12.5 & 18.8 & 25.5 & 11.7 & 0.4569 & 0.4543 & 0.4597 & -0.0025 & 402.7 & 434.3 & 371.0\\
1.0 & 13.8 & 12.2 & 15.4 & 28.0 & 15.9 & 34.9 & 0.4628 & 0.4739 & 0.4543 & 0.0111 & 555.1 & 442.7 & 667.6\\
2.0 & 14.4 & 12.3 & 16.6 & 31.3 & 11.8 & 41.5 & 0.4661 & 0.4833 & 0.4541 & 0.0172 & 590.9 & 434.1 & 747.7\\
\hline
& & & & & & X-ray Bursts & & & & & & \\
\hline
Horowitz \footnote{The nuclear abundances in terms of atomic mass number (A) were not given for this mixture and so corresponding $Y_e$ values could not be calculated.} & 29.3 & 27.7 & 30.9 & 38.9 & 51.4 & 21.0 & - & - & - & - & 236.0 & 216.9 & 255.1\\
XRB & 34.5 & 28.6 & 40.4 & 127.5 & 146.5 & 39.1 & 0.4251 & 0.4324 & 0.4201 & 0.0073 & 325.2 & 246.7 & 403.6\\
\enddata
\end{deluxetable*}
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{fig2.pdf}
\end{center}
\caption{The mean charge of the nuclei $\langle Z\rangle$ averaged by number for liquid and solid compositions as a function of $\langle Z\rangle$ of the initial composition. The dashed line shows $\langle Z\rangle_{l,s} = \langle Z\rangle_i$. Black, red, blue, and green points correspond to the $Y$=0.2752 models, the $Y$=0.55 models, the \cite{Horowitz2007} mixture, and the additional X-ray burst composition, respectively.}
\label{fig:Z}
\end{figure}
\subsection{Calculation of chemical separation}
For each composition, we then use the semi-analytic approach of \cite{Medin2010} to find the composition of the liquid and solid that are in equilibrium with each other. Full details can be found in that paper, but for clarity we give a brief summary of the method here. We start with the linear mixing rule for the free energy of a mixture
\begin{equation}
f^{LM} = \sum_j x_j \left[ f^{OCP}(\Gamma_j)+\ln\left({x_jZ_j\over \langle Z\rangle}\right)\right],
\label{eq:Flm}
\end{equation}
where $f^{OCP}$ is the free energy of a one component plasma, and the logarithmic term is the entropy of mixing. The Coulomb parameter $\Gamma_j$ is given by $\Gamma_j = Z_j^{5/3}\Gamma_e$ with $\Gamma_e=e^2/a_ek_BT$, where $a_e$ is the mean electron spacing $a_e = [3/(4\pi n_e)]^{1/3}$ at the local electron density $n_e$. While the linear mixing rule is adequate to describe the free energy of the liquid phase, the solid phase requires a correction term be included, and so we write the liquid and solid free energies as
\begin{equation}
f_l = f^{LM}_l \hspace{1cm} f_s = f^{LM}_s + \Delta f_s.
\end{equation}
Extending the work of \cite{Ogata1993} on the three-component plasma to an arbitrary number of $m$ components, \cite{Medin2010} write $\Delta f_s$ as a sum over pairwise interactions,
\begin{equation}
\label{eq:delta_fs}
\Delta f_s = \sum_{j=1}^{m-1}\sum_{k=j+1}^m \Gamma_j x_jx_k \Delta g\left({x_k\over x_j+x_k},{Z_k\over Z_j}\right),
\end{equation}
where the function $\Delta g$ is taken from \cite{Ogata1993}.
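For reference, the structure of this free energy model can be written down in a few lines. The sketch below is schematic: it assumes the one-component-plasma free energy fit $f^{OCP}$ and the pairwise deviation function $\Delta g$ of \cite{Ogata1993} are supplied externally as callables, and that all number fractions are strictly positive:
\begin{verbatim}
import numpy as np

def free_energy(x, Z, Gamma_e, f_OCP, delta_g=None):
    # Linear-mixing free energy per ion (in units of kT), plus the
    # optional deviation Delta f_s used for the solid phase.
    x = np.asarray(x, dtype=float)
    Z = np.asarray(Z, dtype=float)
    Z_mean = np.sum(x * Z)
    Gamma = Z ** (5.0 / 3.0) * Gamma_e
    f = np.sum(x * (f_OCP(Gamma) + np.log(x * Z / Z_mean)))
    if delta_g is not None:              # solid-phase correction term
        m = len(x)
        for j in range(m - 1):
            for k in range(j + 1, m):
                f += Gamma[j] * x[j] * x[k] * \
                     delta_g(x[k] / (x[j] + x[k]), Z[k] / Z[j])
    return f
\end{verbatim}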
Once the free energy is obtained, we then look for an $(m-1)$-dimensional plane in the $m$-dimensional space of compositions that is tangent to the free energy surfaces for the liquid and solid. This $m$-dimensional version of the usual tangent construction then allows one to decompose any composition that lies between the liquid and solid tangent points into coexisting phases of that liquid and solid.
We calculate the liquid and solid compositions that are in equilibrium for a mixture of 50\% liquid and 50\% solid. We calculate the chemical separation using the 17 most abundant species from each rp-process mixture. The choice of 17 species is for convenience, since it matches the number of species in the composition used by \cite{Horowitz2007} and \cite{Medin2010}. However, we have checked that using a smaller or larger number of species in the calculation of chemical separation does not significantly change the results.
\subsection{Results}
The results for all mixtures studied in this paper ($Y=\{$0.55, 0.2752$\}$ and two X-ray burst compositions) are summarized in Table \ref{tab:results}, which gives the mean charge $\langle Z\rangle$ and impurity parameter in the initial mixture, liquid, and solid for each case. The mean charge is plotted in Figure \ref{fig:Z}. In agreement with the results of \cite{Horowitz2007} for a single mixture, we find the general result that $\langle Z\rangle$ increases for the solid and decreases for the liquid relative to the initial composition\footnote{The notable exceptions to this rule, described further on in the text, are the compositions corresponding to $\dot m$=0.1 $\dot m_{\rm Edd}$, Y=0.2752 and $\dot m$=0.5 $\dot m_{\rm Edd}$, Y=0.55, which produce a `heavier' equilibrium liquid than their solid counterparts.}. This implies that the equilibrium liquid and solid phases will be preferentially enriched in lighter and heavier nuclei, respectively, relative to the initial mixture.
For the same initial mixture used in the molecular dynamics simulations of \cite{Horowitz2007}, we find good agreement with their results (as did \citealt{Medin2010}). \cite{Horowitz2007} found the average nuclear charge, $\langle Z\rangle$, of the initial and the liquid and solid equilibrium phases to be $\langle Z\rangle_i$=29.30, $\langle Z\rangle_l$=28.04 and $\langle Z\rangle_s$=30.48 with equilibrium impurity parameters of $Q_i$=38.9, $Q_l$=52.7, $Q_s$=22.3 and $\Gamma$ values of $\Gamma_{i}$=247, $\Gamma_{l}$=233, $\Gamma_{s}$=261. Our semi-analytic method yields $\langle Z\rangle$ values of $\langle Z\rangle_i$=29.3, $\langle Z\rangle_l$=27.47 and $\langle Z\rangle_s$=30.9 with impurity parameters of $Q_i$=38.9, $Q_l$=51.4, $Q_s$=21.0 and $\Gamma$ values of $\Gamma_{i}$=236, $\Gamma_{l}$=217, $\Gamma_{s}$=255 (listed as ``Horowitz'' in Table \ref{tab:results}).
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{XsXl_5comp.pdf}
\end{center}
\caption{The initial number fraction $x_i$ (right panels) and relative number fractions in solid and liquid $x_s/x_l$ (left panels) for five different cases. The models shown are for $Y$=0.2752 and $\dot m=0.1,0.5,5,15.85,$ and $30\ \dot m_{\rm Edd}$. Vertical dashed lines are $\langle Z\rangle_{i}$ of the mixture and horizontal dashed lines mark the $x_s/x_l$=1 ``cross-over point''.}
\label{fig:abuns}
\end{figure}
The initial number fractions $x_i$ and relative number fractions of solid to liquid $x_s/x_l$ for five different models are shown in Figure \ref{fig:abuns} for helium abundance $Y$=0.2752. Lighter nuclei, relative to the average $Z$ of the composition, are preferentially retained in the liquid ($x_s/x_l<1$), whereas heavier nuclei are found in the solid ($x_s/x_l>1$). This extends the result of \cite{Horowitz2007} to a wider range of compositions. The average atomic number ($\langle Z\rangle_{i}$) of the initial mixture (vertical dashed lines) provides a quick but imperfect means of determining the ``cross-over point'' below which nuclei are preferentially retained in the liquid and above which they tend to crystallize in the solid. This method tends to underestimate the point for lighter compositions ($\langle Z\rangle_{i} <$ 25) and overestimate it for heavier compositions ($\langle Z\rangle_{i} >$ 25).
The top panels of Figure \ref{fig:abuns}, corresponding to an accretion rate $\dot{m}$=0.1 $\dot{m}_{\rm Edd}$, show that for light compositions the general rule of light nuclei being retained in the liquid and heavy nuclei crystallizing in the solid no longer applies. This observation for the light compositions is further verified by equivalent plots for $Y$=0.55 data shown in Figure \ref{fig:abunY0.55}. In this light composition regime ($\langle Z\rangle_{i} < 15$) the correlation of phase with nuclear charge is no longer as strong as for heavier mixtures. Instead, the lightest compositions corresponding to the top panel of Figure \ref{fig:abuns} and the top two panels of Figure \ref{fig:abunY0.55} show that the solid phase is preferentially populated by elements with $Z \sim \langle Z\rangle_{i}$, which includes the most abundant element of the composition (Z=12). These nuclei are not necessarily all heavy relative to $\langle Z\rangle_{i}$, allowing for the possibility of phase separation producing heavier equilibrium liquid than the corresponding solid as is the case for the $\dot m$=0.5 $\dot m_{\rm Edd}$, $Y$=0.55 composition. As the compositions become heavier (i.e. the bottom two panels of Figure \ref{fig:abunY0.55}) the relative number fraction plots ($x_s/x_l$) begin their convergence to the general trend seen in the heavy composition regime.
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{XsXl_4comp_1.pdf}
\end{center}
\caption{The initial number fractions (right panels) and relative number fractions in solid and liquid (left panels) for four different cases. The models shown are for $Y$=0.55 and $\dot m=0.1,0.5,1$ and $2\ \dot m_{\rm Edd}$. Vertical dashed lines are $\langle Z\rangle_{i}$ of the mixture and horizontal dashed lines mark the $x_s/x_l$=1 ``cross-over point''.}
\label{fig:abunY0.55}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{XsXl_carbon_compare.pdf}
\end{center}
\caption{The relative number fractions in the equilibrium solid and liquid $x_s/x_l$ for the composition $\dot m=0.5 \dot m_{\rm Edd}$ Y=0.55 (bottom panel) and an X-ray burst ash mixture (top panel). Red data corresponds to compositions assuming no burning of light elements C and He while Black data corresponds to this paper's assumption of complete burning of C and He to Mg.
\label{fig:carbon}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{Q_imp2.pdf}
\end{center}
\caption{Impurity parameter $Q_{\rm imp}$ for the initial mixture, solid, and liquid phases. Chemical separation acts to purify the solid. Black, Red, Blue and Green results correspond to $Y$=$\{$0.2752, 0.55$\}$, \cite{Horowitz2007} and X-ray burst data respectively. The transparent blue band corresponds to the constraints on the outer crust $Q_{\rm imp}$ from \cite{Page2013} from fits to the cooling curve of the transient accreting neutron star XTE~J1701-462. The dotted horizontal line is the upper limit $Q_{\rm imp}<10$ derived by \cite{Brown2009} from fits to the cooling curves of KS~1731-260 and MXB~1659-29. (See \S~\ref{discussion}.)\label{fig:Qimp}}
\end{figure}
If we drop the assumption that He and C burn to Mg before freezing begins, we see a substantial change in the results. The presence of carbon in these light compositions appears to exert a strong influence on the phase separation trend of the $x_s/x_l$ plots. In the light composition regime the rp-process ashes produce substantial amounts of carbon ($x_i^C$=0.42-0.78) which when included with any residual He gives less regular behaviour in the trend of $x_s/x_l$ and tends to magnify $x_s/x_l$ for certain elements. As seen in the bottom panel of Figure \ref{fig:carbon}, for these light mixtures carbon's presence significantly lightens the composition which in turn drastically increases the relative number fractions for specific elements. This effect tends to decrease as the compositions become heavier and the contributions of C and He are no longer as significant. An exception to this was found for the relatively heavy X-ray burst composition (Green data of Figure \ref{fig:Z} and \ref{fig:Qimp}) with its broad range of nuclear species spanning Z=2-44. The top panel of Figure \ref{fig:carbon} shows that including C and He in the composition with initial number fractions ($x_i^C,x_i^{He}$)=(0.044,0.096) causes irregularities to develop in the $x_s/x_l$ trend even though the corresponding $\langle Z\rangle_{i}$ of the composition does not change substantially and is well outside the domain where we expect these irregularities to develop (i.e. outside the light mixture regime). Although these results are interesting it is unlikely that these light compositions with high carbon content reflect the actual composition undergoing phase separation at the base of a neutron star's ocean. These models do indicate, however, that even slight variations in the initial composition can elicit substantial changes to the relative number fractions of certain elements.
Figure \ref{fig:Qimp} shows the impurity parameter for the initial, solid, and liquid compositions for all mixtures (Red: $Y$=0.55 and Black: $Y$=0.2752). Also plotted are the impurity parameters for the 17-species composition of \cite{Horowitz2007} (Blue) and the additional X-ray burst composition (Green). Across all data sets the impurity parameter of the solid phase is always lower than its initial or liquid counterpart. Comparing Figure \ref{fig:Qimp} with Figure \ref{fig:Z}, we see that the compositions with the largest difference in impurity parameter between liquid and solid, $Q_l-Q_s$, correspond to the compositions with the largest difference in $\langle Z\rangle$ between liquid and solid. In general, a larger enrichment of light elements in the liquid leads to a purer solid phase.
We find that the degree of chemical separation depends on the fractional spread in $Z$ in the mixture, which we measure with the parameter $\overline{\sigma}=Q_{\rm imp}^{1/2}/\langle Z\rangle_{i}$. The bottom panel of Figure \ref{fig:std} shows how this parameter depends on $\langle Z\rangle_{i}$ for our mixtures. In general, heavier mixtures tend to have a smaller fractional spread in $Z$ and vice versa, with the exception of the lightest mixtures. The top three panels of Figure \ref{fig:std} show that the Coulomb coupling parameter of the initial mixture at equilibrium ($\Gamma_i$), the contrast in $Y_e$ between liquid and initial mixture ($Y_{e,l}-Y_{e,i}$), and the heavy element enrichment in the solid ($\langle Z\rangle_{s} - \langle Z\rangle_{i}$) all increase with $\overline{\sigma}$. This means that mixtures with a larger $\overline{\sigma}$ have a lower melting temperature and a greater degree of chemical separation. The lightest mixtures deviate from these trends because they are in the light mixture regime where the solid forms from the most abundant element. In that case, there is no longer a clean separation in $Z$ between elements that go into the liquid and elements that go into the solid, and the degree of chemical separation is much smaller. For example, the models with $\langle Z\rangle_i\leq 13$ in Table \ref{tab:results} have much smaller values of $Y_{e,l}-Y_{e,i}$. These lightest mixtures do, however, still lie on the same trend of $\Gamma_i$ against $\overline{\sigma}$, so that $\overline{\sigma}$ is a good predictor of $\Gamma_i$ no matter what the value of $\langle Z\rangle_i$.
\begin{deluxetable}{cccccc}
\tablecaption{Initial number fractions ($x_i$) for the two most abundant species \label{tab:tab1}}
\tablehead{\colhead{$\dot m/\dot m_{\rm Edd}$} & \colhead{$Y$} & \colhead{$Z_1$} & \colhead{$x_1$} & \colhead{$Z_2$} & \colhead{$x_2$} }
\startdata
2 & 0.55 & 12 & 0.49 & 8 & 0.12\\
0.5 & 0.2752 & 28 & 0.41 & 12 & 0.27\\
10 & 0.2752 & 28 & 0.31 & 30 & 0.30
\enddata
\end{deluxetable}
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{Zvsstd_all.pdf}
\end{center}
\caption{The scaling of $\langle Z\rangle_i$, $\langle Z\rangle_s - \langle Z\rangle_i$, $Y_{e,l} - Y_{e,i}$, $\Gamma_i$ with the fractional spread in $Z$ of the initial composition, $\overline{\sigma}=Q_{\rm imp}^{1/2}/\langle Z\rangle_i.$}
\label{fig:std}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.05\columnwidth]{XsXl_compare_log.pdf}
\end{center}
\caption{A comparison of results from the 17-species calculation to the two- and three-component approximations for the three steady-state models of Table \ref{tab:tab1}: $\dot{m}=2\ \dot m_{\rm Edd}$, $Y$=0.55 (lower panel), $\dot{m}=0.5\ \dot m_{\rm Edd}$, $Y$=0.2752 (centre panel), and $\dot{m}=10\ \dot m_{\rm Edd}$, $Y$=0.2752 (upper panel).}
\label{fig:TCP}
\end{figure}
\subsection{The two-component approximation}
\cite{Medin2010} compared their results for the 17-species mixture with a two-component model in which the chemical separation for any given element was calculated by approximating the mixture as consisting of that element plus the most abundant element (which was Se for the 17-species mixture). The pattern of enrichment of heavy elements in the solid and light elements in the liquid was well-reproduced by this simpler model, although with differences in the detailed values of $x_s/x_l$.
Here, we investigate how well the two-component model reproduces our results for a wider range of compositions, and extend it to include three components. Figure \ref{fig:TCP} compares the results from the 17-species calculation to the two- and three-component approximation for three steady-state models summarised in Table \ref{tab:tab1}. For the two-component model, we follow \cite{Medin2010} and calculate the equilibrium solid-to-liquid number fractions for each element assuming the plasma is composed of only two ion species, the element itself and the most abundant element in the mixture. The initial composition of the mixture is chosen such that the ratio of the abundances of the two elements is the same as the 17-component system. In the three-component approximation, we calculate $x_s/x_l$ of each element assuming the plasma is composed of the two most abundant elements in addition to the element in question. The number fractions are renormalized for each element so that its number fraction relative to the two most abundant elements agrees with the 17-component mixture.
For most of the compositions the two-component approximation tends to reproduce the general number fraction trend of the 17-component plasma, with discrepancies further reduced in the three-component model, as illustrated by the two examples in the upper panels of Figure \ref{fig:TCP}. However, we noticed that compositions with a large $\overline{\sigma}$ show significant differences. In particular, the two- and three-component models show the opposite trend of $x_s/x_l$ with $Z$ compared to the 17-component calculation. This can be seen in the lower panel of Figure \ref{fig:TCP}. The difference arises because compositions with large $\overline{\sigma}$ mark the transition between the light and heavy element regimes. In this case, whereas the full composition is close to being in the heavy element regime, the two- and three-component approximations remain firmly in the light-element regime and therefore show the opposite trend.
\section{Discussion}
\label{discussion}
In this paper, we have applied the semi-analytic method of \cite{Medin2010} to a range of rp-process ash mixtures to survey the extent of chemical separation in accreting neutron stars. We find that for heavy mixtures with $\langle Z\rangle \gtrsim 20$, there is a clean separation between elements with $Z\lesssim \langle Z\rangle$ which are preferentially retained in the liquid, and those with $Z\gtrsim \langle Z\rangle$ which go into the solid (Fig.~\ref{fig:abuns}). The effect is to reduce $Q_{\rm imp}$ from $30$--$60$ in the initial liquid to as low as $10$ in the solid (Fig.~\ref{fig:Qimp}). Lighter mixtures generally show a similar behavior in that the liquid is lighter than the solid ($\langle Z\rangle_l < \langle Z\rangle_s$), but are different in that the preference of any given element for liquid or solid is harder to predict. For the lightest of these mixtures, the solid phase is dominated by the most abundant element in the mixture, which can be either light or heavy relative to $\langle Z\rangle_i$. The fractional spread in $Z$ as measured by $\overline{\sigma}=Q_{\rm imp}^{1/2}/\langle Z\rangle_i$ plays a role in determining the amount of chemical separation (Fig.~\ref{fig:std}). The Coulomb coupling parameter $\Gamma_i$, heavy element enrichment as measured by $\langle Z\rangle_{s} - \langle Z\rangle_{i}$, and the contrast in $Y_e$ between liquid and initial mixture all increase with $\overline{\sigma}$. These general behaviours may help in developing simple models of chemical separation to be used in time-dependent calculations of the evolution of accreting neutron stars where it is not feasible or possible to calculate the phase diagram for the complex mixtures of elements that are present.
\subsection{Possibility of multiple solid phases}
Our results have implications for the depth of the neutron star ocean, the composition of the outer parts of the neutron star crust, and for driving convection in the neutron star ocean. One caveat is that we have calculated the liquid--solid equilibrium for a 50/50 mixture of liquid and solid, and so have not followed the complete freezing of the mixture. The depth of the ocean floor likely does not correspond to the value of $\Gamma$ for a 50/50 mixture, nor is the composition at the top of the crust the same as the 50/50 equilibrium solid composition. In fact, in steady state, the average composition entering the crust must be the same as the rp-process ashes entering the ocean, and therefore the $Q_{\rm imp}$ in the outer crust would be the same on average as the rp-process ashes.
It is unlikely, however, that a single-phase solid with the same composition as the rp-process ashes can form. For a general multicomponent mixture, multiple solid phases are likely to form instead, with the liquid adopting the composition at the eutectic point. \cite{Horowitz2007} found this for the particular mixture they studied, and a simpler example is shown in Fig.~1 of \citealt{Medin2011} for a two-component mixture of O and Se. The outer crust is therefore likely to consist of multiple solid phases. Diffusion could result in the different solid phases merging in the outer crust. \cite{Hughto2011} find a diffusion coefficient of $D\sim 10^{-6}\omega_p a^2\sim 10^{-8}\ \rho_9^{-1/6}\ {\rm cm^2\ s^{-1}}$ at $\Gamma\sim 180$, just after the solid forms. Since the ocean may take $\sim 100$ years to accrete, the diffusion length is $\sim 3\ {\rm cm}$ close to the top of the crust. This may be enough to merge different solid phases, although the diffusion coefficient decreases sharply with density \citep{Hughto2011}.
We are not able to confidently calculate beyond the 50/50 equilibrium mixture and the possible multiphase solid composition at present, without further comparisons with molecular dynamics simulations. As discussed by \cite{Medin2010}, as the solid fraction increases the resulting compositions depend more sensitively on the particular form of $\Delta f_s$ chosen (eq.~[\ref{eq:delta_fs}]) because $\Delta f_s$ dominates the free energy at large $\Gamma$. It is also not clear if a steady state is reached in the outer crust, as the rp-process ashes are sensitive to variations in accretion rate onto the star. Given these uncertainties, we will assume in exploring the implications of our results that the results we have obtained for 50/50 mixtures give a good estimate of the typical mixtures that make up the outer crust. Further work is needed to constrain the nucleation rate of the equilibrium solid at the ocean-crust boundary. The actual equilibrium mass fraction could therefore differ substantially from the 50/50 mass fraction assumed here.
The equilibrium liquid-solid compositions were also determined for the lightest mixtures (i.e., low accretion rates $\dot{m} \leq 0.5 \dot{m}_{\rm Edd}$) assuming no burning of carbon. We find large variability in the equilibrium $\Gamma_i$ values for these mixtures ($\Gamma_i$=265--626), which roughly equates to an order of magnitude density difference in the approximate location of the ocean-crust boundary. We also find that for the three lightest compositions, the corresponding $\Gamma_i$ values increase with $\langle Z\rangle_i$. This result runs contrary to the behaviour of one-component plasmas, which crystallize more readily with increasing nuclear charge. This discrepancy is likely due to the increase in the contribution of the entropy of mixing term in equation (\ref{eq:Flm}) as compositions become more heterogeneous with increasing accretion rate. Research on the variability of the ocean-crust boundary with composition could offer opportunities to further constrain properties of cooling neutron stars post-outburst or accretion regimes of superbursting sources.
\subsection{The impurity parameter in the outer crust}
The impurity parameter $Q_{\rm imp}$ sets the electron-impurity scattering rate in the crust. Whether this determines the electrical and thermal conductivities depends on how it compares to the electron-phonon scattering rate. At a density $\rho$, impurity scattering will dominate if $Q_{\rm imp}>Q_{\rm crit}$, where
\begin{equation}
\label{eq:Qcrit}
Q_{\rm crit}=32\ {T_8^2\over \rho_{11}^{5/6}}\left({Z\over 20}\right)\left({Y_e\over 0.4}\right)^{-4/3}
\end{equation}
\citep{Cumming2004}, and $T_8=T/10^8\ {\rm K}$. As density increases, impurity scattering is more likely to dominate, as the increasing Debye temperature reduces the phonon-scattering contribution.
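As a quick numerical illustration (the parameter values here are chosen arbitrarily and are not tied to a particular source), the expression above can be evaluated as follows:
\begin{verbatim}
def Q_crit(T8, rho11, Z, Ye):
    # critical impurity parameter above which electron-impurity
    # scattering dominates over electron-phonon scattering
    return 32.0 * T8**2 / rho11**(5.0/6.0) * (Z/20.0) * (Ye/0.4)**(-4.0/3.0)

# e.g. rho = 10^{11} g/cm^3, T = 10^8 K, Z = 30, Ye = 0.43:
print(Q_crit(T8=1.0, rho11=1.0, Z=30, Ye=0.43))   # about 43.6
\end{verbatim}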
Constraints on the crust thermal conductivity, and therefore $Q_{\rm imp}$, have come from modelling the observed cooling of neutron star transients in quiescence \citep{Shternin2007,Brown2009,Page2012,Turlione2015}. \cite{Brown2009} assumed a constant $Q_{\rm imp}$ throughout the entire crust and showed that fits to the transient sources MXB~1659-29 and KS~1731-260 required $Q_{\rm imp}\leq 10$, even when uncertainties in the neutron star surface gravity, and therefore crust thickness, were taken into account. This upper limit is indicated by the horizontal dashed line in Figure \ref{fig:Qimp}.
\cite{Brown2009} noted that their constraint on $Q_{\rm imp}$ applies to the inner crust, since electron-phonon scattering sets the conductivity in the outer crust. The value of $Q_{\rm imp}$ in the inner crust likely does not reflect the $Q_{\rm imp}$ of the freshly made solid at the top of the crust. \cite{Steiner2012} modelled the evolution of multicomponent mixtures in the crust as they are compressed and undergo electron captures and neutron emissions and captures near neutron drip. Those models showed a reduction in $Q_{\rm imp}$ near neutron drip, where neutrons are able to redistribute among nuclei (see also \citealt{Gupta2008}). The simplification of the mixture near neutron drip means that a relatively large $Q_{\rm imp}$ at the top of the crust could be consistent with the constraint $Q_{\rm imp}<10$ from \cite{Brown2009}. Our calculations of the mixture entering the outer crust can be used as input to calculations of the nuclear evolution through neutron drip.
\cite{Page2013} were able to derive constraints on the outer crust $Q_{\rm imp}$. They fit the neutron star transient XTE~J1701-462 using different values of $Q_{\rm imp}$ for the outer and inner crust and found best fitting models with $Q_{\rm imp}\approx 15-30$ at densities $\rho<10^{12}\ {\rm g\ cm^{-3}}$, transitioning to $Q_{\rm imp}\approx 3-4$ for $\rho>10^{13}\ {\rm g\ cm^{-3}}$. Their constraint for the outer crust is represented by a blue shaded band in Figure \ref{fig:Qimp}. Our results in this paper show that chemical separation on freezing could explain the inferred values of $Q_{\rm imp}$ for the outer crust. Figure \ref{fig:Qimp} shows that the initial values of $Q_{\rm imp}$ are larger than 30 for compositions with $\langle Z\rangle\gtrsim 16$, but that most of the corresponding solid compositions have $Q_{\rm imp}<30$. Further work on the constraints on $Q_{\rm imp}$ in the outer crust from other sources would be interesting.
\subsection{Compositionally-driven convection}
We can use our results to estimate the strength of compositionally driven convection in the neutron star ocean \citep{Medin2011}. The rate of buoyancy production at the ocean--crust boundary depends mostly on the difference in $Y_e$ between the rp-process ash mixture and the equilibrium liquid composition. Electrons dominate the pressure in the ocean (by a factor of $\sim E_F/k_BT\sim 100$), so that the compositional buoyancy is dominated by the gradient of $Y_e$. In that case, we can simplify the expression for the Brunt-V\"ais\"al\"a frequency \citep{BC98,Medin2015} as
\begin{equation}
{N^2H\over g} \approx {\chi_T\over\chi_\rho}\left(\nabla_{ad}-\left.{d\ln T\over d\ln P}\right|_{\star}\right)-{\chi_{Y_e}\over \chi_{\rho}}\left.{d\ln Y_e\over d\ln P}\right|_{\star},
\end{equation}
where we have dropped the ion terms. In this limit, the expression for the convective heat flux in the ocean from \cite{Medin2011} (see their equation~43) becomes
\begin{equation}
\label{eq:Fconv}
F_{\rm conv} = {c_PT\dot m\over \chi_T} \chi_{Y_e} \left({Y_{e,l}-Y_{e,i}\over Y_{e,l}}\right).
\end{equation}
This equation gives the expected heat flux for steady accretion; in a time-dependent situation such as cooling after an accretion outburst, the $\dot m$ factor should be replaced by $\partial y_m/\partial t$, the rate of change of the column depth of the ocean floor due to temperature changes \citep{Medin2015}. Since the pressure is dominated by relativistic degenerate electrons, $\chi_{Y_e}\approx 4/3$, so that the composition enters equation (\ref{eq:Fconv}) through the factor $(Y_{e,l}-Y_{e,i})/Y_{e,l}$.
The values of $Y_e$ for the initial, liquid, and solid mixtures are given in Table \ref{tab:results}. The largest values of $Y_{e,l}-Y_{e,i}$ are $\approx 0.02$. This is in the range considered by \cite{Medin2011}, who looked at two-component mixtures only for simplicity. In that paper, steady state was defined as the point where the composition crystallizing at the bottom of the ocean matched the composition accreting from the top. This agreement suggests that their results represent a realistic range of outcomes for compositionally-driven convection despite their two-component approximation. It is likely that the values $Y_{e,l}-Y_{e,i}$ tabulated in this paper provide lower bounds on the actual convective heat flux of that mixture. This is because, unlike \cite{Medin2011}, the results calculated here are for a system where 50\% of the mixture crystallizes into solid. Generally, this equilibrium solid state is substantially different (heavier) from the initial mixture accreting at the top of the ocean. Therefore we expect further phase separation and enrichment of the ocean in light elements to arrive closer to the steady state of \cite{Medin2011}. Their O--Se mixture, which was chosen to have a similar $\langle Z \rangle$ to the mixture studied by \cite{Horowitz2007}, had 2\% oxygen by mass at the top of the ocean, so that $Y_{e,i}\approx Y_{e,\rm{Se}}=34/79=0.430$. The steady-state composition at the base of the ocean was 37\% oxygen, giving $Y_e-Y_{e,i}=0.026$. Their Fe--Se mixture had $X_{Fe}=0.23$ at the top of the ocean and $X_{Fe}=0.37$ at the base, giving $Y_e-Y_{e,i}=0.005$. Interestingly, some compositions that we study in this paper have $Y_{e,l}-Y_{e,i}<0$, so that chemical separation would not lead to convection in those cases. We also note that the lightest mixtures with $\langle Z\rangle\leq 13$ have smaller values of $Y_{e,l}-Y_{e,i}$ by about a factor of ten compared to the heavier mixtures. Therefore, light element oceans should have much less convective driving.
\subsection{Uncertainties in semi-analytic model}
Our results rely on an extension of the analytic fits made by \cite{Ogata1993} to Monte Carlo simulations of three-component plasmas to multicomponent plasmas. Even for the three-component case, \cite{Hughto2012} found systematically lower melting temperatures (higher $\Gamma$) in their MD simulations compared to the semi-analytic model. They noted that this discrepancy seemed to grow with impurity parameter $Q_{\rm imp}$, perhaps suggesting a problem with the form of the deviation from linear mixing $\Delta f_s$ [Equation~(\ref{eq:delta_fs})] assumed in the semi-analytic model.
Figure \ref{fig:std} shows that $\Gamma_i$ increases with $Q_{\rm imp}$ at fixed $\langle Z\rangle$, which may explain the trend with $Q_{\rm imp}$ seen by \cite{Hughto2012}, since $\Delta f_s$ becomes more and more important at larger values of $\Gamma$. It is important to carry out further comparisons with molecular dynamics simulations \citep{Horowitz2007,Hughto2012} to check and refine these assumptions about the functional form of the free energy
as well as to investigate the formation of multiple solid phases. We hope that our results will give a useful starting point for such comparisons.
\acknowledgements
A.C. is supported by an NSERC Discovery grant, is a member of the Centre de Recherche en Astrophysique du Qu\'ebec (CRAQ), and is an Associate of the CIFAR Cosmology and Gravity program. Z.M. acknowledges the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory and support under Contract Nos. DE-AC52-06NA25396 and DE-FG02-87ER40317. H.S. acknowledges support from the National Science Foundation under Grant Nos. PHY-1430152 (JINA Center for the Evolution of the Elements) and PHY-1102511.
|
1,116,691,499,862 | arxiv | \section{Introduction}
\vspace{-0.03cm}
\label{sec:intro}
Single-channel speech enhancement algorithms typically operate in the \ac{STFT} domain~\cite{timowienerfiltering2018, noisereducitonsurvey, timo2012}. The Gaussian statistical model in the \ac{STFT} domain has been shown to be effective~\cite{timowienerfiltering2018, ephraim1984speech}. Given the assumption that the complex-valued speech and noise coefficients are uncorrelated and Gaussian-distributed with zero mean, various estimators have been derived, such as the Wiener filter and the \ac{STSA} estimator~\cite{timowienerfiltering2018,ephraim1984speech, wolfe2003efficient}. The Wiener filter, which is optimal in the \ac{MMSE} sense, requires estimation of speech and noise variances. This can be achieved by various signal processing estimators with varying degrees of success for different signal characteristics~\cite{timo2011waspaa, noisereducitonsurvey, timowienerfiltering2018, carbajal2021guided, huajian2021noiseaware, richter2020speech, simonvae, bandovae}.
Recently, \acp{DNN} have been successfully applied to speech enhancement and regularly show an improved performance over classical methods~\cite{bandovae, simonvae, wang2018supervised, rehr2021snr}. Among the DNN-based approaches relevant to this work are deep generative models (e.g., variational autoencoder) and supervised masking approaches. Generative models estimate the clean speech distribution and subsequently combine it with a separate noise model to construct a point estimate of a noise-removing mask (Wiener filter)~\cite{bandovae, simonvae}. In contrast, typical supervised learning approaches are trained on pairs of noisy and clean speech samples and directly estimate a time-frequency mask that aims at reducing noise interference with minimal speech distortion given a noisy mixture, using a suitable loss function (e.g., \ac{MSE})~\cite{wang2018supervised, rehr2021snr}. However, the supervised approaches often learn the mapping between noisy and clean speech blindly and output a single point estimate without guarantee or measure of its accuracy. In this work we focus on adding an uncertainty measure to a supervised method by estimating the speech posterior distribution, instead of only its mean. Note that while this is conceptually related to the generative approach, in this case we do not estimate the clean speech prior distribution, but rather the posterior distribution of clean speech given a noisy mixture.
\begin{figure*}[!ht]
\centering
\input{block_diagram.tikz}
\caption{Block diagram of the neural network-based uncertainty estimation. The neural network is trained according to the proposed hybrid loss function. }
\label{fig:uncertainty_diagram}
\end{figure*}
Uncertainty modeling based on neural networks has been actively studied in e.g., computer vision~\cite{uncertaintyincvalex2017}. Inspired by this, here we propose a hybrid loss function to capture uncertainty associated with the estimated Wiener filter in the neural network-based speech enhancement algorithm, as depicted in Fig.~\ref{fig:uncertainty_diagram}. More specifically, we propose to train a neural network to predict the Wiener filter and its variance, which quantifies the uncertainty, based on the \ac{MAP} inference of complex spectral coefficients, such that full Gaussian posterior distribution can be estimated. To regularize the variance estimation, we build an \ac{A-MAP} estimator of spectral magnitudes using the estimated Wiener filter and uncertainty, which is in turn used together with the \ac{MAP} inference of spectral coefficients to form a hybrid loss function. Experimental results show the effectiveness of the proposed approach in capturing uncertainty. Furthermore, the \ac{A-MAP} estimator based on the estimated Wiener filter and its associated uncertainty results in improved speech enhancement performance.
\vspace{-0.075cm}
\section{Signal model}
\label{sec:model}
\vspace{-0.03cm}
We consider the speech enhancement problem in the single microphone case with additive noise. The noisy signal $x$ can be transformed into the time-frequency domain using the \ac{STFT}:
\begin{equation}
X_{ft} = S_{ft} + N_{ft} \, ,
\label{eqn:timemodel}
\end{equation}
where $X_{ft}$, $S_{ft}$, and $N_{ft}$ are complex noisy speech coefficients, complex clean speech coefficients, and complex noise coefficients, respectively. The frequency and frame indices are given by $f \in \{1,2,\cdots, F\}$ and $t \in \{1,2,\cdots, T\}$, where $F$ denotes the number of frequency bins, and $T$ represents the number of time frames. Furthermore, we assume a Gaussian statistical model, where the speech and noise coefficients are uncorrelated and follow a circularly symmetric complex Gaussian distribution with zero mean, i.e.,
\begin{equation}
\label{eq:prior}
S_{ft} \sim \mathcal{N}_\mathbb{C}(0,\,\sigma^{2}_{s,ft}), \hspace{0.3cm}
N_{ft} \sim \mathcal{N}_\mathbb{C}(0,\,\sigma^{2}_{n,ft}) \, ,
\end{equation}
where $\sigma^{2}_{s,ft}$ and $\sigma^{2}_{n,ft}$ represent the variances of speech and noise, respectively. The likelihood $p(X_{ft}|S_{ft})$ follows a complex Gaussian distribution with mean $S_{ft}$ and variance $\sigma_{n, ft}^2$, given by
\begin{equation}
\label{eq:likelihood}
p(X_{ft}|S_{ft}) = \frac{1}{\pi \sigma^2_{n,ft}} \exp\left(-\frac{|X_{ft}-S_{ft}|^2}{\sigma^2_{n,ft}}\right) \, .
\end{equation}
Given the speech prior in~\eqref{eq:prior} and the likelihood in~\eqref{eq:likelihood}, we can apply Bayes' theorem to find the speech posterior distribution, which is complex Gaussian of the form
\begin{equation}
\label{eqn:posteriorcoplex}
p(S_{ft}|X_{ft}) = \frac{1}{\pi \lambda_{ft}} \exp{\left(-\frac{|S_{ft} - W^{\text{WF}}_{ft}X_{ft}|^2}{\lambda_{ft}}\right)} \, ,
\end{equation}
where $W^{\text{WF}}_{ft} = \frac{\sigma_{s,ft}^2}{\sigma_{s,ft}^2 + \sigma_{n,ft}^2}$ is the \emph{Wiener filter} and $\lambda_{ft} = \frac{\sigma_{s,ft}^2\sigma_{n,ft}^2}{\sigma_{s,ft}^2 + \sigma_{n,ft}^2}$ is the posterior's variance~\cite{timowienerfiltering2018}. The \ac{MMSE} and \ac{MAP} estimators of $S_{ft}$ under this model are both given by the \emph{Wiener filter} \cite{timowienerfiltering2018}: $\widetilde{S}_{ft} = W^{\text{WF}}_{ft} X_{ft}$. It is known that the expectation of \ac{MMSE} estimation error is closely related to the posterior variance~\cite{astudillo2009accounting}, and under the assumption of complex Gaussian distribution it corresponds directly to the variance, i.e.,
\begin{equation}
\begin{split}
E\{|S_{ft}-\widetilde{S}_{ft}|^2\} & = \iint |S_{ft}-\widetilde{S}_{ft}|^2 \, p(S_{ft}|X_{ft})p(X_{ft})\, \mathrm{d} S_{ft}\, \mathrm{d} X_{ft} \\
& = \int \lambda_{ft} \, p(X_{ft}) \, \mathrm{d} X_{ft} = \lambda_{ft}.
\end{split}
\label{mmse_error}
\end{equation}
The variance $\lambda_{ft}$ can be interpreted as a measure of uncertainty associated with the \ac{MMSE} estimator~\cite{timowienerfiltering2018}. In the following sections $\lambda_{ft}$ will be referred to as the (estimation) uncertainty.
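For later reference, the quantities entering~\eqref{eqn:posteriorcoplex} can be computed directly when oracle speech and noise variances are available. The following minimal Python sketch is illustrative only (the variable names are ours):
\begin{verbatim}
import numpy as np

def wiener_posterior(X, sigma_s2, sigma_n2):
    # Posterior mean (Wiener estimate) and variance (uncertainty) per bin.
    W = sigma_s2 / (sigma_s2 + sigma_n2)               # Wiener gain
    lam = sigma_s2 * sigma_n2 / (sigma_s2 + sigma_n2)  # posterior variance
    return W * X, lam

# toy example: one bin with speech variance 4 and noise variance 1
S_hat, lam = wiener_posterior(np.array([1.0 + 1.0j]), 4.0, 1.0)
print(S_hat, lam)   # [0.8+0.8j] 0.8
\end{verbatim}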
\vspace{-0.1cm}
\section{Deep Uncertainty Estimation}
\label{sec:deepuncertainty}
\vspace{-0.05cm}
The Wiener filter can be computed for a given noisy signal by estimation of $\sigma^{2}_{s,ft}$ and $\sigma^{2}_{n,ft}$ using traditional signal processing techniques. It is, however, also possible to directly estimate $W^{\text{WF}}_{ft}$ using a \ac{DNN}.
Furthermore, if optimization is based on the posterior~\eqref{eqn:posteriorcoplex}, besides $W^{\text{WF}}_{ft}$ also the uncertainty $\lambda_{ft}$ can be estimated as previously proposed in the computer vision domain~\cite{uncertaintyincvalex2017}. Taking the negative logarithm (which does not affect the optimization problem due to monotonicity) and averaging over the time-frequency plane results in the following minimization problem:
\begin{equation}
\begin{split}
&\widetilde{W}^{\text{WF}}_{ft}, \widetilde{\lambda}_{ft} = \\
&\argmin_{W^{\text{WF}}_{ft},\lambda_{ft}} \underbrace{\frac{1}{FT}\sum_{f,t} \log(\lambda_{ft}) + \frac{|S_{ft} - W^{\text{WF}}_{ft} X_{ft}|^2}{\lambda_{ft}}}_{\mathcal{L}_{p(S|X)}} \, ,
\end{split}
\label{eqn:logposterior}
\end{equation}
where $\widetilde{W}^{\text{WF}}_{ft}$, $\widetilde{\lambda}_{ft}$ denote estimates of the Wiener filter and its uncertainty. If we assume a constant uncertainty for all time-frequency bins, i.e., $\lambda_{ft} = \lambda^\ast$, and refrain from explicitly optimizing for $\lambda^\ast$, $\mathcal{L}_{p(S|X)}$ degenerates into the well-known \ac{MSE} loss
\begin{equation}
\mathcal{L}_{\text{MSE}} = \frac{1}{FT}\sum_{f,t}|S_{ft}-W^{\text{WF}}_{ft}X_{ft}|^2 \, ,
\label{eqn:mse}
\end{equation}
which is widely used in DNN-based regression tasks, including speech enhancement~\cite{wang2018supervised, braun2021loss}. In this work we depart from the assumption of constant uncertainty. Instead, we propose to include uncertainty estimation as an additional task by training a DNN with the full negative log-posterior $\mathcal{L}_{p(S|X)}$.
It has been previously shown that modeling uncertainty by minimizing $\mathcal{L}_{p(S|X)}$ results in improvement over baselines that do not take uncertainty into account in computer vision tasks~\cite{uncertaintyincvalex2017}. However, in preliminary experiments we have observed that directly using~\eqref{eqn:logposterior} as loss function results in reduced estimation performance for the Wiener filter and is prone to overfitting. To overcome this problem, we propose an additional regularization of the loss function by incorporating the estimated uncertainty into clean speech estimation as described next.
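For clarity, the loss~\eqref{eqn:logposterior} can be implemented in a few lines. The sketch below (PyTorch-style, with names of our choosing; it is not the exact training code) assumes the network outputs the Wiener gain and the logarithm of the variance, the latter being exponentiated inside the loss:
\begin{verbatim}
import torch

def negative_log_posterior_loss(W_hat, log_lambda_hat, X, S):
    # Per-bin Gaussian negative log-likelihood of the clean STFT
    # coefficients S under mean W_hat * X and variance exp(log_lambda_hat).
    lam = torch.exp(log_lambda_hat)
    err = torch.abs(S - W_hat * X) ** 2     # X, S: complex tensors
    return torch.mean(log_lambda_hat + err / lam)
\end{verbatim}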
\vspace{-0.15cm}
\section{Joint~enhancement and uncertainty estimation}
\label{sec:proposedscheme}
Besides estimation of the Wiener filter and its uncertainty, we propose to also incorporate a subsequent speech enhancement task that explicitly uses both into the training procedure. The speech enhancement task provides additional coupling between the DNN outputs (Wiener filter and uncertainty). In this manner, the DNN is guided towards estimation of uncertainty values that are relevant to the speech enhancement task, as well as enhanced estimation of the Wiener filter.
If we consider complex coefficients with symmetric posterior~\eqref{eqn:posteriorcoplex}, the MAP and MMSE estimators both result directly in the Wiener filter $W^{\text{WF}}_{ft}$ and do not require an uncertainty estimate. However, this changes if we consider spectral magnitude estimation. The magnitude posterior $p(|S_{ft}|\:|X_{ft})$, found by integrating the phase out of~\eqref{eqn:posteriorcoplex}, follows a Rician distribution~\cite{wolfe2003efficient}
\begin{equation}
\begin{split}
&p(|S_{ft}|\:|X_{ft}) =\\ &\frac{2|S_{ft}|}{\lambda_{ft}} \exp\left(-\frac{|S_{ft}|^2+(W_{ft}^{\text{WF}})^2|X_{ft}|^2}{\lambda_{ft}}\right)\mathit{I_0}\left(\frac{2|X_{ft}|\,|S_{ft}|W^{\text{WF}}_{ft}}{\lambda_{ft}}\right)\,,
\end{split}
\end{equation}
where $\mathit{I_0}(\cdot)$ is the modified zeroth-order Bessel function of the first kind.
In order to compute the MAP estimate for the spectral magnitude, one needs to find the mode of the Rician distribution, which is difficult to do analytically. However, one may approximate it with a simple closed-form expression~\cite{wolfe2003efficient}:
\begin{equation}
\label{eqn:approximated_map}
\begin{split}
|\widehat{S}_{ft}| &\approx W^{\text{A-MAP}}_{ft}|X_{ft}|\\
&= \left(\frac{1}{2}W^{\text{WF}}_{ft} + \sqrt{\left(\frac{1}{2}W^{\text{WF}}_{ft}\right)^2 + \frac{\lambda_{ft}}{4|X_{ft}|^2}}\right) |X_{ft}| \, ,
\end{split}
\end{equation}
where $|\widehat{S}_{ft}|$ is an estimate of the clean spectral magnitude $|S_{ft}|$ using the \ac{A-MAP} estimator of spectral magnitudes $W^{\text{A-MAP}}_{ft}$. It can be seen that the estimator $W^{\text{A-MAP}}_{ft}$ makes use of both the Wiener filter $W^{\text{WF}}_{ft}$ and the associated uncertainty $\lambda_{ft}$. An estimate of the time-domain clean speech signal, denoted as $\widehat{s}$, is then obtained by combining the estimated magnitude $|\widehat{S}_{ft}|$ with the noisy phase, followed by the \ac{iSTFT}. The estimated time-domain signal is then used to compute the negative \ac{SI-SDR} metric \cite{le2019sdr}:
\begin{equation}
\label{eq:sisdr}
\mathcal{L}_{\text{SI-SDR}} = -10\log_{10}\left(\frac{||\alpha s||^2}{||\alpha s - \widehat{s}||^2}\right)\, , \quad \alpha = \frac{\widehat{s}^{T}s}{||s||^2}\, ,
\end{equation}
which is in turn used as an additional term in the loss function that forces the speech estimate (computed with $W^{\text{A-MAP}}_{ft}$) to be similar to the clean target $s$.
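The \ac{A-MAP} mask of~\eqref{eqn:approximated_map} is likewise straightforward to implement. The following sketch (again PyTorch-style and purely illustrative, with a small constant added to avoid division by zero) shows how the estimated Wiener gain and uncertainty are combined:
\begin{verbatim}
import torch

def a_map_magnitude(W_hat, lam_hat, X, eps=1e-8):
    # Approximate-MAP magnitude estimate from the Wiener gain and its
    # uncertainty; combine with the noisy phase and apply the iSTFT.
    mag_X = torch.abs(X).clamp_min(eps)
    gain = 0.5 * W_hat + torch.sqrt((0.5 * W_hat) ** 2
                                    + lam_hat / (4.0 * mag_X ** 2))
    return gain * mag_X
\end{verbatim}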
Finally, we propose to combine the \ac{SI-SDR} loss $\mathcal{L}_{\text{SI-SDR}}$ with the negative log-posterior $\mathcal{L}_{p(S|X)}$ given in~\eqref{eqn:logposterior}, and train the neural network using a hybrid loss
\begin{equation}
\mathcal{L} = \beta \mathcal{L}_{p(S|X)} + (1-\beta) \mathcal{L}_{\text{SI-SDR}}\, ,
\label{proposedloss}
\end{equation}
with the weighting factor $\beta \in [0,1]$ as the hyperparameter. By explicitly using the estimated uncertainty for the speech enhancement task, the hybrid loss guides both mean and variance estimation to improve speech enhancement performance. An overview of this approach is depicted in Fig.~\ref{fig:uncertainty_diagram}.
\vspace{-0.15cm}
\section{Experimental setting}
\label{sec:experimetnal setting}
\vspace{-0.15cm}
\subsection{Dataset}
For training we use the \ac{DNS} Challenge dataset~\cite{reddy2020interspeech}, which includes a large amount of synthesized noisy and clean speech pairs. We randomly sample a subset of 100 hours with \acp{SNR} uniformly distributed between -5~dB and 20~dB. The data are randomly split into training and validation sets (80\% and 20\% respectively).
Evaluation was performed on the synthetic test set without reverberation from the \ac{DNS} Challenge. Noisy signals were generated by mixing clean speech signals from~\cite{pirker2011pitch} with noise clips sampled from 12 noise categories~\cite{reddy2020interspeech}, with \acp{SNR} uniformly drawn from 0~dB to 25~dB. To examine performance across different datasets, we additionally synthesized another test dataset using clean speech signals from the \texttt{si\_et\_05} subset of the WSJ0~\cite{garofolo1993csr} dataset and four types of noise signals from CHiME~\cite{chime3dataset} (\texttt{cafe}, \texttt{street}, \texttt{pedestrian}, and \texttt{bus}) with \acp{SNR} randomly sampled from \{-10~dB, -5~dB, 0~dB, 5~dB, 10~dB\}. A few samples were dropped due to clipping in the mixing process, resulting in a test dataset of 623 files.
\begin{figure}[ht!]
\begin{minipage}[b]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=4cm, height=3.5cm]{noisy.pdf}}
\centerline{(a) Noisy}
\end{minipage}
\begin{minipage}[b]{0.51\linewidth}
\centering
\centerline{\includegraphics[width=4.7cm, height=3.5cm]{clean.pdf}}
\centerline{(b) Clean}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=4cm, height=3.6cm]{WF.pdf}}
\centerline{(c) WF}
\end{minipage}
\begin{minipage}[b]{.5\linewidth}
\label{fig:error}
\centering
\centerline{\includegraphics[width=4cm, height=3.6cm]{error.pdf}}
\centerline{(d) Error}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=4cm, height=3.7cm]{uncertainty.pdf}}
\centerline{(e) Uncertainty}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\centerline{\includegraphics[width=4cm, height=3.7cm]{A-MAP.pdf}}
\centerline{(f) A-MAP}
\end{minipage}
\caption{Example of estimation uncertainty captured by the proposed method on the DNS test dataset, shown in (e). The proposed method allows estimating clean speech by either using the estimated Wiener filter or applying the A-MAP estimator that incorporates both the estimated Wiener filter and the associated uncertainty, and the resulting estimates are shown in~(c) and~(f), denoted by WF and A-MAP, respectively. The estimation error of Wiener filtering in~(d) is computed between the estimated magnitudes~(c) and clean magnitudes~(b), indicating over- or under-estimation of speech magnitudes.}
\label{fig:illustraionofuncertainty}
\vspace{-0.3cm}
\end{figure}
\begin{figure*}[th]
\begin{minipage}[b]{0.33\linewidth}
\centering
\centerline{\includegraphics[width=6.3cm]{polqa_cutomized_improvement.pdf}}
\end{minipage}
\begin{minipage}[b]{0.36\linewidth}
\centering
\centerline{\includegraphics[width=5cm]{ESTOI_customized_improvement.pdf}}
\end{minipage}
\begin{minipage}[b]{0.27\linewidth}
\centering
\centerline{\includegraphics[width=4.85cm]{sisdr_customized_improvement.pdf}}
\end{minipage}
\caption{Performance improvement obtained on the synthetic dataset using clean speech from WSJ0 and noise signals from CHiME. POLQAi denotes POLQA improvement relative to noisy mixtures. The same definition applies to ESTOIi and SI-SDRi. The marker denotes the mean value over all utterances and the vertical bar indicates the 95\%-confidence interval.}
\label{fig:wsjnchime}
\vspace{-0.1cm}
\end{figure*}
\vspace{-0.15cm}
\subsection{Baselines}
To evaluate the effectiveness of modeling uncertainty in neural network-based speech enhancement, we consider training the same neural network using standard cost functions, i.e., the \ac{MSE} defined as $\mathcal{L}_{\text{MSE}}$ in~\eqref{eqn:mse} and the SI-SDR defined as $\mathcal{L}_\text{SI-SDR}$ in~\eqref{eq:sisdr}.
They are represented by MSE and SI-SDR in Table~\ref{dns_evaluation} and Fig.~\ref{fig:wsjnchime}.
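For reference, a minimal sketch of the SI-SDR objective used for this baseline is given below; it follows the standard scale-invariant SDR definition and is negated for minimization, and we assume it matches $\mathcal{L}_\text{SI-SDR}$ in~\eqref{eq:sisdr} up to implementation details.
\begin{verbatim}
import torch

def si_sdr_loss(estimate, target, eps=1e-8):
    # Negative SI-SDR in dB, averaged over the batch; shapes (batch, samples).
    target = target - target.mean(dim=-1, keepdim=True)
    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
    # Project the estimate onto the target (scale-invariant target component).
    scale = (estimate * target).sum(-1, keepdim=True) / \
            (target.pow(2).sum(-1, keepdim=True) + eps)
    s_target = scale * target
    e_noise = estimate - s_target
    si_sdr = 10 * torch.log10(
        s_target.pow(2).sum(-1) / (e_noise.pow(2).sum(-1) + eps) + eps)
    return -si_sdr.mean()

loss = si_sdr_loss(torch.randn(4, 16000), torch.randn(4, 16000))
\end{verbatim}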
\vspace{-0.15cm}
\subsection{Hyperparameters}
All audio signals are sampled at 16~kHz and transformed into the time-frequency domain using the \ac{STFT} with a 32~ms Hann window and 50\% overlap.
For a fair comparison, we used the separator of Conv-TasNet~\cite{convtasnet2019}, which has a~\ac{TCN} architecture and has been shown to be effective in modeling temporal correlations. We used the causal version of the implementation and default hyperparameters provided by the authors\footnote{\url{https://github.com/naplab/Conv-TasNet}} without performing a hyperparameter search. Note that for our model performing uncertainty estimation, the output layer is split into two heads that predict both the Wiener filter and the uncertainty. We applied the sigmoid activation function to the estimated mask, while using the \emph{log-exp} technique to constrain the uncertainty output to be greater than $0$, i.e., the network outputs the logarithm of the variance, which is then recovered by the exponential term in the loss function. All neural networks were trained for 50 epochs with a batch size of 16, the maximum norm of gradients was set to 5, and the parameters were optimized using the Adam optimizer~\cite{adamkinma} with a learning rate of 0.001. We halved the learning rate if the validation loss did not decrease for 3 consecutive epochs. To prevent overfitting, training was stopped if the validation loss failed to decrease within 10 consecutive epochs. The weighting factor $\beta$ was set to 0.01, chosen empirically.
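To make the two-head output concrete, the following minimal sketch uses a generic recurrent backbone as a stand-in for the Conv-TasNet separator (the backbone, feature size, and shapes are placeholders, not the actual architecture): the mask head is passed through a sigmoid, and the uncertainty head outputs a log-variance that is exponentiated so that the variance stays positive.
\begin{verbatim}
import torch
import torch.nn as nn

class TwoHeadMaskEstimator(nn.Module):
    def __init__(self, feat_dim=257, hidden=256):
        super().__init__()
        self.backbone = nn.GRU(feat_dim, hidden, batch_first=True)
        self.mask_head = nn.Linear(hidden, feat_dim)    # Wiener-filter-like mask
        self.logvar_head = nn.Linear(hidden, feat_dim)  # log of the uncertainty

    def forward(self, noisy_mag):                       # (batch, frames, freq)
        h, _ = self.backbone(noisy_mag)
        mask = torch.sigmoid(self.mask_head(h))         # constrained to [0, 1]
        var = torch.exp(self.logvar_head(h))            # "log-exp" trick: var > 0
        return mask, var

model = TwoHeadMaskEstimator()
noisy = torch.rand(2, 100, 257)
mask, var = model(noisy)
enhanced = mask * noisy   # WF estimate; var enters the uncertainty-aware loss
\end{verbatim}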
\begin{table}[t!]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c||c|c|c|}
\hline
& POLQA & ESTOI & SI-SDR (dB)\\
\hline
Noisy & 2.30 $\pm$ 0.10 & 0.81 $\pm$ 0.02 & 9.07 $\pm$ 0.89\\
\hline
SI-SDR & 2.93 $\pm$ 0.11 & 0.88 $\pm$ 0.01 & 15.99 $\pm$ 0.75 \\
MSE
& 2.88 $\pm$ 0.10 & 0.88 $\pm$ 0.01 & 16.05 $\pm$ 0.71 \\
\hline
Proposed WF
& 3.00 $\pm$ 0.11 & 0.88 $\pm$ 0.01 & 16.39 $\pm$ 0.73 \\
Proposed A-MAP
& \fontseries{b}\selectfont 3.06 $\pm$ 0.10 & \fontseries{b}\selectfont 0.89 $\pm$ 0.01 & \fontseries{b}\selectfont 16.42 $\pm$ 0.73 \\
\hline
\end{tabular}
}
\caption{Average performance over all utterances of the DNS non-reverberant synthetic test dataset in terms of POLQA, ESTOI, and SI-SDR. Values are given in mean $\pm$ confidence interval (95\% confidence).}
\label{dns_evaluation}
\vspace{-0.2cm}
\end{table}
\vspace{-0.15cm}
\section{Results and discussion}
\label{sec:results and discussion}
\subsection{Analysis of uncertainty estimation}
In Fig.~\ref{fig:illustraionofuncertainty}, we use an audio example from the \ac{DNS} test dataset to illustrate the uncertainty captured by the proposed method, and all plots are shown in decibel~(dB) scale. Applying the estimated Wiener filter to the noisy coefficients yields an estimate of the clean speech, denoted as WF and shown in Fig.~\ref{fig:illustraionofuncertainty}~(c). To measure the prediction error, we can compute the absolute values of the difference between the estimated magnitudes, i.e., WF, and reference magnitudes given in Fig.~\ref{fig:illustraionofuncertainty}~(b), which indicates over- or under-estimation of speech magnitudes, shown in Fig.~\ref{fig:illustraionofuncertainty}~(d). It is observed that the model produces large errors when speech is heavily corrupted by noise, as can be seen by comparing the marked regions~(green boxes) of the noisy mixture shown in Fig.~\ref{fig:illustraionofuncertainty}~(a) and the prediction error of Fig.~\ref{fig:illustraionofuncertainty}~(d). Comparing the error in Fig.~\ref{fig:illustraionofuncertainty}~(d) with the uncertainty in Fig.~\ref{fig:illustraionofuncertainty}~(e) shows that the estimator generally associates large uncertainty with large prediction errors, while giving low uncertainty to accurate estimates, e.g., the first 3~seconds. This shows that the model produces uncertainty measurements that are closely related to estimation errors. In our proposed method with uncertainty estimation, we can use not only the estimated Wiener filter, but also the estimated \ac{A-MAP} mask that incorporates both the estimated uncertainty and Wiener filter, as given in \eqref{eqn:approximated_map}. This estimate is denoted as \ac{A-MAP} in Fig.~\ref{fig:illustraionofuncertainty}~(f). We observe that the \ac{A-MAP} estimate causes less speech distortion compared with the WF estimate, as can be seen, e.g., from the marked regions of WF and A-MAP.
\vspace{-0.15cm}
\subsection{Performance Evaluation}
In Table~\ref{dns_evaluation}, we present average evaluation results of our method on the \ac{DNS} synthetic test set in terms of \ac{SI-SDR} measured in dB, \ac{ESTOI}~\cite{estoi}, and \ac{POLQA}\footnote{We would like to thank J. Berger and Rohde\&Schwarz SwissQual AG for their support with POLQA.}~\cite{polqa}. We observe that modeling uncertainty yields improvement over the baselines, where the proposed WF outperforms the baselines in terms of \ac{POLQA} and \ac{SI-SDR}, and a larger improvement can be observed between the baselines and the proposed A-MAP. This shows that it is advantageous to model uncertainty within the model instead of directly estimating optimal points.
In Fig.~\ref{fig:wsjnchime}, we present speech enhancement results in terms of mean improvement of \ac{POLQA}, \ac{ESTOI}, and \ac{SI-SDR}. For this evaluation we used another unseen test dataset based on speech from WSJ0 and noise from CHiME. It shows that our proposed approach performs better in terms of speech quality given by higher \ac{POLQA} values without deteriorating \ac{ESTOI} (with an exception at SNR of $-10$~dB) and \ac{SI-SDR}, which again demonstrates the benefit of modeling uncertainty. We also observe that larger improvement over the baselines is achieved at high \acp{SNR}, which may be explained by the fact that, at high \acp{SNR}, speech quality (and thus \ac{POLQA}) is mainly affected by speech distortions, while at low \acp{SNR} the main factor is residual noise.
\vspace{-0.05cm}
\section{Conclusion}
\label{sec:conclusion}
\vspace{-0.05cm}
Based on the common complex Gaussian model of speech and noise signals, we proposed to augment the existing neural network architecture with an additional uncertainty estimation task. Specifically, we proposed simultaneous estimation of the Wiener filter and the associated uncertainty to capture the full speech posterior distribution. Furthermore, we proposed using the estimated Wiener filter and uncertainty to produce an A-MAP estimate of the clean spectral magnitude. Finally, we combined uncertainty estimation and speech enhancement by the proposed hybrid loss function. We showed that the approach can capture uncertainty and lead to improved speech enhancement performance across different speech and noise datasets. For future work, it would be interesting to integrate the uncertainty estimation into multi-modal learning systems, which may rely more on other modalities when the audio modality exhibits high uncertainty.
\AtNextBibliography{\small}
\section{REFERENCES}
\label{sec:refs}
\atColsBreak{\vskip5pt}
\printbibliography[heading=none]
\end{document}
\section{Introduction}
\begin{figure}[t]
\begin{minipage}[b]{0.45\linewidth}
\centerline{\includegraphics[width=4.00cm]{l1.pdf}}
\centerline{(a)}
\end{minipage}
\begin{minipage}[b]{0.52\linewidth}
\centerline{\includegraphics[width=4.45cm]{corr_trans1.pdf}}
\centerline{(b)}
\end{minipage}
\caption{Illustrations of (a) first-order distillation and (b) correlation distillation. Red and Green feature bins represent the important regions that should be focused on.}
\label{fig:l1}
\end{figure}
Object detection is a basic computer vision task in many multimedia applications, such as autonomous driving and object tracking. Modern object detection methods based on Convolutional Neural Networks (CNNs) have achieved state-of-the-art results, which are usually trained on predefined datasets with a fixed number of classes. In many practical applications, new object classes often emerge after the detectors have been trained. Due to the privacy of data and the limited storage of the devices, the old data may not be available for retraining the detectors from scratch. Even if the old data are available, this procedure will take a long training time. Fine-tuning is a commonly used method to adapt the pretrained model to new data. However, directly fine-tuning on new classes will severely decrease the performance on old classes~\cite{kirkpatrick2017overcoming}, which is known as catastrophic forgetting~\cite{french1999catastrophic}~\cite{goodfellow2013empirical}~\cite{mccloskey1989catastrophic}. Therefore, improving the ability of object detectors to learn new object classes continuously is necessary.
Recently, incremental learning has received increasing attention for classification, where the aim is to continuously learn to address new tasks from new data while preserving the learned knowledge from the old data. According to the regularization used to overcome catastrophic forgetting, incremental learning methods can be divided into two categories~\cite{hou2019learning}: parameter-based~\cite{aljundi2018memory}~\cite{kirkpatrick2017overcoming}~\cite{zenke2017continual} and distillation-based~\cite{aljundi2017expert}~\cite{jung2018less}~\cite{li2017learning}~\cite{rannen2017encoder}~\cite{rebuffi2017icarl}. Due to the difficulty of designing a reasonable metric to evaluate the importance of all parameters, we follow the distillation-based regularization methods to preserve the learned knowledge from the old classes when adapting the old model to the data of new object classes.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{mvc.pdf}
\end{center}
\caption{Illustration of three views for correlation distillation.}
\label{fig:mvc}
\end{figure}
Different from image classification, object detection involves distinguishing foreground from complex background and the precise localization of objects, which is more challenging for incremental learning. Existing incremental object detection methods~\cite{chen2019new}~\cite{hao2019take}~\cite{hao2019end}~\cite{li2019rilod}~\cite{shmelkov2017incremental}~\cite{zhang2020class} mainly adopt knowledge distillation to regularize the behavior of the incremental model to be similar to the old model for preserving the learned knowledge. The typical way is to minimize the distance between features or output logits of the old and incremental models.
However, due to the inherent difference in the categories to be detected, we should preserve the stability to detect old classes and the plasticity to learn new classes. Directly enforcing the incremental model to imitate all activations in the feature maps of the old model (denoted as first-order distillation) leaves the incremental model confused about which knowledge is important and should be transferred. The valuable knowledge that has been sufficiently learned may not be well preserved; instead, some unimportant and misleading knowledge may be retained. As shown in Figure~\ref{fig:l1}(a), the relative relations between the important activations are broken in the distillation procedure.
From the perspective of linguistic structuralism~\cite{matthews2001short}, the meaning of a sign depends on its relations with other signs within the system~\cite{park2019relational}. Analogously, the meaning of an activation value depends on its relation with other activation values within the feature map. An activation value is meaningless when considered without regard to its surrounding activation values.
Compared with the first-order distillation, the correlation distillation (second-order distillation) transfers a similarity correlation matrix
as shown in Figure~\ref{fig:l1}(b), which explores intra-feature structural relations rather than individual activations and transfers a high-level representation of the activations in the feature maps. As studied in the representational similarity analysis in neuroscience~\cite{kriegeskorte2008representational}, the transformed feature contains more information than the original feature, which is the high-level abstractions of the activation behavior in the features of neural network~\cite{blanchard2019neurobiological}.
For object detection, the relations within the activations in feature maps contain more information, such as image-level relations, foreground-background relations and intra-instance relations. Exploring and transferring the relative similarity of the discriminative patterns in the feature space can preserve the stability and plasticity for incremental object detection.
In this paper, we propose a novel multi-view correlation distillation based incremental object detection method (MVCD), which mainly focuses on the design of distillation losses in the feature space of the two-stage object detector Faster R-CNN~\cite{ren2015faster}. It is a dual network including the old model and the incremental model, which cooperate for transferring the old model trained on old classes to incrementally detect new classes. To trade off between the stability to preserve the learned knowledge from the old data and the plasticity to learn new knowledge from new classes, we design the correlation distillation losses from three views in the feature space of the object detector, which consists of the channel-wise, point-wise and instance-wise views as shown in Figure~\ref{fig:mvc}. Here, the three views for the feature maps can be seen as three abstractions of activation behaviors obtained from the stimuli from the different parts of the input. The channel-wise view explores the correlation among the feature map channels in the image-level feature. The point-wise view explores the knowledge among the discriminative regions corresponding to the foreground and the background. The instance-wise view explores the correlation among the intra-instance patches to preserve the discriminability of features for detecting the old classes.
The contributions of our work are as follows:
\begin{itemize}
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item We propose a novel incremental object detection method, which is the first attempt to explore the multi-view correlations (second-order distillation) in the feature space of the two-stage object detector.
\item To get a good trade-off between the stability and the plasticity of the incremental model, we design correlation distillation losses from three views for regularizing the learning in feature space, which transfers the learned channel-wise, point-wise and instance-wise correlations to the incremental model.
\item A new metric called \textit{Stability-Plasticity-mAP} (SPmAP) is proposed to quantify \textit{Stability} and \textit{Plasticity}, which is integrated with the original mAP to measure the performance of the incremental object detector comprehensively.
\item Extensive experiments are conducted on VOC2007~\cite{everingham2010pascal} and COCO~\cite{lin2014microsoft}. The results demonstrate the effectiveness of the proposed method to learn to detect new classes continuously, and it also achieves promising results compared with previous methods.
\end{itemize}
\section{Related Work}
Incremental learning aims to develop machine learning systems to
continuously deal with streams of new data while preserving the learned knowledge from the old data.
The main challenge is to mitigate catastrophic forgetting and find a good trade-off between the stability and the plasticity of the incremental model.
According to the optimization directions to preserve the learned knowledge, existing works can be divided into two categories~\cite{hou2019learning}: parameter-based and distillation-based. The parameter-based methods aim to preserve important parameters and penalize the changes of these parameters, such as EWC~\cite{kirkpatrick2017overcoming} and MAS~\cite{aljundi2018memory}. However, designing a metric to evaluate the importance of all parameters is also a tough task. Therefore, we mainly focus on distillation-based methods in our work.
\textbf{Distillation-based Incremental Learning:} Knowledge distillation is a commonly used technique to transfer knowledge from one network to another network. Hinton et al.~\cite{hinton2015distilling} propose to transfer the knowledge from a large network to a small network using distillation by encouraging the responses of these two networks to be similar.
For incremental learning, distillation-based methods utilize the learned knowledge from the old model to guide the learning of the new model by minimizing the distillation losses. LwF~\cite{li2017learning} utilizes a modified cross-entropy loss to preserve original knowledge with only examples from the new task. iCaRL~\cite{rebuffi2017icarl} combines representation learning and knowledge distillation for jointly learning feature representation and classifiers, and a small set of exemplars is selected to perform nearest-mean-of-exemplars classification. Rannen et al.~\cite{rannen2017encoder} propose an auto-encoder based method to retain the knowledge from old tasks, which prevents the reconstructions of the features from changing and leaves space for the features to adjust. Sun et al.~\cite{sun2018active}~\cite{sun2018lifelong} propose to maintain a lifelong dictionary, which is used to transfer knowledge to learn each new metric learning task.
Recently, several novel knowledge distillation methods have explored the relationships between samples or instances to transfer the knowledge from teacher model to student model~\cite{li2020local}~\cite{liu2019knowledge}~\cite{park2019relational}. Liu et al.~\cite{liu2019knowledge} construct the instance relationship matrix. Park et al.~\cite{park2019relational} propose the distance-wise and angle-wise distillation losses to minimize the difference in relations. Li et al.~\cite{li2020local} propose to explore the local correlations to transfer the knowledge. Inspired by these methods, we believe that transferring the correlations in feature space for incremental learning may not only preserve the learned knowledge of the old model but also maintain the scalability to learn new knowledge, which can get a balance between stability and plasticity of the incremental model.
\textbf{Incremental Object Detection:} The first incremental object detector~\cite{shmelkov2017incremental} is based on Fast R-CNN~\cite{girshick2015fast}. It uses EdgeBoxes~\cite{zitnick2014edge} and MCG~\cite{arbelaez2014multiscale} to precompute proposals, and knowledge distillation is used to regularize the outputs of the final classification and regression layers in order to preserve the performance on old classes.
Recently, several end-to-end incremental object detection methods~\cite{chen2019new}~\cite{hao2019take}~\cite{hao2019end}~\cite{li2019rilod} are proposed. Chen et al.~\cite{chen2019new} propose to use L2 loss to minimize the difference between the feature maps of the old and the incremental models, which is referred to hint loss.
Hao et al.~\cite{hao2019take} introduce a hierarchical large-scale retail object detection dataset called TGFS and present a class-incremental object detector that utilizes an exemplar set with a fixed size of old data for training. Hao et al.~\cite{hao2019end} use a frozen duplicate of the RPN to preserve the knowledge gained from the old classes, and a feature-changing loss (L2 Loss) is proposed to reduce the difference of the feature maps between the old and new classes. Li et al.~\cite{li2019rilod} extract three types of knowledge from the original model, which is based on RetinaNet~\cite{lin2017focal}, and it uses smooth L1 loss to minimize the feature difference. A dual distillation training function is proposed in~\cite{zhang2020class} that pre-trains a separate model only for the new classes, such that a student model can learn from two teacher models simultaneously. In addition, several novel works on incremental few-shot object detection are proposed~\cite{perez2020incremental}~\cite{yang2020context}.
However, the few-shot setting is more challenging than the many-shot setting in incremental object detection, and the problem of incremental object detection on the many-shot setting has not been well resolved. In our work, we mainly focus on general incremental learning for object detection.
The typical way of the above-mentioned incremental object detection methods to preserve the learned knowledge is to imitate the important activations of the original model by minimizing the first-order distillation losses. However, it is hard for the incremental model to fully understand the transferred knowledge due to the inherent difference in the categories to be detected. Different from these methods, we explore the important correlations in the feature space of the object detector and only transfer the correlations instead of the values in the feature maps, which can preserve the relative relations within the important learned knowledge and maintain the capability to learn to detect new classes.
\section{Method}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{framework1.pdf}
\end{center}
\caption{The whole framework of the proposed method. The old model is a frozen copy of the original model trained on the old data, and the incremental model is an adapted model on new classes. Grey and white indicate the frozen and updated parameters respectively.}
\label{fig:framework}
\end{figure}
\subsection{Overview}
The proposed multi-view correlation distillation mechanism for incremental object detection is shown in Figure~\ref{fig:framework}, which is a dual network. A frozen copy of the old model trained on the old data provides the learned knowledge of the old classes, such as the activations on feature maps, detection results and the logits from the output layers. The incremental model is adapted to detect both old and new classes on new data with the annotations of new classes as well as the learned knowledge from the old model. Samples of the new data are input into both the old model and the incremental model, then the detection results of the old model are integrated with the ground-truth of new classes to guide the learning of the incremental model. We also use the commonly used distillation loss (L1 loss) on the final output layers (classification layer and regression layer) to penalize the difference between the logits from the old model and the incremental model. In addition to these techniques, in our work, we mainly focus on the distillation in the feature space of the object detector to better preserve the learned knowledge. Different from image classification, the feature space in object detectors can be divided into image-level features and instance-level features, so we elaborately design the distillation losses for both of them to maintain the important knowledge.
\subsection{Multi-view Correlation Distillation}
The typical way to preserve the learned knowledge from the old model in feature space is to minimize the distance such as L1 loss between the activations of the old model and the incremental model. However, it is difficult for the incremental model to fully understand the transferred knowledge in feature space, which may result in the preservation of unimportant information instead of the useful knowledge for minimizing the overall loss. Meanwhile, this constraint may also restrict the plasticity of the incremental model for learning new classes. Incremental object detection aims to not only preserve the learned knowledge but also maintain the scalability for learning new classes.
Therefore, we design a novel correlation distillation mechanism, which explores and transfers the important correlations from channel-wise, point-wise and instance-wise views in the feature space of the old object detector. The channel-wise view explores the correlation between the important feature maps in the image-level feature. The point-wise view explores the correlation between the discriminative background and foreground regions. The instance-wise view explores the correlation between the intra-instance patches, which aims to preserve the discriminability of the features for precisely detecting the old classes. The total loss function is defined as:
\begin{equation}
\label{Eq:total}
\begin{aligned}
\mathcal{L}=\mathcal{L}_{frcnn}+\mathcal{D}_{out}+\lambda(\mathcal{D}_{cc}+\mathcal{D}_{pc}+\mathcal{D}_{ic})
\end{aligned}
\end{equation}
where $\mathcal{L}_{frcnn}$ is the standard loss function in Faster R-CNN, and $\mathcal{D}_{out}$ is the commonly used distillation loss on the final classification and regression layers, and here we use L1 loss. $\mathcal{D}_{cc}$, $\mathcal{D}_{pc}$ and $\mathcal{D}_{ic}$ are the proposed channel-wise, point-wise and instance-wise correlation distillation losses. We set $\lambda=1$ in our experiments.
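As a schematic of how these terms are combined (a sketch only; the slicing of the old-class outputs and the toy shapes are our assumptions), $\mathcal{D}_{out}$ can be computed as an L1 distance on the outputs shared with the old model, and the terms are summed as in Equation~\ref{Eq:total}:
\begin{verbatim}
import torch

def output_distillation(new_cls, old_cls, new_box, old_box):
    # L1 distillation on the final layers, restricted to the old-class outputs.
    # new_cls: (rois, n_old + n_new + 1), old_cls: (rois, n_old + 1).
    d_cls = (new_cls[:, :old_cls.shape[1]] - old_cls).abs().mean()
    d_box = (new_box[:, :old_box.shape[1]] - old_box).abs().mean()
    return d_cls + d_box

def total_loss(l_frcnn, d_out, d_cc, d_pc, d_ic, lam=1.0):
    # Faster R-CNN loss + output distillation + correlation distillation terms.
    return l_frcnn + d_out + lam * (d_cc + d_pc + d_ic)

# Toy usage: 10 old classes, 10 new classes and background (4 box deltas each).
new_cls, old_cls = torch.randn(128, 21), torch.randn(128, 11)
new_box, old_box = torch.randn(128, 84), torch.randn(128, 44)
d_out = output_distillation(new_cls, old_cls, new_box, old_box)
\end{verbatim}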
\subsubsection{Channel-wise Correlation Distillation}
\label{sec:cc}
The convolution kernels are responsible for extracting different patterns, so the channel-wise importances are different for each sample, which is the indispensable knowledge to transfer. However, the correlations between the feature distribution along channels are seldom considered in previous first-order-distillation-based methods. Intuitively, to preserve the plasticity of the incremental object detector, only the important channel-wise activations learned on the old data should be transferred to the incremental model and the rest unimportant channels can be left for learning new classes. Due to the disadvantage of first-order distillation, we propose a channel-wise correlation distillation loss. It constrains the specific inter-channel relations for different samples, and the consistent correlations of feature distribution along channels between the old and incremental model are preserved. It is achieved by distilling the correlations within the important channels of each image rather than restricting the overall activation values to be similar.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{se1.pdf}
\end{center}
\caption{SE block in RPN.}
\label{fig:se}
\end{figure}
The squeeze-and-excitation module (SE module)~\cite{hu2018squeeze}, as a widely used channel attention module, can generate channel-wise weights for each image. SE module consists of squeeze and excitation operations. The original feature maps are aggregated across spatial dimensions, and a channel descriptor is obtained through a squeeze operation. Then, the sample-specific activations are learned for each channel (channel-wise attention vectors $\textbf{v}$)
through an excitation operation. The original feature maps are then reweighted to generate the output of the SE block.
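A minimal sketch of such an SE block is shown below (the reduction ratio is an assumption); the returned vector corresponds to the channel-wise attention $\textbf{v}$ used in the following.
\begin{verbatim}
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, C, H, W)
        v = x.mean(dim=(2, 3))                   # squeeze: global average pooling
        v = self.fc(v)                           # excitation: channel weights
        return x * v.view(x.size(0), -1, 1, 1), v

reweighted, v = SEBlock(512)(torch.randn(1, 512, 38, 50))
\end{verbatim}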
To measure the importance of channels in image-level features, an SE module is added after the convolution layer in RPN which has higher-level features for discriminating foreground and background. The structure is shown in Figure~\ref{fig:se}.
After the channel-wise attention vector $\textbf{v} = \left\{ v_1,v_2,...,v_C\right\}$ from the old model is obtained for each image, we normalize the vector to $[0,1]$. The important channels $F^{cc}\in \mathbb{R}^{N^{cc}\times W \times H}$ are selected by a threshold (0.5), where $N^{cc}$ is the number of important channels.
The loss can be written as:
\begin{equation}
\begin{aligned}
S^{cc}(i,j)=\psi(F^{cc}(i),F^{cc}(j))&,\quad
S^{cc'}(i,j)=\psi(F^{cc'}(i),F^{cc'}(j))\\
\label{Eq:cc}
\mathcal{D}_{cc}=\frac{1}{N^{cc}\times N^{cc}}&\sum_{i=1}^{N^{cc}}\sum_{j=1}^{N^{cc}}|S^{cc}(i,j)-S^{cc'}(i,j)|
\end{aligned}
\end{equation}
where $S^{cc}$ is the channel-wise correlation matrix and $F$ and $F^{'}$ represent the features of the incremental model and the old model respectively. $i$ and $j$ are the indexes of the channels. $\psi(\cdot,\cdot)$ is cosine similarity between two vectors. The channel-wise correlation matrices of the old model and the incremental model both use the indexes of important channels obtained from the old model.
The channel-wise correlations represent the relative distribution of specific patterns learned in different channels, which is an abstraction of channel-level activation behaviors. The channel-wise correlation distillation only transfers correlations between the important channels of old classes and removes redundant channel-wise information, which leaves room for learning new classes.
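A minimal sketch of $\mathcal{D}_{cc}$ in Equation~\ref{Eq:cc} is given below; the min-max normalization of $\textbf{v}$, the flattening of each channel map into a vector, and the guard for too few selected channels are straightforward assumptions about details not spelled out above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def channel_correlation(feat):                   # feat: (N_cc, H, W)
    flat = F.normalize(feat.flatten(1), dim=1)   # unit-norm channel vectors
    return flat @ flat.t()                       # cosine-similarity matrix

def d_cc(feat_new, feat_old, v_old, thr=0.5):
    # feat_*: (C, H, W) image-level features; v_old: (C,) old-model SE vector.
    v = (v_old - v_old.min()) / (v_old.max() - v_old.min() + 1e-12)
    idx = (v > thr).nonzero(as_tuple=True)[0]    # important old-model channels
    if idx.numel() < 2:
        return feat_new.new_zeros(())
    return (channel_correlation(feat_new[idx]) -
            channel_correlation(feat_old[idx])).abs().mean()

loss = d_cc(torch.randn(512, 38, 50), torch.randn(512, 38, 50), torch.rand(512))
\end{verbatim}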
\subsubsection{Point-wise Correlation Distillation}
\label{sec:pc}
The RPN is used to discriminate object-like region proposals from background region proposals. The discriminative points in the activation-based spatial attention map of the feature in RPN correspond to the obvious regions of the background or foreground. Extracting the feature vectors of these points and only transferring the correlation between these point-wise feature vectors can preserve the knowledge of obvious foreground and background learned on the old data. The remaining points are left to be optimized on the new data.
The obvious regions are the points with high or low responses on the activation-based spatial attention map, which can be obtained by thresholding the attention map. We use $F_{att}=\sum_{i=1}^{C}{|F_i|}$ to get the attention map, where $C$ is the number of channels. Then, the activation-based spatial attention map is further normalized for selecting the point-wise feature vectors with high or low responses. Here, we use two thresholds $\theta_{high}$ and $\theta_{low}$ to select these point-wise feature vectors of discriminative regions. The correlation matrix is constructed between them, which also uses the cosine similarity to describe the correlation.
In the dual network, the activation-based spatial attention map is obtained from the image-level feature of the old model ($F^{'}$), then the attention map is used to select points with high and low responses.
After the indexes of these points are obtained $P^{high}=\left\{ (x_1,y_1),(x_2,y_2)...,(x_{N^h},y_{N^h})\right\}$ and $P^{low}=\left\{(x_1,y_1),(x_2,y_2)...,(x_{N^l},y_{N^l})\right\}$, where $N^h$ and $N^l$ are the numbers of the points with high and low responses respectively, we extract the point-wise feature vectors from the features of the old model and the incremental model ($F^{'}$ and $F$) respectively. The point-wise correlation matrices are also calculated for these two models respectively, and the distance between these two matrices is minimized. The loss is written as Equation~\ref{Eq:pc}.
\begin{equation}
\begin{aligned}
S^{pc}(i,j)\!=\!\psi(F^{high}(i),F^{low}(j)),&\quad
S^{pc'}(i,j)\!=\!\psi(F^{high'}(i),F^{low'}(j))\\
\label{Eq:pc}
\mathcal{D}_{pc}=&||S^{pc}-S^{pc'}||_F^2
\end{aligned}
\end{equation}
where $F^{high}\in \mathbb{R}^{N^h \times C}$ and $F^{low}\in \mathbb{R}^{N^l \times C}$ are the extracted point-wise feature vectors corresponding to the points with high and low responses from the incremental model respectively. Similarly, $F^{high'}\in \mathbb{R}^{N^h \times C}$ and $F^{low'}\in \mathbb{R}^{N^l \times C}$ represent the corresponding feature vectors from the old model.
The point-wise correlations represent the abstractions of the activation behaviors about the relative responses between foreground and background. The point-wise correlation distillation can preserve the consistent correlations between the feature distributions of obvious foreground and background, and the indistinct regions can be left to learn new classes.
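A corresponding sketch of $\mathcal{D}_{pc}$ in Equation~\ref{Eq:pc} follows; the min-max normalization of the attention map and the guard for empty selections are our assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def d_pc(feat_new, feat_old, th_high=0.8, th_low=0.1):
    # feat_*: (C, H, W) RPN features of the incremental and the old model.
    att = feat_old.abs().sum(dim=0)                          # activation-based map
    att = (att - att.min()) / (att.max() - att.min() + 1e-12)
    high = (att > th_high).flatten().nonzero(as_tuple=True)[0]
    low = (att < th_low).flatten().nonzero(as_tuple=True)[0]
    if high.numel() == 0 or low.numel() == 0:
        return feat_new.new_zeros(())
    def vectors(feat, idx):                                  # (len(idx), C)
        return F.normalize(feat.flatten(1).t()[idx], dim=1)
    s_new = vectors(feat_new, high) @ vectors(feat_new, low).t()
    s_old = vectors(feat_old, high) @ vectors(feat_old, low).t()
    return ((s_new - s_old) ** 2).sum()                      # squared Frobenius norm

loss = d_pc(torch.randn(512, 38, 50), torch.randn(512, 38, 50))
\end{verbatim}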
\subsubsection{Instance-wise Correlation Distillation}
\label{sec:ic}
Inspired from~\cite{li2020local}, the local features and their correlation in each instance also contain many details and discriminative patterns.
The old model can generate more discriminative local features with sufficient old data, while it is hard for the incremental model to do so due to the lack of old data.
The old model trained on old classes can make right predictions for different categories of objects with similar appearances based on the discriminative local regions of each instance. Therefore, learning the local knowledge of each instance from the old model is an important way to maintain the stability to detect the old classes. The intra-instance correlation is not considered when simply imitating the global activations of instance-level features, which will degrade the discriminability for detecting old classes due to the loss of distinctive local patterns.
To transfer the correlation of the local regions for each instance from the old model to the incremental model, we compute the correlation matrix of the local regions for each instance-level feature, which is the pooled feature after the RoI-Pooling and convolution layers in the detection head. The pooled feature of each instance $F^l \in \mathbb{R}^{pc \times ph \times pw}$ ($pc=2048$, $ph=4$ and $pw=4$) are divided into $k \times k$ ($k=2$) patches, and each patch has a shape of $pc \times \frac{ph}{k} \times \frac{pw}{k}$ ($2048\times2\times2$). The instance-wise correlation distillation loss is defined as:
\begin{equation}
\begin{aligned}
S^{ic}(i,j)=\psi(F^l(i),F^l(j))&,\quad
S^{ic'}(i,j)=\psi(F^{l'}(i),F^{l'}(j))\\
\label{Eq:ic}
\mathcal{D}_{ic}=\frac{1}{k\times k}\sum_{i=1}^{k}&\sum_{j=1}^{k}|S^{ic}(i,j)-S^{ic'}(i,j)|
\end{aligned}
\end{equation}
where $F^l(\cdot)$ and $F^{l'}(\cdot)$ are the vectorized features of the local patches from the incremental model and the old model respectively.
The instance-wise correlations represent the abstractions of activation behaviors within an instance, which explores the correlations between the local parts of an instance. The instance-wise correlation distillation can transfer the relative relationship between the response values of class-specific local features learned from the old model.
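The sketch below implements our reading of Equation~\ref{Eq:ic}, in which the correlation matrix is taken over all $k\times k$ patches of the pooled $2048\times4\times4$ feature; the patch vectorization order is an assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def patch_vectors(pooled, k=2):
    # pooled: (C, ph, pw) instance-level feature -> (k*k, C*ph*pw/(k*k)) vectors.
    c, ph, pw = pooled.shape
    patches = pooled.reshape(c, k, ph // k, k, pw // k).permute(1, 3, 0, 2, 4)
    return F.normalize(patches.reshape(k * k, -1), dim=1)

def d_ic(pooled_new, pooled_old, k=2):
    p_new, p_old = patch_vectors(pooled_new, k), patch_vectors(pooled_old, k)
    s_new = p_new @ p_new.t()          # intra-instance correlation matrix
    s_old = p_old @ p_old.t()
    return (s_new - s_old).abs().mean()

loss = d_ic(torch.randn(2048, 4, 4), torch.randn(2048, 4, 4))
\end{verbatim}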
\subsection{Stability-Plasticity-mAP}
For incremental object detection, due to the different numbers of old and new classes and the different difficulties of learning each class, mAP is not very suitable for measuring the performance of the incremental model on handling the stability-plasticity dilemma. Therefore, to quantify \textit{Stability} and \textit{Plasticity}, we propose a new metric called \textit{Stability-Plasticity-mAP} (SPmAP), as written in Equation~\ref{Eq:metric}. Because incremental object detection aims to reach the performance of the model trained on all data with only the data of new classes, we use the model trained on all classes as the up-bound model to measure stability and plasticity.
\textit{Stability} is the average difference of precisions on old classes, and \textit{Plasticity} is the average difference of precisions on new classes.
We also integrate the overall mAP difference ($mAP_{dif}$) representing the overall performance of all classes into the metric to measure the performance comprehensively.
\begin{equation}
\begin{aligned}
SPmAP=((Stability&+Plasticity)/2+mAP_{dif})/2 \\
Stability=\frac{1}{N^{o}}&\sum\nolimits_{i=1}^{N^{o}}(UP(i)-INC(i)) \\
Plasticity=\frac{1}{N^{n}}&\sum\nolimits_{i=N^{o}+1}^{N}(UP(i)-INC(i)) \\
mAP_{dif}=\frac{1}{N}&\sum\nolimits_{i=1}^{N}\!(UP(i)\!-\!INC(i))
\end{aligned}
\label{Eq:metric}
\end{equation}
where $N$ is the number of all classes. $N^{o}$ and $N^{n}$ represent the numbers of old and new classes respectively. $UP$ and $INC$ are the average precisions of the up-bound model and the incremental model.
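For clarity, a small helper computing SPmAP from per-class average precisions (in percent) might look as follows; the ordering with old classes first is assumed.
\begin{verbatim}
def spmap(ap_upbound, ap_incremental, n_old):
    diffs = [u - i for u, i in zip(ap_upbound, ap_incremental)]
    stability = sum(diffs[:n_old]) / n_old
    plasticity = sum(diffs[n_old:]) / (len(diffs) - n_old)
    map_dif = sum(diffs) / len(diffs)
    return ((stability + plasticity) / 2 + map_dif) / 2

# Hypothetical example with three old classes and two new classes.
print(spmap([70, 68, 72, 60, 65], [66, 65, 70, 55, 60], n_old=3))
\end{verbatim}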
\section{Experiments and Evaluation}
\begin{table*}
\caption{Per-class average precision (\%) on VOC2007 test dataset. Comparisons are conducted under different settings when 1, 5, 10 classes are added at once.}
\begin{center}
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{l|cccccccccccccccccccccc}
\toprule
\multicolumn{23}{c}{1}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Method &aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike & person & plant & sheep & sofa & \multicolumn{1}{c|}{train} & \multicolumn{1}{c|}{tv} &\multicolumn{1}{c|}{ mAP} & SPmAP\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Old(1-19) &74.3&77.9 & 72.6 & 59.3 & 52.5 & 77.7 & 81.3 & 84.8 & 45.1 & 82.7 & 63.3 & 85.0 & 83.6 & 78.5 & 77.1 & 41.4 & 72.2 & 66.4 & \multicolumn{1}{c|}{77.9} &\multicolumn{1}{c|}{-} &\multicolumn{1}{c|}{71.2}&- \\
\midrule\specialrule{0.0em}{0pt}{0pt}
Fine-tuning &31.2& 24.7 & 30.1 & 21.4 & 21.9 & 49.7 & 58.3 & 28.3 & 11.9 & 33.9 & 7.7 & 21.4 & 18.6 & 17.0 & 14.5 & 10.9 & 28.5 & 9.8 &\multicolumn{1}{c|}{25.4} & \multicolumn{1}{c|}{59.8} & \multicolumn{1}{c|}{26.2}&30.4 \\
Shmelkov et al.~\cite{shmelkov2017incremental} & 65.0 & 72.2 & 69.4 & 53.1 & 50.4 & 71.8 & 78.5 & 81.9 & 46.8 & 79.3 & 57.3 & 82.0 & 81.9 & 74.4 & 76.2 & 39.3 & 70.0 & 62.3 & \multicolumn{1}{c|}{60.3} & \multicolumn{1}{c|}{60.3} & \multicolumn{1}{c|}{66.6}&5.8\\
Chen et al.~\cite{chen2019new}& 70.7 &76.8 & 72.5 & 57.8 & 51.0 & 75.3 & 81.4 & 84.7 & 46.7 & 80.0 & 61.4 & 82.4 & 83.3 & 75.8 & 76.4 & 40.5 & 73.1 & 63.9 & \multicolumn{1}{c|}{69.8 } & \multicolumn{1}{c|}{51.6 } & \multicolumn{1}{c|}{68.8 }&6.3 \\
Hao et al.~\cite{hao2019take}&67.5 & 73.4 & 66.9 & 51.8 & 49.4 & 70.8 & 77.8 & 82.6 & 45.9 & 81.2 & 60.1 & 80.4 & 82.2 & 74.9 & 76.5 & 38.0 & 70.7 & 64.7 & \multicolumn{1}{c|}{60.6} & \multicolumn{1}{c|}{60.5} & \multicolumn{1}{c|}{66.8 }&5.7 \\
Hao et al.~\cite{hao2019end}&69.5 & 76.5 & 69.9 & 57.1 & 49.8 & 74.3 & 79.3 & 79.7 & 46.9 & 82.5 & 61.4 & 81.5 & 82.0 & 75.1 & 76.7 & 39.3 & 73.8 & 64.5 & \multicolumn{1}{c|}{66.2} & \multicolumn{1}{c|}{58.6 } & \multicolumn{1}{c|}{68.2 } &5.0\\
Li et al.~\cite{li2019rilod}&71.3 & 75.8 & 70.8 & 56.2 & 50.0 & 74.8 & 80.0 & 82.3 & 46.5 & 83.0 & 58.5 & 82.5 & 79.9 & 78.1 & 77.0 & 39.4 & 72.4 & 66.1 & \multicolumn{1}{c|}{69.0} & \multicolumn{1}{c|}{59.7} & \multicolumn{1}{c|}{68.7 }&4.4\\
Plain L1&70.6 & 77.1 & 70.7 & 58.4 & 50.4 & 75.2 & 80.5 & 83.0 & 46.5 & 83.0 & 61.9 & 82.7 & 81.4 & 74.9 & 76.9 & 40.1 & 72.3 & 65.4 & \multicolumn{1}{c|}{67.6} & \multicolumn{1}{c|}{59.6 }& \multicolumn{1}{c|}{68.9} &4.2\\
MVCD &71.2 & 76.7 & 71.3 & 60.1 & 51.2 & 76.7 & 80.2 & 83.5 & 47.4 & 82.4 & 62.5 & 83.2 & 83.2 & 75.9 & 77.2 & 41.6 & 72.0 & 66.6 & \multicolumn{1}{c|}{70.7} & \multicolumn{1}{c|}{60.6} & \multicolumn{1}{c|}{\textbf{69.7}} &\textbf{3.4}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multicolumn{23}{c}{5}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Method& aero & bike & bird & boat & bottle & bus & car & cat & chair & cow & table & dog & horse & mbike &\multicolumn{1}{c|}{person} & plant & sheep & sofa & train & \multicolumn{1}{c|}{tv} & \multicolumn{1}{c|}{mAP}&SPmAP\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Old(1-15) &72.8&77.5 & 72.5 & 58.7 &55.2 & 74.8 & 83.7 & 85.6 & 47.2 & 76.3 & 64.2 & 82.9 & 82.7 &78.1 &\multicolumn{1}{c|}{77.2} &-&-&-&-&\multicolumn{1}{c|}{-}&\multicolumn{1}{c|}{ 72.6} &-\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Fine-tuning &42.2& 38.4 & 38.9 & 33.4 & 22.1 & 27.8 & 64.0 & 62.9 & 16.8 & 39.5 & 35.4 & 50.0 & 59.9 & 42.1 & \multicolumn{1}{c|}{25.8} &33.9 & 53.2 & 53.4 & 51.8 & \multicolumn{1}{c|}{62.1} &\multicolumn{1}{c|}{42.7} &26.6\\
Shmelkov et al.~\cite{shmelkov2017incremental} &57.9 & 74.0 & 65.9 & 39.7 & 47.4 & 46.4 & 78.2 & 77.7 & 44.8 & 68.9 & 59.2 & 76.0 & 80.8 & 73.2 & \multicolumn{1}{c|}{74.7} & 36.3 & 62.1 & 61.4 & 64.3 & \multicolumn{1}{c|}{64.2}& \multicolumn{1}{c|}{62.7 }&8.8\\
Chen et al.~\cite{chen2019new}&65.0&75.8 & 67.0 & 46.1 & 51.8 & 54.5 & 80.5 & 79.2 & 46.0 & 72.1 & 62.8 & 74.4 & 81.3 & 75.3 & \multicolumn{1}{c|}{74.9} & 32.2 & 60.6 & 55.0 & 54.5 & \multicolumn{1}{c|}{56.5} & \multicolumn{1}{c|}{63.3} &9.2\\
Hao et al.~\cite{hao2019take}&56.5 & 74.6 & 67.0 & 39.7 & 47.3 & 53.4 & 77.9 & 79.4 & 44.6 & 68.6 & 56.9 & 75.7 & 80.4 & 75.5 &\multicolumn{1}{c|}{ 74.9} & 36.5 & 60.8 & 59.9 & 65.2 & \multicolumn{1}{c|}{64.4} & \multicolumn{1}{c|}{63.0} &8.6\\
Hao et al.~\cite{hao2019end}&61.4& 75.2 & 67.4 & 44.3 & 49.6 & 51.7 & 79.2 & 78.7 & 45.8 & 70.7 & 61.0 & 76.0 & 81.8 & 75.4 & \multicolumn{1}{c|}{74.8} & 36.3 & 63.9 & 59.4 & 66.1 & \multicolumn{1}{c|}{63.6} &\multicolumn{1}{c|}{ 64.1}&7.5 \\
Li et al.~\cite{li2019rilod}&63.8& 76.5 &70.1 & 48.2 & 50.7 & 54.9 & 80.6 & 79.6 & 45.7 & 74.7 & 59.7 & 78.2 & 82.7 & 73.0 & \multicolumn{1}{c|}{75.2} & 36.8 & 64.0 & 63.2 & 65.3 & \multicolumn{1}{c|}{61.5} & \multicolumn{1}{c|}{65.2 }&6.5\\
Plain L1&64.0 & 75.1 & 68.5 & 45.8 & 51.8 & 52.7 & 80.0 & 78.5 & 45.8 & 72.8 & 60.3 & 76.8 & 81.6 & 75.7 & \multicolumn{1}{c|}{75.2} & 36.3 & 63.1 & 59.3 & 64.6 & \multicolumn{1}{c|}{61.3} & \multicolumn{1}{c|}{64.5} &7.4\\
MVCD& 65.7 & 76.6 & 71.9 & 51.5 & 51.0 & 64.9 & 81.5 & 80.6 & 47.0 & 74.3 & 60.8 & 80.6 & 82.2 & 76.8 & \multicolumn{1}{c|}{75.8} & 37.0 & 63.9 & 58.9 & 67.0 & \multicolumn{1}{c|}{62.8} & \multicolumn{1}{c|}{\textbf{66.5} }&\textbf{5.5} \\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multicolumn{23}{c}{10}\\
\midrule
Method& aero & bike & bird & boat & bottle & bus & car & cat & chair & \multicolumn{1}{c|}{cow} & table & dog & horse & mbike & person & plant & sheep & sofa & train & \multicolumn{1}{c|}{tv} &\multicolumn{1}{c|}{ mAP}&SPmAP\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Old(1-10) &
89.4 & 89.6 & 89.4 &87.2 &76.7 &83.9 &89.5 &86.6 &78.8 &\multicolumn{1}{c|}{71.8} &-&-&-&-&-&-&-&-&-&\multicolumn{1}{c|}{-}&\multicolumn{1}{c|}{84.3} &-\\
\midrule
Fine-tuning& 46.9 &33.2 &37.9 & 33.5 &28.3 & 45.2 & 45.9 & 52.3 & 13.0 & \multicolumn{1}{c|}{35.6} & 50.0 & 64.6 &75.0 & 67.3 & 73.5 & 32.8 & 60.2 &59.2 & 61.9 & \multicolumn{1}{c|}{59.4}& \multicolumn{1}{c|}{48.8 }&22.8\\
Shmelkov et al.~\cite{shmelkov2017incremental}&67.7 & 63.3 & 64.0 & 46.1 & 51.6 & 72.9 & 79.0 & 72.8 & 37.1 &\multicolumn{1}{c|}{ 67.7} & 52.3 & 77.7 & 78.2 & 76.3 & 75.4 &37.6 & 67.6 & 67.2 & 74.0 & \multicolumn{1}{c|}{65.6} &\multicolumn{1}{c|}{ 64.7}&6.9 \\
Chen et al.~\cite{chen2019new}&69.9 & 64.4 & 66.5 & 51.2 & 54.3 & 76.0 & 79.6 & 74.7 & 38.8 &\multicolumn{1}{c|}{ 71.7} & 51.0 & 77.3 & 78.6 & 73.7 & 72.9 & 29.1 & 65.7 & 63.9 & 71.0 & \multicolumn{1}{c|}{57.2} & \multicolumn{1}{c|}{64.4} &7.2\\
Hao et al.~\cite{hao2019take}&65.3 & 64.2 & 64.7 & 48.7 & 50.6 & 72.8 & 79.4 & 72.0 & 37.6 & \multicolumn{1}{c|}{67.4} & 50.9 & 78.1 & 78.3 & 76.9 & 75.4 & 39.2 & 68.2 & 63.8 & 72.7 & \multicolumn{1}{c|}{66.9} & \multicolumn{1}{c|}{64.6}&6.9\\
Hao et al.~\cite{hao2019end}&68.4 & 61.9 & 67.0 & 52.7 & 53.3 & 73.2 & 80.2 & 74.8 & 38.7 &\multicolumn{1}{c|}{ 69.5} & 54.7 & 75.1 & 78.4 & 76.1 & 74.7 & 34.2 & 69.7 & 65.2 & 71.7 & \multicolumn{1}{c|}{65.1} & \multicolumn{1}{c|}{65.2} &6.3\\
Li et al.~\cite{li2019rilod}&70.1 & 64.0 & 68.0 & 52.6 & 52.6 & 73.5 & 81.1 & 75.7 & 39.0 &\multicolumn{1}{c|}{ 66.7} & 53.9 & 77.7 & 78.9 & 74.5 & 73.7 & 33.4 & 67.0 & 63.6 & 70.5 & \multicolumn{1}{c|}{63.2} &\multicolumn{1}{c|}{ 65.0} &6.6\\
Plain L1&71.2 & 64.4 & 67.4 & 53.5 & 53.3 & 76.0 & 79.9 & 76.9 & 38.9 & \multicolumn{1}{c|}{71.1} & 54.7 & 78.0 & 78.7 & 74.1 & 73.8 & 34.3 & 67.4 & 65.1 & 70.8 & \multicolumn{1}{c|}{62.8} & \multicolumn{1}{c|}{65.6} &5.9\\
MVCD& 72.1 &
68.9 &
68.2 &
53.9 &
54.2 &
74.7 &
81.5 &
75.3 &
40.0 &
\multicolumn{1}{c|}{72.7} &
55.9 &
79.5 &
79.4 &
73.5 &
72.5 &
32.9 &
68.4 &
61.4 &
73.0 &
\multicolumn{1}{c|}{63.7} &
\multicolumn{1}{c|}{\textbf{66.1}} &
\textbf{5.5}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Up-bound(1-20)&72.4 &76.9&73.4 &59.2& 54.5 &79.1 & 81.9 & 86.3 & 47.4 & 82.4 & 63.7 & 84.9 & 83.0 & 80.2 & 77.2 & 42.6 & 75.1 & 64.4 & 77.7 & \multicolumn{1}{c|}{69.0} & \multicolumn{1}{c|}{71.6}&- \\
\bottomrule
\end{tabular}}
\end{center}
\label{table:all-detail}
\end{table*}
\begin{table}
\caption{Average precision (\%) on COCO minival (first 5000 validation images). Comparisons are conducted when 40 classes are added at once.}
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{L{3cm}|C{1.5cm}C{1.5cm}}
\toprule
Method & mAP & SPmAP\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Old(1-40) & 52.26 &-\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Fine-tuning&17.85 &31.74\\
Plain L1& 44.46&5.00\\
MVCD &\textbf{44.62}&\bf4.59\\
\midrule\specialrule{0.0em}{0pt}{0pt}
Up-bound(1-80) & 49.21&-\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:coco}
\end{table}
\begin{table*}
\caption{ Average precision (\%) on VOC2007 test dataset when adding 5 or 10 new classes sequentially.}
\begin{center}
\resizebox{0.78\textwidth}{!}{
\begin{tabular}{c|L{1.3cm}|c|c|c|c|c|c|c|c|c|c}
\toprule
\multirow{3}{*}{5}&\multicolumn{1}{l|}{Method} &\multicolumn{2}{c|}{+plant} & \multicolumn{2}{c|}{+sheep} & \multicolumn{2}{c|}{+sofa}&\multicolumn{2}{c|}{+train} & \multicolumn{2}{c}{+tv} \\
\cmidrule{2-12}\specialrule{0.0em}{0pt}{0pt}
&Plain L1&\multicolumn{2}{c|}{66.78} &\multicolumn{2}{c|}{62.38} &\multicolumn{2}{c|}{61.09} &\multicolumn{2}{c|}{56.10} &\multicolumn{2}{c}{51.39}\\
&MVCD&\multicolumn{2}{c|}{\bf 68.19} &\multicolumn{2}{c|}{\bf 65.63}& \multicolumn{2}{c|}{\bf 63.31} & \multicolumn{2}{c|}{\bf 56.72} & \multicolumn{2}{c}{\bf 51.89} \\
\midrule\specialrule{0.0em}{0pt}{0pt}%
\multirow{6}{*}{10}&\multicolumn{1}{l|}{Method}&+table & +dog & +horse & +mbike & +person & +plant & +sheep & +sofa & +train & +tv \\
\cmidrule{2-12}\specialrule{0.0em}{0pt}{0pt}
&Plain L1&58.34 &55.69 &50.14& 44.04& 39.59 &33.27& 31.89& 30.15& 25.46& 26.88\\
&MVCD &
\bf 59.91 &\bf 58.51 &\bf55.32 & \bf50.18 & \bf46.85 & \bf40.05 &\bf34.07 & \bf32.53 & \bf27.77 & \bf27.13\\
\cmidrule{2-12}\specialrule{0.0em}{0pt}{0pt}
&\multicolumn{1}{l|}{Method} &\multicolumn{2}{c|}{+table \& dog} & \multicolumn{2}{c|}{+horse \& mbike} & \multicolumn{2}{c|}{+person \& plant}&\multicolumn{2}{c|}{+ sheep \& sofa} & \multicolumn{2}{c}{+train \& tv} \\
\cmidrule{2-12}\specialrule{0.0em}{0pt}{0pt}
&Plain L1&\multicolumn{2}{c|}{58.87} & \multicolumn{2}{c|}{53.42} & \multicolumn{2}{c|}{46.28} & \multicolumn{2}{c|}{46.50} & \multicolumn{2}{c}{42.67}\\
&MVCD&\multicolumn{2}{c|}{\bf60.76}& \multicolumn{2}{c|}{\bf59.20}& \multicolumn{2}{c|}{\bf51.68}& \multicolumn{2}{c|}{\bf51.78}& \multicolumn{2}{c}{\bf48.23}\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:10-sqe}
\end{table*}
\begin{table}
\renewcommand\arraystretch{0.9}
\caption{Results on VOC2007 and COCO, when four groups are added sequentially.}
\begin{center}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{l|L{1.3cm}|cccc|c}
\toprule
&Method &A& B &C& D&mAP\\
\cmidrule{1-7}\specialrule{0.0em}{0pt}{0pt}
\multirow{8}{*}{VOC}
&\multirow{4}{*}{Plain L1}
& 48.75 & - & - & - & 48.75 \\
&& 38.96 & 60.55 & - & - & 49.75\\
&& 22.01 & 27.57 & 55.09 & - &34.89\\
&& 6.03 & 11.69 & 33.06 & 35.81 &21.65\\
\cmidrule{2-7}\specialrule{0.0em}{0pt}{0pt}
&\multirow{4}{*}{MVCD}
& 48.75 & - & - &- &48.75 \\
&& \bf44.62 & 58.38 & - & - &\bf51.50 \\
&& \bf30.59 & \bf33.08 & \bf55.62 & -&\bf39.76\\
&& \bf14.25 & \bf18.34 & \bf41.72 & \bf36.32 &\bf27.66\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multirow{8}{*}{COCO}
&\multirow{4}{*}{Plain L1}
& 57.29 & - & - & - &57.29\\
&& 45.24 & 19.48 & - & - &32.36\\
&& 30.22 & 14.93 & 23.22 & - &22.79\\
&& 19.14 & 10.58 & 14.02 & 28.49 &18.06\\
\cmidrule{2-7}\specialrule{0.0em}{0pt}{0pt}
&\multirow{4}{*}{MVCD}
& 57.29 & - & - &- &57.29\\
&&\bf48.48 &\bf20.74 & - & - &\textbf{34.61}\\
&& \bf34.75 & \bf15.45 &\bf24.29 & - &\textbf{24.83}\\
&& \bf21.50 & \bf10.62 & \bf16.28 & \bf29.26 &\textbf{19.41}\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:4-group}
\end{table}
\begin{table*}
\caption{Ablation Study}
\begin{center}
\resizebox{0.70\linewidth}{!}{
\begin{tabular}{ccc|cccccc}
\toprule
\multicolumn{3}{c|}{Correlation Distillation}&\multicolumn{2}{c|}{1} & \multicolumn{2}{c|}{5} &\multicolumn{2}{c}{10}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multirow{1}{*}{$\mathcal{D}_{cc}$}&\multirow{1}{*}{$\mathcal{D}_{pc}$}&\multirow{1}{*}{$\mathcal{D}_{ic}$}
&mAP&\multicolumn{1}{c|}{SPmAP}&mAP&\multicolumn{1}{c|}{SPmAP}&mAP&\multicolumn{1}{c}{SPmAP}\\
\midrule
&&&66.50&\multicolumn{1}{c|}{5.39}&62.90&\multicolumn{1}{c|}{8.50}&64.31&\multicolumn{1}{c}{7.24}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
$\checkmark$&&&67.52(+1.02)&\multicolumn{1}{c|}{5.12}&63.28(+0.38) &\multicolumn{1}{c|}{8.19}&65.45($+1.14$)&\multicolumn{1}{c}{6.10}\\
&$\checkmark$&&68.44(+1.94)&\multicolumn{1}{c|}{4.26} &65.28(+2.38)&\multicolumn{1}{c|}{6.37}&65.53($+1.22$)&\multicolumn{1}{c}{6.02}\\
&&$\checkmark$&68.48(+1.98)&\multicolumn{1}{c|}{4.03}&65.49(+2.59)&\multicolumn{1}{c|}{6.29}&65.72($+1.41$)&\multicolumn{1}{c}{5.83}\\
&$\checkmark$&$\checkmark$&69.47(+2.97)&\multicolumn{1}{c|}{3.52}&66.42(+3.52)&\multicolumn{1}{c|}{\bf 5.42}&65.77($+1.46$)&\multicolumn{1}{c}{5.78}\\
$\checkmark$&&$\checkmark$&69.32(+2.82)&\multicolumn{1}{c|}{3.68}&66.48(+3.58)&\multicolumn{1}{c|}{5.52}&65.97($+1.66$)&5.58\\
$\checkmark$&$\checkmark$&&68.69(+2.19)&\multicolumn{1}{c|}{4.33}&65.52(+2.62)&\multicolumn{1}{c|}{6.25}&65.70($+1.39$)&\multicolumn{1}{c}{5.85}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
$\checkmark$&$\checkmark$&$\checkmark$&\textbf{69.70(+3.20)}&\multicolumn{1}{c|}{\bf 3.41}&\textbf{66.53(+3.63)}&\multicolumn{1}{c|}{\underline{ 5.49}}&\textbf{66.08(+1.77)}&\multicolumn{1}{c}{\bf 5.47}\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:ablation}
\end{table*}
\begin{table}
\caption{The results of alternative choices.}
\begin{center}
\resizebox{1\linewidth}{!}{
\begin{tabular}{ll|cccccc}
\toprule
\multicolumn{2}{l|}{\multirow{2}{*}{Method}}&\multicolumn{2}{c|}{1}&\multicolumn{2}{c|}{5}&\multicolumn{2}{c}{10}\\
\specialrule{0.0em}{0pt}{0pt}
\cmidrule{3-8}\specialrule{0.0em}{-1pt}{-1pt}
&&mAP&\multicolumn{1}{c|}{SPmAP}&mAP&\multicolumn{1}{c|}{SPmAP}&mAP&\multicolumn{1}{c}{SPmAP}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multirow{1}{*}{$\mathcal{D}_{cc}$}
&$L1$&70.00&\multicolumn{1}{c|}{3.88}&66.37&\multicolumn{1}{c|}{5.86}&65.21&\multicolumn{1}{c}{6.34}\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multirow{2}{*}{$\mathcal{D}_{pc}$}
&$L1$&69.89 &\multicolumn{1}{c|}{4.40} &66.05 &\multicolumn{1}{c|}{ 6.31}& 64.58&\multicolumn{1}{c}{ 6.97}\\
&$\theta=0.5$&65.95&\multicolumn{1}{c|}{8.95}&62.69&\multicolumn{1}{c|}{9.87}&62.50&9.05\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multirow{2}{*}{$\mathcal{D}_{ic}$}
&$L1$&\bf 70.45 &\multicolumn{1}{c|}{5.22}&\bf 66.56&\multicolumn{1}{c|}{6.11}&63.31&8.24\\
&$k=4$&69.01&\multicolumn{1}{c|}{3.52}&65.46&\multicolumn{1}{c|}{6.47}&66.04&5.51\\
\midrule\specialrule{0.0em}{0pt}{0pt}
\multicolumn{2}{l|}{MVCD}&69.70&\multicolumn{1}{c|}{\bf 3.41}&66.53&\multicolumn{1}{c|}{\bf 5.49}&\textbf{66.08}&\multicolumn{1}{c}{\bf 5.47}\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:alter}
\end{table}
\begin{table}
\caption{Training time (in seconds).}
\begin{center}
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{l|ccc}
\toprule
Method&1&5&10\\
\midrule\specialrule{0.0em}{0pt}{0pt}
MVCD&2254.52& 8227.90& 24001.84\\
From Scratch&\multicolumn{3}{c}{36006.98}\\
\bottomrule
\end{tabular}}
\end{center}
\label{table:speed}
\end{table}
\subsection{Experiment Setup}
\textbf{Datasets.} The proposed method is evaluated on two benchmark datasets Pascal VOC 2007 and Microsoft COCO. VOC2007 has 20 object classes, and we use the trainval subset for training and the test subset for evaluation. COCO has 80K images in the training set and 40K images in the validation set for 80 object classes, and the minival (the first 5K images from the validation set) split is used for evaluation. There are two schemes to add new classes for evaluation: addition at once and sequential addition.
\noindent\textbf{Evaluation Metrics.} The compared methods are fine-tuning and some recent related works~\cite{shmelkov2017incremental}~\cite{chen2019new}~\cite{hao2019take}~\cite{hao2019end}~\cite{li2019rilod}. We reproduce the distillation methods and evaluate their performance under the same settings as our proposed method. We also design a baseline (Plain L1) that directly minimizes the L1 loss between the activations in the features of the old model and the incremental model. The basic object detector is Faster R-CNN for all methods.
We use both mean average precision (mAP) at 0.5 IoU threshold and the proposed ``SPmAP'' to measure the performance.
\noindent\textbf{Implementation Details.} The old model is trained for 20 epochs with an initial learning rate of 0.001, which decays every 5 epochs with $gamma=0.1$. The momentum is set to 0.9. The incremental model is trained for 10 epochs with $lr=0.0001$ and decays to 0.00001 after 5 epochs. The confidence and IoU threshold for NMS are set to 0.5 and 0.3 respectively. The thresholds in Section~\ref{sec:pc} are set to $\theta_{high}=0.8$ and $\theta_{low}=0.1$, and $k$ in Section~\ref{sec:ic} is set to 2. ResNet-50~\cite{he2016deep} is used as the backbone. We conduct all experiments on a single NVIDIA GeForce RTX 2080 Ti.
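The schedule above corresponds to a simple step decay; a minimal sketch is given below, assuming SGD with the stated momentum (the optimizer type is an assumption) and a placeholder module standing in for the detector.
\begin{verbatim}
import torch

def make_optimizer(params, lr, momentum=0.9, step=5, gamma=0.1):
    # Multiply the learning rate by gamma every `step` epochs.
    opt = torch.optim.SGD(params, lr=lr, momentum=momentum)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=step, gamma=gamma)
    return opt, sched

model = torch.nn.Linear(8, 2)   # placeholder for the detector
opt, sched = make_optimizer(model.parameters(), lr=0.001)   # old model: 20 epochs
for epoch in range(20):
    # ... one training epoch ...
    sched.step()
\end{verbatim}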
\subsection{Addition of Classes at Once}
In the first experiment, we evaluate the performance on adding new classes at once. We take 19, 15 and 10 classes from VOC2007 sorted in alphabetical order as the old classes, and the remaining 1, 5, 10 classes are the corresponding new classes as described in~\cite{shmelkov2017incremental}. For COCO, we take the first 40 classes as the old classes and the remaining 40 classes as the new classes. In these settings, if the image contains the categories to be detected, it will be selected for training or testing, so there is an overlap between the old data and the new data. However, the annotations of old classes in the new data are not available.
Table~\ref{table:all-detail} lists the per-category average precision on VOC2007 test subset.
Old($\cdot$) represents the model trained on the old data, and Up-bound($\cdot$) represents the model trained on all data of both old and new classes. In the first setting, the mAP of fine-tuning reaches only 26.2\%, which indicates severe catastrophic forgetting. Different from the original fine-tuning, which randomly initializes the classification layer for a new task, in order to preserve the learned knowledge, we initialize the parameters in the classification layer and the regression layer of the incremental model with those of the old model learned from the old classes.
However, the performance of fine-tuning still degrades a lot. As can be seen, when only one new class (``tv monitor'') is added, the mAP and SPmAP of MVCD can reach 69.7\% and 3.4\% respectively, outperforming other L1/L2-distillation-based incremental object detection methods by a large margin. The mAP of MVCD exceeds the second-best method, Plain L1, by about 0.8\%. This indicates that our method achieves a better balance between stability and plasticity.
For the second setting, we take 15 classes as the old classes, and the remaining 5 classes are added at once. MVCD also performs well compared with other methods. The mAP increases by about 2.0\% compared with Plain L1. With the increasing number of new classes, the mAP of fine-tuning is improved; however, the phenomenon of catastrophic forgetting is not mitigated. In the third setting, when 10 classes are added at once, the mAP of MVCD reaches 66.1\%, outperforming the second-best Plain L1 by about 0.5\%. Similarly, MVCD achieves the best SPmAP compared with other methods.
We also evaluate the performance on adding more classes as shown in Table~\ref{table:coco}. We take 40 classes from the COCO training dataset as the old classes and the remaining 40 classes as the new classes. Both mAP and SPmAP of MVCD outperform Plain L1 and exceed fine-tuning by a large margin.
The above results demonstrate that MVCD can effectively mitigate catastrophic forgetting on the setting of addition at once. The comparisons with other methods with the metric ``SPmAP'' also verify the superiority of MVCD on maintaining the stability and plasticity of the incremental model.
As can be seen, the designed ``Plain L1'' achieves comparable performance with other first-order-distillation-based methods. Therefore, in the following experiments, we use ``Plain L1'' as the baseline for comparison.
\subsection{Sequential Addition of Multiple Classes}
In this experiment, we evaluate the performance of our method by adding classes sequentially for incremental learning.
For the first setting, we also take 15 and 10 classes from VOC2007 sorted in alphabetical order as old classes, and the remaining 5 and 10 classes are as new classes. Table~\ref{table:10-sqe} lists the mAP(\%) when adding 5 and 10 classes sequentially.
As can be seen, MVCD in the setting of sequential addition of 5 classes outperforms Plain L1 in all incremental learning steps, and it can reach 51.89\% after the $5^{th}$ incremental learning step. The average improvement over all steps is 1.6\%, and the maximum difference reaches 3.25\% in the $2^{nd}$ step.
We also evaluate the performance on adding 10 new classes sequentially with ten-step and five-step incremental learning respectively. In the ten-step learning, we add one new class at a time step, and in the five-step learning, we add two new classes at a time step.
As shown in the ten-step setting result, the proposed MVCD has consistent improvements in all learning steps. After the $6^{th}$ learning step, MVCD still exceeds Plain L1 by 6.78\% (40.05\% vs. 33.27\%). Due to the small number of samples in some categories, the performance decreases slightly. However, MVCD is still better than Plain L1, which demonstrates the effectiveness of multi-view correlation distillation. In the five-step setting, the mAP of MVCD still reaches 48.23\% after the $5^{th}$ incremental learning step, and it outperforms Plain L1 by a large margin in all learning steps.
These experiments demonstrate that the proposed MVCD can mitigate catastrophic forgetting better than the first-order distillation even after many incremental learning steps.
We also split the training sets of VOC2007 and COCO into four groups: A, B, C and D, as described in~\cite{hao2019end}. For fair comparisons, we also use ResNet-50~\cite{he2016deep} in this experiment. For each group, only images that contain objects of the classes in this group are selected, which means that there are no overlaps among these four groups.
As shown in Table~\ref{table:4-group}, the performance of MVCD is better than Plain L1 in all incremental learning steps. On VOC2007, MVCD improves by about 6.01\% over Plain L1 after the last learning step. On COCO, MVCD is consistently better than Plain L1 in all learning steps.
\subsection{Ablation Study}
As listed in Table~\ref{table:ablation}, the proposed multi-view correlation distillation losses $\mathcal{D}_{cc}$, $\mathcal{D}_{pc}$ and $\mathcal{D}_{ic}$ on three settings are evaluated separately. The baseline only uses the distillation on the final classification and regression layers as shown in the first row in Table~\ref{table:ablation}. ``$+$" represents the increased mAP(\%) compared with the baseline.
Firstly, these three distillation losses are evaluated individually. As can be seen, the accuracy increases by about 0.85\% on average by using $\mathcal{D}_{cc}$.
The average increments of 1.85\% and 1.99\% are obtained when $\mathcal{D}_{pc}$ and $\mathcal{D}_{ic}$ are individually utilized. This verifies that the three correlation distillation losses are all useful for incremental object detection. The performance on SPmAP also shows the effectiveness of these losses in preserving stability and plasticity. Then, we test the combinations of any two of the losses, and the results on mAP show that the performance of these combinations is slightly lower than that of the combination of all three correlation distillation losses.
The alternative choices of the hyper-parameters $\theta$ and $k$ and of the distillation applied to channel-wise, point-wise and instance-wise features are also tested, as shown in Table~\ref{table:alter}. For MVCD in the last row, we use the settings described in the implementation details. For the point-wise correlation distillation, we replace the high and low thresholds with a single threshold $\theta=0.5$ to divide the point-wise feature vectors into vectors with high responses and low responses. Compared with our final setting $\theta_{high}=0.8$ and $\theta_{low}=0.1$, the performance decreases a lot, which verifies that preserving the correlation only between the most discriminative point-wise features maintains the stability and plasticity of the incremental model better. For the instance-wise correlation distillation, the instance-level feature is divided into $4\times4$ ($k=4$) patches, and the result shows that $k=2$ is better than $k=4$. We also replace the channel-wise, point-wise and instance-wise correlation distillation losses with an L1 loss that minimizes the distance between the selected features. The performance on SPmAP is worse than that of preserving the correlations, which demonstrates that correlation distillation is more appropriate for achieving a trade-off between stability and plasticity in incremental object detection.
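To make the role of the selection thresholds and of the correlation matrices more concrete, the following PyTorch-style sketch illustrates one possible realization of the point-wise selection and of a correlation-based distillation loss, contrasted with the plain L1 alternative of Table~\ref{table:alter}. The tensor shapes, the response measure (channel-wise $\ell_2$ norm with min-max normalization) and the cosine-similarity Gram matrices are our own illustrative assumptions and are not taken from the implementation; the precise definitions of $\mathcal{D}_{cc}$, $\mathcal{D}_{pc}$ and $\mathcal{D}_{ic}$ are those given earlier in the paper.
\begin{verbatim}
import torch
import torch.nn.functional as F

def response_masks(feat, theta_high=0.8, theta_low=0.1):
    """Split the HxW locations of a CxHxW map into high- and low-response
    groups by the min-max normalized L2 norm of their point-wise vectors."""
    vecs = feat.reshape(feat.shape[0], -1).t()          # (HW, C)
    resp = vecs.norm(dim=1)
    resp = (resp - resp.min()) / (resp.max() - resp.min() + 1e-12)
    return resp >= theta_high, resp <= theta_low

def gram(vecs):
    """Cosine-similarity (correlation) matrix of a set of feature vectors."""
    v = F.normalize(vecs, dim=1)
    return v @ v.t()

def correlation_distill(old_map, new_map, mask):
    """Preserve only the mutual relations of the selected vectors."""
    vo = old_map.reshape(old_map.shape[0], -1).t()[mask]
    vn = new_map.reshape(new_map.shape[0], -1).t()[mask]
    return (gram(vo) - gram(vn)).abs().mean()

def plain_l1(old_map, new_map, mask):
    """Alternative of the ablation: match the selected features directly."""
    vo = old_map.reshape(old_map.shape[0], -1).t()[mask]
    vn = new_map.reshape(new_map.shape[0], -1).t()[mask]
    return (vo - vn).abs().mean()

# toy usage: the locations are always selected on the old model's feature map
old_map, new_map = torch.randn(256, 38, 50), torch.randn(256, 38, 50)
high, low = response_masks(old_map)
loss_pc = correlation_distill(old_map, new_map, high) \
        + correlation_distill(old_map, new_map, low)
\end{verbatim}
In the single-threshold variant of Table~\ref{table:alter}, \texttt{theta\_high} and \texttt{theta\_low} would both be set to $0.5$.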
We also compare the training time of the proposed incremental learning method with that of training the detector from scratch using similar GPU memory, as listed in Table~\ref{table:speed}. When adding a few new classes, the proposed incremental object detection method has an absolute advantage in training time with only a minor accuracy loss.
\section{Conclusion}
In this paper, we propose a novel multi-view correlation distillation based incremental object detection method, which transfers the correlations from the channel-wise, point-wise and instance-wise views in the feature space of a two-stage object detector. The channel-wise and point-wise correlations are designed for image-level features, and the instance-wise correlation is designed for instance-level features, which achieves a good trade-off between the stability and the plasticity of the incremental model. Experimental results on VOC2007 and COCO with the new metric ``SPmAP'' demonstrate the effectiveness of the proposed method in incrementally learning to detect objects of new classes without severely forgetting the originally learned knowledge.
\bibliographystyle{ACM-Reference-Format}
|
1,116,691,499,864 | arxiv | \section{Introduction}
\label{seq-intro}
In spite of the present
success of the Standard Model (SM), several observations indicate that it may not be fundamental:
the large number of SM fermions, their arbitrary masses and mixings, the fractional electric charge of quarks, etc.
Among the diversity of new physics models, the theories of compositeness~\cite{Pati:1974yy,Terazawa:1976xx,Shupe:1979fv,
Squires:1980cm,Harari:1980ez,Fritzsch:1981zh,Barbieri:1981cy} try to solve these problems by introducing
a substructure of the SM particles, whose
subcomponents are commonly referred to as preons~\cite{Pati:1974yy}.
The composite models may include in the particle content radial, orbital, topological, structural,
and other excitations of the ground state particles,
e.g., an {\it excited lepton}
that shares leptonic quantum number with one of the existing leptons, has larger mass and no color charge.
Besides the outlined issues on the
particles and their interactions that come from laboratory studies,
other
open questions (including the dark matter problem) arise from astrophysical observations
of the universe around us.
In particular, our universe appears to be populated exclusively with baryonic matter rather than
\mbox{antimatter}~\cite{PDG2016}. However, this baryon asymmetry
cannot be explained within the Big Bang cosmology and the SM. Possible scenarios of dynamical generation of
the baryon asymmetry during the evolution of the universe from a hot early
matter-antimatter symmetric stage are referred to as the baryogenesis (BG) mechanisms~\cite{Dolgov:1991fr,Dine:2003ax},
and include new physics.
In the next section we discuss one example of the composite models in question.
The interactions and the mass bounds for the excited leptons are outlined in section~\ref{sec-Ex-leptons},
and the new BG scenarios that involve these particles are discussed in section~\ref{sec-BG}. Finally,
we conclude in section~\ref{sec-conclusion}.
\section{Composite model example}
Consider the haplon models~\cite{Fritzsch:1981zh,Fritzsch:2014gta},
which are similar to the models with wakems and chroms~\cite{Terazawa:1979pj,Terazawa:1984dj,Terazawa:2011ci},
and are based on the symmetry ${\rm SU}(3)_c\times {\rm U}(1)_{em}\times {\rm SU}(N)_h$,
where the new haplon group ${\rm SU}(N)_h$ has a confinement scale of the order of
0.3~TeV and can be, e.g., ${\rm SU}(2)_L\times {\rm SU}(2)_R$.
These models contain two categories of preons (haplons): the fermions
$\alpha^{+1/2}$ and $\beta^{-1/2}$, and the scalars $\ell^{+1/2}$ and $c_k^{-1/6}$, where $k=r,g,b$
(``red, green, blue'').
Their quantum number assignment is given in Table~\ref{tab-1}, where $Q$ is the electric charge, C1 is the choice for the
${\rm SU}(3)_c$ representations in Ref.~\cite{Fritzsch:1981zh}\footnote{Notice that C1 assignment does not provide
a spin-charge separation~\cite{Xiong:2016fum}.} and C2 is an alternative choice~\cite{Fritzsch:2014gta}.
Then the haplon pairs can compose the SM particles as
$\nu=(\alpha\bar\ell)$, $e^-=(\beta\bar\ell)$, $u=(\alpha\bar c_k)$, $d=(\beta\bar c_k)$,
$W^-=(\bar\alpha\beta)$, $W^3=(\bar\alpha\alpha-\bar\beta\beta)/\sqrt{2}$,\dots,
and the new particles, e.g., a scalar leptoquark $S^{+2/3}_\ell=(\ell\bar c_k)$,
and the neutral scalars $S^0_\ell=(\ell\bar\ell)$ and $S^0_c=(c_k\bar c_k)$.
$W^3$ mixes with the photon $\gamma$ similarly to the mixing between $\gamma$ and $\rho^0$-meson.
The $H$ scalar can be a p-wave excitation of the $Z$, and
the second and third generations can be dynamical excitations.
Notice that $S_\ell^{+2/3}$, $S_c^0$ and $S_\ell^0$ states (if their masses are small) may contribute to the low-energy observables,
e.g., so-called, XYZ states~\cite{Ji:2016ffn}.
\footnote{
Notice that the SM results can be reproduced in some composite models, at least at the tree level,
due to the complementarity between Higgs phase and confining phase~\cite{'tHooft:1998pk,Calmet:2000th}.
}
\begin{table}
\centering
\caption{The haplon quantum numbers}
\label{tab-1}
\begin{tabular}{llllll}
\hline
Haplon & Spin [$\hslash$] & $Q$ [$|e|$] & C1 & C2 & ${\rm SU}(2)_h$ \\
\hline
$\alpha$ & $1/2$ & $+1/2$ & 3 & 1 & 2 \\
$\beta$ & $1/2$ & $-1/2$ & 3 & 1 & 2 \\
$\ell$ & 0 & $+1/2$ & $\bar3$ & 1 & 2 \\
$c_k$ & 0 & $-1/6$ & 3 & $3$ & 2 \\
\hline
\end{tabular}
\end{table}
However, new questions arise: Where does this peculiar haplon picture come from? Can it, in turn, result from
a substructure of haplons?
Consider the two scalar ``prepreons''\footnote{For supersymmetric models with ``prepreons'' see, e.g., Ref.~\cite{Pati:1983dh}.}
$\pi_k$ and $\bar\pi_k$, which are ${\rm SU}(3)$ triplets and have the electric charges
of $-1/6$ and $+1/6$, respectively.
Then the set of haplons with their electric and C2 color charges can be reproduced by the triples of ``prepreons''
($\bar\pi_{\bar r}\bar\pi_{\bar g}\bar\pi_{\bar b}\to\{\alpha,\ell\}$, $\pi_r\pi_g\pi_b\to\{\beta,\bar\ell\}$,
$\bar\pi_{\bar i}\,\pi_j\pi_l\to c_k$),
while an additional mechanism of spin generation is required.
One can think of a possible relation of spin to
circular color currents, similarly to some discussions in the context of
the condensed matter~\cite{chuu_chang_niu} and gravity~\cite{Burinskii:2016agf} theories,
taking into account that the distribution of matter in a composite state can be imagined (in particular,
in the ${\rm SU}(3)$ Yang-Mills theory)
in terms of the wave functions or probability distributions for the effective subcomponents of a finite
size~\cite{Glazek:2016vkl,Glazek:2011wf,Gomez-Rocha:2015esa}.
Then a ``spinning'' and ``nonspinning'' states of the same preon (e.g., $\alpha$ and $\ell$) may form
a supersymmetric multiplet.
Notice that the possibility of multihaplon
states such as $(\beta\bar c_k\bar\ell c_k)$, $(\alpha\bar\ell\beta\bar c_k\bar\beta c_k)$, etc.,
gains support from the recent discoveries of multiquark hadrons~\cite{Aaij:2015tga}.
\section{Excited leptons}
\label{sec-Ex-leptons}
The excited lepton states defined in the introduction can be particularly important if their masses are smaller than
the leptoquark and leptogluon masses, which can be natural due to the absence of the color charge.
The contact interactions among the SM fermions $f$ and the excited fermions $f^*$ can be generically written as~\cite{PDG2016}
\begin{eqnarray}\label{eq:CI}
&&\mathcal{L}_\text{CI} = \frac{g_*^2}{2\Lambda^2}
\sum_{\alpha,\beta=L,R} \left[ \eta_{\alpha\beta} (\bar f_\alpha\gamma^\mu f_\alpha)(\bar f_\beta\gamma_\mu f_\beta)
\right. \nonumber\\
&&+ \left.\eta_{\alpha\beta}^\prime (\bar f_\alpha\gamma^\mu f_\alpha)(\bar f_\beta^*\gamma_\mu f_\beta^*)
+ \tilde\eta_{\alpha\beta}^{\prime} (\bar f^*_\alpha\gamma^\mu f^*_\alpha)(\bar f_\beta^*\gamma_\mu f^*_\beta)
\right. \nonumber\\
&&+ \left.\eta_{\alpha\beta}^{\prime\prime} (\bar f_\alpha\gamma^\mu f_\alpha)(\bar f_\beta^*\gamma_\mu f_\beta)
+ \text{H.c.} + \dots \right],
\end{eqnarray}
where $\Lambda$ is the contact interaction scale, $g_*^2=4\pi$, and the new parameters are usually taken to satisfy $|\eta^j|\leq1$.
Assuming nearly maximal couplings of $|\eta^j|\simeq1$ and the excited fermion masses of $M_{f^*}\simeq\Lambda$,
the present lower bounds on the $\Lambda/\sqrt{|\eta^j|}$ ratios are of the order of
a few~TeV~\cite{PDG2016}.
However, if Eq.~\eqref{eq:CI} expresses ``residual'' effective interactions between the composites (with respect to
the fundamental interactions among their subcomponents), then the $|\eta^j|$ couplings can be small,
and even the case of $M_{f^*}\lesssim\Lambda<1$~TeV is not excluded for $|\eta^j|\ll1$ and $\Lambda/\sqrt{|\eta^j|}\gg M_{f^*}$.
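As a simple numerical illustration of the last statement (with parameter values chosen purely for the purpose of the example and not fitted to any data), a sub-TeV compositeness scale is compatible with multi-TeV bounds on the combination $\Lambda/\sqrt{|\eta^j|}$ once the couplings are small:
\begin{verbatim}
# Illustrative values only: a sub-TeV scale Lambda with a small coupling eta
# still gives a multi-TeV value of the experimentally bounded combination.
from math import sqrt

Lambda = 0.8      # TeV
eta    = 0.01     # |eta| in the contact Lagrangian above
print(Lambda / sqrt(eta))   # Lambda/sqrt(|eta|) = 8.0 TeV >> M_f*
\end{verbatim}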
A particular type of excited leptons that at low energies interact with the SM fermions dominantly through
the contact terms we refer as {\it leptomesons} (LM).\footnote{Notice that the same term ``leptomeson''
was used in the literature for the bound states of colored excitations of $e^+$ and $e^-$~\cite{Pitkanen:1992}.}
The relevant contact terms (with $\eta^{\prime\prime}$ couplings) can be realized, e.g., through the leptoquark exchange.
\begin{figure*}
\centering
\includegraphics[width=0.26\textwidth]{LG_tree_v2} \
\includegraphics[width=0.32\textwidth]{LG_loop1_v2} \,
\includegraphics[width=0.37\textwidth]{LG_loop2_v2}
\caption{Feynman diagrams for the discussed contributions to the CP asymmetry, where
the line direction shows either $L$ or
$B$ flow, ``$\times$'' represents a Majorana mass insertion,
and the black bulb represents a subprocess (e.g., a leptoquark exchange).}
\label{Fig:LG_diagrams}
\end{figure*}
\section{Baryogenesis}
\label{sec-BG}
Possible BG and dark matter generation by a scalar \mbox{4-haplon} state were considered in Ref.~\cite{Fritzsch:2014gta}.
In these proceedings we discuss whether fermionic LM states can provide a successful BG~\cite{Zhuridov:2016xls}.
Similarly to the sterile neutrino $\nu_R$ case, depending on the LM properties, deviation from thermal equilibrium can occur
at either production or freeze-out and decay (compare to the BG via $\nu_R$ oscillations~\cite{Akhmedov:1998qx} and
the usual leptogenesis~\cite{Fukugita_Yanagida}, respectively).
In both scenarios one should replace the Yukawa interactions of $\nu_R$ by the contact interactions of LMs,
which may result in promising effects.
\subsection{BG from LM oscillations}
\label{sec-BG-LM-osc}
Once created in the early universe, neutral long-lived LMs oscillate and interact with ordinary matter.
These processes do not violate the total lepton number $L^\text{tot}$ (for Dirac LMs).
However the oscillations violate $CP$ and therefore do not conserve individual lepton numbers $L_i$ for LMs.
Hence the initial state with all zero lepton numbers evolves into a state with $L^\text{tot}=L+\sum_i L_i=0$ but $L_i\neq0$.
At temperatures $T\ll \Lambda$ the LMs communicate
their lepton asymmetry to neutrinos $\nu_\ell$ and charged leptons $\ell$
through the effective interactions,
e.g., $B$-conserving (and $L$-conserving for Dirac LMs) vector coupling
\begin{eqnarray}\label{eq:4vertex}
\sum_{\psi_\ell,f,f^\prime}
\sum_{\alpha,\beta=L,R} \left[
\frac{\epsilon^{\alpha\beta}_{ff^\prime\psi_\ell}}{\Lambda^2}
(\bar f_\alpha\gamma^\mu f_\alpha^\prime)(\bar\psi_{\ell\beta}\gamma_\mu N_{\ell\beta}) \right.\nonumber\\
+ \left.\frac{\tilde\epsilon^{\alpha\beta}_{ff^\prime\psi_\ell}}{\Lambda^2}
(\bar\psi_{\ell\alpha}\gamma^\mu f_\alpha^\prime)(\bar f_{\beta}\gamma_\mu N_{\ell\alpha})
\right]
+ \text{H.c.},
\end{eqnarray}
where $\psi_\ell=\ell,\nu_\ell$ ($\ell=e,\mu,\tau$),
the constants
$\stackrel{\scriptscriptstyle{(\sim)}}\epsilon=4\pi\eta^{\prime\prime}$
can be real,
$f$ and $f^\prime$ denote either quarks or leptons
such that
$Q_{f_\alpha}+Q_{f^{\prime c}_\alpha}+Q_{\psi_{\ell\beta}}=0$,
and $N_\ell$ is the neutral LM flavor state related to the mass eigenstates $N_i$ as
$ N_{\ell\alpha} = \sum_{i=1}^n U^\alpha_{\ell i} N_i$.
Suppose that LMs of at least one type $N_i$ remain in
thermal equilibrium till the electroweak symmetry breaking
time $t_\text{EW}$ at which sphalerons become ineffective, and those of
at least one other type $N_j$ come out-of-equilibrium by $t_\text{EW}$.
Hence the lepton number of the former (latter) affects (has no effect on) the BG.
As a result, the final baryon asymmetry after $t_\text{EW}$
is nonzero.
At times $t\gg t_\text{EW}$ all LMs decay into leptons and quarks (hadrons).
For this reason they do not contribute to the dark matter in the universe, and do not spoil
the Big Bang nucleosynthesis.
The system of $n$ types of singlet LMs of a given momentum $k(t)\propto T(t)$ that interact
with the primordial plasma can be described by the $n\times n$ density matrix $\rho(t)$.
In a simplified picture it
satisfies the kinetic equation~\cite{Akhmedov:1998qx}
\begin{eqnarray}\label{eq:evolution}
i \frac{d\rho}{dt} = [\hat H,\rho] - \frac{i}{2}\{\Gamma,\rho\} + \frac{i}{2}\{\Gamma^p,1-\rho\},
\end{eqnarray}
where $\Gamma$ ($\Gamma^p$) is the destruction (production) rate, and the effective Hamiltonian can be written as
\begin{eqnarray}
\hat H = V(t) + U \frac{\hat M^2}{2k(t)} U^\dag,
\end{eqnarray}
where $\hat M^2=\text{diag}(M_1^2,\dots,M_n^2)$ with the masses $M_i$ of $N_i$, and $V$ is a real potential.
One of the main features of the discussed BG from LMs is that
the 4-particle interaction cross section that contributes to the destruction rate is
proportional to the squared total energy of the process $s$, instead of the inverse proportionality
that takes place in the BG from $\nu_R$
oscillations. Indeed, this cross section can be written as
\begin{eqnarray}\label{eq:cross_section}
\sigma(a+b\to c+d) =
\frac{C}{4\pi}|\epsilon|^2 \frac{s}{\Lambda^4}\propto s,
\end{eqnarray}
where
$a, b, c$ and $d$ denote the four interacting particles
($f$, $f^\prime$, $\psi_\ell$ and $N_\ell$), and $C=\mathcal{O}(1)$ is the constant
that includes the color factor in the case of the interaction with quarks.
As a result, the interaction rate that equilibrates LMs,
\begin{eqnarray}\label{eq:rate}
\Gamma \propto |\epsilon|^2 \frac{T^5}{\Lambda^4},
\end{eqnarray}
is suppressed by
$(T/\Lambda)^4$ with respect to the Higgs-mediated interaction rate
in the usual BG via $\nu_R$ oscillations.
The conditions that LMs of type $N_i$ remain in thermal equilibrium till $t_\text{EW}$,
while $N_j$ do not, can be written as
\begin{eqnarray}
\Gamma_i(T_\text{EW}) > H(T_\text{EW}), \qquad
\Gamma_j(T_\text{EW}) < H(T_\text{EW}),
\end{eqnarray}
where $H(T)$ is the Hubble expansion rate. Due to the
suppression factor of $(T_\text{EW}/\Lambda)^4$
successful BG can be realized with relatively large couplings $|\epsilon|$ compared with
the sterile neutrino Yukawa couplings of $Y\sim10^{-7}$ in the BG via $\nu_R$ oscillations~\cite{Akhmedov:1998qx}.
In particular, for $\Lambda\gtrsim10$ and 30~TeV we have $|\epsilon|\gtrsim10^{-4}$ and $10^{-3}$, respectively.
Hence the considered BG scenario can be relevant for the LHC and future colliders without an unnatural hierarchy of the couplings.
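A rough numerical check of these statements is sketched below. The overall $\mathcal{O}(1)$ prefactor of the rate, the value $g_*=106.75$ and $T_\text{EW}\simeq160$~GeV are our illustrative assumptions, so only the order of magnitude of the resulting $\Gamma/H$ ratio is meaningful; for the quoted pairs $(\Lambda,|\epsilon|)$ it indeed comes out of order one, i.e.\ near the equilibration boundary.
\begin{verbatim}
# Compare the LM interaction rate Gamma ~ |eps|^2 T^5 / Lambda^4 (O(1)
# prefactor set to one -- an assumption of this sketch) with the Hubble
# rate H(T) = 1.66 sqrt(g_*) T^2 / M_Pl at T_EW.
from math import sqrt

M_Pl, g_star, T_EW = 1.22e19, 106.75, 160.0     # GeV

def hubble(T):
    return 1.66 * sqrt(g_star) * T**2 / M_Pl

def gamma_LM(eps, T, Lam):
    return eps**2 * T**5 / Lam**4

for Lam_TeV, eps in [(10.0, 1e-4), (30.0, 1e-3)]:
    ratio = gamma_LM(eps, T_EW, 1e3 * Lam_TeV) / hubble(T_EW)
    print(f"Lambda = {Lam_TeV:.0f} TeV, |eps| = {eps:.0e}: Gamma/H ~ {ratio:.1f}")
\end{verbatim}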
\subsection{BG from LM decays}
\label{sec-BG-LM-dec}
Suppose that the neutral LMs are Majorana particles ($N=N^c$).
Consider their out-of-equilibrium, $CP$- and $L$-violating decays
in the early universe.
The relevant interactions can be written as
\begin{eqnarray}
\frac{\epsilon_{ff^\prime\psi_\ell}^{\alpha R}}{\Lambda^2} (\bar f_\alpha\gamma^\mu f_\alpha^\prime)
(\bar\psi_{\ell R}\gamma_\mu N_{\ell R}) +
\frac{\epsilon_{ff^\prime\psi_\ell}^{S}}{\Lambda^2} (\bar f_R f_L^\prime)
(\bar\psi_{\ell L} N_{\ell R}) \nonumber\\
+
\frac{\epsilon_{ff^\prime\psi_\ell}^{T}}{\Lambda^2} (\bar f \sigma^{\mu\nu} f^\prime)
(\bar\psi_{\ell L} \sigma_{\mu\nu} N_{\ell R}) + \text{H.c.} \hspace{1cm}
\end{eqnarray}
To be more specific in the following we consider the term
\begin{eqnarray}
\frac{\lambda_{\ell i}}{\Lambda^2} (\bar q_\alpha\gamma^\mu q_\alpha^\prime)
(\bar\ell_R\gamma_\mu N_{iR}),
\end{eqnarray}
where $\lambda_{\ell i}=\epsilon_{qq^\prime\ell}^{\alpha R} U^R_{\ell i}$ is a complex parameter.
Consider the interference of tree and one-loop diagrams\footnote{The same two-loop self-energy graph
as in Fig.~\ref{Fig:LG_diagrams} (right) was
discussed in the resonant BG scenarios of Refs.~\cite{Dev:2015uca,Davoudiasl:2015jja}, where the baryon asymmetry is
directly produced in the three-body decays of the sterile neutrinos.
Although these mechanisms involve $B$-violating interactions of $qqq\nu_R$ type, they do not lead to fast proton decay
due to the large values of $\nu_R$ mass and the $B$-violating interaction scale of $\mathcal{O}(1)$~TeV.
}
shown in Fig.~\ref{Fig:LG_diagrams}. The final $CP$ asymmetry that is produced in decays of the lightest LMs $N_1$
\begin{eqnarray}
\varepsilon_1 = \frac{1}{\Gamma_1}\, \sum_\ell [\Gamma(N_1 \to \ell_R q_\alpha q_\alpha^{\prime c})
- \Gamma(N_1 \to \ell_R^c q_\alpha^c q_\alpha^\prime)],
\end{eqnarray}
can be non-zero if $\text{Im}[ (\lambda^\dag \lambda)_{1j}^2 ] \neq 0$.
Using the width~\cite{Cakir:2002eu},
\begin{eqnarray}
\Gamma_1 &=&
{\sum_\ell[ \Gamma(N_1 \to \ell_R q_\alpha q_\alpha^{\prime c})
+ \Gamma(N_1 \to \ell_R^c q_\alpha^c q_\alpha^\prime)]} \nonumber\\
&\simeq& \frac{1}{128\pi^3}\, (\lambda^\dag \lambda)_{11} \frac{M_1^5}{\Lambda^4},
\end{eqnarray}
the condition for the decay parameter $K\equiv\Gamma_1 / H(M_1) > 3$
(strong washout regime) translates into the limit of
\begin{eqnarray}
(\lambda^\dag \lambda)_{11} \gtrsim 4 \times 10^{-7} \times \left( \frac{\Lambda}{10~\text{TeV}} \right)^4 \times
\left( \frac{1~\text{TeV}}{M_1} \right)^3.
\end{eqnarray}
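The following sketch evaluates this condition numerically for $M_1=1$~TeV and $\Lambda=10$~TeV, using $H(M_1)=1.66\sqrt{g_*}\,M_1^2/M_\text{Pl}$ with $g_*=106.75$; these numerical conventions are our own choice, so the result reproduces the quoted bound only up to an $\mathcal{O}(1)$ factor.
\begin{verbatim}
# Lower bound on (lambda^dag lambda)_11 from K = Gamma_1 / H(M_1) > 3,
# with Gamma_1 = (lambda^dag lambda)_11 M_1^5 / (128 pi^3 Lambda^4).
from math import sqrt, pi

M_Pl, g_star = 1.22e19, 106.75                  # GeV, our conventions
M1, Lam = 1e3, 1e4                              # GeV: M_1 = 1 TeV, Lambda = 10 TeV

H = 1.66 * sqrt(g_star) * M1**2 / M_Pl
lam2_min = 3 * H * 128 * pi**3 * Lam**4 / M1**5
print(f"(lambda^dag lambda)_11 > {lam2_min:.1e}")   # few x 10^-7
\end{verbatim}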
The final baryon asymmetry can be written as
\begin{eqnarray}
\frac{n_B-n_{\bar B}}{s} =
\left(-\frac{28}{79}\right)\times \frac{n_L-n_{\bar L}}{s}
= \left(-\frac{28}{79}\right)\times \frac{\varepsilon_1 \kappa}{g_*},
\end{eqnarray}
where $n_B$ ($n_L$) is the baryon (lepton) number density, $s$ is the entropy density,
$-28/79$ is the sphaleron lepton-to-baryon factor, and $\kappa\leq1$ is the washout coefficient that can be determined by solving
the set of Boltzmann equations.
The observed baryon asymmetry of the universe of~\cite{PDG2016}
\begin{eqnarray}\label{eq:eta_B}
\eta_B = \frac{n_B-n_{\bar B}}{n_\gamma} = 7.04 \times\frac{n_B-n_{\bar B}}{s} \simeq
6\times10^{-10},
\end{eqnarray}
where $n_\gamma$ is the photon number density,
can be produced, e.g., for
$K\sim100$ with the degeneracy factor of
\begin{eqnarray}
\mu \equiv \frac{M_2-M_1}{M_1} \lesssim 10^{-6} \left(\frac{M_1}{1~\text{TeV}}\right),
\end{eqnarray}
which enters the resonant $CP$ asymmetry of
\begin{eqnarray}
\varepsilon_i \sim \frac{\text{Im}\{[(\lambda^\dag\lambda)_{ij}]^2\}}{(\lambda^\dag\lambda)_{ii}
(\lambda^\dag\lambda)_{jj}} \frac{\Gamma_j}{M_j} \frac{M_iM_j}{M_i^2-M_j^2}
\sim \mu^{-1} \frac{\Gamma_1}{M_1}.
\end{eqnarray}
Notice that the discussed effective LM-quark-quark-lepton vertex can be realized, e.g.,
through the exchange of ${\rm SU}(2)_L$ singlet scalar leptoquark $S_{0R}$ with
$Y=1/3$.
Relevant interaction terms can be written as
\begin{eqnarray}
-\mathcal{L}_\text{int} &=& (g_{ij}\, \bar d_{R}^c N_{iR}
+ f_j\, \bar u^c_R \ell_R ) S_{0R}^j
+ \text{H.c.}
\end{eqnarray}
Then the above expressions are valid with the replacements of
$\lambda \to gf^*$ and $\Lambda \to M_{S_{0R}}$.
The values of the new couplings relevant for successful BG,
$|g|\sim |f|\sim 0.01$--$0.1$,
can be interesting for the collider searches.
\begin{figure}
\centering
\includegraphics[width=4.5cm,clip]{nu_mass_v2}
\caption{The discussed contribution to the neutrino masses (line direction shows $L$ flow).}
\label{Fig:nu_mass}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6.5cm,clip]{nu_mass_example_1_v2}
\caption{Leptoquark (LQ) contribution to the neutrino masses.}
\label{Fig:nu_mass_LQ}
\end{figure}
\subsection{Neutrino masses}
\label{sec-nu-mass}
For Majorana LMs the effective terms of
\begin{eqnarray}
\frac{\epsilon_{ff\nu_\ell}^{S}}{\Lambda^2} \bar f_R f_L \,
\bar\nu_{\ell L} N_{\ell R}
+ \frac{\epsilon_{ff\nu_\ell}^{T}}{\Lambda^2} \bar f\sigma^{\mu\nu} f \,
\bar\nu_{\ell L} \sigma_{\mu\nu} N_{\ell R}
+ \text{H.c.}
\end{eqnarray}
can generate the two-loop contributions to the observable light neutrino masses $m_{\nu_\ell}$
as illustrated for $f=q$ in Fig.~\ref{Fig:nu_mass}, and for a particular model with leptoquarks
in Fig.~\ref{Fig:nu_mass_LQ}.
A simple estimate of this contribution is
\begin{eqnarray}
m_{\nu_\ell} \sim \sum_i \frac{(\epsilon\, U_{\ell i}^R)^2}{(16\pi^2)^2}\ \frac{M_i^3 m_q^2}{\Lambda^4},
\end{eqnarray}
where $m_q$ is the quark mass.
Hence the present bound of $m_\nu\lesssim2$~eV~\cite{Aseev:2011dq} can be easily satisfied.
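A quick numerical evaluation of this estimate is given below; the chosen values of $\epsilon\, U^R_{\ell i}$, $M_i$, $m_q$ and $\Lambda$ are purely illustrative.
\begin{verbatim}
# Order-of-magnitude evaluation of the two-loop neutrino mass estimate.
from math import pi

def m_nu_eV(epsU, M_i, m_q, Lam):               # masses in GeV, result in eV
    return (epsU**2 / (16 * pi**2)**2) * M_i**3 * m_q**2 / Lam**4 * 1e9

print(m_nu_eV(0.01, 1e3, 5e-3, 1e4))            # light-quark loop: ~ 1e-11 eV
print(m_nu_eV(0.01, 1e3, 173.0, 1e4))           # top-quark loop:   ~ 1e-2  eV
\end{verbatim}
Both values lie far below the quoted bound of $2$~eV.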
\section{Conclusion}
\label{sec-conclusion}
Two new testable baryogenesis scenarios in the models with excited leptons are
introduced, which do not contradict the observed neutrino masses.
First, {\it the BG from LM oscillations} may take place for relatively light and long-lived LMs,
which do not all decay before $t_\text{EW}$.
Second, {\it the BG from LM decays} can be realized if all LMs decay before $t_\text{EW}$.
Particular models based on the former (latter) BG proposal require a detailed study of the neutrino potential
(of the Boltzmann equations) to be verified in future experiments.
In both scenarios the baryon number is violated only by the sphaleron processes,
which do not affect the proton stability.
Due to the relatively low temperatures of the discussed BG mechanisms,
an analog of the gravitino problem~\cite{Khlopov1,Balestra} does not arise here.
\section*{Acknowledgements}
The author thanks Marek Zra{\l}ek, Henryk Czy\.z, Bhupal Dev
and Yue Zhang for useful discussions.
This work was supported in part by the Polish National Science Centre, grant number DEC-2012/07/B/ST2/03867.
The author used JaxoDraw~\cite{Binosi:2003yf} to draw the Feynman diagrams.
|
1,116,691,499,865 | arxiv | \section{Introduction}
\vspace*{-5mm}
\subsection{Model problem}
By now, the thorough mathematical understanding of convergence and quasi-optimality of $h$-adaptive FEM for second-order elliptic PDEs
has matured. However, the focus of the numerical analysis usually lies on model problems with homogeneous Dirichlet conditions,
i.e.\ $-\Delta u = f$ in $\Omega$ with $u=0$ on $\Gamma=\partial\Omega$,
see e.g.~\cite{ckns,doerfler,ks,mns,stevenson}. \dpr{On a bounded Lipschitz domain $\Omega\subset{\mathbb R}^2$ with polygonal boundary $\Gamma = \partial\Omega$, we consider}
\begin{align}\label{eq:strongform}
\begin{split}
-\Delta u &= f \quad \text{in } \Omega,\\
u &= g \quad \text{on } \Gamma_D,\\
\partial_n u &= \phi \quad \text{on } \Gamma_N
\end{split}
\end{align}
with mixed Dirichlet-Neumann boundary conditions.
\dpr{The boundary $\Gamma$} is split into two relatively open boundary parts, namely the Dirichlet boundary $\Gamma_D$ and the Neumann boundary $\Gamma_N$, i.e.\ $\Gamma_D \cap \Gamma_N = \emptyset$ and $\overline\Gamma_D \cup \overline\Gamma_N = \Gamma$. \dpr{We assume the surface measure of the Dirichlet boundary to be positive $|\Gamma_D|>0$,}
whereas $\Gamma_N$ is allowed to be empty. The given data formally satisfy $f \in \widetilde H^{-1}(\Omega)$, $g \in H^{1/2}(\Gamma_D)$, and $\phi \in H^{-1/2}(\Gamma_N)$.
As is usually required to derive (localized) a~posteriori error estimators, we assume additional regularity of the given data, namely $f \in L^2(\Omega)$, $g \in H^1(\Gamma_D)$, and $\phi \in L^2(\Gamma_N)$.
\new{Whereas certain work on a posteriori error estimation for~\eqref{eq:strongform} has been done, cf.\ \cite{bcd,sv}, none of the proposed
adaptive algorithms have been proven to converge.}
\new{While} the inclusion of inhomogeneous Neumann conditions $\phi$ into the \new{convergence} analysis seems to be obvious,
incorporating inhomogeneous Dirichlet conditions $g$ is technically more demanding
and requires novel ideas. First, discrete finite element functions cannot satisfy general inhomogeneous Dirichlet conditions. Therefore, the adaptive algorithm has to deal with an additional discretization $g_\ell$
of $g$. Second, this additional error has to be controlled in the natural trace space which is
the fractional-order
Sobolev space $H^{1/2}(\Gamma_D)$. Since the $H^{1/2}$-norm is non-local, the a~posteriori error analysis requires appropriate
localization techniques. These have recently been developed in the context of adaptive boundary element
methods~\cite{agp,cp:symm,cp:hypsing,effp,efgp,kp}:
Under certain orthogonality properties of $g-g_\ell\in H^1(\Gamma_D)$, the natural trace norm $\norm{g-g_\ell}{H^{1/2}(\Gamma_D)}$ is bounded by a locally weighted $H^1$-seminorm $\norm{h_\ell^{1/2}(g-g_\ell)'}{L^2(\Gamma_D)}$. Here, $h_\ell$ is the local mesh-width, and $(\cdot)'$ denotes the arclength derivative.
Finally, in contrast to homogeneous Dirichlet conditions $g=0$, we loose the Galerkin orthogonality in energy norm. This leads to certain technicalities to derive a contractive quasi-error which is equivalent to the overall Galerkin error in $H^1(\Omega)$.
In conclusion, quasi-optimality and even plain convergence of adaptive FEM with non-homogeneous Dirichlet data is a nontrivial
task. To the best of our knowledge, only~\cite{MNS03} analyzes convergence of adaptive FEM with inhomogeneous Dirichlet data. While the authors also consider the 2D model problem~\eqref{eq:strongform}
with $\Gamma_D=\Gamma$ and lowest-order elements, their analysis relies on an artificial non-standard marking criterion.
Quasi-optimal convergence rates are not analyzed and can hardly be expected in general~\cite{ckns}.
It is well-known that the Poisson problem~\eqref{eq:strongform} admits a unique weak solution $u\in H^1(\Omega)$ with $u = g$ on $\Gamma_D$ in the sense of traces which solves the variational formulation
\begin{align}\label{eq:weakform}
\dual{\nabla u}{\nabla v}_\Omega
&= \dual{f}{v}_\Omega + \dual{\phi}{v}_{\Gamma_N}
\quad \text{for all } v \in H^1_D(\Omega).
\end{align}
Here, the test space reads $H^1_D(\Omega) = \set{v\in H^1(\Omega)}{v = 0 \text{ on } \Gamma_D \text{ in the sense of traces}}$, and $\dual\cdot\cdot$ denotes the respective $L^2$-scalar products.
\subsection{Discretization}
For the Galerkin discretization, let ${\mathcal T}_\ell$ be a regular triangulation of $\Omega$ into triangles $T\in{\mathcal T}_\ell$. We use lowest-order conforming elements, where the ansatz space reads
\begin{align}
{\mathcal S}^1({\mathcal T}_\ell)
= \set{V_\ell\in C(\overline\Omega)}{V_\ell|_T \text{ is affine for all }T\in{\mathcal T}_\ell}.
\end{align}
Since a discrete function $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ cannot satisfy \new{general} continuous Dirichlet conditions, we have to discretize the given data $g \in H^1(\Gamma_D)$. According to the Sobolev inequality on the 1D manifold $\Gamma_D$, the given Dirichlet data are continuous on $\overline\Gamma_D$. Therefore, the nodal interpoland $g_\ell$ of $g$ is well-defined. As is usually done in practice, we approximate $g\approx g_\ell$.
Again, it is well-known that there is a unique $U_\ell \in {\mathcal S}^1({\mathcal T}_\ell)$ with $U_\ell = g_\ell$ on $\Gamma_D$ which solves the Galerkin formulation
\begin{align}\label{eq:galerkin}
\dual{\nabla U_\ell}{\nabla V_\ell}_\Omega
&= \dual{f}{V_\ell}_\Omega + \dual{\phi}{V_\ell}_{\Gamma_N}
\quad\text{for all }V_\ell \in {\mathcal S}^1_D({\mathcal T}_\ell).
\end{align}
Here, the test space is given by ${\mathcal S}^1_D({\mathcal T}_\ell) = {\mathcal S}^1({\mathcal T}_\ell) \cap H^1_D(\Omega) = \set{V_\ell \in {\mathcal S}^1({\mathcal T}_\ell)}{V_\ell = 0 \text{ on } \Gamma_D}$.
\subsection{A~posteriori error estimation}
An element-based residual error estimator for this discretization reads
\begin{align}\label{eq1:estimator:T}
\rho_\ell^2 = \sum_{T\in{\mathcal T}_\ell}\rho_\ell(T)^2
\end{align}
with corresponding refinement indicators
\begin{align}\label{eq2:estimator:T}
\begin{split}
\rho_\ell(T)^2
&:= |T|\,\norm{f}{L^2(T)}^2
\\&\quad
+ |T|^{1/2}\big(\norm{[\partial_nU_\ell]}{L^2(\partial T\cap\Omega)}^2
+ \norm{\phi-\partial_nU_\ell}{L^2(\partial T\cap\Gamma_N)}^2
+ \norm{(g-g_\ell)'}{L^2(\partial T\cap\Gamma_D)}^2\big),
\end{split}
\end{align}
where $[\cdot]$ denotes the jump across edges.
We prove reliability and efficiency of $\rho_\ell$ (Proposition~\ref{prop:reliability:rho}) and discrete local reliability (Proposition~\ref{prop:dlr:rho}). Inspired by~\cite{pp}, we introduce an edge-based error estimator $\varrho_\ell$ which reads
\begin{align}\label{eq1:estimator:E}
\varrho_\ell^2 = \sum_{E\in{\mathcal E}_\ell}\varrho_\ell(E)^2.
\end{align}
For an edge $E\in{\mathcal E}_\ell$, its local contributions read
\begin{align}\label{eq2:estimator:E}
\varrho_\ell(E)^2 = \begin{cases}
|E|\norm{[\partial_nU_\ell]}{L^2(E)}^2 + |\omega_{\ell,E}|\norm{f-f_{\omega_{\ell,E}}}{\omega_{\ell,E}}^2
\quad&\text{if }E\subset\Omega,\\
|E|\norm{\phi-\partial_nU_\ell}{L^2(E)}^2
&\text{if }E\subseteq\Gamma_N,\\
|E|\norm{(g-g_\ell)'}{L^2(E)}^2
&\text{if }E\subseteq\Gamma_D.
\end{cases}
\end{align}
Here, $\omega_{\ell,E}\subset\Omega$ denotes the edge patch, and $f_{\omega_{\ell,E}}$ denotes the corresponding integral mean.
The advantage of $\varrho_\ell$ is that the volume residual terms $|T|^{1/2}\norm{f}{L^2(T)}$ in~\eqref{eq2:estimator:T} are replaced by the edge oscillations $|\omega_{\ell,E}|^{1/2}\norm{f-f_{\omega_{\ell,E}}}{\omega_{\ell,E}}$, which are generically of higher order.
\new{The choice of $|E|\norm{(g-g_\ell)'}{L^2(E)}^2$ to measure the contribution of the Dirichlet data approximation is influenced by
the Dirichlet data oscillations, cf.\ Section~\ref{section:oscillations} below.}
We prove that
$\rho_\ell$ and $\varrho_\ell$ are locally equivalent (Lemma~\ref{lemma:local}) and thus obtain reliability and efficiency of $\varrho_\ell$ (Proposition~\ref{prop:reliability}) as well as discrete local reliability (Proposition~\ref{prop:dlr}).
\subsection{Adaptive algorithm}
We use the local contributions of $\varrho_\ell$ to mark edges for refinement in a realization (Algorithm~\ref{algorithm:doerfler})
of the standard adaptive loop (AFEM)
\begin{align}
\boxed{\texttt{solve}}
\quad\to\quad
\boxed{\texttt{estimate}}
\quad\to\quad
\boxed{\texttt{mark}}
\quad\to\quad
\boxed{\texttt{refine}}
\end{align}
Our adaptive algorithm uses variants of the well-studied D\"orfler marking~\cite{doerfler} to mark certain edges for refinement.
Throughout, we use newest vertex bisection, and at least marked edges are bisected. Given some initial mesh ${\mathcal T}_0$,
the algorithm generates successively locally refined meshes ${\mathcal T}_\ell$ with corresponding discrete solutions $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ of~\eqref{eq:galerkin}.
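For the reader's convenience, the following sketch isolates the standard D\"orfler marking step: given the local (squared) indicators, an (essentially) minimal set of edges whose contributions sum up to a prescribed fraction $\theta$ of the total is obtained by sorting. This is only a generic illustration and is not tied to the particular variants used in Algorithm~\ref{algorithm:doerfler}.
\begin{verbatim}
# Doerfler marking: find a minimal set M of edges such that
#     sum_{E in M} eta(E)^2  >=  theta * sum_{E} eta(E)^2 .
import numpy as np

def doerfler_marking(eta_sq, theta=0.5):
    order = np.argsort(eta_sq)[::-1]                 # largest indicators first
    cumulative = np.cumsum(eta_sq[order])
    n_marked = int(np.searchsorted(cumulative, theta * eta_sq.sum())) + 1
    return order[:n_marked]                          # indices of marked edges

eta_sq = np.array([0.30, 0.05, 0.20, 0.01, 0.44])
marked = doerfler_marking(eta_sq, theta=0.5)
print(marked, eta_sq[marked].sum() / eta_sq.sum())   # [4 0]  0.74
\end{verbatim}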
\subsection{Main results}
The first main result (Theorem~\ref{thm:contraction}) states that the \new{adaptive} algorithm leads to a contraction
\begin{align}
\Delta_{\ell+1} \le \kappa\,\Delta_\ell
\quad\text{for all }\ell\in{\mathbb N}_0
\text{ and some constant }0<\kappa<1
\end{align}
for some quasi-error quantity $\Delta_\ell\simeq\varrho_\ell^2$ which is equivalent to the error estimator. In particular, this proves linear convergence of the adaptively generated solutions $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ to the (unknown) weak solution $u\in H^1(\Omega)$ of~\eqref{eq:weakform}.
The main ingredients of the proof are an equivalent error estimator $\widetilde\varrho_\ell\simeq\varrho_\ell$ for which we prove some estimator reduction
\begin{align}
\widetilde\varrho_{\ell+1}^{\,2} \le q\,\widetilde\varrho_\ell^{\,2} + C\,\norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2
\quad\text{for all }\ell\in{\mathbb N}_0
\text{ and some }0<q<1
\text{ and }C>0,
\end{align}
see Lemma~\ref{lemma:reduction}, and a quasi-Galerkin orthogonality in Lemma~\ref{lemma:orthogonality}, whereas the general concept follows that of~\cite{ckns}.
The second main result is Theorem~\ref{thm:quasioptimality} which states that the outcome of the adaptive \new{algorithm} is
quasi-optimal in the sense of Stevenson~\cite{stevenson}: Provided the given data
$(f,g,\phi)\in L^2(\Omega)\times H^1(\Gamma_D)\times L^2(\Gamma_N)$ and the corresponding weak solution
$u\in H^1(\Omega)$ of~\eqref{eq:weakform} belong to the approximation class
\begin{align}\label{eq:optimal:class}
\mathbb A_s := \set{(u,f,g,\phi)}{\norm{(u,f,g,\phi)}{\mathbb A_s}:=\sup_{N\in{\mathbb N}}\big(N^s\sigma(N,u,f,g,\phi)\big)<\infty}
\end{align}
with
\begin{align}\label{eq:optimal:norm}
\begin{split}
\sigma(N,u,f,g,\phi)^2 := \inf_{{\mathcal T}_*\in\mathbb T_N} \Big\{&\inf_{W_*\in{\mathcal S}^1({\mathcal T}_*)}
\norm{\nabla(u-W_*)}{L^2(\Omega)}^2
+\new{\oscD{*}^2}
+ \oscT{*}^2 + \oscN{*}^2\Big\},
\end{split}
\end{align}
the adaptively generated solutions also yield convergence order ${\mathcal O}(N^{-s})$, i.e.
\begin{align}\label{eq:optimal:order}
\norm{u-U_\ell}{H^1(\Omega)}
\lesssim \big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \new{\oscD{\ell}^2}\big)^{1/2}
\lesssim (\#{\mathcal T}_\ell-\#{\mathcal T}_0)^{-s}.
\end{align}
Here, $\mathbb T_N$ denotes the set of all triangulations ${\mathcal T}_*$ which can be obtained by local refinement of the initial mesh
${\mathcal T}_0$ such that $\#{\mathcal T}_*-\#{\mathcal T}_0\le N$. Moreover, $\oscT{*}, \new{\oscD{*}}$, and $\oscN{*}$ denote the data oscillations of the volume data $f$,
\new{the Dirichlet data $g$}, and
the Neumann data $\phi$, see Section~\ref{section:oscillations}.
The ingredients for the proof are the observation that the proposed marking \new{strategy is} optimal \new{(Proposition~\ref{prop:doerfler})} and the C\'ea-type estimate
\begin{align}
\begin{split}
&\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \new{\oscD{\ell}^2}
\le
\c{szoptimality}\big(\inf_{W_\ell \in {\mathcal S}^1({\mathcal T}_\ell)}
\norm{\nabla(u-W_\ell)}{L^2(\Omega)}^2 \!+\! \new{\oscD{\ell}^2} \big)
\end{split}
\end{align}
for the Galerkin solution $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ in Lemma~\ref{lem:quasiopt}.
\new{For 3D, nodal interpolation of the Dirichlet data $g\in H^1(\Gamma)$ is not well-defined. In the literature, it is proposed to discretize $g$ by use of the
$L^2$-projection~\cite{bcd} or the Scott-Zhang projection~\cite{sv}. Our third theorem (Theorem~\ref{thm:3Dconvergence}) states convergence of the
adaptive algorithm for either choice in 2D as well as 3D.
\dpr{The proof relies on the analytical observation that, under adaptive mesh-refinement, the Scott-Zhang projection converges pointwise to a limiting operator (Lemma~\ref{lem:apriori:sz}), which
might be of independent interest.}
Finally, we stress that the same results (Thm.~\ref{thm:contraction},~\ref{thm:quasioptimality},~\ref{thm:3Dconvergence}) hold if the element-based estimator
$\rho_\ell$ from~\eqref{eq1:estimator:T}--\eqref{eq2:estimator:T} instead of the edge-based estimator $\varrho_\ell$ is used and if Algorithm~\ref{algorithm:doerfler}
marks certain elements for refinement.}
\subsection{Outline}
The remainder of this paper is organized as follows: We first collect some necessary preliminaries on, e.g., newest vertex bisection
(Section~\ref{section:nvb}) and the Scott-Zhang quasi-interpolation operator (Section~\ref{section:sz}).
Section~\ref{section:aposteriori} contains the analysis of the a~posteriori error estimators $\rho_\ell$
from~\eqref{eq1:estimator:T}--\eqref{eq2:estimator:T} and $\varrho_\ell$ from~\eqref{eq1:estimator:E}--\eqref{eq2:estimator:E}.
Moreover, we state \new{the adaptive algorithm} in Section~\ref{section:doerfler}. The
convergence is \new{shown} in Section~\ref{section:convergence}, while the quasi-optimality results are found in
Section~\ref{section:optimal}. Whereas the major part of the paper is concerned with the 2D model problem,
Section~\ref{section:remarks3d} considers convergence of AFEM for 3D. Finally, some numerical experiments conclude the work.
\section{Preliminaries}
\vspace*{-3mm}
\subsection{Notation}\label{sec:notation}
Throughout, ${\mathcal T}_\ell$ denotes a regular triangulation which is obtained by $\ell$ steps of (local) newest vertex bisection from a given initial triangulation ${\mathcal T}_0$. By $\mathcal K_\ell:=\mathcal K_\ell^\Omega\cup \mathcal K_\ell^\Gamma$, we denote the set of all nodes of ${\mathcal T}_\ell$, where $\mathcal K_\ell^\Omega$ denotes the set of all interior nodes and $\mathcal K_\ell^\Gamma$ the set of all boundary nodes. By ${\mathcal E}_\ell$, we denote the set of all edges of ${\mathcal T}_\ell$, which is split into the interior edges ${\mathcal E}_\ell^\Omega = \set{E\in{\mathcal E}_\ell}{E\cap\Omega\neq\emptyset}$ and the boundary edges ${\mathcal E}_\ell^\Gamma = {\mathcal E}_\ell\backslash{\mathcal E}_\ell^\Omega$. We restrict ourselves to meshes ${\mathcal T}_\ell$ such that each $T\in {\mathcal T}_\ell$ has an interior node, i.e.\ $\partial T \cap \mathcal K_\ell^\Omega \neq \emptyset$. Note that this is only an assumption on the initial mesh ${\mathcal T}_0$. We assume that the partition of $\Gamma$ into the Dirichlet boundary $\Gamma_D$ and the Neumann boundary $\Gamma_N$ is resolved, i.e.\ ${\mathcal E}_\ell^\Gamma$ is split into ${\mathcal E}_\ell^D=\set{E\in{\mathcal E}_\ell}{E\subseteq\overline\Gamma_D}$ and
${\mathcal E}_\ell^N=\set{E\in{\mathcal E}_\ell}{E\subseteq\overline\Gamma_N}$. Note that ${\mathcal E}_\ell^D$ (resp.\ ${\mathcal E}_\ell^N$) provides a partition of
$\Gamma_D$ (resp.\ $\Gamma_N$).
For a node $z\in\mathcal K_\ell$, the corresponding patch is defined by
\begin{align}\label{eq:patch:z}
\omega_{\ell,z} = \bigcup\set{T\in{\mathcal T}_\ell}{z\in\partial T}.
\end{align}
For an edge $E\in{\mathcal E}_\ell$, the edge patch is defined by
\begin{align}\label{eq:patch:E}
\omega_{\ell,E} = \bigcup\set{T\in{\mathcal T}_\ell}{E\subset\partial T}.
\end{align}
Moreover, for a given node $z\in\mathcal K_\ell$,
\begin{align}
{\mathcal E}_{\ell,z} = \bigcup\set{E\in{\mathcal E}_\ell}{z\in E}
\end{align}
denotes the star of edges originating at $z$.
\begin{figure}[t]
\centering
\psfrag{T0}{}
\psfrag{T1}{}
\psfrag{T2}{}
\psfrag{T3}{}
\psfrag{T4}{}
\psfrag{T12}{}
\psfrag{T34}{}
\includegraphics[width=35mm]{bisec1_00} \quad
\includegraphics[width=35mm]{bisec2left_00} \quad
\includegraphics[width=35mm]{bisec2right_00} \quad
\includegraphics[width=35mm]{bisec3_00} \\
\includegraphics[width=35mm]{bisec1_01} \quad
\includegraphics[width=35mm]{bisec2left_01}\quad
\includegraphics[width=35mm]{bisec2right_01}\quad
\includegraphics[width=35mm]{bisec3_01}
\caption{
For each triangle $T\in{\mathcal T}_\ell$, there is one fixed \emph{reference edge},
indicated by the double line (left, top). Refinement of $T$ is done by bisecting
the reference edge, where its midpoint becomes a new node. The reference
edges of the son triangles $T'\in{\mathcal T}_{\ell+1}$ are opposite to this newest vertex (left, bottom).
To avoid hanging nodes, one proceeds as follows:
We assume that certain edges of $T$, but at least the reference edge,
are marked for refinement (top).
Using iterated newest vertex bisection, the element is then split into
2, 3, or 4 son triangles (bottom).}
\label{fig:nvb}
\end{figure}
\subsection{Newest vertex bisection}
\label{section:nvb}%
Throughout, we assume that newest vertex bisection is used for mesh-refinement, see Figure~\ref{fig:nvb}.
Let ${\mathcal T}_\ell$ be a given mesh and ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ an arbitrary set of marked edges. Then, \begin{align}
{\mathcal T}_{\ell+1}={\tt refine}({\mathcal T}_\ell,{\mathcal M}_\ell)
\end{align}
denotes the coarsest regular triangulation such that all marked edges $E\in{\mathcal M}_\ell$ have been bisected. Moreover, we write
\begin{align}
{\mathcal T}_* = {\tt refine}({\mathcal T}_\ell)
\end{align}
if ${\mathcal T}_*$ is a finite refinement of ${\mathcal T}_\ell$, i.e., there are finitely many triangulations ${\mathcal T}_{\ell+1},\dots,{\mathcal T}_n$ and sets of marked edges ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell,\dots,{\mathcal M}_{n-1}\subseteq{\mathcal E}_{n-1}$ such that ${\mathcal T}_*={\mathcal T}_n$ and ${\mathcal T}_{j+1}={\tt refine}({\mathcal T}_j,{\mathcal M}_j)$ for all $j=\ell,\dots,n-1$.%
We stress that, for a fixed initial mesh ${\mathcal T}_0$, only finitely many shapes of triangles $T\in{\mathcal T}_\ell$ appear. In particular, only finitely many shapes of patches~\eqref{eq:patch:z}--\eqref{eq:patch:E} appear. This observation will be used below. Moreover, newest vertex bisection guarantees that any sequence ${\mathcal T}_\ell$ of generated meshes with ${\mathcal T}_{\ell+1}={\tt refine}({\mathcal T}_\ell)$ is uniformly shape regular in the sense of
\begin{align}
\sup_{\ell\in{\mathbb N}}\sigma({\mathcal T}_\ell)<\infty,
\quad\text{where}\quad
\sigma({\mathcal T}_\ell)=\max_{T\in{\mathcal T}_\ell}\frac{{\rm diam}(T)^2}{|T|}.
\end{align}%
Further details are found in~\cite[Chapter~4]{verfuerth}.
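The following small sketch illustrates the bisection rule and the uniform shape regularity numerically. The data structure -- a triangle stored as a vertex triple whose first two vertices span the reference edge -- is merely one possible convention chosen for this sketch; the quantity ${\rm diam}(T)^2/|T|$ is observed to stay bounded under repeated bisection, since only finitely many similarity classes of triangles occur.
\begin{verbatim}
# Newest vertex bisection of a single triangle.  Convention of this sketch:
# a triangle is a tuple (a, b, c) of vertex coordinates with reference edge
# (a, b); its midpoint becomes the newest vertex, and the reference edges of
# both sons are opposite to it.
import numpy as np

def bisect(tri):
    a, b, c = tri
    m = 0.5 * (a + b)                      # newest vertex
    return [(c, a, m), (b, c, m)]          # sons with reference edges (c,a), (b,c)

def sigma(tri):                            # shape regularity: diam(T)^2 / |T|
    a, b, c = tri
    diam = max(np.linalg.norm(a - b), np.linalg.norm(b - c), np.linalg.norm(c - a))
    area = 0.5 * abs((b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0]))
    return diam**2 / area

tris = [tuple(np.array(p, float) for p in [(0, 0), (1, 0), (0.3, 0.9)])]
for level in range(8):                     # uniform refinement of all triangles
    tris = [son for tri in tris for son in bisect(tri)]
    print(level, len(tris), round(max(sigma(t) for t in tris), 3))
\end{verbatim}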
\subsection{Scott-Zhang quasi-interpolation and discrete lifting operator}
\label{section:sz}%
Our analysis below makes heavy use of the Scott-Zhang projection $P_\ell :H^1(\Omega) \to {\mathcal S}^1({\mathcal T}_\ell)$ from~\cite{sz}: For all nodes $z\in\mathcal K_\ell$, one chooses an edge $E_z\in{\mathcal E}_\ell$ with $z\in E_z$. For $z\in \Gamma$, this choice is restricted to $E_z \subset \Gamma$. Moreover, for $z\in\overline\Gamma_D$, we even enforce $E_z\subset\overline\Gamma_D$. For $w \in H^1(\Omega)$,
$P_\ell w$ is then defined by
\begin{align*}
(P_\ell w)(z):= \dual{\psi_z}{w}_{E_z},
\end{align*}
for a node $z\in\mathcal K_\ell$. Here, $\psi_z \in L^2(E_z)$ denotes the dual basis function defined by $\dual{\psi_z}{\varphi_{z^\prime}}_{E_z}=\delta_{zz^\prime}$, and
$\varphi_z\in{\mathcal S}^1({\mathcal T}_\ell)$ denotes the hat function associated with $z\in\mathcal K_\ell$.
By definition, we then have the following projection properties
\begin{itemize}
\item $P_\ell W_\ell = W_\ell$ for all $W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$,
\item $(P_\ell w)|_\Gamma = w|_\Gamma$ for all $w\in H^1(\Omega)$ and $W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ with $w|_\Gamma = W_\ell|_\Gamma$,
\item $(P_\ell w)|_{\Gamma_D} = w|_{\Gamma_D}$ for all $w\in H^1(\Omega)$ and $W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ with $w|_{\Gamma_D} = W_\ell|_{\Gamma_D}$,
\end{itemize}
i.e.\ the projection $P_\ell$ preserves discrete (Dirichlet) boundary data.
Moreover, $P_\ell$ satisfies the following stability property
\begin{align}
\label{eq:scottzhangstability}
\normHe{(1-P_\ell) w}{\Omega} \leq \c{scottzhang}\,\norm{\nabla w}{L^2(\Omega)}\quad \text{for all } w \in H^1(\Omega)
\end{align}
and approximation property
\begin{align}
\label{eq:scottzhangapprox}
\norm{(1-P_\ell) w}{L^2(\Omega)} \leq \c{scottzhang}\,\norm{h_\ell\nabla w}{L^2(\Omega)}\quad \text{for all } w \in H^1(\Omega)
\end{align}
where $\setc{scottzhang}>0$ depends only on $\sigma({\mathcal T}_\ell)$.
Together with the projection property onto ${\mathcal S}^1({\mathcal T}_\ell)$, it is an easy consequence of the stability~\eqref{eq:scottzhangstability} of $P_\ell$ that
\begin{align}
\label{eq:quasioptscottzhang}
\normHe{(1-P_\ell)w}{\Omega}
=\min_{W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)}\normHe{(1-P_\ell)(w-W_\ell)}{\Omega}
\lesssim\min_{W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)}\norm{\nabla(w-W_\ell)}{L^2(\Omega)}
\end{align}
for all $w\in H^1(\Omega)$. In particular, $P_\ell$ is quasi-optimal in the sense of the C\'ea lemma with respect to $\norm{\cdot}{H^1(\Omega)}$ and $\norm{\nabla(\cdot)}{L^2(\Omega)}$, i.e.
\begin{align}
\begin{split}
\normHe{(1-P_\ell)w}{\Omega}
&\lesssim\min_{W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)}\norm{w-W_\ell}{H^1(\Omega)},\\
\norm{\nabla(1-P_\ell)w}{L^2(\Omega)}
&\lesssim\min_{W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)}\norm{\nabla(w-W_\ell)}{L^2(\Omega)}.
\end{split}
\end{align}
Moreover, $P_\ell$ allows to define a discrete lifting operator
\begin{align}
\label{eq:discreteLifting}
{\mathcal L}_\ell := P_\ell {\mathcal L} \,:\,{\mathcal S}^1({\mathcal E}_\ell^\Gamma) \to {\mathcal S}^1({\mathcal T}_\ell),
\quad\text{i.e. }
{\mathcal L}_\ell(W_\ell|_\Gamma)|_\Gamma = W_\ell|_\Gamma
\quad\text{for all }W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)
\end{align}
whose operator norm is uniformly bounded in terms of $\sigma({\mathcal T}_\ell)$. Here, ${\mathcal L}\in L(H^{1/2}(\Gamma); H^1(\Omega))$ denotes an arbitrary lifting operator, i.e.\ $({\mathcal L} w)|_\Gamma = w$ for all $w\in H^{1/2}(\Gamma)$, see e.g.\ \cite{mclean}.
Finally, we put emphasis on the fact that our definition of $P_\ell$ also provides an operator $P_\ell = P_\ell^\Gamma :L^2(\Gamma) \to {\mathcal S}^1({\mathcal E}_\ell^\Gamma)$ which is consistent in the sense that $(P_\ell v)|_\Gamma = P_\ell^\Gamma (v|_\Gamma)$ for all $v\in H^1(\Omega)$.
Using the definition of $H^{1/2}(\Gamma)$ as the trace space of $H^1(\Omega)$
and the stability~\eqref{eq:scottzhangstability}, we see
\begin{align*}
\norm{\widehat g-P_\ell \widehat g}{H^{1/2}(\Gamma)}
&:= \inf\set{\normHe{w}{\Omega}}{w\in H^1(\Omega), w|_{\Gamma} = \widehat g-P_\ell \widehat g}\\
&\le \inf\set{\norm{w-P_\ell w}{H^1(\Omega)}}{w\in H^1(\Omega), w|_\Gamma=\widehat g}\\
&\lesssim\inf\set{\norm{\nabla w}{L^2(\Omega)}}{w\in H^1(\Omega), w|_\Gamma=\widehat g}\\
&\le \inf\set{\norm{w}{H^1(\Omega)}}{w\in H^1(\Omega), w|_\Gamma=\widehat g}
= \norm{\widehat g}{H^{1/2}(\Gamma)}
\end{align*}
for all $\widehat g\in H^{1/2}(\Gamma)$, i.e.\ $P_\ell:H^{1/2}(\Gamma)\to{\mathcal S}^1({\mathcal E}_\ell^\Gamma)$ is a continuous projection with respect to the $H^{1/2}$-norm. In particular, $P_\ell$ also provides a continuous projection $P_\ell = P_\ell^D:H^{1/2}(\Gamma_D)\to{\mathcal S}^1({\mathcal E}_\ell^{D})$, since
\begin{align*}
\norm{g-P_\ell g}{H^{1/2}(\Gamma_D)}
&= \inf\set{\norm{\widehat g-P_\ell \widehat g}{H^{1/2}(\Gamma)}}{\widehat g\in H^{1/2}(\Gamma), \widehat g|_{\Gamma_D}=g}\\
&\lesssim \inf\set{\norm{\widehat g}{H^{1/2}(\Gamma)}}{\widehat g\in H^{1/2}(\Gamma), \widehat g|_{\Gamma_D}=g}
= \norm{g}{H^{1/2}(\Gamma_D)}
\end{align*}
for all $g\in H^{1/2}(\Gamma_D)$. As before, this definition is consistent with the previous notation of $P_\ell$ since $(P_\ell^\Gamma\widehat g)|_{\Gamma_D} = P_\ell^D(\widehat g|_{\Gamma_D})$ for all $\widehat g\in H^{1/2}(\Gamma)$.
\section{A~Posteriori Error Estimation and Adaptive Mesh-Refinement}
\label{section:aposteriori}%
\vspace*{-3mm}
\subsection{Data oscillations}
\label{section:oscillations}%
We start with the element data oscillations
\begin{align}
\oscT{\ell}^2:=\sum_{T\in{\mathcal T}_\ell}\oscT{\ell}(T)^2,
\text{ where }
\oscT{\ell}(T)^2:=|T|\,\normL2{f-f_T}{T}^2 \quad \text{for all } T\in{\mathcal T}_\ell
\end{align}
and where $f_T:=|T|^{-1}\int_T f\,dx\in{\mathbb R}$ denotes the integral mean over an element $T\in{\mathcal T}_\ell$. These arise in the efficiency estimate for residual error estimators.
Our residual error estimator will involve the edge data oscillations
\begin{align}\label{eq:osc:edge}
\osc{\ell}^2 := \sum_{E\in{\mathcal E}_\ell^\Omega}\osc{\ell}(E)^2,
\text{ where }
\osc{\ell}(E)^2 := |\omega_{\ell,E}|\,\norm{f-f_{\omega_{\ell,E}}}{L^2(\omega_{\ell,E})}^2
\text{ for all }E\in{\mathcal E}_\ell^\Omega.
\end{align}
Here, $\omega_{\ell,E}\subset\Omega$ is the edge patch from~\eqref{eq:patch:E}, and $f_{\omega_{\ell,E}}\in{\mathbb R}$ is the corresponding integral mean of $f$.
For the analysis, we shall additionally need the node data oscillations
\begin{align}\label{eq:osc:node}
\oscK{\ell}^2 := \sum_{z\in\mathcal K_\ell^\Omega}\oscK{\ell}(z)^2,
\text{ where }
\oscK{\ell}(z)^2 := |\omega_{\ell,z}|\,\norm{f-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,z})}^2
\text{ for all }z\in\mathcal K_\ell^\Omega.
\end{align}
Here, $\omega_{\ell,z}\subset\Omega$ is the node patch from~\eqref{eq:patch:z}, and $f_{\omega_{\ell,z}}\in{\mathbb R}$ is the corresponding integral mean of $f$.
Moreover, the efficiency needs the Neumann data oscillations
\begin{align}\label{eq:osc:neu}
\oscN{\ell}^2:=\sum_{E\in{\mathcal E}_\ell^N}\oscN\ell(E)^2,
\text{ where }
\oscN{\ell}(E)^2:=|E|\,\normL2{\phi-\phi_E}{E}^2
\text{ for all } E \in {\mathcal E}_\ell^N
\end{align}
and where $\phi_E:=|E|^{-1}\int_E \phi \,dx$ denotes the integral mean over an edge $E\in{\mathcal E}_\ell^N$.
Finally, the approximation of the Dirichlet data $g\approx g_\ell$ is controlled by the Dirichlet data oscillations
\begin{align}\label{eq:osc:dir}
\oscD{\ell}:=\sum_{E\in{\mathcal E}_\ell^D} \oscD{\ell}(E)^2,
\text{ where }
\oscD{\ell}(E)^2:=|E|\normL2{(g-g_\ell)^\prime}{E}^2
\text{ for all }E\in{\mathcal E}_\ell^D.
\end{align}
\new{Recall that, on the 1D manifold $\Gamma_D$, the derivative of the nodal interpoland is the elementwise best approximation of the derivative by piecewise constants, i.e.,
\begin{align}\label{eq:bestapprox}
\norm{(g-g_\ell)'}{L^2(E)} = \min_{c\in{\mathbb R}}\norm{g'-c}{L^2(E)}
\quad\text{for all }E\in{\mathcal E}_\ell^D.
\end{align}
According to the elementwise Pythagoras theorem, this implies
\begin{align}\label{eq:nodal:orthogonality}
\norm{(g-g_\ell)'}{L^2(E)}^2 + \norm{(g_\ell-\widetilde g_\ell)'}{L^2(E)}^2
= \norm{(g-\widetilde g_\ell)'}{L^2(E)}^2
\text{ for all }\widetilde g_\ell\in{\mathcal S}^1({\mathcal E}_\ell^D)
\end{align}
and all Dirichlet edges $E\in{\mathcal E}_\ell^D$. This observation will be crucial in the analysis below.
Moreover,~\eqref{eq:bestapprox} yields
\begin{align}
\norm{h_\ell^{1/2}(g-g_\ell)^\prime}{L^2(\Gamma_D)} = \min_{W_\ell \in {\mathcal S}^1({\mathcal T}_\ell)}\norm{h_\ell^{1/2}(g-W_\ell|_\Gamma)^\prime}{L^2(\Gamma_D)} .
\end{align}
The following result is found in~\cite[Lemma~2.2]{efgp}.
\begin{lemma}\label{lemma:apx}
Let $g\in H^1(\Gamma_D)$ and let $g_\ell$ denote the nodal interpoland of $g$ on $\overline\Gamma_D$. Then,
\begin{align}\label{eq:apx}
\norm{g-g_\ell}{H^{1/2}(\Gamma_D)}
\le \c{apx}\,\oscD\ell,
\end{align}
where the constant $\setc{apx}>0$ depends only on the shape regularity constant $\sigma({\mathcal T}_\ell)$ and $\Omega$.\qed
\end{lemma}
}
To keep the notation simple, we extend the Dirichlet and the Neumann data oscillations from~\eqref{eq:osc:neu}--\eqref{eq:osc:dir} by zero to all edges $E\in{\mathcal E}_\ell$, e.g.\ $\oscD{\ell}(E) = 0$ for $E\in{\mathcal E}_\ell\backslash{\mathcal E}_\ell^D$.
Moreover, we will write
\begin{align}
\label{eq:defonpatch}
\oscT\ell(\omega_{\ell,z})^2 = \sum_{{T\in{\mathcal T}_\ell}\atop{T\subset\omega_{\ell,z}}}\oscT{\ell}(T)^2
\quad\text{resp.}\quad
\oscN{\ell}({\mathcal E}_{\ell,z})^2 = \sum_{{E\in{\mathcal E}_\ell^N}\atop{E\subset{\mathcal E}_{\ell,z}}}\oscN{\ell}(E)^2
\end{align}
to abbreviate the notation.
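To illustrate the Dirichlet data oscillations~\eqref{eq:osc:dir} and the best-approximation property~\eqref{eq:bestapprox} of the nodal interpoland, the following sketch evaluates $\oscD{\ell}(E)$ on a single boundary edge of length $h$, parametrized by arclength; the particular choice of $g$ and the quadrature are for illustration only.
\begin{verbatim}
# Dirichlet data oscillation on one edge E of length h, parametrized by
# arclength s in [0, h]:  osc_D(E)^2 = h * || (g - g_ell)' ||_{L^2(E)}^2.
# The derivative of the nodal interpolant g_ell is the integral mean of g',
# i.e. the L^2-best constant approximation of g' on E.
import numpy as np

def trapezoid(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

g, dg, h = np.sin, np.cos, 0.25                 # illustrative data and edge length
s = np.linspace(0.0, h, 2001)

slope = (g(h) - g(0.0)) / h                     # derivative of nodal interpolant
mean_dg = trapezoid(dg(s), s) / h               # integral mean of g' on E
assert abs(slope - mean_dg) < 1e-8              # best-approximation property

osc_D_sq = h * trapezoid((dg(s) - slope)**2, s)
print(osc_D_sq)                                 # generically of higher order
\end{verbatim}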
\subsection{Element-based residual error estimator}
\label{section:estimator:T}%
Our first proposition states reliability and efficiency of the error estimator $\rho_\ell$ from~\eqref{eq1:estimator:T}--\eqref{eq2:estimator:T}.
\begin{proposition}[reliability and efficiency of $\rho_\ell$]
\label{prop:reliability:rho}
The error estimator $\rho_\ell$ is reliable
\begin{align}\label{eq:rho:reliable}
\norm{u - U_\ell}{H^1(\Omega)}
\le \c{rho:reliable}\,\rho_\ell
\end{align}
and efficient
\begin{align}\label{eq:rho:efficient}
\c{rho:efficient}^{-1}\,\rho_\ell
\le \big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \oscT{\ell}^2 +\oscN{\ell}^2 + \oscD{\ell}^2\big)^{1/2}.
\end{align}
The constants $\setc{rho:reliable},\setc{rho:efficient}>0$ depend
only on the shape regularity constant $\sigma({\mathcal T}_\ell)$ and on $\Omega$.
\end{proposition}
\begin{proof}[Sketch of proof]
We consider a continuous auxiliary problem
\begin{align}
\begin{split}
-\Delta w &= 0 \hspace*{12.8mm} \text{in } \Omega,\\
w &= g-g_\ell \quad \text{on } \Gamma_D,\\
\partial_n w &= 0 \hspace*{12.8mm} \text{on } \Gamma_N,
\end{split}
\end{align}
with unique solution $w\in H^1(\Omega)$. We then have norm equivalence $\norm{w}{H^1(\Omega)}\simeq\norm{g-g_\ell}{H^{1/2}(\Gamma_D)}$ as well as $u-U_\ell-w\in H^1_D(\Omega)$. From this, we obtain
\begin{align*}
\norm{u-U_\ell}{H^1(\Omega)}^2
\lesssim \norm{\nabla(u-U_\ell-w)}{L^2(\Omega)}^2 + \norm{g-g_\ell}{H^{1/2}(\Gamma_D)}^2.
\end{align*}
Whereas the second term is controlled by Lemma~\ref{lemma:apx}, the first can be handled as for homogeneous Dirichlet data, i.e. use of the Galerkin orthogonality combined with approximation estimates for a Cl\'ement-type quasi-interpolation operator. Details are found e.g.\ in~\cite{bcd}. This proves reliability~\eqref{eq:rho:reliable}.
By use of bubble functions and local scaling arguments, one obtains the estimates
\begin{align*}
|T|\,\norm{f}{L^2(T)}^2
&\lesssim \norm{\nabla(u-U_\ell)}{L^2(T)}^2 + \oscT{\ell}(T)^2+\oscN{\ell}(\partial T\cap \Gamma_N)^2,\\
|T|^{1/2}\,\norm{[\partial_nU_\ell]}{L^2(E\cap\Omega)}^2
&\lesssim \norm{\nabla(u-U_\ell)}{L^2(\omega_{\ell,E})}^2 + \oscT{\ell}(\omega_{\ell,E})^2\\%+\oscN{\ell}(E)^2,\\
|T|^{1/2}\,\norm{\phi-\partial_nU_\ell}{L^2(E\cap\Gamma_N)}^2
&\lesssim \norm{\nabla(u-U_\ell)}{L^2(\omega_{\ell,E})}^2 + \oscT{\ell}(\omega_{\ell,E})^2+\oscN{\ell}(E\cap\Gamma_N)^2 \,
\end{align*}
where $\omega_{\ell,E}$ denotes the edge patch of $E\in{\mathcal E}_\ell$.
Details are found e.g.\ in~\cite{ao,verfuerth}.
Summing these estimates over all elements, one obtains the efficiency estimate~\eqref{eq:rho:efficient}.
\end{proof}
\begin{proposition}[discrete local reliability of $\rho_\ell$]
\label{prop:dlr:rho}
Let ${\mathcal T}_* = {\tt refine}({\mathcal T}_\ell)$ be an arbitrary refinement of ${\mathcal T}_\ell$ with associated Galerkin solution $U_*\in{\mathcal S}^1({\mathcal T}_*)$. Let $\mathcal R_\ell({\mathcal T}_*):={\mathcal T}_\ell\backslash{\mathcal T}_*$ be
the set of all elements $T\in{\mathcal T}_\ell$ which are refined to generate ${\mathcal T}_*$. Then, there holds
\begin{align}\label{eq:dlr:rho}
\norm{U_*-U_\ell}{H^1(\Omega)}
\le \c{rho:dlr}\,\rho_\ell(\mathcal R_\ell({\mathcal T}_*))
\end{align}
with some constant $\setc{rho:dlr}>0$ which depends only on $\sigma({\mathcal T}_\ell)$ and $\Omega$.
\end{proposition}
\begin{proof}
We consider a discrete auxiliary problem
\begin{align*}
\dual{\nabla W_*}{\nabla V_*}_\Omega = 0
\quad\text{for all }V_*\in{\mathcal S}^1_D({\mathcal T}_*)
\end{align*}
with unique solution $W_*\in{\mathcal S}^1({\mathcal T}_*)$ with $W_*|_{\Gamma_D}=g_*-g_\ell$.
To estimate the $H^1$-norm of $W_*$ in terms of the boundary data,
let ${\mathcal L}_*\,:H^{1/2}(\Gamma) \to {\mathcal S}^1({\mathcal T}_*)$ denote the discrete lifting operator from~\eqref{eq:discreteLifting}. Let $\widehat g_*,\widehat g_\ell\in H^{1/2}(\Gamma)$
be arbitrary extensions of $g_*$ and $g_\ell$, respectively.
Then, we have $V_*=W_* - {\mathcal L}_*(\widehat g_*-\widehat g_\ell)\in{\mathcal S}^1_D({\mathcal T}_*)$.
According to the triangle inequality and a Poincar\'e inequality for $V_*\in{\mathcal S}^1_D({\mathcal T}_*)$, we first observe
\begin{align*}
\norm{W_*}{L^2(\Omega)}
&\le \norm{V_*}{L^2(\Omega)}
+ \norm{{\mathcal L}_*(\widehat g_*-\widehat g_\ell)}{L^2(\Omega)}\\
&\lesssim \norm{\nabla V_*}{L^2(\Omega)}
+ \norm{{\mathcal L}_*(\widehat g_*-\widehat g_\ell)}{L^2(\Omega)}\\
& \lesssim \norm{\nabla W_*}{L^2(\Omega)} + \norm{{\mathcal L}_*(\widehat g_*-\widehat g_\ell)}{H^1(\Omega)}.
\end{align*}
Moreover, the variational formulation for $W_*\in{\mathcal S}^1({\mathcal T}_*)$ yields
\begin{align*}
0
= \dual{\nabla W_*}{\nabla V_*}_\Omega
= \norm{\nabla W_*}{L^2(\Omega)}^2
- \dual{\nabla W_*}{\nabla{\mathcal L}_*(\widehat g_*-\widehat g_\ell)}_\Omega,
\end{align*}
whence by the Cauchy-Schwarz inequality
\begin{align*}
\norm{\nabla W_*}{L^2(\Omega)}
\le \norm{\nabla{\mathcal L}_*(\widehat g_*-\widehat g_\ell)}{L^2(\Omega)}
\lesssim \norm{\widehat g_*-\widehat g_\ell}{H^{1/2}(\Gamma)}.
\end{align*}
Altogether, this proves $\norm{W_*}{H^1(\Omega)}\lesssim \norm{\widehat g_*-\widehat g_\ell}{H^{1/2}(\Gamma)}$. Since the extensions $\widehat g_*,\widehat g_\ell$ were arbitrary, the definition of the $H^{1/2}(\Gamma_D)$-norm yields
\begin{align}\label{dpr1:rho}
\norm{W_*}{H^1(\Omega)}
\lesssim \norm{g_*-g_\ell}{H^{1/2}(\Gamma_D)}
\lesssim \norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)},
\end{align}
where we have finally used that $g_\ell$ is also the nodal interpoland of $g_*$ so that Lemma~\ref{lemma:apx} applies.
For an element $T\in{\mathcal T}_\ell\cap{\mathcal T}_*$, there holds $g_*|_{\partial T\cap\Gamma_D} = g_\ell|_{\partial T\cap\Gamma_D}$, and the last term thus satisfies
\begin{align*}
\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2
\simeq
\sum_{T\in{\mathcal T}_\ell} |T|^{1/2}\norm{(g_*-g_\ell)'}{L^2(\partial T\cap\Gamma_D)}^2
&=\!\! \sum_{T\in\mathcal R_\ell({\mathcal T}_*)} \!\! |T|^{1/2}\norm{(g_*-g_\ell)'}{L^2(\partial T\cap\Gamma_D)}^2
\end{align*}
With the orthogonality relation~\eqref{eq:nodal:orthogonality} applied for $g_*\in{\mathcal S}^1({\mathcal T}_*|_{\Gamma_D})$, we see
\begin{align*}
\norm{W_*}{H^1(\Omega)}^2
\lesssim \sum_{T\in\mathcal R_\ell({\mathcal T}_*)} \!\!|T|^{1/2}\norm{(g_*-g_\ell)'}{L^2(\partial T\cap\Gamma_D)}^2
\le \sum_{T\in\mathcal R_\ell({\mathcal T}_*)} \!\! |T|^{1/2}\norm{(g-g_\ell)'}{L^2(\partial T\cap\Gamma_D)}^2.
\end{align*}
Finally, we observe $U_*-U_\ell-W_*\in{\mathcal S}^1_D({\mathcal T}_*)$ with
\begin{align*}
\dual{\nabla(U_*-U_\ell-W_*)}{\nabla V_\ell} = 0
\quad\text{for all }V_\ell\in{\mathcal S}^1_D({\mathcal T}_\ell).
\end{align*}
Arguing as in~\cite[Lemma~3.6]{ckns}, we see
\begin{align*}
&\norm{\nabla(U_*-U_\ell-W_*)}{L^2(\Omega)}^2
\\&\quad
\lesssim \sum_{T\in\mathcal R_\ell({\mathcal T}_*)}\big(|T|\,\norm{f}{L^2(T)}^2
+ |T|^{1/2}\,\norm{[\partial_nU_\ell]}{L^2(\partial T\cap\Omega)}^2
+ |T|^{1/2}\,\norm{\phi-\partial_nU_\ell}{L^2(\partial T\cap\Gamma_N)}^2\big)
\end{align*}
Finally, we again use the triangle inequality and the Poincar\'e inequality to see
\begin{align*}
\norm{U_*-U_\ell}{H^1(\Omega)}^2
\lesssim \norm{W_*}{H^1(\Omega)}^2 + \norm{\nabla(U_*-U_\ell-W_*)}{L^2(\Omega)}^2
\end{align*}
and thus obtain the discrete local reliability~\eqref{eq:dlr:rho}.
The constant $\c{rho:dlr}>0$ depends only on $\c{apx}>0$ and on local estimates for the Scott-Zhang projection which are controlled by boundedness of $\sigma({\mathcal T}_\ell)$.
\end{proof}
\subsection{Edge-based residual error estimator}
\label{section:estimator:E}%
In the following,
we show that the edge-based estimator $\varrho_\ell$ from~\eqref{eq1:estimator:E}--\eqref{eq2:estimator:E} is locally equivalent
to the element-based error estimator $\rho_\ell$ from the previous section. The main advantage is that $\varrho_\ell$ replaces the volume residuals
\begin{align}
{{\rm res}}_\ell(T):=|T|\,\norm{f}{L^2(T)}
\end{align}
by the edge oscillations $\osc\ell$.
We define the edge jump contributions
\begin{align}
\eta_\ell(E)^2 := \begin{cases}
|E|\,\norm{[\partial_nU_\ell]}{L^2(E)}^2
\quad&\text{for }E\in{\mathcal E}_\ell^\Omega,\\
|E|\,\norm{\phi-\partial_nU_\ell}{L^2(E)}^2
\quad&\text{for }E\in{\mathcal E}_\ell^N
\end{cases}
\end{align}
where $[\cdot]$ denotes the jump across an interior edge.
Together with the edge oscillations from~\eqref{eq:osc:edge} and the Dirichlet oscillations from~\eqref{eq:osc:dir}, our version of the residual error estimator from~\eqref{eq1:estimator:E}--\eqref{eq2:estimator:E} reads
\begin{align}
\varrho_\ell^2
= \sum_{E\in{\mathcal E}_\ell}\varrho_\ell(E)^2 = \sum_{E\in{\mathcal E}_\ell^\Omega\cup{\mathcal E}_\ell^N}\eta_\ell(E)^2
+ \sum_{E\in{\mathcal E}_\ell^\Omega}\osc{\ell}(E)^2 + \sum_{E\in{\mathcal E}_\ell^D}\oscD{\ell}(E)^2.
\end{align}%
Note that $\osc{\ell}({\mathcal E}_{\ell,z})$, $\eta_\ell({\mathcal E}_{\ell,z})$, and ${{\rm res}}_\ell(\omega_{\ell,E})$ are defined analogously to~\eqref{eq:defonpatch}.
The following lemma implies local equivalence of the estimators $\rho_\ell$ and $\varrho_\ell$.
\begin{lemma}\label{lemma:local}
The following local estimates hold:
\begin{itemize}
\item[{\rm(i)}] $\oscT{\ell}(\omega_{\ell,E}) \le \osc{\ell}(E) \le \c{eq1} {{\rm res}}_\ell(\omega_{\ell,E})$ for all $E\in{\mathcal E}_\ell^\Omega$.
\item[{\rm(ii)}]
${{\rm res}}_\ell(\omega_{\ell,z}) \le \c{eq2} \big(\eta_\ell({\mathcal E}_{\ell,z})
+ \oscK{\ell}(z)\big)$ for all $z\in\mathcal K_\ell^\Omega$.
\item[{\rm(iii)}]
$\c{eq3}^{-1}\,\osc{\ell}({\mathcal E}_{\ell,z}) \le \oscK{\ell}(z) \le \c{eq4}\,\osc{\ell}({\mathcal E}_{\ell,z})$ for all $z\in\mathcal K_\ell^\Omega$.%
\end{itemize}
The constants $\setc{eq1},\setc{eq2},\setc{eq3}>0$ depend only on the shape regularity constant $\sigma({\mathcal T}_\ell)$, whereas $\setc{eq4}>0$ depends on the use of newest vertex bisection and the initial mesh ${\mathcal T}_0$.
\end{lemma}
\begin{proof}[Sketch of proof]
The proof of (i) follows from the fact that taking the integral mean $f_\omega$ is the $L^2$ best approximation by a constant, i.e.
\begin{align*}
\norm{f-f_\omega}{L^2(\omega)} = \min_{c\in{\mathbb R}}\norm{f-c}{L^2(\omega)}
\quad\text{for all measurable }\omega\subseteq\Omega,
\end{align*}
and that the areas of neighboring elements differ at most by a factor which depends only on $\sigma({\mathcal T}_\ell)$. The estimate (ii) is well-known and found, e.g., in~\cite[Section 2.2.4]{ks}. Note that
(ii) essentially needs the condition that each element $T\in{\mathcal T}_\ell$ has an interior node, cf. Section~\ref{sec:notation}. The lower estimate in (iii) follows from the same arguments as (i), namely
\begin{align*}
\norm{f-f_{\omega_{\ell,E}}}{L^2(\omega_{\ell,E})}
\le \norm{f-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,E})}
\le \norm{f-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,z})}
\end{align*}
and the fact that ---up to shape regularity--- only finitely many edges belong to ${\mathcal E}_{\ell,z}$.
For $f$ being a piecewise polynomial, the upper estimate in (iii) follows from a scaling argument since both terms,
$\osc{\ell}({\mathcal E}_{\ell,z})\simeq \oscK{\ell}(z)$ define seminorms on ${\mathcal P}^p(\set{T\in{\mathcal T}_\ell}{z\in T})$ with kernel being the constant functions. Note that the equivalence constants depend on the shape of the node patch $\omega_{\ell,z}$, but newest vertex bisection leads only to finitely many shapes of the patches. For arbitrary $f\in L^2(\Omega)$, we first observe that the ${\mathcal T}_\ell$-piecewise integral mean $f_\ell\in{\mathcal P}^0({\mathcal T}_\ell)$, defined by $f_\ell|_T = f_T$ for all $T\in{\mathcal T}_\ell$,
satisfies $(f_\ell)_{\omega_{\ell,E}} = f_{\omega_{\ell,E}}$ as well as
$(f_\ell)_{\omega_{\ell,z}} = f_{\omega_{\ell,z}}$, e.g.
\begin{align*}
(f_\ell)_{\omega_{\ell,z}}
= \frac{1}{|\omega_{\ell,z}|}\int_{\omega_{\ell,z}} f_\ell\,dx
= \frac{1}{|\omega_{\ell,z}|}\sum_{T\subset\omega_{\ell,z}}\int_T f_\ell\,dx
= \frac{1}{|\omega_{\ell,z}|}\sum_{T\subset\omega_{\ell,z}}\int_T f\,dx
= f_{\omega_{\ell,z}}.
\end{align*}
This and the Pythagoras theorem for the integral mean $f_\ell$ prove
\begin{align*}
\norm{f-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,z})}^2
&= \norm{f-f_\ell}{L^2(\omega_{\ell,z})}^2
+ \norm{f_\ell-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,z})}^2\\
&\lesssim \sum_{E\in{\mathcal E}_{\ell,z}}\norm{f-f_\ell}{L^2(\omega_{\ell,E})}^2
+ \sum_{E\in{\mathcal E}_{\ell,z}}\norm{f_\ell-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,E})}^2\\
&= \sum_{E\in{\mathcal E}_{\ell,z}}\norm{f-f_{\omega_{\ell,z}}}{L^2(\omega_{\ell,E})}^2.
\end{align*}
Scaling with $|\omega_{\ell,z}|\simeq|\omega_{\ell,E}|$ concludes the proof.
\end{proof}
\begin{proposition}[reliability and efficiency of $\varrho_\ell$]\label{prop:reliability}
The error estimator $\varrho_\ell$ is reliable
\begin{align}\label{eq:reliable}
\norm{u - U_\ell}{H^1(\Omega)}
\le \c{reliable}\,\varrho_\ell
\end{align}
and efficient
\begin{align}\label{eq:efficient}
\c{efficient}^{-1}\,\varrho_\ell \le \big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \oscT{\ell}^2 +\oscN{\ell}^2 + \oscD{\ell}^2\big)^{1/2}.
\end{align}
The constants $\setc{reliable},\setc{efficient}>0$ depend
only on $\Omega$, the use of newest
vertex bisection, and the initial mesh ${\mathcal T}_0$.
\end{proposition}
\begin{proof}
With the help of the preceding lemma, we obtain equivalence
$\varrho_\ell \simeq \rho_\ell$. Consequently, reliability and efficiency of $\varrho_\ell$ follow from the respective properties of the element-based estimator $\rho_\ell$,
see Proposition~\ref{prop:reliability:rho}.
\end{proof}
\begin{proposition}[discrete local reliability of $\varrho_\ell$]
\label{prop:dlr}
Let ${\mathcal T}_* = {\tt refine}({\mathcal T}_\ell)$ be an arbitrary refinement of ${\mathcal T}_\ell$ with associated Galerkin solution $U_*\in{\mathcal S}^1({\mathcal T}_*)$. Let $\mathcal R_\ell({\mathcal T}_*):={\mathcal T}_\ell\backslash{\mathcal T}_*$ be
the set of all elements $T\in{\mathcal T}_\ell$ which are refined to generate ${\mathcal T}_*$
and
\begin{align}\label{eq:refined:edge}
\mathcal R_\ell({\mathcal E}_*) := \set{E\in{\mathcal E}_\ell}{\exists T\in\mathcal R_\ell({\mathcal T}_*)\quad E\cap T\neq\emptyset}
\end{align}
be the set of all edges which touch a refined element.
Then,
\begin{align}
\#\mathcal R_\ell({\mathcal E}_*) \le \c{refine}\,\#\mathcal R_\ell({\mathcal T}_*)
\end{align}
and
\begin{align}\label{eq:dlr}
\norm{U_*-U_\ell}{H^1(\Omega)}
\le \c{dlr}\,\varrho_\ell(\mathcal R_\ell({\mathcal E}_*))
\end{align}
with constants $\setc{refine},\setc{dlr}>0$ which depend only on $\Omega$, the use of newest
vertex bisection, and the initial mesh ${\mathcal T}_0$.
\end{proposition}
\begin{proof}
According to shape regularity, the number of elements which share a node $z\in\mathcal K_\ell$ is uniformly bounded. Consequently, so is the number of edges which touch an element $T\in\mathcal R_\ell({\mathcal T}_*)$ which will be refined. This proves the estimate $\#\mathcal R_\ell({\mathcal E}_*) \le \c{refine}\,\#\mathcal R_\ell({\mathcal T}_*)$.
To prove~\eqref{eq:dlr}, we use the discrete local reliability of $\rho_\ell$
from Proposition~\ref{prop:dlr:rho}. With the help of Lemma~\ref{lemma:local}, each refinement indicator $\rho_\ell(T)$ for $T\in\mathcal R_\ell({\mathcal T}_*)$ is dominated by finitely many indicators $\varrho_\ell(E)$ for $E\in\mathcal R_\ell({\mathcal E}_*)$, where the number depends only on the shape regularity constant $\sigma({\mathcal T}_\ell)$.
\end{proof}
\subsection{Adaptive algorithm based on D\"orfler marking}
\label{section:doerfler}%
\new{Our} version of the adaptive algorithm has been well-studied in the literature mainly for element-based estimators, cf.\ e.g.~\cite{ckns}.
\begin{algorithm}\label{algorithm:doerfler}
Let adaptivity parameter $0<\theta<1$ and initial triangulation ${\mathcal T}_0$ be given. For each $\ell=0,1,2,\dots$ do:
\begin{itemize}
\item[(i)] Compute discrete solution $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$.
\item[(ii)] Compute refinement indicators $\varrho_\ell(E)$ for all $E\in{\mathcal E}_\ell$.
\item[(iii)] Choose set ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ with minimal cardinality such that
\begin{align}\label{eq:doerfler}
\theta\,\varrho_\ell^2 \le \varrho_\ell({\mathcal M}_\ell)^2.
\end{align}
\item[(iv)] Generate new mesh ${\mathcal T}_{\ell+1}:={\tt refine}({\mathcal T}_\ell,{\mathcal M}_\ell)$.
\item[(v)] Update counter $\ell\mapsto\ell+1$ and go to {\rm(i)}.
\end{itemize}
\end{algorithm}
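In an actual implementation, a set ${\mathcal M}_\ell$ of minimal cardinality in step {\rm(iii)} can be obtained by sorting the squared indicators $\varrho_\ell(E)^2$ in decreasing order and collecting edges greedily until the marked contribution reaches $\theta\,\varrho_\ell^2$. The following sketch illustrates this selection in Python-style pseudocode; the function name and the data layout (a dictionary mapping edges to squared indicators) are chosen for illustration only and are not part of the analysis.
\begin{verbatim}
# Illustrative sketch only: greedy realization of step (iii).
# rho2 maps each edge E to its squared indicator varrho_ell(E)^2.
def doerfler_mark(rho2, theta):
    total = sum(rho2.values())
    marked, acc = set(), 0.0
    # largest contributions first; stop once theta * total is reached
    for edge, val in sorted(rho2.items(), key=lambda item: item[1],
                            reverse=True):
        if acc >= theta * total:
            break
        marked.add(edge)
        acc += val
    return marked
\end{verbatim}
Sorting the indicators dominates the cost of this step, so that the marking is performed in ${\mathcal O}(\#{\mathcal E}_\ell\log\#{\mathcal E}_\ell)$ operations.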
\subsection{Adaptive algorithm based on modified D\"orfler marking}
\label{section:becker}%
For (piecewise) smooth data $f\in H^1$ and $g\in H^2$, uniform mesh-refinement guarantees $\osc{\ell} = {\mathcal O}(h^2)$ as well as
$\oscD{\ell} = {\mathcal O}(h^{3/2})$, whereas the error and hence the error estimator $\varrho_\ell$ decay at most like ${\mathcal O}(h)$.
Consequently, we may expect that the normal jump terms dominate the error estimator~\cite{cv}. This observation led to the following
version of the marking strategy which has essentially been proposed in~\cite{bms}.
We stress, however, that the algorithm in~\cite{bms,bm} is stated with node oscillations $\oscK{\ell}$ instead of edge oscillations
$\osc{\ell}$. Moreover, certain details in the proofs of~\cite{bm} seem to be dubious.
\begin{algorithm}\label{algorithm:becker}
Let adaptivity parameters $0<\theta_1,\theta_2<1$ and $\vartheta>0$ and an initial triangulation ${\mathcal T}_0$ be given. For each $\ell=0,1,2,\dots$ do:
\begin{itemize}
\item[(i)] Compute discrete solution $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$.
\item[(ii)] Compute refinement indicators $\varrho_\ell(E)$ for all $E\in{\mathcal E}_\ell$.
\item[(iii.1)] If $\osc{\ell}^2 + \oscD{\ell}^2 \le \vartheta\,\eta_\ell^2$, choose set ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ with minimal cardinality such that
\begin{align}\label{eq1:becker}
\theta_1\,\eta_\ell^2 \le \eta_\ell({\mathcal M}_\ell)^2.
\end{align}
\item[(iii.2)] Otherwise, choose set ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ with minimal cardinality such that
\begin{align}\label{eq2:becker}
\theta_2\,(\osc{\ell}^2+\oscD{\ell}^2)
\le \osc{\ell}({\mathcal M}_\ell)^2 + \oscD{\ell}({\mathcal M}_\ell)^2.
\end{align}
\item[(iv)] Generate new mesh ${\mathcal T}_{\ell+1}:={\tt refine}({\mathcal T}_\ell,{\mathcal M}_\ell)$.
\item[(v)] Update counter $\ell\mapsto\ell+1$ and go to {\rm(i)}.
\end{itemize}
\end{algorithm}
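The case distinction in steps {\rm(iii.1)}--{\rm(iii.2)} can be realized along the same lines. The following Python-style sketch reuses the routine \texttt{doerfler\_mark} from the previous subsection; the variable names are again purely illustrative and merely indicate the control flow of the modified marking.
\begin{verbatim}
# Illustrative sketch only: modified Doerfler marking.
# eta2, osc2, oscD2 map edges to the squared contributions
# eta_ell(E)^2, osc_ell(E)^2 and osc^D_ell(E)^2, respectively.
def modified_mark(eta2, osc2, oscD2, theta1, theta2, vartheta):
    if sum(osc2.values()) + sum(oscD2.values()) <= vartheta * sum(eta2.values()):
        # step (iii.1): oscillations are dominated by the jump terms
        return doerfler_mark(eta2, theta1)
    # step (iii.2): mark with respect to the oscillation terms
    combined = {E: osc2.get(E, 0.0) + oscD2.get(E, 0.0)
                for E in set(osc2) | set(oscD2)}
    return doerfler_mark(combined, theta2)
\end{verbatim}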
\section{Convergence of Adaptive Algorithm}
\label{section:convergence}%
\noindent
In this section, we prove a contraction property $\Delta_{\ell+1}\le\kappa\,\Delta_\ell$ for some
quasi-error quantity $\Delta_\ell \simeq \varrho_\ell^2$. To that end, we introduce a locally equivalent error estimator below.
Before doing so, we note that the
modified D\"orfler marking~\eqref{eq1:becker}--\eqref{eq2:becker} implies the D\"orfler
marking~\eqref{eq:doerfler}.
\begin{lemma}\label{lemma:doerfler}
Suppose that the set ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ satisfies the modified D\"orfler marking~\eqref{eq1:becker}--\eqref{eq2:becker} with parameters $0<\theta_1,\theta_2<1$ and $\vartheta>0$. Then, ${\mathcal M}_\ell$ satisfies the D\"orfler marking~\eqref{eq:doerfler} with parameter
$0<\theta := \min\{\theta_1/(1+\vartheta)\,,\,\theta_2/(1+\vartheta^{-1})\}<1$.
\end{lemma}
\begin{proof}
In case of $\osc{\ell}^2 + \oscD{\ell}^2 \le \vartheta\,\eta_\ell^2$, it holds that
$\varrho_\ell^2 \le (1+\vartheta)\,\eta_\ell^2$. This implies
\begin{align*}
\frac{\theta_1}{1+\vartheta}\,\varrho_\ell^2 \le \theta_1\,\eta_\ell^2
\le \eta_\ell({\mathcal M}_\ell)^2 \le \varrho_\ell({\mathcal M}_\ell)^2.
\end{align*}
Otherwise, it holds that $\varrho_\ell^2 \le (1+\vartheta^{-1})\,(\osc{\ell}^2+\oscD{\ell}^2)$ which yields
\begin{align*}
\frac{\theta_2}{1+\vartheta^{-1}}\,\varrho_\ell^2
\le \theta_2\, \big(\osc{\ell}^2 + \oscD{\ell}^2\big)
\le \big(\osc{\ell}({\mathcal M}_\ell)^2+\oscD{\ell}({\mathcal M}_\ell)^2\big)
\le \varrho_\ell({\mathcal M}_\ell)^2.
\end{align*}
This concludes the proof.
\end{proof}
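For illustration, the choice $\theta_1=\theta_2=1/2$ and $\vartheta=1$ in Algorithm~\ref{algorithm:becker} yields, by Lemma~\ref{lemma:doerfler}, the D\"orfler parameter
\begin{align*}
\theta = \min\Big\{\frac{1/2}{1+1}\,,\,\frac{1/2}{1+1}\Big\} = \frac14
\end{align*}
in~\eqref{eq:doerfler}.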
\begin{lemma}\label{lemma:boundary}
Let $W_{\ell+1} \in {\mathcal S}^1({\mathcal T}_{\ell+1})$ with
$W_{\ell+1}|_\Gamma = g_{\ell+1}$. Define
$W_{\ell+1}^\ell \in {\mathcal S}^1({\mathcal T}_{\ell+1})$ by
\begin{align}\label{eq:boundary_changed}
W_{\ell+1}^\ell(z) = \begin{cases}
W_{\ell+1}(z) & \text{ for } z\in {\mathcal N}_{\ell+1}\backslash \Gamma,\\
g_\ell(z) & \text{ for }z \in {\mathcal N}_{\ell+1}\cap \Gamma.\end{cases}
\end{align}
Then, there holds
\begin{align}\label{eq:stability_boundary}
\norm{W_{\ell+1} - W_{\ell+1}^\ell}{H^1(\Omega)}
\le \c{boundary}\,\norm{g_{\ell+1} - g_\ell}{H^{1/2}(\Gamma)},
\end{align}
where $\setc{boundary}>0$ depends only on $\sigma({\mathcal T}_\ell)$.\qed
\end{lemma}
\begin{lemma}[equivalent error estimator]\label{lemma:eqivalenterrest}
Consider the extended error estimator
\begin{align}
\widetilde\varrho_\ell^{\,2}
= \sum_{E\in{\mathcal E}_\ell^\Omega\cup{\mathcal E}_\ell^N}\eta_\ell(E)^2
+ \sum_{E\in{\mathcal E}_\ell}\wosc{\ell}(E)^2
+ \sum_{E\in{\mathcal E}_\ell^D}\oscD{\ell}(E)^2,
\end{align}
where the oscillation terms $\wosc{\ell}(E)$ read
\begin{align}
\wosc{\ell}(E)^2 := \begin{cases}
\osc{\ell}(E)^2&\text{for }E\in{\mathcal E}_\ell^\Omega,\\
|T_E|\,\norm{f}{L^2(T_E)}^2
\quad&\text{for }E\in{\mathcal E}_\ell^\Gamma
\text{ and }T_E\in{\mathcal T}_\ell\text{ with }E\subset\partial T_E.
\end{cases}
\end{align}
Then, there holds equivalence in the following sense
\begin{align*}
\c{equiverrest}^{-1}\, \widetilde\varrho_\ell^{\,2} \leq \varrho_\ell^2 \leq \widetilde\varrho_\ell^{\,2}\quad \text{and} \quad
\varrho_\ell(E) \leq \widetilde\varrho_\ell(E) \text{ for all } E \in {\mathcal E}_\ell,
\end{align*}
where $\setc{equiverrest}\ge1$ depends only on $\sigma({\mathcal T}_\ell)$. Particularly, if ${\mathcal M}_\ell\subseteq {\mathcal E}_\ell$ satisfies the D\"orfler marking~\eqref{eq:doerfler}
with $\varrho_\ell$ and $\theta>0$, then ${\mathcal M}_\ell$ satisfies the D\"orfler
marking with $\widetilde\varrho_\ell$ for some modified parameter
$0<\widetilde\theta:={\theta}/{\c{equiverrest}} <1$.
\end{lemma}
\begin{proof}
The estimates $\varrho_\ell(E) \leq \widetilde\varrho_\ell(E) \text{ for all } E \in {\mathcal E}_\ell$ are obvious and imply $\varrho_\ell^2\leq \widetilde\varrho_\ell^{\,2}$. The estimate
$\c{equiverrest}^{-1}\, \widetilde\varrho_\ell^{\,2} \leq \varrho_\ell^2$ follows from Lemma~\ref{lemma:local} (ii) \& (iii). Now, we obtain
\begin{align*}
\widetilde\theta\,\widetilde\varrho_\ell^{\,2} \le \theta\,\varrho_\ell^2
\le \varrho_\ell({\mathcal M}_\ell)^2 \le \widetilde\varrho_\ell({\mathcal M}_\ell)^2,
\end{align*}
i.e.\ the estimator $\widetilde\varrho_\ell$ satisfies the D\"orfler marking~\eqref{eq:doerfler}
with $\widetilde\theta:={\theta}/{\c{equiverrest}}$.
\end{proof}
\begin{lemma}[estimator reduction]\label{lemma:reduction}
Assume that the set ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ of marked edges satisfies the D\"orfler
marking~\eqref{eq:doerfler} with $\varrho_\ell$ and some fixed parameter $0<\theta<1$ and that
${\mathcal T}_{\ell+1}={\tt refine}({\mathcal T}_\ell,{\mathcal M}_\ell)$ is obtained by local newest vertex bisection
of ${\mathcal T}_\ell$.
Then, there holds the estimator reduction estimate
\begin{align}\label{eq:reduction}
\widetilde\varrho_{\ell+1}^{\,2}
\le q\,\widetilde\varrho_\ell^{\,2} + \c{reduction}\norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2
\end{align}
with some contraction constant $q\in(0,1)$ which depends only on
$\theta\in(0,1)$. The constant $\setc{reduction}>0$ additionally depends
only on the initial mesh ${\mathcal T}_0$.
\end{lemma}
\begin{proof}[Sketch of proof]
For the sake of completeness, we include the idea of the proof of~\eqref{eq:reduction}.
To keep the notation simple, we define $\eta_\ell(E) = 0$ for $E\in{\mathcal E}_\ell^D$ and
$\oscD{\ell}(E) = 0$ for $E\in{\mathcal E}_\ell^\Omega\cup{\mathcal E}_\ell^N$ so that all contributions of $\widetilde\varrho_\ell$
are defined on the entire set of edges ${\mathcal E}_\ell$.%
First, we employ a triangle inequality and the Young inequality to see
\begin{align*}
\widetilde\varrho_{\ell+1}^{\,2}
&\le (1+\delta)\,\Big(\sum_{E\in{\mathcal E}_{\ell+1}^\Omega}|E|\norm{[\partial_nU_\ell]}{L^2(E)}^2
+ \sum_{E\in{\mathcal E}_{\ell+1}^N}|E|\norm{\phi-\partial_nU_\ell}{L^2(E)}^2\Big)
\\&\quad
+ (1+\delta^{-1})\Big(
\sum_{E\in{\mathcal E}_{\ell+1}^\Omega}|E|\norm{[\partial_n(U_{\ell+1}-U_\ell)]}{L^2(E)}^2
+ \sum_{E\in{\mathcal E}_{\ell+1}^N}|E|\norm{\partial_n(U_{\ell+1}-U_\ell)}{L^2(E)}^2
\Big)
\\&\quad
+ \wosc{\ell+1}^2 + \oscD{\ell+1}^2,
\end{align*}
where $\delta>0$ is arbitrary.
Second, a scaling argument proves
\begin{align*}
\sum_{E\in{\mathcal E}_{\ell+1}^\Omega}|E|\norm{[\partial_n(U_{\ell+1}-U_\ell)]}{L^2(E)}^2
+ \sum_{E\in{\mathcal E}_{\ell+1}^\Gamma}|E|\norm{\partial_n(U_{\ell+1}-U_\ell)}{L^2(E)}^2
\le C\, \norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2,
\end{align*}
and the constant $C>0$ depends only on $\sigma({\mathcal T}_\ell)$.
Third, we argue as in~\cite[Corollary 3.4]{ckns} to see
\begin{align*}
\sum_{E\in{\mathcal E}_{\ell+1}^\Omega}|E|\norm{[\partial_nU_\ell]}{L^2(E)}^2
+ \sum_{E\in{\mathcal E}_{\ell+1}^N}|E|\norm{\phi-\partial_nU_\ell}{L^2(E)}^2
\le \eta_\ell^2 - \frac12\,\eta_\ell({\mathcal M}_\ell)^2.
\end{align*}
Fourth, it is part of the proof of~\cite[Theorem 5.4]{agp} that
\begin{align*}
\oscD{\ell+1}^2 \le \oscD{\ell}^2 - \frac12\,\oscD{\ell}({\mathcal M}_\ell)^2,
\end{align*}
which essentially follows from the orthogonality relation~\eqref{eq:nodal:orthogonality}.
Fifth, in~\cite[Lemma 6]{pp} it is proven that
\begin{align}\label{eq:wosc}
\wosc{\ell+1}^2
\le \wosc{\ell}^2 - \frac14\,\wosc{\ell}({\mathcal M}_\ell)^2.
\end{align}
Plugging everything together, we see
\begin{align*}
\widetilde\varrho_{\ell+1}^{\,2}
&\le (1+\delta) \big(\widetilde\varrho_\ell^{\,2} - \frac14\,\widetilde\varrho_\ell({\mathcal M}_\ell)^2\big)
+ C(1+\delta^{-1})\,\norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2\\
&\le (1+\delta) (1-\widetilde\theta/4)\widetilde\varrho_\ell^{\,2} + C(1+\delta^{-1})\,\norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2,
\end{align*}
where we have used that Lemma~\ref{lemma:eqivalenterrest} guarantees the
D\"orfler marking for $\widetilde\varrho_\ell$ in the second estimate.
Finally, it only remains to choose $\delta>0$ sufficiently small so that
$q:=(1+\delta)(1-\widetilde\theta/4)<1$.
\end{proof}
The following lemma provides a quasi-Galerkin orthogonality which compensates for the lack of the Galerkin orthogonality used
in~\cite{ckns}.
\begin{lemma}[quasi-Galerkin orthogonality]\label{lemma:orthogonality}
Let ${\mathcal T}_*={\tt refine}({\mathcal T}_\ell)$ be an arbitrary refinement of ${\mathcal T}_\ell$
with the associated Galerkin solution $U_*\in{\mathcal S}^1({\mathcal T}_*)$.
Then,
\begin{align}\label{eq:orthogonality}
2\,|\dual{\nabla(u-U_*)}{\nabla(U_*-U_{\ell})}_\Omega|
\le \alpha\,\norm{\nabla(u-U_*)}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2,
\end{align}
for all $\alpha>0$, and consequently
\begin{align}\label{eq:orthogonality1}
\begin{split}
(1-\alpha)\norm{\nabla(u-U_*)}{L^2(\Omega)}^2
&\le \norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2
- \norm{\nabla(U_*-U_\ell)}{L^2(\Omega)}^2
\\&\qquad
+ \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2
\end{split}
\end{align}
as well as
\begin{align}\label{eq:orthogonality2}
\begin{split}
\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2
&\le (1+\alpha)\norm{\nabla(u-U_*)}{L^2(\Omega)}^2
+ \norm{\nabla(U_*-U_\ell)}{L^2(\Omega)}^2
\\&\qquad
+ \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2.
\end{split}
\end{align}
The constant $\setc{orthogonality}>0$ depends only on the shape regularity of $\sigma({\mathcal T}_\ell)$ and $\sigma({\mathcal T}_*)$ and on $\Omega$.
\end{lemma}
\begin{proof}
We recall the Galerkin orthogonality
\begin{align*}
\dual{\nabla(u-U_*)}{\nabla V_*}_\Omega = 0
\quad \text{for all } V_* \in {\mathcal S}^1_D({\mathcal T}_*).
\end{align*}
Let $U_*^\ell\in{\mathcal S}^1({\mathcal T}_*)$ be the unique Galerkin solution of~\eqref{eq:galerkin} with $U_*^\ell|_{\Gamma_D} = g_\ell$. We use the Galerkin orthogonality with $V_*= U_*^\ell - U_\ell \in {\mathcal S}^1_D({\mathcal T}_*)$.
This and the Young inequality allow us to estimate the $L^2$-scalar product by
\begin{align*}
&2\,|\dual{\nabla(u-U_{*})}{\nabla(U_*-U_{\ell})}_\Omega|
= 2\,|\dual{\nabla(u-U_*)}{\nabla(U_*-U_*^\ell)}_\Omega|
\\&\qquad
\le \alpha\,\norm{\nabla(u-U_*)}{L^2(\Omega)}^2
+ \alpha^{-1}\,\norm{\nabla(U_*-U_*^\ell)}{L^2(\Omega)}^2
\end{align*}
for all $\alpha>0$. To estimate the second contribution on the right-hand side, we proceed as in the proof of Proposition~\ref{prop:dlr:rho} and choose
arbitrary extensions $\widehat g_*,\widehat g_\ell\in H^{1/2}(\Gamma)$ of the nodal interpolands $g_*,g_\ell$ from $\Gamma_D$ to $\Gamma$. Then, we use the test function $V_*=(U_*-U_*^\ell) - {\mathcal L}_*(\widehat g_*-\widehat g_\ell)\in{\mathcal S}^1_D({\mathcal T}_*)$ and the Galerkin orthogonalities for $U_*,U_*^\ell\in{\mathcal S}^1({\mathcal T}_*)$ to see
\begin{align*}
0
= \dual{\nabla(u-U_*^\ell)}{\nabla V_*}_\Omega
- \dual{\nabla(u-U_*)}{\nabla V_*}_\Omega
= \dual{\nabla(U_*-U_*^\ell)}{\nabla V_*}_\Omega.
\end{align*}
Arguing as above, we obtain
\begin{align}\label{dpr1:open}
\norm{\nabla(U_*-U_*^\ell)}{L^2(\Omega)}
\lesssim \norm{g_*-g_\ell}{H^{1/2}(\Gamma_D)}
\lesssim \norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}.
\end{align}
This concludes the proof of~\eqref{eq:orthogonality}.
To verify~\eqref{eq:orthogonality1}--\eqref{eq:orthogonality2}, we
use the identity
\begin{align*}
&\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2
= \norm{\nabla\big((u-U_*)+(U_*-U_\ell)\big)}{L^2(\Omega)}^2
\\&\qquad
= \norm{\nabla(u-U_*)}{L^2(\Omega)}^2
+ 2\,\dual{\nabla(u-U_*)}{\nabla(U_*-U_{\ell})}_\Omega
+ \norm{\nabla(U_*-U_{\ell})}{L^2(\Omega)}^2.
\end{align*}
Rearranging the terms accordingly and using the quasi-Galerkin orthogonality~\eqref{eq:orthogonality} to estimate the scalar product concludes the proof.
\end{proof}
\begin{theorem}[contraction of quasi-error]\label{thm:contraction}
For \new{the adaptive algorithm stated in Algorithm~\ref{algorithm:doerfler}} above,
there are constants $\gamma,\lambda>0$ and $0<\kappa<1$ such that the combined error quantity
\begin{align}\label{eq:delta}
\Delta_\ell
:= \norm{\nabla(u - U_\ell)}{L^2(\Omega)}^2
+ \lambda\,\oscD{\ell}^2 + \gamma\,\widetilde\varrho_\ell^{\,2}
\ge 0
\end{align}
satisfies a contraction property
\begin{align}\label{eq:contraction}
\Delta_{\ell+1} \le \kappa\,\Delta_\ell
\quad\text{for all }\ell\in{\mathbb N}_0.
\end{align}
In particular, this implies $\lim\limits_{\ell\to\infty}\varrho_\ell = 0 = \lim\limits_{\ell\to\infty}\norm{u - U_\ell}{H^1(\Omega)}$.
\end{theorem}
\begin{proof}
\new{Using the quasi-Galerkin orthogonality~\eqref{eq:orthogonality1} with ${\mathcal T}_*={\mathcal T}_{\ell+1}$, we see}
\begin{align*}
(1-\alpha)\,\norm{\nabla(u-U_{\ell+1})}{L^2(\Omega)}^2
&\le \norm{\nabla(u-U_{\ell})}{L^2(\Omega)}^2
- \norm{\nabla(U_{\ell+1}-U_{\ell})}{L^2(\Omega)}^2
\\&\qquad
+ \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_{\ell+1}-g_\ell)'}{L^2(\Gamma_D)}^2.
\end{align*}
The orthogonality relation~\eqref{eq:nodal:orthogonality} applied for $g_{\ell+1}\in{\mathcal S}^1({\mathcal T}_{\ell+1}|_{\Gamma_D})$ yields
\begin{align*}
\oscD{\ell+1}^2 + \norm{h_\ell^{1/2}(g_{\ell+1}-g_\ell)'}{L^2(\Gamma_D)}^2
\le \norm{h_\ell^{1/2}(g-g_\ell)'}{L^2(\Gamma_D)}^2
= \oscD{\ell}^2.
\end{align*}
Together with the aforegoing estimate, we obtain
\begin{align*}
&(1-\alpha)\,\norm{\nabla(u-U_{\ell+1})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell+1}^2
\\&\qquad
\le
\norm{\nabla(u-U_{\ell})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell}^2
- \norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2.
\end{align*}
We add the error estimator and use the estimator reduction~\eqref{eq:reduction} to see, for $\beta>0$,
\begin{align*}
&(1-\alpha)\,\norm{\nabla(u-U_{\ell+1})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell+1}^2
+ \beta\,\widetilde\varrho_{\ell+1}^{\,2}
\\&\qquad
\le \norm{\nabla(u-U_{\ell})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell}^2
+ \beta\, q\,\widetilde\varrho_\ell^{\,2}
+ (\beta\c{reduction}-1) \, \norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2.
\end{align*}
We choose $\beta>0$ sufficiently small to guarantee $\beta\c{reduction}-1\le0$. Then, we use the
reliability~\eqref{eq:reliable} of $\varrho_\ell\le\widetilde\varrho_\ell$ in the form
\begin{align*}
\c{reliable}^{-1}\,\norm{\nabla(u-U_\ell)}{L^2(\Omega)}
\le \c{reliable}^{-1}\,\norm{u-U_\ell}{H^1(\Omega)}
\le \widetilde\varrho_\ell
\end{align*}
to see, for $\varepsilon>0$,
\begin{align*}
&(1-\alpha)\,\norm{\nabla(u-U_{\ell+1})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell+1}^2
+ \beta\,\widetilde\varrho_{\ell+1}^{\,2}
\\&\qquad
\le (1-\varepsilon\beta\c{reliable}^{-2})\,\norm{\nabla(u-U_{\ell})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell}^2
+ \beta (q+\varepsilon)\,\widetilde\varrho_\ell^{\,2}.
\end{align*}
Moreover, since $\oscD{\ell}$ is a contribution of $\widetilde\varrho_\ell$, we have $\oscD{\ell}\le\widetilde\varrho_\ell$, whence, for $\delta>0$,
\begin{align*}
&(1-\alpha)\,\norm{\nabla(u-U_{\ell+1})}{L^2(\Omega)}^2
+ \alpha^{-1}\c{orthogonality}\,\oscD{\ell+1}^2
+ \beta\,\widetilde\varrho_{\ell+1}^{\,2}
\\&\qquad
\le (1-\varepsilon\beta\c{reliable}^{-2})\,\norm{\nabla(u-U_{\ell})}{L^2(\Omega)}^2
+ (1-\delta\beta)\,\alpha^{-1}\c{orthogonality}\,\oscD{\ell}^2
+ \beta (q+\varepsilon+\delta\,\alpha^{-1}\c{orthogonality})\,\widetilde\varrho_\ell^{\,2}.
\end{align*}
For $0<\alpha<1$, we may now rearrange this estimate to end up with
\begin{align*}
&\norm{\nabla(u-U_{\ell+1})}{L^2(\Omega)}^2
+ \frac{\c{orthogonality}}{\alpha(1-\alpha)}\,\oscD{\ell+1}^2
+ \frac{\beta}{1-\alpha}\,\widetilde\varrho_{\ell+1}^{\,2}
\\&\qquad
\le \frac{1-\varepsilon\beta\c{reliable}^{-2}}{1-\alpha}\,\norm{\nabla(u-U_{\ell})}{L^2(\Omega)}^2
+ (1-\delta\beta)\,\frac{\c{orthogonality}}{\alpha(1-\alpha)}\,\oscD{\ell}^2
\\&\qquad\qquad
+ (q+\varepsilon+\delta\,\alpha^{-1}\c{orthogonality})\,\frac{\beta}{1-\alpha}\,\widetilde\varrho_\ell^{\,2}.
\end{align*}
It remains to choose the free constants $0<\alpha,\delta,\varepsilon<1$, whereas $\beta>0$ has already been fixed:
\begin{itemize}
\item First, choose $0<\varepsilon<\c{reliable}^2/\beta$ sufficiently small to guarantee $0<q+\varepsilon<1$.
\item Second, choose $0<\alpha<1$ sufficiently small such that $0<(1-\varepsilon\beta\c{reliable}^{-2})/(1-\alpha)<1$.
\item Third, choose $\delta>0$ sufficiently small with $0<q+\varepsilon+\delta\,\alpha^{-1}\c{orthogonality}<1$.
\end{itemize}
With $\gamma:=\beta/(1-\alpha)$, $\lambda:=\alpha^{-1}\,\c{orthogonality}/(1-\alpha)$, and
$0<\kappa<1$ the maximal contraction constant of the three contributions,
we conclude the proof of~\eqref{eq:contraction}.
\end{proof}
\section{Quasi-Optimality of Adaptive Algorithm}
\label{section:optimal}%
\vspace*{-3mm}%
\subsection{Optimality of marking strategy}
\label{section:optimal:doerfler}%
\new{With Theorem~\ref{thm:contraction}, we have seen that D\"orfler marking~\eqref{eq:doerfler} yields a contraction of $\Delta_\ell \simeq \varrho_\ell^2$.}
In the following, we first observe that the D\"orfler marking~\eqref{eq:doerfler} \new{is not only sufficient but in some sense also necessary to obtain contraction of
the estimator.}
\begin{proposition}[optimality of D\"orfler marking]
\label{prop:doerfler}
Let $\alpha>0$ and assume that the adaptivity parameter $0<\theta<1$ is sufficiently small, more precisely
\begin{align}\label{eq:doerfler:theta}
q_\star := \frac{1-\theta(\c{dlr}^2+1+\alpha^{-1}\c{orthogonality})\c{efficient}^2}{1+\alpha}>0.
\end{align}
Let $0<q\le q_\star$ and ${\mathcal T}_*={\tt refine}({\mathcal T}_\ell)$ and assume that
\begin{align}\label{eq:doerfler:contraction}
\begin{split}
&\big(\norm{\nabla(u-U_*)}{L^2(\Omega)}^2 + \osc{*}^2 + \oscD{*}^2 + \oscN{*}^2\big)
\\&\qquad
\le q\,\big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \osc{\ell}^2 + \oscD{\ell}^2 + \oscN{\ell}^2\big).
\end{split}
\end{align}
Then, there holds the D\"orfler marking for the set $\mathcal R_\ell({\mathcal E}_*)\subseteq{\mathcal E}_\ell$ defined in~\eqref{eq:refined:edge}, i.e.\
\begin{align}\label{eq:doerfler:refined}
\theta\,\varrho_\ell^2
\le \varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2.
\end{align}
\end{proposition}
\begin{proof}
We start with the elementary observation that $q\le q_\star$ is equivalent to
\begin{align*}
\theta \le \frac{1-q(1+\alpha)}{(\c{dlr}^2+1+\alpha^{-1}\c{orthogonality})\c{efficient}^2}.
\end{align*}
Using the discrete local reliability~\eqref{eq:dlr} and the quasi-Galerkin orthogonality~\eqref{eq:orthogonality2}, we see
\begin{align*}
\c{dlr}^2&\varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2
\ge \norm{\nabla(U_*-U_\ell)}{L^2(\Omega)}^2
\\&
\ge \norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 - (1+\alpha)\,\norm{\nabla(u-U_*)}{L^2(\Omega)}^2
- \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2
\\&
= \big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \osc{\ell}^2 + \oscD{\ell}^2+\oscN{\ell}^2\big)
\\&\quad
- (1+\alpha)\big(\norm{\nabla(u-U_*)}{L^2(\Omega)}^2 + \osc{*}^2 + \oscD{*}^2+\oscN{*}^2\big)
\\&\quad
- \osc{\ell}^2 - \oscD{\ell}^2-\oscN{\ell}^2 + (1+\alpha)(\osc{*}^2 + \oscD{*}^2 +\oscN{*}^2)
\\&\quad
- \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2
\\&\ge
\big(1-q(1+\alpha)\big)\big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \osc{\ell}^2 + \oscD{\ell}^2+\oscN{\ell}^2\big)
\\&\quad
- \osc{\ell}^2 - \oscD{\ell}^2 -\oscN{\ell}^2+ (1+\alpha)(\osc{*}^2 +\oscD{*}^2+\oscN{*}^2)
\\&\quad
- \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2,
\end{align*}
where we have finally used Assumption~\eqref{eq:doerfler:contraction}. As in the proof of Proposition~\ref{prop:dlr:rho}, we have
\begin{align*}
\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2
\leq \oscD{\ell}(\mathcal R_\ell({\mathcal E}_*))^2
\le \varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2.
\end{align*}
Moreover, the identities $\oscD{\ell}(E) = \oscD{*}(E)$, $\osc{\ell}(E) = \osc{*}(E)$ and $\oscN{\ell}(E)=\oscN{*}(E)$ for $E\in{\mathcal E}_\ell\backslash\mathcal R_\ell({\mathcal E}_*)$ prove
\begin{align}\label{eq1:identity}
\oscD{\ell}^2 - \oscD{*}^2
&\le \oscD{\ell}(\mathcal R_\ell({\mathcal E}_*))^2,\\
%
\label{eq2:identity}
\osc{\ell}^2 - \osc{*}^2
&\le \osc{\ell}(\mathcal R_\ell({\mathcal E}_*))^2,\\
%
\label{eq3:identity}
\oscN{\ell}^2 - \oscN{*}^2
&\le \oscN{\ell}(\mathcal R_\ell({\mathcal E}_*))^2.
\end{align}
Note that~\eqref{eq2:identity} led to the definition of $\mathcal R_\ell({\mathcal E}_*)$ given above. Together with the efficiency~\eqref{eq:efficient}
and $\oscD{\ell}(\mathcal R_\ell({\mathcal E}_*))^2 + \osc{\ell}(\mathcal R_\ell({\mathcal E}_*))^2 +\oscN{\ell}(\mathcal R_\ell({\mathcal E}_*))^2\le \varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2$, we may now conclude
\begin{align*}
\big(\c{dlr}^2 + 1 + \alpha^{-1}\c{orthogonality}\big)\,\varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2
\ge \big(1-q(1+\alpha)\big)\,\c{efficient}^{-2}\,\varrho_\ell^2.
\end{align*}
This is equivalent to $\theta\,\varrho_\ell^2 \le \varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2$ and led to the definition of $q_\star$.
\end{proof}
\begin{proposition}[optimality of modified D\"orfler marking]
\label{prop:becker}
Let $\alpha>0$ and $0<\theta_2<1$ and assume that the adaptivity parameters $0<\theta_1,\vartheta<1$ are sufficiently small, more precisely
\begin{align}\label{eq:becker:theta}
q_\star:=\max\Big\{\frac{1-\c{efficient}^2\big(\theta_1(1\!+\!\c{dlr}^2) + \vartheta(1\!+\!\c{dlr}^2\!+\!\alpha^{-1}\c{orthogonality})\big)}{1+\alpha}
\,,\,
\frac{1-\theta_2}{(1+\vartheta^{-1})(\c{reliable}^2+1)}\Big\} >0.
\end{align}
Let $0<q\le q_\star$ and ${\mathcal T}_*={\tt refine}({\mathcal T}_\ell)$
and assume that
\begin{align}\label{eq:becker:contraction}
\begin{split}
&\big(\norm{\nabla(u-U_*)}{L^2(\Omega)}^2 + \osc{*}^2 + \oscD{*}^2 + \oscN{*}^2\big)
\\&\qquad
\le q\,\big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \osc{\ell}^2 + \oscD{\ell}^2 + \oscN{\ell}^2\big).
\end{split}
\end{align}
Then, there holds the modified D\"orfler marking for the set $\mathcal R_\ell({\mathcal E}_*)\subseteq{\mathcal E}_\ell$, i.e.\ there holds either
\begin{align}\label{eq1:becker:refined}
\theta_1\,\eta_\ell^2
\le \eta_\ell(\mathcal R_\ell({\mathcal E}_*))^2
\end{align}
in case of $\osc{\ell}^2 + \oscD{\ell}^2 \le \vartheta\,\eta_\ell^2$ or
\begin{align}\label{eq2:becker:refined}
\theta_2\,\big(\osc{\ell}^2 + \oscD{\ell}^2\big)
\le \osc{\ell}(\mathcal R_\ell({\mathcal E}_*))^2 + \oscD{\ell}(\mathcal R_\ell({\mathcal E}_*))^2
\end{align}
otherwise.
\end{proposition}
\begin{proof}
We first assume $\osc{\ell}^2 + \oscD{\ell}^2 \le \vartheta\,\eta_\ell^2$. Arguing as in the proof of Proposition~\ref{prop:doerfler}, we see
\begin{align*}
\c{dlr}^2\varrho_\ell(\mathcal R_\ell({\mathcal E}_*))^2
&\ge \norm{\nabla(U_*-U_\ell)}{L^2(\Omega)}^2
\\&
\ge \big(1-q(1+\alpha)\big)\big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \osc{\ell}^2 + \oscD{\ell}^2+\oscN{\ell}^2\big)
\\&\quad
- \osc{\ell}^2 - \oscD{\ell}^2 -\oscN{\ell}^2+ (1+\alpha)(\osc{*}^2 + \oscD{*}^2+\oscN{*}^2)
\\&\quad
- \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2
\\&
\ge \big(1-q(1+\alpha)\big)\,\c{efficient}^{-2}\,\eta_\ell^2
- \osc{\ell}^2 - \oscD{\ell}^2-\oscN{\ell}(\mathcal R_\ell({\mathcal E}_*))^2
\\&\quad
- \alpha^{-1}\c{orthogonality}\,\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}^2,
\end{align*}
where we have used~\eqref{eq3:identity}.
Next, we recall the edge-wise definition $\varrho_\ell^2 = \eta_\ell^2 + \osc{\ell}^2 + \oscD{\ell}^2$ and collect all oscillation terms on the right-hand side. Together with $\norm{h_\ell^{1/2}(g_*-g_\ell)'}{L^2(\Gamma_D)}\le\oscD{\ell}$ and $\oscN{\ell}(E)\le\eta_\ell(E)$, this leads to
\begin{align*}
\c{dlr}^2\,\eta_\ell(\mathcal R_\ell({\mathcal E}_*))^2
&\ge \big(1-q(1+\alpha)\big)\,\c{efficient}^{-2}\,\eta_\ell^2
- (1+\c{dlr}^2)\osc{\ell}^2\\
&\quad - (1+\c{dlr}^2+\alpha^{-1}\c{orthogonality})\oscD{\ell}^2-\oscN\ell(\mathcal R_\ell({\mathcal E}_*))^2
\\&
\ge \big[\big(1-q(1+\alpha)\big)\,\c{efficient}^{-2}-\vartheta(1+\c{dlr}^2+\alpha^{-1}\c{orthogonality})\big]\,\eta_\ell^2-\eta_\ell(\mathcal R_\ell({\mathcal E}_*))^2.
\end{align*}
We then conclude
\begin{align*}
\eta_\ell(\mathcal R_\ell({\mathcal E}_*))^2
\ge \frac{1-q(1+\alpha)-\vartheta\c{efficient}^2(1+\c{dlr}^2+\alpha^{-1}\c{orthogonality})}{(1+\c{dlr}^2)\c{efficient}^2}\,\eta_\ell^2
\ge \theta_1\,\eta_\ell^2,
\end{align*}
which follows from our assumption on $0<q \le q_\star<1$ and the definition of $q_\star$ in~\eqref{eq:becker:theta}. This concludes the proof of~\eqref{eq1:becker:refined}.
Second, we assume $\osc{\ell}^2 + \oscD{\ell}^2 > \vartheta\,\eta_\ell^2$.
Recall the estimates~\eqref{eq1:identity}--\eqref{eq3:identity}.
Then, reliabi\-lity~\eqref{eq:reliable} of $\varrho_\ell^2=\eta_\ell^2+\osc{\ell}^2+\oscD{\ell}^2$ and $\oscN{\ell}\leq \eta_\ell$ yield
\begin{align*}
\big(\osc{\ell}^2+\oscD{\ell}^2\big) - \big(\osc{\ell}(\mathcal R_\ell({\mathcal E}_*))^2 &
+ \oscD{\ell}(\mathcal R_\ell({\mathcal E}_*))^2\big)\\
&\le \osc{*}^2 + \oscD{*}^2
\\&
\le q\,\big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \osc{\ell}^2 + \oscD{\ell}^2 +\oscN{\ell}^2\big)
\\ &
\le q\,\big((\c{reliable}^2+1)\,(\eta_\ell^2+\osc{\ell}^2 + \oscD{\ell}^2)\big)
\\ &
< q\,(1+\vartheta^{-1})(\c{reliable}^2+1)\,(\osc{\ell}^2 + \oscD{\ell}^2).
\end{align*}
Rearranging the terms, we obtain
\begin{align*}
\theta_2\,\big(\osc{\ell}^2 + \oscD{\ell}^2\big)
&\le \big[1-q\,(1+\vartheta^{-1})(\c{reliable}^2+1)\big]\,\big(\osc{\ell}^2 + \oscD{\ell}^2\big)
\\
&\le \osc{\ell}(\mathcal R_\ell({\mathcal E}_*))^2 + \oscD{\ell}(\mathcal R_\ell({\mathcal E}_*))^2,
\end{align*}
where the first estimate follows from $0<q\le q_\star<1$ and the definition of $q_\star$ in~\eqref{eq:becker:theta}.
\end{proof}
\subsection{Optimality of newest vertex bisection}
\label{section:optimal:nvb}
The quasi-optimality analysis for adaptive FEM involves two properties of the mesh-refinement which are, so far, only mathematically guaranteed for newest vertex bisection
\cite{bdd,kpp,ks,stevenson:nvb} and local red-refinement
with hanging nodes up to some fixed order~\cite{bn}.
\dpr{First,} it was originally proven in~\cite{bdd} and later on improved in~\cite{stevenson:nvb,ks,kpp} that
the sequence of meshes defined inductively by ${\mathcal T}_{\ell+1}:={\tt refine}({\mathcal T}_\ell,{\mathcal M}_\ell)$ with arbitrary ${\mathcal M}_\ell\subseteq{\mathcal E}_\ell$ satisfies
\begin{align}\label{eq1:nvb}
\#{\mathcal T}_\ell - \#{\mathcal T}_0 \le \c{nvb}\,\sum_{j=0}^{\ell-1}\#{\mathcal M}_j
\quad\text{for all }\ell\in{\mathbb N}
\end{align}
with some constant $\setc{nvb}>0$ which depends only on ${\mathcal T}_0$.
This proves that the closure step in newest vertex bisection, which avoids hanging nodes and may lead to bisections of edges
$E\in{\mathcal E}_\ell\backslash{\mathcal M}_\ell$, cannot lead to arbitrarily many additional refinements.
For newest vertex bisection, the original analysis of~\cite{bdd} as well as of the successors~\cite{ks,stevenson:nvb} required that the reference edges of the initial mesh ${\mathcal T}_0$ are chosen such that an interior edge $E=T_+\cap T_-\in{\mathcal E}_0^\Omega$ is either the reference edge of both elements $T_+,T_-\in{\mathcal T}_0$ or of none. For the particular 2D situation, the recent work~\cite{kpp} removes any assumption on ${\mathcal T}_0$.
Second, for two meshes ${\mathcal T}'={\tt refine}({\mathcal T}_0)$ and ${\mathcal T}''={\tt refine}({\mathcal T}_0)$ obtained by newest vertex bisection of the initial mesh ${\mathcal T}_0$, there is a unique coarsest common refinement ${\mathcal T}'\oplus{\mathcal T}'' = {\tt refine}({\mathcal T}_0)$ which is a refinement of both ${\mathcal T}'$ and ${\mathcal T}''$. It is shown in~\cite{stevenson,ckns} that ${\mathcal T}'\oplus{\mathcal T}''$ is, in fact, the overlay of these meshes. Moreover, it holds that
\begin{align}\label{eq2:nvb}
\#({\mathcal T}'\oplus{\mathcal T}'') \le \#{\mathcal T}' + \#{\mathcal T}'' - \#{\mathcal T}_0.
\end{align}
\subsection{Definition of approximation class}
\noindent
To state the optimality result, we have to introduce the appropriate approximation
class. Let
\begin{align}
\mathbb T := \set{{\mathcal T}}{{\mathcal T} = {\tt refine}({\mathcal T}_0)}
\end{align}
be the set of all triangulations which can be obtained from ${\mathcal T}_0$ by newest
vertex bisection. Moreover, let
\begin{align}
\mathbb T_N := \set{{\mathcal T}\in\mathbb T}{\#{\mathcal T}-\#{\mathcal T}_0\le N}
\end{align}
be the set of triangulations which have at most $N\in{\mathbb N}$ elements more than the
initial mesh ${\mathcal T}_0$. For $s>0$, the approximation class $\mathbb A_s$
has already been defined in~\eqref{eq:optimal:class}--\eqref{eq:optimal:norm}. The first step is to prove that, up to constants, nodal interpolation of the boundary data yields the best possible approximation of the exact solution.
\begin{lemma}
\label{lem:quasiopt}
The Galerkin solution $U_\ell \in {\mathcal S}^1({\mathcal T}_\ell)$ of~\eqref{eq:galerkin} satisfies
\begin{align}
\begin{split}
&\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \new{\oscD{\ell}^2}
\leq \c{szoptimality}\big (\inf_{W_\ell \in {\mathcal S}^1({\mathcal T}_\ell)}
\norm{\nabla(u-W_\ell)}{L^2(\Omega)}^2 \!+\! \new{\oscD{\ell}^2} \big),
\end{split}
\end{align}
where $\setc{szoptimality}>0$ depends only on $\Gamma$ and $\sigma({\mathcal T}_\ell)$.
\end{lemma}
\begin{proof}
Let $\widehat g,\widehat g_\ell\in H^{1/2}(\Gamma)$ denote arbitrary
extensions of $g=u|_{\Gamma_D}$ resp.\ $g_\ell$.
Note that $({\mathcal L}_\ell P_\ell \widehat g)|_{\Gamma_D}=(P_\ell u)|_{\Gamma_D}$ as well as
$({\mathcal L}_\ell P_\ell \widehat g_\ell)|_{\Gamma_D}=g_\ell$, where ${\mathcal L}_\ell$ denotes the discrete lifting operator from~\eqref{eq:discreteLifting}.
For $V_\ell\in{\mathcal S}^1_D({\mathcal T}_\ell)$, we thus have
$U_\ell - (V_\ell + {\mathcal L}_\ell P_\ell\widehat g_\ell)\in{\mathcal S}^1_D({\mathcal T}_\ell)$, whence
\begin{align*}
\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2
= \dual{\nabla(u-U_\ell)}{\nabla(u-(V_\ell + {\mathcal L}_\ell P_\ell\widehat g_\ell))}_\Omega
\end{align*}
according to the Galerkin orthogonality. Therefore, the Cauchy-Schwarz inequality provides the C\'ea-type quasi-optimality
\begin{align*}
\norm{\nabla(u-U_\ell)}{L^2(\Omega)}
\le\min_{V_\ell\in{\mathcal S}^1_D({\mathcal T}_\ell)}\norm{\nabla(u-(V_\ell + {\mathcal L}_\ell P_\ell\widehat g_\ell))}{L^2(\Omega)}.
\end{align*}
We now plug in $V_\ell = P_\ell u-{\mathcal L}_\ell P_\ell \widehat g\in{\mathcal S}^1_D({\mathcal T}_\ell)$ to see
\begin{align*}
\norm{\nabla(u-U_\ell)}{L^2(\Omega)}
&\le \norm{\nabla(u-P_\ell u + {\mathcal L}_\ell P_\ell(\widehat g-\widehat g_\ell))}{L^2(\Omega)}
\\&
\lesssim \norm{\nabla(u-P_\ell u)}{L^2(\Omega)}
+ \norm{\widehat g-\widehat g_\ell}{H^{1/2}(\Gamma)}.
\end{align*}
Since the extensions $\widehat g,\widehat g_\ell$ of $g,g_\ell$ were arbitrary, we obtain
\begin{align*}
\norm{\nabla(u-U_\ell)}{L^2(\Omega)}
&\lesssim \norm{\nabla(u-P_\ell u)}{L^2(\Omega)}
+ \norm{g-g_\ell}{H^{1/2}(\Gamma_D)}\\
&\lesssim \min_{W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)}\norm{\nabla(u-W_\ell)}{L^2(\Omega)}
+ \norm{h_\ell^{1/2}(g-g_\ell)'}{L^2(\Gamma_D)}\\
&\new{= \min_{W_\ell\in{\mathcal S}^1({\mathcal T}_\ell)}\norm{\nabla(u-W_\ell)}{L^2(\Omega)} + \oscD{\ell},}
\end{align*}
where we have used the quasi-optimality of the Scott-Zhang projection, see Section~\ref{section:sz}, and Lemma~\ref{lemma:apx}.
\new{Adding $\oscD{\ell}$ to this estimate, we conclude the proof.}
\end{proof}
\subsection{Quasi-optimality result}
Finally, we may formally state the optimality result~\eqref{eq:optimal:order}
described in the introduction.
\begin{theorem}\label{thm:quasioptimality}
Suppose that the adaptivity parameter $0<\theta<1$ in Algorithm~\ref{algorithm:doerfler} satisfies~\eqref{eq:doerfler:theta} \new{so
that the} marking strategy is optimal in the sense of \new{Proposition~\ref{prop:doerfler}}.
Let $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ denote the sequence of discrete solutions generated by
\new{Algorithm~\ref{algorithm:doerfler}}. If the given data and the corresponding weak solution
of~\eqref{eq:weakform} satisfy $(u,f,g,\phi)\in\mathbb A_s$, there holds
\begin{align}
\norm{u-U_\ell}{H^1(\Omega)}
\le\c{afem}(\#{\mathcal T}_\ell-\#{\mathcal T}_0)^{-s},
\end{align}
i.e.\
each possible convergence rate $s>0$ is asymptotically achieved by AFEM. The constant $\setc{afem}>0$ depends only on $\norm{(u,f,g,\phi)}{\mathbb A_s}$, the initial mesh ${\mathcal T}_0$, and the adaptivity parameters.
\end{theorem}
\begin{proof}
Since the proof follows essentially the lines of~\cite{stevenson,ckns}, we leave the elaborate details to the reader. For any $\varepsilon>0$, the definition of the approximation class $\mathbb A_s$ guarantees some triangulation ${\mathcal T}_\varepsilon\in\mathbb T$ such that
\begin{align*}
\inf_{W_\varepsilon \in {\mathcal S}^1({\mathcal T}_\varepsilon)}
\big(\norm{\nabla(u-W_\varepsilon)}{L^2(\Omega)}^2 + \normL2{h_\varepsilon^{1/2}(g-W_\varepsilon|_\Gamma)^\prime}{\Gamma_D}^2
+ \oscT{\varepsilon}^2 + \oscN{\varepsilon}^2\big)^{1/2}
\le\varepsilon
\end{align*}
and
\begin{align*}
\#{\mathcal T}_\varepsilon -\#{\mathcal T}_0 \lesssim \varepsilon^{-1/s},
\end{align*}
where the constant depends only on $\norm{(u,f,g,\phi)}{\mathbb A_s}$.
We now consider the overlay ${\mathcal T}_*:={\mathcal T}_\varepsilon\oplus{\mathcal T}_\ell$. With the help of Lemma~\ref{lem:quasiopt} as well as the elementary estimates $\oscT{*}\le\oscT{\varepsilon}$ and $\oscN{*}\le\oscN{\varepsilon}$, we observe
\begin{align*}
\Lambda_*:=\big(\norm{\nabla(u-U_*)}{L^2(\Omega)}^2 + \oscD{*}^2
+ \oscT{*}^2+\oscN{*}^2\big)^{1/2}
\lesssim\varepsilon,
\end{align*}
since ${\mathcal S}^1({\mathcal T}_\varepsilon)\subseteq{\mathcal S}^1({\mathcal T}_*)$.
Moreover, the overlay estimate~\eqref{eq2:nvb} predicts
\begin{align*}
\#\mathcal R_\ell({\mathcal T}_*)
\le \#{\mathcal T}_*-\#{\mathcal T}_\ell
\le \#{\mathcal T}_\varepsilon-\#{\mathcal T}_0
\lesssim \varepsilon^{-1/s}.
\end{align*}
Note that Lemma~\ref{lemma:local} together with reliability and efficiency of $\varrho_*$ yield
\begin{align*}
\Lambda_* \simeq \big(\norm{\nabla(u-U_*)}{L^2(\Omega)}^2
+ \osc{*}^2 + \oscD{*}^2+\oscN{*}^2\big)^{1/2},
\end{align*}
where $\oscT{*}$ is replaced by $\osc{*}$.
Choosing $\varepsilon = \lambda\big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2 + \oscD{\ell}^2
+ \osc{\ell}^2+\oscN{\ell}^2\big)^{1/2}$ with $\lambda>0$ sufficiently small, we enforce \new{the
reduction~\eqref{eq:doerfler:contraction}} and derive that
$\mathcal R_\ell({\mathcal E}_*)\subseteq{\mathcal E}_\ell$ \new{satisfies the D\"orfler marking} criterion, \new{cf.\ Proposition~\ref{prop:doerfler}}.
Minimality of ${\mathcal M}_\ell$ thus gives
\begin{align*}
\#{\mathcal M}_\ell \le \#\mathcal R_\ell({\mathcal E}_*) \lesssim \#\mathcal R_\ell({\mathcal T}_*)
\lesssim \varepsilon^{-1/s}
\simeq \big(\norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2
+ \osc{\ell}^2 + \oscD{\ell}^2+\oscN{\ell}^2\big)^{-1/(2s)}.
\end{align*}
We next note that
\begin{align*}
\varrho_\ell^2
\simeq \norm{\nabla(u-U_\ell)}{L^2(\Omega)}^2
+ \osc{\ell}^2+\oscD{\ell}^2+\oscN{\ell}^2
\simeq \Delta_\ell
\end{align*}
according to reliability and efficiency of $\varrho_\ell$ and the definition of the contraction quantity $\Delta_\ell$ in Theorem~\ref{thm:contraction}.
Combining the last two lines, we see
\begin{align*}
\#{\mathcal M}_\ell \lesssim \Delta_\ell^{-1/(2s)} \simeq \varrho_\ell^{-1/s}
\quad\text{for all }\ell\in{\mathbb N}_0.
\end{align*}
By use of the closure estimate~\eqref{eq1:nvb} of newest vertex bisection, we obtain
\begin{align*}
\#{\mathcal T}_\ell - \#{\mathcal T}_0
\lesssim \sum_{j=0}^{\ell-1}\#{\mathcal M}_j
\lesssim \sum_{j=0}^{\ell-1}\Delta_j^{-1/(2s)}.
\end{align*}
Note that the contraction property~\eqref{eq:contraction} of $\Delta_j$ implies $\Delta_\ell \le \kappa^{\ell-j}\Delta_j$, whence
$\Delta_j^{-1/(2s)} \le \kappa^{(\ell-j)/(2s)}\Delta_\ell^{-1/(2s)}$. According to $0<\kappa<1$ and the geometric series, this gives
\begin{align*}
\#{\mathcal T}_\ell - \#{\mathcal T}_0
\lesssim \Delta_\ell^{-1/(2s)} \sum_{j=0}^{\ell-1}\kappa^{(\ell-j)/(2s)}
\lesssim \Delta_\ell^{-1/(2s)} \simeq \varrho_\ell^{-1/s}.
\end{align*}
Altogether, we may therefore conclude $\norm{u-U_\ell}{H^1(\Omega)}\lesssim\varrho_\ell \lesssim (\#{\mathcal T}_\ell - \#{\mathcal T}_0)^{-s}$.
\end{proof}
\begin{remark}
All convergence and optimality results in this paper are stated for the edge-based error estimator $\varrho_\ell$. Nevertheless, it is
only a notational modification to see that also the element-based error estimator $\rho_\ell$
from~\eqref{eq1:estimator:T}--\eqref{eq2:estimator:T} leads to quasi-optimally convergent versions of AFEM. To that end,
\new{Algorithm~\ref{algorithm:doerfler} is} slightly modified, and one seeks minimal sets of marked
elements ${\mathcal M}_\ell\subseteq{\mathcal T}_\ell$ instead. For each marked element $T\in{\mathcal M}_\ell$, we mark its reference edge. The convergence
result in Theorem~\ref{thm:contraction} and the optimality result in Theorem~\ref{thm:quasioptimality} hold accordingly.
\end{remark}
\section{Some Remarks on the 3D Case}
\label{section:remarks3d}%
\noindent
So far, we have only considered a 2D model problem~\eqref{eq:strongform}. In 3D, one additional difficulty is that the regularity
assumption $g\in H^1(\Gamma_D)$ is not sufficient to guarantee continuity of $g$. Therefore, nodal interpolation can no longer be used
to discretize $g\approx g_\ell$ and to define the Dirichlet data oscillations $\oscD{\ell}$.
If we do not use nodal interpolation to approximate $g\approx g_\ell$, the estimator reduction estimate~\eqref{eq:reduction} becomes
\begin{align}\label{eq*:reduction}
\varrho_{\ell+1}^2
\le q\,\varrho_\ell^2 + \c{reduction}\norm{U_{\ell+1}-U_\ell}{H^1(\Omega)}^2,
\end{align}
where $\c{reduction}>0$ additionally depends on $\Omega$. The reason for this is that the analysis provides an additional term $\norm{g_{\ell+1}-g_\ell}{H^{1/2}(\Gamma_D)}^2$ on the right-hand side of~\eqref{eq:reduction} since we lose the
orthogonality relation~\eqref{eq:nodal:orthogonality} which is used in the form
\begin{align*}
\norm{h_{\ell+1}^{1/2}(g-g_{\ell+1})'}{L^2(\Gamma_D)}^2
&\le \norm{h_{\ell+1}^{1/2}(g-g_{\ell+1})'}{L^2(\Gamma_D)}^2
+ \norm{h_{\ell+1}^{1/2}(g_{\ell+1}-g_{\ell})'}{L^2(\Gamma_D)}^2\\
&= \norm{h_{\ell+1}^{1/2}(g-g_{\ell})'}{L^2(\Gamma_D)}^2.
\end{align*}
Instead, an inverse estimate and the Rellich compactness theorem yield
\begin{align*}
\norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2 + \norm{h_{\ell}^{1/2}(g_{\ell+1}-g_\ell)'}{L^2(\Gamma_D)}^2
&\lesssim
\norm{\nabla(U_{\ell+1}-U_\ell)}{L^2(\Omega)}^2 + \norm{g_{\ell+1}-g_\ell}{H^{1/2}(\Gamma_D)}^2\\
&\simeq \norm{U_{\ell+1}-U_\ell}{H^1(\Omega)}^2
\end{align*}
which proves~\eqref{eq*:reduction}.
Note that this estimate holds for \emph{any} discretization of $g\approx g_\ell\in{\mathcal S}^1({\mathcal E}_\ell^{D})$ and even in 3D, where the arclength derivative $(\cdot)'$ is replaced by the surface gradient $\nabla_\Gamma(\cdot)$; we refer to~\cite{ghs} for the inverse estimate.%
A possible choice for $g_\ell$ is $g_\ell = \Pi_\ell g$, where $\Pi_\ell:L^2(\Gamma_D)\to{\mathcal S}^1({\mathcal E}_\ell^{D})$ is the
$L^2$-orthogonal projection~\cite{bcd}. \new{Alternatively, one may choose $g_\ell = P_\ell g$, where $P_\ell: H^{1/2}(\Gamma_D) \rightarrow {\mathcal S}^1({\mathcal E}_\ell^{D})$ denotes the
Scott-Zhang projection~\cite{sv}.}
Note that newest vertex bisection of ${\mathcal T}_\ell$ and hence of ${\mathcal E}_\ell^D$ ensures that $\Pi_\ell$ is a stable projection with respect to the
$H^1(\Gamma_D)$-norm~\cite{kpp}. In~\cite{kp}, we prove \new{for either choice} the approximation estimate
\begin{align}
\norm{g-g_\ell}{H^{1/2}(\Gamma_D)}
\lesssim \norm{h_\ell^{1/2}\nabla_\Gamma(g-g_\ell)}{L^2(\Gamma_D)}
=: \oscD{\ell}.
\end{align}
Moreover, we show that, for $g_\ell=\Pi_\ell g$, the a~priori limit $g_\infty:=\lim_\ell g_\ell$ exists strongly in $H^\alpha(\Gamma_D)$
for $0\le\alpha<1$ and even weakly in $H^1(\Gamma_D)$
provided that the discrete spaces ${\mathcal S}^1({\mathcal E}_\ell^{D})$ are nested, i.e.\ ${\mathcal S}^1({\mathcal E}_\ell^{D})\subseteq{\mathcal S}^1({\mathcal T}_{\ell+1}|_{\Gamma_D})$ for all $\ell\in{\mathbb N}_0$. Note, however, that this is always the case for adaptive mesh-refining algorithms. In particular, we have
\begin{align}\label{eq:nestedness}
{\mathcal S}^1({\mathcal T}_\ell) \subseteq {\mathcal S}^1({\mathcal T}_{\ell+1})
\quad\text{for all }\ell\in{\mathbb N}_0.
\end{align}
In the following, we even aim to prove that nestedness~\eqref{eq:nestedness} implies the existence of the a~priori limit $\lim_\ell U_\ell$ in $H^1(\Omega)$. To that end, we need the following lemma.%
\begin{lemma}[a~priori convergence of Scott-Zhang projection]
\label{lem:apriori:sz} We recall the Scott-Zhang projection $P_\ell$ \new{onto ${\mathcal S}^1({\mathcal T}_\ell)$} and make the additional
assumption that the edges $E_z$ are chosen appropriately, i.e.\ for $\omega_{\ell,z} \subset \bigcup({\mathcal T}_\ell \cap {\mathcal T}_{\ell+1})$ we ensure
that the same edge $E_{z}$ is chosen for both operators $P_\ell$ and $P_{\ell+1}$.
Then, the Scott-Zhang interpolants $v_\ell:=P_\ell v\in{\mathcal S}^1({\mathcal T}_\ell)$ of arbitrary
$v\in H^1(\Omega)$ converge to some a~priori limit in $H^1(\Omega)$, i.e.\
there holds
\begin{align}\label{eq:apriori:sz}
\norm{P_\infty v-P_\ell v}{H^1(\Omega)}
\xrightarrow{\ell\to\infty}0
\end{align}
for a certain element $P_\infty v\in{\mathcal S}^1({\mathcal T}_\infty):= \overline{\bigcup_{\ell\in{\mathbb N}_0}{\mathcal S}^1({\mathcal T}_\ell)}$.
\end{lemma}
\begin{proof}
We follow the ideas from~\cite{msv} and define the following subsets of $\Omega$:
\begin{align*}
\Omega_\ell^0 &:= \textstyle{\bigcup\big\{T \in {\mathcal T}_\ell\,:\, \omega_{\ell}(T) \subset \bigcup\big(\bigcup_{i=0}^\infty \bigcap_{j=i}^\infty {\mathcal T}_j\big)\big\}},\\
\Omega_{\ell} &:= \textstyle{\bigcup\{T \in {\mathcal T}_\ell\,:\, \text{There exists }k \geq 0\text{ s.t. } \omega_{\ell}(T)\text{ is at least uniformly refined in }{\mathcal T}_{\ell+k} \}},\\
\Omega_{\ell}^* &:= \Omega\setminus(\Omega_{\ell} \cup \Omega_\ell^0),
\end{align*}
where $\omega_{\ell}(\omega):=\bigcup\{T \in {\mathcal T}_\ell \,:\, T\cap \omega \neq \emptyset\}$ for all measurable $\omega \subset \Omega$.
According to~\cite[Corollary 4.1]{msv}, it holds that
\begin{align}
\label{eq1:apriorisz}
\lim_{\ell\to\infty}\norm{\chi_{\Omega_\ell}h_\ell}{L^\infty(\Omega)}=0.
\end{align}
Let $\varepsilon>0$ be arbitrary. Since the space $H^2(\Omega)$ is dense in $H^1(\Omega)$, we find $v_\varepsilon \in H^2(\Omega)$ such that
$\norm{v-v_\varepsilon}{H^1(\Omega)} \leq \varepsilon$. Due to local approximation and stability properties of $P_\ell$, we obtain
\begin{align*}
\norm{(1-P_\ell)v}{H^1(\Omega_{\ell})}\lesssim \norm{(1-P_\ell)v_\varepsilon}{H^1(\Omega_{\ell})}+\varepsilon\leq \norm{h_\ell\, D^2 v_\varepsilon}{L^2(\omega_\ell(\Omega_{\ell}))}
+\varepsilon,
\end{align*}
cf.~\cite{sz}.
By use of~\eqref{eq1:apriorisz}, we may choose $\ell_0 \in {\mathbb N}$ sufficiently large
to guarantee
$\norm{h_\ell\,D^2 v_\varepsilon}{L^2(\omega_\ell(\Omega_{\ell}))}\leq
\norm{h_\ell}{L^\infty(\omega_\ell(\Omega_{\ell}))}\norm{D^2 v_\varepsilon}{L^2(\Omega)}\leq \varepsilon$ for all $\ell \geq \ell_0$. Then, there holds
\begin{align}
\label{eq:szunif}
\norm{(1-P_\ell)v}{H^1(\Omega_{\ell})}\lesssim \varepsilon \quad \text{for all }\ell \geq \ell_0.
\end{align}
There holds $\lim_{\ell\to\infty}|\Omega_\ell^*|=0$, cf.~\cite[Proposition 4.2]{msv}, and this provides the existence of $\ell_1 \in {\mathbb N}$ such that
\begin{align}
\label{eq:omstar}
\norm{v}{H^1(\omega_\ell(\Omega_{\ell}^*))}\leq \varepsilon\quad \text{for all } \ell \geq \ell_1
\end{align}
due to the non-concentration of Lebesgue functions.
With these preparations, we finally aim at proving that $P_\ell v$ is a Cauchy sequence in $H^1(\Omega)$. To that end, let $\ell \geq \max\{\ell_0,\ell_1\}$ and $k\geq 0$ be arbitrary.
First, we use that for any $T \in {\mathcal T}_\ell$, $(P_\ell v)|_T$ depends only on $v|_{\omega_\ell(T)}$. Then, by definition of $\Omega_\ell^0$ and
our assumption on the definition of $P_\ell$ and $P_{\ell+k}$ on ${\mathcal T}_\ell \cap {\mathcal T}_{\ell+k}$, we obtain
\begin{align}
\label{eq1:szapriori}
\norm{P_\ell v- P_{\ell+k} v }{H^1(\Omega_\ell^0)} = 0.
\end{align}
Second, due to the local stability of $P_\ell$ and~\eqref{eq:omstar}, there holds
\begin{align}
\label{eq2:szapriori}
\begin{array}{rcl}
\norm{P_\ell v - P_{\ell+k} v}{H^1(\Omega_{\ell}^*)} &\leq& \norm{P_\ell v}{H^1(\Omega_{\ell}^*)}+\norm{P_{\ell+k} v}{H^1(\Omega_{\ell}^*)}\\
&\lesssim& \norm{v}{H^1(\omega_\ell(\Omega_{\ell}^*))}+\norm{v}{H^1(\omega_{\ell+k}(\Omega_{\ell}^*))}\\
&\leq& 2 \norm{v}{H^1(\omega_\ell(\Omega_{\ell}^*))} \leq 2 \varepsilon.
\end{array}
\end{align}
Third, we proceed by exploiting~\eqref{eq:szunif}. We have
\begin{align}
\label{eq3:szapriori}
\norm{P_\ell v - P_{\ell+k} v}{H^1(\Omega_{\ell})} \leq \norm{P_\ell v - v}{H^1(\Omega_{\ell})}+\normHe{v-P_{\ell+k}v}{\Omega_{\ell}}
\lesssim \varepsilon.
\end{align}
Combining the estimates from~\eqref{eq1:szapriori}--\eqref{eq3:szapriori}, we conclude
$\norm{P_\ell v - P_{\ell+k} v}{H^1(\Omega)} \lesssim \varepsilon$,
i.e.\ $(P_\ell v)$ is a Cauchy sequence in $H^1(\Omega)$ and hence convergent.
\end{proof}
Now, we are able to prove a~priori convergence of $U_\ell$ towards some
a~priori limit $u_\infty$.
\begin{proposition}[a~priori convergence of $U_\ell$]\label{prop:apriori}
Suppose that the discrete spaces satisfy nestedness~\eqref{eq:nestedness} and that $U_\ell\in{\mathcal S}^1({\mathcal T}_\ell)$ solves~\eqref{eq:galerkin} with $g_\ell=\Pi_\ell g$ and $\Pi_\ell:L^2(\Gamma_D)\to{\mathcal S}^1({\mathcal E}_\ell^{D})$ the $L^2$-projection. Then, the a~priori limit $u_\infty:=\lim\limits_{\ell\to\infty}U_\ell\in H^1(\Omega)$
exists.
\end{proposition}
\begin{proof}
For $g_\ell\in H^{1/2}(\Gamma)$, we consider the continuous auxiliary problem
\begin{align*}
-\Delta w_\ell&=0 \quad\text{in }\Omega,\\
w_\ell &= g_\ell\quad\text{on }\Gamma_D,\\
\partial_nw_\ell &=0 \quad\text{on }\Gamma_N.
\end{align*}
Let $w_\ell\in H^1(\Omega)$ be the unique (weak) solution and note that the trace $\widehat g_\ell:=w_\ell|_\Gamma \in H^{1/2}(\Gamma)$ provides an extension of $g_\ell$ with
\begin{align*}
\norm{\widehat g_\ell}{H^{1/2}(\Gamma)}
\le \norm{w_\ell}{H^1(\Omega)}
\lesssim \norm{g_\ell}{H^{1/2}(\Gamma_D)}
\le \norm{\widehat g_\ell}{H^{1/2}(\Gamma)}.
\end{align*}
For arbitrary $k,\ell\in{\mathbb N}$, the same type of arguments proves
\begin{align*}
\norm{\widehat g_\ell-\widehat g_k}{H^{1/2}(\Gamma)}
\simeq \norm{g_\ell-g_k}{H^{1/2}(\Gamma_D)}.
\end{align*}
Since $(g_\ell)$ is a Cauchy sequence in $H^{1/2}(\Gamma_D)$, cf.~\cite{kp}, we obtain that $(\widehat g_\ell)$ is a Cauchy sequence in $H^{1/2}(\Gamma)$, whence convergent with limit $\widehat g_\infty\in H^{1/2}(\Gamma)$.
Second, note that $({\mathcal L}_\ell\widehat g_\ell)|_{\Gamma_D} = g_\ell$, where ${\mathcal L}_\ell=P_\ell{\mathcal L}$ denotes the discrete lifting from~\eqref{eq:discreteLifting}.
Therefore, $\widetilde U_\ell := U_\ell - {\mathcal L}_\ell \widehat g_\ell \in {\mathcal S}^1_D({\mathcal T}_\ell)$ is the unique solution of the variational form
\begin{align}
\label{eq1:weak3d}
\dual{\nabla\widetilde U_\ell}{\nabla V_\ell}_\Omega=\dual{\nabla u}{\nabla V_\ell}_\Omega-\dual{\nabla{\mathcal L}_\ell \widehat g_\ell}{\nabla V_\ell}_\Omega \quad \text{ for all } V_\ell \in {\mathcal S}^1_D({\mathcal T}_\ell).
\end{align}
Third, Lemma~\ref{lem:apriori:sz} implies
\begin{align*}
\normHe{{\mathcal L}_\ell \widehat g_\ell -P_\infty {\mathcal L} \widehat g_\infty}{\Omega}
&\leq \normHe{P_\ell({\mathcal L} \widehat g_\ell - {\mathcal L} \widehat g_\infty)}{\Omega}
+ \normHe{P_\ell {\mathcal L} \widehat g_\infty - P_\infty {\mathcal L} \widehat g_\infty}{\Omega}\\
&\lesssim\normHeh{\widehat g_\ell - \widehat g_\infty}{\Gamma}
+\normHe{P_\ell {\mathcal L} \widehat g_\infty - P_\infty {\mathcal L} \widehat g_\infty}{\Omega}
\xrightarrow{\ell\to\infty} 0.
\end{align*}
Fourth, let $\widetilde U_{\ell,\infty} \in {\mathcal S}^1_D({\mathcal T}_\ell)$ be the unique solution of the discrete auxiliary problem
\begin{align}
\label{eq2:weak3d}
\dual{\nabla\widetilde U_{\ell,\infty}}{\nabla V_\ell}_\Omega=\dual{\nabla u}{\nabla V_\ell}_\Omega-\dual{\nabla P_\infty{\mathcal L} \widehat g_\infty}{\nabla V_\ell}_\Omega
\quad \text{ for all } V_\ell \in {\mathcal S}^1_D({\mathcal T}_\ell).
\end{align}
Due to the nestedness of the ansatz spaces ${\mathcal S}^1_D({\mathcal T}_\ell)$, we derive a priori convergence $\widetilde U_{\ell,\infty} \xrightarrow{\ell\to\infty} \widetilde u_\infty\in H^1(\Omega)$, where $\widetilde u_\infty$ denotes the Galerkin solution with respect to the closure of $\bigcup^\infty_{\ell = 0} {\mathcal S}^1_D({\mathcal T}_\ell)$ in $H^1_0(\Omega)$ , see e.g.~\cite[Lemma~6.1]{bv}. With the stability of~\eqref{eq1:weak3d} and~\eqref{eq2:weak3d}, we obtain
\begin{align*}
\norm{\nabla(\widetilde U_{\ell,\infty} - \widetilde U_{\ell})}{L^2(\Omega)} \lesssim \normHe{{\mathcal L}_\ell \widehat g_\ell- P_\infty {\mathcal L} \widehat g_\infty}{\Omega} \xrightarrow{\ell\to\infty} 0,
\end{align*}
and therefore $\widetilde U_\ell \xrightarrow{\ell\to\infty} \widetilde u_\infty$ in $H^1(\Omega)$.
Finally, we conclude
\begin{align*}
U_\ell = \widetilde U_\ell + {\mathcal L}_\ell \widehat g_\ell \xrightarrow{\ell\to\infty} \widetilde u_\infty + P_\infty {\mathcal L} \widehat g_\infty =:u_\infty \in H^1(\Omega),
\end{align*}
which concludes the proof.
\end{proof}
\new{
\begin{remark}
Note that Proposition~\ref{prop:apriori} also holds if the Scott-Zhang projection is used to discretize $g \approx g_\ell = P_\ell g$.
This immediately follows from Lemma~\ref{lem:apriori:sz}, since $g_\ell = (P_\ell {\mathcal L} g)|_{\Gamma_D} \to (P_\infty {\mathcal L} g)|_{\Gamma_D}$ as $\ell\to\infty$.
\end{remark}
}
\new{
\begin{theorem}\label{thm:3Dconvergence}
Suppose that either the $L^2$-projection $g_\ell = \Pi_\ell g$ or the Scott-Zhang operator $g_\ell = P_\ell g$
is used to discretize the Dirichlet data $g\in H^1(\Gamma)$. Then, Algorithm~\ref{algorithm:doerfler}
guarantees $\lim_\ell \norm{u - U_\ell}{H^1(\Omega)} = 0$ for both 2D and 3D.
\end{theorem}
}
\new{
\begin{proof}
With Proposition~\ref{prop:apriori} and the estimator reduction~\eqref{eq*:reduction}, we obtain
\begin{align*}
\varrho_{\ell+1}^2 \le q\,\varrho_\ell^2 + \alpha_\ell,
\quad\text{where}\quad 0<q<1
\text{ and }\alpha_\ell\ge0
\text{ with }\alpha_\ell\xrightarrow{\ell\to\infty}0.
\end{align*}
From this and elementary calculus, we deduce estimator convergence $\lim_\ell\varrho_\ell = 0$,
cf.~\cite{afp} for the concept of estimator reduction.
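For the reader's convenience, we sketch this elementary argument: since $0<q<1$ and $(\alpha_\ell)$ is a bounded sequence, the above recursion shows that $(\varrho_\ell^2)$ is bounded, and taking the limit superior yields
\begin{align*}
\limsup_{\ell\to\infty}\varrho_{\ell}^2
= \limsup_{\ell\to\infty}\varrho_{\ell+1}^2
\le q\,\limsup_{\ell\to\infty}\varrho_\ell^2 + \lim_{\ell\to\infty}\alpha_\ell
= q\,\limsup_{\ell\to\infty}\varrho_\ell^2,
\end{align*}
whence $(1-q)\limsup_{\ell\to\infty}\varrho_\ell^2\le0$ and therefore $\lim_{\ell\to\infty}\varrho_\ell=0$.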
According to reliability of $\varrho_\ell$, this yields convergence of the adaptive algorithm.
\end{proof}
}
Note, however, that this convergence result is much weaker than the contraction result of
Theorem~\ref{thm:contraction}. With the techniques of the present paper, it is unclear how to prove a contraction result if the
additional orthogonality relation~\eqref{eq:nodal:orthogonality} fails to hold.
\begin{figure}[h]
\begin{center}%
\includegraphics[width=40mm]{znum_Zmesh.eps}
\hspace{20mm}
\includegraphics[width=40mm]{znum_Zmeshadap.eps}
\caption{Z-shaped domain with initial mesh ${\mathcal T}_0$ and adaptively generated mesh ${\mathcal T}_9$ with $N = 10966$ for $\theta = 0.5$ in Algorithm~\ref{algorithm:doerfler}. The Dirichlet boundary $\Gamma_D$ is marked with a solid line, whereas the dashed line denotes the Neumann boundary $\Gamma\backslash\Gamma_D$.}
\label{fig:meshes}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\psfrag{t02}{$\theta = 0.2$ (Alg.~\ref{algorithm:doerfler})}
\psfrag{t05}{$\theta = 0.5$ (Alg.~\ref{algorithm:doerfler})}
\psfrag{t08}{$\theta = 0.8$ (Alg.~\ref{algorithm:doerfler})}
\psfrag{uniform}{uniform}
\psfrag{t02m}{$\theta = 0.2$ (mod.)}
\psfrag{t05m}{$\theta = 0.5$ (mod.)}
\psfrag{t08m}{$\theta = 0.8$ (mod.)}
\psfrag{O12}[cc][cc][1][-25]{$\mathcal{O}(N^{-1/2})$}
\psfrag{O27}[tc][bc][1][-15]{$\mathcal{O}(N^{-2/7})$}
\psfrag{O34}[cc][cc][1][-23]{$\mathcal{O}(N^{-3/4})$}
\includegraphics[width=170mm]{znum_thetacomp.eps}
\caption{Numerical results for $\varrho_\ell$ for uniform and adaptive mesh-refinement with Algorithm~\ref{algorithm:doerfler}
resp.\ \new{the modified D\"orfler marking} and $\theta \in \{0.2, 0.5, 0.8\}$,
plotted over the number of ele\-ments $N = \# {\mathcal T}_\ell$.}
\label{fig:convRho}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\psfrag{aneumannOsc}{$\eta_{N,\ell}$ (adap.)}
\psfrag{ajumpsN}{$\eta_{\Omega,\ell}$ (adap.)}
\psfrag{ujumpsN}{$\eta_{\Omega,\ell}$ (unif.)}
\psfrag{uneumannOsc}{$\eta_{N,\ell}$ (unif.)}
\psfrag{adirOsc}{$\oscD\ell$ (adap.)}
\psfrag{udirOsc}{$\oscD\ell$ (unif.)}
\psfrag{aedgeOsc}{$\osc\ell$ (adap.)}
\psfrag{uedgeOsc}{$\osc\ell$ (unif.)}
\psfrag{O12}[cc][cc][1][-15]{$\mathcal{O}(N^{-1/2})$}
\psfrag{O27}[cc][cc][1][-10]{$\mathcal{O}(N^{-2/7})$}
\psfrag{O34}[cc][cc][1][-23]{$\mathcal{O}(N^{-3/4})$}
\includegraphics[width=170mm]{znum_estparts.eps}
\caption{Numerical results for $\eta_{\Omega,\ell}$, $\oscD\ell$, and $\eta_{N,\ell}$ for uniform and adaptive mesh-refinement with
Algorithm~\ref{algorithm:doerfler} and $\theta = 0.5$, plotted over the number of ele\-ments $N = \# {\mathcal T}_\ell$. Adaptive refinement leads to optimal
convergence rates.}
\label{fig:convTotal}
\end{center}
\end{figure}
\section{Numerical Experiment}
\label{section:numerics}%
\subsection{Example with known solution}
\noindent
On the Z-shaped domain $\Omega=(-1, 1)^2\backslash {\rm conv}\{(0,0),$\linebreak$ (-1,-1), (0,-1)\}$, we consider the mixed boun\-dary value
problem~\eqref{eq:strongform}, where the partition of the boundary $\Gamma=\partial\Omega$ into Dirichlet boundary $\Gamma_D$ and
Neumann boundary $\Gamma_N$ as well as the initial mesh are shown in Figure~\ref{fig:meshes}. We prescribe the exact solution $u(x)$
in polar coordinates by
\begin{align}
u(x) = r^{4/7}\cos(4\varphi/7)
\quad\text{for }x=r\,(\cos\varphi,\sin\varphi).
\end{align}
Then, $f=-\Delta u\equiv0$, and the solution $u$ as well as its Dirichlet data $g = u|_{\Gamma_D}$ admit a generic singularity at the
reentrant corner $r = 0$.
Figure~\ref{fig:convRho} shows a comparison between uniform and adaptive mesh refinement. For \new{the algorithm} based on the
modified D\"orfler marking, we use $\theta:= \vartheta = \theta_1 = \theta_2$. For \new{both algorithms}, we then vary the adaptivity parameter $\theta$
between $0.2$ and $0.8$. We observe that both adaptive algorithms lead to the optimal convergence rate $\mathcal{O}(N^{-1/2})$
for all choices of $\theta$, whereas uniform refinement leads only to suboptimal convergence behaviour of approximately $\mathcal{O}(N^{-2/7})$.
Note that due to $f \equiv 0$, we have $\osc\ell \equiv 0$ in this example.
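These rates are consistent with standard regularity theory for corner singularities: since $u\simeq r^{4/7}$ near the reentrant corner, there holds $u\in H^{1+4/7-\varepsilon}(\Omega)$ for all $\varepsilon>0$, so that uniform refinement with $h_\ell\simeq N^{-1/2}$ can only be expected to yield
\begin{align*}
\norm{\nabla(u-U_\ell)}{L^2(\Omega)} = {\mathcal O}(h_\ell^{4/7}) = {\mathcal O}(N^{-2/7}),
\end{align*}
in agreement with the observed rate, whereas adaptive refinement recovers the optimal rate ${\mathcal O}(N^{-1/2})$.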
In Figure~\ref{fig:convTotal}, we compare the jump terms
\[
\eta_{\Omega,\ell}^2 := \sum_{E\in{\mathcal E}_\ell^\Omega}|E|\norm{[\partial_nU_\ell]}{L^2(E)}^2,
\]
the Dirichlet data oscillations $\oscD\ell$, and the Neumann jump terms
\[
\eta_{N,\ell}^2 := \sum_{E \in {\mathcal E}_\ell^N} |E|\norm{\phi - \partial_nU_\ell}{L^2(E)}^2
\]
for uniform and adaptive refinement.
Due to the corner singularity at $r = 0$, uniform refinement leads to a suboptimal convergence behaviour for $\eta_{\Omega,\ell}$ and even for $\oscD\ell$ and $\eta_{N,\ell}$, i.e.\ all contributions of $\varrho_\ell^2 = \eta_{\Omega,\ell}^2 + \eta_{N,\ell}^2 + \oscD{\ell}$ show the same poor convergence rate of approximately ${\mathcal O}(N^{-2/7})$. For adaptive mesh-refinement, we observe that the optimal order of convergence is retained, namely $\varrho_\ell \simeq \eta_\ell = {\mathcal O}(N^{-1/2})$. Moreover, we even observe optimal convergence behaviour $\oscD{\ell} \simeq \eta_{N,\ell} = {\mathcal O}(N^{-3/4})$ for the boundary contributions of $\varrho_\ell$.
Finally, in Figure~\ref{fig:meshes}, the initial mesh ${\mathcal T}_0$ and the adaptively generated mesh ${\mathcal T}_9$ with $N=10966$ elements
are visualized. As expected, adaptive refinement is essentially concentrated around the reentrant corner $r = 0$.
\subsection{Example with unknown solution}
On the L-shaped domain $\Omega=(-1,1)^2\setminus \big((-1,0)\times (0,1)\big)$, we consider the mixed boundary value problem~\eqref{eq:strongform}. The initial configuration with Dirichlet boundary $\Gamma_D$, Neumann boundary $\Gamma_N$, as well as the initial mesh is shown in Figure~\ref{fig:meshes2}.
For the unknown solution $u\in H^1(\Omega)$, we prescribe in polar coordinates with respect to $(0,0)$
\begin{align*}
g &= u|_{\Gamma_D} = r^{2/3}\sin(2\varphi/3)\quad\text{on }\Gamma_D,\\
\phi&= \partial_n u = 0\qquad\quad\quad\;\,\qquad\text{on }\Gamma_N,\\
f&=-\Delta u = |1-r|^{-1/4}\quad\;\,\,\text{in }\Omega.
\end{align*}
There holds $g\in H^1(\Gamma_D)$, $\phi\in L^2(\Gamma_N)$, and $f\in L^2(\Omega)$. Note that the Dirichlet data $g$ has a singularity at the reentrant corner $(0,0)$, whereas the volume force $f$ is singular along the circle around $(0,0)$ with radius $r=1$.
Again, we compare the standard D\"orfler marking strategy as well as the modified D\"orfler marking with the uniform approach.
\begin{figure}
\begin{center}
\includegraphics[width=40mm]{lnum_Lmesh.eps}
\hspace{20mm}
\includegraphics[width=40mm]{lnum_Lmeshadap.eps}
\caption{L-shaped domain with initial mesh ${\mathcal T}_0$ and adaptively generated mesh ${\mathcal T}_9$ with $N = 12177$ for $\theta = 0.5$ in Algorithm~\ref{algorithm:doerfler}. The Dirichlet boundary $\Gamma_D$ is marked with a solid line, whereas the dashed line denotes the Neumann boundary $\Gamma\backslash\Gamma_D$.}
\label{fig:meshes2}
\end{center}
\end{figure}
Figure~\ref{fig:convRho2} shows a comparison between uniform and adaptive mesh refinement. The parameters $\theta=\vartheta=\theta_1=\theta_2$ are varied between $0.2$ and $0.8$. Both adaptive algorithms lead to optimal convergence rate $\mathcal{O}(N^{-1/2})$ for all choices of $\theta$, whereas uniform refinement leads only to a suboptimal rate of $\mathcal{O}(N^{-1/3})$.
\begin{figure}
\begin{center}
\psfrag{t02}{$\theta = 0.2$ (Alg.~\ref{algorithm:doerfler})}
\psfrag{t05}{$\theta = 0.5$ (Alg.~\ref{algorithm:doerfler})}
\psfrag{t08}{$\theta = 0.8$ (Alg.~\ref{algorithm:doerfler})}
\psfrag{uniform}{uniform}
\psfrag{t02m}{$\theta = 0.2$ (mod.)}
\psfrag{t05m}{$\theta = 0.5$ (mod.)}
\psfrag{t08m}{$\theta = 0.8$ (mod.)}
\psfrag{O12}[tc][bc][1][-25]{$\mathcal{O}(N^{-1/2})$}
\psfrag{O13}[cc][cc][1][-15]{$\mathcal{O}(N^{-1/3})$}
\psfrag{O34}{$\hspace{-1.2cm}\mathcal{O}(N^{-3/4})$}
\includegraphics[width=140mm]{lnum_thetacomp.eps}
\caption{Numerical results for $\varrho_\ell$ for uniform and adaptive mesh-refinement with Algorithm~\ref{algorithm:doerfler}
resp.\ \new{the modified D\"orfler marking} and $\theta \in \{0.2, 0.5, 0.8\}$,
plotted over the number of ele\-ments $N = \# {\mathcal T}_\ell$.}
\label{fig:convRho2}
\end{center}
\end{figure}
In Figure~\ref{fig:convTotal2}, we compare the estimator contributions which (in contrast to the previous example) include additional volume oscillations $\osc\ell$. Due to the data singularities, as well as the singularity introduced by the change of the boundary condition, uniform refinement leads only to suboptimal convergence rates for all estimator contributions. For adaptive mesh-refinement, we observe that the optimal order of convergence is retained. This means $\varrho_\ell \simeq \eta_\ell = {\mathcal O}(N^{-1/2})$ and includes even optimal convergence behaviour $\oscD{\ell} \simeq \eta_{N,\ell} = {\mathcal O}(N^{-3/4})$ for the boundary contributions of $\varrho_\ell$.
In Figure~\ref{fig:meshes2}, one observes the adaptive refinement towards the singularity in the reentrant
corner as well as the circular singularity of $f$ and the singularities which stem from the change of boundary conditions.
\begin{figure}
\begin{center}
\psfrag{aneumannOsc}{$\eta_{N,\ell}$ (adap.)}
\psfrag{ajumpsN}{$\eta_{\Omega,\ell}$ (adap.)}
\psfrag{ujumpsN}{$\eta_{\Omega,\ell}$ (unif.)}
\psfrag{uneumannOsc}{$\eta_{N,\ell}$ (unif.)}
\psfrag{adirOsc}{$\oscD\ell$ (adap.)}
\psfrag{udirOsc}{$\oscD\ell$ (unif.)}
\psfrag{aedgeOsc}{$\osc\ell$ (adap.)}
\psfrag{uedgeOsc}{$\osc\ell$ (unif.)}
\psfrag{O12}[tc][bc][1][-17]{$\mathcal{O}(N^{-1/2})$}
\psfrag{O13}[cc][cc][1][-12]{$\mathcal{O}(N^{-1/3})$}
\psfrag{O34}[tc][bc][1][-25]{$\mathcal{O}(N^{-3/4})$}
\includegraphics[width=140mm]{lnum_estparts.eps}
\caption{Numerical results for $\eta_{\Omega,\ell}$, $\oscD\ell$, and $\eta_{N,\ell}$ for uniform and adaptive mesh-refinement with
Algorithm~\ref{algorithm:doerfler} and $\theta = 0.5$, plotted over the number of ele\-ments $N = \# {\mathcal T}_\ell$. Adaptive refinement leads to optimal
convergence rates.}
\label{fig:convTotal2}
\end{center}
\end{figure}
\textbf{Acknowledgement.} The authors M.F. and D.P. are funded by the Austrian Science Fund (FWF) under grant P21732 \textit{Adaptive Boundary Element Method}, which is gratefully acknowledged. M.P. acknowledges support through the project \textit{Micromagnetic Simulations and Computational Design of Future Devices}, funded by the Viennese Science and Technology Fund (WWTF) under grant MA09-029.
|
1,116,691,499,866 | arxiv | \section{Introduction}
Despite the fact that the Standard Model (SM) has been very successful in describing most of elementary particle phenomenology, the Higgs sector of the theory remains unknown so far, and there is no fundamental reason to assume that the Higgs sector must be minimal, i.e., contain only one Higgs doublet. The simplest extension compatible with gauge invariance is the two-Higgs-doublet model (2HDM), which consists of adding a second Higgs doublet with the same quantum numbers as the first one.
Similarly, the minimal supersymmetric Standard Model (MSSM) contains a second Higgs doublet. In the MSSM, two Higgs doublets are introduced in order to cancel the anomaly and to give masses to the fermions. The introduction of a second Higgs doublet inevitably means that a charged Higgs boson appears in the physical spectrum. It is therefore very important to study the effects of charged scalar particles.
The branching fractions of $\bar{B}\to D\ell\bar{\nu_{\ell}}$ and $\bar{B}\to D^{*}\ell\bar{\nu_{\ell}}$ have been measured at the B factories, where $\ell$ denotes $e$, $\mu$ or $\tau$. We define $R(D^{(*)})$ as the ratios of the branching fractions, that is,
\begin{align}
R(D^{(*)})=\frac{\mathcal{B}(B\to D^{(*)}\tau \bar{\nu}_{\tau})}{\mathcal{B}(B\to D^{(*)}(e{\rm~or~}\mu) \bar{\nu})}.\label{Rdef}
\end{align}
Using the ratio of two branching fractions lowers the hadronic uncertainty. The theoretical predictions in the Standard Model for $\bar{B}\to D^{(*)}\tau\bar{\nu}_{\tau}$, evaluated using the heavy-quark effective theory (HQET), are
\begin{align}
R(D)_{\rm HQET}&=0.310 \pm 0.011,\\
R(D^{*})_{\rm HQET}&=0.253 \pm 0.003.
\end{align}
These are consistent with the results in Refs. \cite{Tanaka:2010se,Fajfer:2012vx}. $R(D)$ is also evaluated by using hadronic form factors computed in unquenched lattice QCD as $R(D)_{\rm lat}=0.316(12)(7)$, where the errors are statistical and total systematic, respectively \cite{Bailey:2012jg}. In Ref. \cite{Becirevic:2012jf}, $R(D)$ is evaluated by using results of HQET and lattice QCD as $R(D)_{\rm HQET+lat}=0.31(2)$. These theoretical predictions are consistent with each other within their errors. The recent experimental results of $R(D^{(*)})$ by BABAR \cite{Lees:2012xj} are
\begin{align}
R(D)_{\rm exp}&= 0.440 \pm 0.058 \pm 0.042,\label{RDexp} \\
R(D^{*})_{\rm exp}&= 0.332 \pm 0.024 \pm 0.018,\label{RDsexp}
\end{align}
which exceed the Standard Model expectations by 1.9$\sigma$ and 2.6$\sigma$, respectively.
In this paper, we consider an effective weak Hamiltonian such as
\begin{align}
\mathcal{H}_{{\rm eff}}^{(b\to c\ell\bar{\nu}_{\ell})}&=4\frac{G_{F}V_{cb}}{\sqrt{2}}
[\mathcal{O}_{V_L}+m_{\ell}C_{S_R}\mathcal{O}_{S_R}+m_{\ell}C_{S_L}\mathcal{O}_{S_L}]+{\rm H.c}.,\label{Heff}\\
\mathcal{O}_{V_L}&=(\bar{c}\gamma^{\mu}P_L b) (\bar{\ell} \gamma_{\mu}P_L \nu_{\ell}),\\
\mathcal{O}_{S_R}&=(\bar{c}P_R b) (\bar{\ell} P_L \nu_{\ell}),\\
\mathcal{O}_{S_L}&=(\bar{c}P_L b) (\bar{\ell} P_L \nu_{\ell}),
\end{align}
where $P_{R,L}$ are projection operators onto states of positive and negative chirality. We assume that the neutrino helicity is only negative. This type of Hamiltonian, or a more general one, has been studied in Refs. \cite{Nierste:2008qe,Fajfer:2012vx,Becirevic:2012jf,Bailey:2012jg,Fajfer:2012jt,Crivellin:2012ye,Datta:2012qk,Choudhury:2012hn,Celis:2012dk,He:2012zp,Chen:2005gr} by using several observables, e.g., $R(D^{(*)})$ as well as the $q^2$ distributions of the $R$ ratios and angular asymmetries in $\bar{B}\to D^{(*)}\tau \bar{\nu}_{\tau}$, where $q^2=(p_B-p_{D^{(*)}})^2$.
Since a tauon decays into a light meson (lepton) with neutrino(s), measurements of the angular distribution of the tauon in $\bar{B}\to D^{(*)}\tau \bar{\nu}_{\tau}$ are difficult. However, the angular dependence of $\bar{B}\to D^{(*)}\tau \bar{\nu}_{\tau}$ is important in the search for new physics (NP) effects. We therefore study the relations among the coefficients $C_{S_{R,L}}$ and the forward-backward asymmetries in $\bar{B}\to D^{(*)}\tau(\to \pi\nu_{\tau})\bar{\nu}_{\tau}$, $\bar{B}\to D^{(*)}\tau(\to \rho\nu_{\tau})\bar{\nu}_{\tau}$, and $\bar{B}\to D^{(*)}\tau(\to a_1\nu_{\tau})\bar{\nu}_{\tau}$, and show that it is possible to determine them almost completely by using the ratios of the branching fractions and the forward-backward asymmetries of these modes.
\section{$R(D^{(*)})$ and Forward-Backward Asymmetries}
We use the quantities the ratios $R(D^{(*)})$ defined as (\ref{Rdef}) and the forward-backward asymmetries $A_{FB}$ defined as
\begin{align}
A_{FB}(D^{(*)},M)&=
\frac
{\left(\int_{0}^{1}-\int_{-1}^{0}\right)d\cos \theta_{D^{(*)},M}
\frac{d\Gamma_{D^{(*)},M}}{d\cos \theta_{D^{(*)},M}}
}
{\Gamma_{D^{(*)},M}},\\
d\Gamma_{D^{(*)},M} &= d\Gamma(\bar{B}\to D^{(*)}\tau(\to M\nu_{\tau}) \bar{\nu}_{\tau}),\\
M&=\pi,~\rho~{\rm or}~a_1,
\end{align}
\begin{figure}
\begin{center}
\includegraphics*[width=7cm]{fig1.eps}
\caption{{\footnotesize
$\theta_{D^{(*)},M}$ is the angle between the direction of $D^{(*)}$ and $M(=\pi,~\rho,~a_1)$ in the $\tau-\bar{\nu}_{\tau}$ rest frame.
}}
\label{fig1}
\end{center}
\end{figure}
where $\theta_{D^{(*)},M}$ is the angle between the direction of the $D^{(*)}$ and the $M$ in the $\tau-\bar{\nu}_{\tau}$ rest frame, as seen in Figure \ref{fig1}. The $q^2$ and angular distributions of $\bar{B}\to D^{(*)}\tau\bar{\nu}_{\tau}$ have been analyzed in Refs.~\cite{Fajfer:2012vx,Datta:2012qk,Chen:2005gr}. The differential decay rate for $\bar{B}\to D \tau(\to \pi\nu_{\tau})\bar{\nu}_{\tau}$ can be found in Ref.~\cite{Nierste:2008qe}.
The differential decay rates are written as
\begin{align}
d\Gamma(\bar{B}\to D^{(*)}\tau \bar{\nu}_{\tau})
=\frac{1}{2m_B}d\Phi_3 \times \sum_{\lambda_{\tau}(,\lambda_{D^*})} |\mathcal{M}^{\lambda_{\tau}}_{(\lambda_{D^*})}(q^2,\cos\theta_{\tau})|^2,
\end{align}
where $\lambda_{\tau}$ is the $\tau$ helicity, $\lambda_{D^*}$ is the $D^*$ polarization, $m_B$ is the $B$ meson mass, $q^{\mu} = (p_B-p_{D^{(*)}})^{\mu}$ and $p_{B,D^{(*)}}$ are the $\bar{B},D^{(*)}$ meson
four-momenta. The three-body phase space $d\Phi_3$ is written as
\begin{align}
d\Phi_3=\frac{\sqrt{Q_{+}Q_{-}}}{256\pi^3m_B^2}\left(1-\frac{m_{\tau}^2}{q^2}\right)dq^2 d\cos\theta_{\tau},
\end{align}
where $Q_{\pm}=(m_B \pm m_{D^{(*)}})^2-q^2$ and $m_{D^{(*)}}$ are the $D^{(*)}$ meson masses. Hadronic amplitudes in the matrix elements $\mathcal{M}=\langle D^{(*)}\ell\bar{\nu}_{\ell}|\mathcal{H}_{\rm eff}|\bar{B}\rangle$ are defined as
\begin{align}
\langle D(v_D)|\bar{c}\gamma^{\mu}b|\bar{B}(v_B)\rangle
&=\sqrt{m_Bm_D}[h_{+}(w)(v_B+v_D)^{\mu}+h_{-}(w)(v_B-v_D)^{\mu}],\\
\langle D^{*}(v_{D^{*}},\epsilon)|\bar{c}\gamma^{\mu}b|\bar{B}(v_B)\rangle
&=i\sqrt{m_Bm_{D^{*}}}h_V(w)\varepsilon^{\mu\nu\rho\sigma}\epsilon^*_{\nu}(v_{D^{*}})_{\rho}(v_B)_{\sigma},\\
\langle D^{*}(v_D^{*},\epsilon)|\bar{c}\gamma^{\mu}\gamma_5 b|\bar{B}(v_B)\rangle
&=\sqrt{m_Bm_{D^{*}}}[h_{A_1}(w)(w+1)\epsilon^{*\mu}
-h_{A_2}(w)(\epsilon^{*}\cdot v_B)v_B^{\mu} \nonumber\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~-h_{A_3}(w)(\epsilon^{*}\cdot v_B)v_{D^{*}}^{\mu}],
\end{align}
where $v_B=p_B/m_B,~v_{D^{(*)}}=p_{D^{(*)}}/m_{D^{(*)}}$ and $w=v_B\cdot v_D$. In the heavy quark limit (HQL), the form factors become related to a single universal form factor, the Isgur-Wise function $\xi(w)$ \cite{Isgur:1989vq,Neubert:1991xw}:
\begin{align}
h_{+}(w)=h_{V}(w)=h_{A_1}(w)=h_{A_3}(w)=\xi(w),\nonumber\\
h_{-}(w)=h_{A_2}(w)=0~~({\rm HQL}).~~~~~~~~
\end{align}
The form factors, including short-distance and $1/m_Q$ corrections, are known \cite{Caprini:1997mu}. These form factors involve unknown parameters, which have been determined from experimental data \cite{Asner:2010qj,Dungel:2010uk}. We relate the (pseudo)scalar hadronic amplitudes to the (axial)vector hadronic amplitudes by using the equations of motion as
\begin{align}
q_{\mu}\langle D|\bar{c}\gamma^{\mu}b|\bar{B}\rangle&=(m_b-m_c)\langle D|\bar{c}b|\bar{B}\rangle,\label{EOM1}\\
q_{\mu}\langle D^{*}|\bar{c}\gamma^{\mu}\gamma_5 b|\bar{B}\rangle&=-(m_b+m_c)\langle D^{*}|\bar{c}\gamma_5 b|\bar{B}\rangle.\label{EOM2}
\end{align}
The other hadronic amplitudes are equal to zero due to parity and time-reversal invariance, i.e., $\langle D|\bar{c}\gamma_5 b|\bar{B}\rangle =$$\langle D|\bar{c}\gamma^{\mu}\gamma_5 b|\bar{B}\rangle =$$\langle D^{*}|\bar{c}b|\bar{B}\rangle = 0$. See the Appendix for more details.
\section{Numerical results}
\begin{figure*}[t]
\begin{center}
\includegraphics*[width=6.2cm]{fig2a.eps}~~~~~~~
\includegraphics*[width=6.2cm]{fig2b.eps}
\caption{{\footnotesize We fix ${\rm Im}(C_{S_{R,L}})=0$. In the left (right) panel, the blue line shows the $C_{S_{R,L}}$ dependence of $R(D)$ ($R(D^*)$), the red line shows the $C_{S_{R,L}}$ dependence of $A_{FB}(D,\pi)$~ ($A_{FB}(D^*,\pi)$), and the light blue band corresponds to the measurement of $R(D)$ $(R(D^*))$ at 95\% C.L..}}
\label{fig2}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics*[width=6.2cm]{fig3a.eps}~~~~~~~
\includegraphics*[width=6.2cm]{fig3b.eps}
\caption{{\footnotesize The $C_{S_{R,L}}$ dependence of $A_{FB}(D^{(*)},\pi)$(red solid lines), $A_{FB}(D^{(*)},\rho)$(green dashed lines) and $A_{FB}(D^{(*)},a_1)$(gray dotted-dashed lines).}}
\label{fig3}
\end{center}
\end{figure*}
We evaluate $R(D^{(*)})$ and the forward-backward asymmetries as functions of $C_{S_{R,L}}$ for $\bar{B}\to D^{(*)}\tau\bar{\nu}_{\tau}$ by using heavy-quark symmetry with short-distance and $1/m_Q$ corrections as
\begin{align}
R(D) &=\left[0.310(11)\right]\widetilde{\mathcal{R}},\label{R}\\
R(D^*)&=\left[0.253(3) \right]\widetilde{\mathcal{R}}^*,\\
A_{FB}(D,\pi) &=\left[0.54 +4.0{\rm Re}(C^+)\right]\big/~\widetilde{\mathcal{R}},\\
A_{FB}(D,\rho) &=\left[0.32 +2.4{\rm Re}(C^+)\right]\big/~\widetilde{\mathcal{R}},\\
A_{FB}(D,a_1) &=\left[0.25 +1.9{\rm Re}(C^+)\right]\big/~\widetilde{\mathcal{R}},\\
A_{FB}(D^*,\pi) &=\left[ 0.28 +1.3 {\rm Re}(C^-)\right]\big/~\widetilde{\mathcal{R}}^*,\\
A_{FB}(D^*,\rho) &=\left[ 0.092 +0.79{\rm Re}(C^-)\right]\big/~\widetilde{\mathcal{R}}^*,\\
A_{FB}(D^*,a_1) &=\left[-0.055 +0.62{\rm Re}(C^-)\right]\big/~\widetilde{\mathcal{R}}^*,\label{AFB}
\end{align}
where
\begin{align}
\widetilde{\mathcal{R}} &=1 +9.0{\rm Re}(C^+) + 37|C^+|^2,\\
\widetilde{\mathcal{R}}^*&=1 +1.1{\rm Re}(C^-) +3.9|C^-|^2,\\
C^{+}&\equiv {\rm GeV}^2 \times \left(\frac{C_{S_R}+C_{S_L}}{m_b-m_c}\right),\\
C^{-}&\equiv {\rm GeV}^2 \times \left(\frac{C_{S_R}-C_{S_L}}{m_b+m_c}\right),
\end{align}
and $m_{b,c}$ are the $b,c$ quark masses. We use $m_b$ and $m_c$ in the $\overline{{\rm MS}}$ scheme at the scale $m_b$ \cite{Xing:2007fb} for the figures in this paper. Errors of a few percent, due to the measurements and the hadronic uncertainties, remain. These quantities determine ${\rm Re}(C_{S_{R,L}})$ and $|{\rm Im}(C_{S_R}\pm C_{S_L})|$.
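Since Eqs. (\ref{R})-(\ref{AFB}) are simple polynomials in $C^{\pm}$, they can be evaluated directly. The following short Python snippet (an illustrative sketch, not part of our analysis code; variable and function names are chosen here for convenience) computes $R(D^{(*)})$ and $A_{FB}(D^{(*)},\pi)$ for given complex values of $C^{+}$ and $C^{-}$:
\begin{verbatim}
# Illustrative evaluation of Eqs. (R)-(AFB): R(D), R(D*) and
# A_FB(D, pi), A_FB(D*, pi) as functions of the dimensionless
# combinations C+ and C- defined in the text.
def r_tilde(c_plus):
    return 1.0 + 9.0 * c_plus.real + 37.0 * abs(c_plus) ** 2

def r_tilde_star(c_minus):
    return 1.0 + 1.1 * c_minus.real + 3.9 * abs(c_minus) ** 2

def observables(c_plus, c_minus):
    rt, rts = r_tilde(c_plus), r_tilde_star(c_minus)
    return {
        "R(D)":         0.310 * rt,
        "R(D*)":        0.253 * rts,
        "A_FB(D, pi)":  (0.54 + 4.0 * c_plus.real) / rt,
        "A_FB(D*, pi)": (0.28 + 1.3 * c_minus.real) / rts,
    }

# SM point (C+ = C- = 0) reproduces R(D) = 0.310 and R(D*) = 0.253.
print(observables(0 + 0j, 0 + 0j))
\end{verbatim}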
In Figure \ref{fig2}, we fix ${\rm Im}(C_{S_{R,L}})=0$. In the left (right) panel, the blue line shows the $C_{S_{R,L}}$ dependence of $R(D)$ ($R(D^*)$), the red line shows the $C_{S_{R,L}}$ dependence of $A_{FB}(D,\pi)$~ ($A_{FB}(D^*,\pi)$), and the light blue band corresponds to the measurement of $R(D)$ $(R(D^*))$ at 95\% C.L..
\begin{figure*}
\begin{center}
\includegraphics*[width=5.5cm]{fig4a.eps}~~~~~~~~
\includegraphics*[width=5.5cm]{fig4b.eps}
\caption{{\footnotesize We fix ${\rm Im}(C_{S_{R,L}})=0$. In the left panel, the light blue or light red regions show the 99\% C.L. allowed regions for $R(D)$ or $R(D^{*})$. In the right panel, four green regions are the 99\% C.L. allowed regions for $R(D)$ and $R(D^{*})$. The regions (\rnum{1}), (\rnum{2}), (\rnum{3}) and (\rnum{4}) are classified by $A_{FB}$ as seen in Eq. (\ref{reg1})-(\ref{reg4}).}}
\label{fig4}
\end{center}
\end{figure*}
In Figure \ref{fig3}, we show the results of the $A_{FB}$ for all modes. In Figure \ref{fig4}, we fix ${\rm Im}(C_{S_{R,L}})=0$. In the left panel, the light blue or light red regions show the 99\% C.L. allowed regions for $R(D)$ or $R(D^{*})$. In the right panel, the four green regions are the 99\% C.L. allowed regions for $R(D)$ and $R(D^{*})$. This fourfold ambiguity cannot be solved by using only $R(D^{(*)})$. However, $A_{FB}$ can discriminate among these regions. For example, the regions (\rnum{1}), (\rnum{2}), (\rnum{3}), and (\rnum{4}) are classified by $A_{FB}(D^{(*)},\pi)$ as
\begin{align}
A_{FB}(D,\pi)\hspace{0.3em}\raisebox{0.4ex}{$>$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0&,~~A_{FB}(D^*,\pi)\hspace{0.3em}\raisebox{0.4ex}{$>$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0.1~~~~({\rm \rnum{1}}),\label{reg1}\\
A_{FB}(D,\pi)\hspace{0.3em}\raisebox{0.4ex}{$>$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0&,~~A_{FB}(D^*,\pi)\hspace{0.3em}\raisebox{0.4ex}{$<$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0.1~~~~({\rm \rnum{2}}),\\
A_{FB}(D,\pi)\hspace{0.3em}\raisebox{0.4ex}{$<$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0&,~~A_{FB}(D^*,\pi)\hspace{0.3em}\raisebox{0.4ex}{$>$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0.1~~~~({\rm \rnum{3}}),\\
A_{FB}(D,\pi)\hspace{0.3em}\raisebox{0.4ex}{$<$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0&,~~A_{FB}(D^*,\pi)\hspace{0.3em}\raisebox{0.4ex}{$<$}\hspace{-0.75em}\raisebox{-.7ex}{$\sim$}\hspace{0.3em} 0.1~~~~({\rm \rnum{4}}).\label{reg4}
\end{align}
Since a tauon decays into a light meson (lepton) with neutrino(s), it is difficult to measure the forward-backward asymmetries for tauons and $D^{(*)}$ mesons:
\begin{align}
A_{FB}(D^{(*)})&=
\frac
{\left(\int_{0}^{1}-\int_{-1}^{0}\right)
d\cos \theta_{\tau}
\frac{d\Gamma(B\to D^{(*)}\tau\nu)}{d\cos \theta_{\tau}}
}
{\Gamma(B\to D^{(*)}\tau\nu)},
\end{align}
where $\theta_{\tau}$ is the angle between the tauon and $D^{(*)}$ meson as seen in Figure \ref{fig1}. However, it is not impossible to analyze $A_{FB}(D^{(*)})$ by using information about the position where the tauon decays. An analysis to reconstruct the tauon would start at the LHCb experiment \cite{LHCb}. Then, we evaluate $A_{FB}(D^{(*)})$ as functions of $C_{S_{R,L}}$ as
\begin{align}
A_{FB}(D) &=
\left[ 0.358(1)\right]\left[1 +7.1{\rm Re}(C^+)\right]\big/~\widetilde{\mathcal{R}},\\
A_{FB}(D^*)&=
\left[-0.065(8)\right]\left[1 - 13{\rm Re}(C^-)\right]\big/~\widetilde{\mathcal{R}}^*.
\end{align}
\section{Conclusion}
We have studied the decay modes $\bar{B}\to D^{(*)}\tau\bar{\nu}_{\tau}$ including charged scalar effects, and have shown that it is possible to determine ${\rm Re}(C_{S_{R,L}})$ and $|{\rm Im}(C_{S_R}\pm C_{S_L})|$ by combining the ratios of branching fractions $R(D^{(*)})$ with the forward-backward asymmetries $A_{FB}(D^{(*)},\pi)$, $A_{FB}(D^{(*)},\rho)$, and $A_{FB}(D^{(*)},a_1)$. For the effective weak Hamiltonian (\ref{Heff}), we evaluate $R$ and $A_{FB}$ as functions of $C_{S_{R,L}}$ in Eqs. (\ref{R})-(\ref{AFB}).
As seen in Figure \ref{fig4}, four allowed regions for $R(D^{(*)})$ exist. This fourfold ambiguity cannot be solved by using only $R(D^{(*)})$. However, $A_{FB}$ can discriminate among these regions, because the $C_{S_{R,L}}$ dependences of $R$ and $A_{FB}$ are different, as seen in Figure \ref{fig2}.
\section*{Acknowledgements}
We would like to thank Ryoutaro Watanabe, Yuichiro Kiyo, Jernej Fesel Kamenik and Minoru Tanaka for useful comments.
|
1,116,691,499,867 | arxiv |
\subsection{Datasets}
The model is trained on two publicly available datasets that are benchmarks in the field: the ETH dataset \cite{pellegrini2009you}, with subsets named ETH and HOTEL, and the UCY dataset \cite{lerner2007crowds}, with subsets named ZARA1, ZARA2, and UNIV. The trajectories are sampled at intervals of $0.4$ seconds. The model observes 8 time steps, which corresponds to 3.2 seconds, and predicts the next 12 time steps, which corresponds to 4.8 seconds.
To further evaluate and demonstrate Grouptron's performance in densely populated scenarios, we create UNIV-N test sets, where $N$ is the minimum number of people present simultaneously at each time step in the test sets. Each UNIV-N test set contains all time steps from the original UNIV test set that have at least $N$ people in the scene simultaneously. In this way, we create the test sets UNIV-40, UNIV-45, and UNIV-50, in which, at each time step, there are at least 40, 45, and 50 people in the scene, respectively. These test sets are far more challenging than the original UNIV test set because of the more complex and dynamic interactions at different scales. We train the models on the original UNIV training set and evaluate the models on the UNIV-40, UNIV-45, and UNIV-50 test sets.
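This selection step is straightforward; a minimal sketch (with illustrative names, assuming the scene is stored as a mapping from time steps to the pedestrians present) reads:
\begin{verbatim}
# Build a UNIV-N test set: keep only the time steps of the original
# UNIV test set in which at least n_min pedestrians are present.
def build_univ_n(frames, n_min):
    # frames: dict mapping time step -> list of pedestrian ids present
    return {t: p for t, p in frames.items() if len(p) >= n_min}
\end{verbatim}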
\subsection{Evaluation Metrics}
As in prior work \cite{alahi2016social,salzmann2020trajectron++,ivanovic2019trajectron}, among others, we use the following metrics to evaluate our model:
\subsubsection{Final Displacement Error}
\begin{equation}
FDE = \frac{\sum_{i\in N}|| \hat{T^i_{T_F}}-T^i_{T_F} ||_2}{N},
\end{equation}
which is the $\textit{l}_2$ distance between the predicted final position and the ground truth final position with prediction horizon $T_F$.
\subsubsection{Average Displacement Error}
\begin{equation}
ADE = \frac{\sum_{i\in N}\sum_{t\in T_F}|| \hat{T^i_t}-T^i_t ||_2}{N\times T_F},
\end{equation}
where $N$ is the total number of pedestrians, $T_F$ is the number of future timesteps we want to predict for, $\hat{T^i_{t}}$ is the predicted trajectory for pedestrian $i$ at timestep $t$. $ADE$ is the mean $\textit{l}_2$ distance between the ground truth and predicted trajectories.
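For concreteness, both metrics can be computed directly from the predicted and ground-truth positions; the following numpy sketch (array names are illustrative and not part of our implementation) shows the computation:
\begin{verbatim}
import numpy as np

def ade_fde(pred, gt):
    # pred, gt: arrays of shape (N, T_F, 2) with predicted and
    # ground-truth (x, y) positions for N pedestrians over T_F steps
    dist = np.linalg.norm(pred - gt, axis=-1)  # (N, T_F) l2 errors
    ade = dist.mean()                          # average over N and T_F
    fde = dist[:, -1].mean()                   # error at the final step
    return ade, fde
\end{verbatim}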
\subsection{Experiment settings}
The Grouptron model is implemented using PyTorch. The model is trained using an Intel i7 CPU and NVIDIA GTX 1080 Ti GPUs for 100 epochs. The batch size is 256. For the HOTEL, UNIV, ZARA1, and ZARA2 datasets, the output dimension of the group-level and scene-level encoders is 16. For the ETH dataset, we set the output dimension of the group-level and scene-level encoders to 8. This is because the ETH test set contains only 2 timesteps with at least 5 people in the scene, out of the total 1161 timesteps. In comparison, the training set contains 1910 timesteps, out of 4976 in total, with at least 5 people. Thus, to help the model learn generalizable representations in this case, we decrease the output dimension of the STGCNs to 8. The learning rate is set to $0.001$ initially and decayed exponentially every epoch with a decay rate of 0.9999. The model is trained using the Adam optimizer and gradients are clipped at $1.0$.
\subsection{Evaluation of the Group Clustering Algorithm}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{Figure/cluster.png}
\caption{An example of pedestrian group clustering. Pedestrians are divided into five groups. Different colors indicate different groups.}
\label{group}
\vspace{-3mm}
\end{figure}
In Fig. \ref{group}, it is shown that the groups created by the agglomerative clustering method are very close to the natural definition of pedestrian groups. We can see that pedestrians 5 and 6 are travelling in a highly correlated fashion and pedestrians 1, 2, 3, and 4's trajectories are highly similar as well. In both cases, the clustering algorithm is able to correctly cluster these pedestrians into their corresponding groups. This shows that by using agglomerative clustering based on Hausdorff distances, Grouptron is able to successfully generate naturally defined groups.
To quantitatively evaluate the groups generated by the algorithm, we selected 10 random time steps from the ETH training dataset. We invited 10 human volunteers to label groups for these time steps and used the agglomerative clustering method described in Section III-C to generate group clusters. For both human-generated and algorithm-generated groups, the number of groups to be formed is computed using Equation 1. We then compute the average Sørensen–Dice coefficient between human-generated and algorithm-generated groups. That is, we use
\begin{equation}
DSC = \frac{1}{T}\sum_{t\in T}\frac{1}{H}\sum_{h\in H}\frac{2\times |G_{h,t}\cap G_{a,t}|}{|G_{h,t}|+|G_{a,t}|},
\end{equation}
where $T$ is the number of timesteps, $H$ is the total number of human annotators, $G_{a, t}$ is the grouping created by the agglomerative clustering method for time step $t$, $G_{h,t}$ is the grouping created by humans for time step $t$, and $|G_{h}\cap G_{a}|$ measures how many of the groups by humans and the algorithm are exactly the same.
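A minimal sketch of the per-timestep, per-annotator term in this score (treating each grouping as a set of groups and each group as a set of pedestrian ids; the function name is illustrative) is:
\begin{verbatim}
def dice(groups_a, groups_h):
    # groups_a, groups_h: groupings from the algorithm and from one
    # human annotator; each group is a set of pedestrian ids
    a = {frozenset(g) for g in groups_a}
    h = {frozenset(g) for g in groups_h}
    return 2 * len(a & h) / (len(a) + len(h))  # exact-match Dice term
\end{verbatim}
The reported score averages this quantity over all annotators and all selected time steps.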
The average Dice coefficient between human annotators and the algorithm is 0.72. The higher the Dice coefficient, the more similar are the groups created by the agglomerative clustering method and the human annotators. Thus, the average Dice coefficient of 0.72 indicates that the groups output by the agglomerative clustering method are very similar to human-generated ones.
\subsection{Quantitative Results}
We compare Grouptron's performance with state-of-the-art methods and common baselines in the field in terms of the FDE and ADE metrics, and the results are shown in Table \ref{table1}. Overall, Grouptron outperforms all state-of-the-art methods with considerable decreases in displacement errors. Since Grouptron is built on Trajectron++, we also compare the FDE and ADE values of Grouptron and Trajectron++. We find that Grouptron outperforms Trajectron++ on all 5 datasets by considerable margins. Particularly, on the ETH dataset, Grouptron achieves an FDE of 1.56m, which is 7.1\% better than the FDE value of 1.68m by Trajectron++. Furthermore, Grouptron achieves an FDE of 0.97m on the UNIV dataset. This is a 9.3\% reduction in FDE when compared with the value of 1.07m by Trajectron++ on the same dataset.
Moreover, we compare Grouptron's performance with Trajectron++'s in dense crowds with the UNIV-N datasets in Table \ref{table2}. Overall, Grouptron outperforms Trajectron++ on all the UNIV-N test sets by enormous margins. In particular, Grouptron achieves an FDE of 1.04m and an ADE of 0.40m on the UNIV-45 test set, which contains all timesteps from the original UNIV test set that have at least 45 pedestrians in the scene at the same time. This is a 16.1\% improvement in FDE and a 13.0\% improvement in ADE when compared with the corresponding values of Trajectron++ on the same test set.
Furthermore, we notice that the state-of-the-art method, Trajectron++, performs substantially worse as the number of pedestrians in the scene increases. Specifically, Trajectron++'s FDE increases from 1.07m to 1.25m as the minimum number of pedestrians in the scene increases from 1 to 50. In contrast, Grouptron's FDE remains relatively stable as the number of pedestrians increases. This shows that Grouptron performs much better and is more robust in densely populated scenarios.
\subsection{Qualitative Analysis}
\begin{figure}[t]
\vspace{3mm}
\centering
\includegraphics[scale=0.33]{Figure/res1.png}
\caption{Examples of Grouptron's predictions on the UNIV dataset. The predictions are the most likely trajectory predictions of the model. Green arrows indicate pedestrians' current positions and directions. Black dashed lines indicate trajectory histories. Grey dashed lines indicate ground truth future trajectories. Green lines are Grouptron's predicted future trajectories. }
\label{qualatative}
\vspace{-3mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.30]{Figure/res_cmpare.png}
\caption{Comparisons of Grouptron's and Trajectron++'s distributions of 20 most likely future trajectories on examples of the UNIV dataset. Rows indicate different examples and columns represent different methods. Orange stars indicate pedestrians of interest. Yellow stars indicate their companions in the same pedestrian groups. Green arrows indicate pedestrians' current positions and directions. Black dashed lines indicate trajectory histories. Grey dashed lines indicate ground truth future trajectories. The comparisons show that Grouptron is able to produce predictions with higher quality and with better confidence levels.}
\label{qualatative2}
\vspace{-3mm}
\end{figure}
Fig. \ref{qualatative} shows Grouptron's most likely predictions for some examples of the UNIV dataset. Fig. \ref{qualatative}a shows two pedestrian groups crossing paths. We can see that Grouptron's predictions are consistent with the groups. Furthermore, it accurately predicts when and where the two groups' trajectories intersect. Fig. \ref{qualatative}b shows a case where pedestrians are forming groups and merging paths. Grouptron again successfully predicts the formation of this group. Fig. \ref{qualatative}c and Fig. \ref{qualatative}d show Grouptron's performance in densely populated scenes with more than 40 pedestrians. Even in these extremely challenging scenarios for state-of-the-art methods, Grouptron still produces predictions of high quality and the predictions are consistent with pedestrian groups. Furthermore, we can see that even when pedestrian groups are crossing paths or influencing each other, Grouptron successfully predicts these highly dynamic and complex scenarios.
In Fig. \ref{qualatative2}, we compare Grouptron's distributions of 20 most likely predictions with those of Trajectron++'s. Comparing Fig. \ref{qualatative2}a with \ref{qualatative2}b and \ref{qualatative2}c with \ref{qualatative2}d shows that Grouptron's predictions for the pedestrians of interest reflect the interactions within pedestrian groups more accurately. Furthermore, Grouptron's prediction distributions have much smaller ranges, indicating that it is much more confident with prediction outcomes.
\subsection{Problem Formulation}
\input{Formulation}
\subsection{Model Overview}
Rooted in the CVAE architecture in Trajectron++ \cite{salzmann2020trajectron++}, we design a more expressive multi-scale scene encoding structure, which actively takes into consideration the group-level and scene-level information for better representation of crowded scenes where groups of pedestrians are present.
Concretely, we leverage spatio-temporal graphs for each level to model information and interactions at the corresponding level. We refer to our model as Grouptron. Our model is illustrated in Fig. \ref{network}. In this subsection, we provide an overview of the architecture, and in
Section III-C, we elaborate on the details of the Grouptron model.
At the individual level, we construct spatio-temporal graphs for individual pedestrians. The graph is centered at the node whose trajectory we want to predict. We call it the ``current node''. Long short term memory (LSTM) networks \cite{hochreiter1997long} are used to encode this graph. We group the pedestrians with the agglomerative clustering algorithm based on Hausdorff distances \cite{atev2010clustering}. STGCN is used to encode dynamics within the groups.
At the scene level, spatio-temporal graphs are created to model dynamics among pedestrian groups and are encoded using a different STGCN. Lastly, the information across different scales is combined. A decoder is then used to obtain trajectory predictions, and the model can output the most likely trajectory or the predicted trajectory distributions.
\subsection{Multi-Scale Scene Encoder}
\subsubsection{Individual-Level Encoder}
The first level of encoding is for the individual pedestrians. We represent information at the individual level using a spatio-temporal graph for the current node. The nodes include the current node and all other nodes that are in the perception range of the current node and nodes whose perception range covers the current node. The node states are the trajectories of the nodes. The edges are directional and there is an edge $e_{i,j}$ if pedestrian $i$ is in the perception range of pedestrian $j$.
To encode the current node's history trajectory, we use an LSTM network with hidden dimension 32. To encode the edges, we first perform an element-wise sum on the states of all neighboring nodes to form a single vector that represents all neighbors of the current node. This vector is then fed into the edge LSTM network which is an LSTM network with hidden dimension 8.
In this way, we obtain two vectors: a vector encoding the trajectory history of the current node and a vector encoding the representation of all the neighbors of the current node.
\subsubsection{Pedestrian Group Clustering}
To cluster nodes into groups based on trajectories, we propose to leverage the agglomerative clustering algorithm \cite{atev2010clustering}, which uses similarity scores based on Hausdorff distances between trajectories. The number of clusters (groups) to create for each scene is determined by:
\begin{equation}
C(N) = (N+1)/2,
\end{equation}
where C is the number of clusters and N is the total number of nodes to be clustered.
Furthermore, we only include nodes with an edge to or from the current node. This is because we only want to include nodes that can potentially influence the current node, and to avoid unhelpful information from nodes that are too far away from the current node.
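A minimal sketch of this clustering step (a simplified illustration using SciPy with a plain symmetric Hausdorff distance and average linkage; function and variable names are chosen here, and the number of clusters follows Equation 1, assuming integer rounding) is:
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_groups(trajs):
    # trajs: list of arrays, each of shape (T_obs, 2), one per pedestrian
    n = len(trajs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # symmetric Hausdorff distance between two trajectories
            d[i, j] = d[j, i] = max(
                directed_hausdorff(trajs[i], trajs[j])[0],
                directed_hausdorff(trajs[j], trajs[i])[0])
    k = (n + 1) // 2                               # C(N) = (N+1)/2
    z = linkage(squareform(d), method='average')   # agglomerative step
    return fcluster(z, t=k, criterion='maxclust')  # group label per node
\end{verbatim}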
\subsubsection{Group-Level Encoder}
For each group, we create a spatio-temporal graph $G_{g, t} = (V_{g,t}, E_{g, t})$, where $t$ is the time step and $g$ is the group id. $V_{g,t}= \{v_{i,t} \mid \forall i \in \{1,...,N_{g}\}\}$ are all the nodes in the group $g$. The node states are the trajectories of the represented pedestrians. $E_{g, t}=\{e^{i,j}_t \mid \forall i,j \in \{1,...,N_{g}\}\}$ is the set of edges between the nodes in the current group, with $e^{i,j}_t = 1$ to allow maximum interaction modeling within pedestrian groups.
After forming the aforementioned graphs for each group, they are then passed to the group-level trajectory encoder to obtain the encoded vectors for nodes in each group. The group-level trajectory encoder is an STGCN proposed in \cite{yan2018spatial} and used in \cite{mohamed2020social}. We set the convolution filter size to 3 and use the same weight parameters for all the groups.
We then average the encoded vectors of all nodes in each group to obtain the representations for the corresponding groups. That is, $E_{g} = \frac{1}{N_{g}} \sum_{i=1}^{N_{g}} E_{i}$, where $E_{g}$ is the encoded vector for group $g$, $E_{i}$ is the encoded vector for node $i$ in the output from the group-level trajectory encoder, and $N_g$ is the number of nodes in group $g$.
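In code, this averaging step can be written as follows (a short PyTorch-style sketch with illustrative names, not taken from our implementation):
\begin{verbatim}
import torch

def group_embeddings(node_emb, group_ids, num_groups):
    # node_emb: (N, D) encoded vectors for the N nodes of the scene
    # group_ids: (N,) tensor assigning each node to a group index
    out = torch.zeros(num_groups, node_emb.size(1))
    for g in range(num_groups):
        out[g] = node_emb[group_ids == g].mean(dim=0)  # E_g = mean of E_i
    return out
\end{verbatim}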
\subsubsection{Scene-Level Encoder}
After obtaining the encoded vectors for each group, a scene-level spatio-temporal graph with nodes representing groups is created. That is, $G_{scene, t} = (V_{scene,t}, E_{scene, t})$. $V_{scene,t}= \{v_{g, t} \mid \forall g \in \{1,...,G\}\}$, where $G$ is the total number of groups and $t$ is the timestep. The state for each node is $E_{g}$ from the group-level trajectory encoder. $E_{scene, t}=\{e^{i,j}_t \mid \forall i,j \in \{1,...,G\}\}$ is the set of edges between the groups in the scene. Each $e^{i,j}$ is set to $1$ to allow maximum message passing between group nodes.
We then select the encoded vector corresponding to the last timestep and the group id of the current node as the scene-level encoding: $E_{scene} = E_{g, T}$, where $g$ is the group id of the current node we are encoding for and T is the total number of time steps.
\subsubsection{Multi-Scale Encoder Output}
The output of the multi-scale scene encoder is the concatenation of the following level encoded vectors: the output from the node history encoder, the output from individual-level edge encoder, and the output from the scene-Level encoder. That is, $ E_{multi} = [E_{his};E_{edge};E_{scene}]$, where $E_{his}$ is the encoded vector for the current node's history trajectory, $E_{edge}$ is the vector representing individual-level neighbors, $E_{scene}$ is the encoded vector from the scene-level encoder.
\subsection{Decoder Network}
Together with the latent variable $z$, $E_{multi}$ is passed to the decoder, which is a Gated Recurrent Unit (GRU) \cite{chung2014empirical} with 128 dimensions. The output from the GRU decoder is then fed into dynamics integration modules as control actions to output the predicted trajectory distributions, or the single most likely trajectory, depending on the task.
\subsection{Loss Functions}
We adopt the following objective function for the overall CVAE model:
\begin{equation}
\begin{aligned}
\max_{\phi,\theta,\psi} \sum_{i=1}^{N} & \mathbb{E}_{z \sim q_\phi(\cdot | \mathbf{x}_i,\mathbf{y}_i)} \left[\log p_\psi(\mathbf{y}_i | \mathbf{x}_i,z)\right] \\
& - \beta D_{KL} \left(q_\phi\left(z | \mathbf{x}_i,\mathbf{y}_i\right)||p_\theta\left(z|\mathbf{x}_i\right)\right)+\alpha I_q(\mathbf{x}|z),
\end{aligned}
\end{equation}
where $I_q$ is the mutual information between x and z under the distribution $q_\phi(\mathbf{x} | z)$. We follow the process given in \cite{zhao2019infovae} to compute $I_q$. We approximate $q_\phi\left(z | \mathbf{x}_i,\mathbf{y}_i\right)$ with $p_\theta\left(z|\mathbf{x}_i\right)$, and obtain the unconditioned latent distribution by summing out $\mathbf{x}_i$ over the batch.
\subsection{Human Trajectory Forecasting}
One of the pioneering works of human trajectory forecasting is the Social Force model \cite{helbing1995social}, which applies Newtonian forces to model human motion. Similar methods with strong priors have also been proposed \cite{antonini2006discrete}; yet, most of them rely on hand-crafted energy potentials, such as relative distances and rules, to model human motion.
Recently, machine learning methods have been applied to the problem of human trajectory forecasting to obtain models with better performance.
One line of work is to formulate this problem as a deterministic time-series regression problem and then solve it using, e.g., Gaussian Process Regression (GPR) \cite{wang2007gaussian}, inverse reinforcement learning (IRL) \cite{lee2016predicting}, and recurrent neural networks (RNNs) \cite{alahi2016social,jain2016structural,vemula2018social}.
However, the issue of these deterministic regressors is that human behavior is rarely deterministic or unimodal. Hence, generative approaches have become the state-of-the-art trajectory forecasting methods, due to recent advancements in deep generative models \cite{sohn2015learning, goodfellow2014generative} and their ability of generating distributions of potential future trajectories (instead of a single future trajectory). Most of these methods use a recurrent neural network architecture with a latent variable model, such as a conditional variational auto-encoder (CVAE) \cite{salzmann2020trajectron++,lee2017desire,ma2019wasserstein,li2021spatio}, or a generative adversarial network (GAN) \cite{gupta2018social,kosaraju2019social,sadeghian2019sophie,zhao2019multi,li2019conditional} to encode multi-modality. Compared to previous work, we not only consider multi-modality from the perspective of a single agent, but also from the group level; we take into account the phenomenon that people usually move in groups. We show that our group-aware prediction has better understanding of the scenes and achieves better forecasting performance.
\subsection{Graph Convolutional Networks}
Of the methods mentioned above, RNN-based methods have achieved better performance. However, recurrent architectures are parameter-inefficient and expensive to train \cite{bai2018empirical}. Besides, to handle spatial context, RNN-based methods need additional structures. Most of them use graph models to encode neighboring pedestrians' information since the topology of graphs is a natural way to represent interactions between pedestrians.
Graph convolutional networks (GCNs), introduced in \cite{kipf2016semi}, are more suitable for dealing with non-Euclidean data. The Social-BiGAT \cite{kosaraju2019social} introduces a graph attention network \cite{velivckovic2017graph} to model social interactions. GraphSAGE \cite{hamilton2017inductive} aggregates nodes and fuses adjacent nodes in different orders to extract node embeddings. To capture both the spatial and temporal information, Spatio-Temporal Graph Convolutional Networks (STGCN) extend the spatial
GCN to a spatio-temporal GCN for skeleton-based action recognition \cite{yan2018spatial}. STGCN is adapted by Social-STGCNN \cite{mohamed2020social} for trajectory forecasting, where trajectories are modeled by graphs with edges representing social interactions and weighted by the distances between pedestrians. A development related to our paper is the dynamic multi-scale GNN (DMGNN) \cite{li2020dynamic}, which proposes a \textit{multi-scale graph} to model human body relations and extract features at multiple scales for motion prediction. There are two kinds of sub-graphs in the multi-scale graph:
\begin{enumerate*}[label=(\roman*)]
\item single-scale graphs, which connect body components at the same scales, and
\item cross-scale graphs, which form cross-scale connections among body components.
\end{enumerate*}
Based on the multi-scale graphs, a multi-scale graph computational unit is proposed to extract and fuse features across multiple scales. Motivated by this work, we adopt the multi-scale graph strategy for dense crowd forecasting which includes scene-level graphs, group-level graphs, and individual-level graphs.
\subsection{Group-aware Prediction}
People moving in groups (such as friends, family members, etc.) is a common phenomenon and people in each group tend to exhibit similar motion patterns. Motivated by this phenomenon, group-aware methods \cite{rudenko2020human} consider the possibility of human agents being in groups or formations to have more correlated motions than independent ones. They therefore can also model reactions of agents to the moving groups. Human agents can be assigned to different groups by clustering trajectories with similar motion patterns based on methods such as $k$-means clustering \cite{zhong2015learning}, support vector clustering \cite{lawal2016support}, coherent filtering \cite{bisagno2018group}, and spectral clustering methods \cite{atev2010clustering}.
\section{Introduction}
\input{Intro}
\section{Related Work}
\input{RelatedWork}
\section{Grouptron}
\input{Methodology}
\section{Experiments}
\input{Experiments}
\section{Conclusions}
\input{Conclusions}
\bibliographystyle{IEEEtran}
\section{Introduction}
The search for effects of electromagnetic fields on light dates back to the beginnings of modern physics. In matter as a medium for light propagation, the Faraday effect~\cite{Faraday1846} has been known since the middle of the XIX century. At the turn of the century, the possibility that light interacts with an electromagnetic field even in a vacuum was taken into consideration, giving rise to a research field which is still open~\cite{Battesti2013}. The first motivation to look for such a phenomenon was the search for a magnetic moment of the photon. It was eventually realized that an entire class of vacuum nonlinear optical effects are allowed within the framework of Quantum ElectroDynamics (QED)~\cite{Battesti2013}, and, in more general terms, other Non-Linear ElectroDynamics (NLED) theoretical frameworks~\cite{Fouche2016}.
Since its inception, the Michelson-Morley interferometer has attracted the attention of experimentalists seeking to measure a differential light velocity in the presence of a magnetic field. A first experiment has been performed by Morley himself around 1898 and others followed in the first half of the XX century~\cite{Battesti2013}. In recent years this topology has been refined up to the observation of gravitational waves on Earth~\cite{Abbott2016} reaching unprecedented sensitivities in interferometry. Experimental proposals have been put forward in recent~\cite{Grote2015} and less recent years (see~[\cite{Battesti2013}] and references therein) hoping to take advantage of the technological progress in separate-arm interferometry in the domain of light and magnetic field interactions in vacuum.
In the 1970s the expected values for the index of refraction of light polarized parallel, $n_\parallel$, and perpendicular, $n_\perp$, to an applied external magnetic field were calculated~\cite{Bialynicka-Birula1970}, thanks to the previous works of Euler, Kochel, and Heisenberg~\cite{Euler1935,Heisenberg1936}. Following~\cite{Battesti2013}, one can write
\begin{align}
n_{\parallel} &= 1 + {c_{0,2}}\frac{B_{0}^2}{\mu_{0}}\\
n_{\perp} &= 1 + {4c_{2,0}}\frac{B_{0}^2}{\mu_{0}}
\end{align}
where the value of the lowest order coefficients of the development of the Heisenberg-Euler Lagrangian, $c_{2,0}$ and $c_{0,2}$, can be written as~\cite{Euler1935,Heisenberg1936}:
\begin{equation}
c_{2,0} = \frac{2\alpha^2 \hbar^3}{45 m_{e}^4 c^5},\quad c_{0,2} = 7 c_{2,0}
\end{equation}
The QED predicted change in vacuum index of refraction, both parallel and perpendicular to an applied external field $B_0 [\mathrm{T}]$, can be thus be written:
\begin{align}
\delta{n_{\parallel}} &= n_{\parallel}-1 \approx 9\times 10^{-24}\frac{B_{0}^2}{\mathrm{T}^2}\label{eqn:deltanpl}\\
\delta{n_{\perp}} &= n_{\perp}-1 \approx 5\times 10^{-24}\frac{B_{0}^2}{\mathrm{T}^2}\label{eqn:deltanpr}
\end{align}
Following \eqref{eqn:deltanpl} and \eqref{eqn:deltanpr}, $\Delta n$ can be written
\begin{equation}
\Delta n = n_{\parallel} - n_{\perp} \approx 4\times 10^{-24}\frac{{B_0}^2}{\mathrm{T}^2}.
\label{Deltan}
\end{equation}
This form is analogous to one corresponding to the Cotton-Mouton effect, the linear magnetic birefringence in a medium discovered at the beginning of the XX century and studied in detail by A. Cotton and H. Mouton~\cite{Rizzo1997}. Traditionally this measurement of $\Delta n$ is obtained via a measurement of the ellipticity, $\psi$, acquired by a linearly polarized laser beam of wavelength $\lambda$ propagating through the region, $L_B$, of magnetic field, $B$. The resulting ellipticity is due to the phase shift between the two orthogonal polarization components of the light field,
\begin{equation}
\gamma = \phi_{x} - \phi_{y},
\end{equation}
where $\phi_{x}$ is the phase accumulated by the laser field component polarized parallel to the magnetic field, and $\phi_{y}$ is that accumulated by the perpendicular component. For the measurement in vacuum, one can write the induced ellipticity
\begin{equation}
\psi = \pi\frac{L_B}{\lambda} k_\mathrm{CM} B^2,
\end{equation}
where, as predicted by QED, $k_\mathrm{CM}~\approx~4\times10^{-24}\,\mathrm{T}^{-2}$ is the so-called Cotton-Mouton constant of vacuum~\cite{Battesti2013}.
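For orientation, a single-pass estimate using field parameters of the order of those produced by the pulsed magnet described later in this paper ($B\approx18\,\mathrm{T}$ over $L_B\approx0.32\,\mathrm{m}$) and $\lambda=1064\,\mathrm{nm}$ gives
\begin{equation}
\psi \approx \pi\,\frac{0.32\,\mathrm{m}}{1.064\times10^{-6}\,\mathrm{m}}\times 4\times10^{-24}\,\mathrm{T}^{-2}\times\left(18\,\mathrm{T}\right)^2 \approx 1\times10^{-15}\,\mathrm{rad},
\end{equation}
illustrating why some form of optical enhancement of the single-pass signal is indispensable in any realistic measurement.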
In 1979 Iacopini and Zavattini proposed the use of ultra-precise polarimetry to measure the anisotropy of vacuum in the presence of an external magnetic field\cite{Zavattini1979}, putting forward the development of an instrument that is more sensitive than a separated-arm interferometer, as the measurement concerns ``the phase difference between two components of the same laser beam, and not the phase difference between two spatially separated beams''\cite{Zavattini1979}. In contrast to a differential-arm measurement, their proposal precludes the possibility of measuring $n_{\parallel}$ and $n_{\perp}$ separately, measuring only the difference, $\Delta{n}$. Since then, all attempts to measure vacuum magnetic birefringence (VMB) have been based on this seminal paper~\cite{Battesti2013}, the community agreeing implicitly with their point of view. Currently, the most advanced polarimetry experiment is the one of the PVLAS collaboration~\cite{DellaValle2016}.
In this work, we present data taken with our BMV apparatus~\cite{Cadene2014} from separate, but correlated, measurements of the noise coming from the laser amplitude fluctuations, cavity coupling noise, and cavity mirror birefringence noise. Using these data, we model the different sources of noise in cavity-enhanced polarimetry for the observation of VMB. We compare these results to the phase sensitivity of separate-arm interferometers. In section \ref{sec:seperatearm_interferometers} we present a conceptual scheme for a VMB measurement in a separate-arm Michelson interferometer. Section \ref{sec-resonantly_enhanced_birefringence} is an overview of key formulae concerning resonantly enhanced birefringence measurements using a Fabry-Perot cavity. The following section \ref{sec:measurement_of_linear_magnetic_birefringence} describes the basic principles of polarimetric measurement of the linear magnetic birefringence, with special attention to our BMV experiment~\cite{Cadene2014}. In section \ref{sec:apparent_birefringence_noise} we model the different sources contributing to the sensing noise of cavity-enhanced polarimeters. The results showing the measured/modeled sensing noise alongside the measured total birefringence sensitivity are given in section \ref{sec:results}. Finally, sections \ref{sec:discussion} and \ref{sec:conclusions} present the measured cavity birefringence fluctuations and a discussion of the possible sources of this noise, ending with perspectives and conclusions.
\section{Separate-arm interferometers}
\label{sec:seperatearm_interferometers}
\begin{figure}
\begin{center}
\includegraphics[width=0.74\columnwidth]{Michelson_Bfield.eps}
\end{center}
\vspace{-9pt}
\caption{Sketch of a conceptual Michelson-Morley interferometer setup to measure a variation of light velocity in vacuum in the presence of an external magnetic field $\mathbf{B}_\mathrm{i}$.}
\vspace{-3pt}
\label{fig:Michelson_Bfield}
\end{figure}
In figure \ref{fig:Michelson_Bfield} we show a conceptual illustration of a Michelson-Morley interferometer setup to detect a variation in the velocity of linearly polarized light propagating through birefringent vacuum in the presence of a transverse magnetic field, $\bvec{B}_\mathrm{i}$. A modulation of the differential phase between the recombined beams produces a measurable interference pattern corresponding to the difference of light travel time through the two vacuum media in the differential arms. This path-length modulation can be achieved through modulation of the light polarization angle, modulation of the magnetic field orientation, or by modulating the amplitude of the magnetic field. The resulting phase difference can be written:
\begin{equation}
\Delta{\phi} = \phi_\mathrm{i}-\phi_\mathrm{j} = \frac{2\pi}{\lambda}2L_B\delta n,
\label{PS-MMVMB}
\end{equation}
where $\delta n$ is the difference between the vacuum refractive indices of the two arms.
As a point of reference, the leading VMB polarimeter, PVLAS~\cite{DellaValle2016}, cites a sensitivity to change in index of refraction of $\tilde{n}\approx 3\times10^{-19}\,\frac{1}{\mathrm{\sqrt{Hz}}}$ around its detection frequency, $10\,\mathrm{Hz}$, using $1\,\mathrm{\mu m}$ light propagating through a region of magnetic field $L_\mathrm{B}=1.6\,\mathrm{m}$. One can back-calculate its sensitivity in terms of phase shift between the two polarization states of the laser field:
\begin{equation}
\tilde{\phi}_\mathrm{PVLAS} =\frac{2\pi}{\lambda} L_\mathrm{B}\tilde{n} \approx 3\times10^{-12} \frac{\mathrm{rad}}{\mathrm{\sqrt{Hz}}}\:\quad\mathrm{at}\;\mathrm{10\,\mathrm{Hz}}
\end{equation}
For comparison, the field of gravitational-wave detection has advanced the state of the art in precision differential-arm interferometry~\cite{ligo_instrument2015}. The $L_0 = 4\,\mathrm{km}$ long advanced LIGO (aLIGO) interferometers measure a strain noise of $\tilde{h} = \frac{\tilde{L}}{L_0}\approx8\times10^{-24}\,\frac{1}{\mathrm{\sqrt{Hz}}}$ at $\mathrm{200\,\mathrm{Hz}}$ (the center of their measurement band) increasing to $2\times10^{-22}\,\frac{1}{\mathrm{\sqrt{Hz}}}$ around $20\,\mathrm{Hz}$, the edge of their detection band\cite{Abbott2016}. For direct comparison, we write this in terms of sensitivity to phase delay between its differential arms:
\begin{align}
\tilde{\phi}_\mathrm{LIGO} =\frac{2\pi}{\lambda} L_\mathrm{0}\tilde{h} &\approx 5\times10^{-12} \frac{\mathrm{rad}}{\mathrm{\sqrt{Hz}}}\:\quad\mathrm{at}\;20\,\mathrm{Hz}\\
&\approx 2\times10^{-13} \frac{\mathrm{rad}}{\mathrm{\sqrt{Hz}}}\:\quad\mathrm{at}\;200\,\mathrm{Hz}
\end{align}
with LIGO using the same $\lambda\approx 1\,\mathrm{\mu m}$ light as PVLAS and BMV. This sensitivity is further discussed in comparison to the BMV apparatus in section \ref{sec:comparison_of_bmv_and_ligo_phase_sensitivity}.
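As a cross-check, the rounded values quoted above follow from the same conversion factor $\frac{2\pi}{\lambda}\approx5.9\times10^{6}\,\frac{\mathrm{rad}}{\mathrm{m}}$ at $\lambda=1064\,\mathrm{nm}$, for example
\begin{align}
\tilde{\phi}_\mathrm{PVLAS} &\approx 5.9\times10^{6}\times1.6\times3\times10^{-19} \approx 3\times10^{-12}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}},\nonumber\\
\tilde{\phi}_\mathrm{LIGO} &\approx 5.9\times10^{6}\times4000\times2\times10^{-22} \approx 5\times10^{-12}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}}\quad\mathrm{at}\;20\,\mathrm{Hz}.\nonumber
\end{align}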
\section{Resonantly enhanced birefringence: key formulae}
\label{sec-resonantly_enhanced_birefringence}
Polarimeter VMB searches measure the differential phase between the two polarization states of light passing through a vacuum region in the presence of a magnetic field. Current VMB experiments utilize a two-mirror resonant optical cavity to enhance the sensitivity to the differential phase shift resulting from vacuum birefringence. The transfer function for an optical cavity can be expressed in terms of the laser frequency $\omega = kc$, where $c$ is the speed of light. For a field $E_\mathrm{in} = E_{\mathrm{i}}\mathrm{e}^{-{\mathrm{i}}\omega t}$ incident on the cavity of length $L$, the intracavity field can be expressed in terms of the cavity mirrors' amplitude reflectance ($r_1$, $r_2$) and transmittance ($t_1$, $t_2$)
\begin{equation}
E_\mathrm{cav} = \frac{t_{1}\mathrm{e}^{-{\mathrm{i}}\frac{\omega L}{c}}}{1-r_{1}r_{2}\mathrm{e}^{-{\mathrm{i}}\frac{2\omega L}{c}}}E_\mathrm{in} \label{eqn-fieldxferfunc_cav}
\end{equation}
It is apparent in \eqref{eqn-fieldxferfunc_cav} that resonances occur when the laser frequency, $f$, is at integer multiples of the Free Spectral Range $(\mathrm{FSR} = \frac{c}{2L})$:
\begin{equation}
f = \frac{\omega}{2\pi} = N\frac{c}{2L} = N(\mathrm{FSR}) \label{eqn-cav_resonance_freq}
\end{equation}
The cavity finesse, $\mathcal{F}$, is determined by the optical losses in the cavity and is defined as the ratio of the FSR to the linewidth, or Full-Width at Half-Maximum $(\mathrm{FWHM})$, of the resonance peak.
\begin{equation}
\mathcal{F} \triangleq \frac{\mathrm{FSR}}{\mathrm{FWHM}} \approx \frac{\pi \sqrt{r_1 r_2}}{1-r_1 r_2}\label{eqn-Finesse_definition}
\end{equation}
The gain, $g_\mathrm{cav}$, of laser field inside an optical resonator can be written:
\begin{align}
g_\mathrm{cav} &= \left|\frac{E_\mathrm{cav}}{E_\mathrm{in}}\right| = \sqrt{\frac{t_{1}^{2}}{1-r_{1}r_{2}\left(\mathrm{e}^{{\mathrm{i}}\phi}+\mathrm{e}^{-{\mathrm{i}}\phi}\right) + r_{1}^{2}r_{2}^{2} }} \nonumber\\
&= \frac{t_{1}}{\left(1-r_{1}r_{2}\right)} \sqrt{\frac{1}{1+\frac{4r_{1}r_{2}}{\left(1-r_{1}r_{2}\right)^2}\sin^2{\left(\frac{\phi}{2}\right)}} } \label{eqn-mag_trans_field}
\end{align}
where $\phi=2kL= \frac{4\pi}{\lambda}L = 2\pi\frac{f}{\mathrm{FSR}}$ is the cavity round-trip accumulated phase. The frequency response of the amplitude can be seen more clearly to have a filtering effect by simplifying the second term in \eqref{eqn-mag_trans_field}, which we will call $\mathrm{F}_\mathrm{cav}$. Near resonance one can write
\begin{equation}
\mathrm{F}_\mathrm{cav}(f) \approx \frac{1}{\sqrt{1+\frac{4r_{1}r_{2}}{\left(1-r_{1}r_{2}\right)^2}\frac{\pi^2}{\mathrm{FSR}^2}f^2}}= \frac{1}{ \sqrt{1+\left(\frac{f}{f_\mathrm{c}}\right)^{2}} }
\end{equation}
where $f_\mathrm{c} = \frac{\mathrm{FWHM}}{2}$ is the cavity pole frequency.
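As an order-of-magnitude illustration, taking purely for reference a cavity length of roughly $2\,\mathrm{m}$ and a finesse $\mathcal{F}\approx4.4\times10^{5}$ (values close to those of the BMV cavity discussed in section \ref{sec:results}, although the exact parameters differ slightly), one finds
\begin{equation}
\mathrm{FSR} = \frac{c}{2L}\approx 75\,\mathrm{MHz},\qquad \mathrm{FWHM} = \frac{\mathrm{FSR}}{\mathcal{F}}\approx 170\,\mathrm{Hz},\qquad f_\mathrm{c}\approx 85\,\mathrm{Hz},
\end{equation}
so the cavity acts as a first-order low-pass filter on amplitude and signal fluctuations above $f_\mathrm{c}$.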
The phase accumulation $(\Phi)$ of the intracavity field near resonance is given by the expression
\begin{equation}
\Phi = \arctan\left(\frac{\Im{E_\mathrm{cav}}}{\Re{E_\mathrm{cav}}}\right) \label{eqn-Phi_cav}
\end{equation}
We can evaluate this by expanding cavity field \eqref{eqn-fieldxferfunc_cav}:
\begin{align}
\frac{E_\mathrm{cav}}{E_\mathrm{in}} &= \frac{t_{1}\mathrm{e}^{{\mathrm{i}}\frac{\phi}{2}} - t_{1}r_{1}r_{2}\mathrm{e}^{-{\mathrm{i}}\frac{\phi}{2}}}{{1-r_{1}r_{2}\left(\mathrm{e}^{{\mathrm{i}}\phi}+\mathrm{e}^{-{\mathrm{i}}\phi}\right) + r_{1}^{2}r_{2}^{2} }} \nonumber \\
&= \frac{t_{1}\cos{\frac{\phi}{2}}\left(1-r_{1}r_{2}\right)}{1-2r_{1}r_{2}\cos{\phi}+r_{1}^{2}r_{2}^{2}} + \mathrm{i}\frac{t_{1}\sin{\frac{\phi}{2}}\left(1+r_{1}r_{2}\right)}{1-2r_{1}r_{2}\cos{\phi}+r_{1}^{2}r_{2}^{2}} \label{eqn-E_cav_re_im}
\end{align}
When the laser is near the cavity resonance, the approximate total phase accumulated can thus be written in terms of the round-trip phase and the finesse:
\begin{equation}
\Phi \approx \frac{1+r_{1}r_{2}}{1-r_{1}r_{2}}\frac{\phi}{2} \approx \frac{\mathcal{F}}{\pi}\phi \\
\end{equation}
In a birefringent cavity the round-trip phase along the fast-axis, $\phi_{y}$, and slow-axis, $\phi_{x}$, for a laser on resonance leads to a total accumulated differential phase of
\begin{equation}
\Gamma = \frac{\mathcal{F}}{\pi}\phi_{x} - \frac{\mathcal{F}}{\pi}\phi_{y} = \frac{\mathcal{F}}{\pi}\gamma \label{eqn-Gamma}
\end{equation}
\section{Measurement of linear magnetic birefringence}
\label{sec:measurement_of_linear_magnetic_birefringence}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{VMB_Polarimeter_cavity_enhanced.eps}
\end{center}
\caption[Illustration of BMV]{An illustration of a cavity-enhanced VMB polarimeter. A laser is resonant in a Fabry-Perot cavity. A magnetic field is applied transverse to the direction of laser propagation producing vacuum birefringence, which is measured as the differential phase delay between the polarization components of the intracavity field. The signal is enhanced by the phase response of the cavity.}
\label{fig:VMB_Polarimeter_cavity_enhanced}
\end{figure}
The conceptual layout for a VMB search experiment is illustrated in figure \ref{fig:VMB_Polarimeter_cavity_enhanced}. The setup uses a laser field propagating in the $\uvec{z}$ direction with amplitude $E_{\mathrm{i}}$ linearly polarized to some general angle $\theta$ with respect to the x-axis. In the setup, the first polarizer is oriented to transmit light polarized in the direction $\uvec{p}$:
\begin{equation}
\uvec{p} = \cos\theta\uvec{x}+\sin\theta\uvec{y}
\end{equation}
and to reject light polarized in the orthogonal direction, $\uvec{n}$:
\begin{equation}
\uvec{n} = \sin\theta\uvec{x}-\cos\theta\uvec{y}
\end{equation}
In such a configuration the field entering the optical cavity can be described as
\begin{equation}
\bm{E}_\mathrm{in} = E_{\mathrm{i}}\mathrm{e}^{-\mathrm{i} \omega t}\uvec{p}
\end{equation}
The laser frequency is actuated to remain resonant in a Fabry-Perot optical cavity. A magnetic field is applied in the $\uvec{x}$ direction.
To provide tangible parameter values we discuss the second generation BMV experiment, which will use a pulsed field magnet called the `XXL-coil'. The temporal profile for the first test pulses of the XXL-coil can be seen in figure \ref{fig-xxl_temporal}, designed to maximize the interaction between the magnetic field and intracavity field. These pulses delivered up to $18\,\mathrm{T}$ of field over an effective length of $L_B=0.319\,\mathrm{m}$.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\columnwidth]{xxl_temporal.eps}
\end{center}
\caption[The temporal profile of the XXL-coil pulsed field magnet to be used in the second generation BMV experiment.]{The temporal profile of the XXL-coil pulsed field magnet to be used in the second generation BMV experiment. The temporal profile is selected to produce pulses below the cavity pole frequency, $f_\mathrm{c}$}
\label{fig-xxl_temporal}
\end{figure}
In this setup, the intracavity laser field sees a round-trip differential phase retardation, $\gamma_\mathrm{v}$, between its polarization components lying along the fast optical axis $(\uvec{y})$ and slow optical axis $(\uvec{x})$ of the vacuum:
\begin{align}
\gamma_\mathrm{v} &= 2\frac{2\pi}{\lambda}L_\mathrm{B}k_\mathrm{CM} B^2 \label{eqn-gamma_vac}\\
&\approx 5\times10^{-15}\,\mathrm{rad},\\
&\quad(\lambda = 1\,\mathrm{\mu m},\: B^2 L_\mathrm{B}=100\,\mathrm{T}^2\mathrm{m})\nonumber
\end{align}
Additionally, the cavity mirrors have an inherent birefringence~\cite{Bielsa2009}. In each round trip the intracavity field receives an additional phase from the end mirror $(\gamma_{\mathrm{m}2})$ as well as from the input mirror $(\gamma_{\mathrm{m}1})$, giving a total differential phase per round trip of $\gamma_\mathrm{c} = \gamma_{\mathrm{m}2} + \gamma_{\mathrm{m}1}$. The total round trip birefringence is thus
\begin{equation}
\gamma = \gamma_\mathrm{c} + \gamma_\mathrm{v}.
\end{equation}
As we have recalled earlier, \eqref{eqn-Phi_cav}-\eqref{eqn-Gamma}, the many round trips in the cavity enhances this differential phase by a factor proportional to the finesse of the cavity:
\begin{equation}
\Gamma = \frac{\mathcal{F}}{\pi}\gamma =\frac{\mathcal{F}}{\pi}(\gamma_\mathrm{c} + \gamma_\mathrm{v}), \label{eqn-Gamma_sum}
\end{equation}
with the net effect making the cavity appear as a waveplate of retardation $\Gamma$.
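To fix orders of magnitude, one can insert, purely for illustration, the finesse and the static mirror birefringence measured later in section \ref{sec:results} ($\mathcal{F}\approx4.4\times10^{5}$, so $\frac{\mathcal{F}}{\pi}\approx1.4\times10^{5}$, and $\gamma_\mathrm{c}\approx1.4\times10^{-8}\,\mathrm{rad}$):
\begin{align}
\frac{\mathcal{F}}{\pi}\gamma_\mathrm{v} &\approx 1.4\times10^{5}\times5\times10^{-15}\,\mathrm{rad} \approx 7\times10^{-10}\,\mathrm{rad},\nonumber\\
\frac{\mathcal{F}}{\pi}\gamma_\mathrm{c} &\approx 1.4\times10^{5}\times1.4\times10^{-8}\,\mathrm{rad} \approx 2\times10^{-3}\,\mathrm{rad},\nonumber
\end{align}
so the sought vacuum contribution is more than six orders of magnitude below the static mirror retardation, which is why fluctuations of the mirror birefringence are of central concern in the following sections.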
The signal is contained in the differential phase, $\Gamma$, between the two polarization states of the intracavity field,
\begin{equation}
\bm{E}_\mathrm{cav} = E_{\mathrm{c}}\mathrm{e}^{-\mathrm{i} \omega t}\left(\mathrm{e}^{\mathrm{i} \Gamma}\cos\theta\uvec{x}+\sin\theta\uvec{y}\right).
\end{equation}
This phase shift is analyzed into a measurable power change using a second polarizer called the `analyzer'. The field incident on the analyzer is directly the field transmitted from the cavity, $t_2\bm{E}_\mathrm{cav}$. The analyzer is crossed with the input polarizer, oriented to transmit the $\uvec{n}$ component and to reflect the $\uvec{p}$ component. For nominally amorphous mirrors $\Gamma$ can be very small; it is thus important to account for the imperfection of the analyzer, namely the analyzer's parallel and orthogonal polarization transmission and reflection coefficients: $t_{\parallel} \gg t_{\perp}$ and $r_{\perp} \gg r_{\parallel}$.
\subsection{The signal in transmission, $P_\mathrm{t}$}
\label{sec-the_reflected_signal}
The field reflected by the analyzer, $\bm{E}_\mathrm{t}$, is composed primarily of the light in the original polarization state, $\uvec{p}$. We can calculate this field as follows:
\begin{align}
\bm{E}_\mathrm{t} &= r_{\perp}\left(t_2 \bm{E}_\mathrm{cav}\cdot\uvec{p}\right)\uvec{p} + r_{\parallel}\left(t_2 \bm{E}_\mathrm{cav}\cdot\uvec{n}\right)\uvec{n} \nonumber \\
&= t_{2}E_\mathrm{c}\mathrm{e}^{-\mathrm{i} \omega t}\big[r_{\perp}\left(\mathrm{e}^{\mathrm{i} \Gamma}\cos^{2}\theta+\sin^{2}\theta\right)\uvec{p}\nonumber\\
&\quad+ r_{\parallel}\left(\mathrm{e}^{\mathrm{i} \Gamma}\cos\theta\sin\theta-\sin\theta\cos\theta\right)\uvec{n}\big]
\end{align}
With this field in hand, we can calculate the power reflected by the analyzer in terms of the intracavity power, $P_\mathrm{cav}$,
\begin{align}
P_\mathrm{t} &= \bm{E}_\mathrm{t}\cdot\bm{E}_\mathrm{t}^{\ast} \nonumber \\
&= T_{2}P_\mathrm{cav}\big[r_{\perp}^2(\mathrm{e}^{\mathrm{i}\Gamma}\cos^{2}\theta+\sin^{2}\theta) (\mathrm{e}^{-\mathrm{i}\Gamma}\cos^{2}\theta+\sin^{2}\theta)\uvec{p}\cdot\uvec{p} \nonumber\\
&\quad+ r_{\parallel}^{2}\sin\theta\cos\theta(\mathrm{e}^{\mathrm{i} \Gamma}-1) (\mathrm{e}^{-\mathrm{i}\Gamma}-1)\uvec{n}\cdot\uvec{n}\big]\nonumber \\
&= T_{2}P_\mathrm{cav}\bigg[R_{\perp}\left(1-\sin^2(2\theta)\sin^2\left(\frac{\Gamma}{2}\right)\right)\nonumber\\
&\quad+ R_{\parallel}\sin^2(2\theta)\sin^2\left(\frac{\Gamma}{2}\right)\bigg]
\end{align}
Now we can approximate this expression for small $\Gamma$:
\begin{align}
P_\mathrm{t} &\approx T_{2}P_\mathrm{cav}\bigg[R_{\perp}\left(1-\sin^2(2\theta)\left(\frac{\Gamma^2}{4}\right)\right)\nonumber\\
&\quad + R_{\parallel}\sin^2(2\theta)\left(\frac{\Gamma^2}{4}\right)\bigg]
\label{eqn-Pt_small_Gamma}
\end{align}
To maximize the effect of vacuum birefringence, we orient the polarization to be at an angle $\theta=\frac{\pi}{4}$ with respect to the external field.
In this case \eqref{eqn-Pt_small_Gamma} becomes
\begin{align}
P_\mathrm{t} &\approx T_{2}P_\mathrm{cav}\left[R_{\perp}\left(1-\frac{\Gamma^2}{4}\right) + R_{\parallel}\frac{\Gamma^2}{4}\right]\nonumber\\
&\approx T_{2}P_\mathrm{cav}\left[R_{\perp}\left(1-\frac{\Gamma^2}{4}\right)\right], \label{eqn-Pt_small_Gamma_45deg}
\end{align}
noting that $R_{\parallel}\ll 1$ and $\Gamma^2 \ll 1$.
Here we see that $\frac{\Gamma^2}{4}$ is the fractional power moved from the $\uvec{p}$ to the $\uvec{n}$ polarization state. We see this as a decrease in the $R_{\perp}$ term. This is the total power in the other polarization state, but is suppressed by the small $R_{\parallel}$ of the analyzer.
\subsection{The signal in extinction, $P_\mathrm{ext}$}
The signal transmitted through the analyzer is composed of the light that has been transformed into the $\uvec{n}$ polarization state by the intracavity birefringence, as well as unwanted light in the $\uvec{p}$ polarization state that leaks through the analyzer. The field in this extinction channel can be described as
\begin{align}
\bm{E}_\mathrm{ext} &= t_{\perp}(t_2 \bm{E}_\mathrm{cav}\cdot\uvec{p})\uvec{p} + t_{\parallel}(t_2 \bm{E}_\mathrm{cav}\cdot\uvec{n})\uvec{n} \nonumber \\
&= t_{2}E_\mathrm{c}\mathrm{e}^{-\mathrm{i} \omega t}\big[t_{\perp}(\mathrm{e}^{\mathrm{i} \Gamma}\cos^{2}\theta+\sin^{2}\theta)\uvec{p} \nonumber\\
&+ t_{\parallel}(\mathrm{e}^{\mathrm{i} \Gamma}\cos\theta\sin\theta-\sin\theta\cos\theta)\uvec{n}\big]
\end{align}
We can now write down the power transmitted through the analyzer in terms of the intracavity power, $P_\mathrm{cav}$:
\begin{align}
P_\mathrm{ext} &= \bm{E}_\mathrm{ext}\cdot\bm{E}_\mathrm{ext}^{\ast} \nonumber \\
&= T_2 P_\mathrm{cav}\bigg[T_{\perp}\left(1-\sin^2(2\theta)\sin^2\left(\frac{\Gamma}{2}\right)\right) \nonumber\\
&+ T_{\parallel}\sin^2(2\theta)\sin^2\left(\frac{\Gamma}{2}\right)\bigg]
\end{align}
Again, we will approximate this expression for small $\Gamma$; additionally, it is worth rewriting this expression in terms of a measurable quantity, the analyzer's extinction ratio, $\sigma^2 \triangleq \frac{T_{\perp}}{T_{\parallel}}$, thus
\begin{equation}
P_\mathrm{ext} \approx T_{\parallel}T_{2}P_\mathrm{cav}\left[\sigma^2\left(1-\sin^2(2\theta)\frac{\Gamma^2}{4}\right) + \sin^2(2\theta)\frac{\Gamma^2}{4}\right] \label{eqn-Pext_small_Gamma}
\end{equation}
Again we set the polarization at an angle $\theta = \frac{\pi}{4}$ with respect to the magnetic field. In this case \eqref{eqn-Pext_small_Gamma} becomes
\begin{align}
P_\mathrm{ext} &\approx T_{\parallel}T_{2}P_\mathrm{cav}\left[\sigma^2\left(1-\frac{\Gamma^2}{4}\right) + \frac{\Gamma^2}{4}\right]\nonumber\\
&\approx T_{\parallel}T_{2}P_\mathrm{cav}\left[\sigma^2 + \frac{\Gamma^2}{4}\right],
\label{eqn-Pext_small_Gamma_45deg}
\end{align}
Here we note again that $\sigma^2$ and $\Gamma^2$ are both much less than $1$. Similarly, $\frac{\Gamma^2}{4}$ is the fractional power moved from the $\uvec{p}$ to the $\uvec{n}$ polarization state. Here $\sigma^2$ is the fraction of power leaking from the undesired polarization state $(\uvec{p})$ into the $P_\mathrm{ext}$ signal.
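To get a feel for the relative sizes of the two terms in \eqref{eqn-Pext_small_Gamma_45deg}, one can insert, for illustration, the extinction ratio and static cavity birefringence reported in section \ref{sec:results} ($\sigma^2\approx2\times10^{-7}$, $\gamma_\mathrm{c}\approx1.4\times10^{-8}\,\mathrm{rad}$) together with $\mathcal{F}\approx4.4\times10^{5}$:
\begin{equation}
\sigma^2 \approx 2\times10^{-7}, \qquad \frac{\Gamma^2}{4} \approx \frac{1}{4}\left(\frac{\mathcal{F}}{\pi}\gamma_\mathrm{c}\right)^2 \approx 1\times10^{-6},
\end{equation}
so, in the absence of an applied field, the static cavity birefringence rather than the finite polarizer extinction dominates the light reaching the extinction detector.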
\section{Apparent Birefringence Noise}
\label{sec:apparent_birefringence_noise}
When trying to measure an effect as small as the vacuum Cotton-Mouton effect it is necessary to understand and characterize the competing noise sources. In our case, the signal is measured as the change in power $(P_\mathrm{ext})$ incident on the extinction channel photodetector $(\mathrm{PD}_\mathrm{ext})$. Power fluctuations from sources other than vacuum birefringence are indistinguishable from the power variation resulting from a vacuum-birefringence induced ellipticity.
\subsection{The signal as variation in laser power}
We start the noise investigation by first examining how a time-varying round-trip cavity birefringence, $\delta{\gamma}$, produces a time-varying power, $\delta{P_\mathrm{ext}}$, incident on photodetector $\mathrm{PD_{ext}}$. Once we see how the $\delta{\gamma}$ couples to $\delta{P_\mathrm{ext}}$ we can see how unwanted fluctuations in power, $\delta{P_\mathrm{noise}}$, masquerade as an apparent fluctuation in birefringence, $\delta\gamma_\mathrm{noise}$.
For the case of aligning the laser field's input polarization to $45^{\circ}$ with respect to the slow axis, we can fully write the power incident on the photodetector $\mathrm{PD_{ext}}$ as a function of a dynamical input power, $P_\mathrm{in}$, viewed as amplitude sidebands filtered by the cavity filter, $\mathrm{F}_\mathrm{cav}$, and the round trip differential phase accumulation, $\gamma$, enhanced by the cavity finesse, $\mathcal{F}$:
\begin{equation}
P_\mathrm{ext}= T_{\parallel}\left(\mathrm{F}_\mathrm{cav}\sigma^2+\mathrm{F}_\mathrm{cav}^{2}\frac{\mathcal{F}^2}{\pi^2}\frac{\gamma^2}{4}\right)T_{2}T_{1}\frac{\mathcal{F}^2}{\pi^2}P_\mathrm{in}. \label{eqn-Pext_of_Pin}
\end{equation}
Many noise sources, such as beam motion and laser frequency noise, interfere with the coupling of the laser field into the cavity mode. Since the cavity power couples to the output power through the (stable) transmission constant of the end mirror, $T_{2}$, it is more practical to write the signal in terms of the cavity power:
\begin{equation}
P_\mathrm{ext} = T_{\parallel}\left(\sigma^2+\mathrm{F}_\mathrm{cav}\frac{\mathcal{F}^2}{\pi^2}\frac{\gamma^2}{4}\right)T_{2}P_\mathrm{cav} \label{eqn-Pext_of_Pcav}
\end{equation}
Power fluctuations vary around a mean DC power on the detector, $\bar{P}_\mathrm{ext}$. At DC the cavity filter is $\mathrm{F}_\mathrm{cav}=1$. In terms of the average cavity power, $\bar{P}_\mathrm{cav}$, and the mean round-trip intracavity birefringence, $\gamma_{0}$:
\begin{align}
\bar{P}_\mathrm{ext} &= P_\mathrm{ext}(\gamma = \gamma_{0}; \mathrm{F}_\mathrm{cav}=1; P_\mathrm{cav}=\bar{P}_\mathrm{cav})\nonumber\\
&=T_{\parallel}\left(\sigma^2+\frac{\mathcal{F}^2}{\pi^2}\frac{\gamma_{0}^2}{4}\right)T_{2}\bar{P}_\mathrm{cav} \label{eqn-Pext_ave}
\end{align}
Next, we examine the behavior around this mean by expanding the power \eqref{eqn-Pext_of_Pcav} as a function of $\gamma$ about the bias point $\gamma_{0}$:
\begin{equation}
P_\mathrm{ext}(\gamma) = P_\mathrm{ext}(\gamma_{0})+\frac{1}{1!}\frac{\mathop{}\!\mathrm{d} P_\mathrm{ext} }{\mathop{}\!\mathrm{d} \gamma}\bigg|_{\substack{\gamma_{0}}}(\gamma-\gamma_{0}) + \mathcal{O}
\end{equation}
In the case of pulsed field magnets, we are interested in the dynamical birefringence. We therefore want to rewrite this in terms of $\delta{P_\mathrm{ext}} = P_\mathrm{ext}(\gamma) - P_\mathrm{ext}(\gamma_{0})$ and $\delta{\gamma} = \gamma-\gamma_{0}$. Making the assumption that the expected vacuum birefringence is much less than the static birefringence of the mirrors ($\delta{\gamma} \ll \gamma_{0}$), we can neglect $\mathcal{O}$ and this expression becomes
\begin{equation}
\delta{P_\mathrm{ext}} = \frac{\mathop{}\!\mathrm{d} P_\mathrm{ext} }{\mathop{}\!\mathrm{d} \gamma}\bigg|_{\substack{\gamma_{0}}} \delta{\gamma} = T_{\parallel}\left(2 \mathrm{F}_\mathrm{cav}\frac{\mathcal{F}^2}{\pi^2}\frac{\gamma_{0}}{4}\right)T_{2}P_\mathrm{cav}\delta{\gamma} \label{eqn-P_expanded}
\end{equation}
We normalize this expression \eqref{eqn-P_expanded} by dividing by the average power \eqref{eqn-Pext_ave} to find the relative power change due to $\delta{\gamma}$:
\begin{equation}
\frac{\delta{P_\mathrm{ext}}}{\bar{P}_\mathrm{ext}} = \mathrm{F}_\mathrm{cav}\frac{2\gamma_{0}}{\frac{4\pi^2}{\mathcal{F}^2}\sigma^2+\gamma_{0}^2}\delta{\gamma} \label{eqn-RPN_of_gamma}
\end{equation}
Conversely, we can write the apparent birefringence fluctuations as a function of the relative power fluctuations. Since we are concerned with the spectra of the signals, we switch to the convention of writing the linear spectral density (LSD) of a fluctuating quantity $\delta{g}$ as $\tilde{g}$:
\begin{equation} \tilde{\gamma}=\frac{1}{\mathrm{F}_\mathrm{cav}}\frac{\frac{4\pi^2}{\mathcal{F}^2}\sigma^2+\gamma_{0}^2}{2\gamma_{0}}\frac{\tilde{P}_\mathrm{ext}}{\bar{P}_\mathrm{ext}}. \label{eqn-gamma_of_rpn}
\end{equation}
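As a rough calibration of \eqref{eqn-gamma_of_rpn} at frequencies well below the cavity pole (where $\mathrm{F}_\mathrm{cav}\approx1$), inserting the illustrative values $\sigma^2\approx2\times10^{-7}$, $\gamma_0\approx1.4\times10^{-8}\,\mathrm{rad}$ and $\mathcal{F}\approx4.4\times10^{5}$ reported in section \ref{sec:results} gives
\begin{equation}
\tilde{\gamma}\approx 8\times10^{-9}\,\mathrm{rad}\times\frac{\tilde{P}_\mathrm{ext}}{\bar{P}_\mathrm{ext}},
\end{equation}
so a relative power fluctuation of only $10^{-6}\,/\sqrt{\mathrm{Hz}}$ on the extinction channel already mimics an apparent birefringence of roughly $8\times10^{-15}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}}$, of the same order as the expected vacuum-induced round-trip phase \eqref{eqn-gamma_vac}.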
\subsection{Intracavity power noise}
Fluctuations in the intracavity laser field couple to apparent birefringence noise directly. Here we will refer to unsuppressed intracavity power noise as residual (res) power noise.
We are interested in the power fluctuations measured at $\mathrm{PD}_\mathrm{ext}$, $\tilde{P}_\mathrm{ext:res}$, due to intracavity power fluctuations, $\tilde{P}_\mathrm{cav}$, alone. At some static round-trip cavity birefringence, $\gamma_0$ we see
\begin{equation}
\tilde{P}_\mathrm{ext:res}=T_{\parallel}\left(\sigma^2+\mathrm{F}_\mathrm{cav}\frac{\mathcal{F}^2}{\pi^2}\frac{\gamma_{0}^2}{4}\right)T_{2}\tilde{P}_\mathrm{cav}
\end{equation}
We can then compute the coupling of intracavity residual power noise to apparent birefringence noise using the relationship \eqref{eqn-gamma_of_rpn}, giving:
\begin{equation}
\tilde{\gamma}_\mathrm{res}=\frac{1}{\mathrm{F}_\mathrm{cav}}\frac{2\pi^2\sigma^2}{\mathcal{F}^2\gamma_0}\frac{\tilde{P}_\mathrm{cav}}{\bar{P}_\mathrm{cav}} + \frac{\gamma_0}{2}\frac{\tilde{P}_\mathrm{cav}}{\bar{P}_\mathrm{cav}}\label{eqn:gamma_residual}
\end{equation}
We see from the two terms in \eqref{eqn:gamma_residual} that unsuppressed laser power noise couples into our signal through two mechanisms. The first is leakage of the undesired polarization state $\uvec{p}$ through an imperfect extinction ratio, $\sigma^2$, of the analyzer. The second is the coupling of laser power into the signal's polarization state, $\uvec{n}$, via a static birefringence of the cavity, $\gamma_{0}$.
\subsection{NEP noise}
The photodetector has several inherent sources of detection noise. Typical sources of electrical noise include dark-current shot noise, photo-current shot noise, Johnson noise of the resistor, and also (in the case of amplified detectors) amplifier electronic noise. The photodetector manufacturer usually calculates and measures the sum of the detector's intrinsic noise sources and quotes it as the equivalent optical power noise at the detector input. This value is called the photodetector's Noise Equivalent Power (NEP) and is given in units of $[\frac{\mathrm{W}}{\mathrm{\sqrt{Hz}}}]$.
This relative power noise at $\mathrm{PD_{ext}}$ is simply the extinction photodetector's NEP, $\tilde{P}_\mathrm{NEP}$, divided by the average incident power, $\bar{P}_\mathrm{ext}$, from which we can calculate the expected contribution to apparent birefringence noise:
\begin{equation}
\tilde{\gamma}_\mathrm{NEP}=\frac{1}{\mathrm{F}_\mathrm{cav}}\frac{2\pi^2\tilde{P}_\mathrm{NEP}}{\gamma_0\mathcal{F}^2 T_\parallel T_2 \bar{P}_\mathrm{cav}}
\end{equation}
\subsection{Shot noise}
Shot noise, or photon counting noise, is the name given to the fluctuation in the rate of photons incident on a photon detector. The average laser power $\bar{P}\,[\mathrm{W}]$ is given by the product of the single photon energy, $\frac{hc}{\lambda}\,[\mathrm{J}]$, and the average rate of photons counted, $\bar{N}_{\gamma}\,[\frac{1}{\mathrm{s}}]$; equivalently, the average photon rate is given by:
\begin{equation}
\bar{N}_\mathrm{\gamma} = \frac{\bar{P}}{\frac{hc}{\lambda}}
\end{equation}
The variance of counting is determined by Poissonian statistics. The standard deviation of this distribution is given by the square root of the average rate, $\sigma_{\gamma}=\sqrt{\bar{N}_{\gamma}}$. A fluctuation in photon incidence rate is, at its core, a fluctuation in laser power. The single-sided linear spectral density of power fluctuations is given by:
\begin{equation}
\tilde{P}_\mathrm{shot} = \sqrt{2}\frac{hc}{\lambda}\sqrt{\bar{N}_{\gamma}} = \sqrt{2\frac{hc}{\lambda}\bar{P}}
\end{equation}
We can then compute the expected shot noise contribution to be
\begin{equation}
\tilde{\gamma}_\mathrm{shot}=\frac{1}{\mathrm{F}_\mathrm{cav}}\frac{\pi}{\mathcal{F}} \sqrt{ \frac{2hc}{T_\parallel \lambda T_2 \bar{P}_\mathrm{cav}}\left( \frac{4\pi^2\sigma^2}{\mathcal{F}^2\gamma_{0}^{2}}+1 \right) }
\end{equation}
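As a numerical anchor (the absolute power reaching $\mathrm{PD}_\mathrm{ext}$ is not needed for the argument, so we take a purely illustrative $\bar{P}=1\,\mathrm{\mu W}$ at $\lambda=1064\,\mathrm{nm}$, i.e. a photon energy $\frac{hc}{\lambda}\approx1.9\times10^{-19}\,\mathrm{J}$):
\begin{equation}
\tilde{P}_\mathrm{shot}=\sqrt{2\times1.9\times10^{-19}\,\mathrm{J}\times10^{-6}\,\mathrm{W}}\approx6\times10^{-13}\,\frac{\mathrm{W}}{\sqrt{\mathrm{Hz}}},
\end{equation}
corresponding to a relative power noise of about $6\times10^{-7}\,/\sqrt{\mathrm{Hz}}$; the relative shot noise, and hence its apparent birefringence contribution, decreases as $1/\sqrt{\bar{P}}$.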
\subsection{Digitization noise}
Voltage noise inserted between the photodetector and analog-to-digital converter (ADC) appears directly as laser power noise, mitigated by the response of the detector. At the data acquisition card the voltage recorded is proportional to the laser power incident on the photodetector, with the constants of proportionality being the photodiode's responsivity $R_\mathrm{PD}$, and the detector's transimpedance gain, $G_\mathrm{PD}$. We can write the apparent laser power noise due to an ADC noise, $\tilde{V}_\mathrm{ADC}$:
\begin{equation}
\tilde{\mathrm{P}}_\mathrm{ADC} = \frac{1}{G_\mathrm{PD}R_\mathrm{PD}}\tilde{V}_\mathrm{ADC}
\end{equation}
In an ideal ADC there is still an inherent noise to digitizing an analog signal. This is the result of recording a continuous signal at specific instants separated by the sampling time $T_\mathrm{s} = \frac{1}{f_\mathrm{s}}$ and with an amplitude recorded at discrete `counts' at integer multiples of the least significant bit voltage,
\begin{equation}
V_\mathrm{LSB} = \frac{\Delta{V}_\mathrm{ADC}}{2^{N}},
\end{equation}
for an ADC with an input voltage range of $\Delta{V}_\mathrm{ADC}$ and a resolution of $N$ effective bits. The LSD of the voltage noise from digitization is
\begin{equation}
\tilde{V}_\mathrm{ADC} = \sqrt{\tilde{S}_\mathrm{ADC}} = \frac{V_\mathrm{LSB}}{\sqrt{6 f_\mathrm{s}}} \label{eqn-LSD_digi_noise}
\end{equation}
We can thus write the apparent birefringence noise expected due to the signal quantization of the extinction channel photodetector with an ideal ADC as
\begin{equation}
\tilde{\gamma}_\mathrm{ADC}=\frac{1}{\mathrm{F}_\mathrm{cav}}\sqrt{\frac{2}{3 f_\mathrm{s}}}\frac{2^{-N}\pi^2\Delta{V}_\mathrm{ADC}}{G_\mathrm{PD}R_\mathrm{PD}\gamma_0\mathcal{F}^2 T_\parallel T_2 \bar{P}_\mathrm{cav}}
\end{equation}
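For a concrete, and purely hypothetical, set of numbers one can take a $\Delta{V}_\mathrm{ADC}=10\,\mathrm{V}$ input range sampled at $f_\mathrm{s}=1\,\mathrm{MHz}$; these are not the actual BMV acquisition settings and serve only to illustrate the scaling of \eqref{eqn-LSD_digi_noise}:
\begin{align}
N=14:&\quad V_\mathrm{LSB}=\frac{10\,\mathrm{V}}{2^{14}}\approx0.6\,\mathrm{mV},\quad \tilde{V}_\mathrm{ADC}\approx0.25\,\frac{\mathrm{\mu V}}{\sqrt{\mathrm{Hz}}},\nonumber\\
N\approx10.6:&\quad V_\mathrm{LSB}=\frac{10\,\mathrm{V}}{1600}\approx6\,\mathrm{mV},\quad \tilde{V}_\mathrm{ADC}\approx2.6\,\frac{\mathrm{\mu V}}{\sqrt{\mathrm{Hz}}},\nonumber
\end{align}
showing the roughly one order of magnitude separating an ideal 14-bit converter from a converter with the $\Delta{V}_\mathrm{ADC}/1600$ resolution discussed in section \ref{sec:results}.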
\subsection{Laser frequency noise}
In a birefringent cavity, the resonance frequency depends on the polarization of the intracavity laser field~\cite{Berceau2012}. The difference between the resonance frequency for the ordinary beam and that for the extraordinary beam, $\delta \nu$, is proportional to $\gamma_{0}$. In BMV, the laser frequency is locked to the resonance of the ordinary beam, thus the extraordinary beam is slightly off resonance. The ratio between the intensity of extraordinary to ordinary light is~\cite{Berceau2012}
\begin{equation}
a = \frac{1}{1+\frac{4\mathcal{F}^2}{\pi^2}\sin^2(\gamma_{0})}.
\end{equation}
Small errors in the coupling of the laser to the longitudinal mode of the cavity can produce amplitude modulation in the extraordinary resonant field. If $\delta \nu_n$ is the cavity locking frequency noise, the $a$ ratio can be written as~\cite{Berceau2012}
\begin{equation}
a = \frac{1+\frac{4\mathcal{F}^2}{\pi^2}\sin^2(\frac{\pi}{\mathcal{F}}\frac{\delta \nu_n}{\mathrm{FWHM}})}{1+\frac{4\mathcal{F}^2}{\pi^2}\sin^2(\gamma_{0}-\frac{\pi}{\mathcal{F}}\frac{\delta \nu_n}{\mathrm{FWHM}})}
\end{equation}
Typically, $\frac{\pi}{\mathcal{F}}\frac{\delta \nu_n}{\mathrm{FWHM}} < \gamma_{0} \ll 1$ and therefore
\begin{equation}
a \approx 1-\frac{4\mathcal{F}^2}{\pi^2}\left(\gamma_{0}^2-2\gamma_{0}\frac{\pi}{\mathcal{F}}\frac{\delta \nu_n}{\mathrm{FWHM}}\right).
\end{equation}
This longitudinal mode mismatch appears as a fluctuation in birefringence:
\begin{equation}
\gamma_\mathrm{freq} \approx \frac{\mathcal{F}}{\pi}\gamma_0^2\frac{\delta \nu_n}{\mathrm{FWHM}}
\end{equation}
Applying some typical experimental parameter values ($\frac{\mathcal{F}}{\pi}= 10^5,\:\gamma_0=10^{-8}\,\mathrm{rad},\:\delta{\nu} = 1\,\frac{\mathrm{mHz}}{\sqrt{\mathrm{Hz}}},$ and $\mathrm{FWHM} =100\,\mathrm{Hz}$~\cite{Berceau2012,Cantatore1995}), we expect this to produce birefringence noise on the order of $\gamma_\mathrm{freq}\approx 10^{-17}\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}}$, $2$ orders of magnitude below the expected level of vacuum birefringence.
\section{Results}
\label{sec:results}
\begin{figure}
\begin{center}
\includegraphics[width=0.71\columnwidth]{portrait_bmv_diagram_sans2ndpwrstab.eps}
\end{center}
\vspace{-6pt}
\caption[Simplified illustration of the BMV experiment.]{Simplified illustration of the BMV experiment.}
\label{fig:portrait_bmv_diagram_sans2ndpwrstab}
\end{figure}
Here we compare the measured birefringence fluctuations in BMV to the correlated measurements and models of sensing noise outlined in this paper. The layout of the BMV experiment is shown in figure \ref{fig:portrait_bmv_diagram_sans2ndpwrstab}. The input optics start with a $0.5\,\mathrm{W},\:\lambda=1064\,\mathrm{nm}$ Nd:YAG laser. The beam passes through an acousto-optic modulator (AOM) for power and frequency actuation before being sent through a single-mode fiber. The resulting beam is phase-modulated at $\Omega_\mathrm{EOM}=10\,\mathrm{MHz}$ in an electro-optic modulator (EOM) to apply frequency sidebands. The carrier and frequency sidebands are sent through a Faraday isolator, where a pickoff after the isolator sends light to photodetector PD-inj, used in-loop, feeding back to the AOM for power stabilization.
The main polarimeter optics employ a $\sigma^2=2\times10^{-7}$ polarizer after which $\approx 20\,\mathrm{mW}$ of laser power is incident on the $\mathcal{F} = 440\:000$ cavity. The cavity reflected light is diverted to the PD-r by the Faraday isolator where the photodetector's RF signal is demodulated with $\Omega_\mathrm{EOM}$ to produce the error signal in a feedback loop to actuate the laser frequency, keeping it tuned to the cavity's resonant frequency in an implementation of the Pound-Drever-Hall (PDH) technique~\cite{Black2001}. The cavity transmitted light is passed through a matched polarizer (the `analyzer') to analyze polarization changes into laser power changes. The transmission channel is detected at PD-t to measure common fluctuations in the intracavity field; the spectrum of intracavity power noise is plotted in figure \ref{fig:dsp_0003_intracavityLSD}. Finally, the extinction channel is measured by PD-ext, monitoring changes in the polarization of the laser field.
We present in figure \ref{fig:dsp_0003} the measured birefringence spectrum along with the modeled sensing noise components. The linear spectral density of the measured intracavity birefringence fluctuations in radians per square root of hertz is plotted in {\em cyan}. In {\em magenta} is the measured noise contribution due to intracavity power noise coupling into the main signal polarization state through the cavity static birefringence, $\gamma_0$. The coupling of intracavity power noise by leakage through the analyzer's extinction ratio, $\sigma^2$, is plotted in {\em yellow}. These two noise models are derived from correlated measurements of the intracavity laser power (FIG. \ref{fig:dsp_0003_intracavityLSD}) and measurements of the respective optical parameters. Noise due to the extinction channel photodetector electronic noise (NEP) is plotted in {\em grey}; BMV uses the low noise Femto OE-200-IN1 photoreceiver with an NEP $\approx 9\,\frac{\mathrm{fW}}{\sqrt{\mathrm{Hz}}}$. The contribution from shot noise is plotted in {\em red}. The spectrum of voltage noise on the Hioki digital oscilloscope used for data acquisition (DAQ) was taken by measuring the input shorted with a $50\,\mathrm{ohm}$ terminator; the equivalent birefringence spectral noise density due to this DAQ noise is plotted in {\em blue}. The specification for the Hioki DAQ gives the voltage resolution as $V_\mathrm{LSB} = \frac{\Delta{V}_\mathrm{ADC}}{1600}$, compatible with the measured noise. This equates to roughly $N=10.6$ effective bits. For a point of reference, the calculated noise contribution from an ideal 14-bit ADC is plotted in {\em green}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{dsp_0003_intracavityLSD.eps}
\end{center}
\caption[Linear spectral density of the intracavity laser power.]{Linear spectral density of the intracavity laser power. }
\label{fig:dsp_0003_intracavityLSD}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{dsp_0003.eps}
\end{center}
\caption[The BMV noise budget.]{The BMV noise budget. The measured sensitivity (cyan) is compared to the modeled noise sources, discussed in the text.}
\label{fig:dsp_0003}
\end{figure}
The data set shown was taken after rotating the cavity mirrors to align the optical axis of the cavity to the polarization of the laser field, minimizing the ellipticity induced by the cavity. The mean intracavity round-trip birefringence was measured to be $\gamma_0 = 1.4\times10^{-8}\,\mathrm{rad}$. This effort is rewarded with a significantly reduced contribution from residual laser power noise coupling through the $\gamma_0$ channel, although this remains the largest modeled noise term at low frequencies (below $100\,\mathrm{Hz}$). With a low light level on the extinction detector, PD-ext, shot noise is a large contribution to the total sensing noise, and becomes the limiting noise source at high frequencies.
\section{Discussion}
\label{sec:discussion}
In this paper we modeled and measured various sources of sensing noise and showed how they appear as birefringence noise, limiting the sensitivity of cavity enhanced polarimeters. From this we can differentiate actual cavity birefringence fluctuations from sensing noise. Here we discuss possible causes of dynamical cavity birefringence and compare the phase sensitivity of polarimeters to state of the art separate-arm interferometers, namely the LIGO gravitational-wave detector, in the context of measuring vacuum birefringence.
\subsection{Dynamical cavity birefringence}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{dsp_0003_residual.eps}
\end{center}
\caption[Measured residual birefringence in the BMV polarimeter.]{Measured residual birefringence in the BMV polarimeter.}
\label{fig:dsp_0003_residual}
\end{figure}
As was detailed in the previous sections, the measured sensitivity can be modeled only partially by the sensing noise. Figure \ref{fig:dsp_0003_residual} shows the actual dynamical birefringence of the cavity as the difference between the measured birefringence and the correlated measurements of sensing noise. This residual noise is calculated by subtracting the time series of the auxiliary measured channels (scaled by the appropriate measured optical parameters: $\sigma^2$, $\gamma_0$, and $\mathcal{F}$, as outlined in section \ref{sec:apparent_birefringence_noise}) from the time series of the measured birefringence. The plot shows that the linear spectral density of the resulting time series is at the level of $10^{-12}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}}$ for a large part of the frequency spectrum. This source of cavity birefringence noise is of import to cavity-enhanced polarimetry in general. The PVLAS experiment proposes to check temperature effects on cavity mirror birefringence~\cite{DellaValle2016} and has already measured the effect of intracavity beam motion in their experiment, finding it not to be their dominant noise source~\cite{DellaValle2016}. As it has a different geometry and measurement frequency, these potential noise sources must be characterized in the BMV apparatus as well. We discuss them here, briefly.
A potential source of dynamical birefringence in a Fabry-Perot cavity is the motion of the laser beam on the surface of the cavity mirrors. Dielectric mirrors have been shown to have an inherent anisotropy~\cite{Bielsa2009}. The phase shift induced on an incident polarized laser beam may depend on the point of the mirror surface reflecting the light. Thus, if the pointing of the laser beam fluctuates with respect to the mirror position, a fluctuation in birefringence could be observed. While studies of the positional dependence of mirror birefringence have been limited, a two-dimensional surface characterization of this effect has been reported~\cite{Micossi1993}. As, to the best of our knowledge, this is the only characterization of this kind, we use it as a guide to estimate an order of magnitude of birefringence noise. The referenced chart indicates that a translation of $1\,\mathrm{mm}$ results in about a $1\%$ variation in $\gamma$. Following ref.~\cite{Bielsa2009}, high finesse dielectric mirrors have shown a birefringence per reflection on the order of $10^{-6}\,\mathrm{rad}$. Assuming a linear response, a translation noise of about $100\,\frac{\mathrm{nm}}{\sqrt{\mathrm{Hz}}}$ would be needed to reproduce our measured cavity birefringence noise, a value compatible with the mechanical motion measured in commercial kinematic mirror mounts (see e.g.~\cite{Hartman2014}). Furthermore, fluctuations in the incidence angle must be taken into consideration, as a non-zero incidence angle produces a differential phase shift between the polarization states that is proportional to the square of the angle of incidence~\cite{Bouchiat1982}. To evaluate the noise induced by fluctuations in the incidence angle, $\theta$, let us recall that the corresponding birefringence per reflection can be written as $\gamma_\theta = \gamma_{i}\theta^2$~\cite{Carusotto1989}. Again, to induce the noise level that we observe, an angle of about $10^{-3}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}}$ would be necessary. In a $2\,\mathrm{m}$ cavity this would translate to a linear shift of $2\,\frac{\mathrm{mm}}{\sqrt{\mathrm{Hz}}}$, which is much larger than the translational estimate above.
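Making the translational scaling explicit: with $\gamma_i\approx10^{-6}\,\mathrm{rad}$ and a $1\%$ variation per $\mathrm{mm}$, the positional gradient is of order
\begin{equation}
\frac{\mathop{}\!\mathrm{d}\gamma}{\mathop{}\!\mathrm{d} x}\approx10^{-8}\,\frac{\mathrm{rad}}{\mathrm{mm}},\qquad \delta\gamma\approx\frac{\mathop{}\!\mathrm{d}\gamma}{\mathop{}\!\mathrm{d} x}\,\delta x\approx10^{-8}\,\frac{\mathrm{rad}}{\mathrm{mm}}\times10^{-4}\,\frac{\mathrm{mm}}{\sqrt{\mathrm{Hz}}}\approx10^{-12}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}},
\end{equation}
which is how the $100\,\frac{\mathrm{nm}}{\sqrt{\mathrm{Hz}}}$ figure quoted above is obtained.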
Thermal effects on the reflective surface of the cavity mirrors are another potential source of intracavity mirror birefringence noise. Photon absorption produces localized temperature changes on the mirror surface, changing the optical path length experienced as the laser passes through the outer coating layers. The phase retardation between the laser field's polarization states should change proportionally, but this phenomenon has yet to be measured. Variations in power incident on the cavity mirrors translate into temperature noise and potentially into birefringence noise. One can naively estimate this birefringence variation by assuming that $\frac{\delta\gamma_\mathrm{i}}{\gamma_i} \approx \frac{\delta L}{L_0}$, where $L_0$ represents the mirror layer thickness. $\delta{L}$ can be written as $\alpha \delta{T} L_0$, where $\delta{T}$ is the variation of the mirror temperature and $\alpha$ the thermal expansion coefficient. Taking $L_0 \approx \lambda \approx 1\,\mathrm{\mu m}$, $\gamma_i \approx 10^{-6}\,\mathrm{rad}$, and $\alpha \approx 10^{-6}\,\mathrm{K}^{-1}$~\cite{Evans2008}, to explain our measured intracavity birefringence (fig. \ref{fig:dsp_0003_residual}) one would need a $\delta T$ of the order of $1\,\frac{\mathrm{K}}{\sqrt{\mathrm{Hz}}}$, an unrealistically high value in BMV's measurement range of $10-100\,\mathrm{Hz}$; however, this potentially describes a future fundamental noise floor resulting from the spectrum of heating due to intracavity shot noise, and warrants future investigation.
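Written out, the scaling used for this estimate reads
\begin{equation}
\delta\gamma_i \approx \gamma_i\,\alpha\,\delta T \approx 10^{-6}\,\mathrm{rad}\times10^{-6}\,\mathrm{K}^{-1}\times\delta T,
\end{equation}
so that reaching the observed level of $\sim10^{-12}\,\frac{\mathrm{rad}}{\sqrt{\mathrm{Hz}}}$ would indeed require temperature fluctuations of order $1\,\frac{\mathrm{K}}{\sqrt{\mathrm{Hz}}}$.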
\subsection{Comparison of BMV and LIGO phase sensitivity}
\label{sec:comparison_of_bmv_and_ligo_phase_sensitivity}
In contrast to differential-arm interferometry (such as a Michelson interferometer), a cavity-enhanced polarimetry measurement of vacuum birefringence has the advantage of being, to first order, insensitive to interferometric path length changes. As was outlined in the introduction, the polarimeter topology is restricted to the measurement of the difference between the indices of refraction perpendicular and parallel to a transverse magnetic field. This limits the measurable physics in testing non-linear electrodynamic theories \cite{Fouche2016}. Furthermore, as this paper has shown, the sensitivity of polarimeter searches is subject to fluctuations in the birefringence of the interferometer mirrors. As such, the measurement of vacuum polarization in a split-arm interferometer is of interest.
Large-scale, ultra-precise, resonantly enhanced Michelson interferometers, such as the LIGO interferometers\cite{ligo_instrument2015}, have been built for the observation of gravitational waves. For this purpose they seek to have the highest strain $(h=\frac{\delta L}{L_0})$ sensitivity to differential length changes, and have been designed with long baseline arm lengths $L_0=4\,\mathrm{km}$. One can imagine building a smaller scale interferometer more suited to the purpose of measuring magnetic-field-excited vacuum polarization along its baseline arm. However, gravitational-wave detectors make a good point of comparison for sensitivity to VMB measurements, being the most sensitive differential interferometers ever built, particularly in their most sensitive frequency band, $30\,\mathrm{Hz}-3\,\mathrm{kHz}$, which is compatible with characteristic frequencies of pulsed magnets, $30-200\,\mathrm{Hz}$.
To compare the instruments' sensitivities we state the aLIGO strain sensitivity, $\tilde{h}$, in terms of sensitivity to differential arm intracavity phase delay, $\widetilde{\delta{\phi}}$:
\begin{equation}
\widetilde{\delta{\phi}} = \frac{2\pi}{\lambda} L_0\tilde{h}
\end{equation}
where aLIGO uses the same $\lambda=1064\,\mathrm{nm}$ laser light as BMV. Using the strain sensitivity representative of the aLIGO Livingston observatory in 2015 \cite{Abbott2016, ligo_strain}, we plot the equivalent aLIGO phase sensitivity alongside the measured phase sensitivity of the BMV apparatus in figure \ref{fig:dsp_0003_LIGO_BMV_medres}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{dsp_0003_LIGO_BMV_medres.eps}
\end{center}
\caption[Comparison of phase sensitivities for current experiments in Michelson (aLIGO) and polarimeter (BMV) topologies.]{Comparison of phase sensitivities for current experiments in Michelson (aLIGO)\cite{Abbott2016, ligo_strain} and polarimeter (BMV) topologies.}
\label{fig:dsp_0003_LIGO_BMV_medres}
\end{figure}
With extraordinary investment and engineering \cite{ligo_instrument2015}, Michelson interferometers can exceed the current phase sensitivity achieved by the BMV polarimeter, an interesting notion for the future study of non-linear electrodynamics. In either case, the QED predicted additional accumulated phases parallel and perpendicular to the external magnetic field for $1\,\mathrm{\mu m}$ light through a region of $B^2 L = 100\,{\mathrm{T}^2\mathrm{m}}$ are
\begin{align}
\phi_\parallel &= \frac{2\pi}{\lambda}\frac{14\alpha^{2}\hbar^{3}}{45m_\mathrm{e}^{4}c^{5}}\frac{B^{2}}{\mu_0}L_B\approx5.5\times10^{-15}\,\mathrm{rad}\\
\phi_\perp &= \frac{2\pi}{\lambda}\frac{8\alpha^{2}\hbar^{3}}{45m_\mathrm{e}^{4}c^{5}}\frac{B^{2}}{\mu_0}L_B\approx3.1\times10^{-15}\,\mathrm{rad}
\end{align}
clearly calling for future improvements in both optical sensitivity and pulsed magnet technique. We note that the pulsed magnet signal has a predictable form, and that the signal to noise ratio is improved by making a correlated average of data taken over many pulses \cite{Cadene2014}.
\section{Conclusions}
\label{sec:conclusions}
We have modeled several noise sources for cavity-enhanced precision polarimetry vacuum birefringence searches. We found our sensing-noise budget to be compatible with the measured sensitivity of the BMV polarimeter. It offers an explanation of the measured sensitivity at high frequencies; however, it appears that the sensing noise cannot fully explain the measured birefringence. This suggests that, at low frequencies, BMV's sensitivity is limited by cavity mirror birefringence fluctuations. This noise (figure \ref{fig:dsp_0003_residual}) has a similar spectrum and is correlated in time with the measured intracavity power noise shown in figure \ref{fig:dsp_0003}, suggesting that both effects could have a common source, such as relative motion between the cavity laser mode and the position of the cavity mirrors.
Beam position and thermal effects are sources of birefringence noise which have not yet been studied experimentally. This work is required to better understand, and subsequently mitigate, their impact on the sensitivity of precision polarimetry experiments. Furthermore, one could circumvent such a noise source if the intrinsic birefringence of dielectric mirrors could be reduced during mirror fabrication. Unfortunately, the origin of birefringence in dielectric mirrors is still unexplained; thus, again, its experimental study is a necessary step in the advancement of the field of polarimetric vacuum birefringence searches.
The phase sensitivities reached by differential-arm interferometers and by VMB polarimeters are within an order of magnitude for much of their measurement bands. This is due to the significant effort and resources spent in the field of gravitational-wave detection to reduce displacement noise, and provides a technical case to accompany an already intriguing science case\cite{Fouche2016} for alternate vacuum polarization search topologies. For polarimeter searches, our study suggests that reaching the necessary sensitivity poses a different set of challenges, firstly that of overcoming cavity-mirror birefringence noise.
\section{Acknowledgments}
This research has been partially supported by ANR (Grant No. ANR-14-CE32-0006) in the framework of the ``Programme des Investissements d'Avenir''. We thank all the members of the BMV project and in particular M. Fouch\'e who greatly contributed in the recent years to the BMV experiment. We also acknowledge the contributions of J. Renoud and M. Humbert.
\section{Introduction}
Hopf cyclic cohomology was invented by Connes-Moscovici in 1998 \cite{ConnMosc98}. It is now beyond dispute
that this cohomology is a fundamental tool in noncommutative geometry. Admitting coefficients is one of the most significant properties of this theory \cite{HajaKhalRangSomm04-I,HajaKhalRangSomm04-II,JaraStef}. These coefficients are called stable-anti-Yetter-Drinfeld (SAYD) modules \cite{HajaKhalRangSomm04-I}.
\medskip
\noindent A ``geometric" Hopf algebra is a Hopf algebra associated to a (Lie) algebraic group or a Lie algebra via certain functors. Such Hopf algebras are defined as the representative (smooth, polynomial) functions on the object in question, as the universal enveloping algebra of the Lie algebra, or even as a bicrossed product of such Hopf algebras. The latter procedure is called semi-dualization. The resulting Hopf algebra via semi-dualization is usually neither commutative nor cocommutative \cite{Maji}.
\medskip
\noindent The study of SAYD modules over ``geometric" Hopf algebras begins in \cite{RangSutl}, where we proved that any representation of the Lie algebra induces a SAYD module over the associated Hopf algebra. Therefore those SAYD modules are called induced modules \cite{RangSutl}. We also proved that the Hopf cyclic cohomology of the associated Hopf algebra is isomorphic to the Lie algebra cohomology of the Lie algebra with coefficients in the original representation.
\medskip
\noindent In \cite{RangSutl-II}, the notion of SAYD modules over Lie algebras was defined and studied. It was observed that the corresponding cyclic complex has been known under different names for different SAYD modules. As the main example we proved that
the (truncated) polynomial algebra of a Lie algebra is a SAYD module. The corresponding cyclic complex is identified with the (truncated) Weil algebra \cite{RangSutl-II}. In the same paper we identified the category of SAYD modules over the enveloping algebra of a Lie algebra with that of SAYD modules over the Lie algebra itself.
\medskip
\noindent Let us recall the main result of \cite{RangSutl-II} as follows. For an arbitrary Lie algebra $\mathfrak{g}$, the comultiplication of $U(\mathfrak{g})$ does not use the Lie algebra structure of $\mathfrak{g}$. This fact has discouraged attention to comodules over $U(\mathfrak{g})$. It is shown that such comodules are in one-to-one correspondence with the nilpotent modules over the symmetric algebra $S(\mathfrak{g}^\ast)$. Using this fundamental fact we can identify AYD modules over $U(\mathfrak{g})$ with modules over the semi-direct product Lie algebra $\widetilde\mathfrak{g}=\mathfrak{g}^\ast>\hspace{-4pt}\vartriangleleft \mathfrak{g}$. Here $\mathfrak{g}^\ast=\mathop{\rm Hom}\nolimits(\mathfrak{g}, {\mathbb C})$ is considered to be a commutative Lie algebra and to be acted upon by $\mathfrak{g}$ via the coadjoint representation. We show that the notion of comodule over a Lie algebra makes sense. Furthermore, SAYD modules over Lie algebras and the cyclic cohomology of a Lie algebra with coefficients in such modules are defined. It is shown that SAYD modules over $U(\mathfrak{g})$ and over $\mathfrak{g}$ are in one-to-one correspondence, and their cyclic homologies are identified.
\medskip
\noindent Let $\mathfrak{g}=\mathfrak{g}_1\bowtie\mathfrak{g}_2$ be a bicrossed sum Lie algebra. Let us denote $R(\mathfrak{g}_2)$ and $U(\mathfrak{g}_1)$ by ${\cal F}$ and ${\cal U}$ respectively. Here $R(\mathfrak{g}_2)$ is the Hopf algebra of all representative functions on $\mathfrak{g}_2$, and $U(\mathfrak{g}_1)$ is the universal enveloping algebra of $\mathfrak{g}_1$. A module-comodule over ${\cal H}:={\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ is naturally a module-comodule over ${\cal U}$ and comodule-module over ${\cal F}$.
In \cite{RangSutl}, we completely determined those module-comodules whose ${\cal U}$-coaction and ${\cal F}$-action are trivial. It is proved that such a module-comodule is induced by a module over $\mathfrak{g}$ if and only if it is a YD module over ${\cal H}$.
\medskip
\noindent Continuing our study in \cite{RangSutl, RangSutl-II}, we completely determine the SAYD modules over the bicrossed product Hopf algebra ${\cal H}={\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. Roughly speaking, we show that SAYD modules over ${\cal H}$ and SAYD modules over $\mathfrak{g}$ are the same. We then take advantage of a spectral sequence in \cite{JaraStef} to prove a van Est isomorphism between the Hopf cyclic cohomology of ${\cal H}$ with coefficients in $~^\sigma{M}_\delta= M\otimes~ ^\sigma{\mathbb C}_\delta$ and the Lie algebra cohomology of $\mathfrak{g}$ relative to a Levi subalgebra with coefficients in a $\mathfrak{g}$-module $M$.
\medskip
\noindent One of the results of this paper concerns the SAYD modules over the Connes-Moscovici Hopf algebras ${\cal H}_n$. We know that ${\cal H}_n$ is the bicrossed product Hopf algebra of ${\cal F}(N)$ and $U(gl_n^{\rm aff})$ \cite{MoscRang09}. However, the group $N$ is not of finite type, so we cannot apply our theory freely to ${\cal H}_n$. We overcome this problem by carefully analyzing the SAYD modules over ${\cal H}_n$ to reduce the case to a finite type problem. As a result, we prove that ${\cal H}_n$ has no AYD module except the most natural one, ${\mathbb C}_\delta$, which was found by Connes-Moscovici in \cite{ConnMosc98}.
\medskip
\noindent To illustrate our theory in a nontrivial example we introduce a SAYD module over the Schwarzian Hopf algebra ${\cal H}_{1\rm S}$ introduced in \cite{ConnMosc98}. By definition, ${\cal H}_{1\rm S}$ is a quotient Hopf algebra of ${\cal H}_1$ by the Hopf ideal generated by
$$\delta_2-\frac{1}{2}\delta_1^2 .$$
Here $\delta_i$ are generators of ${\cal F}(N)$. So the Hopf algebra ${\cal H}_{\rm 1S}$ is generated by
$$X,\quad Y,\quad \delta_1$$
\medskip
\noindent As we know, ${\cal H}_{1\rm S}{^{\rm cop}}$ is isomorphic to $R({\mathbb C})\blacktriangleright\hspace{-4pt}\vartriangleleft U(gl_1^{\rm aff})$. So our theory guarantees that any suitable SAYD module $M$ over $sl_2=gl_1^{\rm aff}\bowtie {\mathbb C}$ will produce a SAYD module $M_\delta$ over ${\cal H}_{1\rm S}{^{\rm cop}}$. We take the truncated polynomial algebra $M=S(sl_2)\nsb{2}$ as our candidate. The resulting 4-dimensional SAYD module $M_\delta$ is then generated by
$$ {\bf 1}, \quad {\bf R}^X, \quad {\bf R}^Y,\quad {\bf R}^Z,$$
\noindent with the ${\cal H}_{1\rm S}{^{\rm cop}}$ action and coaction defined by
$$
\left.
\begin{array}{c|ccc}
\lhd & X & Y & \delta_1 \\[.2cm]
\hline
&&&\\[-.2cm]
{\bf 1} & 0 & 0 & {\bf R}^Z \\[.1cm]
{\bf R}^X & -{\bf R}^Y & 2{\bf R}^X & 0 \\[.1cm]
{\bf R}^Y & -{\bf R}^Z & {\bf R}^Y & 0 \\[.1cm]
{\bf R}^Z & 0 & 0 & 0 \\
\end{array}
\right.\qquad
\begin{array}{rl}
&\blacktriangledown: M_\delta \longrightarrow \mathcal{H}_{\rm 1S}{^{\rm cop}} \otimes M_\delta \\[.2cm]
\hline
&\\[-.2cm]
& {\bf 1} \mapsto 1 \otimes {\bf 1} + X \otimes {\bf R}^X + Y \otimes {\bf R}^Y \\
& {\bf R}^X \mapsto 1 \otimes {\bf R}^X \\
& {\bf R}^Y \mapsto 1 \otimes {\bf R}^Y + \delta_1 \otimes {\bf R}^X \\
& {\bf R}^Z \mapsto 1 \otimes {\bf R}^Z + \delta_1 \otimes {\bf R}^Y + \frac{1}{2}\delta_1^2 \otimes {\bf R}^X.
\end{array}
$$
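\medskip
\noindent As a quick consistency check, using only the two tables above, the stability condition $m\ns{0}\cdot m\ns{-1}=m$ can be verified by hand on the element ${\bf 1}$:
\begin{equation*}
{\bf 1}\cdot 1 + {\bf R}^X\lhd X + {\bf R}^Y\lhd Y = {\bf 1} - {\bf R}^Y + {\bf R}^Y = {\bf 1}.
\end{equation*}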
\noindent The surprises here are the nontriviality of the action of $\delta_1$ and the appearance of $X$ and $Y$ in the coaction. In other words this is not an induced module \cite{RangSutl}.
\medskip
\noindent We illustrate our results in this paper on this example. We then apply the machinery developed in \cite{MoscRang09} by Moscovici and one of the authors to prove that the following two cocycles generate the Hopf cyclic cohomology of ${\cal H}_{1\rm S}{^{\rm cop}}$ with coefficients in $M_\delta$.
\begin{align*}
&c^{\rm odd} = {\bf 1} \otimes \delta_1 + {\bf R}^Y \otimes X + {\bf R}^X \otimes \delta_1X + {\bf R}^Y \otimes \delta_1Y + 2 {\bf R}^Z \otimes Y , \\[.4cm]
& c^{{\rm even}} = {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y - {\bf R}^X \otimes XY \otimes X \\
& - {\bf R}^X \otimes Y^2 \otimes \delta_1X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y + {\bf R}^Y \otimes Y^2 \otimes \delta_1Y \\
& + {\bf R}^Y \otimes X \otimes Y^2 + {\bf R}^Y \otimes Y \otimes \delta_1Y^2 - {\bf R}^Y \otimes Y \otimes X - {\bf R}^X \otimes XY^2 \otimes \delta_1 \\
& - \frac{1}{3}{\bf R}^X \otimes Y^3 \otimes {\delta_1}^2 + \frac{1}{3} {\bf R}^Y \otimes Y^3 \otimes \delta_1 - \frac{1}{4} {\bf R}^X \otimes Y^2 \otimes {\delta_1}^2 - \frac{1}{2} {\bf R}^Y \otimes Y^2 \otimes \delta_1.
\end{align*}
\noindent As can be seen by inspection, the expressions of the above cocycles cannot easily be found by hand. It is the machinery of \cite{MoscRang09} mentioned above that allows one to arrive at these elaborate formulae.
\tableofcontents
\section{Matched pair of Lie algebras and SAYD modules over double crossed sum Lie algebras}
In this section, matched pairs of Lie algebras and their bicrossed sum Lie algebras are reviewed. We also recall the double crossed product of Hopf algebras from \cite{Maji}. Next, we provide a brief account of SAYD modules over Lie algebras from \cite{RangSutl-II}. Finally we investigate the relation between SAYD modules over the double crossed sum Lie algebra of a matched pair of Lie algebras and SAYD modules over the individual Lie algebras.
\subsection{Matched pair of Lie algebras and mutual pair of Hopf algebras}
Let us recall the notion of matched pair of Lie algebras from \cite{Maji}. A pair of Lie algebras $(\mathfrak{g}_1, \mathfrak{g}_2)$ is called a matched pair if there
are linear maps
\begin{equation}\label{g-1-g-2 action}
\alpha: \mathfrak{g}_2\otimes \mathfrak{g}_1\rightarrow \mathfrak{g}_2, \quad \alpha_X(\zeta)=\zeta\triangleleft X, \quad \beta:\mathfrak{g}_2\otimes \mathfrak{g}_1\rightarrow \mathfrak{g}_1, \quad \beta_\zeta(X)=\zeta\triangleright X,
\end{equation}
\noindent satisfying the following conditions,
\begin{align}\label{mp-L-1}
&[\zeta,\xi]\triangleright X=\zeta\triangleright(\xi\triangleright X)-\xi\triangleright(\zeta\triangleright X),\\\label{mp-L-2}
& \zeta\triangleleft[X, Y]=(\zeta\triangleleft X)\triangleleft Y-(\zeta\triangleleft Y)\triangleleft X, \\\label{mp-L-3}
&\zeta\triangleright[X, Y]=[\zeta\triangleright X, Y]+[X,\zeta\triangleright Y] +
(\zeta\triangleleft X)\triangleright Y-(\zeta\triangleleft Y)\triangleright X, \\\label{mp-L-4}
&[\zeta,\xi]\triangleleft X=[\zeta\triangleleft X,\xi]+[\zeta,\xi\triangleleft X]+ \zeta\triangleleft(\xi\triangleright X)-\xi\triangleleft(\zeta\triangleright X).
\end{align}
\noindent Given a matched pair of Lie algebras $(\mathfrak{g}_1,\mathfrak{g}_2)$, one defines a double crossed sum Lie algebra
$\mathfrak{g}_1\bowtie \mathfrak{g}_2$. Its underlying vector space is $\mathfrak{g}_1\oplus\mathfrak{g}_2$ and its Lie bracket
is defined by:
\begin{equation}
[X\oplus\zeta, Z\oplus\xi]=([X, Z]+\zeta\triangleright Z-\xi\triangleright X)\oplus ([\zeta,\xi]+\zeta\triangleleft Z-\xi\triangleleft X).
\end{equation}
\noindent Both $\mathfrak{g}_1$ and $\mathfrak{g}_2$ are Lie subalgebras of $\mathfrak{g}_1\bowtie\mathfrak{g}_2$ via obvious inclusions. Conversely, if for a
Lie algebra $\mathfrak{g}$ there are two Lie subalgebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$ so that $\mathfrak{g}=\mathfrak{g}_1\oplus\mathfrak{g}_2$ as vector spaces, then
$(\mathfrak{g}_1, \mathfrak{g}_2)$ forms a matched pair of Lie algebras and $\mathfrak{g}\cong \mathfrak{g}_1\bowtie \mathfrak{g}_2$ as Lie algebras \cite{Maji}.
In this case, the actions of $\mathfrak{g}_1$ on $\mathfrak{g}_2$ and $\mathfrak{g}_2$ on $\mathfrak{g}_1$ for $\zeta\in \mathfrak{g}_2$ and $X\in\mathfrak{g}_1$ are uniquely determined by
\begin{equation}\label{lie-actions}
[\zeta,X]=\zeta\triangleright X+\zeta\triangleleft X, \qquad \zeta\in \mathfrak{g}_2, \quad X\in\mathfrak{g}_1.
\end{equation}
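\medskip
\noindent For instance, anticipating Example \ref{example-sl2} below, consider the decomposition $sl_2=\langle X,Y\rangle\bowtie\langle Z\rangle$ with $[Y,X]=X$, $[Z,X]=Y$ and $[Z,Y]=Z$. Splitting each bracket according to \eqref{lie-actions} into its $\mathfrak{g}_1$- and $\mathfrak{g}_2$-components yields
\begin{equation*}
Z\triangleright X=Y,\qquad Z\triangleleft X=0,\qquad Z\triangleright Y=0,\qquad Z\triangleleft Y=Z.
\end{equation*}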
\noindent Next, we recall the notion of double crossed product Hopf algebra. Let $({\cal U},{\cal V})$ be a pair of Hopf algebras such that ${\cal V}$ is a
right ${\cal U}$-module coalgebra and ${\cal U}$ is a left ${\cal V}$-module coalgebra. We call them a mutual pair if their actions satisfy the following conditions.
\begin{align}\label{mutual-1}
&v\triangleright(u^1u^2)= (v\ps{1}\triangleright u^1\ps{1})((v\ps{2}\triangleleft u^1\ps{2})\triangleright u^2),\quad 1\triangleleft u=\varepsilon(u)1,\\ \label{mutual-2}
&(v^1v^2)\triangleleft u= (v^1\triangleleft(v^2\ps{1}\triangleright u\ps{1}))(v^2\ps{2}\triangleleft u\ps{2}),\quad
v\triangleright 1=\varepsilon(v)1,\\\label{mutual-3}
&\sum v\ps{1}\triangleleft u\ps{1}\otimes v\ps{2}\triangleright u\ps{2}=\sum v\ps{2}\triangleleft u\ps{2}\otimes v\ps{1}\triangleright u\ps{1}.
\end{align}
\noindent Having a mutual pair of Hopf algebras, one constructs the double crossed product Hopf algebra ${\cal U} \bowtie {\cal V}$. As a coalgebra, ${\cal U} \bowtie {\cal V}$ is ${\cal U}\otimes{\cal V} $. However, its algebra structure is defined by the rule
\begin{equation}
(u^1 \bowtie v^1)(u^2 \bowtie v^2):= u^1(v^1\ps{1}\triangleright u^2\ps{1}) \bowtie (v^1\ps{2}\triangleleft u^2\ps{2})v^2,
\end{equation}
together with $1 \bowtie 1$ as its unit. The antipode of ${\cal U} \bowtie {\cal V}$ is defined by
\begin{equation}
S(u \bowtie v)=(1 \bowtie S(v))(S(u)\bowtie 1)= S(v\ps{1})\triangleright S(u\ps{1})\bowtie S(v\ps{2})\triangleleft S(u\ps{2}).
\end{equation}
\noindent It is shown in \cite{Maji} that if $\mathfrak{a}=\mathfrak{g}_1\bowtie\mathfrak{g}_2$ is a double crossed sum of Lie algebras, then the
enveloping algebras $(U(\mathfrak{g}_1),U(\mathfrak{g}_2))$ becomes a mutual pair of Hopf algebras. Moreover, $U(\mathfrak{a})$ and $ U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$ are isomorphic as Hopf algebras.
\medskip
\noindent In terms of the inclusions
\begin{equation}
i_1:U(\mathfrak{g}_1) \to U(\mathfrak{g}_1 \bowtie \mathfrak{g}_2) \quad \mbox{ and } \quad i_2:U(\mathfrak{g}_2) \to U(\mathfrak{g}_1 \bowtie \mathfrak{g}_2),
\end{equation}
\noindent the Hopf algebra isomorphism mentioned above is
\begin{equation}
\mu\circ (i_1 \otimes i_2):U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2) \to U(\mathfrak{a}).
\end{equation}
\noindent Here $\mu$ is the multiplication on $U(\mathfrak{a})$. We easily observe that there is a linear map
\begin{equation}
\Psi:U(\mathfrak{g}_2) \bowtie U(\mathfrak{g}_1) \to U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2),
\end{equation}
\noindent satisfying
\begin{equation}
\mu \circ (i_2 \otimes i_1) = \mu \circ (i_1 \otimes i_2) \circ \Psi\,.
\end{equation}
\noindent The mutual actions of $U(\mathfrak{g}_1)$ and $U(\mathfrak{g}_2)$ are defined as follows
\begin{equation}
\rhd := (\mathop{\rm Id}\nolimits_{U(\mathfrak{g}_1)} \otimes \varepsilon) \circ \Psi \quad \mbox{ and } \quad \lhd := (\varepsilon \otimes \mathop{\rm Id}\nolimits_{U(\mathfrak{g}_2)}) \circ \Psi\,.
\end{equation}
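\medskip
\noindent For instance, in the decomposition $sl_2=\langle X,Y\rangle\bowtie\langle Z\rangle$ of Example \ref{example-sl2} below, the relation $ZX=XZ+[Z,X]=XZ+Y$ in $U(sl_2)$ gives $\Psi(Z\otimes X)=X\otimes Z+Y\otimes 1$, and hence
\begin{equation*}
Z\rhd X=(\mathop{\rm Id}\nolimits\otimes\varepsilon)\Psi(Z\otimes X)=Y, \qquad Z\lhd X=(\varepsilon\otimes\mathop{\rm Id}\nolimits)\Psi(Z\otimes X)=0,
\end{equation*}
recovering the mutual actions of the underlying Lie algebras.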
\subsection{SAYD modules over double crossed sum Lie algebras}
We first review the Lie algebra coactions and SAYD modules over Lie algebras. To this end, let us first introduce the notion of comodule over a Lie algebra.
\begin{definition}
\cite{RangSutl-II}. A vector space $M$ is a left comodule over a Lie algebra $\mathfrak{g}$ if there is a map $\blacktriangledown_{\mathfrak{g}}: M \rightarrow \mathfrak{g} \otimes M, \quad m \mapsto m\nsb{-1}\otimes m\nsb{0}$ such that
\begin{equation}\label{g-comod}
m\nsb{-2}\wedge m\nsb{-1}\otimes m\nsb{0}=0,
\end{equation}
where $$m\nsb{-2}\otimes m\nsb{-1}\otimes m\nsb{0}= m\nsb{-1}\otimes (m\nsb{0})\nsb{-1}\otimes (m\nsb{0})\nsb{0}.$$
\end{definition}
\noindent By \cite[Proposition 5.2]{RangSutl-II}, corepresentations of a Lie algebra $\mathfrak{g}$ are nothing but the representations of the symmetric algebra $S(\mathfrak{g}^\ast)$. The most natural corepresentation of a Lie algebra $\mathfrak{g}$, with a basis $\Big\{X_1, \ldots ,X_N\Big\}$ and dual basis $\Big\{\theta^1,\ldots ,\theta^N\Big\}$, is $M = S(\mathfrak{g}^\ast)$ via $m \mapsto X_i \otimes m\theta^i$. This is called the Koszul coaction. The corresponding representation on $M = S(\mathfrak{g}^\ast)$ coincides with the initial multiplication of the symmetric algebra.
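\medskip
\noindent As a quick verification that the Koszul coaction satisfies \eqref{g-comod}, note that iterating $m \mapsto X_i \otimes m\theta^i$ gives $m\nsb{-2}\otimes m\nsb{-1}\otimes m\nsb{0}= X_i \otimes X_j \otimes m\theta^i\theta^j$, and hence
\begin{equation*}
m\nsb{-2}\wedge m\nsb{-1}\otimes m\nsb{0} = X_i\wedge X_j \otimes m\,\theta^i\theta^j = 0,
\end{equation*}
since $\theta^i\theta^j$ is symmetric in the indices $i,j$ while $X_i\wedge X_j$ is antisymmetric.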
\medskip
\noindent Next, let $\blacktriangledown_{\mathfrak{g}}:M \to \mathfrak{g} \otimes M$ be a left $\mathfrak{g}$-comodule structure on the linear space $M$. If the $\mathfrak{g}$-coaction is locally conilpotent, {\it i.e., }\ for any $m \in M$ there exists $n \in \mathbb{N}$ such that $\blacktriangledown^n_{\mathfrak{g}}(m) = 0$, then it is possible to construct a $U(\mathfrak{g})$-coaction $\blacktriangledown_{U}:M \to U(\mathfrak{g}) \otimes M$ on $M$, \cite[Proposition 5.7]{RangSutl-II}. Conversely, any comodule over $U(\mathfrak{g})$ yields a locally conilpotent comodule over $\mathfrak{g}$ via its composition with the canonical projection $\pi:U(\mathfrak{g})\rightarrow \mathfrak{g}$ as follows:
$$
\xymatrix {
\ar[dr]_{\blacktriangledown_{\mathfrak{g}}} M \ar[r]^{\blacktriangledown_U\;\;\;\;\;\;} & U(\mathfrak{g}) \otimes M \ar[d]^{\pi \otimes \mathop{\rm Id}\nolimits} \\
& \mathfrak{g} \otimes M
}
$$
\noindent We denote the category of locally conilpotent left $\mathfrak{g}$-comodules by $^{\mathfrak{g}}\rm{conil}{\cal M}$, and we have $^{\mathfrak{g}}\rm{conil}{\cal M} = \, ^{U(\mathfrak{g})}{\cal M}$, \cite[Proposition 5.8]{RangSutl-II}.
\begin{definition}
\cite{RangSutl-II}. Let $M$ be a right module and left comodule over a Lie algebra $\mathfrak{g}$. We call $M$ a right-left AYD module over $\mathfrak{g}$ if
\begin{equation}\label{AYD-condition}
\blacktriangledown_{\mathfrak{g}}(m \cdot X) = m\nsb{-1} \otimes m\nsb{0} \cdot X + [m\nsb{-1}, X] \otimes m\nsb{0}.
\end{equation}
Moreover, $M$ is called stable if
\begin{equation}\label{stability-condition}
m\nsb{0} \cdot m\nsb{-1} = 0.
\end{equation}
\end{definition}
\begin{example}\label{ex-1}\rm{
Let $\mathfrak{g}$ be a Lie algebra with a basis $\Big\{X_1, \ldots, X_N\Big\}$ and a dual basis $\Big\{\theta^1,\ldots,\theta^N\Big\}$, and $M = S(\mathfrak{g}^\ast)$ be the symmetric algebra of $\mathfrak{g}^\ast$. We consider the following action of $\mathfrak{g}$ on $S(\mathfrak{g}^\ast)$:
\begin{equation}\label{action-1}
S(\mathfrak{g}^\ast) \otimes \mathfrak{g} \rightarrow S(\mathfrak{g}^\ast), \quad m \otimes X \mapsto m \lhd X := -{\cal L}_X(m) + \delta(X)m
\end{equation}
Here, ${\cal L}:\mathfrak{g} \rightarrow \mathop{\rm End}\nolimits S(\mathfrak{g}^\ast)$ is the coadjoint representation of $\mathfrak{g}$ on $S(\mathfrak{g}^\ast)$ and $\delta \in \mathfrak{g}^\ast$ is the trace of the adjoint representation of the Lie algebra $\mathfrak{g}$ on itself. Via the action \eqref{action-1} and the Koszul coaction
\begin{equation}\label{coaction-1}
M \rightarrow \mathfrak{g} \otimes M, \quad m \mapsto X_i \otimes m\theta^i,
\end{equation}
$M = S(\mathfrak{g}^\ast)$ is a SAYD module over the Lie algebra $\mathfrak{g}$.
}\end{example}
\begin{example}\label{ex-2}\rm{
Let $\mathfrak{g}$ be a Lie algebra and $M = S(\mathfrak{g}^\ast)\nsb{2q}$ be a truncation of the symmetric algebra of $\mathfrak{g}^\ast$. Then by the action \eqref{action-1} and the coaction \eqref{coaction-1}, $M$ becomes an SAYD module over the Lie algebra $\mathfrak{g}$. Note that in this case the coaction is locally conilpotent.
}\end{example}
\noindent We recall from \cite{HajaKhalRangSomm04-I} the definition of a right-left stable-anti-Yetter-Drinfeld module over a Hopf algebra ${\cal H}$. Let $M$ be a right module and left comodule over a Hopf algebra ${\cal H}$. We say that it is a stable-anti-Yetter-Drinfeld (SAYD) module over ${\cal H}$ if
\begin{align}\label{AYD-Hopf}
&\blacktriangledown(m\cdot h)= S(h\ps{3})m\ns{-1}h\ps{1}\otimes m\ns{0}\cdot h\ps{2},\\\label{stability-Hopf}
&m\ns{0} \cdot m\ns{-1}=m,
\end{align}
for any $m\in M$ and $h\in {\cal H}$.
\noindent According to \cite[Proposition 5.10]{RangSutl-II}, AYD modules over a Lie algebra $\mathfrak{g}$ with locally conilpotent coaction are in one to one correspondence with AYD modules over the universal enveloping algebra $U(\mathfrak{g})$. In this case, while it is possible to carry the $\mathfrak{g}$-stability to $U(\mathfrak{g})$-stability \cite[Lemma 5.11]{RangSutl-II}, the converse is not necessarily true \cite[Example 5.12]{RangSutl-II}.
\medskip
\noindent A family of examples of SAYD modules over a Lie algebra $\mathfrak{g}$ is given by the modules over the Weyl algebra $D(\mathfrak{g})$, \cite[Corollary 5.14]{RangSutl-II}. As for finite dimensional examples, it is proven in \cite{RangSutl-II} that there is no non-trivial $sl_2$-coaction that makes a simple two dimensional $sl_2$-module an SAYD module over $sl_2$.
\medskip
\noindent Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of Lie algebras, with $\mathfrak{a} := \mathfrak{g}_1 \bowtie \mathfrak{g}_2$ as their double crossed sum Lie algebra. A vector space $M$ is a module over $\mathfrak{a}$ if and only if it is a module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$, such that
\begin{align}
(m \cdot Y) \cdot X - (m \cdot X) \cdot Y = m \cdot (Y \rhd X) + m \cdot (Y \lhd X)
\end{align}
is satisfied. In the converse argument one considers the $\mathfrak{a}$ action on $M$ by
\begin{equation}\label{module on doublecrossed sum}
m \cdot (X \oplus Y) = m \cdot X + m \cdot Y.
\end{equation}
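\medskip
\noindent Let us also record where the compatibility condition above comes from: by the bracket of $\mathfrak{a}$ one has $[0\oplus Y,\, X\oplus 0]=(Y\rhd X)\oplus(Y\lhd X)$, so the right module condition $m\cdot[a,b]=(m\cdot a)\cdot b-(m\cdot b)\cdot a$, applied to $a=0\oplus Y$ and $b=X\oplus 0$, reads
\begin{equation*}
(m \cdot Y) \cdot X - (m \cdot X) \cdot Y = m \cdot (Y \rhd X) + m \cdot (Y \lhd X).
\end{equation*}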
\noindent For the comodule structures we have the following analogous result. If $M$ is a comodule over $\mathfrak{g}_1$ and $\mathfrak{g}_2$ via
\begin{equation}
m \mapsto m\nsb{-1} \otimes m\nsb{0} \in \mathfrak{g}_1 \otimes M \quad \text{ and } \quad m \mapsto m\ns{-1} \otimes m\ns{0} \in \mathfrak{g}_2 \otimes M,
\end{equation}
\noindent then we define the following linear map
\begin{equation}\label{comodule on doublecrossed sum}
m \mapsto m\nsb{-1} \otimes m\nsb{0} + m\ns{-1} \otimes m\ns{0} \in \mathfrak{a} \otimes M.
\end{equation}
\noindent Conversely, if $M$ is an $\mathfrak{a}$-comodule via $\blacktriangledown_\mathfrak{a}:M\rightarrow \mathfrak{a}\otimes M$, then we define the linear maps with the help of the projections
\begin{equation}\label{auxy2}
\xymatrix{
\ar[r]^{\blacktriangledown_{\mathfrak{a}}} M \ar[dr]_{\blacktriangledown_{\mathfrak{g}_1}} & \mathfrak{a} \otimes M \ar[d]^{p_1 \otimes \mathop{\rm Id}\nolimits} \\
& \mathfrak{g}_1 \otimes M}
\qquad \text{and }\qquad \xymatrix{
\ar[r]^{\blacktriangledown_{\mathfrak{a}}} M \ar[dr]_{\blacktriangledown_{\mathfrak{g}_2}} & \mathfrak{a} \otimes M \ar[d]^{p_2 \otimes \mathop{\rm Id}\nolimits} \\
& \mathfrak{g}_2 \otimes M}
\end{equation}
\begin{proposition}\label{proposition-comodule-doublecrossed-sum}
A vector space $M$ is an $\mathfrak{a}$-comodule if and only if it is a $\mathfrak{g}_1$-comodule and $\mathfrak{g}_2$-comodule such that
\begin{equation}\label{aux-18}
m\nsb{-1} \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0} = m\ns{0}\nsb{-1} \otimes m\ns{-1} \otimes m\ns{0}\nsb{0}.
\end{equation}
\end{proposition}
\begin{proof}
Assume first that $M$ is an $\mathfrak{a}$-comodule. By the $\mathfrak{a}$-coaction compatibility, we have
\begin{align}\label{aux-20}
\begin{split}
& m\nsb{-2} \wedge m\nsb{-1} \otimes m\nsb{0} + m\nsb{-1} \wedge m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0} \\
& + m\ns{-1} \wedge m\ns{0}\nsb{-1} \otimes m\ns{0}\nsb{0} + m\ns{-2} \wedge m\ns{-1} \otimes m\ns{0} = 0.
\end{split}
\end{align}
\noindent Applying the antisymmetrization map $\alpha:\mathfrak{a} \wedge \mathfrak{a} \to U(\mathfrak{a}) \otimes U(\mathfrak{a})$ we get
\begin{align}\label{aux-19}
\begin{split}
& (m\nsb{-1} \otimes 1) \otimes (1 \otimes m\nsb{0}\ns{-1}) \otimes m\nsb{0}\ns{0} - (1 \otimes m\nsb{0}\ns{-1}) \otimes (m\nsb{-1} \otimes 1) \otimes m\nsb{0}\ns{0} \\
& + (1 \otimes m\ns{-1}) \otimes (m\ns{0}\nsb{-1} \otimes 1) \otimes m\ns{0}\nsb{0} - (m\ns{0}\nsb{-1} \otimes 1) \otimes (1 \otimes m\ns{-1}) \otimes m\ns{0}\nsb{0} = 0.
\end{split}
\end{align}
\noindent Finally, applying $\mathop{\rm Id}\nolimits \otimes \varepsilon_{U(\mathfrak{g}_1)} \otimes \varepsilon_{U(\mathfrak{g}_2)} \otimes \mathop{\rm Id}\nolimits$ to both sides of the above equation, we get equation \eqref{aux-18}.
\medskip
\noindent Let $m \mapsto m\pr{-1} \otimes m\pr{0} \in \mathfrak{a} \otimes M$ denote the $\mathfrak{a}$-coaction on $M$. Also let $p_1:\mathfrak{a} \to \mathfrak{g}_1$ and $p_2:\mathfrak{a} \to \mathfrak{g}_2$ be the projections onto the subalgebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$ respectively. Then the $\mathfrak{a}$-coaction is
\begin{equation}
m\pr{-1} \otimes m\pr{0} = p_1(m\pr{-1}) \otimes m\pr{0} + p_2(m\pr{-1}) \otimes m\pr{0}.
\end{equation}
\noindent Next, we shall prove that
\begin{equation}
m \mapsto p_1(m\pr{-1}) \otimes m\pr{0} \in \mathfrak{g}_1 \otimes M \quad \text{ and } \quad m \mapsto p_2(m\pr{-1}) \otimes m\pr{0} \in \mathfrak{g}_2 \otimes M
\end{equation}
\noindent are coactions. To this end, we observe that
\begin{equation}
\alpha(p_1(m\pr{-2}) \wedge p_1(m\pr{-1})) \otimes m\pr{0} = (p_1 \otimes p_1)(\alpha(m\pr{-2} \wedge m\pr{-1})) \otimes m\pr{0} = 0,
\end{equation}
\noindent for $M$ is an $\mathfrak{a}$-comodule.
\medskip
\noindent Since the antisymmetrization map $\alpha:\mathfrak{g}_1 \wedge \mathfrak{g}_1 \to U(\mathfrak{g}_1) \otimes U(\mathfrak{g}_1)$ is injective, we have
\begin{equation}
p_1(m\pr{-2}) \wedge p_1(m\pr{-1}) \otimes m\pr{0} = 0,
\end{equation}
\noindent proving that $m \mapsto p_1(m\pr{-1}) \otimes m\pr{0}$ is a $\mathfrak{g}_1$-coaction. Similarly $m \mapsto p_2(m\pr{-1}) \otimes m\pr{0}$ is a $\mathfrak{g}_2$-coaction on $M$.
\medskip
\noindent Conversely, assume that $M$ is a $\mathfrak{g}_1$-comodule and $\mathfrak{g}_2$-comodule such that the compatibility \eqref{aux-18} is satisfied. Then obviously \eqref{aux-20} is true, which is the $\mathfrak{a}$-comodule compatibility for the coaction \eqref{comodule on doublecrossed sum}.
\end{proof}
\noindent We proceed by investigating the relations between AYD modules over the Lie algebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$, and AYD modules over the double crossed sum Lie algebra $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$.
\begin{proposition}\label{auxx-2}
Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of Lie algebras, $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$, and $M \in \, ^{\mathfrak{a}}\rm{conil}{\cal M}_{\mathfrak{a}}$. Then, $M$ is an AYD module over $\mathfrak{a}$ if and only if $M$ is an AYD module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$, and the following conditions are satisfied
\begin{align}\label{prop-ax2-1}
&(m \cdot X)\ns{-1} \otimes (m \cdot X)\ns{0} = m\ns{-1} \lhd X \otimes m\ns{0} + m\ns{-1} \otimes m\ns{0} \cdot X, \\[.1cm]\label{prop-ax2-2}
& m\ns{-1} \rhd X \otimes m\ns{0} = 0, \\[.1cm]\label{prop-ax2-3}
& (m \cdot Y)\nsb{-1} \otimes (m \cdot Y)\nsb{0} = - Y \rhd m\nsb{-1} \otimes m\nsb{0} + m\nsb{-1} \otimes m\nsb{0} \cdot Y, \\[.1cm]\label{prop-ax2-4}
& Y \lhd m\nsb{-1} \otimes m\nsb{0} = 0,
\end{align}
for any $X \in \mathfrak{g}_1$, $Y \in \mathfrak{g}_2$ and any $m \in M$.
\end{proposition}
\begin{proof}
For $M \in \, ^{\mathfrak{a}}\rm{conil}{\cal M}_{\mathfrak{a}}$, assume that $M$ is an AYD module over the double crossed sum Lie algebra $\mathfrak{a}$ via the coaction
\begin{equation}
m \mapsto m\pr{-1} \otimes m\pr{0} = m\nsb{-1} \otimes m\nsb{0} + m\ns{-1} \otimes m\ns{0}.
\end{equation}
\noindent As the $\mathfrak{a}$-coaction is locally conilpotent, by \cite[Proposition 5.10]{RangSutl-II} we have $M \in \, ^{U(\mathfrak{a})}\mathcal{AYD}_{U(\mathfrak{a})}$. Then since the projections
\begin{equation}
\pi_1:U(\mathfrak{a}) = U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2) \to U(\mathfrak{g}_1), \quad \pi_2:U(\mathfrak{a}) = U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2) \to U(\mathfrak{g}_2)
\end{equation}
\noindent are coalgebra maps, we conclude that $M$ is a comodule over $U(\mathfrak{g}_1)$ and $U(\mathfrak{g}_2)$. Finally, since $U(\mathfrak{g}_1)$ and $U(\mathfrak{g}_2)$ are Hopf subalgebras of $U(\mathfrak{a})$, AYD conditions on $U(\mathfrak{g}_1)$ and $U(\mathfrak{g}_2)$ are immediate, and thus $M$ is an AYD module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$.
\medskip
\noindent We now prove the compatibility conditions \eqref{prop-ax2-1}, \ldots, \eqref{prop-ax2-4}. To this end, we will make use of the AYD condition for an arbitrary $X \oplus Y \in \mathfrak{a}$ and $m \in M$. On one hand we have
\begin{align}\label{aux-21}
\begin{split}
& [m\pr{-1}, X \oplus Y] \otimes m\pr{0} + m\pr{-1} \otimes m\pr{0} \cdot (X \oplus Y) = \\
& [m\nsb{-1} \oplus 0, X \oplus Y] \otimes m\nsb{0} + [0 \oplus m\ns{-1}, X \oplus Y] \otimes m\ns{0} \\
& + m\nsb{-1} \otimes m\nsb{0} \cdot (X \oplus Y) + m\ns{-1} \otimes m\ns{0} \cdot (X \oplus Y) \\
& = ([m\nsb{-1},X] - Y \rhd m\nsb{-1} \oplus -Y \lhd m\nsb{-1}) \otimes m\nsb{0} \\
& + (m\ns{-1} \rhd X \oplus [m\ns{-1},Y] + m\ns{-1} \lhd X) \otimes m\ns{0} \\
& + (m\nsb{-1} \oplus 0) \otimes m\nsb{0} \cdot (X \oplus Y) + (0 \oplus m\ns{-1}) \otimes m\ns{0} \cdot (X \oplus Y) \\
& = ((m \cdot X)\nsb{-1} \oplus 0) \otimes (m \cdot X)\nsb{0} + (0 \oplus (m \cdot Y)\ns{-1}) \otimes (m \cdot Y)\ns{0} \\
& + (- Y \rhd m\nsb{-1} \oplus -Y \lhd m\nsb{-1}) \otimes m\nsb{0} + (m\ns{-1} \rhd X \oplus m\ns{-1} \lhd X) \otimes m\ns{0} \\
& + (m\nsb{-1} \oplus 0) \otimes m\nsb{0} \cdot Y + (0 \oplus m\ns{-1}) \otimes m\ns{0} \cdot X.
\end{split}
\end{align}
\noindent On the other hand,
\begin{align}\label{aux-22}
\begin{split}
& (m \cdot (X \oplus Y))\pr{-1} \otimes (m \cdot (X \oplus Y))\pr{0} = ((m \cdot X)\nsb{-1} \oplus 0) \otimes (m \cdot X)\nsb{0} + \\
& ((m \cdot Y)\nsb{-1} \oplus 0) \otimes (m \cdot Y)\nsb{0} + (0 \oplus (m \cdot X)\ns{-1}) \otimes (m \cdot X)\ns{0} \\
& + (0 \oplus (m \cdot Y)\ns{-1}) \otimes (m \cdot Y)\ns{0}.
\end{split}
\end{align}
\noindent Since $M$ is an AYD module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$, AYD compatibility \eqref{aux-21} = \eqref{aux-22} translates into
\begin{align}\label{aux-23}
\begin{split}
& ((m \cdot Y)\nsb{-1} \oplus 0) \otimes (m \cdot Y)\nsb{0} + (0 \oplus (m \cdot X)\ns{-1}) \otimes (m \cdot X)\ns{0} = \\
& (- Y \rhd m\nsb{-1} \oplus -Y \lhd m\nsb{-1}) \otimes m\nsb{0} + (m\ns{-1} \rhd X \oplus m\ns{-1} \lhd X) \otimes m\ns{0} \\
& + (m\nsb{-1} \oplus 0) \otimes m\nsb{0} \cdot Y + (0 \oplus m\ns{-1}) \otimes m\ns{0} \cdot X.
\end{split}
\end{align}
\noindent Finally, we set $Y := 0$ to get \eqref{prop-ax2-1} and \eqref{prop-ax2-2}. The equations \eqref{prop-ax2-3} and \eqref{prop-ax2-4} are similarly implied by setting $X:=0$.
\medskip
\noindent The converse argument is clear.
\end{proof}
\noindent In general, if $M$ is an AYD module over the double crossed sum Lie algebra $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$, then $M$ is not necessarily an AYD module over the Lie algebras $\mathfrak{g}_1$ and $\mathfrak{g}_2$.
\begin{example}\label{example-sl2}\rm{
Consider the Lie algebra $sl_2 = \Big\langle X,Y,Z \Big\rangle$,
\begin{equation}
[Y,X] = X, \quad [Z,X] = Y, \quad [Z,Y] = Z.
\end{equation}
\noindent Then, $sl_2 = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$ for $\mathfrak{g}_1 = \Big\langle X,Y \Big\rangle$ and $\mathfrak{g}_2 = \Big\langle Z \Big\rangle$.
\medskip
\noindent In view of Example \ref{ex-1}, the symmetric algebra $M = S({sl_2}^\ast)$ is a right-left AYD module over $sl_2$. The module structure is defined by the coadjoint action, which coincides with \eqref{action-1} since $sl_2$ is unimodular, and the comodule structure is given by the Koszul coaction \eqref{coaction-1}.
\medskip
\noindent We now show that it is not an AYD module over $\mathfrak{g}_1$. Let $\Big\{\theta^X,\theta^Y,\theta^Z\Big\}$ be a dual basis for $sl_2$. The linear map
\begin{equation}
\blacktriangledown_{\mathfrak{g}_1}: M \to \mathfrak{g}_1 \otimes M, \quad m \mapsto X \otimes m\theta^X + Y \otimes m\theta^Y,
\end{equation}
which is the projection onto the Lie algebra $\mathfrak{g}_1$, endows $M$ with a left $\mathfrak{g}_1$-comodule structure. However, the AYD compatibility on $\mathfrak{g}_1$ is not satisfied. Indeed, on one side we have
\begin{equation}
\blacktriangledown_{\mathfrak{g}_1}(m \lhd X) = X \otimes (m \lhd X)\theta^X + Y \otimes (m \lhd X)\theta^Y,
\end{equation}
while on the other hand we get
\begin{align}
\begin{split}
& [X,X] \otimes m\theta^X + [Y,X] \otimes m\theta^Y + X \otimes (m\theta^X) \lhd X + Y \otimes (m\theta^Y) \lhd X = \\
& X \otimes (m \lhd X)\theta^X + Y \otimes (m \lhd X)\theta^Y - Y \otimes m\theta^Z.
\end{split}
\end{align}
Hence the two sides differ by the term $-Y \otimes m\theta^Z$, so the AYD condition over $\mathfrak{g}_1$ fails.
}\end{example}
\begin{remark}{\rm
Assume that the mutual actions of $\mathfrak{g}_1$ and $\mathfrak{g}_2$ are trivial. In this case, if $M$ is an AYD module over $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$, then it is an AYD module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$.
\medskip
\noindent To see this, let us apply $p_1\otimes \mathop{\rm Id}\nolimits_M$ to both sides of the AYD condition \eqref{AYD-condition} for $X \oplus 0 \in \mathfrak{a}$, where $p_1:\mathfrak{a} \to \mathfrak{g}_1$ is the obvious projection. That is
\begin{align}\label{auxy1}
\begin{split}
& p_1([m\pr{-1}, X \oplus 0]) \otimes m\pr{0} + p_1(m\pr{-1}) \otimes m\pr{0} \cdot (X \oplus 0) \\
& = p_1((m \cdot (X \oplus 0))\pr{-1}) \otimes (m \cdot (X \oplus 0))\pr{0}.
\end{split}
\end{align}
Since in this case the projection $p_1:\mathfrak{a} \to \mathfrak{g}_1$ is a map of Lie algebras, the equation \eqref{auxy1} reads
\begin{align}
\begin{split}
& [p_1(m\pr{-1}), X] \otimes m\pr{0} + p_1(m\pr{-1}) \otimes m\pr{0} \cdot X = p_1((m \cdot X)\pr{-1}) \otimes (m \cdot X)\pr{0},
\end{split}
\end{align}
which is the AYD compatibility for the $\mathfrak{g}_1$-coaction.
Similarly, one proves that $M$ is an AYD module over the Lie algebra $\mathfrak{g}_2$.
}\end{remark}
\noindent Let $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$ be a double crossed sum Lie algebra and $M$ be an SAYD module over $\mathfrak{a}$. By the next example we show that $M$ is not necessarily stable over $\mathfrak{g}_1$ and $\mathfrak{g}_2$.
\begin{example}{\rm
Consider the Lie algebra $\mathfrak{a} = gl_2 = \Big\langle Y^1_1, Y^1_2, Y^2_1, Y^2_2 \Big\rangle$ with a dual basis $\Big\{\theta^1_1, \theta^2_1, \theta^1_2, \theta^2_2\Big\}$.
\medskip
\noindent We have a decomposition $gl_2 = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$, where $\mathfrak{g}_1 = \Big\langle Y^1_1, Y^1_2 \Big\rangle$ and $\mathfrak{g}_2 = \Big\langle Y^2_1, Y^2_2 \Big\rangle$. Let $M := S({gl_2}^\ast)$ be the symmetric algebra as an SAYD module over $gl_2$ with the action \eqref{action-1} and the Koszul coaction \eqref{coaction-1} as in Example \ref{ex-1}. Then the $\mathfrak{g}_1$-coaction on $M$ becomes
\begin{equation}
m \mapsto m\ns{-1} \otimes m\ns{0} = Y^1_1 \otimes m\theta^1_1 + Y^1_2 \otimes m\theta^2_1.
\end{equation}
Accordingly, since $\delta(Y^1_1) = 0 = \delta(Y^1_2)$ we have
\begin{equation}
{\theta^2_1}\ns{0} \lhd {\theta^2_1}\ns{-1} = - {\cal L}_{Y^1_1}\theta^1_1 - {\cal L}_{Y^1_2}\theta^2_1 = - \theta^2_1\theta^1_1 \neq 0.
\end{equation}
}\end{example}
\noindent We know that if a comodule over a Lie algebra $\mathfrak{g}$ is locally conilpotent then it can be lifted to a comodule over $U(\mathfrak{g})$. In the rest of this section, we are interested in translating Proposition \ref{auxx-2} in terms of AYD modules over universal enveloping algebras.
\begin{proposition}\label{auxx-1}
Let $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$ be a double crossed sum Lie algebra and $M$ be a left comodule over $\mathfrak{a}$. Then $\mathfrak{a}$-coaction is locally conilpotent if and only if the corresponding $\mathfrak{g}_1$-coaction and $\mathfrak{g}_2$-coaction are locally conilpotent.
\end{proposition}
\begin{proof}
By \eqref{auxy2} we know that $\blacktriangledown_{\mathfrak{a}} = \blacktriangledown_{\mathfrak{g}_1} + \blacktriangledown_{\mathfrak{g}_2}$. Therefore,
\begin{equation}
\blacktriangledown_{\mathfrak{a}}^2(m) = m\nsb{-2} \otimes m\nsb{-1} \otimes m\nsb{0} + m\ns{-2} \otimes m\ns{-1} \otimes m\ns{0} + m\nsb{-1} \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0}.
\end{equation}
By induction we assume that
\begin{align}
\begin{split}
& \blacktriangledown_{\mathfrak{a}}^k(m) = m\nsb{-k} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0} + m\ns{-k} \otimes \ldots \otimes m\ns{-1} \otimes m\ns{0} \\
& + \sum_{p + q = k} m\nsb{-p} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0}\ns{-q} \otimes \ldots \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0},
\end{split}
\end{align}
and we apply the coaction one more time to get
\begin{align}
\begin{split}
& \blacktriangledown_{\mathfrak{a}}^{k+1}(m) = m\nsb{-k-1} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0} + m\ns{-k-1} \otimes \ldots \otimes m\ns{-1} \otimes m\ns{0} \\
& + \sum_{p + q = k} m\nsb{-p} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0}\ns{-q} \otimes \ldots \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0}\nsb{-1} \otimes m\nsb{0}\ns{0}\nsb{0} \\
& + \sum_{p + q = k} m\nsb{-p} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0}\ns{-q-1} \otimes \ldots \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0} \\
& = m\nsb{-k-1} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0} + m\ns{-k-1} \otimes \ldots \otimes m\ns{-1} \otimes m\ns{0} \\
& + \sum_{p + q = k} m\nsb{-p-1} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0}\ns{-q} \otimes \ldots \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0} \\
& + \sum_{p + q = k} m\nsb{-p} \otimes \ldots \otimes m\nsb{-1} \otimes m\nsb{0}\ns{-q-1} \otimes \ldots \otimes m\nsb{0}\ns{-1} \otimes m\nsb{0}\ns{0}.
\end{split}
\end{align}
In the second equality we used \eqref{aux-18}. This immediately implies the claim.
\end{proof}
\noindent Let $M$ be a locally conilpotent comodule over $\mathfrak{g}_1$ and $\mathfrak{g}_2$. We denote by
\begin{equation}
M \to U(\mathfrak{g}_1) \otimes M, \quad m \mapsto m\snsb{-1} \otimes m\snsb{0}
\end{equation}
the lift of the $\mathfrak{g}_1$-coaction and similarly by
\begin{equation}
M \to U(\mathfrak{g}_2) \otimes M, \quad m \mapsto m\sns{-1} \otimes m\sns{0}
\end{equation}
the lift of the $\mathfrak{g}_2$-coaction.
\begin{corollary}\label{aux-58}
Let $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$ be a double crossed sum Lie algebra and $M \in \, ^{\mathfrak{a}}\rm{conil}{\cal M}_{\mathfrak{a}}$. Then the $\mathfrak{a}$-coaction
lifts to the $U(\mathfrak{a})$-coaction
\begin{equation}
m \mapsto m\snsb{-1} \otimes m\snsb{0}\sns{-1} \otimes m\snsb{0}\sns{0} \in U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2) \otimes M.
\end{equation}
\end{corollary}
\begin{proposition}\label{aux-24}
Let $\mathfrak{a} = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$ be a double crossed sum Lie algebra and $M \in \, ^{\mathfrak{a}}\rm{conil}{\cal M}_{\mathfrak{a}}$. Then $M$ is an AYD module over $\mathfrak{a}$ if and only if $M$ is an AYD module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$, and the following conditions are satisfied for any $m \in M$, any $u \in U(\mathfrak{g}_1)$ and $v \in U(\mathfrak{g}_2)$.
\begin{align}\label{auxy3}
&(m \cdot u)\sns{-1} \otimes (m \cdot u)\sns{0} = m\sns{-1} \lhd u\ps{1} \otimes m\sns{0} \cdot u\ps{2}, \\\label{auxy4}
&m\sns{-1} \rhd u \otimes m\sns{0} = u \otimes m, \\\label{auxy5}
&(m \cdot v)\snsb{-1} \otimes (m \cdot v)\snsb{0} = S(v\ps{2}) \rhd m\snsb{-1} \otimes m\snsb{0} \cdot v\ps{1}, \\\label{auxy6}
&v \lhd m\snsb{-1} \otimes m\snsb{0} = v \otimes m.
\end{align}
\end{proposition}
\begin{proof}
Let $M$ be an AYD module over $\mathfrak{a}$. Since the coaction is locally conilpotent, it lifts to an AYD module over $U(\mathfrak{a})$ by \cite[Proposition 5.7]{RangSutl-II}. We write the AYD condition of Hopf algebras \eqref{AYD-Hopf} for $u \bowtie 1 \in U(\mathfrak{a})$,
\begin{align}\label{auxy7}
\begin{split}
& (m \cdot u)\snsb{-1} \otimes (m \cdot u)\snsb{0}\sns{-1} \otimes (m \cdot u)\snsb{0}\sns{0} = \\
& (S(u\ps{3}) \otimes 1)(m\snsb{-1} \otimes m\snsb{0}\sns{-1})(u\ps{1} \otimes 1) \otimes m\snsb{0}\sns{0} \cdot (u\ps{2} \otimes 1) = \\
& S(u\ps{4})m\snsb{-1}(m\snsb{0}\sns{-2} \rhd u\ps{1}) \otimes m\snsb{0}\sns{-1} \lhd u\ps{2} \otimes m\snsb{0}\sns{0} \cdot u\ps{3}.
\end{split}
\end{align}
Applying $\varepsilon \otimes \mathop{\rm Id}\nolimits \otimes \mathop{\rm Id}\nolimits$ to both sides of \eqref{auxy7}, we get \eqref{auxy3}.
Similarly we get
\begin{equation}
(m \cdot u)\snsb{-1} \otimes (m \cdot u)\snsb{0} = S(u\ps{3})m\snsb{-1}(m\snsb{0}\sns{-1} \rhd u\ps{1}) \otimes m\snsb{0}\sns{0} \cdot u\ps{2},
\end{equation}
which yields the following equation after using AYD condition on the left hand side
\begin{equation}
S(u\ps{3})m\snsb{-1}u\ps{1} \otimes m\snsb{0}u\ps{2} = S(u\ps{3})m\snsb{-1}(m\snsb{0}\sns{-1} \rhd u\ps{1}) \otimes m\snsb{0}\sns{0} \cdot u\ps{2}.
\end{equation}
This immediately implies \eqref{auxy4}. Switching to the Lie algebra $\mathfrak{g}_2$ and writing the AYD condition with a $1 \bowtie v \in U(\mathfrak{a})$, we obtain \eqref{auxy5} and \eqref{auxy6}.
\medskip
\noindent Conversely, for $M \in \, ^{\mathfrak{a}}\rm{conil}{\cal M}_{\mathfrak{a}}$ which is also an AYD module over $\mathfrak{g}_1$ and $\mathfrak{g}_2$, assume that \eqref{auxy3},\ldots,\eqref{auxy6} are satisfied. Then $M$ is an AYD module over $U(\mathfrak{g}_1)$ and $U(\mathfrak{g}_2)$. We show that \eqref{auxy3} and \eqref{auxy4} together imply the AYD condition for the elements of the form $u \bowtie 1 \in U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$. Indeed,
\begin{align}
\begin{split}
& (m \cdot u)\snsb{-1} \otimes (m \cdot u)\snsb{0}\sns{-1} \otimes (m \cdot u)\snsb{0}\sns{0} = \\
& S(u\ps{3})m\snsb{-1}u\ps{1} \otimes (m\snsb{0} \cdot u\ps{2})\sns{-1} \otimes (m\snsb{0} \cdot u\ps{2})\sns{0} = \\
& S(u\ps{4})m\snsb{-1}u\ps{1} \otimes m\snsb{0}\sns{-1} \lhd u\ps{2} \otimes m\snsb{0}\sns{0} \cdot u\ps{3} = \\
& S(u\ps{4})m\snsb{-1}(m\snsb{0}\sns{-2} \rhd u\ps{1}) \otimes m\snsb{0}\sns{-1} \lhd u\ps{2} \otimes m\snsb{0}\sns{0} \cdot u\ps{3},
\end{split}
\end{align}
where the first equality follows from the AYD condition on $U(\mathfrak{g}_1)$, the second equality follows from the \eqref{auxy3}, and the last equality is obtained by using \eqref{auxy4}. Similarly, using \eqref{auxy5} and \eqref{auxy6} we prove the AYD condition for the elements of the form $1 \bowtie v \in U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$. The proof is then complete, since the AYD condition is multiplicative.
\end{proof}
\noindent The following generalization of Proposition \ref{aux-24} is now straightforward.
\begin{corollary}\label{auxx-24}
Let $({\cal U},{\cal V})$ be a mutual pair of Hopf algebras and $M$ a linear space. Then $M$ is an AYD module over ${\cal U}\bowtie {\cal V}$ if and only if $M$ is an AYD module over ${\cal U}$ and ${\cal V}$, and the following conditions are satisfied for any $m \in M$, any $u \in {\cal U}$ and $v \in {\cal V}$.
\begin{align}\label{auxy13}
&(m \cdot u)\sns{-1} \otimes (m \cdot u)\sns{0} = m\sns{-1} \lhd u\ps{1} \otimes m\sns{0} \cdot u\ps{2}, \\\label{auxy14}
&m\sns{-1} \rhd u \otimes m\sns{0} = u \otimes m, \\\label{auxy15}
& (m \cdot v)\snsb{-1} \otimes (m \cdot v)\snsb{0} = S(v\ps{2}) \rhd m\snsb{-1} \otimes m\snsb{0} \cdot v\ps{1}, \\\label{auxy16}
&v \lhd m\snsb{-1} \otimes m\snsb{0} = v \otimes m,
\end{align}
\end{corollary}
\section{Lie-Hopf algebras and their SAYD modules}
\label{Sec-Lie-Hopf}
In this section we first recall from \cite{RangSutl} the matched pair of Hopf algebras associated to a matched pair of Lie algebras. We then identify the AYD modules over the universal enveloping algebra of a double crossed sum Lie algebra with the YD modules over the corresponding bicrossed product Hopf algebra. Finally we prove that the only finite dimensional SAYD module over the Connes-Moscovici Hopf algebras is the one-dimensional one found in \cite{ConnMosc98}.
\subsection{Lie-Hopf algebras}
Let us first review the bicrossed product construction from \cite{Maji}. Let ${\cal U}$ and ${\cal F}$ be two Hopf algebras. A linear map $$ \blacktriangledown:{\cal U}\rightarrow{\cal U}\otimes {\cal F} , \qquad \blacktriangledown u \, = \, u^{\pr{0}} \otimes u^{\pr{1}} \, , $$ defines a right
coaction and equips
${\cal U}$ with a right ${\cal F}-$comodule coalgebra structure, if the
following conditions are satisfied for any $u\in {\cal U}$:
\begin{align}
&u^{\pr{0}}\ps{1}\otimes u^{\pr{0}}\ps{2}\otimes u^{\pr{1}}= {u\ps{1}}^{\pr{0}}\otimes {u\ps{2}}^{\pr{0}}\otimes {u\ps{1}}^{\pr{1}}{u\ps{2}}^{\pr{1}}, \quad \varepsilon(u^{\pr{0}})u^{\pr{1}}=\varepsilon(u)1.
\end{align}
We then form a cocrossed product coalgebra ${\cal F}\blacktriangleright\hspace{-4pt} < {\cal U}$. It has ${\cal F}\otimes {\cal U}$ as underlying vector space and the coalgebra structure is given by
\begin{align}
&\Delta(f\blacktriangleright\hspace{-4pt} < u)= f\ps{1}\blacktriangleright\hspace{-4pt} < {u\ps{1}}^{\pr{0}}\otimes f\ps{2}{u\ps{1}}^{\pr{1}}\blacktriangleright\hspace{-4pt} < u\ps{2}, \quad \varepsilon(f\blacktriangleright\hspace{-4pt} < u)=\varepsilon(f)\varepsilon(u).
\end{align}
In a dual fashion, ${\cal F}$ is called a {left ${\cal U}-$module algebra}, if ${\cal U}$ acts from the left on ${\cal F}$ via a left action $$
\triangleright : {\cal F}\otimes {\cal U} \rightarrow {\cal F}
$$ which satisfies the following conditions for any $u\in {\cal U}$, and $f,g\in {\cal F}$ :
\begin{align}
&u\triangleright 1=\varepsilon(u)1, \quad u\triangleright(fg)=(u\ps{1}\triangleright f)(u\ps{2}\triangleright g).
\end{align}
This time we can endow the underlying vector space ${\cal F}\otimes {\cal U}$ with
an algebra structure, to be denoted by ${\cal F}>\hspace{-4pt}\vartriangleleft {\cal U}$, with $1>\hspace{-4pt}\vartriangleleft 1$
as its unit and the product
\begin{equation}
(f>\hspace{-4pt}\vartriangleleft u)(g>\hspace{-4pt}\vartriangleleft v)=f \;u\ps{1}\triangleright g>\hspace{-4pt}\vartriangleleft u\ps{2}v.
\end{equation}
\noindent A pair of Hopf algebras $({\cal F},{\cal U})$ is called a matched pair of Hopf algebras if they are equipped, as above, with an action and a coaction which satisfy the following compatibility conditions
\begin{align}\label{mp1}
& \Delta(u\triangleright f)={u\ps{1}}^{\pr{0}} \triangleright f\ps{1}\otimes {u\ps{1}}^{\pr{1}}(u\ps{2}\triangleright f\ps{2}), \quad \varepsilon(u\triangleright f)=\varepsilon(u)\varepsilon(f) \\ \label{mp4}
& \blacktriangledown(uv)={u\ps{1}}^{\pr{0}} v^{\pr{0}}\otimes {u\ps{1}}^{\pr{1}}(u\ps{2}\triangleright v^{\pr{1}}), \quad \blacktriangledown(1)=1 \otimes 1 \\
& {u\ps{2}}^{\pr{0}}\otimes (u\ps{1}\triangleright f){u\ps{2}}^{\pr{1}}={u\ps{1}}^{\pr{0}}\otimes {u\ps{1}}^{\pr{1}}(u\ps{2}\triangleright f).
\end{align}
for any $u\in{\cal U}$, and any $f\in {\cal F}$. We then form a new Hopf algebra ${\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, called the bicrossed product of the matched pair $({\cal F} , {\cal U})$. It has ${\cal F}\blacktriangleright\hspace{-4pt} < {\cal U}$ as the underlying coalgebra and ${\cal F}>\hspace{-4pt}\vartriangleleft {\cal U}$ as the underlying algebra. The antipode is given by
\begin{equation}\label{anti}
S(f\blacktriangleright\hspace{-4pt}\vartriangleleft u)=(1\blacktriangleright\hspace{-4pt}\vartriangleleft S(u^{\pr{0}}))(S(fu^{\pr{1}})\blacktriangleright\hspace{-4pt}\vartriangleleft 1) , \qquad f \in {\cal F} , \, u \in {\cal U}.
\end{equation}
\noindent Next, we recall Lie-Hopf algebras from \cite{RangSutl}. A Lie-Hopf algebra produces a bicrossed product Hopf algebra such that one of the Hopf algebras involved is commutative and the other one is the universal enveloping algebra of a Lie algebra.
\medskip
\noindent Let ${\cal F}$ be a commutative Hopf algebra on which a Lie algebra $\mathfrak{g}$ acts by derivations. Then the vector space $\mathfrak{g} \otimes {\cal F}$ endowed with the bracket
\begin{equation}\label{bracket}
[X\otimes f, Y\otimes g]= [X,Y]\otimes fg+ Y\otimes \varepsilon(f)X\triangleright g- X\otimes \varepsilon(g) Y\triangleright f
\end{equation}
becomes a Lie algebra. Next, we assume that ${\cal F}$ coacts on $\mathfrak{g}$ via $\blacktriangledown_\mathfrak{g}:\mathfrak{g}\rightarrow \mathfrak{g}\otimes {\cal F}$. We say that the coaction $\blacktriangledown_\mathfrak{g}:\mathfrak{g}\rightarrow \mathfrak{g}\otimes {\cal F}$ satisfies the structure identity of $\mathfrak{g}$ if $\blacktriangledown_\mathfrak{g}:\mathfrak{g}\rightarrow \mathfrak{g}\otimes {\cal F}$ is a Lie algebra map. Finally one uses the action of $\mathfrak{g}$ on ${\cal F}$ and the coaction of ${\cal F}$ on $\mathfrak{g}$ to define the following useful action of $\mathfrak{g}$ on ${\cal F}\otimes {\cal F}$:
\begin{equation}
X\bullet (f^1 \otimes f^2)= \sum X^{\pr{0}}\triangleright f^1\otimes X^{\pr{1}} f^2 + f^1\otimes X\triangleright f^2.
\end{equation}
We are now ready to define the notion of Lie-Hopf algebra.
\begin{definition}\label{def-Lie-Hopf}
\cite{RangSutl}. Let a Lie algebra $\mathfrak{g}$ act on a commutative Hopf algebra ${\cal F}$ by derivations. We say that ${\cal F}$ is a $\mathfrak{g}$-Hopf algebra if
\begin{enumerate}
\item ${\cal F}$ coacts on $\mathfrak{g}$ and its coaction satisfies the structure identity of $\mathfrak{g}$.
\item $\Delta$ and $\varepsilon$ are $\mathfrak{g}$-linear, that is $\Delta(X\triangleright f)=X\bullet\Delta(f)$, \quad $\varepsilon(X\triangleright f)=0$, \quad $f\in {\cal F}$ and $X\in \mathfrak{g}$.
\end{enumerate}
\end{definition}
\noindent If ${\cal F}$ is a $\mathfrak{g}$-Hopf algebra, then $U(\mathfrak{g})$ acts on ${\cal F}$ naturally and makes it a $U(\mathfrak{g})$-module algebra. On the other hand, we extend the coaction $\blacktriangledown_\mathfrak{g}$ of ${\cal F}$ on $\mathfrak{g}$ to a coaction $\blacktriangledown_{U}$ of ${\cal F}$ on $U(\mathfrak{g})$ inductively via the rule \eqref{mp4}.
\medskip
\noindent As for the corresponding bicrossed product Hopf algebra, we have the following result.
\begin{theorem}\label{Theorem-Lie-Hopf-matched-pair}
\cite{RangSutl}. Let ${\cal F}$ be a commutative Hopf algebra and $\mathfrak{g}$ be a Lie algebra. Then the pair $({\cal F},U(\mathfrak{g}))$ is a matched pair of Hopf algebras if and only if ${\cal F}$ is a $\mathfrak{g}$-Hopf algebra.
\end{theorem}
\noindent A class of examples of Lie-Hopf algebras arises from matched pairs of Lie algebras. To be able to express such an example, let us recall first the definition of $R(\mathfrak{g})$, the Hopf algebra of representative functions on a Lie algebra $\mathfrak{g}$.
\begin{equation}\notag
R(\mathfrak{g})=\Big\{f\in U(\mathfrak{g})^\ast \mid \exists \; I\subseteq \ker f \text{ such that } \; \dim(\ker f)/I < \infty \Big\}.
\end{equation}
The finite codimensionality condition in the definition of $R(\mathfrak{g})$ guarantees that for any $f\in R(\mathfrak{g})$ there exist a finite number of functions $f_i',
f_i''\in R(\mathfrak{g})$ such that for any $u^1,u^2\in U(\mathfrak{g})$,
\begin{equation}
f(u^1u^2)=\sum_{i}f_i'(u^1)f_i''(u^2).
\end{equation}
The Hopf algebraic structure of $R(\mathfrak{g})$ is summarized by:
\begin{align}
&\mu: R(\mathfrak{g})\otimes R(\mathfrak{g})\rightarrow R(\mathfrak{g}), && \mu(f\otimes g)(u)=f(u\ps{1})g(u\ps{2}),\\ &\eta:{\mathbb C}\rightarrow R(\mathfrak{g}),&& \eta(1)=\varepsilon,\\ &\Delta:R(\mathfrak{g})\rightarrow R(\mathfrak{g})\otimes R(\mathfrak{g}),&&\\\notag & \Delta(f)=\sum_if_i'\otimes
f_i'',&& \text{if}\;\; f(u^1u^2)= \sum f_i'(u^1)f_i''(u^2),\\
& S: R(\mathfrak{g})\rightarrow R(\mathfrak{g}), &&S(f)(u)=f(S(u)).
\end{align}
\noindent The following proposition produces a family of examples.
\begin{proposition}\label{Proposition-matched-Lie-Hopf-Lie}
\cite{RangSutl}. For any matched pair of Lie algebras $(\mathfrak{g}_1,\mathfrak{g}_2)$, the Hopf algebra $R(\mathfrak{g}_2)$ is a $\mathfrak{g}_1$-Hopf algebra.
\end{proposition}
\subsection{SAYD modules over Lie-Hopf algebras}
Let us start with a very brief introduction to SAYD modules over Hopf algebras. Let ${\cal H}$ be a Hopf algebra. By definition, a character $\theta: {\cal H}\rightarrow {\mathbb C}$ is an algebra map.
A group-like $\sigma\in {\cal H}$ is the dual object of the character, {\it i.e., }\ $\Delta(\sigma)=\sigma\otimes \sigma$. The pair $(\theta,\sigma)$ is called a modular pair in involution \cite{ConnMosc00} if
\begin{equation}
\theta(\sigma)=1, \quad \text{and}\quad S_\theta^2=Ad_\sigma,
\end{equation}
where $Ad_\sigma(h)= \sigma h\sigma^{-1}$ and $S_\theta$ is defined by
\begin{equation}
S_\theta(h)=\theta(h\ps{1})S(h\ps{2}).
\end{equation}
We recall from \cite{HajaKhalRangSomm04-I} the definition of a right-left stable-anti-Yetter-Drinfeld module over a Hopf algebra ${\cal H}$. Let $M$ be a right module and left comodule over a Hopf algebra ${\cal H}$. We say that it is a stable-anti-Yetter-Drinfeld (SAYD) module over ${\cal H}$ if
\begin{equation}
\blacktriangledown(m\cdot h)= S(h\ps{3})m\ns{-1}h\ps{1}\otimes m\ns{0}\cdot h\ps{2},\qquad m\ns{0} \cdot m\ns{-1}=m,
\end{equation}
for any $m\in M$ and $h\in {\cal H}$.
It is shown in \cite{HajaKhalRangSomm04-I} that any modular pair in involution defines a one dimensional SAYD module and all one dimensional SAYD modules come this way.
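\medskip
\noindent Explicitly, the one dimensional SAYD module attached to a modular pair in involution $(\theta,\sigma)$, denoted $^\sigma{\mathbb C}_\theta$ as in the introduction, is ${\mathbb C}$ with the right action and the left coaction
\begin{equation*}
c \cdot h = \theta(h)\,c, \qquad \blacktriangledown(c) = \sigma \otimes c, \qquad c\in{\mathbb C},\; h\in {\cal H};
\end{equation*}
the fact that these structures satisfy the SAYD conditions is equivalent to $(\theta,\sigma)$ being a modular pair in involution \cite{HajaKhalRangSomm04-I}.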
\medskip
\noindent If $M$ is a module over a bicrossed product Hopf algebra $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$, then by the fact that $\mathcal{F}$ and $\mathcal{U}$ are subalgebras of $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ we can immediately conclude that $M$ is a module over $\mathcal{F}$ and over $\mathcal{U}$. More explicitly, we have the following elementary lemma, see \cite[Lemma 3.4]{RangSutl-II}.
\begin{lemma}\label{module on bicrossed product}
Let $(\mathcal{F}, \mathcal{U})$ be a matched pair of Hopf algebras and $M$ a linear space. Then $M$ is a right module over the bicrossed product Hopf algebra $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ if and only if $M$ is a right module over $\mathcal{F}$ and a right module over $\mathcal{U}$, such that
\begin{equation}\label{aux-25}
(m \cdot u) \cdot f = (m \cdot (u\ps{1} \rhd f)) \cdot u\ps{2}.
\end{equation}
\end{lemma}
Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of Lie algebras and $M$ be a module and a comodule over the double crossed sum $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ such that the $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction is locally conilpotent.
Being a right $\mathfrak{g}_1$-module, $M$ has a right $U(\mathfrak{g}_1)$-module structure. Similarly, since it is a locally conilpotent left $\mathfrak{g}_2$-comodule, $M$ is a right $R(\mathfrak{g}_2)$-module. Then we define
\begin{align}\label{aux-17}
\begin{split}
& M \otimes R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1) \to M \\
& m \otimes (f \blacktriangleright\hspace{-4pt}\vartriangleleft u) \mapsto (m \cdot f) \cdot u = f(m\sns{-1})m\sns{0} \cdot u.
\end{split}
\end{align}
\begin{corollary}
Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of Lie algebras and $M$ be an AYD module over the double crossed sum $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ such that $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction is locally conilpotent. Then $M$ has a right $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$-module structure via \eqref{aux-17}.
\end{corollary}
\begin{proof}
For $f \in R(\mathfrak{g}_2)$, $u \in U(\mathfrak{g}_1)$ and $m \in M$, we have
\begin{align}
\begin{split}
& (m \cdot u) \cdot f = f((m \cdot u)\sns{-1})(m \cdot u)\sns{0} = f(m\sns{-1} \lhd u\ps{1})m\sns{0} \cdot u\ps{2} \\
& = (u\ps{1} \rhd f)(m\sns{-1})m\sns{0} \cdot u\ps{2} = (m \cdot (u\ps{1} \rhd f)) \cdot u\ps{2}.
\end{split}
\end{align}
Here in the second equality we used Proposition \ref{aux-24}. So by Lemma \ref{module on bicrossed product} the proof is complete.
\end{proof}
\noindent Let us assume that $M$ is a left comodule over the bicrossed product $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$. Since the projections $\pi_1 := \mathop{\rm Id}\nolimits_{\mathcal{F}} \otimes \varepsilon_{\mathcal{U}}:\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U} \to \mathcal{F}$ and $\pi_2 := \varepsilon_{\mathcal{F}} \otimes \mathop{\rm Id}\nolimits_{\mathcal{U}}:\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U} \to \mathcal{U}$ are coalgebra maps, $M$ becomes a left $\mathcal{F}$-comodule as well as a left $\mathcal{U}$-comodule via $\pi_1$ and $\pi_2$. Denoting these comodule structures by
\begin{equation}
m \mapsto m^{\sns{-1}} \otimes m^{\sns{0}} \in \mathcal{F} \otimes M \quad \text{and} \quad m \mapsto m\snsb{-1} \otimes m\snsb{0} \in \mathcal{U} \otimes M,
\end{equation}
we mean the $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$-comodule structure is
\begin{equation}\label{auxy12}
m \mapsto m^{\sns{-1}} \otimes m^{\sns{0}}\snsb{-1} \otimes m^{\sns{0}}\snsb{0} \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U} \otimes M.
\end{equation}
\begin{lemma}\label{comodule on bicrossed product}
Let $(\mathcal{F}, \mathcal{U})$ be a matched pair of Hopf algebras and $M$ a linear space. Then $M$ is a left comodule over the bicrossed product Hopf algebra $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ if and only if it is a left comodule over $\mathcal{F}$ and a left comodule over $\mathcal{U}$, such that for any $m \in M$
\begin{equation}\label{aux-27}
(m^{\sns{0}}\snsb{-1})^{\pr{0}} \otimes m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-1})^{\pr{1}} \otimes m^{\sns{0}}\snsb{0} = m\snsb{-1} \otimes (m\snsb{0})^{\sns{-1}} \otimes (m\snsb{0})^{\sns{0}},
\end{equation}
where $u \mapsto u^{\pr{0}} \otimes u^{\pr{1}} \in \mathcal{U} \otimes \mathcal{F}$ is the right $\mathcal{F}$-coaction on $\mathcal{U}$.
\end{lemma}
\begin{proof}
Let us assume that $M$ is a comodule over the bicrossed product Hopf algebra $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$. Then by the coassociativity of the coaction, we have
\begin{align}\label{aux-28}
\begin{split}
& m^{\sns{-2}} \blacktriangleright\hspace{-4pt}\vartriangleleft (m^{\sns{0}}\snsb{-2})^{\pr{0}} \otimes m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-2})^{\pr{1}} \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1} \otimes m^{\sns{0}}\snsb{0} \\
& = m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1} \otimes (m^{\sns{0}}\snsb{0})^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft (m^{\sns{0}}\snsb{0})^{\sns{0}}\snsb{-1} \otimes (m^{\sns{0}}\snsb{0})^{\sns{0}}\snsb{0}.
\end{split}
\end{align}
By applying $\varepsilon_{\mathcal{F}} \otimes \mathop{\rm Id}\nolimits_{\mathcal{U}} \otimes \mathop{\rm Id}\nolimits_{\mathcal{F}} \otimes \varepsilon_{\mathcal{U}} \otimes \mathop{\rm Id}\nolimits_{M}$ on both hand sides of \eqref{aux-28}, we get
\begin{equation}
(m^{\sns{0}}\snsb{-1})^{\pr{0}} \otimes m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-1})^{\pr{1}} \otimes m^{\sns{0}}\snsb{0} = m\snsb{-1} \otimes (m\snsb{0})^{\sns{-1}} \otimes (m\snsb{0})^{\sns{0}}.
\end{equation}
Conversely, assume that \eqref{aux-27} holds for any $m \in M$. This yields
\begin{align}
\begin{split}
& m^{\sns{-2}} \otimes (m^{\sns{0}}\snsb{-1})^{\pr{0}} \otimes m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-1})^{\pr{1}} \otimes m^{\sns{0}}\snsb{0} \\
& = m^{\sns{-1}} \otimes m^{\sns{0}}\snsb{-1} \otimes (m^{\sns{0}}\snsb{0})^{\sns{-1}} \otimes (m^{\sns{0}}\snsb{0})^{\sns{0}},
\end{split}
\end{align}
which implies \eqref{aux-28} {\it i.e., }\ the coassociativity of the $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$-coaction.
\end{proof}
\begin{corollary}
Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of Lie algebras and $M$ be an AYD module over the double crossed sum $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ with locally finite action and locally conilpotent coaction. Then $M$ has a left $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$-comodule structure.
\end{corollary}
\begin{proof}
Since $M$ is a locally conilpotent left $\mathfrak{g}_1$-comodule, it has a left $U(\mathfrak{g}_1)$-comodule structure. On the other hand, being a locally finite right $\mathfrak{g}_2$-module, $M$ is a left $R(\mathfrak{g}_2)$-comodule \cite{Hoch74}.
By Proposition \ref{aux-24} we have
\begin{equation}
(m \cdot v)\snsb{-1} \otimes (m \cdot v)\snsb{0} = S(v\ps{2}) \rhd m\snsb{-1} \otimes m\snsb{0} \cdot v\ps{1},
\end{equation}
or in other words
\begin{equation}
v\ps{2} \rhd (m \cdot v\ps{1})\snsb{-1} \otimes (m \cdot v\ps{1})\snsb{0} = m\snsb{-1} \otimes m\snsb{0} \cdot v.
\end{equation}
Using the $R(\mathfrak{g}_2)$-coaction on $M$ and $R(\mathfrak{g}_2)$-coaction on $U(\mathfrak{g}_1)$, we can translate this equality into
\begin{equation}
(m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-1})^{\pr{1}})(v) (m^{\sns{0}}\snsb{-1})^{\pr{0}} \otimes m^{\sns{0}}\snsb{0} = m\snsb{-1} \otimes (m\snsb{0})^{\sns{0}}((m\snsb{0})^{\sns{-1}})(v).
\end{equation}
Finally, by the non-degenerate pairing between $U(\mathfrak{g}_2)$ and $R(\mathfrak{g}_2)$ we get
\begin{equation}
(m^{\sns{0}}\snsb{-1})^{\pr{0}} \otimes m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-1})^{\pr{1}} \otimes m^{\sns{0}}\snsb{0} = m\snsb{-1} \otimes (m\snsb{0})^{\sns{-1}} \otimes (m\snsb{0})^{\sns{0}},
\end{equation}
{\it i.e., }\ the $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$-coaction compatibility.
\end{proof}
\noindent Our next challenge is to identify the Yetter-Drinfeld modules over Lie-Hopf algebras.
\bigskip
\noindent Let us recall that a right module left comodule $M$ over a Hopf algebra $\mathcal{H}$ is called a YD module if
\begin{equation}\label{YD compatibility}
h\ps{2}(m \cdot h\ps{1})\ns{-1} \otimes (m \cdot h\ps{1})\ns{0} = m\ns{-1}h\ps{1} \otimes m\ns{0} \cdot h\ps{2}
\end{equation}
for any $h \in \mathcal{H}$ and any $m \in M$.
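\medskip
\noindent As a standard illustration of \eqref{YD compatibility} (not used in the sequel), let $\mathcal{H} = \mathbb{C}[G]$ be a group algebra and let the comodule structure be a $G$-grading $m \mapsto \deg(m) \otimes m$ on homogeneous elements. Then for a group-like $g \in G$ the condition \eqref{YD compatibility} reads
\begin{equation}
g\deg(m \cdot g) \otimes m \cdot g = \deg(m)\, g \otimes m \cdot g,
\end{equation}
that is, $\deg(m \cdot g) = g^{-1}\deg(m)\, g$.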
\begin{proposition}\label{aux-41}
Let $(\mathcal{F}, \mathcal{U})$ be a matched pair of Hopf algebras and $M$ be a right module and left comodule over $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ such that via the corresponding module and comodule structures it becomes a YD-module over ${\cal U}$. Then $M$ is a YD-module over ${\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ if and only if $M$ is a YD-module over ${\cal F}$ via the corresponding module and comodule structures, and the following conditions are satisfied
\begin{align}\label{auxy8}
&(m \cdot f)\snsb{-1} \otimes (m \cdot f)\snsb{0} = m\snsb{-1} \otimes m\snsb{0} \cdot f, \\\label{auxy9}
&m^{\sns{-1}}f\ps{1} \otimes m^{\sns{0}}f\ps{2} = m^{\sns{-1}}(m^{\sns{0}}\snsb{-1} \rhd f\ps{1}) \otimes m^{\sns{0}}\snsb{0} \cdot f\ps{2}, \\\label{auxy10}
&m^{\sns{-1}} \otimes m^{\sns{0}} \cdot u = (u\ps{1})^{\pr{1}}(u\ps{2} \rhd (m \cdot (u\ps{1})^{\pr{0}})^{\sns{-1}}) \otimes (m \cdot (u\ps{1})^{\pr{0}})^{\sns{0}}, \\\label{auxy11}
&m\snsb{-1}u^{\pr{0}} \otimes m\snsb{0} \cdot u^{\pr{1}} = m\snsb{-1}u \otimes m\snsb{0}.
\end{align}
\end{proposition}
\begin{proof}
First we assume that $M$ is a YD module over $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$. Since ${\cal F}$ is a Hopf subalgebra of ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, $M$ is a YD module over ${\cal F}$.
\medskip
\noindent Next, we prove the compatibilities \eqref{auxy8},\ldots,\eqref{auxy11}. Writing \eqref{YD compatibility} for an arbitrary $f \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$, we get
\begin{align}
\begin{split}
& (f\ps{2} \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \cdot ((m \cdot f\ps{1})^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft (m \cdot f\ps{1})^{\sns{0}}\snsb{-1}) \otimes (m \cdot f\ps{1})^{\sns{0}}\snsb{0} \\
& = (m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1}) \cdot (f\ps{1} \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes m^{\sns{0}}\snsb{0} \cdot f\ps{2}.
\end{split}
\end{align}
Using the YD condition on $\mathcal{F}$ on the left hand side, we get
\begin{align}\label{aux-33}
\begin{split}
& f\ps{2}(m \cdot f\ps{1})^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft (m \cdot f\ps{1})^{\sns{0}}\snsb{-1} \otimes (m \cdot f\ps{1})^{\sns{0}}\snsb{0} \\
& = m^{\sns{-1}}f\ps{1} \blacktriangleright\hspace{-4pt}\vartriangleleft (m^{\sns{0}}f\ps{2})\snsb{-1} \otimes (m^{\sns{0}}f\ps{2})\snsb{0} \\
& = m^{\sns{-1}}(m^{\sns{0}}\snsb{-2} \rhd f\ps{1}) \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1} \otimes m^{\sns{0}}\snsb{0} \cdot f\ps{2}.
\end{split}
\end{align}
Now we apply $\varepsilon_{\mathcal{F}} \otimes \mathop{\rm Id}\nolimits_{\mathcal{U}} \otimes \mathop{\rm Id}\nolimits_M$ to both sides of \eqref{aux-33} to get \eqref{auxy8}.
Similarly we apply $\mathop{\rm Id}\nolimits_{\mathcal{F}} \otimes \varepsilon_{\mathcal{U}} \otimes \mathop{\rm Id}\nolimits_M$ to get \eqref{auxy9}.
\medskip
\noindent By the same argument, the YD compatibility of ${\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ for an element of the form $1 \blacktriangleright\hspace{-4pt}\vartriangleleft u \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$, followed by the YD compatibility of ${\cal U}$ yields \eqref{auxy10} and \eqref{auxy11}.
\medskip
\noindent Conversely, assume that $M \in \, ^{\mathcal{F}}\mathcal{YD}_{\mathcal{F}}$ and \eqref{auxy8},\ldots,\eqref{auxy11} are satisfied. We will prove that the YD condition over $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ holds for the elements of the forms $f \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ and $1 \blacktriangleright\hspace{-4pt}\vartriangleleft u \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$. By \eqref{auxy9}, we have
\begin{align}
\begin{split}
& m^{\sns{-1}}f\ps{1} \blacktriangleright\hspace{-4pt}\vartriangleleft (m^{\sns{0}}f\ps{2})\snsb{-1} \otimes (m^{\sns{0}}f\ps{2})\snsb{0} \\
& = m^{\sns{-1}}(m^{\sns{0}}\snsb{-1} \rhd f\ps{1}) \blacktriangleright\hspace{-4pt}\vartriangleleft (m^{\sns{0}}\snsb{0} \cdot f\ps{2})\snsb{-1} \otimes (m^{\sns{0}}\snsb{0} \cdot f\ps{2})\snsb{0},
\end{split}
\end{align}
which, by using \eqref{auxy8}, implies the YD compatibility for the elements of the form $f \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$.
\medskip
\noindent Next, by \eqref{auxy10} we have
\begin{align}
\begin{split}
& (u\ps{1})^{\pr{1}}(u\ps{2} \rhd (m \cdot (u\ps{1})^{\pr{0}})^{\sns{-1}}) \blacktriangleright\hspace{-4pt}\vartriangleleft u\ps{3}(m \cdot (u\ps{1})^{\pr{0}})^{\sns{0}}\snsb{-1} \otimes (m \cdot (u\ps{1})^{\pr{0}})^{\sns{0}}\snsb{0} \\
& = m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft u\ps{2}(m^{\sns{0}} \cdot u\ps{1})\snsb{-1} \otimes (m^{\sns{0}} \cdot u\ps{1})\snsb{0}, \end{split}
\end{align}
which amounts to the YD compatibility for the elements of the form $1 \blacktriangleright\hspace{-4pt}\vartriangleleft u \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$ by using YD compatibility over $\mathcal{U}$ and \eqref{auxy11}.
\medskip
\noindent Since the YD condition is multiplicative, it is then satisfied for any $f \blacktriangleright\hspace{-4pt}\vartriangleleft u \in \mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$, and hence we have proved that $M$ is a YD module over $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$.
\end{proof}
\begin{proposition}\label{aux-46}
Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of finite dimensional Lie algebras, $M$ an AYD module over the double crossed sum $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ with locally finite action and locally conilpotent coaction. Then, by the action \eqref{aux-17} and the coaction \eqref{auxy12}, $M$ becomes a right-left YD module over $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$.
\end{proposition}
\begin{proof}
We prove the proposition by verifying the conditions of Proposition \ref{aux-41}.
Since $M$ is an AYD module over $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ with a locally conilpotent coaction, it is an AYD module over $U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$. In particular, it is a left comodule over $U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$ with the following coaction as proved in Corollary \ref{aux-58}
\begin{equation}
m \mapsto m\snsb{-1} \bowtie m\snsb{0}\sns{-1} \otimes m\snsb{0}\sns{0} \in U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2) \otimes M.
\end{equation}
By the coassociativity of this coaction, we have
\begin{equation}\label{aux-38}
m\sns{0}\snsb{-1} \otimes m\sns{-1} \otimes m\sns{0}\snsb{0} = m\snsb{-1} \otimes m\snsb{0}\sns{-1} \otimes m\snsb{0}\sns{0}.
\end{equation}
Thus, applying $\mathop{\rm Id}\nolimits_{U(\mathfrak{g}_1)} \otimes f \otimes \mathop{\rm Id}\nolimits_M$ to both sides yields \eqref{auxy8}.
\medskip
\noindent Using \eqref{auxy6} and \eqref{aux-38} we get
\begin{align}
\begin{split}
& v\ps{2}(m \cdot v\ps{1})\sns{-1} \otimes (m \cdot v\ps{1})\sns{0} = (v\ps{2} \lhd (m \cdot v\ps{1})\sns{0}\snsb{-1})(m \cdot v\ps{1})\sns{-1} \otimes (m \cdot v\ps{1})\sns{0}\snsb{0} \\
& = (v\ps{2} \lhd (m \cdot v\ps{1})\snsb{-1})(m \cdot v\ps{1})\snsb{0}\sns{-1} \otimes (m \cdot v\ps{1})\snsb{0}\sns{0}.
\end{split}
\end{align}
Then applying $f \otimes \mathop{\rm Id}\nolimits_M$ to both sides and using the non-degenerate pairing between $R(\mathfrak{g}_2)$ and $U(\mathfrak{g}_2)$, we conclude \eqref{auxy9}.
\medskip
\noindent To verify \eqref{auxy10}, we use the $U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$-module compatibility on $M$, {\it i.e., }\ for any $u \in U(\mathfrak{g}_1)$, $v \in U(\mathfrak{g}_2)$ and $m \in M$,
\begin{equation}
(m \cdot v) \cdot u = (m \cdot (v\ps{1} \rhd u\ps{1})) \cdot (v\ps{2} \lhd u\ps{2}).
\end{equation}
Using the non-degenerate pairing between $R(\mathfrak{g}_2)$ and $U(\mathfrak{g}_1)$, we rewrite this equality as
\begin{align}
\begin{split}
& m^{\sns{-1}}(v)m^{\sns{0}} \cdot u = (m \cdot (v\ps{1} \rhd u\ps{1}))^{\sns{-1}}(v\ps{2} \lhd u\ps{2})(m \cdot (v\ps{1} \rhd u\ps{1}))^{\sns{0}} \\
& = u\ps{2} \rhd (m \cdot (v\ps{1} \rhd u\ps{1}))^{\sns{-1}}(v\ps{2})(m \cdot (v\ps{1} \rhd u\ps{1}))^{\sns{0}} \\
& = (u\ps{1})^{\pr{1}}(v\ps{1})(u\ps{2} \rhd (m \cdot (u\ps{1})^{\pr{0}})^{\sns{-1}})(v\ps{2})(m \cdot (u\ps{1})^{\pr{0}})^{\sns{0}} \\
& = [(u\ps{1})^{\pr{1}}(u\ps{2} \rhd (m \cdot (u\ps{1})^{\pr{0}})^{\sns{-1}})](v)(m \cdot (u\ps{1})^{\pr{0}})^{\sns{0}},
\end{split}
\end{align}
which means \eqref{auxy10}.
\medskip
\noindent Using the $U(\mathfrak{g}_1) \bowtie U(\mathfrak{g}_2)$-coaction compatibility \eqref{aux-38}, together with \eqref{auxy4}, we have
\begin{align}
\begin{split}
& m\snsb{-1}u^{\pr{0}} \otimes m\snsb{0} \cdot u^{\pr{1}} = m\snsb{-1}u^{\pr{0}}u^{\pr{1}}(m\snsb{0}\sns{-1}) \otimes m\snsb{0}\sns{0} \\
& = m\snsb{-1}(m\snsb{0}\sns{-1} \rhd u) \otimes m\snsb{0}\sns{0} = m\sns{0}\snsb{-1}(m\sns{-1} \rhd u) \otimes m\sns{0}\snsb{0} \\
& = m\snsb{-1}u \otimes m\snsb{0},
\end{split}
\end{align}
which is \eqref{auxy11}.
\end{proof}
\noindent We are now ready to express the main result of this section.
\begin{theorem}
Let $(\mathcal{F}, \mathcal{U})$ be a matched pair of Hopf algebras such that ${\cal F}$ is commutative and ${\cal U}$ is cocommutative, and let $\Big\langle\;, \Big\rangle:\mathcal{F} \times \mathcal{V} \to \mathbb{C}$ be a non-degenerate Hopf pairing. Then $M$ is an AYD-module over ${\cal U}\bowtie{\cal V}$ if and only if $M$ is a YD-module over ${\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ such that by the corresponding module and comodule structures it is a YD-module over ${\cal U}$.
\end{theorem}
\begin{proof}
Let $M \in \, ^{\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}}\mathcal{YD}_{\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}} \, \cap \, ^{\mathcal{U}}\mathcal{YD}_{\mathcal{U}}$.
We first prove that $M \in {\cal M}_{\mathcal{U} \bowtie \mathcal{V}}$. By Proposition \ref{aux-41}, we have
\eqref{auxy10}. Evaluating both sides of this equality on an arbitrary $v \in \mathcal{V}$, we get
\begin{align}
(m \cdot v) \cdot u = (m \cdot (v\ps{1} \rhd u\ps{1})) \cdot (v\ps{2} \lhd u\ps{2}).
\end{align}
This proves that $M$ is a right module over the double crossed product $\mathcal{U} \bowtie \mathcal{V}$.
\medskip
\noindent Next, we show that $M \in \, ^{\mathcal{U} \bowtie \mathcal{V}}{\cal M}$. This time using \eqref{auxy8}
and the duality between right $\mathcal{F}$-action and left $\mathcal{V}$-coaction we get
\begin{align}
f(m\sns{-1})m\sns{0}\snsb{-1} \otimes m\sns{0}\snsb{0} = f(m\snsb{0}\sns{-1})m\snsb{-1} \otimes m\snsb{0}\sns{0}.
\end{align}
Since the pairing is non-degenerate, we conclude that $M$ is a left comodule over ${\cal U}\bowtie{\cal V}$.
\medskip
\noindent Finally, we prove that the AYD condition over $\mathcal{U} \bowtie \mathcal{V}$ is satisfied by using Corollary \ref{auxx-24}, that is, we show that \eqref{auxy13},\ldots,\eqref{auxy16} are satisfied.
\medskip
\noindent Firstly, by considering the Hopf duality between $\mathcal{F}$ and $\mathcal{V}$, the right $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$-module compatibility reads
\begin{equation}
f((m \cdot u)\sns{-1})(m \cdot u)\sns{0} = f(m\sns{-1} \lhd u\ps{1})m\sns{0} \cdot u\ps{2}.
\end{equation}
Hence \eqref{auxy13} holds.
\medskip
\noindent Secondly, by \eqref{auxy11} and the Hopf duality between ${\cal F}$ and ${\cal V}$, we get
\begin{equation}
m\snsb{-1}u^{\pr{0}}u^{\pr{1}}(m\snsb{0}\sns{-1}) \otimes m\snsb{0}\sns{0} = m\snsb{-1}(m\snsb{0}\sns{-1} \rhd u) \otimes m\snsb{0}\sns{0} = m\snsb{-1}u \otimes m\snsb{0},
\end{equation}
which immediately implies \eqref{auxy14}.
\medskip
\noindent Thirdly, evaluating the left $\mathcal{F} \blacktriangleright\hspace{-4pt}\vartriangleleft \mathcal{U}$-coaction compatibility
\begin{equation}
(m^{\sns{0}}\snsb{-1})^{\pr{0}} \otimes m^{\sns{-1}} \cdot (m^{\sns{0}}\snsb{-1})^{\pr{1}} \otimes m^{\sns{0}}\snsb{0} = m\snsb{-1} \otimes (m\snsb{0})^{\sns{-1}} \otimes (m\snsb{0})^{\sns{0}}
\end{equation}
on an arbitrary $v \in \mathcal{V}$, we get
\begin{equation}
v\ps{2} \rhd (m \cdot v\ps{1})\snsb{-1} \otimes (m \cdot v\ps{1})\snsb{0} = m\snsb{-1} \otimes m\snsb{0} \cdot v,
\end{equation}
which immediately implies \eqref{auxy15}.
\medskip
\noindent Finally, evaluating the left hand side of \eqref{auxy9} on an arbitrary $v \in \mathcal{V}$, we get
\begin{align}
\begin{split}
& LHS = f\ps{1}(v\ps{2})(m \cdot v\ps{1}) \cdot f\ps{2} = f\ps{1}(v\ps{2})f\ps{2}((m \cdot v\ps{1})\sns{-1}) (m \cdot v\ps{1})\sns{0} \\
& = f(v\ps{2}(m \cdot v\ps{1})\sns{-1}) (m \cdot v\ps{1})\sns{0} = f(m\sns{-1} \cdot v\ps{1}) m\sns{0} \cdot v\ps{2},
\end{split}
\end{align}
and the right hand side turns into
\begin{align}
\begin{split}
& RHS = (m \cdot v\ps{1})\snsb{-1} \rhd f\ps{1}(v\ps{2})f\ps{2}((m \cdot v\ps{1})\snsb{0}\sns{-1})(m \cdot v\ps{1})\snsb{0}\sns{0} \\
& = f\ps{1}(v\ps{2} \lhd (m \cdot v\ps{1})\snsb{-1})f\ps{2}((m \cdot v\ps{1})\snsb{0}\sns{-1})(m \cdot v\ps{1})\snsb{0}\sns{0} \\
& = f\ps{1}(v\ps{4} \lhd (m\sns{0} \cdot v\ps{2})\snsb{-1})f\ps{2}(S(v\ps{3})m\sns{-1}v\ps{1})(m\sns{0} \cdot v\ps{2})\snsb{0} \\
& = f([v\ps{4} \lhd (m\sns{0} \cdot v\ps{2})\snsb{-1}]S(v\ps{3})m\sns{-1}v\ps{1})(m\sns{0} \cdot v\ps{2})\snsb{0},
\end{split}
\end{align}
where on the third equality we use \eqref{auxy8}. So we get
\begin{align}
m\sns{-1} \cdot v\ps{1} \otimes m\sns{0} \cdot v\ps{2} = [v\ps{4} \lhd (m\sns{0} \cdot v\ps{2})\snsb{-1}]S(v\ps{3})m\sns{-1}v\ps{1} \otimes (m\sns{0} \cdot v\ps{2})\snsb{0}.
\end{align}
Using the cocommutativity of ${\cal V}$, we conclude \eqref{auxy16}.
\bigskip
\noindent Conversely, take $M \in \, ^{\mathcal{U} \bowtie \mathcal{V}}\mathcal{AYD}_{\mathcal{U} \bowtie \mathcal{V}}$. Then $M$ is a left comodule over ${\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ by \eqref{auxy15} and a right module over ${\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ by \eqref{auxy13}. So by Proposition \ref{aux-41} it suffices to verify \eqref{auxy8},\ldots, \eqref{auxy11}.
\medskip
\noindent Indeed, \eqref{auxy8} follows from the coaction compatibility over ${\cal U} \bowtie {\cal V}$. The condition \eqref{auxy9} is the consequence of \eqref{auxy16}. The equation \eqref{auxy10} is obtained from the module compatibility over ${\cal U} \bowtie {\cal V}$. Finally \eqref{auxy11} follows from \eqref{auxy14}.
\end{proof}
\begin{proposition}\label{aux-59}
Let $(\mathfrak{g}_1,\mathfrak{g}_2)$ be a matched pair of Lie algebras and $M$ be an AYD module over the double crossed sum $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ with locally finite action and locally conilpotent coaction. Assume also that $M$ is stable over $R(\mathfrak{g}_2)$ and $U(\mathfrak{g}_1)$. Then $M$ is stable over $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$.
\end{proposition}
\begin{proof}
For an $m \in M$, using the $U(\mathfrak{g}_1 \bowtie \mathfrak{g}_2)$-comodule compatibility \eqref{aux-38}, we get
\begin{align}
\begin{split}
& (m^{\sns{0}})\snsb{0} \cdot (m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft (m^{\sns{0}})\snsb{-1}) = ((m^{\sns{0}})\snsb{0} \cdot m^{\sns{-1}}) \cdot (m^{\sns{0}})\snsb{-1} \\
& = (m^{\sns{0}} \cdot m^{\sns{-1}})\snsb{0} \cdot (m^{\sns{0}} \cdot m^{\sns{-1}})\snsb{-1} = m\snsb{0} \cdot m\snsb{-1} = m.
\end{split}
\end{align}
Here the last two equalities use the stability of $M$ over $R(\mathfrak{g}_2)$ and over $U(\mathfrak{g}_1)$, respectively.
\end{proof}
\subsection{AYD modules over the Connes-Moscovici Hopf algebras}
In this subsection we investigate the finite dimensional SAYD modules over the Connes-Moscovici Hopf algebras ${\cal H}_n$. Let us first recall from \cite{MoscRang09} the bicrossed product decomposition of the Connes-Moscovici Hopf algebras.
\medskip
\noindent Let $\mathop{\rm Diff}\nolimits({\mathbb R}^n)$ denote the group of diffeomorphisms on ${\mathbb R}^n$. Via the splitting $\mathop{\rm Diff}\nolimits({\mathbb R}^n) = G \cdot N$, where $G$ is the group of affine transformation on ${\mathbb R}^n$ and
\begin{equation}
N = \Big\{\psi \in \mathop{\rm Diff}\nolimits({\mathbb R}^n) \; \Big| \; \psi(0) = 0, \; \psi'(0) = \mathop{\rm Id}\nolimits\Big\},
\end{equation}
we have ${\cal H}_n = {\cal F}(N) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g})$. Elements of the Hopf algebra ${\cal F}:={\cal F}(N)$ are called regular functions. They are the coefficients of the Taylor expansions at $0 \in {\mathbb R}^n$ of the elements of the group $N$. Here, $\mathfrak{g}$ is the Lie algebra of the group $G$ and ${\cal U}:=U(\mathfrak{g})$ is the universal enveloping algebra of $\mathfrak{g}$.
\medskip
\noindent On the other hand, by \cite{Fuks} the Lie algebra $\mathfrak{a}$ of formal vector fields on ${\mathbb R}^n$ admits the filtration
\begin{align}
\mathfrak{a} = \mathfrak{l}_{-1} \supseteq \mathfrak{l}_{0} \supseteq \mathfrak{l}_{1} \supseteq \ldots
\end{align}
with the bracket
\begin{align}
[\mathfrak{l}_p,\mathfrak{l}_q] \subseteq \mathfrak{l}_{p+q}.
\end{align}
Here, the subalgebra $\mathfrak{l}_k \subseteq \mathfrak{a}$, $k \geq -1$, consists of the vector fields $\sum f_i\partial/\partial x^i$ such that $f_1, \ldots, f_n$ belong to the $(k+1)$st power of the maximal ideal of the ring of formal power series. Then one immediately concludes
\begin{equation}
gl_n \cong \mathfrak{l}_0/\mathfrak{l}_1, \quad \mathfrak{l}_{-1}/\mathfrak{l}_0 \cong \mathbb{R}^n, \quad\text{and} \quad \mathfrak{g} \cong \mathfrak{l}_{-1}/\mathfrak{l}_0 \oplus \mathfrak{l}_0/\mathfrak{l}_1 \cong {gl_n}^{\rm aff}.
\end{equation}
\noindent As a result, setting $\mathfrak{n} := \mathfrak{l}_1$, the Lie algebra $\mathfrak{a}$ admits the decomposition $\mathfrak{a} = \mathfrak{g} \oplus \mathfrak{n}$, and hence we have a matched pair of Lie algebras $(\mathfrak{g},\mathfrak{n})$. The Hopf algebra ${\cal F}(N)$ is isomorphic with $R(\mathfrak{n})$ via the following non-degenerate pairing
\begin{equation}
\Big\langle \alpha^i_{ j_1,\ldots,j_p}, Z^{k_1, \ldots, k_q}_l\Big\rangle= \delta^p_q\delta^i_l\delta_{j_1,\ldots,j_p}^{k_1, \ldots, k_q}.
\end{equation}
Here
\begin{equation}
\alpha^i_{ j_1,\ldots,j_p}(\psi)= {\left.\frac{\partial^p}{\partial x^{j_p}\ldots \partial x^{j_1}}\right|}_{x=0}\psi^i(x),
\end{equation}
and
\begin{equation}
Z^{k_1, \ldots, k_q}_l= x^{k_1}\ldots x^{k_q}\frac{\partial}{\partial x^l}.
\end{equation}
We refer the reader to \cite{ConnMosc98} for more details on this duality.
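\medskip
\noindent To illustrate the pairing in the simplest case $n=1$ (a routine check of the above formulas), the regular function $\alpha^1_{1,1}(\psi) = \psi''(0)$ pairs to $1$ with $Z^{1,1}_1 = x^2\,\partial/\partial x$, while it pairs to $0$ with $Z^1_1 = x\,\partial/\partial x$, since in the latter case $p = 2 \neq 1 = q$.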
\medskip
\noindent Let $\delta$ be the trace of the adjoint representation of $\mathfrak{g}$ on itself. Then it is known that ${\mathbb C}_\delta$ is a SAYD module over the Hopf algebra ${\cal H}_n$ \cite{ConnMosc98}.
\begin{lemma}\label{lemma-(co)action-trivial}
For any YD module over ${\cal H}_n$, the action of ${\cal U}$ and the coaction of ${\cal F}$ are trivial.
\end{lemma}
\begin{proof}
Let $M$ be a finite dimensional YD module over $\mathcal{H}_n = {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$.
One uses the same argument as in Proposition \ref{aux-41} to show that $M$ is a module over $\mathfrak{a}$. However we know that $\mathfrak{a}$ has no nontrivial finite dimensional representation by \cite{Fuks}. We conclude that the ${\cal U}$ action and the ${\cal F}$-coaction on $M$ are trivial.
\end{proof}
\noindent Let us introduce the isotropy subalgebra $\mathfrak{g}_0\subset \mathfrak{g}$ by
\begin{align}
\mathfrak{g}_0 = \Big\{X \in \mathfrak{g}_1\,\Big|\, Y \rhd X = 0, \forall Y \in \mathfrak{g}_2\Big\} \subseteq \mathfrak{g}_1.
\end{align}
By the construction of $\mathfrak{a}$ it is obvious that $\mathfrak{g}_0$ is generated by $Z^i_j$. So $\mathfrak{g}_0\cong gl_n$.
By the definition of the coaction $\blacktriangledown_{\cal U}:{\cal U}\rightarrow {\cal U}\otimes {\cal F}$ we see that $U(\mathfrak{g}_0)={\cal U}^{co{\cal F}}$.
\begin{lemma}\label{lemma-coaction-lands}
For any finite dimensional YD module $M$ over ${\cal H}_n$ the coaction $$\blacktriangledown:M\rightarrow {\cal H}_n\otimes M$$ in fact lands in $U(\mathfrak{g}_0)\otimes M.$
\end{lemma}
\begin{proof}
By Lemma \ref{lemma-(co)action-trivial} we know that ${\cal U}$-action and ${\cal F}$-coaction
on $M$ are trivial. Hence, the left coaction $M \to {\cal F}\blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U} \otimes M$ becomes $m \mapsto 1 \blacktriangleright\hspace{-4pt}\vartriangleleft m\snsb{-1} \otimes m\snsb{0}$. The coassociativity of the coaction
\begin{equation}
1 \blacktriangleright\hspace{-4pt}\vartriangleleft m\snsb{-2} \otimes 1 \blacktriangleright\hspace{-4pt}\vartriangleleft m\snsb{-1} \otimes m\snsb{0} = 1 \blacktriangleright\hspace{-4pt}\vartriangleleft (m\snsb{-2})^{\pr{0}} \otimes (m\snsb{-2})^{\pr{1}} \blacktriangleright\hspace{-4pt}\vartriangleleft m\snsb{-1} \otimes m\snsb{0}
\end{equation}
implies that
\begin{equation}
m \mapsto m\snsb{-1} \otimes m\snsb{0} \in {\cal U}^{co{\cal F}} \otimes M =U(\mathfrak{g}_0) \otimes M.
\end{equation}
\end{proof}
\begin{lemma}\label{lemma-coaction-trivial}
Let $M$ be a finite dimensional YD module over the Hopf algebra ${\cal H}_n$ then the coaction of ${\cal H}_n$ on $M$ is trivial.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma-coaction-lands} we know that the coaction of ${\cal H}_n$ on $M$ lands in $U(\mathfrak{g}_0)\otimes M$. Since $U(\mathfrak{g}_0)$ is a Hopf subalgebra of ${\cal H}_n$, it is obvious that $M$ is an AYD module over $U(\mathfrak{g}_0)$. Since $\mathfrak{g}_0$ is finite dimensional, $M$ becomes an AYD module over $\mathfrak{g}_0$.
\medskip
\noindent Let us express the $\mathfrak{g}_0$-coaction for an arbitrary basis element $m^i \in M$ as
\begin{align}
m^i \mapsto m^i\nsb{-1} \otimes m^i\nsb{0} = \alpha^{ip}_{kq}Z^q_p \otimes m^k \in \mathfrak{g}_0 \otimes M.
\end{align}
Then AYD condition over $\mathfrak{g}_0$ becomes
\begin{align}
\alpha^{ip}_{kq}[Z^q_p,Z] \otimes m^k = 0.
\end{align}
Choosing an arbitrary $Z = Z^{p_0}_{q_0} \in gl_n = \mathfrak{g}_0$, we get
\begin{align}
\alpha^{ip_0}_{kq}Z^q_{q_0} \otimes m^k - \alpha^{ip}_{kq_0}Z^{p_0}_p \otimes m^k = 0,
\end{align}
or equivalently
\begin{align}
\Big(\alpha^{ip_0}_{kq_0}(Z^{q_0}_{q_0} - Z^{p_0}_{p_0}) + \sum_{q \neq q_0}\alpha^{ip_0}_{kq}Z^q_{q_0} - \sum_{p \neq p_0}\alpha^{ip}_{kq_0}Z^{p_0}_p\Big) \otimes m^k = 0.
\end{align}
Thus for $n \geq 2$ we have proved that the $\mathfrak{g}_0$-coaction is trivial. Hence its lift, the $U(\mathfrak{g}_0)$-coaction we started with, is trivial as well. This proves that the ${\cal U}$-coaction, and hence the ${\cal H}_n$-coaction, on $M$ is trivial.
\medskip
\noindent On the other hand, for $n = 1$, the YD condition for $X \in {\cal H}_1$ reads, in view of the triviality of the action of ${gl_1}^{\rm aff}$,
\begin{equation}
Xm\pr{-1} \otimes m\pr{0} + Z(m \cdot \delta_1)\pr{-1} \otimes (m \cdot \delta_1)\pr{0} = m\pr{-1}X \otimes m\pr{0}.
\end{equation}
By Lemma \ref{lemma-coaction-lands} we know that the coaction lands in $U({gl_1}^{\rm aff})$. Together with the relation $[Z,X] = X$ this forces the ${\cal H}_1$-coaction (and also the action) to be trivial.
\end{proof}
\begin{lemma}\label{lemma-action-trivial}
Let $M$ be a finite dimensional YD module over the Hopf algebra ${\cal H}_n$. Then the action of ${\cal H}_n$ on $M$ is trivial.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma-(co)action-trivial} we know that the action of ${\cal H}_n$ on $M$ is determined by the action of ${\cal F}$ on $M$. So it suffices to prove that this action is trivial.
\medskip
\noindent For an arbitrary $m \in M$ and $1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k \in \mathcal{H}_n$, we write the YD compatibility. First we calculate
\begin{align}
\begin{split}
& \Delta^2(1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k) = \\
& (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k) + (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \\
& + (1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) + (\delta^p_{qk} \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft Y_p^q) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \\
& + (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (\delta^p_{qk} \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft Y_p^q) + (\delta^p_{qk} \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes (1 \blacktriangleright\hspace{-4pt}\vartriangleleft Y_p^q).
\end{split}
\end{align}
Since, by Lemma \ref{lemma-coaction-trivial}, the coaction of ${\cal H}_n$ on $M$ is trivial, the AYD condition can be written as
\begin{align}
\begin{split}
& (1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \otimes m \cdot X_k = S(1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k) \otimes m + 1 \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \otimes m \cdot X_k + 1 \blacktriangleright\hspace{-4pt}\vartriangleleft X_k \otimes m + \\
& \delta^p_{qk} \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \otimes m \cdot Y_p^q - 1 \blacktriangleright\hspace{-4pt}\vartriangleleft Y_p^q \otimes m \cdot \delta^p_{qk} - \delta^p_{qk} \blacktriangleright\hspace{-4pt}\vartriangleleft Y_p^q \otimes m = \\
& \delta^p_{qk} \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \otimes m \cdot Y_p^q + Y_i^j \rhd \delta^i_{jk} \blacktriangleright\hspace{-4pt}\vartriangleleft 1 \otimes m - 1 \blacktriangleright\hspace{-4pt}\vartriangleleft Y_p^q \otimes m \cdot \delta^p_{qk}.
\end{split}
\end{align}
Therefore,
\begin{align}
m \cdot \delta^p_{qk} = 0.
\end{align}
Finally, by the module compatibility on a bicrossed product ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, we get
\begin{align}
(m \cdot X_l) \cdot \delta^p_{qk} = m \cdot (X_l \rhd \delta^p_{qk}) + (m \cdot \delta^p_{qk}) \cdot X_l,
\end{align}
which in turn, using once more the triviality of the $U(\mathfrak{g}_1)$-action on $M$, implies
\begin{align}
m \cdot \delta^p_{qkl} = 0.
\end{align}
Similarly we have
\begin{align}
m \cdot \delta^p_{qkl_1 \ldots l_s} = 0, \quad \forall s
\end{align}
This proves that the ${\cal F}$-action, and a posteriori the ${\cal H}_n$-action, on $M$ is trivial.
\end{proof}
\noindent Now we prove the main result of this section.
\begin{theorem}
The only finite dimensional AYD module over the Connes-Moscovici Hopf algebra $\mathcal{H}_n$ is ${\mathbb C}_\delta$.
\end{theorem}
\begin{proof}
By Lemma \ref{lemma-action-trivial} and Lemma \ref{lemma-coaction-trivial} we know that the only finite dimensional YD module on ${\cal H}_n$ is the trivial one. On the other hand, by the result of M. Staic in \cite{Stai} we know that the category of AYD modules and the category of YD modules over a Hopf algebra $H$ are equivalent provided $H$ has a modular pair in involution $(\theta,\sigma)$. In fact the equivalence functor between these two categories are simply given by
\begin{equation}
^H\mathcal{YD}_H \ni M \longmapsto ~ {^\sigma}M_\theta:=M\otimes\;^\sigma{\mathbb C}_\theta\in\;\; ^H\mathcal{AYD}_H.
\end{equation}
Since by the result of Connes-Moscovici in \cite{ConnMosc98} the Hopf algebra ${\cal H}_n$ admits a modular pair in involution $(\delta,1)$, we conclude that the only finite dimensional AYD module on ${\cal H}_n$ is ${\mathbb C}_\delta$.
\end{proof}
\section{Hopf-cyclic cohomology with coefficients}
Thanks to the results in the second section, we know all SAYD modules over a Lie-Hopf algebra $(R(\mathfrak{g}_2),\mathfrak{g}_1)$ in terms of AYD modules over the ambient Lie algebra $\mathfrak{g}_1\bowtie\mathfrak{g}_2$. The next natural question is the Hopf-cyclic cohomology of the bicrossed product Hopf algebra with coefficients in such a module $^\sigma M_\delta$, where $(\delta,\sigma)$ is the natural modular pair in involution associated to $(\mathfrak{g}_1,\mathfrak{g}_2)$ and $M$ is a SAYD module over $\mathfrak{g}_1\bowtie\mathfrak{g}_2$. To answer this question we need to prove a van Est type theorem relating the Hopf-cyclic complex of the Hopf algebra $R(\mathfrak{g}_2)\blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$ with coefficients in $^\sigma M_\delta$ and the relative perturbed Koszul complex of $\mathfrak{g}_1\bowtie\mathfrak{g}_2$ with coefficients in $M$ introduced in \cite{RangSutl-II}. In fact, we observe that our strategy in \cite{RangSutl} can be improved to cover all cases, not only the induced coefficients introduced in \cite{RangSutl}. The main obstacle here is the $R(\mathfrak{g}_2)$-action and the $U(\mathfrak{g}_1)$-coaction, which prevent two of the (co)boundary maps from being trivial: the Hochschild coboundary map of $U(\mathfrak{g}_1)$ and the Connes boundary map of $R(\mathfrak{g}_2)$. We observe that the filtration on $^\sigma M_\delta$ originally defined by Jara-Stefan in \cite{JaraStef} is extremely helpful: on the first page of the spectral sequence associated to this filtration these two (co)boundaries vanish and the situation descends to the case of \cite{RangSutl}.
\subsection{Relative Lie algebra cohomology and cyclic cohomology of Hopf algebras}
For a Lie subalgebra $\mathfrak{h}\subseteq \mathfrak{g}$ and a right $\mathfrak{g}$-module $M$ we define the relative cochains by
\begin{equation}
C^q(\mathfrak{g},\mathfrak{h},M)=\Big\{\alpha \in C^q(\mathfrak{g},M)=\wedge^q \mathfrak{g}^\ast \otimes M \Big|\; \iota(X) \alpha = {\cal L}_X(\alpha) = 0, \quad X\in \mathfrak{h}\Big\},
\end{equation}
where
\begin{align}
&\iota(X)(\alpha)(X_1,\ldots,X_{q})=\alpha(X,X_1,\ldots,X_q), \\
&{\cal L}_X(\alpha)(X_1,\ldots,X_{q})=\\\notag
&\sum_i(-1)^i\alpha([X,X_i], X_1,\ldots, \widehat{X}_i,\ldots,X_q)+\alpha(X_1,\ldots,X_q)X.
\end{align}
We can identify $C^q(\mathfrak{g},\mathfrak{h},M)$ with $\mathop{\rm Hom}\nolimits_{\mathfrak{h}}(\wedge^q(\mathfrak{g}/\mathfrak{h}),M)$ which is $(\wedge^q(\mathfrak{g}/\mathfrak{h})^\ast \otimes M)^\mathfrak{h}$, where the action of $\mathfrak{h}$ on $\mathfrak{g}/\mathfrak{h}$ is induced by the adjoint action of $\mathfrak{h}$ on $\mathfrak{g}$.
\medskip
\noindent It is checked in \cite{ChevEile} that the Chevalley-Eilenberg coboundary $d_{\rm CE}: C^q(\mathfrak{g},M) \rightarrow C^{q+1}(\mathfrak{g},M)$
\begin{align}
\begin{split}
& d_{\rm CE}(\alpha)(X_0, \ldots,X_q)=\sum_{i<j} (-1)^{i+j}\alpha([X_i,X_j], X_0\ldots \widehat{X}_i, \ldots, \widehat{X}_j, \ldots, X_q)+\\
& ~~~~~~~~~~~~~~~~~~~~~~~\sum_{i}(-1)^{i+1}\alpha(X_0,\ldots,\widehat{X}_i,\ldots X_q)X_i.
\end{split}
\end{align}
is well defined on $C^{\bullet}(\mathfrak{g},\mathfrak{h},M)$. We denote the cohomology of the complex $(C^\bullet(\mathfrak{g},\mathfrak{h},M),d_{\rm CE})$ by $H^\bullet(\mathfrak{g},\mathfrak{h},M)$ and refer to it as the relative Lie algebra cohomology of $\mathfrak{h}\subseteq \mathfrak{g}$ with coefficients in $M$.
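\medskip
\noindent For orientation, let us record $d_{\rm CE}$ in the lowest degrees, as it follows directly from the formula above: for a $0$-cochain $m \in M$ and a $1$-cochain $\alpha \in C^1(\mathfrak{g},M)$,
\begin{equation}
d_{\rm CE}(m)(X_0) = -\, m \cdot X_0, \qquad d_{\rm CE}(\alpha)(X_0,X_1) = -\,\alpha([X_0,X_1]) - \alpha(X_1) \cdot X_0 + \alpha(X_0) \cdot X_1.
\end{equation}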
\medskip
\noindent Next, we recall the perturbed Koszul complex $W(\mathfrak{g}, M)$ from \cite{RangSutl-II}. In what follows, $\{X_k\}$ denotes a basis of $\mathfrak{g}$ and $\{\theta^k\}$ the dual basis of $\mathfrak{g}^\ast$. Let $M$ be a right $\mathfrak{g}$-module and a right $S(\mathfrak{g}^\ast)$-module satisfying
\begin{equation}
(m \cdot X_j) \cdot \theta^t = m \cdot (X_j \rhd \theta^t) + (m \cdot \theta^t) \cdot X_j.
\end{equation}
Then $M$ is a module over the semi direct sum Lie algebra $\widetilde{\mathfrak{g}} = \mathfrak{g}^\ast >\hspace{-4pt}\vartriangleleft \mathfrak{g}$.
\medskip
\noindent Let $M$ be a module over the Lie algebra $\widetilde\mathfrak{g}$. Then
\begin{align}
&\text{$M$ is called unimodular stable if }\qquad \sum_k (m \cdot X_k) \cdot \theta^k = 0 , \\\notag
&\text{$M$ is called stable if }\qquad \sum_k (m \cdot \theta^k) \cdot X_k = 0.
\end{align}
By \cite[Proposition 4.3]{RangSutl-II}, if $M$ is unimodular stable, then $M_{\beta}:= M\otimes{\mathbb C}_{\beta}$ is stable over $\mathfrak{g}$. Here $\beta$ is the trace of the adjoint representation of the Lie algebra $\mathfrak{g}$ on itself.
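\medskip
\noindent A degenerate but clarifying special case: if the $S(\mathfrak{g}^\ast)$-action on $M$ is trivial, {\it i.e., }\ $m \cdot \theta^k = 0$ for all $k$ (the case of a trivial coaction), then both conditions above hold automatically and the Koszul differential $d_{\rm K}$ introduced below vanishes.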
\medskip
\noindent For a unimodular stable right $\widetilde\mathfrak{g}$-module $M$, the graded space $W^n(\mathfrak{g},M) := \wedge^n \mathfrak{g}^* \otimes M$, $n \geq 0$, becomes a mixed complex with Chevalley-Eilenberg coboundary $$d_{\rm CE}: W^n(\mathfrak{g},M) \to W^{n+1}(\mathfrak{g},M)$$ and
the Kozsul differential
\begin{align}
\begin{split}
& d_{\rm K}:W^n(\mathfrak{g},M) \to W^{n-1}(\mathfrak{g},M) \\
& \alpha \otimes m \mapsto \sum_i \iota_{X_i}(\alpha) \otimes m \lhd \theta^i.
\end{split}
\end{align}
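In the lowest degrees this reads as follows (here $\{X_i\}$ is a basis of $\mathfrak{g}$ and $\{\theta^i\}$ its dual basis, as the notation suggests): $d_{\rm K}$ vanishes on $W^0(\mathfrak{g},M) = M$, while for $\alpha \otimes m \in W^1(\mathfrak{g},M) = \mathfrak{g}^\ast \otimes M$,
\begin{equation}
d_{\rm K}(\alpha \otimes m) = \sum_i \alpha(X_i)\, m \lhd \theta^i \in M.
\end{equation}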
By \cite[Proposition 5.13]{RangSutl-II}, a unimodular stable right $\widetilde\mathfrak{g}$-module is a right-left unimodular stable AYD module, unimodular SAYD in short, over the Lie algebra $\mathfrak{g}$.
\medskip
\noindent Finally we introduce the relative perturbed Koszul complex,
\begin{equation}
W(\mathfrak{g}, \mathfrak{h}, M) = \Big\{f \in W(\mathfrak{g}, M)\, \Big|\, \iota(Y)f = 0, \iota(Y)(d_{\rm CE}f) = 0 , \forall Y \in \mathfrak{h}\Big\}.
\end{equation}
We have the following result.
\begin{lemma}\label{lemma-relative-Koszul}
$d_{\rm K}(W(\mathfrak{g}, \mathfrak{h}, M)) \subseteq W(\mathfrak{g}, \mathfrak{h}, M)$.
\end{lemma}
\begin{proof}
For any $\alpha \otimes m \in W^{n+1}(\mathfrak{g}, \mathfrak{h}, M)$ and any $Y \in \mathfrak{h}$,
\begin{align}
\begin{split}
& \iota(Y)(d_{\rm K}(\alpha \otimes m)) = \iota(Y)((-1)^n\iota(m\nsb{-1})\alpha \otimes m\nsb{0}) = (-1)^n\iota(Y)\iota(m\nsb{-1})\alpha \otimes m\nsb{0} \\
& = (-1)^{n-1}\iota(m\nsb{-1})\iota(Y)\alpha \otimes m\nsb{0} = d_{\rm K}(\iota(Y)\alpha \otimes m) = 0
\end{split}
\end{align}
Similarly, using $d_{\rm CE} \circ d_{\rm K} + d_{\rm K} \circ d_{\rm CE} = 0$,
\begin{align}
\iota(Y)(d_{\rm CE}(d_{\rm K}(\alpha \otimes m))) = - \iota(Y)(d_{\rm K} \circ d_{\rm CE}(\alpha \otimes m)) = - d_{\rm K}(\iota(Y)d_{\rm CE}(\alpha \otimes m)) = 0.
\end{align}
\end{proof}
\begin{definition}
Let $\mathfrak{g}$ be a Lie algebra, $\mathfrak{h} \subseteq \mathfrak{g}$ be a Lie subalgebra and $M$ be a unimodular SAYD module over $\mathfrak{g}$. We call the cohomology of the mixed subcomplex $(W(\mathfrak{g}, \mathfrak{h}, M), d_{\rm CE} + d_{\rm K})$ the relative periodic cyclic cohomology of the Lie algebra $\mathfrak{g}$ relative to the Lie subalgebra $\mathfrak{h}$ with coefficients in the unimodular stable right $\widetilde\mathfrak{g}$-module $M$. We use the notation $\widetilde{HP}^{\bullet}(\mathfrak{g},\mathfrak{h},M)$.
\end{definition}
\noindent In the case of a trivial Lie algebra coaction, this cohomology reduces to the relative Lie algebra cohomology.
\medskip
\noindent We conclude this subsection by a brief account of cyclic cohomology of Hopf algebras. Let $M$ be a right-left SAYD module over a Hopf algebra ${\cal H}$. Let
\begin{equation}
C^q({\cal H},M):= M\otimes {\cal H}^{\otimes q}, \quad q\ge 0.
\end{equation}
\noindent We recall the following operators on $C^{\bullet}({\cal H},M)$
\begin{align*}
&\text{face operators} \quad\partial_i: C^q({\cal H},M)\rightarrow C^{q+1}({\cal H},M), && 0\le i\le q+1\\
&\text{degeneracy operators } \quad\sigma_j: C^q({\cal H},M)\rightarrow C^{q-1}({\cal H},M),&& \quad 0\le j\le q-1\\
&\text{cyclic operators} \quad\tau: C^q({\cal H},M)\rightarrow C^{q}({\cal H},M),&&
\end{align*}
by
\begin{align}
\begin{split}
&\partial_0(m\otimes h^1\ot\dots\ot h^q)=m\otimes 1\otimes h^1\ot\dots\ot h^q,\\
&\partial_i(m\otimes h^1\ot\dots\ot h^q)=m\otimes h^1\ot\dots\ot h^i\ps{1}\otimes h^i\ps{2}\ot\dots\ot h^q,\\
&\partial_{q+1}(m\otimes h^1\ot\dots\ot h^q)=m\pr{0}\otimes h^1\ot\dots\ot h^q\otimes m\pr{-1},\\
&\sigma_j (m\otimes h^1\ot\dots\ot h^q)= (m\otimes h^1\ot\dots\ot \varepsilon(h^{j+1})\ot\dots\ot h^q),\\
&\tau(m\otimes h^1\ot\dots\ot h^q)=m\pr{0}h^1\ps{1}\otimes S(h^1\ps{2})\cdot(h^2\ot\dots\ot h^q\otimes m\pr{-1}),
\end{split}
\end{align}
where ${\cal H}$ acts on ${\cal H}^{\otimes q}$ diagonally.
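\medskip
\noindent For instance, in degree $q=1$ the cyclic operator reduces to
\begin{equation}
\tau(m \otimes h) = m\pr{0}h\ps{1} \otimes S(h\ps{2})\, m\pr{-1},
\end{equation}
the diagonal action on a single tensor factor being just multiplication in ${\cal H}$.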
\medskip
\noindent The graded module $C({\cal H},M)$ endowed with the above operators is then a cocyclic module \cite{HajaKhalRangSomm04-II}, which means that $\partial_i,$ $\sigma_j$ and $\tau$ satisfy the following identities
\begin{eqnarray}
\begin{split}
& \partial_{j} \partial_{i} = \partial_{i} \partial_{j-1}, \hspace{35 pt} \text{ if} \quad\quad i <j,\\
& \sigma_{j} \sigma_{i} = \sigma_{i} \sigma_{j+1}, \hspace{30 pt} \text{ if} \quad\quad i \leq j,\\
&\sigma_{j} \partial_{i} = \label{rel1}
\begin{cases}
\partial_{i} \sigma_{j-1}, \quad
&\text{if} \hspace{18 pt}\quad\text{$i<j$}\\
\text{Id} \quad\quad\quad
&\text{if}\hspace{17 pt} \quad \text{$i=j$ or $i=j+1$}\\
\partial_{i-1} \sigma_{j} \quad
&\text{ if} \hspace{16 pt}\quad \text{$i>j+1$},\\
\end{cases}\\
&\tau\partial_{i}=\partial_{i-1} \tau, \hspace{43 pt} 1\le i\le q\\
&\tau \partial_{0} = \partial_{q+1}, \hspace{43 pt} \tau \sigma_{i} = \sigma _{i-1} \tau, \hspace{33 pt} 1\le i\le q\\ \label{rel2}
&\tau \sigma_{0} = \sigma_{q-1} \tau^2, \hspace{43 pt} \tau^{q+1} = \mathop{\rm Id}\nolimits.
\end{split}
\end{eqnarray}
\noindent One uses the face operators to define the Hochschild coboundary
\begin{align}
\begin{split}
&b: C^{q}({\cal H},M)\rightarrow C^{q+1}({\cal H},M), \qquad \text{by}\qquad b:=\sum_{i=0}^{q+1}(-1)^i\partial_i.
\end{split}
\end{align}
It is known that $b^2=0$. As a result, one obtains the Hochschild complex of the coalgebra ${\cal H}$ with coefficients in the bicomodule $M$. Here, the right ${\cal H}$-comodule structure on $M$ is defined trivially. The cohomology of $(C^\bullet({\cal H},M),b)$ is denoted by $H^\bullet_{\rm coalg}({\cal H},M)$.
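\medskip
\noindent In the lowest degree, for example, the above operators give
\begin{equation}
b(m) = \partial_0(m) - \partial_1(m) = m \otimes 1 - m\pr{0} \otimes m\pr{-1}, \qquad m \in C^0({\cal H},M) = M.
\end{equation}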
\medskip
\noindent One uses the rest of the operators to define the Connes boundary operator,
\begin{align}
\begin{split}
&B: C^{q}({\cal H},M)\rightarrow C^{q-1}({\cal H},M), \qquad \text{by}\qquad B:=\left(\sum_{i=0}^{q}(-1)^{qi}\tau^{i}\right) \sigma_{q-1}\tau.
\end{split}
\end{align}
\noindent It is shown in \cite{Conn83} that for any cocyclic module we have $b^2=B^2=(b+B)^2=0$. As a result, one defines the cyclic cohomology of ${\cal H}$ with coefficients in the SAYD module $M$, which is denoted by $HC^{\bullet}({\cal H},M)$, as the total cohomology of the bicomplex
\begin{align}
C^{p,q}({\cal H},M)= \left\{ \begin{matrix} M\otimes {\cal H}^{\otimes q-p},&\quad\text{if}\quad 0\le p\le q, \\
&\\
0, & \text{otherwise.}
\end{matrix}\right.
\end{align}
\noindent One also defines the periodic cyclic cohomology of ${\cal H}$ with coefficients in $M$, which is denoted by $HP^\ast({\cal H},M)$, as the total cohomology of direct sum total of the bicomplex
\begin{align}
C^{p,q}({\cal H},M)= \left\{ \begin{matrix} M\otimes {\cal H}^{\otimes q-p},&\quad\text{if}\quad p\le q, \\
&\\
0, & \text{otherwise.}
\end{matrix}\right.
\end{align}
\noindent It can be seen that the periodic cyclic complex, and hence the cohomology, is $\mathbb{Z}_2$-graded.
\subsection{Hopf-cyclic cohomology of Lie-Hopf algebras}
Our first task in this subsection is to calculate the periodic cyclic cohomology of $R(\mathfrak{g})$ with coefficients in a general SAYD module. This will generalize our result in \cite[ Theorem 4.7]{RangSutl}, where the coefficients were the induced ones, {\it i.e., }\ those SAYD modules induced by a module over $\mathfrak{g}$.
\medskip
\noindent In the second subsubsection we prove the main result of this paper. Roughly speaking, we identify the Hopf cyclic cohomology of a bicrossed product Hopf algebra, associated with a Lie algebra decomposition, with the Lie algebra cohomology of the ambient Lie algebra. The novelty here is the fact that we are able to prove such a van Est type theorem with the most general coefficients.
\subsubsection{Hopf algebra of representative functions}
Let $M$ be a locally finite $\mathfrak{g}$-module and locally conilpotent $\mathfrak{g}$-comodule. We first define a left $R(\mathfrak{g})$-coaction on $M$. It is known by \cite{Hoch74} (see also \cite{RangSutl}) that the linear map
\begin{equation}
M \to U(\mathfrak{g})^* \otimes M , \quad m \mapsto m^{\sns{-1}} \otimes m^{\sns{0}}
\end{equation}
determined by the rule $m^{\sns{-1}}(u)m^{\sns{0}} = m \cdot u$, defines a left $R(\mathfrak{g})$-comodule structure
\begin{align}
\begin{split}
& \blacktriangledown: M \to R(\mathfrak{g}) \otimes M \\
& m \mapsto m^{\sns{-1}} \otimes m^{\sns{0}}.
\end{split}
\end{align}
Then, using the left $U(\mathfrak{g})$-comodule structure on $M$, we define the right $R(\mathfrak{g})$-module structure
\begin{align}
\begin{split}
& M \otimes R(\mathfrak{g}) \to M \\
& m \otimes f \mapsto m \cdot f := f(m\snsb{-1})m\snsb{0}.
\end{split}
\end{align}
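With these structures, the stability of $M$ over $R(\mathfrak{g})$, an assumption that appears in Theorem \ref{g-R(g)spectral sequence} below, reads explicitly
\begin{equation}
m^{\sns{0}} \cdot m^{\sns{-1}} = m^{\sns{-1}}\big((m^{\sns{0}})\snsb{-1}\big)\, (m^{\sns{0}})\snsb{0} = m, \qquad m \in M.
\end{equation}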
\begin{proposition}\label{proposition-AYD-R(g)}
Let $M$ be locally finite as a $\mathfrak{g}$-module and locally conilpotent as a $\mathfrak{g}$-comodule. If $M$ is an AYD over $\mathfrak{g}$, then it is an AYD over $R(\mathfrak{g})$.
\end{proposition}
\begin{proof}
Since $M$ is AYD module over the Lie algebra $\mathfrak{g}$ with a locally conilpotent $\mathfrak{g}$-coaction, it is an AYD over $U(\mathfrak{g})$.
\medskip
\noindent Let $m \in M$, $f \in R(\mathfrak{g})$ and $u \in U(\mathfrak{g})$. On one side of the AYD condition we have
\begin{align}
\blacktriangledown_{R(\mathfrak{g})}(m \cdot f)(u) = (m \cdot f) \cdot u = f(m\snsb{-1})m\snsb{0} \cdot u,
\end{align}
and on the other hand,
\begin{align}
\begin{split}
& (S(f\ps{3})m^{\sns{-1}}f\ps{1})(u)m^{\sns{0}} \cdot f\ps{2} = \\
& S(f\ps{3})(u\ps{1})m^{\sns{-1}}(u\ps{2})f\ps{1}(u\ps{3})f\ps{2}((m^{\sns{0}})\snsb{-1})(m^{\sns{0}})\snsb{0} = \\
& f\ps{2}(S(u\ps{1}))m^{\sns{-1}}(u\ps{2})f\ps{1}(u\ps{3}(m^{\sns{0}})\snsb{-1})(m^{\sns{0}})\snsb{0} = \\
& m^{\sns{-1}}(u\ps{2})f(u\ps{3}(m^{\sns{0}})\snsb{-1}S(u\ps{1}))(m^{\sns{0}})\snsb{0} = \\
& f(u\ps{3}(m^{\sns{-1}}(u\ps{2})m^{\sns{0}})\snsb{-1}S(u\ps{1}))(m^{\sns{-1}}(u\ps{2})m^{\sns{0}})\snsb{0} = \\
& f(u\ps{3}(m \cdot u\ps{2})\snsb{-1}S(u\ps{1}))(m \cdot u\ps{2})\snsb{0} = \\
& f(u\ps{3}S(u\ps{2}\ps{3})m\snsb{-1}u\ps{2}\ps{1}S(u\ps{1}))m\snsb{0} \cdot u\ps{2}\ps{2} = \\
& f(u\ps{5}S(u\ps{4})m\snsb{-1}u\ps{2}S(u\ps{1}))m\snsb{0} \cdot u\ps{3} = \\
& f(m\snsb{-1})m\snsb{0} \cdot u,
\end{split}
\end{align}
where we used the AYD condition on $U(\mathfrak{g})$ on the sixth equality. This proves that $M$ is an AYD module over $R(\mathfrak{g})$.
\end{proof}
\begin{theorem}\label{g-R(g)spectral sequence}
Let $\mathfrak{g}$ be a Lie algebra and $\mathfrak{g} = \mathfrak{h} \ltimes \mathfrak{l}$ be a Levi decomposition. Let $M$ be a unimodular SAYD module over $\mathfrak{g}$ which is locally finite as a $\mathfrak{g}$-module and locally conilpotent as a $\mathfrak{g}$-comodule. Assume also that $M$ is stable over $R(\mathfrak{g})$. Then the periodic cyclic cohomology of $\mathfrak{g}$ relative to the subalgebra $\mathfrak{h} \subseteq \mathfrak{g}$ with coefficients in $M$ is the same as the periodic cyclic cohomology of $R(\mathfrak{g})$ with coefficients in $M$. In short,
\begin{align}
\widetilde{HP}(\mathfrak{g}, \mathfrak{h}, M) \cong HP(R(\mathfrak{g}), M)
\end{align}
\end{theorem}
\begin{proof}
Since $M$ is a unimodular stable AYD module over $\mathfrak{g}$, by Lemma \ref{lemma-relative-Koszul} the relative perturbed Koszul complex $(W(\mathfrak{g}, \mathfrak{h}, M), d_{\rm CE} + d_{\rm K})$ is well defined. On the other hand, since $M$ is locally finite as a $\mathfrak{g}$-module and locally conilpotent as a $\mathfrak{g}$-comodule, it is an AYD module over $R(\mathfrak{g})$ by Proposition \ref{proposition-AYD-R(g)}. Together with the assumption that $M$ is stable over $R(\mathfrak{g})$, the Hopf-cyclic complex $(C(R(\mathfrak{g}),M), b + B)$ is well defined.
\medskip
\noindent Since $M$ is a unimodular SAYD over $\mathfrak{g}$, $M_{\beta}:= M\otimes{\mathbb C}_{\beta}$ is SAYD over $\mathfrak{g}$ by \cite[Proposition 4.3]{RangSutl-II}. Here $\beta$ is the trace of the adjoint representation of the Lie algebra $\mathfrak{g}$ on itself. Therefore, by \cite[Lemma 6.2]{JaraStef} we have the filtration $M = \cup_{p \in \mathbb{Z}}F_pM$ defined as $F_0M = M^{coU(\mathfrak{g})}$ and inductively
\begin{equation}
F_{p+1}M/F_pM = (M/F_pM)^{coU(\mathfrak{g})}.
\end{equation}
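In terms of the $U(\mathfrak{g})$-coaction this means that $m \in F_0M$ if and only if $m\snsb{-1} \otimes m\snsb{0} = 1 \otimes m$, while $m \in F_{p+1}M$ if and only if
\begin{equation}
m\snsb{-1} \otimes m\snsb{0} - 1 \otimes m \in U(\mathfrak{g}) \otimes F_pM;
\end{equation}
this is the form in which the filtration is used below.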
This filtration naturally induces an analogous filtration on the complexes as
\begin{equation}
F_jW(\mathfrak{g}, \mathfrak{h}, M) = W(\mathfrak{g}, \mathfrak{h}, F_jM), \quad \text{and} \quad F_jC(R(\mathfrak{g}),M) = C(R(\mathfrak{g}),F_jM).
\end{equation}
\noindent We now show that the (co)boundary maps $d_{\rm CE},d_{\rm K},b,B$ respect this filtration. To do so for $d_{\rm K}$ and $d_{\rm CE}$, it suffices to show that the $\mathfrak{g}$-action and the $\mathfrak{g}$-coaction on $M$ respect the filtration, which is done by the same argument as in the proof of Theorem 6.2 in \cite{RangSutl-II}. Similarly, to show that the Hochschild coboundary $b$ and the Connes boundary map $B$ respect the filtration we need to show that the $R(\mathfrak{g})$-action and the $R(\mathfrak{g})$-coaction respect the filtration.
\medskip
\noindent Indeed, for an element $m \in F_pM$, writing the $U(\mathfrak{g})$-coaction as
\begin{equation}
m\snsb{-1} \otimes m\snsb{0} = 1 \otimes m + m\ns{-1} \otimes m\ns{0}, \qquad m\ns{-1} \otimes m\ns{0} \in U(\mathfrak{g}) \otimes F_{p-1}M,
\end{equation}
we get for any $f \in R(\mathfrak{g})$
\begin{equation}
m \cdot f = \varepsilon(f)m + f(m\ns{-1})m\ns{0} \in F_pM.
\end{equation}
This proves that $R(\mathfrak{g})$-action respects the filtration. To prove that $R(\mathfrak{g})$-coaction respects the filtration, we first write the coaction on $m \in F_pM$ as
\begin{equation}
m \mapsto \sum_i f^i \otimes m_i \in R(\mathfrak{g}) \otimes M.
\end{equation}
By \cite{Hoch-book} there are elements $u_j \in U(\mathfrak{g})$ such that $f^i(u_j) = \delta^i_j$. Hence, for any $m_{i_0}$ we have
\begin{equation}
m_{i_0} = \sum_i f^i(u_{i_0}) m_i = m\cdot u_{i_0} \in F_pM.
\end{equation}
We have proved that $R(\mathfrak{g})$-coaction respects the filtration.
\medskip
\noindent Next, we write the $E_1$ terms of the associated spectral sequences. We have
\begin{align}
\begin{split}
& E_1^{j,i}(\mathfrak{g},\mathfrak{h},M) = H^{i+j}(F_jW(\mathfrak{g}, \mathfrak{h}, M)/F_{j-1}W(\mathfrak{g}, \mathfrak{h}, M)) \\
& = H^{i+j}(W(\mathfrak{g}, \mathfrak{h}, F_jM/F_{j-1}M)) = \bigoplus_{n \equiv i+j \,({\rm mod}\, 2)}H^n(\mathfrak{g}, \mathfrak{h}, F_jM/F_{j-1}M),
\end{split}
\end{align}
where on the last equality we used the fact that $F_{j}M/F_{j-1}M$ has trivial $\mathfrak{g}$-coaction.
\medskip
\noindent Similarly we have
\begin{align}
\begin{split}
& E_1^{j,i}(R(\mathfrak{g}),M) = H^{i+j}(F_jC(R(\mathfrak{g}),M)/F_{j-1}C(R(\mathfrak{g}),M)) \\
& = H^{i+j}(C(R(\mathfrak{g}),F_jM/F_{j-1}M)) = \bigoplus_{n \equiv i+j \,({\rm mod}\, 2)}H^n_{\rm coalg}(R(\mathfrak{g}),F_jM/F_{j-1}M),
\end{split}
\end{align}
where on the last equality we used \cite[Theorem 4.3]{RangSutl}, due to the fact that $F_jM/F_{j-1}M$ has trivial $\mathfrak{g}$-coaction, hence trivial $R(\mathfrak{g})$-action.
\medskip
\noindent Finally, under the hypothesis of the theorem, a quasi-isomorphism between $E_1$ terms is given by \cite[Theorem 4.7]{RangSutl}.
\end{proof}
\begin{remark}{\rm
If $M$ has a trivial $\mathfrak{g}$-comodule structure, then $d_{\rm K} = 0$ and hence the above theorem descends to \cite[Theorem 4.7]{RangSutl}.
}\end{remark}
\subsubsection{Bicrossed product Hopf algebras}
Let $M$ be an AYD module over a double crossed sum Lie algebra $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ with a locally finite $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-action and a locally conilpotent $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction. Then by Proposition \ref{aux-46} $M$ is a YD module over the bicrossed product Hopf algebra $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$.
\medskip
\noindent Let also $(\delta,\sigma)$ be an MPI for the Hopf algebra $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$, see \cite[Theorem 3.2]{RangSutl}. Then $\, ^{\sigma}M_{\delta} := M \otimes \, ^{\sigma}\mathbb{C}_{\delta}$ is an AYD module over the bicrossed product Hopf algebra $R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$.
\medskip
\noindent Finally, let us assume that $M$ is stable over $R(\mathfrak{g}_2)$ and $U(\mathfrak{g}_1)$. Then $\, ^{\sigma}M_{\delta}$ is stable if and only if
\begin{align}
\begin{split}
& m \otimes 1_{\mathbb{C}} = (m^{\sns{0}}\snsb{0} \otimes 1_{\mathbb{C}}) \cdot (m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1})(\sigma \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \\
& = (m^{\sns{0}}\snsb{0} \otimes 1_{\mathbb{C}}) \cdot (m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft 1)(1 \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1})(\sigma \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \\
& = (m^{\sns{0}}\snsb{0} \cdot m^{\sns{-1}} \otimes 1_{\mathbb{C}}) \cdot (1 \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1})(\sigma \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \\
& = ((m^{\sns{0}}\snsb{0} \cdot m^{\sns{-1}}) \cdot m^{\sns{0}}\snsb{-1}\delta(m^{\sns{0}}\snsb{-2}) \otimes 1_{\mathbb{C}}) \cdot (\sigma \blacktriangleright\hspace{-4pt}\vartriangleleft 1) \\
& = m\snsb{0}\delta(m\snsb{-1})\sigma \otimes 1_{\mathbb{C}}.
\end{split}
\end{align}
Here, in the fourth equality we have used Proposition \ref{aux-59}. In other words, for $M$ satisfying the hypothesis of Proposition \ref{aux-59}, $\, ^{\sigma}M_{\delta}$ is stable if and only if
\begin{align}\label{aux-47}
m\snsb{0}\delta(m\snsb{-1})\sigma = m
\end{align}
\begin{theorem}\label{aux-63}
Let $(\mathfrak{g}_1, \mathfrak{g}_2)$ be a matched pair of Lie algebras and $\mathfrak{g}_2 = \mathfrak{h} \ltimes \mathfrak{l}$ be a Levi decomposition such that $\mathfrak{h}$ is $\mathfrak{g}_1$-invariant and acts on $\mathfrak{g}_1$ by derivations. Let $M$ be a unimodular SAYD module over $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$ with a locally finite $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-action and a locally conilpotent $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction. Finally assume that $\, ^{\sigma}M_{\delta}$ is stable. Then we have
\begin{equation}
HP(R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1),\, ^{\sigma}M_{\delta}) \cong \widetilde{HP}(\mathfrak{g}_1 \bowtie \mathfrak{g}_2, \mathfrak{h}, M)
\end{equation}
\end{theorem}
\begin{proof}
Let $C(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2),\, ^{\sigma}M_{\delta})$ be the complex computing the Hopf-cyclic cohomology of the bicrossed product Hopf algebra $\mathcal{H} = R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1)$ with coefficients in the SAYD module $^{\sigma}M_{\delta}$.
\medskip
\noindent By \cite[Theorem 3.16]{MoscRang09}, this complex is quasi-isomorphic to the total complex of the mixed complex $\mathfrak{Z}(\mathcal{H},R(\mathfrak{g}_2);\, ^{\sigma}M_{\delta})$ whose cylindrical structure is given by the following operators. The horizontal operators are
\begin{align}\label{horizontal-operators}
\begin{split}
& \overset{\ra}{\partial}_{0}(m \otimes \tilde{f} \otimes \tilde{u}) = m \otimes 1 \otimes f^1 \ot\dots\ot f^p \otimes \tilde{u} \\
& \overset{\ra}{\partial}_{i}(m \otimes \tilde{f} \otimes \tilde{u}) = m \otimes f^1 \ot\dots\ot \Delta(f^i)\ot\dots\ot f^p \otimes \tilde{u} \\
& \overset{\ra}{\partial}_{p+1}(m \otimes \tilde{f} \otimes \tilde{u}) = m\pr{0} \otimes f^1 \ot\dots\ot f^p \otimes \tilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{R(\mathfrak{g}_2)} \otimes \tilde{u}^{\pr{0}} \\
& \overset{\rightarrow}{\sigma}_{j}(m \otimes \tilde{f} \otimes \tilde{u}) = m \otimes f^1 \ot\dots\ot \varepsilon(f^{j+1}) \ot\dots\ot f^p \otimes \tilde{u} \\
&\overset{\ra}{\tau}(m \otimes \tilde{f} \otimes \tilde{u}) = m\pr{0} f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \tilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{R(\mathfrak{g}_2)} \otimes \tilde{u}^{\pr{0}}),
\end{split}
\end{align}
and the vertical operators are
\begin{align}\label{vertical-operators}
\begin{split}
&\uparrow\hspace{-4pt}\partial_{0}(m \otimes \tilde{f} \otimes \tilde{u}) = m \otimes \tilde{f} \otimes \dot 1 \otimes u^1\ot\dots\ot u^q \\
&\uparrow\hspace{-4pt}\partial_{i}(m \otimes \tilde{f} \otimes \tilde{u}) = m \otimes \tilde{f} \otimes u^1 \ot\dots\ot \Delta(u^i) \ot\dots\ot u^q \\
&\uparrow\hspace{-4pt}\partial_{q+1}(m \otimes \tilde{f} \otimes \tilde{u}) = m\pr{0} \otimes \tilde{f} \otimes u^1 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}} \\
&\uparrow\hspace{-4pt}\sigma_j(m \otimes \tilde{f} \otimes \tilde{u}) = m \otimes \tilde{f} \otimes u^1 \ot\dots\ot \varepsilon(u^{j+1}) \ot\dots\ot u^q \\
&\uparrow\hspace{-4pt}\tau(m \otimes \tilde{f} \otimes \tilde{u}) = m\pr{0} u^1\ps{4}S^{-1}(u^1\ps{3} \rhd 1_{R(\mathfrak{g}_2)}) \otimes \\
& S(S^{-1}(u^1\ps{2}) \rhd 1_{R(\mathfrak{g}_2)}) \cdot \left( S^{-1}(u^1\ps{1}) \rhd \tilde{f} \otimes S(u^1\ps{5}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}})
\right).
\end{split}
\end{align}
for any $m \in \, ^{\sigma}M_{\delta}$, $\tilde{f} = f^1 \ot\dots\ot f^p \in R(\mathfrak{g}_2)^{\otimes p}$, and $\tilde{u} = u^1 \ot\dots\ot u^q \in U(\mathfrak{g}_1)^{\otimes q}$.
\medskip
\noindent Here,
\begin{equation}
U(\mathfrak{g}_1)^{\otimes \, q} \to \mathcal{H} \otimes U(\mathfrak{g}_1)^{\otimes \, q}, \quad \tilde{u} \mapsto \tilde{u}^{\pr{-1}} \otimes \tilde{u}^{\pr{0}}
\end{equation}
arises from the left $\mathcal{H}$-coaction on $U(\mathfrak{g}_1)$ that coincides with the original $R(\mathfrak{g}_2)$-coaction, \cite[Proposition 3.20]{MoscRang09}. On the other hand, $U(\mathfrak{g}_1) \cong \mathcal{H} \otimes_{R(\mathfrak{g}_2)} \mathbb{C} \cong \mathcal{H}/\mathcal{H}R(\mathfrak{g}_2)^+$ as coalgebras via the map $(f \blacktriangleright\hspace{-4pt}\vartriangleleft u) \otimes_{R(\mathfrak{g}_2)} 1_{\mathbb{C}} \mapsto \varepsilon(f)u$ and $\wbar{f \blacktriangleright\hspace{-4pt}\vartriangleleft u} = \varepsilon(f)u$ denotes the corresponding class.
\medskip
\noindent Since $M$ is a unimodular SAYD module over $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$, it admits the filtration $(F_pM)_{p \in \mathbb{Z}}$ defined similarly as before. We recall from Proposition \ref{proposition-comodule-doublecrossed-sum} that the $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction is the sum of the $\mathfrak{g}_1$-coaction and the $\mathfrak{g}_2$-coaction. Therefore, since the $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction respects the filtration, we conclude that the $\mathfrak{g}_1$-coaction and the $\mathfrak{g}_2$-coaction respect the filtration. Similarly, since the $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-action respects the filtration, we conclude that the $\mathfrak{g}_1$-action and the $\mathfrak{g}_2$-action respect the filtration. Finally, by an argument similar to the one in the proof of Theorem \ref{g-R(g)spectral sequence}, the $R(\mathfrak{g}_2)$-action and the $R(\mathfrak{g}_2)$-coaction respect the filtration.
\medskip
\noindent As a result, the (co)boundary maps $d_{\rm CE}$ and $d_{\rm K}$ of the complex $W(\mathfrak{g}_1 \bowtie \mathfrak{g}_2, \mathfrak{h}, M)$, and $b$ and $B$ from $C(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2), \, ^{\sigma}M_{\delta})$ respect the filtration.
\medskip
\noindent Next, we proceed to the $E_1$ terms of the associated spectral sequences. We have
\begin{align}\label{aux-48}
\begin{split}
& E_1^{j,i}(R(\mathfrak{g}_2) \blacktriangleright\hspace{-4pt}\vartriangleleft U(\mathfrak{g}_1), \, ^{\sigma}M_{\delta}) = \\
& H^{i+j}(F_jC(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2),\, ^{\sigma}M_{\delta})/F_{j-1}C(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2),\, ^{\sigma}M_{\delta})),
\end{split}
\end{align}
where
\begin{align}
\begin{split}
& F_jC(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2),\, ^{\sigma}M_{\delta})/F_{j-1}C(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2),\, ^{\sigma}M_{\delta}) = \\
& C(U(\mathfrak{g}_1) \blacktriangleright\hspace{-4pt} < R(\mathfrak{g}_2), F_j \, ^{\sigma}M_{\delta}/F_{j-1} \, ^{\sigma}M_{\delta}).
\end{split}
\end{align}
Since $$F_j \, ^{\sigma}M_{\delta}/F_{j-1} \, ^{\sigma}M_{\delta} = \, ^{\sigma}(F_jM/F_{j-1}M)_{\delta} =: \, ^{\sigma}\wbar{M}_{\delta}$$ has a trivial $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-comodule structure, its $U(\mathfrak{g}_1)$-comodule structure and $R(\mathfrak{g}_2)$-module structure are also trivial. Therefore, it is an $R(\mathfrak{g}_2)$-SAYD module in the sense of \cite{MoscRang09}. In this case, by \cite[Proposition 3.16]{MoscRang09}, $\mathfrak{Z}(\mathcal{H},R(\mathfrak{g}_2);\, ^{\sigma}\wbar{M}_{\delta})$ is a bicocyclic module, and the cohomology in \eqref{aux-48} is computed by the total complex of the following bicocyclic module
\begin{align}\label{bicocyclic-bicomplex}
\begin{xy} \xymatrix{ \vdots\ar@<.6 ex>[d]^{\uparrow B} & \vdots\ar@<.6 ex>[d]^{\uparrow B}
&\vdots \ar@<.6 ex>[d]^{\uparrow B} & &\\
^\sigma{\overline{M}}_\delta \otimes U(\mathfrak{g}_1)^{\otimes 2} \ar@<.6 ex>[r]^{\overset{\ra}{b}}\ar@<.6
ex>[u]^{ \uparrow b } \ar@<.6 ex>[d]^{\uparrow B}&
^\sigma{\overline{M}}_\delta\otimes U(\mathfrak{g}_1)^{\otimes 2}\otimes R(\mathfrak{g}_2) \ar@<.6 ex>[r]^{\overset{\ra}{b}}\ar@<.6 ex>[l]^{\overset{\ra}{B}}\ar@<.6 ex>[u]^{ \uparrow b }
\ar@<.6 ex>[d]^{\uparrow B} & ^\sigma{\overline{M}}_\delta\otimes U(\mathfrak{g}_1)^{\otimes 2}\otimes R(\mathfrak{g}_2)^{\otimes 2}
\ar@<.6 ex>[r] \ar@<.6 ex>[l]^{~~\overset{\ra}{B}} \ar@<.6 ex>[u]^{ \uparrow b }
\ar@<.6 ex>[d]^{\uparrow B}& \ar@<.6 ex>[l] \hdots&\\
^\sigma{\overline{M}}_\delta \otimes U(\mathfrak{g}_1) \ar@<.6 ex>[r]^{\overset{\ra}{b}}\ar@<.6 ex>[u]^{ \uparrow b }
\ar@<.6 ex>[d]^{\uparrow B}& ^\sigma{\overline{M}}_\delta \otimes U(\mathfrak{g}_1) \otimes R(\mathfrak{g}_2) \ar@<.6 ex>[r]^{\overset{\ra}{b}}
\ar@<.6 ex>[l]^{\overset{\ra}{B}}\ar@<.6 ex>[u]^{ \uparrow b } \ar@<.6 ex>[d]^{\uparrow B}
&^\sigma{\overline{M}}_\delta\otimes U(\mathfrak{g}_1) \otimes R(\mathfrak{g}_2)^{\otimes 2} \ar@<.6 ex>[r] \ar@<.6 ex>[l]^{\overset{\ra}{B}}\ar@<.6 ex>[u]^{ \uparrow b }
\ar@<.6 ex>[d]^{\uparrow B}&\ar@<.6 ex>[l] \hdots&\\
^\sigma{\overline{M}}_\delta \ar@<.6 ex>[r]^{\overset{\ra}{b}}\ar@<.6 ex>[u]^{ \uparrow b }&
^\sigma{\overline{M}}_\delta\otimes R(\mathfrak{g}_2) \ar@<.6 ex>[r]^{\overset{\ra}{b}}\ar[l]^{\overset{\ra}{B}}\ar@<.6
ex>[u]^{ \uparrow b }&^\sigma{\overline{M}}_\delta\otimes R(\mathfrak{g}_2)^{\otimes 2} \ar@<.6
ex>[r]^{~~\overset{\ra}{b}}\ar@<.6 ex>[l]^{\overset{\ra}{B}}\ar@<1 ex >[u]^{ \uparrow b }
&\ar@<.6 ex>[l]^{~~\overset{\ra}{B}} \hdots& .}
\end{xy}
\end{align}
\noindent Moreover, by \cite[Proposition 5.1]{RangSutl}, the total of the bicomplex \eqref{bicocyclic-bicomplex} is quasi-isomorphic to the total of the bicomplex
\begin{align}
\begin{xy} \xymatrix{ \vdots & \vdots
&\vdots &&\\
\;^\sigma{\overline{M}}_\delta\otimes \wedge^2{\mathfrak{g}_1}^\ast \ar[u]^{d_{\rm CE}}\ar[r]^{b^\ast_{R(\mathfrak{g}_2)}\;\;\;\;\;\;\;\;\;\;}& \;^\sigma{\overline{M}}_\delta\otimes \wedge^2{\mathfrak{g}_1}^\ast\otimes R(\mathfrak{g}_2) \ar[u]^{d_{\rm CE}} \ar[r]^{b^\ast_{R(\mathfrak{g}_2)}}& \;^\sigma{\overline{M}}_\delta\otimes \wedge^2{\mathfrak{g}_1}^\ast\otimes R(\mathfrak{g}_2)^{\otimes 2} \ar[u]^{d_{\rm CE}} \ar[r]^{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;b^\ast_{R(\mathfrak{g}_2)}} & \hdots& \\
\;^\sigma{\overline{M}}_\delta\otimes {\mathfrak{g}_1}^\ast \ar[u]^{d_{\rm CE}}\ar[r]^{b^\ast_{R(\mathfrak{g}_2)}\;\;\;\;\;\;\;\;\;\;\;\;\;}& \;^\sigma{\overline{M}}_\delta\otimes {\mathfrak{g}_1}^\ast\otimes R(\mathfrak{g}_2) \ar[u]^{d_{\rm CE}} \ar[r]^{b^\ast_{R(\mathfrak{g}_2)}}& \;^\sigma{\overline{M}}_\delta\otimes {\mathfrak{g}_1}^\ast\otimes R(\mathfrak{g}_2)^{\otimes 2} \ar[u]^{d_{\rm CE}} \ar[r]^{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;b^\ast_{R(\mathfrak{g}_2)}}& \hdots& \\
\;^\sigma{\overline{M}}_\delta\ar[u]^{d_{\rm CE}}\ar[r]^{b^\ast_{R(\mathfrak{g}_2)}~~~~~~~}& \;^\sigma{\overline{M}}_\delta\otimes R(\mathfrak{g}_2) \ar[u]^{d_{\rm CE}}\ar[r]^{b^\ast_{R(\mathfrak{g}_2)}}& \;^\sigma{\overline{M}}_\delta\otimes R(\mathfrak{g}_2)^{\otimes 2} \ar[u]^{d_{\rm CE}} \ar[r]^{\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;b^\ast_{R(\mathfrak{g}_2)}} & \hdots&, }
\end{xy}
\end{align}
where $b^\ast_{R(\mathfrak{g}_2)}$ is the coalgebra Hochschild coboundary with coefficients in the $R(\mathfrak{g}_2)$-comodule $\;^\sigma{\overline{M}}_\delta \otimes \wedge^q{\mathfrak{g}_1}^\ast$.
\medskip
\noindent Similarly,
\begin{align}\label{aux-49}
\begin{split}
& E_1^{j,i}(\mathfrak{g}_1 \bowtie \mathfrak{g}_2,\mathfrak{h},M) = H^{i+j}(F_jW(\mathfrak{g}_1 \bowtie \mathfrak{g}_2,\mathfrak{h},M)/F_{j-1}W(\mathfrak{g}_1 \bowtie \mathfrak{g}_2,\mathfrak{h},M)) \\
& = H^{i+j}(W(\mathfrak{g}_1 \bowtie \mathfrak{g}_2,\mathfrak{h},F_jM/F_{j-1}M)) = \bigoplus_{i+j \equiv n \;{\rm mod}\; 2}H^n(\mathfrak{g}_1 \bowtie \mathfrak{g}_2, \mathfrak{h}, F_jM/F_{j-1}M)
\end{split}
\end{align}
where the last equality follows from the fact that $F_jM/F_{j-1}M$ has a trivial $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-comodule structure.
\medskip
\noindent Finally, the quasi-isomorphism between the $E_1$-terms \eqref{aux-48} and \eqref{aux-49} is given by \cite[Corollary 5.10]{RangSutl}.
\end{proof}
\begin{remark}
{\rm
In the case of a trivial $\mathfrak{g}_1 \bowtie \mathfrak{g}_2$-coaction, this theorem reduces to \cite[Corollary 5.10]{RangSutl}. In this case, the $U(\mathfrak{g}_1)$-coaction and the $R(\mathfrak{g}_2)$-action are trivial, and therefore the condition \eqref{aux-47} is obvious.}
\end{remark}
\section{Illustration}
In this section, we first exercise our method of Section \ref{Sec-Lie-Hopf} to provide a highly nontrivial 4-dimensional SAYD module over ${\cal H}_{1\rm S}{^{\rm cop}}\cong R({\mathbb C})\blacktriangleright\hspace{-4pt}\vartriangleleft U({gl_1}^{\rm aff})$, the Schwarzian Hopf algebra introduced in \cite{ConnMosc98}. The merit of this example is the nontriviality of the $R({\mathbb C})$-action and the $U({gl_1}^{\rm aff})$-coaction, which were assumed to be trivial for the induced modules in \cite{RangSutl}. We then illustrate Theorem \ref{aux-63} by computing both sides of the isomorphism. At the end we explicitly compute representative cocycles for these cohomology classes.
From now on we denote $R({\mathbb C})$ by ${\cal F}$, $U({gl_1}^{\rm aff})$ by ${\cal U}$ and ${\cal H}_{1\rm S}{^{\rm cop}}$ by ${\cal H}$.
\subsection{A 4-dimensional SAYD module on the Schwarzian Hopf algebra}
Let us first recall the Lie algebra $sl_2$ as a double crossed sum Lie algebra. We have $sl_2 = \mathfrak{g}_1 \bowtie \mathfrak{g}_2$, $\mathfrak{g}_1 = {\mathbb C}\Big\langle X,Y \Big\rangle$, $\mathfrak{g}_2 = {\mathbb C}\Big\langle Z \Big\rangle$, and the Lie bracket is
\begin{equation}
[Y,X] = X, \quad [Z,X] = Y, \quad [Z,Y] = Z.
\end{equation}
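\noindent As a quick consistency check, the Jacobi identity holds for these relations:
\begin{equation}
[X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = -[X,Z] + [Y,Y] - [Z,X] = Y + 0 - Y = 0.
\end{equation}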
\noindent Let us take $M = S({sl_2}^\ast)\nsb{2}$. By Example \ref{ex-2}, $M$ is an SAYD module over $sl_2$ via the coadjoint action and the Koszul coaction.
\medskip
\noindent Writing ${\mathfrak{g}_2}^\ast = {\mathbb C}\Big\langle \delta_1 \Big\rangle$, we have ${\cal F} = R(\mathfrak{g}_2) = \mathbb{C}[\delta_1]$. Also, ${\cal U} = U(\mathfrak{g}_1)$ and it is immediate to realize that ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U} \cong {{\cal H}_{1 \rm S}}{^{\rm cop}}$ \cite{MoscRang07}.
\medskip
\noindent Next, we construct the ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$-(co)action explicitly and we verify that $\, ^{\sigma}M_{\delta}$ is an SAYD over ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. Here, $(\sigma,\delta)$ is the canonical modular pair in involution associated to the bicrossed product ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ \cite{RangSutl}.
By definition $\delta = Tr \circ ad_{\mathfrak{g}_1}$. Let us compute $\sigma \in {\cal F}$ from the right ${\cal F}$-coaction on ${\cal U}$.
\medskip
\noindent Considering the formula $[v,w] = v \rhd w \oplus v \lhd w$ for $v \in \mathfrak{g}_2$ and $w \in \mathfrak{g}_1$, the action $\mathfrak{g}_2 \lhd \mathfrak{g}_1$ is given by
\begin{equation}
Z \lhd X = 0, \quad Z \lhd Y = Z.
\end{equation}
Similarly, the action $\mathfrak{g}_2 \rhd \mathfrak{g}_1$ is
\begin{equation}
Z \rhd X = Y, \quad Z \rhd Y = 0.
\end{equation}
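\noindent Both actions are read off directly from the bracket: since $[Z,X] = Y \in \mathfrak{g}_1$ and $[Z,Y] = Z \in \mathfrak{g}_2$, the decomposition $[v,w] = v \rhd w \oplus v \lhd w$ reads
\begin{equation}
[Z,X] = Y \oplus 0, \qquad [Z,Y] = 0 \oplus Z,
\end{equation}
from which the values above follow.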
\noindent Dualizing the left action $\mathfrak{g}_2 \rhd \mathfrak{g}_1$, we have the ${\cal F}$-coaction on ${\cal U}$ as follows
\begin{align}
\begin{split}
& {\cal U} \to {\cal U} \otimes {\cal F}, \quad u \mapsto u^{\pr{0}} \otimes u^{\pr{1}} \\
& X \mapsto X \otimes 1 + Y \otimes \delta_1 \\
& Y \mapsto Y \otimes 1.
\end{split}
\end{align}
Hence, by \cite[Section 3.1]{RangSutl},
\begin{equation}
\sigma = det \left(
\begin{array}{cc}
1 & \delta_1 \\
0 & 1 \\
\end{array}
\right) = 1.
\end{equation}
\noindent On the other hand, by the Lie algebra structure of $\mathfrak{g}_1 \cong {gl_1}^{\rm aff}$, we have
\begin{equation}
\delta(X) = 0, \qquad \delta(Y) = 1.
\end{equation}
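\noindent Indeed, $[Y,X] = X$ gives ${\rm ad}_X(X) = 0$, ${\rm ad}_X(Y) = -X$, while ${\rm ad}_Y(X) = X$, ${\rm ad}_Y(Y) = 0$; hence, in the ordered basis $\{X,Y\}$ of $\mathfrak{g}_1$,
\begin{equation}
{\rm ad}_X = \left(
\begin{array}{cc}
0 & -1 \\
0 & 0 \\
\end{array}
\right), \qquad
{\rm ad}_Y = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right),
\end{equation}
whose traces are $0$ and $1$, respectively.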
\noindent Next, we express the ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$-coaction on $M = S({sl_2}^\ast)\nsb{2}$ explicitly. A vector space basis of $M$ is given by $\Big\{1_M, R^X, R^Y, R^Z\Big\}$, and the $\mathfrak{g}_1$-coaction (Koszul) is
\begin{equation}
M \to \mathfrak{g}_1 \otimes M, \quad 1_M \mapsto X \otimes R^X + Y \otimes R^Y, \quad R^i \mapsto 0, \quad i = X,Y,Z.
\end{equation}
\noindent Note that applying this coaction twice gives zero; thus it is locally conilpotent. The corresponding ${\cal U}$-coaction is then
\begin{align}
\begin{split}
& M \to {\cal U} \otimes M, \quad m \mapsto m\snsb{-1} \otimes m\snsb{0} \\
& 1_M \mapsto 1 \otimes 1_M + X \otimes R^X + Y \otimes R^Y \\
& R^i \mapsto 1 \otimes R^i, \quad i = X,Y,Z.
\end{split}
\end{align}
\noindent To determine the left ${\cal F}$-coaction, we need to dualize the right $\mathfrak{g}_2$-action. We have
\begin{equation}
1_M \lhd Z = 0, \quad R^X \lhd Z = 0, \quad R^Y \lhd Z = R^X, \quad R^Z \lhd Z = R^Y,
\end{equation}
implying
\begin{align}
\begin{split}
& M \to {\cal F} \otimes M, \quad m \mapsto m^{\sns{-1}} \otimes m^{\sns{0}} \\
& 1_M \mapsto 1 \otimes 1_M \\
& R^X \mapsto 1 \otimes R^X \\
& R^Y \mapsto 1 \otimes R^Y + \delta_1 \otimes R^X \\
& R^Z \mapsto 1 \otimes R^Z + \delta_1 \otimes R^Y + \frac{1}{2}\delta_1^2 \otimes R^X.
\end{split}
\end{align}
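\noindent One way to organize this dualization is to note that the $Z$-action above is nilpotent, so the corresponding ${\cal F} = \mathbb{C}[\delta_1]$-coaction may be written as $m \mapsto \sum_{k \geq 0} \frac{1}{k!}\,\delta_1^k \otimes m \lhd Z^k$ (a normalization consistent with the values just listed). For $R^Z$ this reads
\begin{equation}
R^Z \mapsto 1 \otimes R^Z + \delta_1 \otimes (R^Z \lhd Z) + \frac{1}{2}\delta_1^2 \otimes ((R^Z \lhd Z) \lhd Z) = 1 \otimes R^Z + \delta_1 \otimes R^Y + \frac{1}{2}\delta_1^2 \otimes R^X,
\end{equation}
which explains the coefficient $\frac{1}{2}\delta_1^2$.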
As a result, ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$-coaction appears as follows
\begin{align}
\begin{split}
& M \to {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U} \otimes M, \quad m \mapsto m^{\sns{-1}} \blacktriangleright\hspace{-4pt}\vartriangleleft m^{\sns{0}}\snsb{-1} \otimes m^{\sns{0}}\snsb{0} \\
& 1_M \mapsto 1 \otimes 1_M + X \otimes R^X + Y \otimes R^Y \\
& R^X \mapsto 1 \otimes R^X \\
& R^Y \mapsto 1 \otimes R^Y + \delta_1 \otimes R^X \\
& R^Z \mapsto 1 \otimes R^Z + \delta_1 \otimes R^Y + \frac{1}{2}\delta_1^2 \otimes R^X.
\end{split}
\end{align}
\noindent Let us next determine the right ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$-action. It is enough to determine the ${\cal U}$-action and ${\cal F}$-action separately. The action of ${\cal U}$ is directly given by
\begin{align}
\begin{split}
& 1_M \lhd X = 0, \quad 1_M \lhd Y = 0 \\
& R^X \lhd X = -R^Y, \quad R^X \lhd Y = R^X \\
& R^Y \lhd X = -R^Z, \quad R^Y \lhd Y = 0 \\
& R^Z \lhd X = 0, \quad R^Z \lhd Y = -R^Z.
\end{split}
\end{align}
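\noindent These are the values of the coadjoint action on the generators; for instance, with the normalization $(R^a \lhd v)(w) = R^a([v,w])$ (the convention consistent with the values listed above), we have
\begin{equation}
(R^X \lhd X)(Y) = R^X([X,Y]) = R^X(-X) = -1, \qquad (R^X \lhd X)(X) = (R^X \lhd X)(Z) = 0,
\end{equation}
so that $R^X \lhd X = -R^Y$.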
\noindent To be able to see the ${\cal F}$-action, we determine the $\mathfrak{g}_2$-coaction. This follows from the Koszul coaction on $M$, {\it i.e., }\
\begin{align}
\begin{split}
& M \to U(\mathfrak{g}_2) \otimes M, \quad m \mapsto m\sns{-1} \otimes m\sns{0} \\
& 1_M \mapsto 1 \otimes 1_M + Z \otimes R^Z \\
& R^i \mapsto 1 \otimes R^i, \quad i = X,Y,Z.
\end{split}
\end{align}
Hence, the ${\cal F}$-action is given by
\begin{equation}
1_M \lhd \delta_1 = R^Z, \quad R^i \lhd \delta_1 = 0, \quad i = X,Y,Z.
\end{equation}
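\noindent Indeed, assuming the natural pairing $m \lhd f = f(m\sns{-1})\, m\sns{0}$ between ${\cal F} = R(\mathfrak{g}_2)$ and the $U(\mathfrak{g}_2)$-coaction above (a sketch of the mechanism, consistent with the values just listed), we get
\begin{equation}
1_M \lhd \delta_1 = \delta_1(1)\, 1_M + \delta_1(Z)\, R^Z = R^Z, \qquad R^i \lhd \delta_1 = \delta_1(1)\, R^i = 0, \quad i = X,Y,Z.
\end{equation}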
\noindent We will now check carefully that $M$ is a YD module over the bicrossed product Hopf algebra ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. We leave it to the reader to check that $M$ satisfies the conditions introduced in Lemma \ref{module on bicrossed product} and Lemma \ref{comodule on bicrossed product}; that is, that $M$ is a module and a comodule over ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, respectively. We proceed to the verification of the YD condition on the bicrossed product Hopf algebra ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$.
\medskip
\noindent By the multiplicative property of the YD condition, it is enough to check that the condition holds for the elements $X, Y, \delta_1 \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$.
\medskip
\noindent For simplicity of the notation, we write the ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$-coaction as $m \mapsto m\pr{-1} \otimes m\pr{0}$.
\medskip
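\noindent In the computations below we also use the coproducts of the generators, which can be read off from the ${\cal F}$-coaction on ${\cal U}$ computed above:
\begin{equation}
\Delta(X) = X \otimes 1 + 1 \otimes X + Y \otimes \delta_1, \qquad \Delta(Y) = Y \otimes 1 + 1 \otimes Y, \qquad \Delta(\delta_1) = \delta_1 \otimes 1 + 1 \otimes \delta_1.
\end{equation}
\medskip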
\noindent We begin with $1_M \in M$ and $X \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. On one hand we have
\begin{align}
\begin{split}
& X\ps{2} \cdot (1_M \lhd X\ps{1})\pr{-1} \otimes (1_M \lhd X\ps{1})\pr{0} = \\
& (1_M \lhd X)\pr{-1} \otimes (1_M \lhd X)\pr{0} + X 1_M\pr{-1} \otimes 1_M\pr{0} + \delta_1(1_M \lhd Y)\pr{-1} \otimes (1_M \lhd Y)\pr{0} = \\
& X \otimes 1_M + X^2 \otimes R^X + XY \otimes R^Y,
\end{split}
\end{align}
and on the other hand,
\begin{align}
\begin{split}
& 1_M\pr{-1} X\ps{1} \otimes 1_M\pr{0} \lhd X\ps{2} = \\
& 1_M\pr{-1} X \otimes 1_M\pr{0} + 1_M\pr{-1} \otimes 1_M\pr{0} \lhd X + 1_M\pr{-1} Y \otimes 1_M\pr{0} \lhd \delta_1 = \\
& X \otimes 1_M + X^2 \otimes R^X + YX \otimes R^Y + X \otimes R^X \lhd X + Y \otimes R^Y \lhd X + Y \otimes 1_M \lhd \delta_1.
\end{split}
\end{align}
In view of $[Y,X]=X$, $R^X \lhd X = -R^Y$, $R^Y \lhd X = -R^Z$ and $1_M \lhd \delta_1 = R^Z$, we see that the YD compatibility is satisfied for $1_M \in M$ and $X \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$.
\medskip
\noindent We proceed to check the condition for $1_M \in M$ and $Y \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. We have
\begin{align}
\begin{split}
& Y\ps{2} \cdot (1_M \lhd Y\ps{1})\pr{-1} \otimes (1_M \lhd Y\ps{1})\pr{0} = \\
& (1_M \lhd Y)\pr{-1} \otimes (1_M \lhd Y)\pr{0} + Y 1_M\pr{-1} \otimes 1_M\pr{0} = \\
& Y \otimes 1_M + YX \otimes R^X + Y^2 \otimes R^Y,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& 1_M\pr{-1} Y\ps{1} \otimes 1_M\pr{0} \lhd Y\ps{2} = \\
& 1_M\pr{-1} Y \otimes 1_M\pr{0} + 1_M\pr{-1} \otimes 1_M\pr{0} \lhd Y = \\
& Y \otimes 1_M + XY \otimes R^X + Y^2 \otimes R^Y + X \otimes R^X \lhd Y.
\end{split}
\end{align}
We use $[Y,X]=X$ and $R^X \lhd Y = R^X$, and hence the YD condition is satisfied for $1_M \in M$ and $Y \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$.
\medskip
\noindent For $1_M \in M$ and $\delta_1 \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ we have
\begin{align}
\begin{split}
& \delta_1\ps{2} (1_M \lhd \delta_1\ps{1})\pr{-1} \otimes (1_M \lhd \delta_1\ps{1})\pr{0} = \\
& (1_M \lhd \delta_1)\pr{-1} \otimes (1_M \lhd \delta_1)\pr{0} + \delta_1 1_M\pr{-1} \otimes 1_M\pr{0} = \\
& 1 \otimes R^Z + \delta_1 \otimes R^Y + \frac{1}{2}\delta_1^2 \otimes R^X + \delta_1 \otimes 1_M + \delta_1X \otimes R^X + \delta_1Y \otimes R^Y.
\end{split}
\end{align}
On the other hand,
\begin{align}
\begin{split}
& 1_M\pr{-1} \delta_1\ps{1} \otimes 1_M\pr{0} \lhd \delta_1\ps{2} = \\
& 1_M\pr{-1} \delta_1 \otimes 1_M\pr{0} + 1_M\pr{-1} \otimes 1_M\pr{0} \lhd \delta_1 = \\
& \delta_1 \otimes 1_M + X\delta_1 \otimes R^X + Y\delta_1 \otimes R^Y + 1 \otimes 1_M \lhd \delta_1.
\end{split}
\end{align}
Thus, the YD condition for $1_M \in M$ and $\delta_1 \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ follows from $[X,\delta_1] = \frac{1}{2}\delta_1^2$, $[Y,\delta_1] = \delta_1$ and $1_M \lhd \delta_1 = R^Z$.
\medskip
\noindent Next, we consider $R^X \in M$ and $X \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. In this case we have,
\begin{align}
\begin{split}
& X\ps{2} (R^X \lhd X\ps{1})\pr{-1} \otimes (R^X \lhd X\ps{1})\pr{0} = \\
& (R^X \lhd X)\pr{-1} \otimes (R^X \lhd X)\pr{0} + X R^X\pr{-1} \otimes R^X\pr{0} + \delta_1 (R^X \lhd Y)\pr{-1} \otimes (R^X \lhd Y)\pr{0} = \\
& -1 \otimes R^Y - \delta_1 \otimes R^X + X \otimes R^X + \delta_1 \otimes R^X = \\
& -1 \otimes R^Y + X \otimes R^X.
\end{split}
\end{align}
On the other hand,
\begin{align}
\begin{split}
& R^X\pr{-1} X\ps{1} \otimes R^X\pr{0} \lhd X\ps{2} = \\
& R^X\pr{-1} X \otimes R^X\pr{0} + R^X\pr{-1} \otimes R^X\pr{0} \lhd X + R^X\pr{-1} Y \otimes R^X\pr{0} \lhd \delta_1 = \\
& X \otimes R^X + 1 \otimes R^X \lhd X,
\end{split}
\end{align}
and we have the equality in view of the fact that $R^X \lhd X = -R^Y$.
\medskip
\noindent For $R^X \in M$ and $Y \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, on one hand we have
\begin{align}
\begin{split}
& Y\ps{2} (R^X \lhd Y\ps{1})\pr{-1} \otimes (R^X \lhd Y\ps{1})\pr{0} = \\
& (R^X \lhd Y)\pr{-1} \otimes (R^X \lhd Y)\pr{0} + Y R^X\pr{-1} \otimes R^X\pr{0} = \\
& 1 \otimes R^X + Y \otimes R^X,
\end{split}
\end{align}
and on the other hand,
\begin{align}
\begin{split}
& R^X\pr{-1} Y\ps{1} \otimes R^X\pr{0} \lhd Y\ps{2} = \\
& R^X\pr{-1} Y \otimes R^X\pr{0} + R^X\pr{-1} \otimes R^X\pr{0} \lhd Y = \\
& Y \otimes R^X + 1 \otimes R^X \lhd Y.
\end{split}
\end{align}
The equality is the consequence of $R^X \lhd Y = R^X$.
\medskip
\noindent For $R^X \in M$ and $\delta_1 \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ we have
\begin{align}
\begin{split}
& \delta_1\ps{2} (R^X \lhd \delta_1\ps{1})\pr{-1} \otimes (R^X \lhd \delta_1\ps{1})\pr{0} = \\
& (R^X \lhd \delta_1)\pr{-1} \otimes (R^X \lhd \delta_1)\pr{0} + \delta_1 R^X\pr{-1} \otimes R^X\pr{0} = \\
& \delta_1 \otimes R^X,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& R^X\pr{-1} \delta_1\ps{1} \otimes R^X\pr{0} \lhd \delta_1\ps{2} = \\
& R^X\pr{-1} \delta_1 \otimes R^X\pr{0} + R^X\pr{-1} \otimes R^X\pr{0} \lhd \delta_1 = \\
& \delta_1 \otimes R^X + 1 \otimes R^X \lhd \delta_1.
\end{split}
\end{align}
The result follows from $R^X \lhd \delta_1 = 0$.
\medskip
\noindent We proceed to verify the condition for $R^Y \in M$. For $R^Y \in M$ and $X \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, we have
\begin{align}
\begin{split}
& X\ps{2} (R^Y \lhd X\ps{1})\pr{-1} \otimes (R^Y \lhd X\ps{1})\pr{0} = \\
& (R^Y \lhd X)\pr{-1} \otimes (R^Y \lhd X)\pr{0} + X R^Y\pr{-1} \otimes R^Y\pr{0} + \delta_1 (R^Y \lhd Y)\pr{-1} \otimes (R^Y \lhd Y)\pr{0} = \\
& -1 \otimes R^Z - \delta_1 \otimes R^Y - \frac{1}{2}\delta_1^2 \otimes R^X + X \otimes R^Y + X\delta_1 \otimes R^X,
\end{split}
\end{align}
as well as
\begin{align}
\begin{split}
& R^Y\pr{-1} X\ps{1} \otimes R^Y\pr{0} \lhd X\ps{2} = \\
& R^Y\pr{-1} X \otimes R^Y\pr{0} + R^Y\pr{-1} \otimes R^Y\pr{0} \lhd X + R^Y\pr{-1} Y \otimes R^Y\pr{0} \lhd \delta_1 = \\
& X \otimes R^Y + \delta_1X \otimes R^X + 1 \otimes R^Y \lhd X + \delta_1 \otimes R^X \lhd X.
\end{split}
\end{align}
To see the equality, we use $[X,\delta_1] = \frac{1}{2}\delta_1^2$, $R^Y \lhd X = -R^Z$ and $R^X \lhd X = -R^Y$.
\medskip
\noindent Similarly for $R^Y \in M$ and $Y \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, we have on one hand
\begin{align}
\begin{split}
& Y\ps{2} (R^Y \lhd Y\ps{1})\pr{-1} \otimes (R^Y \lhd Y\ps{1})\pr{0} = \\
& (R^Y \lhd Y)\pr{-1} \otimes (R^Y \lhd Y)\pr{0} + Y R^Y\pr{-1} \otimes R^Y\pr{0} = \\
& Y \otimes R^Y + Y\delta_1 \otimes R^X,
\end{split}
\end{align}
and on the other hand,
\begin{align}
\begin{split}
& R^Y\pr{-1} Y\ps{1} \otimes R^Y\pr{0} \lhd Y\ps{2} = \\
& R^Y\pr{-1} Y \otimes R^Y\pr{0} + R^Y\pr{-1} \otimes R^Y\pr{0} \lhd Y = \\
& Y \otimes R^Y + \delta_1Y \otimes R^X + \delta_1 \otimes R^X \lhd Y.
\end{split}
\end{align}
Hence the equality by $[Y,\delta_1] = \delta_1$ and $R^X \lhd Y = R^X$.
\medskip
\noindent Finally, for $R^Y \in M$ and $\delta_1 \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$ we have
\begin{align}
\begin{split}
& \delta_1\ps{2} (R^Y \lhd \delta_1\ps{1})\pr{-1} \otimes (R^Y \lhd \delta_1\ps{1})\pr{0} = \\
& (R^Y \lhd \delta_1)\pr{-1} \otimes (R^Y \lhd \delta_1)\pr{0} + \delta_1 R^Y\pr{-1} \otimes R^Y\pr{0} = \\
& \delta_1 \otimes R^Y + \delta_1^2 \otimes R^X,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& R^Y\pr{-1} \delta_1\ps{1} \otimes R^Y\pr{0} \lhd \delta_1\ps{2} = \\
& R^Y\pr{-1} \delta_1 \otimes R^Y\pr{0} + R^Y\pr{-1} \otimes R^Y\pr{0} \lhd \delta_1 = \\
& \delta_1 \otimes R^Y + \delta_1^2 \otimes R^X.
\end{split}
\end{align}
\noindent Now we check the condition for $R^Z \in M$. For $R^Z \in M$ and $X \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$,
\begin{align}
\begin{split}
& X\ps{2} (R^Z \lhd X\ps{1})\pr{-1} \otimes (R^Z \lhd X\ps{1})\pr{0} = \\
& (R^Z \lhd X)\pr{-1} \otimes (R^Z \lhd X)\pr{0} + X R^Z\pr{-1} \otimes R^Z\pr{0} + \delta_1 (R^Z \lhd Y)\pr{-1} \otimes (R^Z \lhd Y)\pr{0} = \\
& X \otimes R^Z + X\delta_1 \otimes R^Y + \frac{1}{2}X\delta_1^2 \otimes R^X - \delta_1 \otimes R^Z - \delta_1^2 \otimes R^Y - \frac{1}{2}\delta_1^3 \otimes R^X.
\end{split}
\end{align}
On the other hand,
\begin{align}
\begin{split}
& R^Z\pr{-1} X\ps{1} \otimes R^Z\pr{0} \lhd X\ps{2} = \\
& R^Z\pr{-1} X \otimes R^Z\pr{0} + R^Z\pr{-1} \otimes R^Z\pr{0} \lhd X + R^Z\pr{-1} Y \otimes R^Z\pr{0} \lhd \delta_1 = \\
& X \otimes R^Z + \delta_1X \otimes R^Y + \frac{1}{2}\delta_1^2X \otimes R^X + \delta_1 \otimes R^Y \lhd X + \frac{1}{2}\delta_1^2 \otimes R^X \lhd X.
\end{split}
\end{align}
Equality follows from $[X,\delta_1] = \frac{1}{2}\delta_1^2$, $R^Y \lhd X = -R^Z$ and $R^X \lhd X = -R^Y$.
\medskip
\noindent Next, we consider $R^Z \in M$ and $Y \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. In this case we have
\begin{align}
\begin{split}
& Y\ps{2} (R^Z \lhd Y\ps{1})\pr{-1} \otimes (R^Z \lhd Y\ps{1})\pr{0} = \\
& (R^Z \lhd Y)\pr{-1} \otimes (R^Z \lhd Y)\pr{0} + Y R^Z\pr{-1} \otimes R^Z\pr{0} = \\
& -1 \otimes R^Z - \delta_1 \otimes R^Y - \frac{1}{2}\delta_1^2 \otimes R^X + Y \otimes R^Z + Y\delta_1 \otimes R^Y + \frac{1}{2}Y\delta_1^2 \otimes R^X,
\end{split}
\end{align}
and on the other hand,
\begin{align}
\begin{split}
& R^Z\pr{-1} Y\ps{1} \otimes R^Z\pr{0} \lhd Y\ps{2} = \\
& R^Z\pr{-1} Y \otimes R^Z\pr{0} + R^Z\pr{-1} \otimes R^Z\pr{0} \lhd Y = \\
& Y \otimes R^Z + \delta_1Y \otimes R^Y + \frac{1}{2}\delta_1^2Y \otimes R^X + 1 \otimes R^Z \lhd Y + \frac{1}{2}\delta_1^2 \otimes R^X \lhd Y.
\end{split}
\end{align}
Equality follows from $[Y,\delta_1] = \delta_1$, $R^Z \lhd Y = -R^Z$ and $R^X \lhd Y = R^X$.
\medskip
\noindent Finally we check the YD compatibility for $R^Z \in M$ and $\delta_1 \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$. We have
\begin{align}
\begin{split}
& \delta_1\ps{2} (R^Z \lhd \delta_1\ps{1})\pr{-1} \otimes (R^Z \lhd \delta_1\ps{1})\pr{0} = \\
& (R^Z \lhd \delta_1)\pr{-1} \otimes (R^Z \lhd \delta_1)\pr{0} + \delta_1 R^Z\pr{-1} \otimes R^Z\pr{0} = \\
& \delta_1 \otimes R^Z + \delta_1^2 \otimes R^Y + \frac{1}{2}\delta_1^3 \otimes R^X,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& R^Z\pr{-1} \delta_1\ps{1} \otimes R^Z\pr{0} \lhd \delta_1\ps{2} = \\
& R^Z\pr{-1} \delta_1 \otimes R^Z\pr{0} + R^Z\pr{-1} \otimes R^Z\pr{0} \lhd \delta_1 = \\
& \delta_1 \otimes R^Z + \delta_1^2 \otimes R^Y + \frac{1}{2}\delta_1^3 \otimes R^X.
\end{split}
\end{align}
\noindent We have proved that $M$ is a YD module over the bicrossed product Hopf algebra ${\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U} = {{\cal H}_{1 \rm S}}^{\rm cop}$.
\medskip
\noindent Let us now check the stability condition. Since in this case $\sigma = 1$, $\,^{\sigma}M_{\delta}$ has the same coaction as $M$. Thus, denoting the coaction by $(m \otimes 1_{\mathbb{C}})\pr{-1} \otimes (m \otimes 1_{\mathbb{C}})\pr{0} \in {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U} \otimes \,^{\sigma}M_{\delta}$, we have
\begin{align*}\notag
& (1_M \otimes 1_{\mathbb{C}})\pr{0} \cdot (1_M \otimes 1_{\mathbb{C}})\pr{-1} = (1_M \otimes 1_{\mathbb{C}}) \cdot 1 + (R^X \otimes 1_{\mathbb{C}}) \cdot X + (R^Y \otimes 1_{\mathbb{C}}) \cdot Y \\\notag
& = 1_M \otimes 1_{\mathbb{C}} + R^X \cdot X\ps{2}\delta(X\ps{1}) \otimes 1_{\mathbb{C}} + R^Y \cdot Y\ps{2}\delta(Y\ps{1}) \otimes 1_{\mathbb{C}} \\\notag
& = 1_M \otimes 1_{\mathbb{C}} + R^X \cdot X \otimes 1_{\mathbb{C}} + R^X \delta(X) \otimes 1_{\mathbb{C}} + R^Y \cdot Y \otimes 1_{\mathbb{C}} + R^Y \delta(Y) \otimes 1_{\mathbb{C}} \\
& = 1_M \otimes 1_{\mathbb{C}}.\\[.5cm]
& (R^X \otimes 1_{\mathbb{C}})\pr{0} \cdot (R^X \otimes 1_{\mathbb{C}})\pr{-1} = (R^X \otimes 1_{\mathbb{C}}) \cdot 1
= R^X \otimes 1_{\mathbb{C}}.\\[.5cm]
& (R^Y \otimes 1_{\mathbb{C}})\pr{0} \cdot (R^Y \otimes 1_{\mathbb{C}})\pr{-1} = (R^Y \otimes 1_{\mathbb{C}}) \cdot 1 + (R^X \otimes 1_{\mathbb{C}}) \cdot \delta_1
= R^Y \otimes 1_{\mathbb{C}}.\\[.5cm]
& (R^Z \otimes 1_{\mathbb{C}})\pr{0} \cdot (R^Z \otimes 1_{\mathbb{C}})\pr{-1} =\\
&\hspace{5cm} (R^Z \otimes 1_{\mathbb{C}}) \cdot 1 + (R^Y \otimes 1_{\mathbb{C}}) \cdot \delta_1 + (R^X \otimes 1_{\mathbb{C}}) \cdot \frac{1}{2}\delta_1^2 = R^Z \otimes 1_{\mathbb{C}}.
\end{align*}
Hence the stability is satisfied.
\medskip
\noindent We record our discussion in the following proposition.
\begin{proposition}
The four-dimensional module-comodule $$M_\delta:=M \otimes \mathbb{C}_{\delta}= {\mathbb C}\Big\langle 1_M, R^X,R^Y,R^Z\Big\rangle \otimes {\mathbb C}_\delta$$ is an SAYD module over the Schwarzian Hopf algebra ${\cal H}_{\rm 1S}{^{\rm cop}}$, via the action and coaction
$$
\left.
\begin{array}{c|ccc}
\lhd & X & Y & \delta_1 \\[.2cm]
\hline
&&&\\[-.2cm]
{\bf 1} & 0 & {\bf 1} & {\bf R}^Z \\[.1cm]
{\bf R}^X & -{\bf R}^Y & 2{\bf R}^X & 0 \\[.1cm]
{\bf R}^Y & -{\bf R}^Z & {\bf R}^Y & 0 \\[.1cm]
{\bf R}^Z & 0 & 0 & 0 \\
\end{array}
\right.\qquad
\begin{array}{rl}
&\blacktriangledown: M_\delta \longrightarrow \mathcal{H}_{\rm 1S}{^{\rm cop}} \otimes M_\delta \\[.2cm]
\hline
&\\[-.2cm]
& {\bf 1} \mapsto 1 \otimes {\bf 1} + X \otimes {\bf R}^X + Y \otimes {\bf R}^Y \\
& {\bf R}^X \mapsto 1 \otimes {\bf R}^X \\
& {\bf R}^Y \mapsto 1 \otimes {\bf R}^Y + \delta_1 \otimes {\bf R}^X \\
& {\bf R}^Z \mapsto 1 \otimes {\bf R}^Z + \delta_1 \otimes {\bf R}^Y + \frac{1}{2}\delta_1^2 \otimes {\bf R}^X.
\end{array}
$$
\noindent Here, ${\bf 1} := 1_M \otimes {\mathbb C}_\delta, \; {\bf R}^X := R^X \otimes {\mathbb C}_\delta, \; {\bf R}^Y := R^Y \otimes {\mathbb C}_\delta, \; {\bf R}^Z := R^Z \otimes {\mathbb C}_\delta$.
\end{proposition}
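\noindent For instance, the entry ${\bf R}^X \lhd Y = 2{\bf R}^X$ reflects the $\delta$-twist of the action on $M_\delta$: using $m_{\delta} \cdot u = (m \cdot u\ps{1})_{\delta}\,\delta(u\ps{2})$ (cf.\ the end of the proof of Lemma \ref{aux-68} below), together with $R^X \lhd Y = R^X$ and $\delta(Y) = 1$, we get
\begin{equation}
{\bf R}^X \lhd Y = (R^X \lhd Y)_{\delta} + \delta(Y)\, {\bf R}^X = {\bf R}^X + {\bf R}^X = 2\,{\bf R}^X.
\end{equation}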
\subsection{Computation of $\widetilde{HP}(sl_2,S(sl_2^\ast)\nsb{2})$ }
This subsection is devoted to the computation of $\widetilde{HP}(sl_2,M)$ by demonstrating explicit representatives of the cohomology classes.
We know that the perturbed Koszul complex $(W(sl_2,M), d_{\rm CE} + d_{\rm K})$ computes this cohomology.
\medskip
\noindent Being an SAYD over $U(sl_2)$, $M$ admits the filtration $(F_pM)_{p \in \mathbb{Z}}$ from \cite{JaraStef}. Explicitly,
\begin{equation}
F_0M = {\mathbb C}\Big\langle R^X,R^Y,R^Z\Big\rangle, \quad F_pM = M, \quad p \geq 1.
\end{equation}
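\noindent This can be read off from the coaction: assuming, as before, that $F_0M$ consists of the coaction coinvariants, the Koszul coaction
\begin{equation}
\blacktriangledown(R^i) = 1 \otimes R^i, \quad i = X,Y,Z, \qquad \blacktriangledown(1_M) = 1 \otimes 1_M + X \otimes R^X + Y \otimes R^Y + Z \otimes R^Z,
\end{equation}
places $R^X, R^Y, R^Z$ in $F_0M$, while $1_M$ enters only at the next step of the filtration.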
The induced filtration on the complex is
\begin{equation}
F_j(W(sl_2,M)) := W(sl_2,F_jM),
\end{equation}
and the $E_1$ term of the associated spectral sequence is
\begin{equation}
E_1^{j,i}(sl_2,M) = H^{i+j}(W(sl_2,F_jM)/W(sl_2,F_{j-1}M)) \cong H^{i+j}(W(sl_2,F_jM/F_{j-1}M)).
\end{equation}
\noindent Since $F_jM/F_{j-1}M$ has trivial $sl_2$-coaction, the coboundary $d_{\rm K}$ vanishes on the quotient complex $W(sl_2,F_jM/F_{j-1}M)$, and hence
\begin{equation}
E_1^{j,i}(sl_2,M) = \bigoplus_{i+j \equiv \bullet \;{\rm mod}\; 2}H^{\bullet}(sl_2,F_jM/F_{j-1}M).
\end{equation}
\noindent In particular,
\begin{equation}
E_1^{0,i}(sl_2,M) = H^i(W(sl_2,F_0M)) \cong \bigoplus_{i \equiv \bullet \;{\rm mod}\; 2}H^{\bullet}(sl_2,F_0M) = 0.
\end{equation}
The last equality follows from Whitehead's theorem (noticing that $F_0M$ is an irreducible $sl_2$-module of dimension greater than 1). For $j = 1$ we have $M/F_0M \cong {\mathbb C}$ and hence
\begin{equation}
E_1^{1,i}(sl_2,M) = \bigoplus_{i+1 \equiv \bullet \;{\rm mod}\; 2}H^{\bullet}(sl_2),
\end{equation}
which gives two cohomology classes as a result of Whitehead's 1st and 2nd lemmas. Finally, by $F_pM = M$ for $p \ge 1$, we have $E_1^{j,i}(sl_2,M) = 0$ for $j \geq 2$.
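\noindent Explicitly, Whitehead's first and second lemmas give $H^1(sl_2) = H^2(sl_2) = 0$, while
\begin{equation}
H^0(sl_2) \cong \mathbb{C}, \qquad H^3(sl_2) \cong \mathbb{C},
\end{equation}
the latter being spanned by the class of the volume form of $sl_2$; thus the $E_1$-term above contributes exactly one even and one odd periodic class.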
\medskip
\noindent Let us now write the complex as
\begin{equation}
W(sl_2,M) = W^{\rm even}(sl_2,M) \oplus W^{\rm odd}(sl_2,M),
\end{equation}
where
\begin{equation}
W^{\rm even}(sl_2,M) = M \oplus (\wedge^2 {sl_2}^* \otimes M), \quad W^{\rm odd}(sl_2,M) = ({sl_2}^* \otimes M) \oplus (\wedge^3 {sl_2}^* \otimes M).
\end{equation}
\noindent Next, we exhibit explicit representative cocycles for $\widetilde{HP}(sl_2,M)$. First, let us take $1_M \in W^{\rm even}(sl_2,M)$. It is immediate to observe that $d_{\rm CE}(1_M) = 0$ as well as $d_{\rm K}(1_M) = 0$. On the other hand, at the level of the spectral sequence it descends to the nontrivial class in $H^0(sl_2)$. Hence, it is a representative of the even cohomology class.
\medskip
\noindent Secondly, we consider $$(2\theta^X \otimes R^Z - \theta^Y \otimes R^Y \; , \; \theta^X \wedge \theta^Y \wedge \theta^Z \otimes 1_M) \in W^{\rm odd}(sl_2,M).$$ Here $\Big\{\theta^X,\theta^Y,\theta^Z\Big\}$ is the dual basis corresponding to the basis $\Big\{X,Y,Z\Big\}$ of $sl_2$. Let us show that it is a $(d_{\rm CE} + d_{\rm K})$-cocycle. It is immediate that
\begin{equation}
d_{\rm CE}(\theta^X \wedge \theta^Y \wedge \theta^Z \otimes 1_M) = 0
\end{equation}
As for the Koszul differential,
\begin{align}
\begin{split}
& d_{\rm K}(\theta^X \wedge \theta^Y \wedge \theta^Z \otimes 1_M) = \\
& \iota_X(\theta^X \wedge \theta^Y \wedge \theta^Z) \otimes R^X + \iota_Y(\theta^X \wedge \theta^Y \wedge \theta^Z) \otimes R^Y + \iota_Z(\theta^X \wedge \theta^Y \wedge \theta^Z) \otimes R^Z = \\
& \theta^Y \wedge \theta^Z \otimes R^X - \theta^X \wedge \theta^Z \otimes R^Y + \theta^X \wedge \theta^Y \otimes R^Z.
\end{split}
\end{align}
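\noindent In the Chevalley--Eilenberg terms below we use the differentials of the dual basis; with the convention $d\theta(V,W) = -\theta([V,W])$ (the one matching the leading terms of the next two computations), the bracket relations give
\begin{equation}
d\theta^X = \theta^X \wedge \theta^Y, \qquad d\theta^Y = \theta^X \wedge \theta^Z, \qquad d\theta^Z = \theta^Y \wedge \theta^Z.
\end{equation}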
\noindent On the other hand, we have
\begin{align}
\begin{split}
& d_{\rm CE}(\theta^X \otimes R^Z) = \theta^X \wedge \theta^Y \otimes R^Z - \theta^Y \wedge \theta^X \otimes R^Z \cdot Y - \theta^Z \wedge \theta^X \otimes R^Z \cdot Z \\
& = \theta^X \wedge \theta^Z \otimes R^Y,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& d_{\rm CE}(\theta^Y \otimes R^Y) = \theta^X \wedge \theta^Z \otimes R^Y - \theta^X \wedge \theta^Y \otimes R^Y \cdot X - \theta^Z \wedge \theta^Y \otimes R^Y \cdot Z \\
& = \theta^X \wedge \theta^Y \otimes R^Z + \theta^X \wedge \theta^Z \otimes R^Y + \theta^Y \wedge \theta^Z \otimes R^X.
\end{split}
\end{align}
Therefore,
\begin{equation}
d_{\rm CE}(2\theta^X \otimes R^Z - \theta^Y \otimes R^Y) = - \theta^X \wedge \theta^Y \otimes R^Z + \theta^X \wedge \theta^Z \otimes R^Y - \theta^Y \wedge \theta^Z \otimes R^X.
\end{equation}
We also have
\begin{equation}
d_{\rm K}(2\theta^X \otimes R^Z - \theta^Y \otimes R^Y) = 2R^ZR^X - R^YR^Y = 0 - 0 = 0.
\end{equation}
Therefore, we can write
\begin{equation}
(d_{\rm CE} + d_{\rm K})((2\theta^X \otimes R^Z - \theta^Y \otimes R^Y\;,\;\theta^X \wedge \theta^Y \wedge \theta^Z \otimes 1_M)) = 0.
\end{equation}
Finally we note that $(2\theta^X \otimes R^Z - \theta^Y \otimes R^Y \; , \; \theta^X \wedge \theta^Y \wedge \theta^Z \otimes 1_M)$ descends, at the $E_1$-level of the spectral sequence, to the cohomology class represented by the 3-cocycle $$\theta^X \wedge \theta^Y \wedge \theta^Z.$$ Hence, it represents the odd cohomology class.
Let us summarize our discussion so far.
\begin{proposition}
The periodic cyclic cohomology of the Lie algebra $sl_2$ with coefficients in the SAYD module $M:=S({sl_2}^\ast)_{[2]}$ is represented by
\begin{align}
&\widetilde{HP}^0(sl_2,M)={\mathbb C}\Big\langle 1_M\Big\rangle, \\
&\widetilde{HP}^1(sl_2,M)={\mathbb C}\Big\langle (2\theta^X \otimes R^Z - \theta^Y \otimes R^Y\;,\;\theta^X\wedge \theta^Y\wedge \theta^Z \otimes 1_M) \Big\rangle.
\end{align}
\end{proposition}
\subsection{Computation of $HP({\cal H}_{1\rm S}, M_\delta)$}
We now consider the complex $C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, M_{\delta})$, which computes the periodic Hopf cyclic cohomology
\begin{equation}
HP({\cal H}_{1\rm S}{^{\rm cop}},M_{\delta}) = HP({\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U},M_{\delta}).
\end{equation}
\noindent We can immediately conclude that $M_{\delta}$ is also an SAYD module over $U(sl_2)$, with the same action and coaction as $M$, due to the unimodularity of $sl_2$. The corresponding filtration is then given by
\begin{equation}
F_0M_{\delta} = F_0M \otimes \mathbb{C}_{\delta} = {\mathbb C}\Big\langle R^X,R^Y,R^Z\Big\rangle \otimes \mathbb{C}_{\delta}, \quad F_pM_{\delta} = M_{\delta}, p \geq 1.
\end{equation}
\noindent We will first derive a Cartan-type homotopy formula for Hopf cyclic cohomology, as in \cite{MoscRang07}. Note that in \cite{MoscRang07} the SAYD module was one-dimensional, so we have to adapt the homotopy formula to fit our situation. To this end, let
\begin{equation}
D_Y:\mathcal{H} \to \mathcal{H}, \quad D_Y(h) := hY.
\end{equation}
Obviously, $D_Y$ is an $\mathcal{H}$-linear coderivation. Hence the operators
\begin{align}
\begin{split}
& \mathcal{L}_{D_Y}:C^n({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \to C^n({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \\
& {\cal L}_{D_Y}(m\otimes_{\cal H} c^0\ot\dots\ot c^n)=\sum_{i=0}^{n}m\otimes_{\cal H} c^0 \ot\dots\ot D_Y(c^i) \ot\dots\ot c^n,
\end{split}
\end{align}
\begin{align}
\begin{split}
& e_{D_Y}:C^n({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \to C^{n+1}({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \\
& e_{D_Y}(m\otimes_{\cal H} c^0\ot\dots\ot c^n)=(-1)^{n}m\pr{0}\otimes_{\cal H} c^0\ps{2}\otimes c^1\ot\dots\ot c^n\otimes m\pr{-1}D_Y(c^0\ps{1}),
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& E_{D_Y}:C^n({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \to C^{n-1}({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \\
&E_{D_Y}^{j,i}(m\otimes_{\cal H} c^0\ot\dots\ot c^{n})=\\
&=(-1)^{n(i+1)}\epsilon(c^0)m\pr{0}\otimes_{\cal H} c^{n-i+2}\ot\dots\ot c^{n+1}\otimes
m\pr{-1}c^1\ot\dots\ot m\pr{-(j-i)}c^{j-i}\otimes
\\
&\otimes m\pr{-(j-i+1)}D_Y(c^{j-i+1})\otimes m\pr{-(j-i+2)}c^{j-i+2}\ot\dots\ot
m\pr{-(n-i+1)}c^{n-i+1},
\end{split}
\end{align}
satisfy, by \cite[Proposition 3.7]{MoscRang07},
\begin{equation}
[E_{D_Y} + e_{D_Y}, b + B] = \mathcal{L}_{D_Y}.
\end{equation}
\noindent We next obtain an analogue of \cite[Lemma 3.8]{MoscRang07}.
\begin{lemma}\label{aux-68}
We have
\begin{equation}
\mathcal{L}_{D_Y} = I - \widetilde{ad}Y,
\end{equation}
where
\begin{equation}
\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}) = m_{\delta} \otimes adY(\widetilde{f} \otimes \widetilde{u}) - (m \cdot Y)_{\delta} \otimes \widetilde{f} \otimes \widetilde{u},
\end{equation}
and $m_{\delta} := m \otimes 1_{\mathbb{C}}$.
\end{lemma}
\begin{proof}
Let us first recall the isomorphism
\begin{equation}
\Theta := \Phi_2 \circ \Phi_1 \circ \Psi:C^{\bullet}_{\mathcal{H}}({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F},M_{\delta}) \to \mathfrak{Z}^{\bullet,\bullet}
\end{equation}
of cocyclic modules. By \cite{MoscRang09}, we know that
\begin{align}\label{aux-76}
\begin{split}
& \Psi(m_{\delta} \otimes_{\mathcal{H}} u^0 \blacktriangleright\hspace{-4pt} < f^0 \otimes \ldots \otimes u^n \blacktriangleright\hspace{-4pt} < f^n) = \\
& m_{\delta} \otimes_{\mathcal{H}} {u^0}^{\pr{-n-1}}f^0 \otimes \ldots \otimes {u^0}^{\pr{-1}} \cdots {u^n}^{\pr{-1}}f^n \otimes {u^0}^{\pr{0}} \otimes \ldots \otimes {u^n}^{\pr{0}} \\
& \Psi^{-1}(m_{\delta} \otimes_{\mathcal{H}} f^0 \otimes \ldots \otimes f^n \otimes u^0 \otimes \ldots \otimes u^n) = \\
& m_{\delta} \otimes_{\mathcal{H}} {u^0}^{\pr{0}} \blacktriangleright\hspace{-4pt} < S^{-1}({u^0}^{\pr{-1}})f^0 \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < S^{-1}({u^0}^{\pr{-n-1}}{u^1}^{\pr{-n}} \ldots {u^n}^{\pr{-1}})f^n,
\end{split}
\end{align}
\begin{align}\label{aux-77}
\begin{split}
& \Phi_1(m_{\delta} \otimes_{\mathcal{H}} f^0 \otimes \ldots \otimes f^n \otimes u^0 \otimes \ldots \otimes u^n) = \\
& m_{\delta} \cdot u^0\ps{2} \otimes_{{\cal F}} S^{-1}(u^0\ps{1}) \rhd (f^0 \otimes \ldots \otimes f^n) \otimes S(u^0\ps{3}) \cdot (u^1 \otimes \ldots \otimes u^n) \\
& \Phi_1^{-1}(m_{\delta} \otimes_{{\cal F}} f^0 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) = \\
& m_{\delta} \otimes_{\mathcal{H}} f^0 \otimes \ldots \otimes f^n \otimes 1_{U(\mathfrak{g}_1)} \otimes u^1 \otimes \ldots \otimes u^n,
\end{split}
\end{align}
and
\begin{align}\label{aux-78}
\begin{split}
& \Phi_2(m_{\delta} \otimes_{{\cal F}} f^0 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) = \\
& m_{\delta} \cdot f^0\ps{1} \otimes S(f^0\ps{2}) \cdot (f^1 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) \\
& \Phi_2^{-1}(m_{\delta} \otimes f^1 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) = \\
& m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes f^1 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n.
\end{split}
\end{align}
\noindent Here, the left $\mathcal{H}$-coaction on ${\cal U}$ is the one corresponding to the right ${\cal F}$-coaction. Namely,
\begin{equation}\label{aux-65}
u^{\pr{-1}} \otimes u^{\pr{0}} = S(u^{\pr{1}}) \otimes u^{\pr{0}}.
\end{equation}
\noindent We also recall that
\begin{align}
\begin{split}
& \Phi:\mathcal{H} = {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U} \to {\cal U} \blacktriangleright\hspace{-4pt} < {\cal F} \\
& \Phi(f \blacktriangleright\hspace{-4pt}\vartriangleleft u) = u^{\pr{0}} \blacktriangleright\hspace{-4pt} < fu^{\pr{1}}\\
& \Phi^{-1}(u \blacktriangleright\hspace{-4pt} < f) = fS^{-1}(u^{\pr{1}}) \blacktriangleright\hspace{-4pt}\vartriangleleft u^{\pr{0}}.
\end{split}
\end{align}
\noindent Therefore, we have
\begin{align}
\begin{split}
& \Theta \circ \mathcal{L}_{D_Y} \circ \Theta^{-1} (m_{\delta} \otimes f^1 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) = \\
& \Theta \circ \mathcal{L}_{D_Y} \circ \Psi^{-1} \circ \Phi_1^{-1} \circ \Phi_2^{-1} (m_{\delta} \otimes f^1 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) = \\
& \Theta \circ \mathcal{L}_{D_Y} \circ \Psi^{-1} \circ \Phi_1^{-1} (m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes f^1 \otimes \ldots \otimes f^n \otimes u^1 \otimes \ldots \otimes u^n) = \\
& \Theta \circ \mathcal{L}_{D_Y} \circ \Psi^{-1} (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal F}} \otimes f^1 \otimes \ldots \otimes f^n \otimes 1_{{\cal U}} \otimes u^1 \otimes \ldots \otimes u^n) = \\
& \Theta \circ \mathcal{L}_{D_Y} (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal U}} \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < S^{-1}({u^1}^{\pr{-1}}) \rhd f^1 \otimes \ldots \\
& \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < S^{-1}({u^1}^{\pr{-n}} \ldots {u^n}^{\pr{-1}}) \rhd f^n) = \\
& \Theta \circ \mathcal{L}_{D_Y} (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal U}} \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n),
\end{split}
\end{align}
where on the last equality we have used \eqref{aux-65}. In order to apply $\mathcal{L}_{D_Y}$, we make the observation
\begin{align}
\begin{split}
& \Phi D_Y \Phi^{-1} (u \blacktriangleright\hspace{-4pt} < f) = \Phi (fS^{-1}(u^{\pr{1}}) \blacktriangleright\hspace{-4pt}\vartriangleleft u^{\pr{0}}Y) = \\
& (u^{\pr{0}}Y)^{\pr{0}} \blacktriangleright\hspace{-4pt} < fS^{-1}(u^{\pr{1}})(u^{\pr{0}}Y)^{\pr{1}} = \\
& {u^{\pr{0}}\ps{1}}^{\pr{0}}Y^{\pr{0}} \blacktriangleright\hspace{-4pt} < fS^{-1}(u^{\pr{1}}){u^{\pr{0}}\ps{1}}^{\pr{1}}(u^{\pr{0}}\ps{2} \rhd Y^{\pr{1}}) = \\
& {u\ps{1}}^{\pr{0}}Y^{\pr{0}} \blacktriangleright\hspace{-4pt} < fS^{-1}({u\ps{1}}^{\pr{2}}{u\ps{2}}^{\pr{1}}){u\ps{1}}^{\pr{1}}({u\ps{2}}^{\pr{0}} \rhd Y^{\pr{1}}) = \\
& uY \blacktriangleright\hspace{-4pt} < f,
\end{split}
\end{align}
using the action-coaction compatibilities of a bicrossed product. Hence,
\begin{align}
\begin{split}
& \Theta \circ \mathcal{L}_{D_Y} (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal U}} \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n) = \\
& \Theta (m_{\delta} \otimes_{\mathcal{H}} Y \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n) + \\
& \sum_{i = 1}^n \Theta (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal U}} \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \\
& \ldots \otimes {u^i}^{\pr{0}}Y \blacktriangleright\hspace{-4pt} < {u^i}^{\pr{1}} \ldots {u^1}^{\pr{i}}f^i \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n).
\end{split}
\end{align}
\noindent We notice,
\begin{align}
\begin{split}
& \Psi (m_{\delta} \otimes_{\mathcal{H}} Y \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n) = \\
& m_{\delta} \otimes_{\mathcal{H}} Y^{\pr{-n-1}} \cdot 1_{{\cal F}} \otimes Y^{\pr{-n}}({u^1}^{\pr{0}})^{\pr{-n}}{u^1}^{\pr{1}}f^1 \otimes \ldots \\
& \ldots \otimes Y^{\pr{-1}}({u^1}^{\pr{0}})^{\pr{-1}} \ldots ({u^n}^{\pr{0}})^{\pr{-1}}{u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n \otimes Y^{\pr{0}} \otimes ({u^1}^{\pr{0}})^{\pr{0}} \otimes \ldots \otimes ({u^n}^{\pr{0}})^{\pr{0}} \\
& = m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal F}} \otimes f^1 \otimes \ldots \otimes f^n \otimes Y \otimes u^1 \otimes \ldots \otimes u^n,
\end{split}
\end{align}
where on the last equality we have used \eqref{aux-65}. Similarly,
\begin{align}
\begin{split}
& \Psi (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal U}} \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \\
& \ldots \otimes {u^i}^{\pr{0}}Y \blacktriangleright\hspace{-4pt} < {u^i}^{\pr{1}} \ldots {u^1}^{\pr{i}}f^i \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n) = \\
& m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal F}} \otimes f^1 \otimes \ldots \otimes f^n \otimes 1_{{\cal U}} \otimes u^1 \otimes \ldots \otimes u^iY \otimes \ldots \otimes u^n.
\end{split}
\end{align}
\noindent Therefore,
\begin{align}
\begin{split}
& \Theta (m_{\delta} \otimes_{\mathcal{H}} Y \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n) + \\
& \sum_{i = 1}^n \Theta (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal U}} \blacktriangleright\hspace{-4pt} < 1_{{\cal F}} \otimes {u^1}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^1}^{\pr{1}}f^1 \otimes \ldots \\
& \ldots \otimes {u^i}^{\pr{0}}Y \blacktriangleright\hspace{-4pt} < {u^i}^{\pr{1}} \ldots {u^1}^{\pr{i}}f^i \otimes \ldots \otimes {u^n}^{\pr{0}} \blacktriangleright\hspace{-4pt} < {u^n}^{\pr{1}} \ldots {u^1}^{\pr{n}}f^n) = \\
& \Phi_2 \circ \Phi_1 (m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal F}} \otimes \widetilde{f} \otimes Y \otimes \widetilde{u} + m_{\delta} \otimes_{\mathcal{H}} 1_{{\cal F}} \otimes \widetilde{f} \otimes 1_{{\cal U}} \otimes \widetilde{u} \cdot Y) = \\
& \Phi_2 (m_{\delta} \cdot Y\ps{2} \otimes_{{\cal F}} S^{-1}(Y\ps{1}) \rhd (1_{{\cal F}} \otimes \widetilde{f}) \otimes S(Y\ps{3}) \cdot \widetilde{u} + m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes \widetilde{f} \otimes \widetilde{u} \cdot Y).
\end{split}
\end{align}
\noindent Considering the fact that $Y \in \mathcal{H}$ is primitive, and hence $adY(f) = [Y,f] = Y \rhd f$, we conclude
\begin{align}
\begin{split}
& \Phi_2 (m_{\delta} \cdot Y\ps{2} \otimes_{{\cal F}} S^{-1}(Y\ps{1}) \rhd (1_{{\cal F}} \otimes \widetilde{f}) \otimes S(Y\ps{3}) \cdot \widetilde{u} + m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes \widetilde{f} \otimes \widetilde{u} \cdot Y) = \\
& \Phi_2 (- m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes adY(\widetilde{f}) \otimes \widetilde{u} - m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes \widetilde{f} \otimes Y \cdot \widetilde{u} + \\
& m_{\delta} \cdot Y \otimes_{{\cal F}} 1_{{\cal F}} \otimes \widetilde{f} \otimes \widetilde{u} + m_{\delta} \otimes_{{\cal F}} 1_{{\cal F}} \otimes \widetilde{f} \otimes \widetilde{u} \cdot Y) = \\
& m_{\delta} \cdot Y \otimes_{{\cal F}} \widetilde{f} \otimes \widetilde{u} - m_{\delta} \otimes_{{\cal F}} adY(\widetilde{f}) \otimes \widetilde{u} - m_{\delta} \otimes_{{\cal F}} \widetilde{f} \otimes adY(\widetilde{u}).
\end{split}
\end{align}
\noindent Finally, we recall $m_{\delta} \cdot Y = (m \cdot Y\ps{1})_{\delta}\delta(Y\ps{2}) = (m \cdot Y)_{\delta} + m_{\delta}$ to finish the proof.
\end{proof}
\begin{lemma}\label{aux-69}
The operator $\widetilde{ad}Y$ commutes with the horizontal operators \eqref{horizontal-operators} and the vertical operators \eqref{vertical-operators}.
\end{lemma}
\begin{proof}
We start with the horizontal operators. For the first horizontal coface, we have
\begin{align}
\begin{split}
& \overset{\ra}{\partial}_{0}(\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \overset{\ra}{\partial}_0(m_{\delta} \otimes adY(\widetilde{f} \otimes \widetilde{u}) - (m \cdot Y)_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}) = \\
& m_{\delta} \otimes 1 \otimes adY(\widetilde{f} \otimes \widetilde{u}) - (m \cdot Y)_{\delta} \otimes 1 \otimes \widetilde{f} \otimes \widetilde{u} = \\
& \widetilde{ad}Y (\overset{\ra}{\partial}_0(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})).
\end{split}
\end{align}
For $\overset{\ra}{\partial}_{i}$ with $1 \leq i \leq n$, the commutativity is a consequence of $adY \circ \Delta = \Delta \circ adY$ on ${\cal F}$. To see this, we notice that
\begin{align}
\begin{split}
& \Delta(adY(f)) = \Delta(Y \rhd f) = {Y\ps{1}}^{\pr{0}} \rhd f\ps{1} \otimes {Y\ps{1}}^{\pr{1}}(Y\ps{2} \rhd f\ps{2}) \\
& = adY(f\ps{1}) \otimes f\ps{2} + f\ps{1} \otimes adY(f\ps{2}) = adY(\Delta(f)).
\end{split}
\end{align}
\noindent For the commutation with the last horizontal coface operator, we proceed as follows. First we observe
\begin{align}
\begin{split}
& \widetilde{ad}Y(\overset{\ra}{\partial}_{n+1}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \widetilde{ad}Y({m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) = \\
& {m\pr{0}}_{\delta} \otimes adY(\widetilde{f}) \otimes [\widetilde{u}^{\pr{-1}}m\pr{-1}] \otimes \widetilde{u}^{\pr{0}} + {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes adY(\widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}})\otimes \widetilde{u}^{\pr{0}} \\
& + {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes [\widetilde{u}^{\pr{-1}}m\pr{-1}] \otimes adY(\widetilde{u}^{\pr{0}}) - (m\pr{0} \cdot Y)_{\delta} \otimes \widetilde{f} \otimes [\widetilde{u}^{\pr{-1}}m\pr{-1}] \otimes \widetilde{u}^{\pr{0}}.
\end{split}
\end{align}
\noindent Next, for any $h = g \blacktriangleright\hspace{-4pt}\vartriangleleft u \in \mathcal{H}$ and $f \in {\cal F}$, on one hand we have
\begin{equation}
adY(h \rhd f) = adY(g(u \rhd f)) = adY(g)(u \rhd f) + g(Yu \rhd f),
\end{equation}
and on the other hand,
\begin{align*}
&adY(h) \rhd f = (adY(g) \blacktriangleright\hspace{-4pt}\vartriangleleft u + g \blacktriangleright\hspace{-4pt}\vartriangleleft Yu - g \blacktriangleright\hspace{-4pt}\vartriangleleft uY) \rhd f \\
&\hspace{1.9cm}= adY(g)(u \rhd f) + g(Yu \rhd f) - g(uY \rhd f).
\end{align*}
\noindent In other words,
\begin{equation}
adY(h \rhd f) = adY(h) \rhd f + h \rhd adY(f).
\end{equation}
\noindent Therefore we have
\begin{align}
\begin{split}
& {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes adY(\widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}})\otimes \widetilde{u}^{\pr{0}} = \\
& {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes adY(\widetilde{u}^{\pr{-1}})m\pr{-1} \rhd 1_{{\cal F}}\otimes \widetilde{u}^{\pr{0}} + \\
& {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}^{\pr{-1}}adY(m\pr{-1}) \rhd 1_{{\cal F}}\otimes \widetilde{u}^{\pr{0}}.
\end{split}
\end{align}
\noindent Recalling \eqref{aux-65} and the coaction - multiplication compatibility on a bicrossed product, we observe that
\begin{align}
\begin{split}
& {adY(u)}^{\pr{-1}} \otimes {adY(u)}^{\pr{0}} = S({adY(u)}^{\pr{1}}) \otimes {adY(u)}^{\pr{0}} = \\
& S({Y\ps{1}}^{\pr{1}}(Y\ps{2} \rhd u^{\pr{1}})) \otimes {Y\ps{1}}^{\pr{0}}u^{\pr{0}} - S({u\ps{1}}^{\pr{1}}(u\ps{2} \rhd Y^{\pr{1}})) \otimes {u\ps{1}}^{\pr{0}}Y^{\pr{0}} = \\
& S(u^{\pr{1}}) \otimes Yu^{\pr{0}} + S(Y \rhd u^{\pr{1}}) \otimes u^{\pr{0}} - S(u^{\pr{1}}) \otimes u^{\pr{0}}Y,
\end{split}
\end{align}
where we have used $Y^{\pr{0}} \otimes Y^{\pr{1}} = Y \otimes 1$. This follows from $[Z,Y] = Z$ implying $Z \rhd Y = 0$.
\medskip
\noindent By \cite[Lemma 1.1]{MoscRang11} we also have $S(Y \rhd f) = Y \rhd S(f)$ for any $f \in {\cal F}$. Hence we can conclude
\begin{equation}\label{aux-72}
{adY(u)}^{\pr{-1}} \otimes {adY(u)}^{\pr{0}} = adY(u^{\pr{-1}}) \otimes u^{\pr{0}} + u^{\pr{-1}} \otimes adY(u^{\pr{0}}),
\end{equation}
which implies immediately that,
\begin{equation}
{adY(\widetilde{u})}^{\pr{-1}} \otimes {adY(\widetilde{u})}^{\pr{0}} = adY(\widetilde{u}^{\pr{-1}}) \otimes \widetilde{u}^{\pr{0}} + \widetilde{u}^{\pr{-1}} \otimes adY(\widetilde{u}^{\pr{0}}).
\end{equation}
\noindent Finally, by the right-left AYD compatibility of $M$ over ${\cal H}$ we have
\begin{equation}
(m \cdot Y)\pr{-1} \otimes (m \cdot Y)\pr{0} = m\pr{-1} \otimes m\pr{0} \cdot Y - adY(m\pr{-1}) \otimes m\pr{0}.
\end{equation}
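\noindent More precisely, writing the right-left AYD condition in the form $(m \cdot h)\pr{-1} \otimes (m \cdot h)\pr{0} = S(h\ps{3})\, m\pr{-1} h\ps{1} \otimes m\pr{0} \cdot h\ps{2}$ (a sketch in the standard convention for a right module and left comodule), the primitivity of $Y$ and $S(Y) = -Y$ give
\begin{equation}
(m \cdot Y)\pr{-1} \otimes (m \cdot Y)\pr{0} = -Y m\pr{-1} \otimes m\pr{0} + m\pr{-1} Y \otimes m\pr{0} + m\pr{-1} \otimes m\pr{0} \cdot Y,
\end{equation}
which is the displayed formula since $m\pr{-1} Y - Y m\pr{-1} = -adY(m\pr{-1})$.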
\noindent So, $\widetilde{ad}Y$ commutes with the last horizontal coface $\overset{\ra}{\partial}_{n+1}$ as
\begin{align}
\begin{split}
& \widetilde{ad}Y(\overset{\ra}{\partial}_{n+1}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = {m\pr{0}}_{\delta} \otimes adY(\widetilde{f}) \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}} + \\
& {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes {adY(\widetilde{u})}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}}\otimes {adY(\widetilde{u})}^{\pr{0}} - \\
& {(m \cdot Y)\pr{0}}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}^{\pr{-1}}(m \cdot Y)\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}} = \\
& \overset{\ra}{\partial}_{n+1}(\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})).
\end{split}
\end{align}
\noindent It is immediate to observe the commutation $\sigma_j \circ \widetilde{ad}Y = \widetilde{ad}Y \circ \sigma_j$ with the horizontal degeneracy operators.
\medskip
\noindent We now consider the horizontal cyclic operator. Let us first note that
\begin{equation}
m_{\delta} \cdot f = (m \cdot f)_{\delta}, \qquad f\in {\cal F}.
\end{equation}
We then have
\begin{align}
\begin{split}
& \widetilde{ad}Y(\overset{\ra}{\tau}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& \widetilde{ad}Y(({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}})) = \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes adY(S(f^1\ps{2})) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (adY(f^2 \ot\dots\ot f^p) \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes adY(\widetilde{u}^{\pr{-1}})m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}adY(m\pr{-1}) \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes adY(\widetilde{u}^{\pr{0}})) - \\
& (({m\pr{0}} \cdot f^1\ps{1}) \cdot Y)_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}).
\end{split}
\end{align}
\noindent Next, by the commutativity of $adY$ with the left $\mathcal{H}$-coaction on ${\cal U}$ as well as with the antipode, we can immediately conclude
\begin{align}
\begin{split}
& \widetilde{ad}Y(\overset{\ra}{\tau}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(adY(f^1\ps{2})) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (adY(f^2 \ot\dots\ot f^p) \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes {adY(\widetilde{u})}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes {adY(\widetilde{u})}^{\pr{0}}) + \\
& ({m\pr{0}} \cdot f^1\ps{1})_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}adY(m\pr{-1}) \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) - \\
& (({m\pr{0}} \cdot f^1\ps{1}) \cdot Y)_{\delta} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}).
\end{split}
\end{align}
\noindent Then by the module compatibility over the bicrossed product $\mathcal{H} = {\cal F} \blacktriangleright\hspace{-4pt}\vartriangleleft {\cal U}$, we have
\begin{equation}
(m \cdot Y) \cdot f = (m \cdot f) \cdot Y + m \cdot adY(f).
\end{equation}
Therefore,
\begin{align}
\begin{split}
& \widetilde{ad}Y(\overset{\ra}{\tau}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& ({m\pr{0}})_{\delta} \cdot adY(f^1\ps{1}) \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& {m\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(adY(f^1\ps{2})) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& {m\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (adY(f^2 \ot\dots\ot f^p) \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& {m\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes {adY(\widetilde{u})}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes {adY(\widetilde{u})}^{\pr{0}}) - \\
& {(m \cdot Y)\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}(m \cdot Y)\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}).
\end{split}
\end{align}
\noindent Finally, by the commutativity $adY \circ \Delta = \Delta \circ adY$ on ${\cal F}$ we finish as
\begin{align}
\begin{split}
& \widetilde{ad}Y(\overset{\ra}{\tau}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& {m\pr{0}}_{\delta} \cdot {adY(f^1)}\ps{1} \otimes S({adY(f^1)}\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& {m\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (adY(f^2 \ot\dots\ot f^p) \otimes \widetilde{u}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) + \\
& {m\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes {adY(\widetilde{u})}^{\pr{-1}}m\pr{-1} \rhd 1_{{\cal F}} \otimes {adY(\widetilde{u})}^{\pr{0}}) - \\
& {(m \cdot Y)\pr{0}}_{\delta} \cdot f^1\ps{1} \otimes S(f^1\ps{2}) \cdot (f^2 \ot\dots\ot f^p \otimes \widetilde{u}^{\pr{-1}}(m \cdot Y)\pr{-1} \rhd 1_{{\cal F}} \otimes \widetilde{u}^{\pr{0}}) \\
& = \overset{\ra}{\tau}(\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})).
\end{split}
\end{align}
\noindent We continue with the vertical operators. The proofs of the commutation relations
\begin{equation}
\uparrow\hspace{-4pt}\partial_i \circ \widetilde{ad}Y = \widetilde{ad}Y \circ \uparrow\hspace{-4pt}\partial_i, \quad 0 \leq i \leq n
\end{equation}
are similar to their horizontal counterparts. One notes that this time the commutativity $adY \circ \Delta = \Delta \circ adY$ on ${\cal U}$ is needed.
\medskip
\noindent Commutativity with the last vertical coface operator follows, similarly as the horizontal case, from the AYD compatibility on $M$ over ${\cal H}$. Indeed,
\begin{align}
\begin{split}
& \widetilde{ad}Y(\uparrow\hspace{-4pt}\partial_{n+1}(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \widetilde{ad}Y({m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u} \otimes \wbar{m\pr{-1}}) = \\
& {m\pr{0}}_{\delta} \otimes adY(\widetilde{f} \otimes \widetilde{u}) \otimes \wbar{m\pr{-1}} + {m\pr{0}}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u} \otimes \wbar{adY(m\pr{-1})} \\
& - {(m\pr{0} \cdot Y)}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u} \otimes \wbar{m\pr{-1}} = \\
& {m\pr{0}}_{\delta} \otimes adY(\widetilde{f} \otimes \widetilde{u}) \otimes \wbar{m\pr{-1}} - {(m \cdot Y)\pr{0}}_{\delta} \otimes \widetilde{f} \otimes \widetilde{u} \otimes \wbar{(m \cdot Y)\pr{-1}} = \\
& \uparrow\hspace{-4pt}\partial_{n+1}(\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})).
\end{split}
\end{align}
\noindent Finally, we show the commutativity of $\widetilde{ad}Y$ with the vertical cyclic operator. First, we notice that we can rewrite it as
\begin{align}
\begin{split}
&\uparrow\hspace{-4pt}\tau(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}) = ({m\pr{0}}_{\delta} \cdot u^1\ps{4}) \cdot S^{-1}(u^1\ps{3} \rhd 1_{{\cal F}}) \, \otimes \\
& S(S^{-1}(u^1\ps{2}) \rhd 1_{{\cal F}}) \cdot \left( S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{5}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}})
\right) \\
& = {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) \\
& = {(m\pr{0} \cdot u^1\ps{3})}_{\delta}\delta(u^1\ps{2}) \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{4}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}).
\end{split}
\end{align}
\noindent Therefore we have
\begin{align}
\begin{split}
& \widetilde{ad}Y(\uparrow\hspace{-4pt}\tau(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& \widetilde{ad}Y({m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}})) = \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes adY(S^{-1}(u^1\ps{1}) \rhd \widetilde{f}) \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes adY(S(u^1\ps{3})) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (adY(u^2 \ot\dots\ot u^q) \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{adY(m\pr{-1})}) - \\
& {(m\pr{0} \cdot u^1\ps{3}Y)}_{\delta}\delta(u^1\ps{2}) \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{4}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}).
\end{split}
\end{align}
\noindent Recalling that
\begin{equation}\label{aux-71}
adY(h \rhd f) = adY(h) \rhd f + h \rhd adY(f),
\end{equation}
we then straightforwardly extend it to
\begin{equation}
adY(h \rhd \widetilde{f}) = adY(h) \rhd \widetilde{f} + h \rhd adY(\widetilde{f}).
\end{equation}
\noindent As a result, we have
\begin{align}
\begin{split}
{m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes adY(S^{-1}(u^1\ps{1}) \rhd \widetilde{f}) \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) = \\
{m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(adY(u^1\ps{1})) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
{m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd adY(\widetilde{f}) \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}).
\end{split}
\end{align}
\noindent Next, we observe that
\begin{align}
\begin{split}
& - {(m\pr{0} \cdot u^1\ps{3}Y)}_{\delta}\delta(u^1\ps{2}) \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{4}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) = \\
& {(m\pr{0} \cdot adY(u^1\ps{3}))}_{\delta}\delta(u^1\ps{2}) \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{4}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) -\\
& {(m\pr{0} \cdot Y)}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}),
\end{split}
\end{align}
where,
\begin{equation}\label{aux-73}
(m \cdot adY(u\ps{2}))_{\delta} \delta(u\ps{1}) = m_{\delta} \cdot adY(u).
\end{equation}
\noindent Therefore we have
\begin{align}
\begin{split}
& \widetilde{ad}Y(\uparrow\hspace{-4pt}\tau(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd adY(\widetilde{f}) \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(adY(u^1\ps{1})) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot adY(u^1\ps{2}) \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(adY(u^1\ps{3})) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (adY(u^2 \ot\dots\ot u^q) \otimes \wbar{m\pr{-1}}) - \\
& {(m \cdot Y)\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{(m \cdot Y)\pr{-1}}).
\end{split}
\end{align}
\noindent Then the commutativity $\Delta \circ adY = adY \circ \Delta$ on ${\cal U}$ finishes the proof as
\begin{align}
\begin{split}
& \widetilde{ad}Y(\uparrow\hspace{-4pt}\tau(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})) = \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd adY(\widetilde{f}) \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot adY(u^1)\ps{2} \otimes S^{-1}(adY(u^1)\ps{1}) \rhd \widetilde{f} \otimes S(adY(u^1)\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{m\pr{-1}}) + \\
& {m\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (adY(u^2 \ot\dots\ot u^q) \otimes \wbar{m\pr{-1}}) - \\
& {(m \cdot Y)\pr{0}}_{\delta} \cdot u^1\ps{2} \otimes S^{-1}(u^1\ps{1}) \rhd \widetilde{f} \otimes S(u^1\ps{3}) \cdot (u^2 \ot\dots\ot u^q \otimes \wbar{(m \cdot Y)\pr{-1}}) \\
& = \uparrow\hspace{-4pt}\tau(\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u})).
\end{split}
\end{align}
\end{proof}
\noindent For the generators $X,Y,\delta_1 \in \mathcal{H}$, it is already known that
\begin{equation}
adY(Y) = 0, \quad adY(X) = X, \quad adY(\delta_1) = \delta_1.
\end{equation}
\noindent We recall here the action of $Y \in sl_2$ on $M$ as
\begin{equation}
1_M \lhd Y = 0, \quad R^X \lhd Y = R^X, \quad R^Y \lhd Y = 0, \quad R^Z \lhd Y = -R^Z.
\end{equation}
\noindent Hence we define the following weights on the cyclic complex,
\begin{align}
\begin{split}
& \nm{Y} = 0, \quad \nm{X} = 1, \quad \nm{\delta_1} = 1, \\
& \nm{1_M} = 0, \quad \nm{R^X} = -1, \quad \nm{R^Y} = 0, \quad \nm{R^Z} = 1,
\end{split}
\end{align}
with which we can express the following property of the operator $\widetilde{ad}Y$:
\begin{equation}
\widetilde{ad}Y(m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}) = \nm{m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}}m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u},
\end{equation}
where $\nm{m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}} := \nm{m} + \nm{\widetilde{f}} + \nm{\widetilde{u}}$.
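\medskip
\noindent For instance, an element $m_{\delta} \otimes \delta_1 \otimes Y$ with $m = 1_M$ has weight $0 + 1 + 0 = 1$, while for $m = R^Z$ the element $m_{\delta} \otimes \delta_1 \otimes X$ has weight $1 + 1 + 1 = 3$. Note also that the weight assigned to a module element is minus its eigenvalue for the right action of $Y$ recalled above, reflecting the sign with which the action of $Y$ on $M$ enters $\widetilde{ad}Y$.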
\medskip
\noindent Hence, the operator $\widetilde{ad}Y$ acts as a grading (weight) operator. Extending the above grading to the cocyclic complex $\mathfrak{Z}^{\bullet, \bullet}$, we have
\begin{equation}
\mathfrak{Z}^{\bullet, \bullet} = \bigoplus_{k \in \mathbb{Z}}\mathfrak{Z}[k]^{\bullet, \bullet},
\end{equation}
where
\begin{equation}
\mathfrak{Z}[k] = \Big\{m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u} \, \Big| \, \nm{m_{\delta} \otimes \widetilde{f} \otimes \widetilde{u}}= k\Big\}.
\end{equation}
\noindent As a result of Lemma \ref{aux-69}, we can say that $\mathfrak{Z}[k]$ is a subcomplex for any $k \in \mathbb{Z}$, and hence the cohomology inherits the grading. Namely,
\begin{equation}
HP(\mathcal{H},M_{\delta}) = \bigoplus_{k \in \mathbb{Z}}H(\mathfrak{Z}[k]).
\end{equation}
\noindent Moreover, using Lemma \ref{aux-68} we conclude the following analogue of Corollary 3.10 in \cite{MoscRang07}.
\begin{corollary}\label{corollary-weight-1}
The cohomology is captured by the weight 1 subcomplex, {\it i.e., }\
\begin{equation}
H(\mathfrak{Z}[1]) = HP(\mathcal{H},M_{\delta}), \quad H(\mathfrak{Z}[k]) = 0, \quad k \neq 1.
\end{equation}
\end{corollary}
\begin{proposition}
The odd and even periodic Hopf cyclic cohomologies of ${\cal H}_{1\rm S}{^{\rm cop}}$ with coefficients in $M_\delta$ are both one-dimensional. Their classes are represented, in the $E_1$ term of the natural spectral sequence associated to $M_\delta$, by the following cocycles.
\begin{align}
&c^{\rm odd}={\bf 1} \otimes \delta_1 \in E_1^{1,{\rm odd}}\\
&c^{\rm even} = {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y \in E_1^{1,{\rm even}}.
\end{align}
Here, ${\bf 1} := 1_M \otimes {\mathbb C}_\delta$.
\end{proposition}
\begin{proof}
We have seen that all cohomology classes are concentrated in the weight 1 subcomplex.
On the other hand, the $E_1$ term of the spectral sequence associated to the above-mentioned filtration on $M_{\delta}$ is
\begin{equation}
E_1^{j,i}(\mathcal{H},M_{\delta}) = H^{i+j}(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, F_jM_{\delta} / F_{j-1}M_{\delta})),
\end{equation}
where $F_{0}M_{\delta} / F_{-1}M_{\delta} \cong F_{0}M_{\delta}$, $F_{1}M_{\delta} / F_0M_{\delta} \cong \mathbb{C}_{\delta}$ and $F_{j+1}M_{\delta} / F_jM_{\delta} = 0$ for $j \geq 1$.
\medskip
\noindent Therefore,
\begin{equation}
E_1^{0,i}(\mathcal{H},M_{\delta}) = 0, \quad E_1^{1,i}(\mathcal{H},M_{\delta}) = H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, \mathbb{C}_{\delta})), \quad E_1^{j,i}(\mathcal{H},M_{\delta}) = 0, \quad j \geq 2.
\end{equation}
\noindent So the spectral sequence collapses at the $E_2$ term and we get
\begin{equation}
E_2^{0,i}(\mathcal{H},M_{\delta}) \cong E_{\infty}^{0,i}(\mathcal{H},M_{\delta}) = 0,
\end{equation}
\begin{equation}\label{aux-70}
E_2^{1,i}(\mathcal{H}, M_\delta) \cong E_{\infty}^{1,i}(\mathcal{H}, M_\delta) = F_1H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, M_{\delta})) / F_0H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, M_{\delta})),
\end{equation}
and
\begin{equation}
E_2^{j,i}(\mathcal{H}, M_\delta) \cong E_{\infty}^{j,i}(\mathcal{H},M_\delta) = 0, \quad j \geq 2.
\end{equation}
\noindent By definition of the induced filtration on the cohomology groups, we have
\begin{align}
\begin{split}
& F_1H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, M_{\delta})) = H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, F_1M_{\delta})) = \\
& H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, M_{\delta})),
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& F_0H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, M_{\delta})) = H^i(C({\cal U} \blacktriangleright\hspace{-4pt} < {\cal F}, F_0M_{\delta})) \cong \\
& H^i(W(sl_2, F_0M)) = 0,
\end{split}
\end{align}
where the last equality follows from Whitehead's theorem.
\end{proof}
\subsubsection{Construction of a representative cocycle for the odd class}
In this subsection we first compute the odd cocycle in the total complex ${\rm{Tot}}^{\bullet}({\cal F},{\cal U},M_\delta)$ of the bicomplex \eqref{bicocyclic-bicomplex}.
Let us recall the total mixed complex
\begin{equation}
{\rm{Tot}}^{\bullet}({\cal F},{\cal U},M_\delta):=\bigoplus_{p+q = \bullet}M_\delta \otimes {\cal F}^{\otimes\;p} \otimes {\cal U}^{\otimes\;q},
\end{equation}
with the operators
\begin{align}
& \overset{\ra}{b}_p=\sum_{i=0}^{p+1} (-1)^{i}\overset{\ra}{\partial}_i, \qquad \uparrow\hspace{-4pt}b_q=\sum_{i=0}^{q+1} (-1)^{i}\uparrow\hspace{-4pt}\partial_i, \qquad b_T=\sum_{p+q=n}\overset{\ra}{b}_p +(-1)^p \uparrow\hspace{-4pt}b_q \\
& \overset{\ra}{B}_p=(\sum_{i=0}^{p-1}(-1)^{(p-1)i}\overset{\ra}{\tau}^i)\overset{\rightarrow}{\sigma}_{p-1}\overset{\ra}{\tau}, \quad \uparrow\hspace{-4pt}B_q=(\sum_{i=0}^{q-1}(-1)^{(q-1)i}\uparrow\hspace{-4pt}\tau^i)\uparrow\hspace{-4pt}\sigma_{q-1}\uparrow\hspace{-4pt}\tau, \\\notag
& \hspace{3cm} \quad B_T=\sum_{p+q=n}\overset{\ra}{B}_p+(-1)^p\uparrow\hspace{-4pt}B_q.
\end{align}
\begin{proposition}
Let
\begin{equation}
c':={\bf 1} \otimes \delta_1 \in M_\delta \otimes {\cal F}
\end{equation}
and
\begin{equation}
c''':= {\bf R}^Y \otimes X + 2{\bf R}^Z \otimes Y \in M_\delta \otimes {\cal U}.
\end{equation}
Then $c'+c''' \in {\rm{Tot}}^1({\cal F},{\cal U},M_\delta)$ is a Hochschild cocycle.
\end{proposition}
\begin{proof}
We start with the element $c':={\bf 1} \otimes \delta_1 \in M_\delta \otimes {\cal F}$.
\medskip
\noindent The equality $\uparrow\hspace{-4pt}b(c')=0$ is immediate. Next, we observe that
\begin{align}
\begin{split}
& \overset{\ra}{b}(c') = -{\bf R}^X \otimes \delta_1 \otimes X - {\bf R}^Y \otimes \delta_1 \otimes Y \\
& = -{\bf R}^X \otimes \delta_1 \otimes X + {\bf R}^Y \otimes \delta_1 \otimes Y + {\bf R}^X \otimes {\delta_1}^2 \otimes Y - {\bf R}^X \otimes {\delta_1}^2 \otimes Y - 2{\bf R}^Y \otimes \delta_1 \otimes Y \\
& = \uparrow\hspace{-4pt}b({\bf R}^Y \otimes X+ 2{\bf R}^Z \otimes Y).
\end{split}
\end{align}
So, for the element $c''':={\bf R}^Y \otimes X + 2{\bf R}^Z \otimes Y \in M_\delta \otimes {\cal U}$, we have $\overset{\ra}{b}(c') - \uparrow\hspace{-4pt}b(c''') = 0$. Finally we notice $\overset{\ra}{b}(c''')=0$.
\end{proof}
\begin{proposition}
The element $c'+c''' \in {\rm{Tot}}^1({\cal F},{\cal U},M_\delta)$ is a Connes cycle.
\end{proposition}
\begin{proof}
Using the actions of ${\cal F}$ and ${\cal U}$ on $M_\delta$, we directly conclude that on the one hand $\uparrow\hspace{-4pt}B(c') = {\bf R}^Z$, while on the other hand $\overset{\ra}{B}(c''') = -{\bf R}^Z$, so that the two contributions cancel.
\end{proof}
\bigskip
\noindent Our next task is to send this cocycle to the cyclic complex $C^1({\cal H},M_\delta)$. This is a two-step process. We first use the Alexander-Whitney map
\begin{equation}\label{AW}
\, AW:= \bigoplus_{p+q=n} AW_{p,q}: \mathop{\rm Tot}\nolimits^n({\cal F},{\cal U},M_\delta)\rightarrow \mathfrak{Z}^{n,n},
\end{equation}
\begin{equation*}
AW_{p,q}: {\cal F}^{\otimes p}\otimes {\cal U}^{\otimes q}\longrightarrow {\cal F}^{\otimes
p+q}\otimes {\cal U}^{\otimes p+q}
\end{equation*}
\begin{equation*}
AW_{p,q}=(-1)^{p+q}\underset{
p\;\text{times}}{\underbrace{\uparrow\hspace{-4pt}\partial_0\uparrow\hspace{-4pt}\partial_0\dots
\uparrow\hspace{-4pt}\partial_0}}\overset{\ra}{\partial}_n\overset{\ra}{\partial}_{n-1}\dots \overset{\ra}{\partial}_{p+1} \, .
\end{equation*}
to pass to the diagonal complex $\mathfrak{Z}^{\bullet,\bullet}({\cal H},{\cal F},M_\delta)$. It is checked that
\begin{equation}
AW_{1,0}(c') = - {\bf 1} \otimes \delta_1 \otimes 1 - {\bf R}^X \otimes \delta_1 \otimes X - {\bf R}^Y \otimes \delta_1 \otimes Y,
\end{equation}
as well as
\begin{equation}
AW_{0,1}(c''') = - {\bf R}^Y \otimes 1 \otimes X - 2{\bf R}^Z \otimes 1 \otimes Y.
\end{equation}
\noindent Summing them up we get
\begin{equation}
c^{\rm odd}_{\rm diag} := - {\bf 1} \otimes \delta_1 \otimes 1 - {\bf R}^X \otimes \delta_1 \otimes X - {\bf R}^Y \otimes \delta_1 \otimes Y - {\bf R}^Y \otimes 1 \otimes X - 2{\bf R}^Z \otimes 1 \otimes Y.
\end{equation}
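\medskip
\noindent As a consistency check, each of the five terms of $c^{\rm odd}_{\rm diag}$ has total weight $1$ with respect to the grading introduced above; for instance $\nm{{\bf R}^X \otimes \delta_1 \otimes X} = -1 + 1 + 1 = 1$ and $\nm{{\bf R}^Z \otimes 1 \otimes Y} = 1 + 0 + 0 = 1$. This is in accordance with Corollary \ref{corollary-weight-1}.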
\noindent Finally, via the quasi-isomorphism
\begin{align}\label{PSI-1}
\begin{split}
& \Psi: \mathfrak{Z}^{\bullet,\bullet}({\cal H},{\cal F},M_\delta) \longrightarrow C^\bullet({\cal H}, M_\delta) \\
&\Psi(m\otimes f^1\ot\dots\ot f^n \otimes u^1\ot\dots\ot u^n)\\
&=\sum m\otimes f^1\blacktriangleright\hspace{-4pt}\vartriangleleft u^1\ns{0}\otimes f^2u^1\ns{1}\blacktriangleright\hspace{-4pt}\vartriangleleft u^2\ns{0}\ot\dots\ot \\
& \ot\dots\ot f^n u^1\ns{n-1} \dots u^{n-1}\ns{1}\blacktriangleright\hspace{-4pt}\vartriangleleft u^n ,
\end{split}
\end{align}
which is recalled from \cite{RangSutl},
we carry the element $c^{\rm odd}_{\rm diag} \in \mathfrak{Z}^{1,1}({\cal H},{\cal F},M_\delta)$ to
\begin{equation}\label{c-odd}
c^{\rm odd} = - \Big({\bf 1} \otimes \delta_1 + {\bf R}^Y \otimes X + {\bf R}^X \otimes \delta_1X + {\bf R}^Y \otimes \delta_1Y + 2 {\bf R}^Z \otimes Y \Big) \in C^1({\cal H}, M_\delta).
\end{equation}
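\medskip
\noindent Observe that, modulo terms whose module part lies in $F_0M_\delta$, the cocycle $c^{\rm odd}$ reduces to $-{\bf 1} \otimes \delta_1$, recovering (up to an overall sign) the representative ${\bf 1} \otimes \delta_1$ announced earlier in the $E_1$ term.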
\begin{proposition}
The element $c^{\rm odd}$ defined in \eqref{c-odd} is a Hochschild cocycle.
\end{proposition}
\begin{proof}
We first calculate the images of its individual terms under the Hochschild coboundary $b:C^1(\mathcal{H},M_{\delta}) \to C^2(\mathcal{H},M_{\delta})$.
\begin{align}
\begin{split}
& b({\bf 1} \otimes \delta_1) = {\bf 1} \otimes 1_{\mathcal{H}} \otimes \delta_1 - {\bf 1} \otimes \Delta(\delta_1) + {\bf 1} \otimes \delta_1 \otimes 1 + {\bf R}^Y \otimes \delta_1 \otimes Y + {\bf R}^X \otimes \delta_1 \otimes X \\
& = {\bf R}^Y \otimes \delta_1 \otimes Y + {\bf R}^X \otimes \delta_1 \otimes X, \\[.2cm]
& b({\bf R}^Y \otimes X) = {\bf R}^Y \otimes 1_{\mathcal{H}} \otimes X - {\bf R}^Y \otimes \Delta(X) + {\bf R}^Y \otimes X \otimes 1_{\mathcal{H}} + {\bf R}^X \otimes X \otimes \delta_1 \\
& = {\bf R}^X \otimes X \otimes \delta_1 - {\bf R}^Y \otimes Y \otimes \delta_1, \\[.2cm]
& b({\bf R}^X \otimes \delta_1X) = {\bf R}^X \otimes 1_{\mathcal{H}} \otimes \delta_1X - {\bf R}^X \otimes \Delta(\delta_1X) + {\bf R}^X \otimes \delta_1X \otimes 1_{\mathcal{H}} \\
& = - {\bf R}^X \otimes \delta_1 \otimes X - {\bf R}^X \otimes \delta_1Y \otimes \delta_1 - {\bf R}^X \otimes X \otimes \delta_1 - {\bf R}^X \otimes Y \otimes {\delta_1}^2, \\[.2cm]
& b({\bf R}^Y \otimes \delta_1Y) = {\bf R}^Y \otimes 1_{\mathcal{H}} \otimes \delta_1Y - {\bf R}^Y \otimes \Delta(\delta_1Y) + {\bf R}^Y \otimes \delta_1Y \otimes 1_{\mathcal{H}} + {\bf R}^X \otimes \delta_1Y \otimes \delta_1 \\
& = {\bf R}^X \otimes \delta_1Y \otimes \delta_1 - {\bf R}^Y \otimes \delta_1 \otimes Y - {\bf R}^Y \otimes Y \otimes \delta_1, \\[.2cm]
& b({\bf R}^Z \otimes Y) = {\bf R}^Z \otimes 1_{\mathcal{H}} \otimes Y - {\bf R}^Z \otimes \Delta(Y) + {\bf R}^Z \otimes Y \otimes 1_{\mathcal{H}} \\
& + {\bf R}^Y \otimes Y \otimes \delta_1 + \frac{1}{2} {\bf R}^X \otimes Y \otimes {\delta_1}^2 \\
& = {\bf R}^Y \otimes Y \otimes \delta_1 + \frac{1}{2} {\bf R}^X \otimes Y \otimes {\delta_1}^2.
\end{split}
\end{align}
\noindent Now, summing up we get
\begin{equation}
b({\bf 1} \otimes \delta_1 + {\bf R}^Y \otimes X + {\bf R}^X \otimes \delta_1X + {\bf R}^Y \otimes \delta_1Y + 2 \cdot {\bf R}^Z \otimes Y) = 0.
\end{equation}
\end{proof}
\begin{proposition}
The Hochschild cocycle $c^{\rm odd}$ defined in \eqref{c-odd} vanishes under the Connes boundary map.
\end{proposition}
\begin{proof}
The Connes boundary is defined on the normalized bi-complex by the formula
\begin{equation}
B = \sum_{i = 0}^n (-1)^{ni}\tau^i \circ \sigma_{-1},
\end{equation}
where
\begin{equation}
\sigma_{-1}(m_{\delta} \otimes h^1 \otimes \ldots \otimes h^{n+1}) = m_{\delta} \cdot h^1\ps{1} \otimes S(h^1\ps{2}) \cdot (h^2 \otimes \ldots \otimes h^{n+1})
\end{equation}
is the extra degeneracy. Accordingly,
\begin{align}
\begin{split}
& B({\bf 1} \otimes \delta_1 + {\bf R}^Y \otimes X + {\bf R}^X \otimes \delta_1X + {\bf R}^Y \otimes \delta_1Y + 2 \cdot {\bf R}^Z \otimes Y) = \\
& {\bf 1} \cdot \delta_1 + {\bf R}^Y \cdot X + {\bf R}^X \cdot \delta_1X + {\bf R}^Y \cdot \delta_1Y + 2 \cdot {\bf R}^Z \cdot Y = \\
& {\bf R}^Z - {\bf R}^Z = 0.
\end{split}
\end{align}
\end{proof}
\subsubsection{Construction of a representative cocycle for the even class}
\begin{proposition}
Let
\begin{align}
\begin{split}
& c := {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X - {\bf R}^X \otimes XY \otimes X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y \\
& \hspace{1.2cm} + {\bf R}^Y \otimes X \otimes Y^2 - {\bf R}^Y \otimes Y \otimes X \in M_\delta \otimes {\cal U}^{\otimes\;2}
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& c'' := - {\bf R}^X \otimes \delta_1 \otimes XY^2 + \frac{2}{3}{\bf R}^X \otimes {\delta_1}^2 \otimes Y^3 +\frac{1}{3}{\bf R}^Y \otimes \delta_1 \otimes Y^3 \\
& \hspace{1.2cm} - \frac{1}{4}{\bf R}^X \otimes {\delta_1}^2 \otimes Y^2 - \frac{1}{2}{\bf R}^Y \otimes \delta_1 \otimes Y^2 \in M_\delta \otimes {\cal F} \otimes {\cal U}.
\end{split}
\end{align}
Then $c+c'' \in {\rm{Tot}}^2({\cal F},{\cal U},M_\delta)$ is a Hochschild cocycle.
\end{proposition}
\begin{proof}
We start with the element
\begin{align}
\begin{split}
& c := {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X - {\bf R}^X \otimes XY \otimes X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y \\
& + {\bf R}^Y \otimes X \otimes Y^2 - {\bf R}^Y \otimes Y \otimes X.
\end{split}
\end{align}
\noindent It is immediate that $\overset{\ra}{b}(c) = 0$. To be able to compute $\uparrow\hspace{-4pt}b(c)$, we need to determine the following ${\cal F}$-coactions.
\begin{align}
\begin{split}
& {\bf R}^X \otimes (XY)^{\pr{-1}}X^{\pr{-1}} \otimes (XY)^{\pr{0}} \otimes X^{\pr{0}} \\
& + {\bf R}^X \otimes (X^2)^{\pr{-1}} \otimes Y \otimes (X^2)^{\pr{0}} - {{\bf R}^Y}\pr{0} \otimes {{\bf R}^Y}\pr{-1}(XY)^{\pr{-1}} \otimes (XY)^{\pr{0}} \otimes Y \\
& - {{\bf R}^Y}\pr{0} \otimes {{\bf R}^Y}\pr{-1}X^{\pr{-1}} \otimes X^{\pr{0}} \otimes Y^2 + {{\bf R}^Y}\pr{0} \otimes {{\bf R}^Y}\pr{-1}X^{\pr{-1}} \otimes Y \otimes X^{\pr{0}}.
\end{split}
\end{align}
\noindent Hence, observing
\begin{align}
\begin{split}
& \blacktriangledown(X^2) = (X^2)^{\pr{0}} \otimes (X^2)^{\pr{1}} = {X\ps{1}}^{\pr{0}}X^{\pr{0}} \otimes {X\ps{1}}^{\pr{1}}(X\ps{2} \rhd X^{\pr{1}}) \\
& = X^2 \otimes 1 + 2XY \otimes \delta_1 + X \otimes \delta_1 + Y^2 \otimes {\delta_1}^2 + \frac{1}{2}Y \otimes {\delta_1}^2,
\end{split}
\end{align}
and
\begin{equation}
\blacktriangledown(XY) = (XY)^{\pr{0}} \otimes (XY)^{\pr{1}} = X^{\pr{0}}Y \otimes X^{\pr{1}} = XY \otimes 1 + Y^2 \otimes \delta_1,
\end{equation}
we have
\begin{align}
\begin{split}
& \uparrow\hspace{-4pt}b_0(c) = -{\bf R}^X \otimes \delta_1 \otimes Y^2 \otimes X - {\bf R}^X \otimes \delta_1 \otimes XY \otimes Y + {\bf R}^X \otimes {\delta_1}^2 \otimes Y^2 \otimes Y \\
& - 2{\bf R}^X \otimes \delta_1 \otimes Y \otimes XY - {\bf R}^X \otimes \delta_1 \otimes Y \otimes X + {\bf R}^X \otimes {\delta_1}^2 \otimes Y \otimes Y^2 \\
& + \frac{1}{2}{\bf R}^X \otimes {\delta_1}^2 \otimes Y \otimes Y - {\bf R}^X \otimes \delta_1 \otimes XY \otimes Y + {\bf R}^Y \otimes \delta_1 \otimes Y^2 \otimes Y \\
& + {\bf R}^X \otimes {\delta_1}^2 \otimes Y^2 \otimes Y - {\bf R}^X \otimes \delta_1 \otimes X \otimes Y^2 + {\bf R}^Y \otimes \delta_1 \otimes Y \otimes Y^2 \\
& + {\bf R}^X \otimes {\delta_1}^2 \otimes Y \otimes Y^2 + {\bf R}^X \otimes \delta_1 \otimes Y \otimes X - {\bf R}^Y \otimes \delta_1 \otimes Y \otimes Y - {\bf R}^X \otimes {\delta_1}^2 \otimes Y \otimes Y.
\end{split}
\end{align}
\noindent It is now clear that
\begin{align}
\begin{split}
& \uparrow\hspace{-4pt}b(c) = \overset{\ra}{b}\big({\bf R}^X \otimes \delta_1 \otimes XY^2 - \frac{2}{3}{\bf R}^X \otimes {\delta_1}^2 \otimes Y^3 -\frac{1}{3}{\bf R}^Y \otimes \delta_1 \otimes Y^3 \\
& \hspace{1.6cm} + \frac{1}{4}{\bf R}^X \otimes {\delta_1}^2 \otimes Y^2 + \frac{1}{2}{\bf R}^Y \otimes \delta_1 \otimes Y^2 \big).
\end{split}
\end{align}
\noindent Therefore, for the element
\begin{align}
\begin{split}
& c'' := - {\bf R}^X \otimes \delta_1 \otimes XY^2 + \frac{2}{3}{\bf R}^X \otimes {\delta_1}^2 \otimes Y^3 +\frac{1}{3}{\bf R}^Y \otimes \delta_1 \otimes Y^3 \\
& \hspace{1.2cm} - \frac{1}{4}{\bf R}^X \otimes {\delta_1}^2 \otimes Y^2 - \frac{1}{2}{\bf R}^Y \otimes \delta_1 \otimes Y^2,
\end{split}
\end{align}
we have $\overset{\ra}{b}(c'') + \uparrow\hspace{-4pt}b(c) = 0$.
\noindent Finally we observe that,
\begin{align}
\begin{split}
& \uparrow\hspace{-4pt}b(c'') = {\bf R}^X \otimes \delta_1 \otimes \delta_1 \otimes Y^3 - \frac{4}{3}{\bf R}^X \otimes \delta_1 \otimes \delta_1 \otimes Y^3 + \frac{1}{3}{\bf R}^X \otimes \delta_1 \otimes \delta_1 \otimes Y^3 \\
& \hspace{1.5cm} + \frac{1}{2}{\bf R}^X \otimes \delta_1 \otimes \delta_1 \otimes Y^2 - \frac{1}{2}{\bf R}^X \otimes \delta_1 \otimes \delta_1 \otimes Y^2 = 0.
\end{split}
\end{align}
\end{proof}
\begin{proposition}
The element $c+c'' \in {\rm{Tot}}^2({\cal F},{\cal U},M_\delta)$ vanishes under the Connes boundary map.
\end{proposition}
\begin{proof}
As above, we start with
\begin{align}
\begin{split}
& c := {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X - {\bf R}^X \otimes XY \otimes X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y \\
& + {\bf R}^Y \otimes X \otimes Y^2 - {\bf R}^Y \otimes Y \otimes X.
\end{split}
\end{align}
\noindent To compute $\overset{\ra}{B}$, it suffices to consider the horizontal extra degeneracy operator $\overset{\rightarrow}{\sigma}_{-1}:=\overset{\rightarrow}{\sigma}_{1}\overset{\ra}{\tau}$. We have,
\begin{equation}
\overset{\rightarrow}{\sigma}_{-1}({\bf 1} \otimes X \otimes Y) = {\bf 1} \cdot X\ps{1} \otimes S(X\ps{2})Y = -{\bf 1} \otimes XY,
\end{equation}
and
\begin{equation}
\overset{\rightarrow}{\sigma}_{-1}({\bf 1} \otimes Y \otimes X) = {\bf 1} \cdot Y\ps{1} \otimes S(Y\ps{2})X = {\bf 1} \otimes X - {\bf 1} \otimes YX = -{\bf 1} \otimes XY.
\end{equation}
This proves that $\overset{\rightarrow}{\sigma}_{-1}({\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X) = 0$. For the other terms in $c \in M_\delta \otimes {\cal U}^{\otimes\;2}$ we compute
\begin{align}
\begin{split}
& \overset{\rightarrow}{\sigma}_{-1}({\bf R}^X \otimes XY \otimes X) = {\bf R}^X \cdot X\ps{1}Y\ps{1} \otimes S(Y\ps{2})S(X\ps{2})X \\
& = {\bf R}^X \cdot XY \otimes X + {\bf R}^X \otimes YX^2 - {\bf R}^X \cdot X \otimes YX - {\bf R}^X \cdot Y \otimes X^2 \\
& = {\bf R}^Y \otimes XY + {\bf R}^X \otimes YX^2 - 2{\bf R}^X \otimes X^2,
\end{split}
\end{align}
\begin{align}
\begin{split}
& \overset{\rightarrow}{\sigma}_{-1}({\bf R}^Y \otimes XY \otimes Y) = {\bf R}^Y \cdot XY \otimes Y + {\bf R}^Y \otimes YXY - {\bf R}^Y \cdot X \otimes Y^2 - {\bf R}^Y \cdot Y \otimes XY \\
& = {\bf R}^Y \otimes XY^2 + {\bf R}^Z \otimes Y^2,
\end{split}
\end{align}
\begin{equation}
\overset{\rightarrow}{\sigma}_{-1}({\bf R}^X \otimes Y \otimes X^2) = 2{\bf R}^X \otimes X^2 - {\bf R}^X \otimes YX^2,
\end{equation}
\begin{equation}
\overset{\rightarrow}{\sigma}_{-1}({\bf R}^Y \otimes X \otimes Y^2) = -{\bf R}^Z \otimes Y^2 - {\bf R}^Y \otimes XY^2,
\end{equation}
and finally
\begin{equation}
\overset{\rightarrow}{\sigma}_{-1}({\bf R}^Y \otimes Y \otimes X) = {\bf R}^Y \otimes X - {\bf R}^Y \otimes YX = - {\bf R}^Y \otimes XY.
\end{equation}
\noindent This way we prove
\begin{equation}
\overset{\rightarrow}{\sigma}_{-1}(- {\bf R}^X \otimes XY \otimes X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y + {\bf R}^Y \otimes X \otimes Y^2 - {\bf R}^Y \otimes Y \otimes X) = 0.
\end{equation}
\noindent Hence we conclude that $\overset{\ra}{B}(c)=0$.
\medskip
\noindent Since the action of $\delta_1$ on $F_0M_\delta = {\mathbb C}\Big\langle {\bf R}^X, {\bf R}^Y, {\bf R}^Z \Big\rangle$ is trivial, we have $\uparrow\hspace{-4pt}B(c'')=0$. The horizontal counterpart $\overset{\ra}{B}(c'') = 0$ follows from the following observations. First we notice
\begin{equation}
({\bf R}^X \otimes {\delta_1}^2) \cdot Y = {\bf R}^X \cdot Y\ps{1} \otimes S(Y\ps{2}) \rhd {\delta_1}^2 = 0,
\end{equation}
and secondly,
\begin{equation}
({\bf R}^Y \otimes \delta_1) \cdot Y = {\bf R}^Y \cdot Y\ps{1} \otimes S(Y\ps{2}) \rhd \delta_1 = 0.
\end{equation}
\end{proof}
\noindent Next, we send the element $c+c'' \in {\rm Tot}^2({\cal F}, {\cal U},M_\delta)$ to the cyclic complex $C({\cal H},M_\delta)$. As before, in the first step we use the Alexander-Whitney map to land in the diagonal complex $\mathfrak{Z}({\cal H},{\cal F},M_\delta)$. To this end, we have
\begin{align}
\begin{split}
& AW_{0,2}(c) = \; \uparrow\hspace{-4pt}\partial_0\uparrow\hspace{-4pt}\partial_0(c) \\
& = {\bf 1} \otimes 1 \otimes 1 \otimes X \otimes Y - {\bf 1} \otimes 1 \otimes 1 \otimes Y \otimes X - {\bf R}^X \otimes 1 \otimes 1 \otimes XY \otimes X \\
& - {\bf R}^X\otimes 1 \otimes 1 \otimes Y \otimes X^2 + {\bf R}^Y \otimes 1 \otimes 1 \otimes XY \otimes Y + {\bf R}^Y \otimes 1 \otimes 1 \otimes X \otimes Y^2 \\
& - {\bf R}^Y \otimes 1 \otimes 1 \otimes Y \otimes X,
\end{split}
\end{align}
and
\begin{align}
\begin{split}
& AW_{1,1}(c'') =\; \uparrow\hspace{-4pt}\partial_0\overset{\ra}{\partial}_1(c'') \\
& = - {\bf R}^X \otimes 1 \otimes \delta_1 \otimes XY^2 \otimes 1 + \frac{2}{3}{\bf R}^X \otimes 1 \otimes {\delta_1}^2 \otimes Y^3 \otimes 1 +\frac{1}{3}{\bf R}^Y \otimes 1 \otimes \delta_1 \otimes Y^3 \otimes 1 \\
& - \frac{1}{4}{\bf R}^X \otimes 1 \otimes {\delta_1}^2 \otimes Y^2 \otimes 1 - \frac{1}{2}{\bf R}^Y \otimes 1 \otimes \delta_1 \otimes Y^2 \otimes 1.
\end{split}
\end{align}
\noindent As a result, we obtain the element
\begin{align}
\begin{split}
& c^{{\rm even}}_{{\rm diag}} = {\bf 1} \otimes 1 \otimes 1 \otimes X \otimes Y - {\bf 1} \otimes 1 \otimes 1 \otimes Y \otimes X - {\bf R}^X \otimes 1 \otimes 1 \otimes XY \otimes X \\
& - {\bf R}^X\otimes 1 \otimes 1 \otimes Y \otimes X^2 + {\bf R}^Y \otimes 1 \otimes 1 \otimes XY \otimes Y + {\bf R}^Y \otimes 1 \otimes 1 \otimes X \otimes Y^2 \\
& - {\bf R}^Y \otimes 1 \otimes 1 \otimes Y \otimes X - {\bf R}^X \otimes 1 \otimes \delta_1 \otimes XY^2 \otimes 1 + \frac{2}{3}{\bf R}^X \otimes 1 \otimes {\delta_1}^2 \otimes Y^3 \otimes 1 \\
& +\frac{1}{3}{\bf R}^Y \otimes 1 \otimes \delta_1 \otimes Y^3 \otimes 1 - \frac{1}{4}{\bf R}^X \otimes 1 \otimes {\delta_1}^2 \otimes Y^2 \otimes 1 - \frac{1}{2}{\bf R}^Y \otimes 1 \otimes \delta_1 \otimes Y^2 \otimes 1.
\end{split}
\end{align}
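\medskip
\noindent As in the odd case, every term of $c^{{\rm even}}_{{\rm diag}}$ has total weight $1$; for instance $\nm{{\bf R}^X \otimes 1 \otimes 1 \otimes XY \otimes X} = -1 + 0 + 0 + 1 + 1 = 1$, the weights being additive over tensor factors, and over products since $adY$ acts by derivations. This is again in accordance with Corollary \ref{corollary-weight-1}.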
\noindent In the second step, we use the map \eqref{PSI-1} to obtain
\begin{align}\label{even-cocycle}
\begin{split}
& c^{{\rm even}}:= \Psi\big(c^{{\rm even}}_{{\rm diag}}\big) = {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y - {\bf R}^X \otimes XY \otimes X \\
& - {\bf R}^X \otimes Y^2 \otimes \delta_1X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y + {\bf R}^Y \otimes Y^2 \otimes \delta_1Y \\
& + {\bf R}^Y \otimes X \otimes Y^2 + {\bf R}^Y \otimes Y \otimes \delta_1Y^2 - {\bf R}^Y \otimes Y \otimes X - {\bf R}^X \otimes XY^2 \otimes \delta_1 \\
& - \frac{1}{3}{\bf R}^X \otimes Y^3 \otimes {\delta_1}^2 + \frac{1}{3} {\bf R}^Y \otimes Y^3 \otimes \delta_1 - \frac{1}{4} {\bf R}^X \otimes Y^2 \otimes {\delta_1}^2 - \frac{1}{2} {\bf R}^Y \otimes Y^2 \otimes \delta_1,\\
\end{split}
\end{align}
in $ C^2({\cal H}, M_\delta)$.
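\medskip
\noindent Observe that, modulo terms whose module part lies in $F_0M_\delta$, the cocycle $c^{\rm even}$ reduces to ${\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y$, which is precisely the representative of the even class announced earlier in the $E_1$ term.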
\begin{proposition}
The element $c^{\rm even}$ defined in \eqref{even-cocycle} is a Hochschild cocycle.
\end{proposition}
\begin{proof}
We first recall that
\begin{align}
\begin{split}
& b({\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y) = \\
& -{\bf R}^X \otimes X \otimes Y \otimes X - {\bf R}^Y \otimes X \otimes Y \otimes Y + {\bf R}^X \otimes Y \otimes X \otimes X + {\bf R}^Y \otimes Y \otimes X \otimes Y \\
& - {\bf R}^X \otimes Y \otimes \delta_1Y \otimes X - {\bf R}^Y \otimes Y \otimes \delta_1Y \otimes Y.
\end{split}
\end{align}
\noindent Next we compute
\begin{align}
\begin{split}
& b({\bf R}^X \otimes XY \otimes X) = - {\bf R}^X \otimes X \otimes Y \otimes X - {\bf R}^X \otimes Y \otimes X \otimes X - {\bf R}^X \otimes Y^2 \otimes \delta_1 \otimes X \\
& - {\bf R}^X \otimes Y \otimes \delta_1Y \otimes X + {\bf R}^X \otimes XY \otimes Y \otimes \delta_1, \\[.2cm]
& b({\bf R}^X \otimes Y^2 \otimes \delta_1X) = - 2{\bf R}^X \otimes Y \otimes Y \otimes \delta_1X + {\bf R}^X \otimes Y^2 \otimes \delta_1 \otimes X + {\bf R}^X \otimes Y^2 \otimes X \otimes \delta_1 \\
& + {\bf R}^X \otimes Y^2 \otimes \delta_1Y \otimes \delta_1 + {\bf R}^X \otimes Y^2 \otimes Y \otimes {\delta_1}^2, \\[.2cm]
& b({\bf R}^X \otimes Y \otimes X^2) = 2 {\bf R}^X \otimes Y \otimes X \otimes X + {\bf R}^X \otimes Y \otimes XY \otimes \delta_1 + {\bf R}^X \otimes Y \otimes Y \otimes X\delta_1 \\
& + {\bf R}^X \otimes Y \otimes YX \otimes \delta_1 + {\bf R}^X \otimes Y \otimes Y \otimes \delta_1X + {\bf R}^X \otimes Y \otimes Y^2 \otimes {\delta_1}^2, \\[.2cm]
& b({\bf R}^Y \otimes XY \otimes Y) = - {\bf R}^Y \otimes X \otimes Y \otimes Y - {\bf R}^Y \otimes Y \otimes X \otimes Y - {\bf R}^Y \otimes Y^2 \otimes \delta_1 \otimes Y \\
& - {\bf R}^Y \otimes Y \otimes \delta_1Y \otimes Y - {\bf R}^X \otimes XY \otimes Y \otimes \delta_1, \\[.2cm]
& b({\bf R}^Y \otimes Y^2 \otimes \delta_1Y) = -2 {\bf R}^Y \otimes Y \otimes Y \otimes \delta_1Y + {\bf R}^Y \otimes Y^2 \otimes \delta_1 \otimes Y + {\bf R}^Y \otimes Y^2 \otimes Y \otimes \delta_1 \\
& - {\bf R}^X \otimes Y^2 \otimes \delta_1Y \otimes \delta_1,
\end{split}
\end{align}
as well as
\begin{align}
\begin{split}
& b({\bf R}^Y \otimes X \otimes Y^2) = - {\bf R}^Y \otimes Y \otimes \delta_1 \otimes Y^2 + 2{\bf R}^Y \otimes X \otimes Y \otimes Y - {\bf R}^X \otimes X \otimes Y^2 \otimes \delta_1, \\[.2cm]
& b({\bf R}^Y \otimes Y \otimes \delta_1Y^2) = {\bf R}^Y \otimes Y \otimes \delta_1 \otimes Y^2 + {\bf R}^Y \otimes Y \otimes Y^2 \otimes \delta_1 + 2{\bf R}^Y \otimes Y \otimes \delta_1Y \otimes Y \\
& + 2{\bf R}^Y \otimes Y \otimes Y \otimes \delta_1Y - {\bf R}^X \otimes Y \otimes \delta_1Y^2 \otimes \delta_1, \\[.2cm]
& b({\bf R}^Y \otimes Y \otimes X) = {\bf R}^Y \otimes Y \otimes Y \otimes \delta_1 - {\bf R}^X \otimes Y \otimes X \otimes \delta_1, \\[.2cm]
& b({\bf R}^X \otimes XY^2 \otimes \delta_1) = - {\bf R}^X \otimes X \otimes Y^2 \otimes \delta_1 - {\bf R}^X \otimes Y^2 \otimes X \otimes \delta_1 - 2 {\bf R}^X \otimes XY \otimes Y \otimes \delta_1 \\
& -2 {\bf R}^X \otimes Y \otimes XY \otimes \delta_1 - 2{\bf R}^X \otimes Y^2 \otimes \delta_1Y \otimes \delta_1 - {\bf R}^X \otimes Y^3 \otimes \delta_1 \otimes \delta_1 \\
& - {\bf R}^X \otimes Y \otimes \delta_1Y^2 \otimes \delta_1, \\[.2cm]
& b({\bf R}^X \otimes Y^3 \otimes {\delta_1}^2) = -3 {\bf R}^X \otimes Y^2 \otimes Y \otimes {\delta_1}^2 - 3{\bf R}^X \otimes Y \otimes Y^2 \otimes {\delta_1}^2 + 2 {\bf R}^X \otimes Y^3 \otimes \delta_1 \otimes \delta_1, \\[.2cm]
& b({\bf R}^Y \otimes Y^3 \otimes \delta_1) = -3 {\bf R}^Y \otimes Y^2 \otimes Y \otimes \delta_1 - 3 {\bf R}^Y \otimes Y \otimes Y^2 \otimes \delta_1 - {\bf R}^X \otimes Y^3 \otimes \delta_1 \otimes \delta_1, \\[.2cm]
& b({\bf R}^X \otimes Y^2 \otimes {\delta_1}^2) = -2 {\bf R}^X \otimes Y \otimes Y \otimes {\delta_1}^2 + 2 {\bf R}^X \otimes Y^2 \otimes \delta_1 \otimes \delta_1, \\[.2cm]
& b({\bf R}^Y \otimes Y^2 \otimes \delta_1) = -2 {\bf R}^Y \otimes Y \otimes Y \otimes \delta_1 - {\bf R}^X \otimes Y^2 \otimes \delta_1 \otimes \delta_1
\end{split}
\end{align}
Summing up, we get the result.
\end{proof}
\begin{proposition}
The Hochschild cocycle $c^{\rm even}$ defined in \eqref{even-cocycle} vanishes under the Connes boundary map.
\end{proposition}
\begin{proof}
We will first prove that the extra degeneracy operator $\sigma_{-1}$ vanishes on
\begin{align}
\begin{split}
& c := {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y - {\bf R}^X \otimes XY \otimes X \\
& - {\bf R}^X \otimes Y^2 \otimes \delta_1X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y + {\bf R}^Y \otimes Y^2 \otimes \delta_1Y \\
& + {\bf R}^Y \otimes X \otimes Y^2 + {\bf R}^Y \otimes Y \otimes \delta_1Y^2 - {\bf R}^Y \otimes Y \otimes X \in C^2({\cal H}, M_\delta).
\end{split}
\end{align}
We observe that
\begin{align}
\begin{split}
&\sigma_{-1}({\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y) = 0,\\[.2cm]
&\sigma_{-1}({\bf R}^X \otimes XY \otimes X) = {\bf R}^Y \otimes XY + {\bf R}^X \otimes X^2Y - {\bf R}^X \otimes \delta_1XY^2,\\[.2cm]
&\sigma_{-1}({\bf R}^X \otimes Y^2 \otimes \delta_1X) = {\bf R}^X \otimes \delta_1XY^2,\\[.2cm]
&\sigma_{-1}({\bf R}^X \otimes Y \otimes X^2) = -{\bf R}^X \otimes X^2Y,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes XY \otimes Y) = {\bf R}^Z \otimes Y^2 + {\bf R}^Y \otimes XY^2 - {\bf R}^Y \otimes \delta_1Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes Y^2 \otimes \delta_1Y) = {\bf R}^Y \otimes \delta_1Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes X \otimes Y^2) = -{\bf R}^Z \otimes Y^2 - {\bf R}^Y \otimes XY^2 + {\bf R}^Y \otimes \delta_1Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes Y \otimes \delta_1Y^2) = - {\bf R}^Y \otimes \delta_1Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes Y \otimes X) = -{\bf R}^Y \otimes XY.
\end{split}
\end{align}
\noindent As a result, we obtain $\sigma_{-1}(c) = 0$. In the second step, we prove that the Connes boundary map $B$ vanishes on
\begin{align}
\begin{split}
& c'' := - {\bf R}^X \otimes XY^2 \otimes \delta_1 - \frac{1}{3}{\bf R}^X \otimes Y^3 \otimes {\delta_1}^2 + \frac{1}{3} {\bf R}^Y \otimes Y^3 \otimes \delta_1 \\
& - \frac{1}{4} {\bf R}^X \otimes Y^2 \otimes {\delta_1}^2 - \frac{1}{2} {\bf R}^Y \otimes Y^2 \otimes \delta_1 \in C^2({\cal H}, M_\delta).
\end{split}
\end{align}
\noindent Indeed, as in this case $B = (\mathop{\rm Id}\nolimits - \tau) \circ \sigma_{-1}$, it suffices to observe that
\begin{align}
\begin{split}
&\sigma_{-1}({\bf R}^X \otimes XY^2 \otimes \delta_1)\\
& = - {\bf R}^Y \otimes \delta_1Y^2 - {\bf R}^X \otimes \delta_1XY^2 - \frac{1}{2}{\bf R}^X \otimes
{\delta_1}^2Y^2 + {\bf R}^X \otimes {\delta_1}^2Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^X \otimes Y^3 \otimes {\delta_1}^2) = - {\bf R}^X \otimes {\delta_1}^2Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes Y^3 \otimes \delta_1) = - {\bf R}^Y \otimes \delta_1Y^3,\\[.2cm]
&\sigma_{-1}({\bf R}^X \otimes Y^2 \otimes {\delta_1}^2) = {\bf R}^X \otimes {\delta_1}^2Y^2,\\[.2cm]
&\sigma_{-1}({\bf R}^Y \otimes Y^2 \otimes \delta_1) = {\bf R}^Y \otimes \delta_1Y^2,
\end{split}
\end{align}
together with
\begin{align}
\begin{split}
&\tau({\bf R}^Y \otimes \delta_1Y^2) = - {\bf R}^Y \otimes \delta_1Y^2 - {\bf R}^X \otimes {\delta_1}^2Y^2,\\[.2cm]
&\tau({\bf R}^X \otimes {\delta_1}^2Y^2) = {\bf R}^X \otimes {\delta_1}^2Y^2,\\[.2cm]
&\tau({\bf R}^Y \otimes \delta_1Y^3) = {\bf R}^Y \otimes \delta_1Y^3 + {\bf R}^X \otimes {\delta_1}^2Y^3,\\[.2cm]
&\tau({\bf R}^X \otimes {\delta_1}^2Y) = - {\bf R}^Y \otimes {\delta_1}^2Y,\\[.2cm]
&\tau({\bf R}^X \otimes {\delta_1}^2Y^3) = - {\bf R}^Y \otimes {\delta_1}^2Y^3,\\[.2cm]
&\tau({\bf R}^X \otimes \delta_1XY^2) = {\bf R}^Y \otimes \delta_1Y^2 + {\bf R}^X \otimes \delta_1XY^2 + \frac{1}{2}{\bf R}^X \otimes {\delta_1}^2Y^2 - {\bf R}^X \otimes {\delta_1}^2Y^3.
\end{split}
\end{align}
\end{proof}
\noindent We summarize our results in this section by the following theorem.
\begin{theorem}
The odd and even periodic Hopf cyclic cohomologies of the Schwarzian Hopf algebra ${\cal H}_{\rm 1S}{^{\rm cop}}$ with coefficients in the 4-dimensional SAYD module $M_\delta = S({sl_2}^\ast)\nsb{2}$ are given by
\begin{equation}
HP^{\rm odd}({\cal H}_{\rm 1S}{^{\rm cop}},M_\delta) = {\mathbb C} \Big\langle {\bf 1} \otimes \delta_1 + {\bf R}^Y \otimes X + {\bf R}^X \otimes \delta_1X + {\bf R}^Y \otimes \delta_1Y + 2 {\bf R}^Z \otimes Y \Big\rangle,
\end{equation}
and
\begin{align}
\begin{split}
& HP^{\rm even}({\cal H}_{\rm 1S}{^{\rm cop}},M_\delta) = {\mathbb C} \Big\langle {\bf 1} \otimes X \otimes Y - {\bf 1} \otimes Y \otimes X + {\bf 1} \otimes Y \otimes \delta_1Y - {\bf R}^X \otimes XY \otimes X \\
& - {\bf R}^X \otimes Y^2 \otimes \delta_1X - {\bf R}^X \otimes Y \otimes X^2 + {\bf R}^Y \otimes XY \otimes Y + {\bf R}^Y \otimes Y^2 \otimes \delta_1Y \\
& + {\bf R}^Y \otimes X \otimes Y^2 + {\bf R}^Y \otimes Y \otimes \delta_1Y^2 - {\bf R}^Y \otimes Y \otimes X - {\bf R}^X \otimes XY^2 \otimes \delta_1 \\
& - \frac{1}{3}{\bf R}^X \otimes Y^3 \otimes {\delta_1}^2 + \frac{1}{3} {\bf R}^Y \otimes Y^3 \otimes \delta_1 - \frac{1}{4} {\bf R}^X \otimes Y^2 \otimes {\delta_1}^2 - \frac{1}{2} {\bf R}^Y \otimes Y^2 \otimes \delta_1 \Big\rangle.
\end{split}
\end{align}
\end{theorem}
\bibliographystyle{amsplain}
The idea of using a variational principle to bound the spectrum of an
operator is familiar to anyone who has taken an undergraduate course
in quantum mechanics: given a Hamiltonian $H$ whose spectrum is
bounded below by $E_0$, we must have for any normalized state $| \psi
\rangle$ in our Hilbert space $E_0 \leq \langle \psi | H | \psi \rangle$.
We can thus obtain an upper bound on the ground state energy of the
system by plugging in various ``test functions'' $|\psi \rangle$,
allowing these to vary, and finding out how low we can make the
expectation value of $H$.
This technique is not particular to quantum
mechanics; rather, it is a statement about the properties of
self-adjoint operators on a Hilbert space. In particular, we can make
a similar statement in the context of linear field theories. Suppose
we have a second-order linear field theory, dependent on some
background fields and some dynamical fields $\psi^\alpha$, whose
equations of motion can be put into the form
\begin{equation}
\label{simpleform}
- \frac{\partial^2}{\partial t^2} \psi^\alpha =
{\mathcal{T}^\alpha}_\beta \, \psi^\beta
\end{equation}
for some linear spatial differential operator
$\mathcal{T}$. Suppose, further, that we can find an inner product
$(\cdot, \cdot)$ such that $\mathcal{T}$ is self-adjoint under this
inner product, i.e.,
\begin{equation}
\label{simpleTsym}
(\psi_1, \mathcal{T} \psi_2) = (\mathcal{T} \psi_1, \psi_2)
\end{equation}
for all $\psi_1$ and $\psi_2$. Then the spectrum of $\mathcal{T}$
will correspond to the squares of the frequencies of oscillation of
the system; and we can obtain information about the lower bound of
this spectrum via a variational principle of the form
\begin{equation}
\label{basicvarprin}
\omega_0^2 \leq \frac{ (\psi, \mathcal{T} \psi)}{(\psi, \psi)}.
\end{equation}
Moreover, if we find that we can make this quantity negative for some
test function $\psi$, then the system is unstable
\cite{Waldstability}; and we can obtain an upper bound on the
timescale of this instability by finding how negative $\omega_0^2$
must be.
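As an elementary illustration of \eqref{basicvarprin} (a textbook example, not
tied to any particular field theory), take a single field $\psi(x)$ on the
interval $[0,\pi]$ with Dirichlet boundary conditions, the $L^2$ inner product,
and $\mathcal{T} = - \mathrm{d}^2/\mathrm{d}x^2$. The trial function $\psi =
x(\pi - x)$ yields
\begin{equation}
\omega_0^2 \leq \frac{(\psi, \mathcal{T} \psi)}{(\psi, \psi)} =
\frac{\int_0^\pi (\pi - 2x)^2 \, \mathrm{d}x}{\int_0^\pi x^2 (\pi - x)^2 \,
\mathrm{d}x} = \frac{\pi^3/3}{\pi^5/30} = \frac{10}{\pi^2} \approx 1.013,
\end{equation}
within about $1.3\%$ of the exact lowest eigenvalue $\omega_0^2 = 1$
(eigenfunction $\sin x$); had the quotient come out negative for some trial
function, we would have concluded that the system is unstable.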
One might think that this would be a simple way to analyze any
linear (or linearized) field theory; in practice, however, it is
easier said than done. A serious problem arises when we consider
linearizations of covariant field theories. In such theories, the
linearized equations of motion are not all of the form
\eqref{simpleform}; instead, we obtain non-deterministic equations
(due to gauge freedom) and equations which are not evolution equations
but rather relate the initial data for the fields $\psi^\alpha$
(since covariant field theories on a spacetime become constrained when
decomposed into ``space + time''.) Further, even if we have equations
of the form \eqref{simpleform}, it is not always evident how to find
an inner product under which the time-evolution operator $\mathcal{T}$
is self-adjoint (or even if such an inner product exists.) Thus, it
would seem that this method is of limited utility in the context of
linearized field theories.
In spite of these difficulties, Chandrasekhar successfully derived a
variational principle to analyze the stability of spherically
symmetric solutions of Einstein gravity with perfect-fluid sources
\cite{Chandra1, Chandra2}. The methods used in these works to
eliminate the constraint equations, put the equations in the form
\eqref{simpleform}, and obtain an inner product seemed rather
particular to the theory he was examining, and it was far from certain
that they would generalize to an arbitrary field theory. In a recent
paper \cite{GVP}, we showed that these methods did in a certain sense
``have to work out'', by describing a straightforward procedure by
which the gauge could be fixed, the constraints could be solved, and
an inner product could be obtained under which the resulting
time-evolution operator $\mathcal{T}$ is self-adjoint. We review this
procedure in Section \ref{GVPsec} of the paper. We then use this
``generalized variational principle'' to analyze three alternative
theories of gravity which have garnered some attention in recent
years: $f(R)$ gravity, the current interest in which was primarily
inspired by the work of Carroll \textsl{et al.~}\cite{CDTT};
Einstein-{\ae}ther theory, a toy model of Lorentz-symmetry breaking
proposed by Jacobson and Mattingly \cite{Aether1, Aetherwave}; and
TeVeS, a covariant theory of MOND proposed by Bekenstein \cite{TeVeS}.
We apply our techniques to these theories in Sections \ref{fRsec},
\ref{aethersec}, and \ref{TeVeSsec}, respectively.
We will use the sign conventions of \cite{WaldGR} throughout. Units
will be those in which $c = G = 1$.
\section{Review of the Generalized Variational Principle \label{GVPsec}}
\subsection{Symplectic dynamics}
We first introduce some necessary concepts and notation. Consider a
covariant field theory with an action of the form
\begin{equation}
S = \int \form{\mathcal{L}} = \int \mathcal{L}[\Psi] \form{\epsilon}
\end{equation}
where $\mathcal{L}[\Psi]$ is a scalar depending on some set
$\Psi$ of dynamical tensor fields including the spacetime metric.
(For convenience, we will describe the gravitational degrees of freedom
using the inverse metric $g^{ab}$ rather than the metric $g_{ab}$
itself.) To obtain the equations of motion for the dynamical fields,
we take the variation of the four-form $\mathcal{L} \form{\epsilon}$ with
respect to the dynamical fields $\Psi$:
\begin{equation}
\label{genLvar}
\delta \left(\mathcal{L} \form{\epsilon} \right) = \left(
\mathcal{E}_\Psi \,
\delta \Psi + \nabla_a \theta^a [\Psi, \delta \Psi] \right)
\form{\epsilon}
\end{equation}
where a sum over all fields comprising $\Psi$ is implicit in the first
term. Requiring that $\delta S = 0$ under this variation then implies
that the quantities $\mathcal{E}_\Psi$ vanish for each dynamical field
$\Psi$.
The second term in \eqref{genLvar} defines the vector field $\theta^a
[\Psi, \delta \Psi]$. The three-form $\form{\theta}$ dual to this
vector field (i.e., $\theta_{bcd} = \theta^a \epsilon_{abcd}$) is the
``symplectic potential current.'' Taking
the antisymmetrized second variation of this quantity, we then obtain
the symplectic current three-form $\form{\omega}$ for the theory:
\begin{equation}
\label{genomega}
\form{\omega}[\Psi; \delta_1 \Psi, \delta_2 \Psi] = \delta_1
\form{\theta}[\Psi, \delta_2 \Psi] - \delta_2 \form{\theta}[\Psi,
\delta_1 \Psi].
\end{equation}
In terms of the vector field $\omega^a$ dual to $\form{\omega}$, this
can also be written as
\begin{equation}
\label{genomegaalt}
\omega^a \epsilon_{abcd} = \delta_1 (\theta_2^a \epsilon_{abcd} ) -
\delta_2 ( \theta_1^a \epsilon_{abcd} ).
\end{equation}
The symplectic form for the theory is then obtained by integrating
the pullback of this three-form over a spacelike three-surface
$\Sigma$:
\begin{equation}
\label{gensympform}
\Omega[\Psi; \delta_1 \Psi, \delta_2 \Psi] = \int_\Sigma
\form{\bar{\omega}}[\Psi; \delta_1 \Psi, \delta_2 \Psi].
\end{equation}
If we define $n^a$ as the future-directed timelike normal to $\Sigma$
and $\boldsymbol{e}$ to be the induced volume three-form on $\Sigma$
(i.e., $e_{bcd} = n^a \epsilon_{abcd}$), this can be written in terms
of $\omega^a$ instead:
\begin{equation}
\label{gensympformalt}
\Omega = - \int_\Sigma (\omega^a n_a) \boldsymbol{e}.
\end{equation}
In performing the calculations which follow, it is this second
expression for $\Omega$ which will be most useful to us.
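As a simple illustration of these definitions (a sketch only, with the metric
held fixed so that only the matter variation contributes), consider a massless
scalar field with $\mathcal{L} = -\frac{1}{2} \nabla_a \phi \nabla^a \phi$. In
that case \eqref{genLvar} gives $\mathcal{E}_\phi = \nabla^a \nabla_a \phi$ and
$\theta^a[\phi, \delta \phi] = - (\nabla^a \phi) \, \delta \phi$, so that
\eqref{genomegaalt} yields
\begin{equation}
\omega^a = (\nabla^a \delta_2 \phi) \, \delta_1 \phi - (\nabla^a \delta_1
\phi) \, \delta_2 \phi ,
\end{equation}
and \eqref{gensympformalt} becomes the familiar Klein-Gordon symplectic form
\begin{equation}
\Omega = \int_\Sigma \left( \delta_2 \phi \, n^a \nabla_a \delta_1 \phi -
\delta_1 \phi \, n^a \nabla_a \delta_2 \phi \right) \boldsymbol{e}.
\end{equation}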
\subsection{Obtaining a variational principle}
In \cite{GVP}, we presented a procedure by which a variational
principle for spherically symmetric perturbations of static,
spherically symmetric spacetimes could generally be obtained. For our
purposes, we will outline this method; further details can be found
in the original paper.
The method described in \cite{GVP} consists of the following steps:
\begin{enumerate}
\item Vary the action to obtain the equations of motion
$(\mathcal{E}_G)_{ab}$ corresponding to the variation of the metric,
as well as any other equations of motion $\mathcal{E}_A$
corresponding to the variations of any matter fields present. This
variation will also yield the dual $\theta^a$ of the symplectic
current potential; take the antisymmetrized variation of this
quantity (as in \eqref{genomegaalt}) to obtain the symplectic form
\eqref{gensympformalt}.
\item Fix the gauge for the metric, and choose an appropriate set of
spacetime functions to describe the matter fields. Throughout this
paper, we will choose our coordinates such that the metric takes the
form
\begin{equation}
\label{spheremet}
\mathrm{d} s^2 = -e^{2 \Phi(r,t)} \, \mathrm{d} t^2 + e^{2 \Lambda(r,t)} \, \mathrm{d}
r^2 + r^2 \mathrm{d} \Omega^2
\end{equation}
for some functions $\Phi$ and $\Lambda$.\footnote{Such a set of
coordinates can always be found for a spherically symmetric
spacetime \cite{MTW}.}
Our spacetimes will be static at zero-order (i.e., $(\pder{}{t})^a$
is a Killing vector field at zero-order), but non-static in
first-order perturbations. We will therefore have $\Phi(r,t) =
\Phi(r) + \phi(r,t)$, where $\phi$ is a first-order quantity;
similarly, we define the first-order quantity $\lambda$ such that
$\Lambda(r,t) = \Lambda(r) + \lambda(r,t)$. In other words,
\begin{equation}
\delta g^{tt} = 2 e^{-2 \Phi(r)} \phi(r,t)
\end{equation}
and
\begin{equation}
\delta g^{rr} = -2 e^{-2 \Lambda(r)} \lambda (r,t).
\end{equation}
Similarly, all matter fields will be static in the background, but
possibly time-dependent at first order.
\item Write the linearized equations of motion and the symplectic
form in terms of these perturbational fields (metric and matter.)
\item Solve the linearized constraints. One of the main results of
\cite{GVP} was to show that this can be done quite generally for
spherically symmetric perturbations off of a static, spherically
symmetric background. Specifically, suppose the field content
$\Psi$ of our theory consists of the inverse metric $g^{ab}$ and a
single tensor field $A^{a_1 \dots a_n} {}_{b_1 \dots b_m}$. (The
generalization to multiple tensor fields is straightforward.) Let
$(\mathcal{E}_A)_{a_1 \dots a_n} {}^{b_1 \dots b_m}$ denote the
equation of motion associated with $A^{a_1 \dots a_n} {}_{b_1 \dots
b_m}$. We define the \emph{constraint tensor} as
\begin{multline}
\label{Cdef}
C_{cd} = 2 (\mathcal{E}_G)_{cd} \\
- g_{ce} \sum_i A^{a_1 \dots a_n} {}_{b_1 \dots d \dots b_m}
(\mathcal{E}_A)_{a_1 \dots a_n} {}^{b_1 \dots e \dots b_m} \\
+ g_{ce} \sum_i A^{a_1 \dots e \dots a_n} {}_{b_1 \dots b_m}
(\mathcal{E}_A)_{a_1 \dots d \dots a_n} {}^{b_1 \dots b_m}
\end{multline}
where the summations run over all possible index ``slots'', from $1$
to $m$ and from $1$ to $n$ for the first and second summation
respectively. It can then be shown that if the background equations
of motion hold, and the matter equation of motion also holds to
first order, the perturbations of the tensor $C_{ab}$ will satisfy
\begin{equation}
\label{Fdef1}
\fpder{F}{t} = - r^2 e^{\Phi - \Lambda} \delta C_{rt}
\end{equation}
and
\begin{equation}
\label{Fdef2}
\fpder{F}{r} = - r^2 e^{\Lambda - \Phi} \delta C_{tt}
\end{equation}
for some quantity $F$ which is linear in the first-order fields.
Moreover, the first-order constraint equations $\delta C_{tt} =
\delta C_{tr} = 0$ will be satisfied if and only if $F = 0$. We
can then solve this equation algebraically for one of our
perturbational fields, usually the metric perturbation $\lambda$.
We will refer to the equation $F = 0$ as the \emph{preconstraint
equation}.
\item Eliminate the metric perturbation $\phi$ from the equations.
As $\phi$ cannot appear without a radial derivative (due to
residual gauge freedom)\footnote{The
exception to this statement is when the matter fields under
consideration have non-zero components in the $t$-direction. In
this case, $\phi$ and its derivatives can appear in linear
combination with perturbations of the matter fields; however, a
redefinition of the matter fields will suffice to eliminate such
cases. \label{phifoot}}, we must find an algebraic equation for
$\pder{\phi}{r}$.  The first-order equation $\delta C_{rr} = 0$
will, in general, serve this purpose \cite{GVP}.
Use the above relations for $\lambda$ and $\pder{\phi}{r}$ to
eliminate the metric perturbations completely from the
perturbational equations of motion and the symplectic form,
leaving a ``reduced'' set of equations of motion and a ``reduced''
symplectic form solely in terms of the matter variables.
\item Determine if the reduced equations of motion take the form
\eqref{simpleform}. If so, read off the time-evolution operator
$\mathcal{T}$.
\item Determine if the symplectic form, written in terms of the
reduced dynamical variables $\psi^\alpha$, is of the form
\begin{equation}
\label{Wdef}
\Omega(\Psi; \psi_1^\alpha, \psi_2^\alpha) = \int_\Sigma
\form{W}_{\alpha \beta} \left( \fpder{\psi_1^\alpha}{t} \psi_2^\beta
- \fpder{\psi_2^\alpha}{t} \psi_1^\beta \right)
\end{equation}
for some three-form $\form{W}_{\alpha \beta}$. Then (as we showed
in \cite{GVP}) we must have
$\form{W}_{\alpha \beta} = \form{W}_{\beta \alpha}$ and
\begin{equation}
\label{Tsym}
\int_\Sigma \form{W}_{\alpha \beta} \psi_1^\alpha \mathcal{T}^\beta
{}_\gamma \psi_2^\gamma = \int_\Sigma \form{W}_{\alpha \beta}
\psi_2^\alpha \mathcal{T}^\beta {}_\gamma \psi_1^\gamma.
\end{equation}
\item Determine whether $\form{W}_{\alpha \beta}$ is positive
definite, in the sense that
\begin{equation}
\label{Wposdef}
\int_\Sigma \form{W}_{\alpha \beta} \psi^\alpha \psi^\beta \geq 0
\end{equation}
for all $\psi^\alpha$, with equality holding only when $\psi^\alpha =
0$.\footnote{If $\form{W}_{\alpha \beta}$ is \emph{negative}
definite in this sense, we use the negative of this quantity as
our inner product, and the construction proceeds identically.} If
this is the case, we can define an inner product on the space of all
reduced fields:
\begin{equation}
\label{geninnerprod}
(\psi_1, \psi_2) \equiv \int_\Sigma \form{W}_{\alpha \beta}
\psi_1^\alpha \psi_2^\beta.
\end{equation}
Equation \eqref{Tsym} then shows that $\mathcal{T}$ is a symmetric
operator under this inner product. Thus, we can write down our
variational principle of the form \eqref{basicvarprin}.
\end{enumerate}
It is important to note that each step of this procedure is clearly
delineated. While the procedure can fail at certain steps (the
reduced equations of motion can fail to be of the form
\eqref{simpleform}, for example, or $\form{W}_{\alpha \beta}$ can fail
to be positive definite), there is not any ``art'' required to apply
this procedure to an arbitrary covariant field theory. (In practice,
as we shall see, there are certain shortcuts that may arise which we
can exploit; however, the ``long way'' we have described here will
still work.) In the following three sections, we will use this
formalism to analyze the stability of $f(R)$ gravity,
Einstein-{\ae}ther theory, and TeVeS.
\section{$f(R)$ gravity \label{fRsec}}
\subsection{Theory}
In $f(R)$ gravity, the Ricci scalar $R$ in the Einstein-Hilbert action
is replaced by an arbitrary function of $R$, leaving the rest of the
action unchanged; in other words, the Lagrangian four-form
$\form{\mathcal{L}}$ is of the form
\begin{equation}
\label{fRlag}
\form{\mathcal{L}} = \frac{1}{16 \pi} f(R) \form{\epsilon} +
\form{\mathcal{L}}_{\text{mat}}[A, g^{ab}]
\end{equation}
where $A$ denotes the collection of matter fields, with tensor indices
suppressed. Taking the variation of this action with respect to the
metric, we obtain the equation of motion
\begin{multline}
\label{4thordereqn}
f'(R) R_{ab} - \frac{1}{2} f(R) g_{ab} \\ - \nabla_a \nabla_b f'(R) +
g_{ab} \Box f'(R) = 8 \pi T_{ab}
\end{multline}
where $T_{ab}$, given by
\begin{equation}
\label{Tabdef}
\delta \form{\mathcal{L}}_{\text{mat}} = - \frac{1}{2} (T_{ab}
\delta g^{ab}) \form{\epsilon},
\end{equation}
is the matter stress-energy tensor, and $f'(R) = \mathrm{d} f/\mathrm{d} R$.
This equation is fourth-order in the metric, and as such is somewhat
difficult to deal with. We can reduce this fourth-order equation to
two second-order equations using an equivalent scalar-tensor theory
\cite{Odintsov, Chiba}.\footnote{This equivalence was also used in the
special case $f(R) = R - 2 \Lambda + \alpha R^2$ in \cite{Whitt}.}
This equivalent theory contains two dynamical
gravitational variables, the inverse metric $g^{ab}$ and a scalar field
$\alpha$, in addition to the matter fields. The Lagrangian is given by
\begin{equation}
\label{sctensaction}
\form{\mathcal{L}} = \frac{1}{16 \pi} \left( f'(\alpha) R + f(\alpha) -
\alpha f'(\alpha) \right) \form{\epsilon} +
\form{\mathcal{L}_{\text{mat}}}[A, g^{ab}].
\end{equation}
Varying the gravitational part of action with respect to $g^{ab}$ and
$\alpha$ gives us
\begin{equation}
\label{scvariation}
\delta \form{\mathcal{L}} = \left(
(\mathcal{E}_G)_{ab} \delta g^{ab} + \mathcal{E}_\alpha \delta
\alpha + \mathcal{E}_A \delta A + \nabla_a \theta^a \right)
\form{\epsilon}
\end{equation}
where $\mathcal{E}_A$ denotes the matter equations of motion,
\begin{multline}
\label{fReineq}
(\mathcal{E}_G)_{ab} = \frac{1}{16 \pi} \bigg[ f'(\alpha) G_{ab} -
\nabla_a \nabla_b
f'(\alpha) + g_{ab} \Box f'(\alpha) \\ \left. - \frac{1}{2} g_{ab} (
f(\alpha) - \alpha f'(\alpha)) - 8 \pi T_{ab} \right]
\end{multline}
and
\begin{equation}
\mathcal{E}_\alpha = f''(\alpha) (R - \alpha),
\end{equation}
and the vector $\theta^a$ is our symplectic potential current:
\begin{multline}
\label{fRtheta}
\theta^a = f'(\alpha)
\theta^a_{\text{Ein}} + \theta^a_\text{mat} \\ + \frac{1}{16 \pi}
\left( (\nabla_b
f'(\alpha)) \delta g^{ab} - (\nabla^a f'(\alpha)) g_{bc} \delta
g^{bc} \right).
\end{multline}
The vector $\theta^a_\text{mat}$ above is the symplectic potential
current resulting from variation of $\form{\mathcal{L}}_\text{mat}$,
and $\theta^a_\text{Ein}$ is the symplectic potential current for pure
Einstein gravity, i.e.,
\begin{equation}
\label{thetaein}
\theta^a_\text{Ein} = \frac{1}{16 \pi} \left( g_{bc} \nabla^a \delta
g^{bc} - \nabla_b \delta g^{ab} \right).
\end{equation}
The equations of motion are then given by $(\mathcal{E}_G)_{ab} = 0$
and $\mathcal{E}_\alpha = 0$.
Assuming that $f''(\alpha) \neq 0$, this second equation implies that
$R = \alpha$, and
substituting this relation into \eqref{fReineq} yields the equation
of motion obtained in \eqref{4thordereqn}. Hereafter, we will use
the form of the equations obtained from the action
\eqref{sctensaction}.
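To see the equivalence explicitly, note that when we set $\alpha = R$
in \eqref{fReineq} and expand $G_{ab} = R_{ab} - \frac{1}{2} g_{ab} R$,
the $-\frac{1}{2} g_{ab} R f'(R)$ piece of $f'(R) G_{ab}$ cancels
against the $+\frac{1}{2} g_{ab} \alpha f'(\alpha)$ term, leaving
\begin{multline}
16 \pi \, (\mathcal{E}_G)_{ab} \big|_{\alpha = R} = f'(R) R_{ab} -
\frac{1}{2} f(R) g_{ab} \\ - \nabla_a \nabla_b f'(R) + g_{ab} \Box
f'(R) - 8 \pi T_{ab},
\end{multline}
so that $(\mathcal{E}_G)_{ab} = 0$ is indeed equivalent to
\eqref{4thordereqn}.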
\subsection{Obtaining a variational principle}
Before we examine the stability of spherically symmetric static
solutions in $f(R)$ gravity with perfect fluid matter, we must first
consider the question of whether physically realistic solutions exist.
In particular, do there exist solutions to the equations of motion
which reproduce Newtonian gravity, up to small relativistic
corrections? There has recently been a good deal of debate on this
subject \cite{fRint1, fRint2, fRint3, fRrefute, fRsolar}. We will not
comment directly on this controversy here except to say that if $f(R)$
gravity (or any theory) does not allow interior solutions with $R
\approx 8 \pi G \rho$ to be matched to exterior solutions with $R$
close to zero, then it is difficult to see how such a theory could
reproduce Newtonian dynamics in a nearly-flat spacetime. We will
therefore give the theory the ``benefit of the doubt,'' and assume
that such solutions exist.
An initial attempt to address the stability of such solutions was made
by Dolgov and Kawasaki \cite{DolKaw}; their perturbation analysis
implied that stars would be extremely unstable in Carroll \textsl{et
al.}'s $f(R)$ gravity, with a characteristic time scale of
approximately $10^{-26}$ seconds. Their results, while suggestive,
nevertheless failed to take into account the constrained nature of the
theory: the stress-energy tensor, the metric, and the scalar $\alpha$
cannot all be varied independently. The imposition of a constraint
can, of course, change whether or not a given system is stable; as a
trivial example, consider a particle moving in the potential $V(x,y) =
x^2 - y^2$, with and without the constraint $y =
\text{const}$.\footnote{In certain scalar-tensor theories
\cite{Harada}, the scalar perturbations decouple from the metric and
matter perturbations; it is then legitimate in such theories to
``ignore'' the constraints if we concern
ourselves only with the scalar field. However, this decoupling does
not occur in $f(R)$ gravity theory. To see this, we can rewrite the
Lagrangian in terms of a rescaled metric $\tilde{g}^{ab} =
\Omega^{-2}(\sigma) g^{ab} = e^{\sigma/2 \sqrt{3 \pi}} g^{ab}$,
putting it in the form \cite{Odintsov, Chiba}
\begin{equation}
\form{\mathcal{L}} = \left( \frac{1}{16 \pi} \tilde{R} - \frac{1}{2}
\nabla_a \sigma \nabla^a \sigma - V(\sigma) \right)
\form{\tilde{\epsilon}} + \form{\mathcal{L}_\text{mat}}[A, e^{\sigma
/ 2 \sqrt{3 \pi}} \tilde{g}^{ab}]
\end{equation}
where $\sigma$ is related to the scalar $\alpha$ by $f'(\alpha) = e^{
\sigma/2 \sqrt{3 \pi}}$ and the exact form of the potential
$V(\sigma)$ is determined by the function $f(R)$. The decoupling of
the scalar perturbations, however, requires $V(\sigma_0) = 0$ and
$\Omega'(\sigma_0)/\Omega = 0$, where $\sigma_0$ is the background
value of $\sigma$. The former condition will not hold for a generic
$f(R)$, and the latter condition will not hold for any $f(R)$ for which $f''(R) \neq 0$.
}
In what follows, we will show that Dolgov and Kawasaki's conclusion is,
nonetheless, correct: stars in CDTT $f(R)$ gravity do in fact have an
ultra-short timescale instability.
To describe the fluid matter, we will use the ``Lagrangian
coordinate'' formalism, as in Section V of \cite{GVP}. In this
formalism, the fluid is described by considering the manifold
$\mathcal{M}$ of all fluid worldlines in the spacetime, equipped with
a volume three-form $\form{N}$. If we introduce three ``fluid
coordinates'' $X^A$ on $\mathcal{M}$, with $A$ running from one to
three, then the motion of the fluid in our spacetime manifold $M$ is
completely described by a map $\chi: M \to \mathcal{M}$ associating
with every spacetime event $x$ the fluid worldline $X^A(x)$ passing
through it. The matter Lagrangian is then given by
\begin{equation}
\form{\mathcal{L}} = - \varrho(\nu) \form{\epsilon}
\end{equation}
where $\nu$, the ``number density'' of the fluid, is given by
\begin{equation}
\nu^2 = \frac{1}{6} N_{abc} N^{abc}.
\end{equation}
In turn, $N_{abc}$, the ``number current'' of the fluid, is given by
\begin{equation}
N_{abc} = N_{ABC}(X) \nabla_a X^A \nabla_b X^B \nabla_c X^C.
\end{equation}
We will be purely concerned with spherically symmetric solutions and
radial perturbations of the fluid; thus, we will take our Lagrangian
coordinates to be of the form
\begin{align}
X^R &= r, & X^\Theta &= \theta, & X^\Phi &= \varphi
\end{align}
in the background, and consider only perturbations $\delta X^R \equiv
\xi(r,t)$ at first order (i.e., $\delta X^\Theta = \delta X^\Phi = 0$.)
We can then easily obtain the background equations of motion; these
are
\begin{subequations}
\begin{multline} \label{fRGtteq0}
(\mathcal{E}_G)_{tt} = e^{2 \Phi - 2 \Lambda} \left[ - \frac{\partial^{2}}{\partial r^{2}}
f'(\alpha) + \left( \fpder{\Lambda}{r} - \frac{2}{r} \right)
\frac{\partial}{\partial r} f'(\alpha) \right. \\
\left. + \left(\frac{2}{r}
\fpder{\Lambda}{r} + \frac{1}{r^2} (e^{2 \Lambda } - 1) \right)
f'(\alpha) \right] \\
- \frac{1}{2} e^{2 \Phi } (\alpha f'(\alpha) - f(\alpha)) - e^{2 \Phi } \varrho = 0,
\end{multline}
\begin{multline} \label{fRGrreq0}
(\mathcal{E}_G)_{rr} = \left(\fpder{\Phi}{r} + \frac{2}{r} \right)
\frac{\partial}{\partial r} f'(\alpha) \\ + \left(\frac{2}{r}
\fpder{\Phi}{r} - \frac{1}{r^2} (e^{2 \Lambda} - 1) \right) f'(\alpha)
\\ + \frac{1}{2} e^{2 \Lambda} (\alpha f'(\alpha) - f(\alpha)) - e^{2
\Lambda} \left( \varrho' \nu - \varrho \right) = 0,
\end{multline}
\begin{multline}
\label{fRGththeq0}
(\mathcal{E}_G)_{\theta \theta} = \\
r^2 e^{-2 \Lambda} \left[ \frac{\partial^{2}}{\partial r^{2}} f'(\alpha) + \left( \fpder{\Phi}{r} - \fpder{\Lambda}{r}
+ \frac{1}{r} \right) \frac{\partial}{\partial r} f'(\alpha) \right. \\
\left. + \left(\fpdert{\Phi}{r} -
\fpder{\Phi}{r} \fpder{\Lambda}{r} + \left(\fpder{\Phi}{r} \right)^2 +
\frac{1}{r} \left( \fpder{\Phi}{r} - \fpder{\Lambda}{r} \right)
\right)f'(\alpha) \right] \\
- \frac{r^2}{2} (\alpha f'(\alpha) - f(\alpha)) - r^2 \left( \varrho' \nu -
\varrho \right) = 0,
\end{multline}
\begin{multline} \label{fRscalareq}
\mathcal{E}_\alpha = 2 f''(\alpha) e^{-2 \Lambda} \left[ -\fpdert{\Phi}{r} + \fpder{\Phi}{r}
\fpder{\Lambda}{r} - \left(\fpder{\Phi}{r} \right)^{2} \right. \\
\left. + \frac{2}{r}
\left(\fpder{\Lambda}{r} - \fpder{\Phi}{r} \right) + \frac{1}{r^{2}} \left( e^{2 \Lambda} -
1 \right)\right] - f''(\alpha) \alpha = 0,
\end{multline}
and
\begin{equation} \label{fRhydroeq}
(\mathcal{E}_X)_R = \varrho'' \nu \fpder{\nu}{r} + \fpder{\Phi}{r}
\varrho' \nu = 0,
\end{equation}
\end{subequations}
where $\varrho' = \mathrm{d} \varrho/\mathrm{d} \nu$. These equations are not all
independent; in particular, the Bianchi identity implies that
\eqref{fRGththeq0} is automatically satisfied if the other four
equations are satisfied. Note that under the substitutions $\varrho
\to \rho$, $\varrho' \nu - \varrho \to P$, the ``matter terms'' in
these equations take on their familiar forms for a perfect fluid.
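For a fluid whose energy density $\varrho$ depends only on the number
density $\nu$, the pressure is
\begin{equation}
P = \nu \frac{\mathrm{d} \varrho}{\mathrm{d} \nu} - \varrho = \varrho' \nu - \varrho,
\end{equation}
so that $\fpder{P}{r} = \varrho'' \nu \fpder{\nu}{r}$ and $\rho + P =
\varrho' \nu$; in particular, \eqref{fRhydroeq} is simply the
relativistic equation of hydrostatic equilibrium, $\fpder{P}{r} =
-(\rho + P) \fpder{\Phi}{r}$.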
We next obtain the symplectic form for the theory. The contribution
$\theta^a_\text{grav}$ to the symplectic potential current from the
gravitational portion of the action is given by \eqref{fRtheta}; to
obtain the symplectic form, we then take the antisymmetrized second
variation of $\form{\theta}$, as in \eqref{genomegaalt}. Performing
this variation, we find that
\begin{multline}
\label{fRtensomega}
\omega^a_\text{grav} = f'(\alpha) \omega^a_{\text{Ein}} + \frac{1}{2}
\delta_1 g^{bc} \delta_2 g^{ad} g_{bc} \nabla_d (f'(\alpha)) \\ +
\left[ \delta_1 (f'(\alpha)) \nabla_b \delta_2 g^{cd} - \nabla_b
\left(\delta_1 (f'(\alpha)) \right) \delta_2 g^{cd} \right] \\ \times
(g^{ab} g_{cd} - \delta^a {}_c \delta^b {}_d ) - [1 \leftrightarrow 2]
\end{multline}
where $\omega^a_\text{Ein}$ is defined, analogously to
$\theta^a_\text{Ein}$, to be the symplectic current associated with
pure Einstein gravity. This is equal to \cite{BurnettWald}
\begin{equation}
\label{omegaein}
\omega_\text{Ein}^a = S^{a} {}_{bc} {}^d {}_{ef} (\delta_2 g^{bc}
\nabla_d \delta_1 g^{ef} - \delta_1 g^{bc} \nabla_d \delta_2 g^{ef}),
\end{equation}
where
\begin{multline}
S^{a} {}_{bc} {}^d {}_{ef} = \frac{1}{16 \pi} \left( \delta^a {}_e
\delta^d {}_c g_{bf} - \frac{1}{2} g^{ad} g_{be} g_{cf} \right. \\
\left. - \frac{1}{2} \delta^a {}_b \delta^d {}_c g_{ef} - \frac{1}{2}
\delta^a {}_e \delta^d {}_f g_{bc} + \frac{1}{2} g^{ad} g_{bc} g_{ef}
\right).
\end{multline}
The contribution to the symplectic current from the matter terms in
the Lagrangian coordinate formalism was calculated in \cite{GVP}; we
simply cite the result for the $t$-component of $\omega^a$ in a static
background here:
\begin{multline}
\label{csympcurr}
t_a \omega^a_\text{mat} = - t_a \frac{\varrho'}{2 \nu} N_{ABC}
\nabla_b X^B \nabla_c X^C \left[ \delta_1 g^{ad} \delta_2 X^A N_{d}
{}^{bc} \right. \\
\left. + 3 \delta_2 X^A \nabla^{[a} (N_{DEF}
\delta_1 X^D) \nabla^b X^E \nabla^{c]} X^F - [1 \leftrightarrow 2] \right],
\end{multline}
where the antisymmetrization in the second term is over the tensor
indices only (not the fluid-space indices). Writing out $\omega^t$ in
terms of our perturbational variables, we have
\begin{multline}
\label{fRomegat}
\omega^t = \varrho' \nu e^{2 \Lambda - 2 \Phi} \left( \fpder{\xi_1}{t}
\xi_2 - \fpder{\xi_2}{t} \xi_1 \right) \\ + e^{- 2 \Phi} \left( b_1
\fpder{\lambda_2}{t} - \fpder{b_1}{t} \lambda_2 - b_2
\fpder{\lambda_1}{t} + \fpder{b_2}{t} \lambda_1 \right)
\end{multline}
where we have defined $b = \delta( f'(\alpha)) = f''(\alpha) \delta
\alpha$. (Note that the $t$-component of the symplectic current about
a spherically symmetric static solution vanishes in pure Einstein
gravity, i.e., $\omega^t_\text{Ein} = 0$.)
The first main step towards obtaining a variational principle is to
solve the linearized constraints. To do this, we must calculate the
constraint tensor $C_{ab}$, as defined in \eqref{Cdef}. Since all the
fields in our theory other than the metric are scalars with respect to
the spacetime metric, the resulting expression is particularly simple:
\begin{equation}
\label{fRC}
C_{ab} = 2 (\mathcal{E}_G)_{ab},
\end{equation}
where $(\mathcal{E}_G)_{ab}$, the metric equation of motion, is given
by \eqref{fReineq}. We can then find algebraic equations for
$\lambda$ and $\pder{\phi}{r}$. The algebraic equation for $\lambda$
will be given by the solution of the equation $F = 0$, where $F$ is
given by \eqref{Fdef1} and \eqref{Fdef2}. Similarly, an algebraic
equation for $\pder{\phi}{r}$ can be found by solving the equation
\begin{equation}
\delta C_{rr} = (\delta \mathcal{E}_G)_{rr} = 0.
\end{equation}
The situation is thus very similar to the example of pure Einstein
gravity coupled to perfect fluid matter described in \cite{GVP}; only
the precise forms of the perturbational equations of motion $F = 0$ and
$(\delta \mathcal{E}_G)_{rr} = 0$ are different. In the $f(R)$
gravity case, these equations are
\begin{equation}
\label{fRF}
F = \mathcal{S} \lambda - \fpder{b}{r} + \fpder{\Phi}{r} b - 8 \pi
e^{2 \Lambda} \varrho' \nu \xi
\end{equation}
and
\begin{multline}
\label{fRGrr1}
(\delta \mathcal{E}_G)_{rr} = e^{- 2 \Lambda} \mathcal{S}
\fpder{\phi}{r} - e^{-2 \Phi} \frac{\partial^2 b}{\partial t^2} -
e^{-2 \Lambda} \left( \fpder{\Phi}{r} + \frac{2}{r} \right)
\fpder{b}{r} \\ - e^{-2 \Lambda} \left( 2 \left( \fpder{\Phi}{r} +
\frac{2}{r} \right) \mathcal{S} - \frac{6}{r^2} f'(\alpha) \right)
\lambda \\ + \left( e^{-2 \Lambda} \frac{2}{r} \fpder{\Phi}{r} -
\frac{1}{r^2} \left( 1 - e^{-2 \Lambda} \right) + \frac{\alpha}{2}
\right) b \\ - 8 \pi \varrho'' \nu^2 \left( \fpder{\xi}{r} + \left(
\fpder{\Lambda}{r} + \frac{2}{r} + \frac{1}{\nu} \fpder{\nu}{r}
\right) \xi - \lambda \right),
\end{multline}
where we have introduced the new quantity
\begin{equation}
\label{Sdef}
\mathcal{S} = \fpder{}{r} f'(\alpha) +\frac{2}{r} f'(\alpha)
\end{equation}
for notational convenience. Equation \eqref{fRF} then implies that
our algebraic equation for $\lambda$ is
\begin{equation}
\label{fRlambda}
\lambda = \mathcal{S}^{-1} \left( \fpder{b}{r} - \fpder{\Phi}{r} b +
8 \pi e^{2 \Lambda} \varrho' \nu \xi \right).
\end{equation}
We could plug this result into \eqref{fRGrr1} to obtain an algebraic
equation for $\pder{\phi}{r}$; however, the resulting expression is
somewhat complicated. In fact, we can derive a simpler expression for
$\pder{\phi}{r}$. To do so, we note that $(\mathcal{E}_G)_{ab} -
\frac{1}{2} g_{ab} (\mathcal{E}_G)_c {}^c = 0$ if and only if
$(\mathcal{E}_G)_{ab} = 0$. In the case of $f(R)$ gravity, this
equation is given by
\begin{multline}
f'(\alpha) R_{ab} - \frac{1}{2} g_{ab} \Box f'(\alpha) - \nabla_a
\nabla_b f'(\alpha) \\ + \frac{1}{2} g_{ab} \left(f(\alpha) -
f'(\alpha) \alpha \right) - 8 \pi \left( T_{ab} - \frac{1}{2} g_{ab}
T^c {}_c \right) = 0.
\end{multline}
Using the trace of this equation to eliminate the $\Box f'(\alpha)$
term, along with the equation $R = \alpha$, we can show that
\begin{multline}
\label{fRneweq}
f'(\alpha) R_{ab} + \frac{1}{6} g_{ab} \left(f(\alpha) - 2 f'(\alpha)
\alpha \right) - \nabla_a \nabla_b f'(\alpha) \\ - 8 \pi \left( T_{ab}
- \frac{1}{3} g_{ab} T^c {}_c \right) = 0.
\end{multline}
To zero-order, the $\theta \theta$-component of this equation is
\begin{multline}
\label{fRneweq0}
\frac{1}{r} f'(\alpha) \left( \fpder{\Lambda}{r} - \fpder{\Phi}{r} +
\frac{1}{r} \left( e^{2 \Lambda} - 1 \right) \right) \\ + \frac{1}{6}
e^{2 \Lambda} \left( f(\alpha) - 2 \alpha f'(\alpha) \right) -
\frac{1}{r} \fpder{}{r}f'(\alpha) - \frac{8 \pi}{3} e^{2 \Lambda}
\varrho = 0,
\end{multline}
and to first order, it is
\begin{widetext}
\begin{multline}
\label{fRphipr}
\frac{1}{r} f'(\alpha) \left( \fpder{\lambda}{r} - \fpder{\phi}{r} +
\frac{2}{r} e^{2 \Lambda} \lambda \right) + \left( \frac{1}{r} \left(
\fpder{\Lambda}{r} - \fpder{\Phi}{r} \right) + \frac{1}{r^2} \left(
e^{2 \Lambda} - 1 \right) - \frac{1}{6} e^{2 \Lambda} \left(
\frac{f'(\alpha)}{f''(\alpha)} + 2 \alpha \right) \right) b -
\frac{1}{r} \fpder{b}{r} \\ + \frac{1}{3} e^{2 \Lambda} \left(
f(\alpha) - 2 \alpha f'(\alpha) - 16 \pi \varrho \right) \lambda -
\frac{8 \pi}{3} \varrho' \nu e^{2 \Lambda} \left( \fpder{\xi}{r} +
\left( \fpder{\Lambda}{r} + \frac{2}{r} + \frac{1}{\nu} \fpder{\nu}{r}
\right) \xi - \lambda \right) = 0.
\end{multline}
\end{widetext}
This equation, with $\lambda$ given by \eqref{fRlambda}, can then be
solved algebraically for $\pder{\phi}{r}$ in terms of the matter
variables and their derivatives.
Our next step is to obtain the reduced matter equations of motion,
i.e., eliminate the metric degrees of freedom $\lambda$ and
$\pder{\phi}{r}$ from the remaining first-order equations of motion.
These remaining equations of motion are the ``scalar'' equation $R -
\alpha = 0$, which at first order is
\begin{multline}
-\frac{\partial^2 \phi}{\partial r^2} + \fpder{\Phi}{r}
\fpder{\lambda}{r} + \fpder{\Lambda}{r} \fpder{\phi}{r} - 2
\fpder{\Phi}{r} \fpder{\phi}{r} + \frac{2}{r} \left(
\fpder{\lambda}{r} - \fpder{\phi}{r} \right) \\ + \frac{2}{r^{2}}
e^{2 \Lambda} \lambda + e^{2 \Lambda - 2 \Phi} \frac{\partial^2
\lambda}{\partial t^2} - e^{2 \Lambda} \left( \alpha \lambda +
\frac{b}{2 f''(\alpha)} \right) = 0
\end{multline}
and the matter equation of motion, which as in the case of pure
Einstein gravity is
\begin{multline}
\varrho' \left[ -e^{2 \Lambda - 2 \Phi} \frac{\partial^2 \xi}{\partial
t^2} + \frac{\partial \phi}{\partial r} \right] \\ + \left(
\frac{\partial}{\partial r} + \frac{\partial \Phi}{\partial r}
\right) \left[ \varrho'' \nu \left( \frac{\partial \xi}{\partial r}
+ \left( \frac{1}{\nu} \frac{\partial \nu}{\partial r} +
\frac{\partial \Lambda}{\partial r} + \frac{2}{r} \right) \xi -
\lambda \right)\right] \\ = 0.
\end{multline}
We can then eliminate the gravitational equations of motion using
\eqref{fRlambda} and \eqref{fRphipr}, leaving equations solely in
terms of the matter variables $b$ and $\xi$. Equivalently, we can
write our ``reduced'' equations in terms of $b$ and a new variable
$\zeta$, defined by
\begin{equation}
\label{zetadef}
\zeta \equiv \xi - \mathcal{S}^{-1} b.
\end{equation}
(The utility of this new variable will become evident when we obtain
the reduced symplectic form.) Performing these algebraic
manipulations, the resulting reduced evolution equation for $\zeta$
takes the form
\begin{equation}
\label{fRzetaeq}
e^{2 \Lambda - 2 \Phi} \varrho' \nu \frac{\partial^2 \zeta}{\partial
t^2} = \mathcal{A}_1 \frac{\partial^2 \zeta}{\partial r^2} +
\mathcal{A}_2 \fpder{\zeta}{r} + \mathcal{A}_3 \zeta + \mathcal{A}_4
\fpder{b}{r} + \mathcal{A}_5 b
\end{equation}
where the coefficients $\mathcal{A}_i$ are dependent on the background
fields. Similarly, the evolution equation for $b$ takes the form
\begin{equation}
\label{fRbeq}
e^{ - 2 \Phi} \frac{\partial^2 b}{\partial t^2} = \mathcal{B}_1
\frac{\partial^2 b}{\partial r^2} + \mathcal{B}_2 \fpder{b}{r} +
\mathcal{B}_3 b + \mathcal{B}_4 \fpder{\zeta}{r} + \mathcal{B}_5 \zeta.
\end{equation}
We see from the form of the above equations that $f(R)$ gravity has
cleared another hurdle required to have a valid variational principle:
the reduced equations do in fact take the form \eqref{simpleform},
containing only second derivatives in time and only up to second
radial derivatives.
We can also apply these constraint equations to the symplectic form;
here, we only need to eliminate $\lambda$, using the first-order
constraint equation \eqref{fRlambda}.  Performing an
integration by parts to eliminate the mixed derivatives that result,
and applying the background equations of motion, we obtain our reduced
symplectic form:
\begin{multline}
\label{fRsympform}
\Omega = 4 \pi \int \mathrm{d} r \, r^2 e^{\Lambda + \Phi} \left[ e^{-2
\Phi} \mathcal{S}^{-2} \frac{6}{r^2} f'(\alpha) \fpder{b_1}{t} b_2
\right. \\ \left. + e^{2 \Lambda - 2 \Phi} \varrho' \nu
\fpder{\zeta_1}{t} \zeta_2 \right] - [1 \leftrightarrow 2].
\end{multline}
As noted above \eqref{Wdef}, this reduced symplectic form defines a
three-form $\form{W}_{\alpha \beta}$. For a valid variational
principle to exist, this $\form{W}_{\alpha \beta}$ must be positive
definite in the sense of \eqref{Wposdef}. We can see that the
$\form{W}_{\alpha \beta}$ defined by \eqref{fRsympform} is positive
definite if and only if $f'(\alpha) > 0$ and $\varrho' \nu > 0$ in our
background solutions. The latter condition will hold for any matter
satisfying the null energy condition, since $\varrho' \nu = \rho + P$;
however, the former condition must be checked in order to determine
whether a valid variational principle exists.\footnote{Should
$f'(\alpha)$ fail to be positive in the background solutions, all is
not necessarily lost; we can still attempt to analyze the reduced
equations that we have obtained. See Section \ref{TeVeSsec} for an
example of such an analysis.} For the particular $f(R)$ chosen by
Carroll \textsl{et al.} \cite{CDTT}, we have $f'(R) = 1 + \mu^4 / R^2
> 0$, and so $\form{W}_{\alpha \beta}$ is always positive definite in
the required sense for this choice of $f(R)$.
All that remains is to actually write down the variational principle
for $f(R)$ gravity. As noted above \eqref{basicvarprin}, the
variational principle will take the form
\begin{equation}
\label{fRsimpform}
\omega_0^2 \leq \frac{ (\psi, \mathcal{T} \psi)}{(\psi, \psi)}
\end{equation}
where $\mathcal{T}$ is the time-evolution operator. In our case,
$\psi$ denotes a two-component vector, whose components are the
functions $\zeta$ and $b$. The inner product in which $\mathcal{T}$
is self-adjoint can be read off from \eqref{fRsympform}, and thus the
denominator of \eqref{fRsimpform} will be
\begin{equation}
\label{fRdenom}
(\psi, \psi) = 4 \pi \int \mathrm{d} r \, r^2 e^{\Lambda - \Phi} \left[
\mathcal{S}^{-2} f'(\alpha) \frac{6}{r^2} b^2 + e^{2 \Lambda} \varrho'
\nu \zeta^2 \right].
\end{equation}
The numerator of \eqref{fRsimpform} is, of course, rather more
complicated. After multiple integrations by parts and applications of
the background equations of motion, we can put this quantity in the
form
\begin{multline}
\label{fRnumer}
(\psi, \mathcal{T} \psi) = 4 \pi \int \mathrm{d} r \, \left[ \mathcal{C}_1
\left( \fpder{b}{r} \right)^2 + \mathcal{C}_2 \left( \fpder{\zeta}{r}
\right)^2 \right. \\ \left. + \mathcal{C}_3 \left( \zeta \fpder{b}{r}
- b \fpder{\zeta}{r} \right) + \mathcal{C}_4 b^2 + \mathcal{C}_5
\zeta^2 + 2 \mathcal{C}_6 b \zeta \right]
\end{multline}
where the $\mathcal{C}_i$ coefficients are functions of the background
fields. The exact forms of the $\mathcal{C}_i$ are given in Appendix
\ref{fRapp}.
\subsection{Discussion}
All of our results thus far have been independent of the choice of
$f(R)$ (assuming, of course, that $f''(R) \neq 0$.) In the case of
Carroll \textsl{et al.}'s $f(R)$ gravity, we can now use this
variational principle to show that this theory is highly unstable for
Newtonian solutions. For the choice $f(R) = R - \mu^4 / R$, we have
$f'(R) = 1 + \mu^4 / R^2$ and $f''(R) = - 2 \mu^4 / R^3$. Moreover,
for a quasi-Newtonian stellar interior, we will have $\alpha = R
\approx \rho$, where $\rho$ is the matter density in the star. In
particular, this implies that
\begin{equation}
\label{fRapproxvar}
\mathcal{C}_4 \approx 2 e^{\Phi + \Lambda} \mathcal{S}^{-2}
\frac{(f'(\alpha))^2}{f''(\alpha)} \approx - e^{\Phi + \Lambda}
\mathcal{S}^{-2} \frac{\rho^3}{\mu^4}
\end{equation}
since the above term will dominate over all the others in
$\mathcal{C}_4$. (Note that for a star with the density of the Sun,
and the choice of $\mu \approx 10^{-27} \text{ m}^{-1}$ made by
Carroll \textsl{et al.}, $\rho/\mu^2 \approx 10^{30}$.) This implies
that for a test function $\psi$ with $\zeta$ set to zero, we will have
\begin{equation}
\omega_0^2 \lesssim \frac{\int \mathrm{d} r \, e^{\Lambda + \Phi}
\mathcal{S}^{-2} \left[ - \frac{\rho^3}{\mu^4} b^2 + 6 e^{-2 \Lambda}
\left( \fpder{b}{r} \right)^2 \right] }{ 6 \int \mathrm{d} r \, e^{\Lambda -
\Phi} \mathcal{S}^{-2} b^2 }
\end{equation}
(note that $f'(\alpha) = 1 + \mu^4/\rho^2 \approx 1$ in the stellar
interior.) As a representative mass distribution, we take the
Newtonian mass profile of an $n=1$ polytrope:
\begin{equation}
\rho(r) = \rho_0 \frac{ R \sin \left( \frac{r}{R} \right)}{r}
\end{equation}
where $R$ is the radius of the star and $\rho_0$ is its central
density. We take $\rho_0$ to be of a typical stellar density, $\rho_0
\approx 10^{-24} \text{ metres}^{-2}$.  Numerically evaluating the
above bound, using the approximation \eqref{fRapproxvar} and a test
function of the form
\begin{equation}
\label{fRtestfn}
b = \begin{cases} 1 - \frac{r}{R} & r \leq R \\ 0 & r > R \end{cases},
\end{equation}
we find that
\begin{equation}
\label{fRbound}
\omega_0^2 \lesssim -9 \times 10^{35} \text{ metres}^{-2}
\end{equation}
which corresponds to an instability timescale of $\tau \approx 4
\times 10^{-27}$ seconds. This timescale is of the same magnitude as
the instability found in \cite{DolKaw}.
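The order of magnitude of this bound is easily checked numerically.
The following is a minimal sketch (not part of the analysis above) of
the integration, carried out in geometric units and in the
quasi-Newtonian limit $e^{\Phi} \approx e^{\Lambda} \approx 1$,
$f'(\alpha) \approx 1$, so that $\mathcal{S} \approx 2/r$ and the
common factor of $\mathcal{S}^{-2}$ cancels between numerator and
denominator; the stellar length scale \texttt{Rs} below is an assumed
value of the order of the solar radius.
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

rho0 = 1.0e-24   # central density, m^-2 (geometric units)
mu   = 1.0e-27   # CDTT scale mu, m^-1
Rs   = 7.0e8     # stellar length scale, m (assumed)

r   = np.linspace(1.0, Rs, 200001)
rho = rho0 * Rs * np.sin(r / Rs) / r   # n = 1 polytrope
b   = 1.0 - r / Rs                     # test function b(r)
db  = np.full_like(r, -1.0 / Rs)       # db/dr

num = trapezoid(r**2 * (-rho**3 / mu**4 * b**2
                        + 6.0 * db**2), r)
den = trapezoid(6.0 * r**2 * b**2, r)

w2  = num / den                     # bound on omega_0^2, m^-2
tau = 1.0 / np.sqrt(-w2) / 2.998e8  # e-folding time, seconds
print(w2, tau)                      # ~ -1e35 m^-2, ~ 1e-26 s
\end{verbatim}
The result, $\omega_0^2 \lesssim -10^{35} \text{ metres}^{-2}$ with
$\tau \sim 10^{-26}$ seconds, agrees with \eqref{fRbound} to within an
order of magnitude; the residual difference reflects the crude
treatment of the metric factors and of the stellar profile.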
We see, then, that for the $f(R)$ originally chosen by Carroll
\textsl{et al.}, the theory is extremely unstable in the presence of
matter. Moreover, a similar argument applies to any choice of
$f(R)$ for which quasi-Newtonian solutions exist and for which
$f''(R)$ is sufficiently small and negative at stellar-density
curvature scales. We can always pick a test function $b(r)$ lying
entirely inside the stellar interior such that $\pder{b}{r}$ is of
order $b/R$. Thus, if $\left| f'(\rho) / f''(\rho) \right| \gg 1/R^2$
for a typical stellar density $\rho$, the $b^2$ term in the numerator
\eqref{fRnumer} (with $\mathcal{C}_4$ approximated by
\eqref{fRapproxvar}) will dominate the $(\pder{b}{r})^2$ term, and the
resulting upper bound on $\omega_0^2$ will then be of order
$f'(\alpha) / f'' (\alpha)$. If the choice of $f(\alpha)$ results in
this quantity being negative, quasi-Newtonian stellar solutions will
be unstable in the corresponding theory. We can thus rule out any
theory (e.g., \cite{CDTT, Zhang}) for which $f''(\rho)$ is
sufficiently small and negative.
This result seems to be closely related, if not identical to, the
``high-curvature'' instabilities found in cosmological solutions by
Song, Hu, and Sawicki \cite{SoHuSa}. Particularly telling is the fact
that the instability time scale found in their work is proportional to
$\sqrt{|f''(R)/f'(R)|}$, the same time-scale found in the present
work.
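Indeed, for the CDTT choice at a stellar-density curvature scale $R
\approx \rho \approx 10^{-24} \text{ m}^{-2}$, we have
\begin{equation}
\sqrt{\left| \frac{f''(R)}{f'(R)} \right|} \approx
\sqrt{\frac{2 \mu^4}{\rho^3}} \approx 1.4 \times 10^{-18} \text{ m}
\approx 5 \times 10^{-27} \text{ s},
\end{equation}
which is of the same order as the timescale obtained from
\eqref{fRbound} above.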
\section{Einstein-{\ae}ther theory \label{aethersec}}
\subsection{Theory}
Einstein-{\ae}ther theory \cite{Aether1,Aetherwave} was first
formulated as a toy model of a gravitational theory in which Lorentz
symmetry is dynamically broken. This theory contains, along with the
metric $g_{ab}$, a vector field $u^a$ which is constrained (via a
Lagrange multiplier $Q$) to be unit and timelike. The Lagrangian
four-form for this theory is
\begin{multline}
\label{EAlag}
\form{\mathcal{L}} = \left( \frac{1}{16 \pi} R + K^{ab} {}_{cd}
\nabla_a u^c \nabla_b u^d + Q (u^a u_a + 1) \right) \form{\epsilon} \\
+ \form{\mathcal{L}}_\text{mat} \left[ A, g^{ab}, u^a \right]
\end{multline}
where $A$ denotes any matter fields present in the theory, and
\begin{equation}
\label{Kdef}
K^{ab} {}_{cd} = c_1 g^{ab} g_{cd} + c_2 \delta^a {}_c \delta^b {}_d +
c_3 \delta^a {}_d \delta^b {}_c - c_4 u^a u^b g_{cd}.
\end{equation}
The $c_i$ constants determine the strength of the vector field's
coupling to gravity, as well as its dynamics.\footnote{Note that our
definitions of the coefficients $c_i$ differ from those in
\cite{Aether1,Aetherwave,Aethersph,AetherBH}, as do the respective
metric signature conventions. The net result is that to translate
between our results and those of the above papers, one must flip the
signs of all four coefficients.} (Note that in the case $c_3 = - c_1
> 0$ and $c_2 = c_4 = 0$, we have the conventional kinetic term for a
Maxwell field; this special case was examined in \cite{Kost1}, prior
to Jacobson and Mattingly's work.)  In the present work, we restrict
attention to the ``vacuum theory'', i.e., to the case in which the
matter fields $A$ are absent.
Performing the variation of the Lagrangian four-form, we find that
\begin{equation}
\label{EAvar}
\delta \form{\mathcal{L}} = \left( (\mathcal{E}_G)_{ab} \delta
g^{ab} + (\mathcal{E}_u)_a \delta u^a + \mathcal{E}_Q \delta Q +
\nabla_a \theta^a \right)
\form{\epsilon}
\end{equation}
where
\begin{subequations}
\begin{multline}
\label{EAeineq}
(\mathcal{E}_G)_{ab} = \frac{1}{16 \pi} G_{ab} + \nabla_c \left(
J^c {}_{(a} u_{b)} + J_{(ab)} u^c - J_{(a} {}^c u_{b)} \right) \\ + c_1
\left( \nabla_a
u^c \nabla_b u_c - \nabla^c
u_a \nabla_c u_b \right) + c_4 \dot{u}_a \dot{u}_b \\ - Q u_a u_b -
\frac{1}{2} g_{ab} J^c {}_d \nabla_c u^d,
\end{multline}
\begin{equation}
\label{EAveceq}
(\mathcal{E}_u)_a = -2 \nabla_b J^b {}_a - 2 c_4 \dot{u}_b \nabla_a
u^b + 2 Q u_a,
\end{equation}
\begin{equation}
\label{EAQeq}
(\mathcal{E}_Q) = u^a u_a + 1,
\end{equation}
\end{subequations}
and
\begin{equation}
\label{EAtheta}
\theta^a = \theta^a_\text{Ein} + 2 J^a {}_b \delta u^b + \left( J_b {}^a u_c
- J^a {}_b u_c - J_{bc} u^a \right) \delta g^{bc}.
\end{equation}
In the above, we have introduced the notation $\dot{u}^a = u^b
\nabla_b u^a$ and $J^a {}_b = K^{ac} {}_{bd} \nabla_c u^d$;
$\theta^a_\text{Ein}$ is defined by \eqref{thetaein}, as above.
Equation \eqref{EAQeq} is the constraint that $u^a$ is unit and
timelike, while \eqref{EAeineq} and \eqref{EAveceq} are the equations
of motion for $g^{ab}$ and $u^a$, respectively. If desired, we can
eliminate the Lagrange multiplier $Q$ from these equations by
contracting \eqref{EAveceq} with $u^a$, resulting in the equation
\begin{equation}
\label{Qeq}
Q = - u^a \nabla_b J^b {}_a - c_4 u^a \dot{u}_b \nabla_a u^b.
\end{equation}
We now take the variation of the symplectic potential current
to obtain the symplectic current. The resulting expression can be
written in the form
\begin{equation}
\label{EAomega}
\omega^a = \omega^a_\text{Ein} + \omega^a_\text{vec}
\end{equation}
where $\omega^a_\text{Ein}$ is the usual symplectic current for
pure Einstein gravity (given by \eqref{omegaein}), and
$\omega^a_\text{vec}$ is obtained by taking the antisymmetrized
variation of the last two terms in \eqref{EAtheta}. The exact form
of this tensor expression is rather complicated, and can be found in
Appendix \ref{EAapp}.
\subsection{Obtaining a variational principle}
As before, we are primarily concerned with the $t$-component of
$\omega^a$, and its form in terms of the perturbational variables. We
will use the usual spherical gauge \eqref{spheremet} for
$g^{ab}$. We will further assume that $u^a \propto t^a$ in the
background solution (the so-called ``static {\ae}ther'' assumption),
and that to first order we have
\begin{equation}
\label{EAupsdef}
u^\mu = (e^{-\Phi} - \phi e^{-\Phi}, \upsilon, 0, 0),
\end{equation}
i.e., the first-order perturbation to $u^t$ is $- \phi e^{-\Phi}$
while the first-order perturbation to $u^r$ is $\upsilon$. (Note that
under these conventions, $g_{ab} u^a u^b = -1$ to first order, as
required.) The background equations of motion can then be calculated
to be
\begin{multline}
\label{EAGtt0}
(\mathcal{E}_G)_{tt} = e^{2\Phi - 2 \Lambda} \left[ \frac{2}{r}
\fpder{\Lambda}{r} +
\frac{1}{r^2} \left( e^{2 \Lambda} - 1 \right) \right. \\ \left. +
c_{14} \left(
\fpdert{\Phi}{r} + \frac{1}{2} \left( \fpder{\Phi}{r} \right)^2 -
\fpder{\Phi}{r} \left( \fpder{\Lambda}{r} - \frac{2}{r} \right)
\right) \right]
\end{multline}
and
\begin{equation}
\label{EAGrr0}
(\mathcal{E}_G)_{rr} = \frac{2}{r} \fpder{\Phi}{r} - \frac{1}{r^2}
\left( e^{2 \Lambda} - 1 \right) - \frac{c_{14}}{2} \left(
\fpder{\Phi}{r} \right)^2,
\end{equation}
where we have defined $c_{14} = c_1 + c_4$ and used the equation
\begin{multline}
\label{EAQ0}
Q = e^{-2 \Lambda} \left[ c_3 \fpder{\Phi}{r} \left(
\fpder{\Lambda}{r} - \frac{2}{r} \right) \right. \\ \left. - (c_1 +
c_3 + 2 c_4) \left(
\fpder{\Phi}{r} \right)^2 - c_3 \fpdert{\Phi}{r} \right]
\end{multline}
to eliminate $Q$. The final equation of motion, $(\mathcal{E}_u)_a =
0$, is satisfied trivially if the {\ae}ther is static.
In terms of these variables, the $t$-component of $\omega^a$ can be
calculated to be
\begin{multline}
\label{EAomegat}
\omega^t = 2 e^{-2\Phi} \left[ c_{123} \fpder{\lambda_1}{t} \lambda_2 +
c_{123} e^{\Phi} \fpder{\upsilon_1}{r} \lambda_2
\right. \\ + e^{\Phi} \left( c_{123} \fpder{\Lambda}{r} + (c_{123}- c_{14})
\fpder{\Phi}{r} + c_2 \frac{2}{r} \right) \upsilon_1 \lambda_2 \\
\left. + c_{14} e^{\Phi} \upsilon_1 \fpder{\phi_2}{r} + e^{2
\Lambda} c_{14} \upsilon_1 \fpder{\upsilon_2}{t} \right] - [1 \leftrightarrow 2]
\end{multline}
where we have defined $c_{123} = c_1 + c_2 + c_3$. In what follows,
we will assume that $c_{14} \neq 0$, and, except where otherwise
noted, that $c_{123} \neq 0$ as well.
Our next step, as usual, is to solve the constraints. However, in
this theory we have the added complication of the presence of a
vector field. This means, in particular, that the tensor $C_{ab}$ (as
defined in \eqref{Cdef}) is not merely proportional to
$(\mathcal{E}_G)_{ab}$, as in the previous section, but is instead
\begin{align}
C_{ab} &= 2 (\mathcal{E}_G)_{ab} + u_a (\mathcal{E}_u)_b \\
&= \nonumber \frac{1}{8 \pi} G_{ab} + 2 \nabla_c \left(
J_{(ab)} u^c - J_{(a} {}^c u_{b)} \right) \\ \nonumber & \quad + 2
\left(\nabla_c J^c {}_{[a}
u_{b]} + \nabla_c u_{(a} J^c {}_{b)} \right) \\ \nonumber & \quad + 2 c_1
\left( \nabla_a u^c \nabla_b u_c - \nabla^c
u_a \nabla_c u_b \right) \\ & \quad - 2 c_4 \left(u_a \dot{u}_c
\nabla_b u^c -
\dot{u}_a \dot{u}_b \right) - g_{ab} J^c {}_d \nabla_c u^d.
\end{align}
Writing out $\delta C_{rt}$ in terms of our perturbational
quantities, we find that it is indeed a total time derivative, with
\begin{multline}
\label{EAFeq}
F = 2 r^2 e^{\Phi - \Lambda} \left( \left( \frac{2}{r} - c_{14}
\fpder{\Phi}{r} \right) \lambda \right. \\ \left. + c_{14} e^{2
\Lambda - \Phi} \frac{\partial
\upsilon}{\partial t} + c_{14} \frac{\partial \phi}{\partial r}
\right).
\end{multline}
Note that this quantity depends on $\phi$; $\phi$ and its derivatives
can appear in the preconstraint equation $F=0$ when our background
tensor fields have non-vanishing $t$-components, as noted in Footnote
\ref{phifoot}.
The remaining non-trivial equations of motion are $(\delta
\mathcal{E}_G)_{rr} = 0$ and $(\delta \mathcal{E}_u)_r = 0$; in terms
of the perturbational variables, these are
\begin{multline}
\label{EAGrreq1}
(\delta \mathcal{E}_G )_{rr} = - \frac{2}{r^2} e^{2\Lambda} \lambda +
c_{123} e^{2 \Lambda - 2\Phi} \frac{\partial^2
\lambda}{\partial t^2} \\ + \left( \frac{2}{r} - c_{14} \fpder{\Phi}{r}
\right) \fpder{\phi}{r} + c_{123} e^{2\Lambda - \Phi}
\frac{\partial^2 \upsilon}{\partial r \partial t} \\
+ e^{2 \Lambda - \Phi} \left( \frac{2 c_2}{r} + c_{123}
\fpder{\Lambda}{r} + (c_{123} - c_{14}) \fpder{\Phi}{r} \right)
\fpder{\upsilon}{t}
\end{multline}
and
\begin{widetext}
\begin{multline}
\label{EAveceq1}
\frac{1}{2}(\delta \mathcal{E}_u)_r = c_{14} e^{2 \Lambda -
2 \Phi} \frac{\partial^2 \upsilon}{\partial t^2} - c_{123} \frac{\partial^2
\upsilon}{\partial r^2} - c_{123} e^{-\Phi} \frac{\partial^2
\lambda}{\partial r \partial t} + c_{14} e^{-\Phi} \frac{\partial^2
\phi}{\partial r \partial
t} \\ - c_{123} \left( \frac{2}{r} + \fpder{\Lambda}{r} +
\fpder{\Phi}{r} \right) \fpder{\upsilon}{r} + e^{-\Phi} \left(
(c_{123} - c_{14}) \fpder{\Phi}{r} - (c_1 + c_3) \frac{2}{r} \right)
\fpder{\lambda}{t} \\ - \left[ (c_{123} - c_{14} ) \frac{\partial^2
\Phi}{\partial r^2} + c_{123} \frac{\partial^2 \Lambda}{\partial
r^2} + c_{14} \fpder{\Lambda}{r} \fpder{\Phi}{r} + \frac{2}{r}
\left( (c_1 + c_3) \fpder{\Lambda}{r} + (c_3 - c_4) \fpder{\Phi}{r}
\right) - c_{123} \frac{2}{r^2} \right] \upsilon.
\end{multline}
\end{widetext}
While we could follow the methods outlined in Section \ref{GVPsec}
to reduce these equations to the basic form
\eqref{simpleform}, it is actually simpler to pursue a different path.
If we solve \eqref{EAFeq} for $\pder{\phi}{r}$, rather than $\lambda$
as usual, and plug the resulting expressions into \eqref{EAGrreq1} and
\eqref{EAveceq1}, there result the equations
\begin{equation}
\label{modEAGrreq1}
\fpder{\psi}{t} - e^{2 \Phi - 2 \Lambda} \frac{2}{r^2} \left(
\frac{2}{c_{14}} + 1 \right) \lambda = 0
\end{equation}
and
\begin{multline}
\label{modEAveceq1}
\fpder{\psi}{r} + \left( \frac{c_1 + c_3 + 1}{c_{123}} \frac{2}{r} -
\fpder{\Phi}{r} \right) \psi \\ - \frac{(c_1 + c_3 + 1)(c_{123} +
2 c_2 - 2)}{c_{123}} \upsilon = 0,
\end{multline}
where we have introduced the new variable $\psi$, defined as
\begin{multline}
\label{EApsidef}
\psi = c_{123} \left( \fpder{\lambda}{t} + e^{\Phi}
\fpder{\upsilon}{r} \right) \\ + \left(c_{123} \left(
\fpder{\Lambda}{r} + \fpder{\Phi}{r} \right) + (c_2 - 1)
\frac{2}{r^2} \right) e^\Phi \upsilon.
\end{multline}
Equations \eqref{modEAveceq1} and \eqref{EApsidef} can then be
combined to eliminate any explicit $\upsilon$ terms:
\begin{multline}
\fpder{\lambda}{t} + \frac{r^2}{a} \left[ \frac{c_{123}}{2} \left(
\fpdert{\psi}{r} + \left( \fpder{\Lambda}{r} -
\fpder{\Phi}{r} + \frac{4}{r} \right) \fpder{\psi}{r}
\right)\right. \\ \left.
+ \left( - c_{123} \fpder{\Lambda}{r} \fpder{\Phi}{r} +
\left(\frac{c_{123}}{c_{14}} + c_{123} - c_2 + 1 \right) \frac{1}{r}
\fpder{\Lambda}{r} \right. \right. \\ \left. \left. + \left(
\frac{c_{123}}{c_{14}} - c_2 + 1 \right)
\frac{1}{r} \fpder{\Phi}{r} \right) \psi \right] = 0
\end{multline}
where we have defined the constant $a$ to be
\begin{equation}
\label{EAadef}
a = (c_1 + c_3 + 1)(c_{123} + 2 c_2 - 2).
\end{equation}
We can then use this equation and \eqref{modEAGrreq1} to write down a
single second-order time-evolution equation for $\psi$:
\begin{multline}
\label{EAredeq}
e^{2\Lambda - 2 \Phi} r^2 \frac{c_{14}}{2 + c_{14}}
\fpdert{\psi}{t} \\ + \frac{r^2}{a} \left[ c_{123} \left(
\fpdert{\psi}{r} + \left( \fpder{\Lambda}{r} -
\fpder{\Phi}{r} + \frac{4}{r} \right) \fpder{\psi}{r} \right)
\right. \\ \left.
+ \left( - c_{123} \fpder{\Lambda}{r} \fpder{\Phi}{r} +
\left(\frac{c_{123}}{c_{14}} + c_1 + c_3 + 1 \right) \frac{1}{r}
\fpder{\Lambda}{r} \right. \right. \\ \left. \left. + \left(
\frac{c_{123}}{c_{14}} - c_2 + 1 \right)
\frac{1}{r} \fpder{\Phi}{r} \right) \psi \right] = 0.
\end{multline}
Moreover, using \eqref{EAFeq} to eliminate the
$\pder{\phi}{r}$ term from \eqref{EAomega}, we find that
\begin{equation}
\label{EAredomega}
\omega^t = 2 c_{123} e^{-2 \Phi} (\lambda_2 \psi_1 - \lambda_1 \psi_2)
\end{equation}
and the reduced symplectic form can be written as
\begin{multline}
\label{EArefsympform}
\Omega(\psi_1, \psi_2) \\ = - 4 \pi \frac{c_{123} c_{14}}{2 + c_{14}}
\int_0^\infty \mathrm{d} r \, r^4 e^{3 \Lambda - 3 \Phi}
\left( \fpder{\psi_1}{t} \psi_2 - \fpder{\psi_2}{t} \psi_1 \right).
\end{multline}
The form $\form{W}_{\alpha \beta}$ thus defined may be positive or
negative definite, depending on the signs of $c_{123}$ and $c_{14}$
(recall that we are assuming that $c_{123}$ and $c_{14}$ are
non-vanishing); however, it is never indefinite. Thus, the
symplectic form is in fact of the required
form \eqref{Wdef}, and the equations of motion can be put in the form
\eqref{simpleform}. We can therefore write down our variational
principle of the form \eqref{basicvarprin}; the denominator is
\begin{equation}
\label{EAdenom}
(\psi, \psi) = - 4 \pi \frac{c_{123} c_{14}}{2 + c_{14}} \int_0^\infty
\mathrm{d} r \, r^4 e^{3 \Lambda - 3 \Phi} \psi^2
\end{equation}
and the numerator is
\begin{widetext}
\begin{multline}
\label{EAnumer}
(\psi, \mathcal{T} \psi) = - 4 \pi \frac{c_{123}}{ a} \int_0^\infty
\mathrm{d} r \,
r^4 e^{\Lambda - \Phi} \left[ c_{123} \left( \fpder{\psi}{r}
\right)^2 \right. \\ \left. +
\left(c_{123} \fpder{\Lambda}{r} \fpder{\Phi}{r} - \left(
\frac{c_{123}}{c_{14}} - c_2 + 1 \right)
\frac{1}{r} \fpder{\Phi}{r} -
\left(\frac{c_{123}}{c_{14}} + c_1 + c_3 + 1 \right) \frac{1}{r}
\fpder{\Lambda}{r} \right) \psi^2 \right].
\end{multline}
\end{widetext}
\subsection{Discussion}
We can now examine the properties of this variational principle to
determine the stability of Einstein-{\ae}ther theory. The simplest
case to examine is that of flat space. Let
us denote the coefficient of $\psi^2$ in \eqref{EAnumer} by $Z(r)$,
i.e.,
\begin{multline}
\label{EAZdef}
Z(r) = c_{123} \fpder{\Lambda}{r} \fpder{\Phi}{r} - \left(
\frac{c_{123}}{c_{14}} - c_2 + 1 \right)
\frac{1}{r} \fpder{\Phi}{r} \\ -
\left(\frac{c_{123}}{c_{14}} + c_1 + c_3 + 1 \right) \frac{1}{r}
\fpder{\Lambda}{r}.
\end{multline}
In the case of flat spacetime, $Z(r)$ vanishes, and we are left with
\begin{equation}
\label{EAvarflat}
\omega_0^2 \leq \frac{ c_{123} (2 + c_{14}) }{c_{14} a} \frac{
\int_0^\infty \mathrm{d} r \, r^4 \left(
\fpder{\psi}{r} \right)^2}{\int_0^\infty
\mathrm{d} r \, r^4 \psi^2}.
\end{equation}
Since both integrands are strictly positive, we conclude that flat
space is stable to spherically symmetric perturbations in
Einstein-{\ae}ther theory if and only if
\begin{equation}
\label{EAflatstab}
\frac{ c_{123} (2 + c_{14}) }{c_{14} (c_1 + c_3 + 1)(c_{123} + 2 c_2
- 2)} > 0.
\end{equation}
This is the same condition found in \cite{Aetherwave} for the
stability of the ``trace'' wave mode (modulo the aforementioned sign
differences). The other stability conditions found in
\cite{Aetherwave} --- those corresponding to the ``transverse
traceless metric'' and ``transverse {\ae}ther'' wave modes --- are
incompatible with spherical symmetry, and thus our analysis cannot
reproduce these conditions.
More broadly, we can also analyze the stability of the general
spherically symmetric solutions described in \cite{Aethersph}. These
(exact) solutions are described in terms of a parameter $Y$:
\begin{equation}
r(Y) = r_\text{min} \frac{Y - Y_-}{Y} \left( \frac{Y - Y_-}{Y -
Y_+} \right)^{\frac{1}{2 + Y_+}}
\end{equation}
\begin{equation}
\Phi(Y) = - \frac{Y_+}{2(2 + Y_+)} \ln \left( \frac{1 - Y/Y_-}{1 -
Y/Y_+} \right)
\end{equation}
\begin{equation}
\Lambda(Y) = \frac{1}{2} \ln \left( - \frac{c_{14}}{8} (Y - Y_+) (Y
- Y_-) \right)
\end{equation}
where the constants $Y_\pm$ are given by
\begin{equation}
Y_\pm = - \frac{4}{c_{14}} \left( -1 \pm \sqrt{1 + \frac{c_{14}}{2}}
\right)
\end{equation}
and $r_\text{min}$ is a constant of integration related to the mass
$M$ via
\begin{equation}
r_\text{min} = \frac{2 M}{- Y_+} (- 1 - Y_+)^{(1 + Y_+)/(2+Y_+)}
\end{equation}
We can then obtain $Z(Y)$ by writing out \eqref{EAZdef} in terms of
these functions of $Y$ (noting that, for example, $\pder{\Lambda}{r} =
(\pder{\Lambda}{Y})/(\pder{r}{Y})$), and then plot $Z$ and $r$
parametrically. The resulting function is
shown in Figure \ref{Zfig}, for $c_{123} = \pm c_{14}$ and $c_{14} =
-0.1$.
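For reference, this parametric construction is straightforward to
automate.  The following sketch (not part of the original analysis)
implements the chain rule $\pder{\Lambda}{r} =
(\pder{\Lambda}{Y})/(\pder{r}{Y})$ symbolically for one representative
choice of coefficients; note that only $c_{123}$, $c_2$ and $c_{14}$
actually enter $Z$, so the individual values of $c_1$, $c_3$ and $c_4$
below are immaterial.  The positive-$Y$ branch, the normalization
$r_\text{min} = 1$, and the sample values of $Y$ are assumptions made
purely for illustration, and should be checked against the conventions
of \cite{Aethersph}.
\begin{verbatim}
import sympy as sp

c1, c2, c3, c4 = -0.1, 0.0, 0.2, 0.0  # c14 = -0.1, c123 = +0.1
c14, c123 = c1 + c4, c1 + c2 + c3

Y  = sp.symbols('Y', positive=True)
Yp = -4/c14*(-1 + sp.sqrt(1 + c14/2))
Ym = -4/c14*(-1 - sp.sqrt(1 + c14/2))
r   = (Y - Ym)/Y*((Y - Ym)/(Y - Yp))**(1/(2 + Yp))  # r_min = 1
Phi = -Yp/(2*(2 + Yp))*sp.log((1 - Y/Ym)/(1 - Y/Yp))
Lam = sp.log(-c14/8*(Y - Yp)*(Y - Ym))/2

drdY = sp.diff(r, Y)
dPhi = sp.diff(Phi, Y)/drdY   # dPhi/dr via the chain rule
dLam = sp.diff(Lam, Y)/drdY   # dLambda/dr via the chain rule

Z = (c123*dLam*dPhi
     - (c123/c14 - c2 + 1)/r*dPhi
     - (c123/c14 + c1 + c3 + 1)/r*dLam)

rf, Zf = (sp.lambdify(Y, e, 'numpy') for e in (r, Z))
for y in (0.1, 1.0, 10.0):    # assumed parameter range
    print(y, rf(y), Zf(y))
\end{verbatim}
Tabulating (or plotting) the resulting pairs $(r, Z)$ over the full
range of $Y$ should reproduce the qualitative behaviour shown in
Figure \ref{Zfig}.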
\begin{figure}
\includegraphics[width=\figwidth]{zplot}
\caption{\label{Zfig} Representative plots of $Z(r)$ versus $r$ for
the solutions described in \cite{Aethersph}. The solid and dashed
lines correspond to $c_{123} = \pm 0.1$, respectively; in both
cases, $c_2 = 0$ and $c_{14} = - 0.1$. Note that the choice of
parameters leading to the dashed plot would lead to instability in
the asymptotic region.}
\end{figure}
The first thing we see is that in the asymptotic region, the sign of
$Z(r)$ is determined by the sign of $c_{123}$. To quantify this, we
can obtain an asymptotic expansion of $Z(r)$ as $r \to \infty$, noting
that $r \to \infty$ as $Y \to 0$. Doing this, we find that to
leading order in $M/r$,
\begin{equation}
\label{EAZasymp}
Z(r) = c_{123} \frac{M}{r^3} + \dots
\end{equation}
Thus, the coefficient of the $\psi^2$ term in \eqref{EAnumer} is, to
leading order in the asymptotic region, the same sign as that of the
$(\pder{\psi}{r})^2$
term. We can therefore conclude that for a test function in the
``asymptotic region'' of one of these solutions (i.e., $r \gg M$), the
conditions on the $c_i$'s are the same as those in flat
spacetime.\footnote{ Note that if the leading-order coefficient in
\eqref{EAZasymp} had been different from that of the
$(\fpder{\psi}{r})^2$ term in \eqref{EAnumer}, we would have
obtained a new constraint on the $c_i$ coefficients; the matching
of these two coefficients seems to be coincidental.}
For a realistic spacetime, of course, there will be some ball of
matter in the central region, and the region where the vacuum solution
holds will be precisely the region where $r \gg M$; thus, we can
conclude that for a normal star in Einstein-{\ae}ther theory, the
exterior is stable if and only if \eqref{EAflatstab} holds.
(Henceforth, we will assume that the $c_i$ coefficients have been
chosen with this constraint in mind, unless otherwise specified.)
We also note that as
$r$ approaches $r_\text{min}$, $Z(r)$ diverges negatively. One might
ask whether a test function in this region (in a spherically symmetric
spacetime surrounding a compact object, say) could lead to an
instability. We investigated this question numerically using our
variational principle; however, our results for $(\psi, \mathcal{T}
\psi)$ were positive for all test functions $\psi$ that we
tried. Roughly speaking, the derivative terms in \eqref{EAnumer}
always won out over the effects of the negative $Z(r)$.\footnote{We
have been assuming a ``static {\ae}ther''
throughout this work; however, there also exist solutions in which
the {\ae}ther vector $u^a$ is not globally aligned with the
time-translation vector field \cite{AetherBH}. Thus, even supposing
that such ``extremely compact stars'' are phenomenologically
realistic, and that the exterior solutions for such objects are
unstable, these ``non-static {\ae}ther'' solutions might still be
physically viable.}
This is, of course, far from a definitive proof of the positivity of
\eqref{EAnumer}; and it should be emphasized that the above analysis
has not considered the effects of matter on the stability of such
solutions. Nevertheless, the above results are at least indicative
that the spherically symmetric vacuum solutions of Einstein-{\ae}ther
theory do not possess any serious stability problems.
\subsection{The case of $c_{123} = 0$}
The above analysis
assumed that $c_{123}$ was non-vanishing; however, the theory
originally considered by Jacobson and Mattingly \cite{Aether1} used
a ``Maxwellian'' action, i.e., the kinetic terms for the {\ae}ther
field were of the form $F_{ab} F^{ab}$, corresponding to $c_3 = -c_1$
and $c_2 = 0$. This case is therefore of some interest.
In the preceding analysis, the perturbational equations of motion
\eqref{EAGrreq1} and \eqref{EAveceq1} were obtained without any
assumptions concerning $c_{123}$, as was the preconstraint equation
\eqref{EAFeq}. We can therefore simply set $c_{123}$ to zero in these
equations and use \eqref{EAFeq} to solve for $\pder{\phi}{r}$, as we
did in the general case. After application of the background
equations of motion, there result the equations
\begin{equation}
\label{EAc0eq1}
- \frac{2}{r^2} \left( \frac{2}{c_{14}} + 1 \right) \lambda + e^{2
\Lambda - \Phi} \frac{2}{r} (c_2 - 1) \fpder{\upsilon}{t} = 0
\end{equation}
and
\begin{equation}
\label{EAc0eq2}
- e^{-\Phi} \frac{2}{r} (1 - c_2) \fpder{\lambda}{t} - Z_0 \upsilon =
0,
\end{equation}
where we have defined $Z_0$ in terms of the background fields:
\begin{multline}
Z_0 = \frac{c_{14}}{2} \left( \fpder{\Phi}{r} \right)^2 -
\frac{2}{r} c_2 \left( \fpder{\Lambda}{r} + \fpder{\Phi}{r}
\right) \\ + \frac{2}{r} \fpder{\Lambda}{r} +
\frac{1}{r^2} \left( e^{2 \Lambda} - 1 \right).
\end{multline}
We can then combine these two equations to make a single second-order
equation for $\upsilon$:
\begin{equation}
\label{EAc0upseq}
e^{2 \Lambda - 2 \Phi} \frac{c_{14} (c_2 - 1)^2}{2 + c_{14}}
\fpdert{\upsilon}{t} - Z_0 \upsilon = 0.
\end{equation}
Since there are no radial derivatives to contend with, the solutions to
this equation are all of the form
\begin{equation}
\label{EAc0form}
\upsilon(r,t) = f(r) \exp \left[ \pm \sqrt{Z_0(r) \left(
\frac{2}{c_{14}} + 1 \right) } e^{\Phi - \Lambda}
|c_2 - 1| t \right]
\end{equation}
where $f(r)$ is an arbitrary function of $r$.
\begin{figure}
\includegraphics[width=\figwidth]{z0plot}
\caption{ \label{Z0fig} Plot of $(\frac{2}{c_{14}} + 1 ) Z_0(r)$,
relevant for Einstein-{\ae}ther theory with $c_{123} = 0$. In
this plot, we have chosen $c_{14} = -0.1$ and $c_2 = 0.1$. }
\end{figure}
Stability of these solutions will thus depend on the sign of the
quantity $(\frac{2}{c_{14}} + 1)Z_0(r)$. We can
plot $Z_0(r)$ parametrically, as was done in the general case; the
resulting function, shown in Figure \ref{Z0fig}, falls off rapidly in
the asymptotic region. We can also perform an asymptotic expansion
similar to that done in the general case to find the large-$r$
behaviour of $Z_0(r)$; the result is that
\begin{equation}
Z_0(r) \approx (1-c_2) c_{14} \frac{M^2}{r^4} + \dots
\end{equation}
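Combining this with \eqref{EAc0form}, the combination controlling the
asymptotic behaviour is
\begin{equation}
\left( \frac{2}{c_{14}} + 1 \right) Z_0(r) \approx (2 + c_{14})(1 -
c_2) \frac{M^2}{r^4},
\end{equation}
which is negative (so that the exponent in \eqref{EAc0form} is
imaginary and the perturbations merely oscillate) precisely when $(c_2
- 1)(c_{14} + 2) > 0$.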
We conclude that in the $c_{123} = 0$ case, the spherically symmetric
static solutions of Einstein-{\ae}ther theory are unstable unless
\begin{equation}
\label{EAc0cond}
(c_2 - 1) (c_{14} + 2) > 0,
\end{equation}
i.e., unless either $c_2 > 1 $ and $c_{14} > -2$, or $c_2 < 1$ and
$c_{14} < -2$. Note that this excludes the limit of ``small $c_i$''
often considered in works such as \cite{Aetherwave}. In the
``Maxwellian'' case, this instability is likely related to the
non-boundedness of the Hamiltonian \cite{Clayton}; however, our
analysis also suggests that non-standard kinetic terms can in fact
stabilize the theory (at least in the timelike {\ae}ther case),
contrary to the arguments made in that work.
Finally, we note that even if \eqref{EAc0cond} holds, the
perturbational solutions of the form \eqref{EAc0form} may still be
problematic, as the radial gradients of $\upsilon$ will grow linearly
with time. To quantify what magnitude of gradients would be acceptable,
we note that in the $c_{123} = 0$ case, there is only one dynamical
degree of freedom; thus, $\upsilon$ uniquely determines $\lambda$ and
$\pder{\phi}{r}$. Setting $c_{123} = 0$ in \eqref{EAFeq} and
\eqref{EAGrreq1} allows us to solve these equations for $\lambda$ and
$\pder{\phi}{r}$; in particular,
\begin{multline}
\fpder{\phi}{r} = -\left[ \frac{2}{r^2} c_{14} e^{2 \Lambda} +
\left( \frac{2}{r} - c_{14} \fpder{\Phi}{r} \right) \left(
\frac{2}{r} c_2 - c_{14} \fpder{\Phi}{r} \right) \right] \\
\times \left[ \frac{2}{r^2} c_{14} e^{2 \Lambda} +
\left( \frac{2}{r} - c_{14} \fpder{\Phi}{r} \right)^2 \right]^{-1}
e^{2 \Lambda - \Phi} \fpder{\upsilon}{t},
\end{multline}
which, using $e^{2 \Lambda} \approx 1$ and $\fpder{\Phi}{r} \approx 0$,
simplifies in a nearly-flat spacetime to
\begin{equation}
\fpder{\phi}{r} \approx - \frac{c_{14} + 2 c_2}{c_{14} + 2}
\fpder{\upsilon}{t}.
\end{equation}
Applying this to our solution \eqref{EAc0form}, we find that for
sufficiently large $t$
\begin{equation}
\left| \fpdert{\phi}{r} \right| \approx 2 \sqrt{(c_2 - 1)^3 (c_{14} + 2
c_2)} \frac{M}{r^3} \left| \fpder{\phi}{r} \right| t
\end{equation}
We can estimate typical scales for $\pder{\phi}{r}$ by looking at
other sources, such as planets. The scale of perturbations due to the
Earth's gravitational field (at its surface) is given by $\pder{\phi}{r}
\approx M_\oplus / r_\oplus^2$, where $M_\oplus$ and $r_\oplus$ are
the mass and radius of the Earth, respectively. Thus, the perturbation
to $\partial^2 \phi / \partial r^2$ near the Earth's surface will be
of the order
\begin{equation}
\left| \fpdert{\phi}{r} \right| \approx 2 \sqrt{(c_2 - 1)^3 (c_{14} + 2
c_2)} \frac{M_\odot M_\oplus}{R^3 r_\oplus^2} t
\end{equation}
where $R$ is the radius of Earth's orbit and $M_\odot$ is the mass of
the Sun. This enhancement to $\partial^2 \phi / \partial r^2$ will
lead to an observable change in the tidal effects due to the Sun's
gravity; demanding that these remain small relative to the normal
Newtonian tidal effects, i.e., $\partial^2 \phi / \partial r^2 \ll
\partial^2 \Phi / \partial r^2 \approx 2 M_\odot/R^3$, we then have
\begin{equation}
\sqrt{(c_2 - 1)^3 (c_{14} + 2 c_2)}
\ll \frac{r_\oplus^2}{M_\oplus t} \approx 1.9 \times 10^{-10}
\end{equation}
for $t \approx 5 \times 10^9$ years (the approximate age of the Earth.)
We see, therefore, that the requirement that perturbational tidal
effects remain small severely constrains our choices of $c_{14}$ and $c_2$.
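For concreteness, the final number above follows from a
straightforward unit conversion; a short sketch, using standard values
for the terrestrial and solar-system parameters, is
\begin{verbatim}
G, c = 6.674e-11, 2.998e8     # SI units
M_e  = G * 5.972e24 / c**2    # Earth mass in metres
r_e  = 6.371e6                # Earth radius in metres
t    = 5.0e9 * 3.156e7 * c    # 5 Gyr in metres
print(r_e**2 / (M_e * t))     # ~ 1.9e-10
\end{verbatim}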
\section{T\lowercase{e}V\lowercase{e}S \label{TeVeSsec}}
\subsection{Theory}
TeVeS (short for
``{\bfseries{Te}}nsor-{\bfseries{Ve}}ctor-{\bfseries{S}}calar'') is a
modified gravity theory proposed by Bekenstein \cite{TeVeS} in an
attempt to create a fully covariant theory of Milgrom's Modified
Newtonian Dynamics (MOND). The fields present in this theory consist
of the metric; a vector field $u^a$ which (as in Einstein-{\ae}ther
theory) is constrained by a Lagrange multiplier $Q$ to be unit and
timelike; and two scalar fields, $\alpha$ and $\sigma$. The
Lagrangian four-form is
\begin{equation}
\label{TevesLag}
\form{\mathcal{L}} = \left( \mathcal{L}_g + \mathcal{L}_v +
\mathcal{L}_s + \mathcal{L}_m \right)
\form{\epsilon}
\end{equation}
where $\mathcal{L}_g$ is the usual Einstein-Hilbert action,
\begin{equation}
\label{TevesLg}
\mathcal{L}_g = \frac{1}{16 \pi} R;
\end{equation}
$\mathcal{L}_s$ is the ``scalar part'' of the action,
\begin{equation}
\label{TevesLs}
\mathcal{L}_s = - \frac{1}{2} \sigma^2 (g^{ab} - u^a u^b) \nabla_a
\alpha \nabla_b
\alpha - \frac{1}{4} \ell^{-2} \sigma^4 F(k \sigma^2),
\end{equation}
with $k$ and $\ell$ positive constants of the theory (with ``length
dimensions'' zero and one, respectively), and $F(x)$ a free function;
$\mathcal{L}_v$ is the vector part of the action,
\begin{equation}
\label{TevesLv}
\mathcal{L}_v = - \frac{K}{32 \pi} F_{ab} F^{ab} + Q (u^a u_a + 1),
\end{equation}
with $K$ a positive dimensionless constant and $F_{ab} = \nabla_a u_b -
\nabla_b u_a$; and $\mathcal{L}_m$ the matter action,
non-minimally coupled to the metric:
\begin{equation}
\label{TevesLm}
\mathcal{L}_m \form{\epsilon} = \form{\mathcal{L}}_\text{mat} [A, e^{2 \alpha}
g^{ab} + 2 u^a u^b \sinh (2 \alpha) ]
\end{equation}
where $\form{\mathcal{L}}_\text{mat} [A,g^{ab}]$ would be the
minimally coupled
matter Lagrangian for the matter fields $A$. Note that if we ignore
the scalar and matter portions of the Lagrangian, this Lagrangian is
the same as that for Einstein-{\ae}ther theory \eqref{EAlag}, with
$c_1 = - c_3 = -K/16 \pi$ and $c_2 = c_4 = 0$. In the present work,
we will work exclusively with the ``vacuum'' ($\mathcal{L}_m = 0$) theory.
Taking the variation of \eqref{TevesLag} to obtain the equations of
motion and the symplectic current, we find that
\begin{multline}
\delta \form{\mathcal{L}} = \left( (\mathcal{E}_G)_{ab} \delta g^{ab}
+ (\mathcal{E}_u)_a \delta u^a + \mathcal{E}_\alpha \delta \alpha
\right. \\ \left. +
\mathcal{E}_\sigma \delta \sigma + \mathcal{E}_Q \delta Q +
\nabla_a \theta^a \right) \form{\epsilon},
\end{multline}
where
\begin{subequations}
\begin{multline}
\label{Teveseineq}
(\mathcal{E}_G)_{ab} = \frac{1}{16 \pi} G_{ab} - \frac{1}{2} \sigma^2
\nabla_a \alpha \nabla_b \alpha - Q u_a u_b
\\ + \frac{K}{16 \pi} \left( 2 u_{(a} \nabla^c F_{b)c} - F_{ac} F_b
{}^c + \frac{1}{4} g_{ab} F_{cd} F^{cd} \right)
\\ + \frac{1}{2} g_{ab} \left(
\frac{1}{2} \sigma^2 ( \nabla^c \alpha \nabla_c \alpha - \dot{\alpha}^2
) + \frac{1}{4} \ell^{-2} \sigma^4 F(k \sigma^2) \right),
\end{multline}
\begin{equation}
\label{Tevesveceq}
(\mathcal{E}_u)_a = \sigma^2 \dot{\alpha} \nabla_a \alpha +
\frac{K}{8 \pi} \nabla^b F_{ba} + 2 Q u_a,
\end{equation}
\begin{equation}
\label{Tevesaleq}
\mathcal{E}_\alpha = \nabla_a \left( \sigma^2 (\nabla^a \alpha
- u^a \dot{\alpha} ) \right),
\end{equation}
\begin{multline}
\label{Tevessigeq}
\mathcal{E}_\sigma = -\sigma \bigg[\nabla^a
\alpha \nabla_a \alpha - \dot{\alpha}^2 \\ \left. + \ell^{-2}
\sigma^2 \left( F(k
\sigma^2) + \frac{1}{2} k \sigma^2 F'( k \sigma^2) \right) \right],
\end{multline}
\end{subequations}
and
\begin{multline}
\label{Tevestheta}
\theta^a = \theta^a_\text{Ein} - \sigma^2 (\nabla^a \alpha - u^a
\dot{\alpha} ) \delta \alpha \\ + \frac{K}{8 \pi} \left( F_b {}^a
\delta u^b + F^a {}_b u_c \delta g^{bc} \right).
\end{multline}
In the above, we have defined $\dot{\alpha} \equiv u^a \nabla_a
\alpha$. The remaining equation
$\mathcal{E}_Q = 0$ is identical to that in Einstein-{\ae}ther theory,
\eqref{EAQeq}. For a static solution, in our usual gauge, the
background equations of motion become
\begin{subequations}
\begin{multline}
\label{TevesGtt0}
(\mathcal{E}_G)_{tt} = e^{2 \Phi - 2 \Lambda} \left[ \frac{1}{16
\pi} \left( \frac{2}{r} \fpder{\Lambda}{r} + \frac{1}{r^2} (e^{2
\Lambda} - 1) \right) \right. \\ + \frac{K}{16 \pi}
\left( - \fpdert{\Phi}{r} +
\fpder{\Phi}{r} \left( \fpder{\Lambda}{r} - \frac{2}{r} \right) -
\frac{1}{2} \left( \fpder{\Phi}{r} \right)^2 \right) \\ \left. -
\frac{1}{4} \sigma^2 \left( \fpder{\alpha}{r} \right)^2 -
e^{2 \Lambda} \frac{1}{8 \ell^2} \sigma^4 F(k \sigma^2) \right],
\end{multline}
\begin{multline}
\label{TevesGrr0}
(\mathcal{E}_G)_{rr} = \frac{1}{16 \pi} \left(\frac{2}{r}
\fpder{\Phi}{r} - \frac{1}{r^2} ( e^{2 \Lambda} - 1) \right) +
\frac{K}{16 \pi} \left( \fpder{\Phi}{r} \right)^2 \\ - \frac{1}{4}
\sigma^2 \left( \fpder{\alpha}{r} \right)^2 + e^{2 \Lambda}
\frac{1}{8 \ell^2} \sigma^4 F(k \sigma^2) ,
\end{multline}
\begin{equation}
\label{Tevesalph0}
\mathcal{E}_\alpha = \frac{e^{-\Phi - \Lambda}}{r^2} \fpder{}{r}
\left( \sigma^2 r^2 e^{\Phi - \Lambda} \fpder{\alpha}{r} \right),
\end{equation}
and
\begin{equation}
\label{Tevessig0}
\mathcal{E}_\sigma = - e^{-2 \Lambda} \left( \fpder{\alpha}{r}
\right)^2 + \frac{1}{k\ell^2} y(k \sigma^2),
\end{equation}
\end{subequations}
where we have defined $y(x) = -x F(x) - \frac{1}{2} x^2 F'(x)$ (as in
\cite{TeVeS}) and used the equation for $Q$,
\begin{equation}
\label{TevesQ0}
Q = \frac{K}{16 \pi} e^{-2 \Lambda} \left( - \fpdert{\Phi}{r} +
\fpder{\Phi}{r} \left( \fpder{\Lambda}{r} - \frac{2}{r} \right)
\right),
\end{equation}
to simplify. (As in Einstein-{\ae}ther theory, the background equation
$(\mathcal{E}_u)_a = 0$ is satisfied trivially by a static {\ae}ther.)
The symplectic form for the theory can now be calculated from
\eqref{Tevestheta}. The result will be essentially the same as that
in Einstein-{\ae}ther theory (with the appropriate values of the
$c_i$'s), with added terms stemming from the variations of
$\mathcal{L}_s$:
\begin{equation}
\label{Tevesomega}
\omega^a = \omega^a_\text{Ein} + \omega^a_\text{vec} + \omega^a_\text{s}
\end{equation}
where $\omega^a_\text{Ein}$ is given by \eqref{omegaein},
$\omega^a_\text{vec}$ is given by \eqref{EAomegavec} with $c_1 = -c_3
= - K/16 \pi$ and $c_2 = c_4 = 0$, and
\begin{multline}
\label{Tevesomegas}
\omega^a_\text{s} = - \sigma^2 \left[ (\nabla^a \alpha - u^a
\dot{\alpha} ) \left( 2 \frac{\delta_1 \sigma}{\sigma} - \frac{1}{2}
g_{bc} \delta_1 g^{bc} \right) \right. \\ + \delta_1 g^{ab}
\nabla_b \alpha + (g^{ab} - u^a u^b) \nabla_b \delta_1 \alpha
\\ - 2
u^{(a} \delta_1 u^{b)} \nabla_b \alpha \bigg] \delta_2 \alpha.
\end{multline}
\subsection{Applying the variational formalism}
The next step is to write out the $t$-component of $\omega^a$ in terms
of the perturbational variables. We will take our metric
perturbations to have the usual form, and our vector perturbation to
be of the same form as was used for Einstein-{\ae}ther theory
\eqref{EAupsdef}. For the two scalar fields, we define $\delta
\alpha \equiv \beta$ and $\delta \sigma \equiv \tau$. Calculating
$\omega^t_\text{s}$ in terms of these perturbational variables, and
using the results of \eqref{EAomegat}, we find that
\begin{multline}
\label{Tevesomegat}
\omega^t = e^{-2 \Phi} \left\{ 2 \sigma^2 \fpder{\beta_1}{t} \beta_2
\right. \\
\left. + e^{\Phi} \upsilon_1 \left[
\frac{K}{8 \pi} \left(\fpder{\Phi}{r} \lambda_2 - \fpder{\phi_2}{r}
- e^{2 \Lambda - \Phi} \fpder{\upsilon_2}{t} \right) + \sigma^2
\fpder{\alpha}{r} \beta_2 \right]
\right\} \\ - [1 \leftrightarrow 2].
\end{multline}
We now turn to the question of the constraints. The tensor $C_{ab}$
is again given by $C_{ab} = (\mathcal{E}_G)_{ab} + u_a
(\mathcal{E}_u)_b$, and thus to first order we have $\delta C_{rt} =
\delta (\mathcal{E}_G)_{rt} + \delta u_r (\mathcal{E}_u)_t$ (since $u^r = 0$
in the background). Calculating $\delta C_{rt}$ in terms of our
perturbational variables, we find that the preconstraint equation is
\begin{multline}
\label{TevesF}
F = r^2 e^{ \Phi - \Lambda} \left[ \sigma^2 \fpder{\alpha}{r} \beta
- \frac{1}{8 \pi} \left( \frac{2}{r} + K \fpder{\Phi}{r} \right)
\lambda \right. \\ \left. + \frac{K}{8 \pi} e^{2 \Lambda - \Phi}
\fpder{\upsilon}{t} + \frac{K}{8 \pi} \fpder{\phi}{r} \right] = 0.
\end{multline}
In principle, we could now use this equation, together with the
equation
\begin{multline}
\label{TevesCrr}
0 = \delta C_{rr} = e^{2 \Lambda} \left( \frac{\sigma^4}{2 \ell^2} F(k
\sigma^2) - \frac{1}{4 \pi r^2} \right) \lambda \\+ \sigma \left(
\frac{1}{k \ell^2} e^{2 \Lambda} y(k \sigma^2) -
\left(\fpder{\alpha}{r} \right)^2 \right) \tau + e^{2 \Lambda - \Phi}
\frac{K}{8 \pi} \fpder{\Phi}{r} \fpder{\upsilon}{t} \\- \sigma^2
\fpder{\alpha}{r} \fpder{\beta}{r} + \frac{1}{8 \pi} \left(
\frac{2}{r} + K \fpder{\Phi}{r} \right) \fpder{\phi}{r}
\end{multline}
to obtain equations for $\lambda$ and $\pder{\phi}{r}$ in terms of the
``matter'' variables $\upsilon$, $\beta$, and $\tau$. However, it is
simpler to pursue a similar tactic to the one we used in the reduction
of Einstein-{\ae}ther theory. To wit, the $r$-component of $(\delta
\mathcal{E}_u)_a$ is
\begin{multline}
(\delta \mathcal{E}_u)_r = \left( 2 e^{2 \Lambda} Q + \sigma^2
\left(\fpder{\alpha}{r} \right)^2 \right) \upsilon + e^{- \Phi}
\sigma^2 \fpder{\alpha}{r} \fpder{\beta}{t} \\ + e^{-\Phi} \frac{K}{8
\pi} \left( \fpder{\Phi}{r} \fpder{\lambda}{t} - e^{2 \Lambda -
\Phi} \fpdert{\upsilon}{t} - \fpdertm{\phi} \right) = 0.
\end{multline}
We can use \eqref{TevesF} to simplify this equation; the result
is
\begin{multline}
\label{Teveschievol}
e^\Phi \left( 2 e^{2 \Lambda} Q + \sigma^2 \left(\fpder{\alpha}{r}
\right)^2 \right) \upsilon \\ = \fpder{}{t} \left( \frac{1}{4 \pi r}
\lambda - 2 \sigma^2 \fpder{\alpha}{r} \beta \right)
\end{multline}
We then define the new variable $\chi$ as
\begin{equation}
\label{Teveschidef}
\chi = \frac{1}{4 \pi r}
\lambda - 2 \sigma^2 \fpder{\alpha}{r} \beta.
\end{equation}
The evolution equation for $\beta$, meanwhile, is given by the
equation $\delta \mathcal{E}_\alpha = 0$; in terms of the
perturbational fields, it is
\begin{multline}
\label{Tevesbetaeq}
- 2 e^{2 \Lambda -2 \Phi}
\fpdert{\beta}{t} + \fpdert{\beta}{r} + \left(\frac{2}{\sigma}
\fpder{\sigma}{r} + \fpder{\Phi}{r} - \fpder{\Lambda}{r} +
\frac{2}{r} \right) \fpder{\beta}{r} \\ + \fpder{\alpha}{r} \left(
\fpder{\phi}{r} - \fpder{\lambda}{r} \right) + \frac{2}{\sigma}
\fpder{\alpha}{r} \fpder{\tau}{r} \\ - \frac{1}{\sigma^2}
\fpder{\alpha}{r} \fpder{\sigma}{r} \tau - e^{2 \Lambda - \Phi}
\fpder{\alpha}{r} \fpder{\upsilon}{t} = 0.
\end{multline}
Finally, the field $\tau$, being the perturbation of the auxiliary field
$\sigma$, can be solved for algebraically in the equation $\delta
\mathcal{E}_\sigma = 0$:
\begin{equation}
\label{Tevestau}
\ell^{-2} \sigma y'(k \sigma^2) \tau + e^{-2 \Lambda} \left( \left(
\fpder{\alpha}{r} \right)^2 \lambda - \fpder{\alpha}{r}
\fpder{\beta}{r} \right) = 0.
\end{equation}
We can thus write the evolution
equations solely in terms of $\beta$ and $\chi$ by the following procedure: first, we use the
preconstraint equation \eqref{TevesF} to eliminate the combination
$\pder{\phi}{r} + e^{2 \Lambda - \Phi} \pder{\upsilon}{t}$ in favour
of $\lambda$ and $\beta$; next, we use \eqref{Tevestau}
to eliminate $\tau$ and its spatial derivatives; and finally, we use
the definition of $\chi$ \eqref{Teveschidef} to eliminate $\lambda$
and its derivatives in favour of $\beta$ and $\chi$. Applying our
procedure to \eqref{TevesCrr} results in an equation depending on
$\pder{\upsilon}{t}$ as well as $\beta$, $\pder{\beta}{r}$, and
$\chi$; combining this equation with \eqref{Teveschievol} then yields
a second-order evolution equation for $\chi$ of the form
\begin{equation}
\label{Tevesreducedchi}
\fpdert{\chi}{t} = \mathcal{W} \chi + \mathcal{V}_3 \fpder{\beta}{r}
+ \mathcal{V}_4 \beta,
\end{equation}
where the coefficients $\mathcal{V}_i$ and $\mathcal{W}$ are dependent on
the background fields. Similarly, applying this procedure to
\eqref{Tevesbetaeq}, and using \eqref{TevesCrr} to eliminate a
resulting $\pder{\upsilon}{t}$ term, we obtain a second-order equation
for $\beta$:
\begin{equation}
\label{Tevesreducedbeta}
\fpdert{\beta}{t} = \mathcal{U}_1 \fpdert{\beta}{r} +
\mathcal{U}_2 \fpder{\beta}{r} + \mathcal{U}_3 \beta + \mathcal{V}_1
\fpder{\chi}{r} + \mathcal{V}_2 \chi,
\end{equation}
where, again, the $\mathcal{U}_i$'s and $\mathcal{V}_i$'s depend on
the background fields. Note that \eqref{Tevesreducedchi} has no
dependence on the spatial derivatives of $\chi$, only upon $\chi$
itself.
We also need to reduce the symplectic form and express it in terms of
$\beta$ and $\chi$. Applying the preconstraint equation \eqref{TevesF}
to \eqref{Tevesomegat}, and using the definition of $\chi$
\eqref{Teveschidef} along with \eqref{Teveschievol}, we find that
\begin{multline}
\label{Tevessympform}
\Omega = 4 \pi \int \mathrm{d} r \, r^2 e^{\Lambda - \Phi} \left[
\frac{1}{H} \fpder{\chi_1}{t} \chi_2 + 2 \sigma^2
\fpder{\beta_1}{t} \beta_2 \right] \\ - [1 \leftrightarrow 2],
\end{multline}
where
\begin{equation}
H = - \left( 2 e^{2 \Lambda} Q + \sigma^2 \left(\fpder{\alpha}{r}
\right)^2 \right).
\end{equation}
We can also apply the background equations of motion, along with
equation \eqref{TevesQ0}, to this quantity to obtain
\begin{equation}
\label{TevesH2}
H = \frac{1}{4 \pi r} \left(\fpder{\Lambda}{r} + \fpder{\Phi}{r}
\right) - 2 \sigma^2 \left(\fpder{\alpha}{r} \right)^2.
\end{equation}
For a variational principle to exist for this theory, the
form $\form{W}_{\alpha \beta}$ defined by \eqref{Tevessympform} must
be positive definite. In our case, this means that the coefficients
of both $(\pder{\chi_1}{t}) \chi_2$ and $(\pder{\beta_1}{t}) \beta_2$
in the integrand of \eqref{Tevessympform} must be positive for the
background solution about which we are perturbing. While the
coefficient for the latter term is obviously always positive, the
situation for the former coefficient (namely, $H$) is not so clear.
To address this issue, we need to know the properties of the
spherically symmetric static background solutions of TeVeS. These
solutions (with a ``static {\ae}ther'') are described in
\cite{Tevessph}. In our gauge, they are most simply described in
terms of a parameter $z$:\footnote{This $z$ is the ``radial
coordinate'' in isotropic spherical coordinates, as used in
\cite{Tevessph}.}
\begin{equation}
\label{TevesBGr}
r(z) = \frac{z^2 - z_c^2}{z} \left( \frac{
z - z_c}{z + z_c} \right)^{- z_g / 4
z_c}
\end{equation}
\begin{equation}
\label{TevesBGPhi}
e^{\Phi(z)} = \left( \frac{z - z_c}{z +
z_c} \right)^{z_g/4 z_c}
\end{equation}
\begin{equation}
\label{TevesBGLambda}
e^{\Lambda(z)} = \frac{z^2 - z_c^2}{(z^2 + z_c^2) - \frac{1}{2} z
z_g}
\end{equation}
\begin{equation}
\label{TevesBGalph}
\alpha(z) = \alpha_c + \frac{k m_s}{8 \pi z_c} \ln \left( \frac{ z -
z_c}{z + z_c} \right)
\end{equation}
where $z_c$, $z_g$, $m_s$, and $\alpha_c$ are constants of integration.
The first three of these are related by
\begin{equation}
\label{Tevesrcdef}
z_c = \frac{z_g}{4} \sqrt{ 1 + \frac{k}{\pi} \left( \frac{ m_s}{z_g}
\right)^2 - \frac{K}{2} }
\end{equation}
while $\alpha_c$ ``sets the value of $\alpha$ at $\infty$.'' The
constant $m_s$, which can be thought of as the ``scalar charge'' of
the star, is defined by an integral over the central mass
distribution \cite{TeVeS}; for a perfect fluid with $\rho + 3 P \geq
0$, $m_s$ is non-negative. Finally, the constant $z_g$ is
also defined in terms of an integral over the central matter
distribution \cite{TeVeS}.
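A minimal Python sketch of this parametrization (with illustrative values $z_g = m_s = 1$, $k = K = 10^{-2}$, and $\alpha_c = 0$; the first three match the choices of Figure \ref{Hfigure}, the last is an extra assumption) makes the asymptotic behaviour used below explicit, namely $r \approx z$ and $e^{\Phi}, e^{\Lambda} \to 1$ at large $z$:
\begin{verbatim}
import numpy as np

# Illustrative constants: z_g = m_s = 1, k = K = 1e-2, alpha_c = 0.
k, K, m_s, z_g, alpha_c = 1e-2, 1e-2, 1.0, 1.0, 0.0
z_c = 0.25 * z_g * np.sqrt(1 + (k / np.pi) * (m_s / z_g)**2 - K / 2)

z = z_c * np.logspace(1, 4, 4)
r     = (z**2 - z_c**2) / z * ((z - z_c) / (z + z_c))**(-z_g / (4 * z_c))
e_Phi = ((z - z_c) / (z + z_c))**(z_g / (4 * z_c))
e_Lam = (z**2 - z_c**2) / (z**2 + z_c**2 - 0.5 * z * z_g)
alpha = alpha_c + k * m_s / (8 * np.pi * z_c) * np.log((z - z_c) / (z + z_c))

print(r / z, e_Phi, e_Lam)   # -> 1 at large z; alpha -> alpha_c
\end{verbatim}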
\begin{figure}
\includegraphics[width=\figwidth]{hplot}
\caption{\label{Hfigure} Plot of $H(r)$ versus $r$ for TeVeS. In
this plot, $k$ and $K$ are both chosen to be $10^{-2}$, and $m_s =
z_g = 1$.}
\end{figure}
Plotting $H(r)$ parametrically (Figure \ref{Hfigure}), we see that this
function is strictly
negative. In fact, in the $z \to \infty$ limit, we have $r/z = 1 +
\mathcal{O}(z_c/z)$, and so we can take $r \approx z$ to a good
approximation. Calculating $H(r)$ in terms of $\Phi(r)$ and
$\Lambda(r)$, we find that as $r \to \infty$,
\begin{equation}
\label{TevesHapprox}
H(r) \approx - \frac{1}{16 \pi} \left( \frac{k}{\pi} m_s^2 +
\frac{K}{2} z_g^2 \right) \frac{1}{r^4}
\end{equation}
which is negative for any positive choice of $k$ and $K$. Therefore,
it is not possible to straightforwardly apply the variational
principle to TeVeS, since the quadratic form defined by
$\form{W}_{\alpha \beta}$ is indefinite and thus cannot be used as an
inner product.\footnote{ Note that this also implies that the
``perturbational Hamiltonian'' of TeVeS, as defined in equation (54)
of \cite{GVP}, has an indefinite kinetic term.}
\subsection{WKB analysis}
While we cannot derive a variational principle with TeVeS, the fact
that we have been able to ``reduce'' the equations of motion to an
unconstrained form still allows us to analyse the stability of its
spherically symmetric solutions. Let us consider a WKB ansatz, of the
form
\begin{equation}
\label{Tevesansatz}
\begin{bmatrix} \beta(r) \\ \chi(r) \end{bmatrix} = e^{i(\omega(r) t +
\kappa r)}
\begin{bmatrix} f_\beta(r) \\ f_\chi(r) \end{bmatrix}
\end{equation}
with $\kappa$ very large compared to the scale of variation of the
background functions $\mathcal{U}_i$, $\mathcal{V}_i$, and
$\mathcal{W}$. We will further choose the the functions $f_i(r)$ and
$\omega(r)$ are chosen to be ``slowly varying'' relative to the scale
defined by $\kappa$, i.e.
\begin{align}
\label{Tevesslowvary}
\frac{1}{f_i(r)} \fpder{f_i}{r} &\ll \kappa, &
\frac{1}{\omega(r)} \fpder{\omega}{r} &\ll \kappa.
\end{align}
Under this assumption, we will then have
\begin{equation}
\fpder{}{r} \left( e^{i(\omega(r) t + \kappa r)} f_i(r) \right)
\approx i \kappa e^{i(\omega(r) t + \kappa r)} f_i(r).
\end{equation}
Now let us apply the time-evolution operator $\mathcal{T}$ implicitly
defined by \eqref{Tevesreducedbeta} and \eqref{Tevesreducedchi} to our
ansatz \eqref{Tevesansatz}. We see that for sufficiently large
$\kappa$, the highest-derivative terms will dominate the
lower-derivative terms. Thus, to a good approximation we will have
\begin{equation}
\mathcal{T} \begin{bmatrix} \beta \\ \chi \end{bmatrix} \approx
\begin{bmatrix} -\kappa^2 \mathcal{U}_1(r) & i \kappa
\mathcal{V}_1(r) \\ i \kappa \mathcal{V}_3 (r) &
\mathcal{W}(r) \end{bmatrix}
\begin{bmatrix} f_\beta(r) \\ f_\chi(r) \end{bmatrix} e^{i(\omega(r) t +
\kappa r)}.
\end{equation}
Then, in the limit of large $\kappa$, our ansatz \eqref{Tevesansatz}
will be an approximate eigenvector of $\mathcal{T}$ if there exist
functions $f_\beta(r)$ and $f_\chi (r)$ such that
\begin{equation}
\label{Teveskmatrix}
- \omega^2(r) \begin{bmatrix} f_\beta(r) \\ f_\chi(r) \end{bmatrix} =
\begin{bmatrix} -\kappa^2 \mathcal{U}_1(r) & i \kappa
\mathcal{V}_1(r) \\ i \kappa \mathcal{V}_3 (r) &
\mathcal{W}(r) \end{bmatrix}
\begin{bmatrix} f_\beta(r) \\ f_\chi(r) \end{bmatrix}.
\end{equation}
In other words, in the limit of large $\kappa$, the problem of finding
modes of $\mathcal{T}$ is a simple two-dimensional eigenvalue problem
where the eigenvalues are functions of $r$. In this limit, the
eigenvalues of this matrix are (to leading order in $\kappa$)
\begin{equation}
\omega^2(r) \approx \left\{ \kappa^2 \mathcal{U}_1(r), -
\mathcal{W}(r) + \frac{ \mathcal{V}_1 (r) \mathcal{V}_3
(r)}{\mathcal{U}_1 (r)} \right\}.
\end{equation}
It is not difficult to verify that for sufficiently large $\kappa$,
our assumptions for the ansatz \eqref{Tevesslowvary} are satisfied.
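The leading-order eigenvalues quoted above follow directly from the trace and determinant of the matrix in \eqref{Teveskmatrix}; as an illustrative check (not part of the analysis itself), they can be recovered symbolically:
\begin{verbatim}
import sympy as sp

kappa, U1, V1, V3, W = sp.symbols('kappa U_1 V_1 V_3 W', positive=True)
M = sp.Matrix([[-kappa**2 * U1, sp.I * kappa * V1],
               [ sp.I * kappa * V3, W]])

# For large kappa the roots of lam^2 - tr(M) lam + det(M) = 0 behave as
# lam_1 ~ tr(M) and lam_2 ~ det(M)/tr(M):
lam1 = sp.limit(M.trace() / kappa**2, kappa, sp.oo) * kappa**2
lam2 = sp.limit(M.det() / M.trace(), kappa, sp.oo)
print(sp.simplify(lam1), sp.simplify(lam2))
# -> -kappa**2*U_1 and W - V_1*V_3/U_1, so that omega^2 is kappa^2 U_1
#    or V_1 V_3/U_1 - W, as in the text.
\end{verbatim}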
It remains to write out the functions $\mathcal{U}_1(r)$,
$\mathcal{V}_1(r)$, $\mathcal{V}_3(r)$, and $\mathcal{W}(r)$ in terms
of the background functions. These can be shown to be:
\begin{equation}
\mathcal{U}_1 = \frac{1}{2} e^{2 \Phi - 2 \Lambda} \left( 1 +
e^{-2 \Lambda} \frac{2 \ell^2 (\fpder{\alpha}{r})^2}{ \sigma^2 y'(k
\sigma^2)}
\right) ,
\end{equation}
\begin{equation}
\mathcal{V}_1 = -\frac{r}{4} e^{2 \Phi - 2 \Lambda}
\fpder{\alpha}{r} \left( 1 + e^{-2 \Lambda}
\frac{2 \ell^2 (\fpder{\alpha}{r})^2}{ \sigma^2 y'(k \sigma^2)}
\right) ,
\end{equation}
\begin{multline}
\mathcal{V}_3 = 16 \pi r e^{2 \Phi - 2 \Lambda} \sigma^2
\fpder{\alpha}{r} \\ \times \left( 8 \pi \sigma^2 \left( \fpder{\alpha}{r}
\right) - \frac{1}{r} \left( \fpder{\Lambda}{r} + \fpder{\Phi}{r}
\right) \right),
\end{multline}
and
\begin{multline}
\mathcal{W} = r^2 e^{2 \Phi - 2 \Lambda} \left( \frac{2}{K} - 1 - 8 \pi
\sigma^2 \left(\fpder{\alpha}{r} \right)^2 \right) \\ \times \left( 8 \pi
\sigma^2 \left(\fpder{\alpha}{r} \right)^2 - \frac{1}{r} \left(
\fpder{\Lambda}{r} + \fpder{\Phi}{r} \right) \right) .
\end{multline}
This implies that the eigenvalues of the matrix in
\eqref{Teveskmatrix} are
\begin{equation}
\label{Tevesomega1}
\omega^2(r) \approx \frac{\kappa^2}{2} e^{2 \Phi - 2 \Lambda} \left(
1 + e^{-2 \Lambda} \frac{2 \ell^2 (\fpder{\alpha}{r})^2}{ \sigma^2
y'(k \sigma^2)}
\right)
\end{equation}
and
\begin{multline}
\label{Tevesomega2}
\omega^2(r) \approx
e^{2 \Phi - 2 \Lambda} \left( \frac{2}{K} - 1 \right) \\
\times \left( \frac{1}{r} \left( \fpder{\Lambda}{r} +
\fpder{\Phi}{r} \right) -
8 \pi \sigma^2 \left( \fpder{\alpha}{r} \right)^2
\right).
\end{multline}
We can see that this first eigenvalue \eqref{Tevesomega1} is always
positive as long as $y'(x) > 0$; indeed, the choice of $y(x)$ made in
\cite{TeVeS} does satisfy this inequality. The second eigenvalue
\eqref{Tevesomega2}, however, is just
\begin{equation}
\omega^2(r) = 4 \pi e^{2 \Phi - 2 \Lambda} \left(\frac{2}{K}
- 1 \right) H(r).
\end{equation}
Since $H(r)$ is always negative, we conclude that this second mode is
unstable for $0 < K < 2$. Further, for a spherically symmetric
solution outside a Newtonian star, the approximation
\eqref{TevesHapprox} is valid; thus, to lowest non-vanishing order in
$r^{-1}$, we have
\begin{equation}
\omega^2(r) \approx - \frac{1}{4} \left( \frac{2}{K} - 1 \right) \left(
\frac{k}{\pi} m_s^2+ \frac{K}{2} z_g^2 \right) \frac{1}{r^4}.
\end{equation}
\subsection{Discussion}
The above result for $\omega^2(r)$ implies that for stability of
Newtonian solutions, we must have $K > 2$, ruling out the range of
parameters originally considered
in \cite{TeVeS}. To estimate the time scale of the instability when
$K < 2$, we first note that unless $k$ and $K$ are much larger than
the ratio of the star's radius $R$ to its Schwarzschild radius $m_g$, the
parameters $m_g$, $m_s$, and $z_g$ are all approximately equal,
differing by terms of $\mathcal{O}(m_g/R)$ \cite{TeVeS}. In
particular, if $k$ and $K$ are of the same order and $K \not \approx 2$,
we find that the timescale of this instability will be on the order of
$10^6$ seconds---approximately two weeks---for points near the surface
of the Sun.
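This estimate can be reproduced with a short numerical sketch, assuming for illustration $k = K = 10^{-2}$ (the values used in Figure \ref{Hfigure}) and $m_s \approx z_g \approx m_g$ for the Sun:
\begin{verbatim}
import numpy as np

G, c = 6.674e-11, 2.998e8
k = K = 1e-2                       # illustrative choice, k ~ K, K not ~ 2
m_g = G * 1.989e30 / c**2          # geometrized solar mass, ~1.48e3 m
r = 6.957e8                        # solar radius in metres

omega2 = 0.25 * (2 / K - 1) * (k / np.pi + K / 2) * m_g**2 / r**4
print(1 / (c * np.sqrt(omega2)))   # ~2e6 s: of order 10^6 s, a few weeks
\end{verbatim}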
Note also that the effective gravitational potential for
nonrelativistic motion is
\begin{equation}
\Phi = \Xi \Phi_N + \alpha,
\end{equation}
where
\begin{equation}
\Xi = e^{-2 \alpha_c} (1 - K/2)^{-1},
\end{equation}
$\Phi_N$ is the Newtonian gravitational potential, and $\alpha_c$ is
the asymptotic value of $\alpha$, determined by the cosmological
boundary conditions \cite{BekReview}. Expanding the asymptotic
behaviour of $\Phi_N$ and $\alpha$ about infinity, we find that
\begin{equation}
\label{Teveseffpot}
\Phi \approx \alpha_c - e^{-2 \alpha_c} \left( \frac{1}{1 - K/2} +
\frac{k}{4 \pi} \right) \frac{m_g}{r}
\end{equation}
where in the weak-field limit, we have
\begin{equation}
m_g \approx 4 \pi \int \mathrm{d} r \, r^2 \rho
\end{equation}
and $m_s \approx e^{-2 \alpha_c} m_g$. The coefficient in parentheses in
\eqref{Teveseffpot} can be thought of as renormalizing the gravitational
constant $G$ by some overall factor; the first term in parentheses
comes from the usual perturbations to the ``physical metric''
component $\tilde{g}_{tt}$, while the second term comes from variation
of the scalar field $\alpha$. However, if $K > 2$, this would
make the metric contribution to the ``effective gravitational
constant'' negative. This could be remedied by choosing $k > 8 \pi /
(K - 2)$. However, it is unclear whether the theory, in this
parameter regime, would still be experimentally viable; for example,
Big Bang nucleosynthesis constraints require that $k < 0.75$, and
the CMB power spectrum also constrains the viable regions
of parameter space \cite{Skor}.\footnote{We
also note that if the gravitational effects of the scalar
are dominant over those of the tensor, the solutions of TeVeS would
greatly resemble those of the so-called ``stratified theories'' with
time-orthogonal space slices \cite{Will}. As these are ruled out
experimentally via geophysical experiments, we could not make $k$ or
$K$ too large without running afoul of these experimental
constraints.}
We also note that the eigenvector corresponding to the unstable
mode of $\mathcal{T}$ will satisfy (to leading order)
\begin{equation}
i \kappa f_\beta + \frac{\mathcal{V}_1}{\mathcal{U}_1} f_\chi
\approx 0
\end{equation}
or, in the limit of large $\kappa$, $f_\beta \approx 0$. In other
words, this unstable mode should manifest itself in growth of the
radial component of the vector field $u^a$ rather than growth of the
scalar $\alpha$ (cf.\ \eqref{Teveschievol}). An instability of the
vector field was also found in \cite{TeVeScosmo1} by Dodelson and
Liguori. However, it seems unlikely that this is the same instability
for three reasons. First, the instability found in \cite{TeVeScosmo1}
was found in a cosmological context, not a Newtonian-gravity context;
since the cosmological solutions of TeVeS are in a very real sense
separate from the Newtonian solutions (existing on two different
branches of the function $y(x)$), it is difficult to draw a direct
correspondence between the stability properties of these two types of
solutions. Second, the instability found in \cite{TeVeScosmo1}
manifests itself only in the limit of a matter-dominated Universe;
the instability we have found exists \textsl{in vacuo}. Third,
Dodelson and Liguori's instability requires $K$ to be sufficiently
small relative to $k$, while our instability is present for all $k >
0$ and $0 < K < 2$.
Finally, it is perhaps notable that the vector field $u^a$ in TeVeS has
``Maxwellian'' kinetic terms, which we found in Section
\ref{aethersec} to be unstable in the context of Einstein-{\ae}ther
theory. It is possible, then, that this instability could be cured
via a more general kinetic term for the vector field.
|
1,116,691,499,871 | arxiv | \section{Introduction}
Absorption-driven X-ray variability detected in active galactic nuclei (AGNs) has been interpreted as the passage of absorbing clouds, occulting the central X-ray source (e.g., \cite{Risaliti2011,2014MNRAS.442.2116T}). Absorption features are expected in the ultraviolet and X-ray spectra when a large cloud or a group of small clouds crosses our line of sight \citep{2009ApJ...695..781B,2012MNRAS.424.2255W}. The absorbing clouds seem to have properties (locations and scales) similar to those of the emitting clouds of the broad-line emission region (BLR), so the ultraviolet and X-ray absorbing clouds may be the clouds known as the BLR clouds.
Strong absorption-driven variability is expected when low ionization absorbers with neutral cores are eclipsing the central X-ray source \citep{Risaliti2011,2009NewAR..53..140G}. In this case, the iron absorption lines are physically associated with the absorber \citep{Risaliti2011}.
The BLR, with a size between $\sim$10$^{-4}$ and 0.1 pc \citep{2019A&A...628A..26P}, is illuminated by the photo-ionizing continuum radiation of the AGN, which it reprocesses into
emission lines \citep{2009NewAR..53..140G}.
The cloud geometry is unknown, but it is expected to be irregular. The cloud sizes, determined by different methods, are of the order of $10^{12}-10^{14}$ cm \citep{Risaliti2011,2019A&A...628A..26P}. It has been proposed that the BLR gas may be moving on inflowing or outflowing elliptical trajectories \citep{2018ApJ...866...75W}. \cite{2015MNRAS.446.1848K} showed that the BLR clouds may be distributed in a disc-like configuration in AGNs. There is also a hypothesis that the BLR is formed by dusty clouds transferred to regions above the plane of the accretion disk by radiation pressure \citep{2011A&A...525L...8C}.
The column density $N_H$ and the covering factor $C_F$ of the clouds vary proportionally during the occultation, so that an increment of these parameters indicates the occurrence of an occultation of the X-ray source by a cloud (or clouds), which is absorbing/reflecting radiation \citep{2014MNRAS.442.2116T}. Therefore, the absorption-driven X-ray variability of the source is interpreted as the change of the absorbing column density along the line of sight due to occultation by clouds in the BLR, with about the same size as the X-ray source \citep{Risaliti2009}. It is acceptable to think that the coverage time of the source depends on the cloud size and its velocity. Occultation events in AGNs together with the geometrical and kinematical parameters of eclipsing clouds have been studied before (e.g., \cite{Risaliti2009,Risaliti2011,2014MNRAS.442.2116T}).
BLR models describe the geometry and physical conditions in the line emitting region of AGNs \citep{2017ApJ...847...56E,2018ApJ...866...75W,2020MNRAS.492.5540M}. The broad emission lines may originate in cool clouds ($T\sim 10^4-10^5$ K, e.g. \cite{1990ApJ...352..423M,1996MNRAS.283.1322K}). The cool clouds are considered magnetically confined and in thermal equilibrium, in order to explain their long survival times \citep{1997MNRAS.284..717K}.
Considering the standard AGN model, it is thought that the supermassive black hole (SMBH) is surrounded by an accretion disk and a dusty torus, and that the BLR clouds move in Keplerian orbits inside the dusty torus \citep{Goad2012,Muller2020}, while NLR (narrow-line region) clouds are external to the dusty torus \citep{2015ARA&A..53..365N}. In addition, it is expected we only observe NLRs towards Seyfert 2 galaxies as the torus obscures the BLRs. On the other hand, both NLRs and BLRs would be observed toward Seyfert 1 galaxies as the system is more face-on relative to the observer.
The X-ray variability of the Seyfert 1 galaxies considered in our study was examined by \cite{1999ApJ...524..667T}, who proposed that differences in the SMBH mass and the accretion rate could explain it, although differences in the physical conditions and geometry of the circumnuclear gas can also be invoked.\\
The present study aims to identify the presence of cloud occultations of the central X-ray source in six AGNs (see Table \ref{table1}) by using the hardness-ratio (HR) light curves, as well as to characterize the physical properties of the clouds eclipsing the central X-ray source. For our study, we selected five Seyfert 1 galaxies since it is more likely to observe BLRs towards this type of galaxies.
We have also included one Seyfert 2 galaxy (NGC 7314) in our sample to find possible differences in the derived parameters of the Seyfert 1 galaxies with those of the Seyfert 2 galaxy.
Here, we study the physical properties of BLR clouds following the methods used by \cite{2007ApJ...659L.111R,Risaliti2011}. As far as we are aware, these methods have not been used before to study BLR clouds of our sample of galaxies, except for Mrk 766 (included in our study to check part of our results and for comparison purposes).
\cite{2014MNRAS.439.1403M} studied obscuring clouds in NGC 3783 and NGC 3227, proposing that they are located in the dusty torus, but the method used by \cite{2014MNRAS.439.1403M} to constrain obscuring cloud locations is different from those used by \cite{2007ApJ...659L.111R,Risaliti2011}.
The objects presented in this contribution are only a small sample, which will be larger in future research. Nevertheless, we suggest the results from this work provide important new insight into the AGN obscuring phenomenon. In Section \ref{Observations}, we describe the observations and data reduction. In Section \ref{Analysis}, we present the results of temporal and spectral analysis of the X-ray data. In Section \ref{Discussion}, we discuss the relation found between the equivalent width of the Fe K$\alpha$ line at 6.4 keV and the mass of the supermassive black hole, as well as the physical properties of the eclipsing clouds. Finally, in Section \ref{Conclusion} we report the conclusions of the present study.
\section{Observations and data reduction}\label{Observations}
The XMM-Newton space mission has made many observations of Seyfert galaxies, and a large database is available through the XMM-Newton Science Archive\footnote{http://nxsa.esac.esa.int/nxsa-web/\#search}. We selected a sample of six galaxies from this archive with long observation times of at least $\sim$40 ks, which is necessary to see changes in the HR light curves of the galaxies and thus identify possible occultations of the central X-ray source (see below).
These galaxies are also selected for our study because the masses of their supermassive black holes are known and span the range $\sim$(1-80)$\times$10$^6$ M$_{\odot}$ \citep{Christopher2002,Onken2003,Giacche2014,Piotrovich2015,Emma2016}, which will allow us to search for possible relations between the SMBH mass and the derived parameters. This is a pilot project which will be extended to other galaxies with high redshift $z$ in forthcoming research.
The selected data were observed with the EPIC instrument. The Seyfert type, distance, observation ID, and the exposure time are given in Table \ref{table1}, for each galaxy. The \texttt{epproc} task of the Science Analysis Software\footnote{See \url{https://www.cosmos.esa.int/web/xmm-newton/sas}} (SAS, version 19.1.0) was used to run the default pipeline processing, thus obtaining the calibrated event lists. We then used the \texttt{evselect} task of SAS for filtering the data, as well as for extracting light curves and spectra toward the central regions of the galaxies. Fig. \ref{Sample_galaxies} shows the X-ray emission maps of the six galaxies. We selected a region free from contamination sources for the background of each galaxy. We used the same radius size of the source for the background regions.\\
We have studied the X-ray emission arising in the central circular regions with a radius of 21 arcsec for NGC 7314, 25 arcsec for NGC 3783, Mrk 279, and Mrk 766, and 29 arcsec for NGC 3516 (see Fig. \ref{Sample_galaxies}). These galaxies are nearby sources with redshifts within 0.004-0.031 and are bright, with 2-10 keV fluxes within $\sim$(1-5)$\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ (see below), so the above source radii can be adopted for our study. As mentioned above, five of the studied galaxies are Seyfert 1 type, so the detection of eclipsing events is expected, while the Seyfert 2 galaxy is included in this study for comparative purposes.
\begin{table}
\centering
\caption{Observational parameters of the studied galaxies.}
\label{table1}
\begin{tabular}{lcrcr}
\hline
Galaxy & type & distance & obsIDs & exposure time\\
& & (Mpc) & & ($\times$10$^3$ sec)\\
\hline
NGC 3783 & Seyfert 1 & 41.6 & 0780860901 & 115.0\\
& & & 0780861001 & 57.0 \\\hline
Mrk 279 & Seyfert 1 & 131.7 & 0302480401 & 59.8\\
& & & 0302480501 & 59.8 \\
& & & 0302480601 & 38.2 \\ \hline
Mrk 766 & Seyfert 1 & 64.9 & 0304030101 & 95.5\\
& & & 0304030301 & 98.9 \\
& & & 0304030401 & 98.9 \\ \hline
NGC 3227 & Seyfert 1 & 22.4 & 0782520201 & 92.0\\
& & & 0782520301 & 74.0\\
& & & 0782520401 & 84.0\\
& & & 0782520501 & 87.0\\ \hline
NGC 7314 & Seyfert 2 & 15.8 & 0725200101 & 140.5\\
& & & 0725200301 & 132.1\\ \hline
NGC 3516 & Seyfert 1 & 38.9 & 0401210401 & 52.2\\
 & & & 0401210501 & 69.1\\
& & & 0401210601 & 68.5\\
& & & 0401211001 & 68.6\\
\hline
\end{tabular}
\end{table}
\section{Analysis of the X-ray data}\label{Analysis}
\subsection{Temporal analysis}\label{Analysis1}
We show the 1-10 and 6-10 keV flux light curves in Figs. \ref{HRngc3783}-\ref{HRngc3516} for the observations of the six galaxies included in our study. In these figures, we also show the hardness-ratio (HR) light curves ($F(6-10)/F(1-5)$), which reveal time-intervals with strong changes in the X-ray radiation, likely originating as a consequence of occultations of the central X-ray source by clouds crossing the line of sight. As mentioned above, for this analysis we extracted data towards the circular region of the six galaxies.\\
In Fig. \ref{HRngc3783}, the time-interval 1 shows a strong change\footnote{In this paper, we consider a change in the HR light curve when this change is greater than 25\% inside the same time-interval or there is a change in the HR light curves greater than 25\% between time-intervals.} in the HR light curve in NGC 3783. None of the studied time-intervals show changes in the HR curves in Mrk 279 (see Fig. \ref{HRmrk279}). The time-intervals 1-2 reveal strong changes in the HR curves in Mrk 766 (see Fig. \ref{HRmrk766}). We have labeled three sub time-intervals as SUB-INT 1, SUB-INT 2, and SUB-INT 3 in Fig. \ref{HRmrk766}, where changes in the HR light curves of Mrk 766 are observed. These time-intervals were studied in detail by \citet{Risaliti2011}. As mentioned above, we included Mrk 766 in our study for verifying part of the results with those of \citet{Risaliti2011}.
The time-interval 1 and sub time-intervals SUB-INT 1 and SUB-INT 2 show variations in the HR light curves in NGC 3227 (see Fig. \ref{HRngc3227}). For NGC 7314 (see Fig. \ref{HRngc7314}), the sub time-interval labeled as SUB-INT 1 shows a change in the HR curve. We notice in Fig. \ref{HRngc3516} a change in the HR light curve of the time-interval 3 in NGC 3516. In summary, NGC 3783, Mrk 766, NGC 3227, NGC 7314, and NGC 3516 show changes in their HR curves, which could be a consequence of occultations of the central X-ray source by orbiting clouds. The HR changes evidence the possible occurrence of three cloud occulations in Mrk 766 (already discovered by \cite{Risaliti2011}) and NGC 3227, and one cloud occultation in NGC 3783, NGC 7314 and NGC 3516.\\
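For reference, the bookkeeping behind these HR curves is simple; a minimal sketch (illustrative only, with placeholder count-rate arrays standing in for the extracted EPIC-pn light curves) of the ratio and of the 25\% variation criterion is:
\begin{verbatim}
import numpy as np

def hardness_ratio(rate_hard, rate_soft, n_rebin=10):
    """HR = F(6-10 keV)/F(1-5 keV), rebinned to suppress noise."""
    n = (len(rate_hard) // n_rebin) * n_rebin
    hard = rate_hard[:n].reshape(-1, n_rebin).mean(axis=1)
    soft = rate_soft[:n].reshape(-1, n_rebin).mean(axis=1)
    return hard / soft

def varies_by_25_percent(hr):
    """Flag an interval whose HR changes by more than 25%."""
    return (hr.max() - hr.min()) / hr.min() > 0.25

# Placeholder light curves (counts s^-1 in the two bands):
rate_hard = np.random.poisson(5, 1000) / 10.0
rate_soft = np.random.poisson(50, 1000) / 10.0
print(varies_by_25_percent(hardness_ratio(rate_hard, rate_soft)))
\end{verbatim}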
To see the possible effects of the soft X-ray emission (0.5-1 keV) on the ratio of the light curves, we show in Figures~\ref{HR_galaxies1} and \ref{HR_galaxies2} the F(6-10 keV)/F(0.7-1 keV) curves and the F(0.7-1 keV)/F(0.5-0.7 keV) curves. In these figures, we notice that the shape of the F(6-10 keV)/F(0.7-1 keV) curves is quite similar to that of the F(6-10 keV)/F(1-5 keV) curves shown in Figures \ref{HRngc3783}-\ref{HRngc3516}. However, we find differences between the F(0.7-1 keV)/F(0.5-0.7 keV) curves and the F(6-10 keV)/F(0.7-1 keV) curves in NGC 3783, Mrk 766, NGC 3227, and NGC 3516 (see Figures \ref{HR_galaxies1} and \ref{HR_galaxies2}).
The F(0.7-1 keV)/F(0.5-0.7 keV) curves do not show changes as those shown by the F(6-10 keV)/F(0.7-1 keV) curves in NGC 3227 and NGC 3516 (top and bottom panels in Figure \ref{HR_galaxies2}). There is an inversion in the shape of the curves of the SUB-INT 1, SUB-INT 2, and SUB-INT 3 relative to the curve of the third time interval in Mrk 766 (see the bottom panel in Figure \ref{HR_galaxies1}). The shape of the F(0.7-1 keV)/F(0.5-0.7 keV) curve in NGC 3783 is flatter than that of F(6-10 keV)/F(0.7-1 keV). There are no differences in the shape of the curves of Mrk 279 shown in the middle panel of Figure \ref{HR_galaxies1}.
The F(0.7-1 keV)/F(0.5-0.7 keV) curve is noisier than that of F(6-10 keV)/F(0.7-1 keV) in NGC 7314, which does not allow us to see the occurrence of possible eclipsing events (see the middle panel of Fig. \ref{HR_galaxies2}). The fact that the F(0.7-1 keV)/F(0.5-0.7 keV) curves do not show the occurrence of eclipsing events in NGC 3227 and NGC 3516, and that this curve is flatter than that of F(6-10 keV)/F(0.7-1 keV) in NGC 3783, suggests that the eclipsing events are best observed in the hard X-ray band (> 1 keV).
In the following section, we show a spectral analysis for the whole time-intervals, as well as a time-resolved analysis of the intervals or sub-intervals showing variations in the HR F(6-10 keV)/F(1-5 keV) curves.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{AGNs.eps}
\caption{1-10 keV band maps of the six AGNs studied in this paper. The black circle shows the region used as background, while the red circle shows the region used to extract spectra and light curves for each galaxy.}
\label{Sample_galaxies}
\end{figure*}
\begin{figure}
\centering
\includegraphics[trim=1.5cm 0 1.8cm 0,width=0.42\textwidth]{Curve_ratio_NGC3783.eps}
\caption{The 1-10 and 6-10 keV flux light curves (top panel) and hardness-ratio (bottom panel) of NGC 3783. The time-interval 1 showing a change in the hardness-ratio is indicated.}\label{HRngc3783}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=1.5cm 0 1.8cm 0,width=0.42\textwidth]{Curve_Ratio_Mrk279.eps}
\caption{The 1-10 and 6-10 keV flux light curves (top panel) and hardness-ratio (bottom panel) of Mrk 279.}\label{HRmrk279}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=2.7cm 0 3.5cm 0,width=0.33\textwidth]{Curve_ratio_Mrk766.eps}
\caption{The 1-10 and 6-10 keV flux light curves (top panel) and hardness-ratio (bottom panel) of Mrk 766. The sub-intervals 1-3 showing a change in the hardness-ratio are indicated.}\label{HRmrk766}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=4.6cm 0 4.7cm 0,width=0.44\textwidth]{Curve_ratio_NGC3227.eps}
\caption{The 1-10 and 6-10 keV flux light curves (top panels) and hardness-ratio (bottom panels) of NGC 3227. The interval 1 and sub-intervals 1-2 showing a change in the hardness-ratio are indicated.}\label{HRngc3227}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=1cm 0 1.3cm 0,width=0.47\textwidth]{Curve_ratio_NGC7314.eps}
\caption{The 1-10 and 6-10 keV flux light curves (top panels) and hardness-ratio (bottom panels) of NGC 7314. The sub-interval 1 showing a change in the hardness-ratio is indicated.}\label{HRngc7314}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim=0.5cm 0 1cm 0,width=0.47\textwidth]{Curve_ratio_NGC3516.eps}
\caption{The 1-10 and 6-10 keV flux light curves (top panels) and hardness-ratio (bottom panels) of NGC 3516. The time-interval 3 showing a change in the hardness-ratio is indicated.}\label{HRngc3516}
\end{figure}
\subsection{Spectral analysis}\label{Spectral_analysis}
For this analysis, we have used spectra extracted towards the same central regions used for the temporal analysis. A sample of the spectra corresponding to the first time-interval for each galaxy is shown in \mbox{Fig. \ref{figure1}}.
Using Xspec v12.12.0 \citep{1996ASPC..101...17A}, we modeled the spectra considering a partial absorber with free column density ($N_{\rm H}$) and covering fraction ($C_{\rm F}$), one Gaussian emission line at 6.4 keV and a power-law continuum. The $N_{\rm H}$ and $C_{\rm F}$ components are included in our modeling as we are interested in studying changes in these two parameters of the absorbers that could be eclipsing the central X-ray source.
Our best-fitting models are indicated in Fig. \ref{figure1} and the estimated parameters are listed in Table \ref{Table2}. The above model allowed us to fit well the spectra of the six galaxies; we obtained a reduced $\chi^2$ within $\sim$0.9-1.2. An additional Gaussian line, fixed at 6.15 keV and with an equivalent width (EW) of $\sim$250 eV, is needed to fit the data of NGC 3783. This additional line was also used by \cite{Blustin2002} to fit the broad component of the neutral Fe K$\alpha$ line of NGC 3783. We obtain a better fit of the spectrum from time-interval 1 for NGC 3783 when including an absorption line at $\sim$6.7 keV with a full width at half maximum (FWHM) of 33 eV.
Furthermore, we introduced an absorption feature at $\sim$7.15 keV with an FWHM within 260-310 eV to obtain a better fit of the spectra for the three time-intervals in Mrk 766.
The presence of absorption features only in NGC 3783 and Mrk 766 may indicate that the absorber in these two galaxies has a lower ionization degree than in the other galaxies included in our study, which is suggested by \cite{2012MNRAS.424.2255W} as an important parameter affecting the X-ray absorption.
On the other hand, an additional Gaussian line at $\sim$7 keV is required to fit the hydrogen-like Fe K$\alpha$ line in the spectra of the Seyfert 2 galaxy NGC 7314. In the next section, we will study the spectra showing changes in the HR curves by considering shorter time-scales.\\
We also modeled the spectra of the three time-intervals for Mrk 279 and Mrk 766 considering the above components plus the pexrav model \citep{Magdziarz95} to account for neutral reflection.
We modeled the spectrum of the three time intervals of both galaxies keeping all the parameters free.
The parameters obtained with this analysis are given in Table \ref{Table22}. We notice that there is very little difference between the parameters listed in Table \ref{Table2} and those obtained using the pexrav component. Therefore, in this paper, we will consider models without the pexrav component, for simplicity.
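For reference, the baseline fit can be set up along the following lines with PyXspec (a schematic sketch only: the file name and starting values are placeholders, \texttt{pcfabs} is used here as the partial covering absorber, and the extra Gaussian and absorption components discussed above would be added per galaxy):
\begin{verbatim}
from xspec import Spectrum, Model, Fit

s = Spectrum("pn_interval1.pha")     # placeholder spectrum file
s.ignore("**-1.0 10.0-**")           # keep the 1-10 keV band

m = Model("pcfabs*(powerlaw + gaussian)")
m.pcfabs.nH         = 10.0           # starting value, 10^22 cm^-2
m.pcfabs.CvrFract   = 0.5            # covering fraction C_F
m.powerlaw.PhoIndex = 1.8
m.gaussian.LineE    = 6.4            # Fe K-alpha, keV
m.gaussian.Sigma    = 0.05           # keV

Fit.statMethod = "chi"
Fit.perform()
print(Fit.statistic / Fit.dof)       # reduced chi^2
\end{verbatim}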
We have presented the averaged value of the EW of the Fe K$\alpha$ line as a function of the mass of the SMBH in Fig.~\ref{Massvsew}, where the SMBH mass of the Seyfert 1 galaxies and NGC 7314 is taken from \cite{2015PASP..127...67B} and \cite{Emma2016}, respectively. In Fig.~\ref{Massvsew}, we also show a high SMBH mass of 2$\times$10$^7$ $M_{\odot}$ for NGC 3227 (labeled as NGC 3227* in Fig.~\ref{Massvsew}) found by \cite{2008ApJS..174...31H}.
As can be seen in this figure, it appears that the EW is well anti-correlated with the mass of the SMBH when we consider the high SMBH mass for NGC 3227. The Seyfert 1 galaxies in Fig.~\ref{Massvsew} follow a linear decreasing relationship, while the EW value of the Seyfert 2 galaxy (NGC 7314) is an outlier outside this relationship, which will be discussed later.
To study the dependence between the variables (excluding the Seyfert 2 galaxy), we calculated Pearson's coefficient and performed a least-squares regression, taking into account the low (case a) and high (case b) mass estimate of the SMBH for NGC 3227. Pearson's coefficient was calculated using the PYTHON routine pearsonr of SciPy, while the regression was estimated with the lmfit package\footnote{https://lmfit.github.io/lmfit-py/intro.html}. Thus, we found the following regression for the relation between the EW and the mass of the SMBH in case a:
\begin{equation}\label{equa1}
EW(eV)=(-80.59\pm33.39)\log(M_{\rm BH} (10^6 M_{\odot}))+(712.00\pm240.95)
\end{equation}
and in the case b:
\begin{equation}\label{equa2}
EW(eV)=(-187.25\pm62.25)\log(M_{\rm BH} (10^6 M_{\odot}))+(1491.54\pm452.98)
\end{equation}
The above relations for cases a and b are indicated with a blue line and a red line, respectively, in Fig. \ref{Massvsew}. We estimated Pearson's coefficients of -0.48 and -0.87 for the anti-correlation between the logarithm of the SMBH mass and the EW in cases a and b, respectively. The corresponding p-values are 0.41 in case a and 0.059 in case b.
Therefore, there is a better anti-correlation between the two variables in case b than in case a.
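The correlation analysis reduces to a few lines; a minimal sketch using the same SciPy and lmfit routines (the arrays below are placeholders for the tabulated EW values and SMBH masses, not the measured values):
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr
from lmfit.models import LinearModel

ew    = np.array([310., 260., 240., 190., 150.])  # placeholder EWs (eV)
m_bh  = np.array([3., 7., 15., 25., 40.])         # placeholder masses (1e6 Msun)
log_m = np.log10(m_bh)

r, p = pearsonr(log_m, ew)          # Pearson coefficient and p-value

model  = LinearModel()
result = model.fit(ew, model.guess(ew, x=log_m), x=log_m)
print(r, p, result.params['slope'].value, result.params['intercept'].value)
\end{verbatim}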
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{Ratio_NGC3783.eps}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{Ratio_Mrk279.eps}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{Ratio_Mrk766.eps}
\end{subfigure}
\caption{The curves of the 6-10 keV flux over
the 0.7-1.0 keV flux (red) and the 0.7-1.0 keV flux over the 0.5-0.7 keV flux (green) for NGC 3783, Mrk 279, and Mrk 766 from the top to the bottom. Different intervals or sub-intervals showing a change in the hardness ratio F(6-10 keV)/F(1-5 keV) are indicated for NGC 3783 and Mrk 766.}\label{HR_galaxies1}
\end{figure}
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[trim=3cm 0 2cm 0,width=0.9\textwidth]{Ratio_NGC3227.eps}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[trim=2.1cm 0 1.1cm 0,width=0.9\textwidth]{Ratio_NGC7314.eps}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[trim=1.5cm 0 1cm 0,width=0.9\textwidth]{Ratio_NGC3516.eps}
\end{subfigure}
\caption{The curves of the 6-10 keV flux over the 0.7-1.0 keV flux (red) and the 0.7-1.0 keV flux over the 0.5-0.7 keV flux (green) for NGC 3227, NGC 7314, and NGC 3516 from the top to the bottom. Different intervals or sub-intervals showing a change in the hardness ratio F(6-10 keV)/F(1-5 keV) are indicated for NGC 3227, NGC 7314, and NGC 3516.}\label{HR_galaxies2}
\end{figure}
\begin{figure*}
\begin{subfigure}{0.33\textwidth}
\includegraphics[trim=0 2.3cm 0 0,width=4.5cm,angle=-90]{NGC3783_inter1.ps}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[trim=0 2.3cm 0 0,width=4.5cm,angle =-90]{Mrk279_inter1.ps}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[trim=0 2.3cm 0 0,width=4.5cm,angle =-90]{Mrk766_inter1.ps}
\vspace{0.3cm}
\end{subfigure}
\par\bigskip
\begin{subfigure}{0.33\textwidth}
\includegraphics[trim=0 2.3cm 0 0,width=4.5cm,angle =-90]{NGC3227_inter1.ps}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\includegraphics[trim=0 2.3cm 0 0,width=4.5cm,angle =-90]{NGC7314_inter1.ps}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[trim=0 2.3cm 0 0,width=4.5cm,angle=-90]{NGC3516_inter1.ps}
\vspace{0.3cm}
\end{subfigure}
\par\bigskip
\caption{The top panels in each plot show spectra for our sample of AGNs together with the best-fit models (red line) for the first time-interval indicated in Table~\ref{table1}. The bottom panels in each plot show the contribution to the $\chi^2$ value with sign according to the difference of the data and the model for each data point.}\label{figure1}
\end{figure*}
\subsection{Time-resolved analysis}
In this section, we analyze the spectra of the time-intervals or sub time-intervals showing changes in the HR curves (see Fig. \ref{HRngc3783} and \ref{HRmrk766}-\ref{HRngc3516}) considering short time scales that allow us to identify changes in the relative flux, $N_{\rm H}$ and $C_{\rm F}$. For this, we use uniform time bins and include a number of counts that are large enough for resolving changes in $N_{\rm H}$ and $C_{\rm F}$, using our models. We focus our spectral analysis on the hard X-ray emission (>1 keV) because, as indicated in Section \ref{Analysis1}, NGC 3227 and NGC 3516 do not reveal the occurrence of eclipsing events in the F(0.7-1 keV)/F(0.5-0.7 keV) curves, while the F(6-10 keV)/F(0.7-1 keV) curves do that. Another argument supporting the choice of the hard X-ray range in our analysis is the following: considering energies <2 keV in the spectral analysis leads to values of $N_{\rm H}$<1$\times$10$^{22}$ cm$^{-2}$ and values of $C_{\rm F}$ close to 1 in NGC 3227 and NGC 7314, making it impossible to follow changes in the $C_{\rm F}$ values.
For this analysis, we use the models described in Section \ref{Spectral_analysis} excluding the pexrav component.
The results of our analysis are shown in Table \ref{parameters_sixgalaxies} and Fig. \ref{pargalaxy1}-\ref{parNGC3516}, where we use a number followed by a letter for labeling the time bins. The number of the bin corresponds to that of the time-interval or the sub time-interval. We do not analyze the second time-interval of Mrk 766 because this was already done in \cite{Risaliti2011}.\\
We notice that the first time-interval in Mrk 766 (see Fig.~\ref{pargalaxy1}, middle panel), the first and second time-intervals in NGC 3227 (see Fig.~\ref{pargalaxy2}, left and middle panels), and the third time-interval in NGC 3516 show $N_{\rm H}$ values with a similar trend to that of the $C_{\rm F}$, but the $N_{\rm H}$ values appear more dispersed (see Fig.~\ref{parNGC3516}). The $N_{\rm H}$ value in the sub time-interval 1F of NGC 3227 is out of the trend (see the left panel in Fig.~\ref{pargalaxy2}), which is not clear.
The $N_{\rm H}$ reveals the highest values in the sub time-intervals 1B and 1C (within SUB-INT 1) of NGC 7314, where we see the change in the HR light curve (see Fig.~\ref{pargalaxy1}, right panel), and the lowest value of $C_{\rm F}$ matches well the lowest value of the HR curve in the SUB-INT 1.
The observed trend between $N_{\rm H}$ and $C_{\rm F}$, in the above time intervals or sub-time intervals of NGC 3227, NGC 7314 and NGC 3516, is consistent with that found for Mrk 766 by \cite{Risaliti2011}, supporting the idea that during the time intervals that show this trend, a cloud or clouds obscure the central X-ray source.
On the other hand, the values of $N_{\rm H}$ in NGC 3783 show a trend of decreasing with time, while the values of $C_{\rm F}$ reveal a modest increase.
The $N_{\rm H}$ reaches its maximum value around 1.77$\times$10$^6$ seconds and the values of $C_{\rm F}$ show a clear decrease with time in the third time-interval of NGC 3227 (see Fig.~\ref{pargalaxy2}, right panel). It is not clear why the values of $N_{\rm H}$ do not follow the values of the $C_{\rm F}$ in the first time-interval of NGC 3783 and the third time-interval of NGC 3227, which show changes in the HR curves. We also notice in Fig. \ref{pargalaxy1}-\ref{parNGC3516} that there is a clear anti-correlation between the HR curve and the relative flux for the studied galaxies, except for NGC 7314. It is difficult to check changes of the relative flux and of the other two parameters in SUB-INT 1 of NGC 7314, because there are only two possible estimates within this sub time-interval (see right panel of Fig. \ref{pargalaxy1}).
To see the variations in the values of $N_{\rm H}$ and $C_{\rm F}$ with time and their uncertainties, Figure \ref{NhvsCf_contour} shows contour plots of $C_{\rm F}$ versus $N_{\rm H}$ of several representative time intervals given in Table \ref{parameters_sixgalaxies}. We see in this figure that there are uncertainties with overlapping ranges. We estimate these uncertainties with Xspec at 68 and 90 percent confidence levels, which allow us to distinguish the changes in both parameters with time.
For example, the values of $N_{\rm H}$ and $C_{\rm F}$ tend to increase with time in the second time interval of NGC 3227 (panel e in Fig. \ref{NhvsCf_contour}), while the values of $C_{\rm F}$ increase and those of $N_{\rm H}$ decrease with time for the first time interval of NGC 3783 (panel b in Fig. \ref{NhvsCf_contour}).
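For two parameters of interest, the 68.3 and 90 per cent confidence regions correspond to the standard thresholds $\Delta\chi^2 = 2.30$ and $4.61$; schematically, contours like those in Figure \ref{NhvsCf_contour} can be drawn from a grid of fit statistics (a toy $\chi^2$ surface is used here in place of the Xspec output):
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

nh = np.linspace(1, 20, 80)                  # N_H in 10^22 cm^-2
cf = np.linspace(0.1, 0.9, 80)               # covering fraction
NH, CF = np.meshgrid(nh, cf)
chi2 = ((NH - 8) / 3.0)**2 + ((CF - 0.55) / 0.1)**2   # toy chi^2 surface

levels = chi2.min() + np.array([2.30, 4.61])  # 68.3% and 90%, 2 parameters
plt.contour(NH, CF, chi2, levels=levels)
plt.xlabel("N_H (1e22 cm^-2)")
plt.ylabel("C_F")
plt.savefig("nh_cf_contours.png")
\end{verbatim}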
\section{Discussion}\label{Discussion}
\subsection{Relation between the EW of the 6.4 keV Fe line with the black hole mass}
The best anti-correlation between the values of the EW of the Fe K$\alpha$ line and the SMBH mass is shown in Fig. \ref{Massvsew} for the five Seyfert 1 galaxies studied in this paper, and is fitted using equation \ref{equa2}. On the other hand, the average EW of the Seyfert 2 type NGC 7314 is not consistent with this relation. NGC 7314, with a SMBH mass of 0.9$\times$10$^6$ M$\odot$, reveals an average EW of 100$\pm$11 eV, which is a factor of 2.3 lower than that of Mrk 766 with a SMBH mass of 6.6$\times$10$^6$ M$\odot$. The average EW for NGC 7314, derived in this contribution, is consistent with the 82$^{+19}_{-21}$ eV found by \cite{2013ApJ...767..121Z} for the same galaxy.
Considering the linear regression in Fig.~\ref{Massvsew} fitted with equation \ref{equa2}, an EW of $\sim$380 eV is expected for a Seyfert 1 AGN with a SMBH mass of 0.9$\times$10$^6$ M$\odot$ equal to that of NGC 7314.
This contrasts with the findings by \cite{2011A&A...532A..84S}, who found that the EW values of the Fe K$\alpha$ line in Seyfert 2 galaxies are higher than in Seyfert 1 galaxies, which could be a consequence of measuring the EW against the depressed continuum by the torus obscuration in Seyfert 2 galaxies \citep{2011A&A...532A..84S}. On the contrary, our findings are consistent with the attenuation in the luminosity of the Fe K$\alpha$ line in Compton-thick Seyfert 2 galaxies (likely due to absorption of the reflected component) compared to that of Seyfert 1 galaxies with the same mid-IR luminosity \citep{2014MNRAS.441.3622R}.\\
Figure \ref{Massvsew} suggests that the SMBH mass can directly affect the Fe K$\alpha$ line emission region in Seyfert 1 galaxies.
The EW-SMBH mass anti-correlation is expected from the EW-X-ray luminosity anti-correlation known as the X-ray Baldwin effect \citep{1993ApJ...413L..15I,2004MNRAS.347..316P,2010ApJS..187..581S}.
\cite{1997ApJ...477..602N} found that the Fe K$\alpha$ line emission in Seyfert 1 galaxies likely originates in the accretion disk orbiting the SMBH.
On the other hand, \cite{2011ApJ...727...19F} discovered that the Fe K$\alpha$ line is likely generated in the Compton-thick torus because they found a relation between the EW of the Fe K$\alpha$ line and the absorption column density in Seyfert galaxies.
The accretion rate, proposed as responsible for producing the X-ray Baldwin effect \citep{2009ApJ...690.1322W}, could play an important role in causing the EW-SMBH mass anti-correlation if the Fe K$\alpha$ emission line is generated in the accretion disk.
\subsection{Physical properties of the eclipsing clouds}
We analyzed the X-ray variability of six Seyfert galaxies (see Table \ref{table1}), including the galaxy Mrk 766 discussed by \citet{Risaliti2011}, following the same procedure described in that work. These observations were obtained with the \textit{XMM-Newton} mission using the PN instrument. The AGNs are bright enough [F(2-10 keV) $\gtrsim$ 1$\times$ 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$] and have long observation times of at least $\sim$40 ks. Following that procedure, we carried out a preliminary analysis of the HR between the high- and low-energy light curves [(6-10 keV)$/$(1-5 keV)] to identify intervals where the strongest spectral variations of the X-ray emission take place due to occultation by clouds of the BLR. Then, we identified the occultation events and estimated their durations from the HR light curves.
Given the eclipse duration and the variation of the covering factor during the occultation, the cloud velocity can be estimated with the expression of \cite{Risaliti2011}, derived assuming that the central X-ray source has a size of at least 5$R_g$: $V > 2.5 \times 10^3~ M_6 T_4^{-1} \sqrt{\Delta C_F}$ km s$^{-1}$, where $M_6$ is the black hole mass in units of $10^6$ $M_{\odot}$, $T_4$ is the occultation time in units of $10^4$ seconds and $\Delta C_F$ is the covering factor variation during the eclipse.
As we will see below, our sources have sizes much larger than five gravitational radii ($R_g = \frac{2\,G M_{\rm BH}}{c^2}$), so the above expression can safely be applied.
On the other hand, assuming that the obscuring clouds move with Keplerian velocities and combining the transverse velocity of the clouds with the occultation times derived from the variability of the hardness-ratio light curves of each galaxy, a geometrical limit on the X-ray source size can be obtained using the expression given by \cite{2007ApJ...659L.111R}: $D_s = (G M_{BH})^\frac{1}{3} T^\frac{2}{3}$, where $M_{\rm BH}$ is the black hole mass and $T$ is the occultation time. To estimate the size of the eclipsing cloud, we account for the distribution of material across the BLR through the covering factor $\frac{\Omega}{4\pi}$, where $\Omega$ is the solid angle subtended by the clouds, and use the expression $D_c =4\sqrt{ C_F} D_s$.
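As a purely illustrative cross-check (not part of the analysis pipeline itself), these expressions can be evaluated directly. The following minimal Python sketch uses the first time-interval of Mrk 766 as an example, taking $M_6$, $T_4$ and $\Delta C_F$ from Table \ref{Table5} and $C_F$ from Table \ref{Table2}:
\begin{verbatim}
# Illustrative check of the expressions above for Mrk 766, interval 1.
import math

G, M_sun = 6.674e-8, 1.989e33                   # cgs constants
M6, T4, dCF, CF = 6.6, 4.0, 0.34, 0.56          # from Tables 2 and 5

V  = 2.5e3 * M6 / T4 * math.sqrt(dCF)           # ~2.4e3 km/s (Table 5: 2420)
Ds = (G*M6*1e6*M_sun)**(1/3) * (T4*1e4)**(2/3)  # ~1.1e14 cm  (Table 5: 11.2e13)
Dc = 4.0 * math.sqrt(CF) * Ds                   # ~3.4e14 cm  (Table 5: 33.7e13)
print(V, Ds, Dc)
\end{verbatim}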
The physical parameters of the sources and BLR clouds are given in Table \ref{Table5}.
We have not included Mrk 279 in Table \ref{Table5} as this galaxy does not show changes in the HR curves, which is needed for deriving occultation times and variation in the covering factor.
This is the reason why we have not included the parameters of some time-intervals or sub time-intervals of the other AGNs in Table \ref{Table5} either.
The derived values in Table \ref{Table5} for the first and third time-intervals of NGC 3783 and NGC 3227, respectively, may be biased because the $N_{\rm H}$ values do not follow the $C_{\rm F}$ values, as mentioned above, which is expected when there are gradients in both parameters across the line of sight \citep{Risaliti2011}.
We note that at X-ray energies a single cloud can nearly fully block our view of the X-ray source, since the cloud is larger than the continuum source (whose sizes are within $\sim$(30-100) $R_g$). The clouds have linear dimensions of $\sim$10$^{14}$-10$^{15}$ cm, velocities $>$750 km s$^{-1}$, assuming that the clouds orbit the SMBH with Keplerian velocities, and densities $n_c\sim 10^{8}-10^{9}$ cm$^{-3}$. From our analysis, we found that the column density of the eclipsing clouds is about $10^{23}$ cm$^{-2}$ (except for NGC 7314, with a column density of 2$\times$10$^{22}$ cm$^{-2}$, see Table \ref{Table2}) and the linear scale of the clouds is of the order of $10^{14}$ cm; the corresponding cloud density is then within $10^{8}-10^{9}$ cm$^{-3}$, as indicated above (see Table \ref{Table5}). This is in agreement with the results by \cite{2012MNRAS.424.2255W}, who found that strong X-ray absorption takes place when eclipsing clouds with column densities of 10$^{22}$-$10^{23}$ cm$^{-2}$ are far from the central black hole, leading to lower ionization degrees and larger opacities of the clouds.
As expected from the expressions used in our estimates (the derived cloud velocity scales linearly with the black hole mass, and the source and cloud sizes also increase with it), NGC 3516, which hosts a massive black hole of 25 million solar masses and thus illustrates the coupling between the central black hole and its surroundings, shows the fastest cloud velocities and the largest cloud and continuum-source sizes. Assuming Keplerian velocities, we can estimate the distances ($r_{\rm c}$) of the absorbing clouds from the central X-ray source, which are given in the last two columns of Table \ref{Table5}.
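Assuming $n_{\rm c} \simeq N_{\rm H}/D_{\rm c}$ for the cloud density and a circular Keplerian orbit with $r_{\rm c} \simeq G M_{\rm BH}/V_{\rm c}^{2}$ for the distance (these relations are not written out above, but they reproduce the tabulated values), one obtains for the first time-interval of Mrk 766, for instance,
\[
n_{\rm c} \simeq \frac{1.1\times10^{23}\,{\rm cm^{-2}}}{3.4\times10^{14}\,{\rm cm}} \approx 3\times10^{8}\,{\rm cm^{-3}},
\qquad
r_{\rm c} \simeq \frac{G M_{\rm BH}}{V_{\rm c}^{2}} \approx 1.5\times10^{16}\,{\rm cm} \approx 0.8\times10^{4}\,R_{\rm g},
\]
in agreement with the corresponding entries of Table \ref{Table5}.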
The cloud in NGC 3783 shows a velocity of $>$750 km s$^{-1}$, only slightly higher than the values below 700 km s$^{-1}$ found for the NLR in AGNs \citep{1988ApJS...67..373E,2005MNRAS.357..220R}, while the other AGNs reveal cloud velocities of $>$1122 km s$^{-1}$; we may therefore be observing an NLR cloud instead of a BLR cloud in NGC 3783.
As mentioned above, the cloud velocity derived for NGC 3783 may be biased because the values of $N_{\rm H}$ do not follow those of $C_{\rm F}$, as expected when there are variations in both parameters during an eclipsing event. The high density of 8$\times$10$^7$ cm$^{-3}$ estimated for NGC 3783 supports the idea that the eclipsing cloud in this AGN is likely a BLR cloud. This density is much larger than the values of $<10^5$ cm$^{-3}$ found for NLRs \citep{2006A&A...447..863N}.
The changes in the HR curves, together with the common variation trend of the $N_{\rm H}$ and $C_{\rm F}$ values in Mrk 766, NGC 3227, NGC 7314, and NGC 3516, indicate that BLR clouds are likely eclipsing the central X-ray sources. The BLR clouds eclipsing the central X-ray source in Mrk 766 were studied in detail by \cite{Risaliti2011}. The occultation events observed in NGC 3516 and NGC 3227 are in agreement with the flux variations that could be due to the passage of clumps across the line of sight in NGC 3516 \citep{2011ApJ...733...48T} and with the detection of a transient obscuration event in NGC 3227 \citep{2021A&A...652A.150M}, although the latter would be caused by obscuring winds rather than eclipsing clouds. \cite{2011A&A...535A..62E} proposed that neutral gas grazing the clumpy torus in NGC 7314 and crossing our line of sight is responsible for the variations in the absorption properties of NGC 7314, which is consistent with our findings, although our study places the absorber in the BLR. \cite{2020A&A...634A..65D} estimated a density of $>$7$\times$10$^7$ cm$^{-3}$ for the obscuring gas in the BLR of NGC 3783, which is consistent with the density of the eclipsing cloud found for this galaxy. The column density and the distance (from the central X-ray source) of the eclipsing cloud in NGC 3783 agree with those derived by \cite{2004ApJ...602..648R} using the absorption line at 6.67 keV, which was also detected in our study (see Section \ref{Spectral_analysis}).
Moreover, the column density and covering factor estimated for the eclipsing cloud in NGC 3516 are in agreement with those found by \cite{2008A&A...483..161T} for an absorber eclipsing the continuum central source in NGC 3516.
The derived distances $r_{\rm c}$ of the eclipsing clouds from the X-ray source are within (0.3-8)$\times$10$^4$ $R_{\rm g}$ (see Table \ref{Table5}).
The $r_{\rm c}$ value of 8$\times$10$^4$ $R_{\rm g}$ derived for NGC 3783 agrees with those of $(7.4-8.6)\times$10$^4$ $R_{\rm g}$ estimated by \cite{2014MNRAS.439.1403M} for the X-ray absorbing clouds located at the dusty torus of this AGN.
The $r_{\rm c}$ values within (3-8)$\times$10$^3$ $R_{\rm g}$ derived for Mrk 766 are consistent with those derived by \cite{Risaliti2011} for BLR clouds in this AGN. We find $r_{\rm c}$ values within $\sim$(1-2)$\times$10$^4$ $R_{\rm g}$ for NGC 3227, which are lower than those of (0.7-2)$\times$10$^5$ $R_{\rm g}$ found for X-ray absorbing clouds located in the dusty torus of this galaxy \citep{2014MNRAS.439.1403M}.
We would obtain lower distances, of around $10^3$ $R_{\rm g}$, for the eclipsing clouds in NGC 3227 if we considered the higher SMBH mass of 2$\times$10$^7$ M$_{\odot}$ used in our analysis in Section \ref{Spectral_analysis}; these values are lower than the BLR cloud distances of $\sim$(0.5-7)$\times$10$^4$ $R_{\rm g}$ measured for NGC 3227 \citep{2014MNRAS.439.1403M}.
Furthermore, the $r_{\rm c}$ values of 9.6$\times$10$^{15}$ cm derived for NGC 7314 and of 3.2$\times$10$^{16}$ cm derived for NGC 3516 are consistent with those expected in BLRs of AGNs with a SMBH mass within $\sim$(0.1-2)$\times$10$^7$ M$_{\odot}$ \citep[see][]{2019A&A...628A..26P}.
Thus, the clouds obscuring the central X-ray source in Mrk 766, NGC 3227, NGC 7314, and NGC 3516 show $r_{\rm c}$ values of (0.3-3.6)$\times$10$^4$ $R_{\rm g}$, typical of BLR clouds.
\begin{table*}
\centering
\caption{Spectral analysis parameters.}
\label{Table2}
\begin{threeparttable}
\begin{tabular}{llccrcrrr}
\hline
Galaxy & Interval & $\Gamma$\tnote{a} & N$_{\rm H}$\tnote{b} & C$_{\rm F}$\tnote{c} & Energy\tnote{d} & EW\tnote{e} & f(2-10 keV)\tnote{f} & $\chi^2$/d.o.f.\\
& & & ($\times$10$^{22}$ cm$^{-2}$) & & (keV) & (eV) & ($\times$10$^{-11}$ erg s$^{-1}$ cm$^{-2}$) & \\
\hline
NGC 3783 & 1 & 1.61$\pm$0.02 & 8.61$^{+0.26}_{-0.25}$ & 0.83$\pm$0.01 & 6.42$^{+0.01}_{-0.01}$ & 156$^{+17}_{-20}$ & 2.41$\pm$0.01 & 799/787\\
& 2 & 1.77$\pm$0.01 & 8.08$^{+0.38}_{-0.37}$ & 0.76$\pm$0.01 & 6.40$^{+0.01}_{-0.01}$ & 144$^{+28}_{-27}$ & 3.00$\pm$0.02 & 760/753\\
\hline
Mrk 279 & 1 & 1.97$\pm$0.06 & 33.06$^{+35.33}_{-11.09}$ & 0.20$\pm$0.03 & 6.42$^{+0.02}_{-0.03}$ & 82$^{+20}_{-20}$ & 2.68$\pm$0.02 & 617/657\\
& 2 & 1.94$\pm$0.05 & 119.13$^{+59.88}_{-29.51}$ & 0.34$\pm$0.04 & 6.37$^{+0.04}_{-0.04}$ & 102$^{+41}_{-43}$ & 2.35$\pm$0.03 & 416/441\\
& 3 & 1.97$\pm$0.06 & 98.60$^{+43.65}_{-22.99}$ & 0.30$\pm$0.02 & 6.40$^{+0.04}_{-0.13}$& 83$^{+43}_{-41}$ & 2.36$\pm$0.04 & 324/344\\
\hline
Mrk 766 & 1 & 1.91$\pm$0.10 & 11.10$^{+1.26}_{-1.13}$ & 0.56$\pm$0.01 & 6.40\tnote{g} & 308$^{+86}_{-89}$ & 0.72$\pm$0.01 & 699/591\\
& 2 & 2.18$\pm$0.02 & 10.66$^{+2.98}_{-2.38}$ & 0.30$\pm$0.01 & 6.40\tnote{g} & 180$^{+66}_{-63}$ & 1.11$\pm$0.01 & 555/572\\
& 3 & 2.15$\pm$0.08 & 6.76$^{+3.42}_{-2.66}$ & 0.19$\pm$0.04 & 6.40\tnote{g} & 206$^{+71}_{-65}$ & 1.40$\pm$0.01 & 602/608\\
\hline
NGC 3227 & 1 & 1.67$\pm$0.05 & 11.95$^{+1.73}_{-1.51}$ & 0.32$\pm$0.01 & 6.41$^{+0.02}_{-0.02}$ & 117$^{+18}_{-16}$ & 3.13$\pm$0.02 & 795/735 \\
& 2 & 1.60$\pm$0.06 & 7.70$^{+2.95}_{-2.32}$ & 0.17$\pm$0.02 & 6.41$^{+0.01}_{-0.01}$ & 146$^{+17}_{-17}$ & 2.59$\pm$0.02 & 778/740 \\
& 3 & 1.70$\pm$0.01 & 16.67$^{+4.05}_{-3.09}$ & 0.19$\pm$0.01 & 6.39$^{+0.02}_{-0.02}$ & 95$^{+15}_{-12}$ & 3.14$\pm$0.01 & 840/769 \\
& 4 & 1.72$\pm$0.05 & 8.08$^{+2.94}_{-3.54}$ & 0.16$\pm$0.05 & 6.41$^{+0.02}_{-0.01}$ & 111$^{+15}_{-15}$ & 3.69$\pm$0.02& 819/766 \\
\hline
NGC 7314 & 1 & 1.81$\pm$0.01& 1.35$^{+0.78}_{-0.38}$ & 0.68$\pm$0.02 & 6.44$^{+0.02}_{-0.03}$ & 86$^{+13}_{-14}$ & 2.39$\pm$0.01 & 915/782 \\
& 2 & 1.80$\pm$0.04& 1.98$^{+0.84}_{-0.80}$ & 0.48$\pm$0.22 & 6.42$^{+0.03}_{-0.03}$ & 114$^{+18}_{-17}$ & 1.99$\pm$0.01 & 841/763 \\
\hline
NGC 3516 & 1 & 2.03$\pm$0.01& 7.37$^{+0.56}_{-0.53}$ & 0.45$\pm$0.01 & 6.39$^{+0.03}_{-0.02}$ & 89$^{+19}_{-20}$ & 5.25$\pm$0.02 & 928/768 \\
& 2 & 2.05$\pm$0.01& 7.90$^{+0.51}_{-0.48}$ & 0.48$\pm$0.01 & 6.37$^{+0.02}_{-0.01}$ & 123$^{+15}_{-13}$ & 4.58$\pm$0.02 & 954/766 \\
& 3 & 2.02$\pm$0.01 & 8.07$^{+0.43}_{-0.41}$ & 0.61$\pm$0.01 & 6.38$^{+0.02}_{-0.01}$ & 139$^{+18}_{-16}$ & 3.56$\pm$0.02 & 864/743 \\
& 4 & 2.05$\pm$0.04 & 7.13$^{+0.73}_{-0.77}$ & 0.48$\pm$0.03 & 6.41$^{+0.17}_{-0.06}$ & 133$^{+17}_{-15}$ & 4.59$\pm$0.02 & 890/769 \\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] Photon index of the continuum. The uncertainties given in this table are at the 90 per cent confidence level for one parameter of interest.
\item[b] Column density of hydrogen.
\item[c] Covering factor.
\item[d] Peak energy of the Fe K$\alpha$ line.
\item[e] Equivalent width of the 6.4 keV Fe K$\alpha$ line.
\item[f] The 2-10 keV flux.
\item[g] Fixed parameter.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\begin{table*}
\centering
\caption{Spectral analysis parameters obtained using pexrav.}
\label{Table22}
\begin{threeparttable}
\begin{tabular}{llccrcrrr}
\hline
Galaxy & Interval & $\Gamma$\tnote{a} & N$_{\rm H}$\tnote{b} & C$_{\rm F}$\tnote{c} & Energy\tnote{d} & EW\tnote{e} & f(2-10 keV)\tnote{f} & $\chi^2$/d.o.f.\\
& & & ($\times$10$^{22}$ cm$^{-2}$) & & (keV) & (eV) & ($\times$10$^{-11}$ erg s$^{-1}$ cm$^{-2}$) & \\
\hline
Mrk 279 & 1 & 1.99$\pm$0.05 & 30.76$^{+22.19}_{-8.55}$ & 0.20$\pm$0.03 & 6.42$^{+0.02}_{-0.03}$ & 80$^{+1}_{-4}$ & 2.68$\pm$0.02 & 616/657\\
& 2 & 1.89$\pm$0.13 & 111.78$^{+28.26}_{-16.38}$ & 0.40$\pm$0.01 & 6.37$^{+0.04}_{-0.04}$ & 95$^{+23}_{-65}$ & 2.35$\pm$0.03 & 416/436\\
& 3 & 1.83$\pm$0.50 & 95.02$^{+30.55}_{-18.82}$ & 0.40$\pm$0.01 & 6.38$^{+0.04}_{-0.04}$ & 73$^{+38}_{-36}$ & 2.36$\pm$0.05 & 322/341\\
\hline
Mrk 766 & 1 & 1.91$\pm$0.06 & 8.03$^{+1.10}_{-0.99}$ & 0.49$\pm$0.02 & 6.40\tnote{g} & 113$^{+91}_{-60}$ & 0.72$\pm$0.01 & 694/592\\
& 2 & 1.91$\pm$0.02 & 15.48$^{+3.84}_{-2.99}$ & 0.20$\pm$0.01 & 6.40\tnote{g} & 135$^{+45}_{-41}$ & 1.11$\pm$0.01 & 555/570\\
& 3 & 2.07$\pm$0.08 & 4.07$^{+4.35}_{-3.46}$ & 0.14$\pm$0.01 & 6.40\tnote{g} & 127$^{+100}_{-93}$ & 1.40$\pm$0.02 & 600/607\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] Photon index of the continuum. The uncertainties given in this table are at the 90 per cent confidence level for one parameter of interest.
\item[b] Column density of hydrogen.
\item[c] Covering factor.
\item[d] Peak energy of the Fe K$\alpha$ line.
\item[e] Equivalent width of the 6.4 keV Fe K$\alpha$ line.
\item[f] The 2-10 keV flux.
\item[g] Fixed parameter.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\begin{figure}
\centering
\includegraphics[trim={1.4cm 0 1.4cm 0},width=0.43\textwidth]{EW_vs_mass1.eps}
\caption{EW of the Fe K$\alpha$ line as a function of the SMBH mass for each galaxy. The red line is the linear regression obtained considering NGC 3227 with $\log_{10}(M_{\rm BH}/M_{\odot})$ of 7.3 (labeled as NGC 3227* in this figure) and the other four Seyfert 1 galaxies, while the blue line is the linear regression obtained considering NGC 3227 with $\log_{10}(M_{\rm BH}/M_{\odot})$ of 6.7 and the other four Seyfert 1 galaxies (see text).}\label{Massvsew}
\end{figure}
\begin{table}
\scriptsize
\centering
\caption{Parameters of the time-resolved spectral analysis.}
\label{parameters_sixgalaxies}
\begin{threeparttable}
\begin{tabular}{llcrcr}
\hline
Galaxy & Inter. & N$_{\rm H}$\tnote{a} & C$_{\rm F}$\tnote{b} & F/F'\tnote{c} & $\chi^2$/d.o.f.\\
& & ($\times$10$^{22}$ cm$^{-2}$) & & & \\
\hline
NGC 3783 &1A & 9.12$^{+1.60}_{-1.62}$& 0.82$\pm$0.03 & 0.61$\pm$0.01 & 397/409\\
&1B & 9.66$^{+0.95}_{-0.93}$& 0.79$\pm$0.04 & 0.69$\pm$0.01 & 400/463\\
&1C & 8.70$^{+0.98}_{-1.04}$& 0.81$\pm$0.02 & 0.62$\pm$0.01 & 383/389\\
&1D & 10.67$^{+0.86}_{-0.81}$& 0.82$\pm$0.01 & 0.64$\pm$0.01 & 380/408\\
&1E & 8.97$^{+0.66}_{-0.70}$& 0.85$\pm$0.02 & 0.87$\pm$0.02 & 529/559\\
&1F & 8.87$^{+0.60}_{-0.60}$& 0.87$\pm$0.01 & 0.97$\pm$0.01 & 593/565\\
&1G & 8.55$^{+0.64}_{-0.62}$& 0.83$\pm$0.01 & 0.94$\pm$0.01 & 536/557\\
&1H & 8.76$^{+1.31}_{-1.05}$& 0.86$\pm$0.02 & 0.95$\pm$0.01 & 507/555\\
\hline
Mrk 766&1A & 21.37$^{+6.19}_{-4.60}$& 0.67$\pm$0.02 & 0.37$\pm$0.02 & 124/108\\
&1B & 13.49$^{+4.67}_{-3.80}$& 0.67$\pm$0.03 & 0.40$\pm$0.02 & 109/106\\
&1C & 11.17$^{+2.45}_{-2.24}$& 0.76$\pm$0.03 & 0.46$\pm$0.02 & 124/121\\
&1D & 13.52$^{+2.54}_{-2.11}$& 0.68$\pm$0.02 & 0.66$\pm$0.02 & 192/174\\
&2A & 4.00$^{+1.87}_{-1.63}$& 0.63$\pm$0.26 & 0.68$\pm$0.02 & 230/191\\
&2B & 8.05$^{+3.40}_{-2.71}$& 0.52$\pm$0.05 & 0.74$\pm$0.02 & 197/194\\
&2C & 13.11$^{+5.93}_{-4.18}$& 0.50$\pm$0.04 & 0.44$\pm$0.02 & 142/153\\
&2D & 22.02$^{+6.35}_{-4.73}$& 0.74$\pm$0.02 & 0.37$\pm$0.02 & 118/90\\
\hline
NGC 3227&1A & 13.68$^{+5.26}_{-3.70}$& 0.36$\pm$0.02 & 0.82$\pm$0.02 & 374/407\\
&1B & 20.59$^{+12.30}_{-7.51}$& 0.30$\pm$0.02 & 0.75$\pm$0.01 & 304/353\\
&1C & 12.62$^{+5.80}_{-3.76}$& 0.35$\pm$0.02 & 0.70$\pm$0.01 & 371/351\\
&1D & 10.11$^{+3.80}_{-2.85}$& 0.36$\pm$0.03 & 0.78$\pm$0.01 & 380/371\\
&1E & 11.31$^{+3.59}_{-2.76}$& 0.40$\pm$0.02 & 0.78$\pm$0.01 & 359/371\\
&1F & 19.91$^{+4.51}_{-3.61}$& 0.29$\pm$0.02 & 0.93$\pm$0.02 & 371/407\\
&1G & 7.43$^{+9.80}_{-5.40}$& 0.11$\pm$0.02 & 1.04$\pm$0.02 & 443/434\\
&2A & 0.18$^{+0.27}_{-0.18}$& 0.22$\pm$0.28 & 0.94$\pm$0.01 & 479/474\\
&2B & 1.40$^{+5.60}_{-0.60}$& 0.27$\pm$0.13 & 0.85$\pm$0.01 & 456/446\\
&2C & 3.07$^{+6.01}_{-3.03}$& 0.17$\pm$0.03 & 0.66$\pm$0.01 & 388/394\\
&2D & 13.72$^{+3.33}_{-2.69}$& 0.34$\pm$0.02 & 0.60$\pm$0.01 & 378/364\\
&2E & 10.67$^{+4.35}_{-3.23}$& 0.25$\pm$0.03 & 0.56$\pm$0.01 & 335/346\\
&2F & 13.53$^{+2.58}_{-2.48}$& 0.25$\pm$0.02 & 0.63$\pm$0.01 & 365/374\\
&2G & 13.88$^{+7.88}_{-5.56}$& 0.32$\pm$0.04 & 0.57$\pm$0.01 & 294/318\\
&3A & 16.02$^{+2.79}_{-2.35}$& 0.33$\pm$0.02 & 0.73$\pm$0.01 & 471/469 \\
&3B & 12.00$^{+1.80}_{-1.52}$& 0.30$\pm$0.01 & 0.67$\pm$0.01 & 446/441 \\
&3C & 15.88$^{+7.33}_{-4.63}$& 0.28$\pm$0.01 & 0.62$\pm$0.01 & 395/427 \\
&3D & 37.52$^{+27.00}_{-14.00}$& 0.11$\pm$0.02 & 0.75$\pm$0.01 & 416/436 \\
&3E &30.33$^{+6.49}_{-4.99}$&0.22$\pm$0.01 & 0.95$\pm$0.01 & 557/515 \\
&3F & 20.03$^{+11.32}_{-7.19}$& 0.17$\pm$0.02& 1.11$\pm$0.01 & 599/545 \\
&3G & 10.52$^{+3.04}_{-2.27}$& 0.15$\pm$0.01 & 1.05$\pm$0.01 & 479/503\\
\hline
NGC 7314&1A & 2.67$^{+4.41}_{-2.02}$& 0.35$\pm$0.05 & 0.63$\pm$0.01 & 182/203\\
&1B & 9.28$^{+4.76}_{-3.35}$& 0.41$\pm$0.05 & 0.53$\pm$0.02 & 160/173\\
&1C & 4.93$^{+5.78}_{-3.78}$& 0.24$\pm$0.20 & 0.63$\pm$0.02 & 220/206\\
&1D & 3.97$^{+3.35}_{-2.64}$& 0.42$\pm$0.25 & 0.84$\pm$0.02 & 244/261\\
&1E & 4.47$^{+3.11}_{-2.50}$& 0.45$\pm$0.20 & 0.67$\pm$0.02 & 218/220\\
&1F & 0.60$^{+0.12}_{-0.12}$& 0.85$\pm$0.13 & 1.04$\pm$0.02 & 288/306\\
&1G & 1.00$^{+0.24}_{-0.23}$& 0.65$\pm$0.14 & 0.67$\pm$0.02 & 254/217\\
\hline
NGC 3516&3A & 10.09$^{+1.23}_{-1.12}$& 0.61$\pm$0.02 & 0.69$\pm$0.01 & 496/456\\
&3B & 7.64$^{+1.19}_{-1.09}$& 0.62$\pm$0.03 & 0.62$\pm$0.01 & 429/425\\
&3C & 6.78$^{+1.19}_{-1.08}$& 0.62$\pm$0.04 & 0.62$\pm$0.01 & 437/421\\
&3D & 9.54$^{+1.25}_{-1.13}$& 0.61$\pm$0.02 & 0.70$\pm$0.01 & 489/456\\
&3E & 7.24$^{+1.51}_{-1.63}$& 0.63$\pm$0.05 & 0.81$\pm$0.01 & 441/482\\
&3F & 5.71$^{+0.89}_{-0.84}$& 0.61$\pm$0.04 & 1.02$\pm$0.01 & 555/532\\
&3G & 8.43$^{+1.01}_{-0.92}$& 0.59$\pm$0.02 & 0.97$\pm$0.01 & 580/523\\
&3H & 6.96$^{+1.01}_{-0.92}$& 0.58$\pm$0.03 & 1.03$\pm$0.01 & 523/535\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] Column density of hydrogen. The uncertainties given in this table are at the 90 per cent confidence level for one parameter of interest.
\item[b] Covering factor.
\item[c] Relative flux, which is the 2-10 keV flux normalized to that estimated in the second, third, fourth, first, and fourth time-intervals for NGC 3783, Mrk 766, NGC 3227, NGC 7314 and NGC 3516, respectively.
\item Note: Mrk 279 does not show changes in the HR curves, so it does not appear in this table.
\end{tablenotes}
\end{threeparttable}
\end{table}
\begin{table*}
\centering
\caption{Derived physical parameters.}
\label{Table5}
\begin{threeparttable}
\begin{tabular}{ccrrrrrrrrrr}
\hline
Galaxy & Interval or & M$_{BH}$\tnote{a} & $T_4$\tnote{b} & $\Delta C_{\rm F}$\tnote{c} & V$_{\rm c}$\tnote{d} & D$_{\rm s}$\tnote{e} & D$_{\rm c}$\tnote{f} & n$_{\rm c}$\tnote{g} & r$_{\rm c}$\tnote{h} & r$_{\rm c}$\tnote{i}\\
& sub-interval &($\times$10$^{6}~M_{\odot}$) & ($\times$10$^4 s$) & & (km s$^{-1}$) & ($\times$10$^{13}$ cm) & ($\times$10$^{13}$ cm) & ($\times 10^{8}$cm$^{-3}$) & ($\times$10$^{15}$ cm) & (10$^4$ R$_{\rm g}$)\\
\hline
NGC 3783 & 1 & 12.0 & 12.0 & 0.09 & 750 & 28.5 & 104.0 & 0.8 & 286.0 & 8.0\\
\hline
Mrk 766 & 1 & 6.6 & 4.0 & 0.34 & 2420 & 11.2 & 33.7 & 3.3 & 15.2 & 0.8\\
& 2 & & 2.6 & 0.32 & 3612 & 8.4 & 18.5 & 5.8 & 6.8 & 0.3\\
\hline
NGC 3227 & 1 & 4.8 & 5.4 & 0.72 & 1897 & 12.4 & 28.0 & 4.3 & 18.0 & 1.2\\
& 2 & & 6.0 & 0.50 & 1423 & 13.3 & 21.9 & 3.5 & 32.0 & 2.2\\
& 3 & & 4.3 & 0.49 & 1966 & 10.6 & 18.5 & 9.0 & 16.8 & 1.2\\
\hline
NGC 7314 & 1 & 0.9& 1.3 & 0.42 & 1122 & 2.7 & 7.6 & 2.6 & 9.6 & 3.6\\
\hline
NGC 3516 & 3 & 25.1 & 5.5 & 0.08 & 3227 & 21.7 & 67.7 & 1.2 & 32.3 & 0.4\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] The SMBH masses are taken from \cite{2015PASP..127...67B} except for NGC 7314 whose SMBH mass is taken from \cite{Emma2016}.
\item[b] The occultation time.
\item[c] The covering factor variation during the eclipse.
\item[d] Velocity of the eclipsing cloud. This value is a lower limit for the velocity.
\item[e] Size of the central X-ray source.
\item[f] Size of the eclipsing cloud.
\item[g] Particle density of the eclipsing cloud.
\item[h] Distance between the X-ray source and the obscuring cloud.
\item[i] Distance between the X-ray source and the obscuring cloud in units of R$_{\rm g}$.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\begin{figure*}
\begin{subfigure}{.3\textwidth}
\includegraphics[width=1\textwidth,trim=4.2cm 0 4.2cm 0,clip]{HR_NGC3783.eps}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\includegraphics[width=1\textwidth,trim=4cm 0 4cm 0,clip]{HR_Mrk766.eps}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\includegraphics[width=1\textwidth,trim=3.6cm 0 3.6cm 0,clip]{HR_NGC7314.eps}
\end{subfigure}
\caption{\textbf{Left panel:} the hardness-ratio, relative flux, column density, and covering factor go from the top to the bottom for NGC 3783 in the first time-interval. The relative flux is the 2-10 keV flux normalized to that estimated in the second interval. \textbf{Middle panel:} as in the left panel but for Mrk 766 in the first time-interval and the relative flux is normalized to that in the third interval. \textbf{Right panel:} as in the left panel but for NGC 7314 in the second time-interval and the relative flux is normalized to that in the first time-interval.}\label{pargalaxy1}
\end{figure*}
\begin{figure*}
\begin{subfigure}{.3\textwidth}
\includegraphics[width=1\textwidth,trim=4cm 0 4cm 0,clip]{HR_NGC3227_1.eps}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\includegraphics[width=1\textwidth,trim=4cm 0 4cm 0,clip]{HR_NGC3227_2.eps}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\includegraphics[width=1\textwidth,trim=3.8cm 0 3.8cm 0,clip]{HR_NGC3227_3.eps}
\end{subfigure}
\caption{\textbf{Left panel:} the hardness-ratio, relative flux, column density, and covering factor go from the top to the bottom for NGC 3227 in the first time-interval. The relative flux is the 2-10 keV flux normalized to that estimated in the fourth interval. \textbf{Middle panel:} as in the left panel but for the second time-interval. \textbf{Right panel:} as in the left panel but for the third time-interval.}\label{pargalaxy2}
\end{figure*}
\begin{figure}
\centering
\includegraphics[trim=4.3cm 0 4.6cm 0,clip,width=0.3\textwidth]{HR_NGC3516_3.eps}
\caption{From the top to the bottom: the hardness-ratio, relative flux, column density and covering factor for NGC 3516 in the third time-interval. The relative flux is the 2-10 keV flux normalized to that estimated in the fourth time-interval.}\label{parNGC3516}
\end{figure}
\begin{figure*}
\begin{subfigure}{.33\textwidth}
\includegraphics[width=1\textwidth]{NGC3783_cont_int1.eps}
\caption{NGC 3783}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[width=1\textwidth]{Mrk766_cont_int1.eps}
\caption{Mrk 766}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[width=1\textwidth]{NGC7314_cont_int1.eps}
\caption{NGC 7314}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[width=1\textwidth]{NGC3227_cont_int1.eps}
\caption{NGC 3227}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[width=1\textwidth]{NGC3227_cont_int2.eps}
\caption{NGC 3227}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\includegraphics[width=1\textwidth]{NGC3516_cont_int3.eps}
\caption{NGC 3516}
\end{subfigure}
\caption{Contour plots of $N_{\rm H}$ versus $C_{\rm F}$ for several representative intervals given in Table \ref{parameters_sixgalaxies}. The red contours show the 90 percent confidence level, while the black contours show the 68 percent confidence level. Labels refer to the time sub-intervals as defined in Table \ref{parameters_sixgalaxies}.}\label{NhvsCf_contour}
\end{figure*}
\section{Conclusions}\label{Conclusion}
We carried out the spectral and temporal analysis of X-ray data of six galaxies observed with the \textit{XMM-Newton} telescope, which was needed for deriving the physical parameters of clouds eclipsing the central X-ray source in five of the six galaxies. Using the hardness-ratio light curves, we identified occultation events towards the central regions of NGC 3783, NGC 3227, NGC 7314, and NGC 3516, as well as corroborated the occultation events in Mrk 766.
The physical sizes of the central X-ray sources ($\sim$(3-28)$\times$10$^{13}$ cm) are smaller than those of the eclipsing clouds; thus a single cloud can block the X-ray source and absorb the X-ray spectrum.
The eclipsing clouds in Mrk 766, NGC 3227, NGC 7314, and NGC 3516 have large column densities ($\sim$10$^{22}$-10$^{23}$ cm$^{-2}$) and are located at distances of $\sim$(0.3-3.6)$\times$10$^4$ $R_{\rm g}$, typical of BLR clouds, leading to the pronounced temporal variability of the X-ray flux during the time that a cloud is crossing our line of sight. On the other hand, the cloud obscuring the X-ray source in NGC 3783 is likely located in the dusty torus.
We see that the covering factor changes from object to object and that the existence of intervening clouds is a common feature in AGNs. The gas in the BLR, located in the vicinity of the black hole, is moving at Keplerian velocities of $>$1122 km s$^{-1}$ (excluding the velocities derived for NGC 3783, because toward this source the estimate of the cloud velocities may be biased).
We found a good anti-correlation, with a slope of -187$\pm$62, between the known SMBH masses and the EW of the 6.4 keV Fe line for the five Seyfert 1 galaxies, considering $\log_{\rm 10}(M_{\rm BH}/{\rm M}_{\odot})$=7.3 for NGC 3227 in our statistical analysis.
The average value of the EW of NGC 7314, a Seyfert 2 type, does not agree with the SMBH mass-EW relation found for the five Seyfert 1 galaxies, supporting previous results \citep{2014MNRAS.441.3622R}.
\section*{Acknowledgements}
This research was based on observations obtained with \textit{XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
We acknowledge the use of the SAS software, developed by ESA's Science Operations Centre staff. This research has made use of the non-linear least-squares fitting package lmfit (https://lmfit.github.io/lmfit-py/intro.html), as well as the routine pearsonr of SciPy (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html).
We thank the anonymous referees for their helpful comments and suggestions, which considerably improved the manuscript.
\section*{Data Availability}
The X-ray data used in this study are accessible from the \textit{XMM-Newton} online archive\footnote{http://nxsa.esac.esa.int/nxsa-web/\#search}.
\bibliographystyle{mnras}
\section{Approach}
\label{sec:approach}
We now present our approach.
\subsection{Generating video foreground proposals}\label{sec:proposals}
\textcolor{black}{Existing propagation-based video segmentation methods rely on human input
(a bounding box, contour, or scribble) at the onset to generate results.}
The key idea behind our Click Carving approach is to
flip this process. \textcolor{black}{Instead of the human annotator providing a foreground
region from scratch, the system generates many plausible segmentation mask hypotheses and
the annotator efficiently navigates to the best ones with point clicks.}
Specifically, we use state-of-the-art region proposal generation algorithms
to generate 1000s of possible foreground segmentations for the first video
frame.\footnote{\textcolor{black}{For clarity of presentation, we describe the process as
always propagating from the first annotated frame. However, the system can
be initialized from arbitrary frames.}} Region proposal methods aim to obtain high recall at the cost of low precision.
Even though this guarantees that at least a few of these segmentations will
be of good quality, it is difficult to filter out the best ones automatically
with existing techniques.
To generate accurate region proposals in videos, we use the multiscale
combinatorial grouping (MCG) algorithm~\cite{APBMM2014} with both static and
motion boundaries. The original algorithm uses image boundaries to obtain a
hierarchical segmentation, followed by a grouping procedure to obtain
region-based foreground object proposals. The video datasets that we use in
this work have both static and moving objects. We observed that due to
factors such as motion blur, static image boundaries are not very reliable
in many cases. On the other hand, optical flow provides a strong cue about
an object's contours while the object is in motion. Hence we also use motion
boundaries~\cite{weinzaepfel:hal-01142653} to generate per-frame motion
region proposals using MCG. The two sources are complementary in nature: for
static objects, the per-frame region proposals obtained using static
boundaries will be more accurate, and vice versa.
Figure~\ref{fig:preprocess} illustrates this with an example. Both the person
and bike (Figure \ref{fig:preprocess}a) are in motion. As a result, we get
weaker static boundaries (Figure \ref{fig:preprocess}b). Figure
\ref{fig:preprocess}c shows the best static proposal for each object; the
proposal quality for the bike is very poor. On the other hand, the motion
boundaries (Figure \ref{fig:preprocess}d) are much stronger and result in
very accurate proposals for both the person and the bike (Figure
\ref{fig:preprocess}e).
In summary, given a video frame, we generate the set
of foreground region proposals ($\mathcal{M}$) for it by taking the union
of the static region proposals ($\mathcal{M}_{static}$) and the motion
region proposals ($\mathcal{M}_{motion}$), i.e., $\mathcal{M} =
\{\mathcal{M}_{static} \cup \mathcal{M}_{motion}\}$. On average we generate a
total of about 2000 proposals per frame, resulting in a very high overall
recall. \textcolor{black}{(The Mean Average Best Overlap (MABO) score is 78.3 on the three datasets that we use.
This is computed by selecting the proposal with the highest overlap score in each frame and taking a dataset-wide average.)} \textcolor{black}{In what follows, we explain how Click
Carving allows a user to efficiently navigate to the best proposal among
these thousands of candidates.}
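For concreteness, the MABO computation described above amounts to the following sketch; the helper names are illustrative only, not taken from our implementation.
\begin{verbatim}
import numpy as np

def iou(mask_a, mask_b):
    # Intersection-over-union of two boolean masks.
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

def mabo(frames):
    # frames: iterable of (gt_mask, proposals) pairs over the dataset.
    # Keep the best-overlapping proposal per frame, then average.
    best = [max(iou(gt, p) for p in proposals) for gt, proposals in frames]
    return float(np.mean(best))
\end{verbatim}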
\begin{figure*}[t]
\centering
\scriptsize
\begin{tabular}{cx{3mm}cccccc}
{\bf User Click} & & {\bf ContourMap} &
\multicolumn{5}{c}{\bf Top-5 ranked proposals} \\
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_image_1.jpeg} &
&
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_wt_image_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_1_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_1_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_1_3.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_1_4.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_1_5.jpeg} \\
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_image_2.jpeg} &
&
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_wt_image_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_2_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_2_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_2_3.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_2_4.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.12]{images/cat002_07_1_prop_2_5.jpeg} \\
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_image_1.jpeg} &
&
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_wt_image_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_1_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_1_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_1_3.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_1_4.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_1_5.jpeg} \\
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_image_2.jpeg} &
&
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_wt_image_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_2_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_2_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_2_3.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_2_4.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_2_5.jpeg} \\
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_image_3.jpeg} &
&
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_wt_image_3.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_3_1.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_3_2.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_3_3.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_3_4.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.11]{images/soldier_prop_3_5.jpeg} \\
\end{tabular}
\caption{Click Carving based foreground segmentation. Best viewed on pdf. See text for details.}
\label{fig:approach}
\vspace{4pt}
\end{figure*}
\subsection{Click Carving for discovering an object mask}
The region proposal step yields a large set of segmentation hypotheses
(1000s), out of which only a few are very accurate object segmentations.
\textcolor{black}{A naive approach that asks an annotator to manually scan through all
proposals is both tedious and inefficient.} We now explain how our Click
Carving algorithm effectively and very quickly identifies the quality
segmentations. We show that within a few clicks, it is possible to obtain a
very high quality segmentation of the desired object of interest. \textcolor{black}{We
stress that while Click Carving assists in getting the mask for a single
frame, it is closely tied to the video source due to the motion-based
proposals.}
At a high level, our Click Carving algorithm converts the user clicks into
votes cast for the underlying region proposals. The user initiates the
algorithm by clicking somewhere on the boundary of the object of interest.
This click casts a vote for all the proposals whose boundaries also
\textcolor{black}{(nearly)} intersect with the user click. Using these votes, the
underlying region proposals are re-ranked and the user is presented with the top-$k$ proposals having the highest votes.
\textcolor{black}{This process of clicking and re-ranking iterates. At any time,} the user
can choose any of the top-$k$ as the final segmentation if he/she is
satisfied, or he/she can continue to re-rank by clicking and casting more
votes.
More specifically, we characterize each proposal, $\mathcal{M}_j \in
\mathcal{M}$ with the following four components
($\mathcal{M}_j^m,\mathcal{M}_j^e,\mathcal{M}_j^s,\mathcal{M}_j^v$):
\begin{itemize}
\item Segmentation mask ($\mathcal{M}_j^m$): This quantity represents the
actual region segmentation mask obtained from the MCG region proposal
algorithm.
\item Contour mask ($\mathcal{M}_j^e$): Our algorithm requires the user to
click on the object boundaries, which as we will show later is much more
discriminative than clicking on interior points and results in a much faster
filtering of good segmentations. To infer the votes on the boundaries, we
convert the segmentation mask $\mathcal{M}_j^m$ into a contour mask. This
contour mask only contains the boundary pixels from $\mathcal{M}_j^m$. For
error tolerance, we dilate the boundary mask by 5 pixels on either side. This
reduces the sensitivity to the exact user click location, \textcolor{black}{which need not
coincide exactly with the mask boundary.}
\item Objectness score ($\mathcal{M}_j^s$): We use the objectness score from
the MCG algorithm~\cite{APBMM2014} to break ties if multiple region proposals get
the same number of votes. \textcolor{black}{This score reflects the likelihood of a given region to be an accurate object segmentation.}
\item User votes ($\mathcal{M}_j^v$): This quantity represents the total
number of user votes received by a particular proposal at any given time.
\textcolor{black}{It is initialized to 0.}
\end{itemize}
As a first step, we begin by computing a lookup table which allows us to
efficiently account for the votes cast for each proposal by the user. Let $n$
be the total number of pixels in a given image and $m$ be the total number of
region proposals generated for that image. We define and precompute a lookup
table $\mathcal{T} \in \{0,1\}^{n \times m}$ as follows:
\begin{equation}
\mathcal{T}(i,j) =
\left\{
\begin{array}{ll}
\vspace{5pt}
1 & \text{ if } \mathcal{M}_j^e(i)=1 \\
0 & \text{ otherwise},
\end{array}
\right. \label{eq:lut} \end{equation} where $i$ denotes a particular pixel
and $j$ denotes a particular region proposal.
When the user clicks at a particular pixel location $c$, the votes for each
of the region proposals are updated as follows: \begin{equation}
\mathcal{M}_j^v = \mathcal{M}_j^v + \mathcal{T}(c,j).
\label{eq:vots}
\end{equation}
The updated set of votes is used to re-rank all the region proposals. The
proposals with equal votes are ranked in the order of their objectness
scores. This interactive re-ranking procedure continues until the user is
satisfied with any of the top-$k$ proposals and chooses that as the final
segmentation. \textcolor{black}{In our implementation, $k$ is set such that $k$ copies of
the image, one proposal on each, fit easily on one screen
($k=9$).}
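The bookkeeping is lightweight; one possible implementation of the lookup table in Eq.~(\ref{eq:lut}), the vote update in Eq.~(\ref{eq:vots}), and the vote-then-objectness ranking is sketched below (helper names are illustrative only, not from our implementation).
\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_dilation

def build_lookup(contour_masks):
    # contour_masks: (m, H, W) boolean proposal boundaries. Dilate each
    # boundary for click tolerance, then flatten to T of shape (n, m),
    # with n = H*W pixels and m proposals.
    m, H, W = contour_masks.shape
    dil = np.stack([binary_dilation(c, iterations=5) for c in contour_masks])
    return dil.reshape(m, H * W).T.astype(np.uint8)

def add_click(T, votes, c):
    # A boundary click at flat pixel index c adds one vote to every
    # proposal whose (dilated) contour passes through c.
    return votes + T[c]

def rank(votes, objectness, k=9):
    # Sort by votes; break ties with the MCG objectness score.
    order = np.lexsort((objectness, votes))[::-1]
    return order[:k]
\end{verbatim}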
Figure~\ref{fig:approach} illustrates our user interface and explains this
process with two examples. We show the user interaction on the leftmost
column. Red circles denote clicks. The ``ContourMap" column shows the average contour map of the top-5 ranked proposals after the user click. \textcolor{black}{Here the colors are a heat-map
coding of the number of votes for a boundary fragment.}
Remaining columns show the top-5 ranked proposals.
The top two rows show an example frame from a ``cat" video in the
iVideoSeg~\cite{Nagaraja_2015_ICCV} dataset. The user\footnote{\textcolor{black}{We discuss
our user study in the next section.}} places the first click on the
left side of the object (top left image). We see that the resulting top
ranked proposals (5 foreground images in top row) align well to the current
user click, \textcolor{black}{meaning they all contain a boundary near the click point.}
The average contour map of these top ranked proposals, informs the user
about areas that have been carved well already (red lines) and which areas
may need more attention (blue lines, \textcolor{black}{or contours on the true object that
remain uncolored}). The user observes that \textcolor{black}{most} current top-$k$
segmentations are missing the cat's right leg and decides to place the next
click there (second row, leftmost image). The next ranking of the proposals
brings up segmentations which cover the entire object accurately.
In the next example, we consider a frame from the ``soldier" video in the
Segtrack-v2 dataset~\cite{rehg-iccv2013}. The user decides to place a click
on the right side of the object (third row, leftmost image). This click
itself retrieves a very good segmentation for the soldier. However, to
explore further, the user continues by making more clicks. Each new
constraint eliminates the bad proposals from the previous step, and after
just 3 clicks, all the top-ranked proposals are of good quality.
\subsection{User clicking strategies}
\label{sec:users}
To quantitatively evaluate Click Carving, we employ both real human
annotators and simulated users with different clicking strategies. \textcolor{black}{We
design a series of clicking strategies to simulate, each of which represents
a hypothesis for how a user might efficiently convey which object boundaries
remain missing in the top proposals. While real users are arguably the best
way to judge final impact of our system (and so we use them), the simulated user models are
complementary. They allow us to run extensive trials and to see at scale
which strategies are most effective.} Simulated human users have also been
studied in interactive segmentation for brush stroke
placement~\cite{kohli-ijcv}.
We categorize the user models into three groups: human annotators, boundary
clickers, and interior clickers.
\vspace{2pt}
{\bf Human annotators:} We conduct a user study to analyze the
performance of our method by recruiting \textcolor{black}{3} human annotators to work on
each image. \textcolor{black}{The 3 annotators included a computer vision student and 2 non-expert users.} The human annotators were encouraged to click on object boundaries,
while observing the current best segmentations. \textcolor{black}{They were also given some time to familiarize themselves with the interface, before starting the actual experiments.} They had a
choice to stop by choosing one of the segmentations among the top ranked ones
or continue clicking to explore further. A maximum budget of 10 clicks was
used to limit the total annotation time. The target object was indicated to
them before starting the experiment. In the case of multiple objects, each
object was chosen as the target object in a sequential manner. We recorded
the number of clicks, time spent, and the best object mask chosen by the user
during each segmentation. The user corresponding to the median number of
clicks is used for our quantitative evaluation.
\vspace{2pt}
{\bf Boundary clickers:} We design three simulated users which operate by
clicking on object boundaries. To simulate these artificial users, we make
use of the ground-truth segmentation mask of the target object. Equidistant
points are sampled from the ground truth object contour to define object
boundaries. Each simulated boundary clicker starts from the same initial
point. We use principal component analysis (PCA) on the ground truth shape to
find the axis of maximum shape variation. We consider a ray from the centroid
of the object mask along the direction of this principal axis. The furthest
point on the object boundary where this ray intersects is chosen as the
starting point. The three boundary clickers that we design differ in how they
make subsequent clicks from this starting point. They are: \vspace{5pt}\\ (a)
\emph{\textbf{Uniform clicker:}} To obtain uniformly spaced clicks, we divide
the total number of boundary points by the maximum click budget to obtain a
fixed distance interval $d$. Starting from the initial point and walking
along the boundary, a click is made every $d$ points apart from the previous
click location. \vspace{5pt}\\ (b) \emph{\textbf{Submod clicker:}} The uniform user has a
high level of redundancy, since it clicks at locations which are still close
to the previous clicks; hence the gain in information between two consecutive
clicks might be small. Next we design a boundary clicker that tries to impact
the maximum boundary region with each subsequent clicks. This is done by
placing the click at a boundary point which is furthest away from its nearest
user click among all boundary points. This resembles the sub-modular subset
selection problem\textcolor{black}{~\cite{krause07nectar}}, where one tries to maximize the set
coverage while choosing a subset. We employ a greedy algorithm to find the
next best point.\vspace{5pt}\\ (c) \emph{\textbf{Active clicker:}} The previous two
methods only looked at the ground truth segmentation to devise a click
strategy, without taking into account the segmentation performance after each
click is added. Our active clicking strategy takes into account the current
best segmentation among the top-$k$ (vs.~the ground truth) and uses that to
make the next click decision. It is
similar in design to the Submod user, except that it skips those
boundary points which have already been labeled correctly by the top-ranked
proposal. We find that this active simulated user comes the closest in
mimicking the actual human annotators (see results for details).
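The greedy farthest-point rule behind the Submod clicker (and, with the additional skipping of already-covered boundary points, the Active clicker) can be sketched as follows; the function is illustrative rather than an exact transcription of our implementation.
\begin{verbatim}
import numpy as np

def next_click(boundary_pts, previous_clicks):
    # boundary_pts: (B, 2) points sampled on the ground-truth contour.
    # previous_clicks: (C, 2) clicks made so far (C >= 1).
    # Return the boundary point farthest from its nearest previous click.
    d = np.linalg.norm(boundary_pts[:, None, :] -
                       previous_clicks[None, :, :], axis=-1)   # (B, C)
    return boundary_pts[np.argmax(d.min(axis=1))]
\end{verbatim}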
\vspace{2pt}
{\bf Interior clickers:} A novel insight of our method is the discriminative
nature of boundary clicks. In contrast, the default behavior and \textcolor{black}{previous
user models~\cite{Wang201414,Bearman15}} assume that a click in the interior of the
object is well suited. To examine this contrast empirically, our final
simulated user clicks on interior object points. To simulate interior
clicks, we uniformly sample object pixel locations from the entire ground
truth segmentation mask (up to the maximum click budget) and then
sequentially place clicks on the object of interest.
We analyze the impact of the click strategies in our results section.
\subsection{Propagating the mask through the video}
\textcolor{black}{Having discovered a good object mask using Click Carving in the initial frame, the next step is to propagate this segmentation to all other frames in the video. We use the foreground propagation method of~\cite{suyog-eccv2014} as our segmentation method, primarily due to its good performance and efficiency, using code from the authors. We also tried other methods such as~\cite{Wen_2015_CVPR} but found~\cite{suyog-eccv2014} to be the most scalable for large experiments. In its original form the method requires a human-drawn object outline in the initial frame. We instead initialize the method using the region proposal that was selected using Click Carving. This initial mask is then propagated to the entire video to obtain the final segmentation. We computed the supervoxels required by~\cite{suyog-eccv2014} using~\cite{grundmann-cvpr2010} and use the default parameter settings.
}
\section{Conclusions}
\section{Introduction}\label{sec:introduction}
Video object segmentation entails computing a pixel-level mask for an object(s) across the frames of video, regardless of that object's category. Analogous to image segmentation, which produces a 2D map delineating the object's spatial region (``blob"), video segmentation produces a 3D map delineating the object's spatio-temporal extent (``tube"). The problem has received substantial attention in recent years, with methods ranging from wholly unsupervised bottom-up approaches~\cite{grundmann-cvpr2010,xu-eccv2012,galasso-accv2012}, to propagation methods that exploit user input on the first frame~\cite{ren-cvpr2007,tsai-bmvc2010,cipolla-cvpr2010,fathi-bmvc2011,sudheendra-eccv2012,suyog-eccv2014,Wen_2015_CVPR}, to human-in-the-loop methods where a user closely guides the system towards a good segmentation output~\cite{wang-tog2005,li-tog2005,price-iccv2009,bai-2009,Nagaraja_2015_ICCV}.
Successful video segmentation algorithms have potential for significant impact on tasks like activity and object recognition, video editing, and abnormal event detection.
Despite very good progress in the field, it remains challenging to collect quality video segmentations at a large scale. The system is expected to segment objects for which it may have no prior model, and the objects may move quickly and change shape or appearance over time---or (often even worse) never move with respect to the background. To scale up the ability to generate well-segmented data, \emph{human-in-the-loop} methods that leverage minimal human input are appealing~\cite{ren-cvpr2007,tsai-bmvc2010,cipolla-cvpr2010,fathi-bmvc2011,vondrick-nips2011,sudheendra-eccv2012,suyog-eccv2014,Wen_2015_CVPR,wang-tog2005,li-tog2005,price-iccv2009,bai-2009,Nagaraja_2015_ICCV}. \textcolor{black}{These methods benefit greatly from combining the respective strengths of humans and machines}. They can reserve for the human the more difficult high-level job of identifying a true object, while reserving for the algorithm the more tedious low-level job of propagating that object's boundary over time. \textcolor{black}{This synergistic interaction between humans and computers results in accurate segmentations with minimal human effort.}
Critical to the success of an interactive video segmentation algorithm is how the user interacts with the system. In particular, how should the user indicate the spatial extent of an object of interest in video? Existing methods largely rely on the tried-and-true interaction modes used for image labeling; namely, the user draws a bounding box or an outline around the object of interest on a given frame, and that region is propagated through the video either indefinitely or until it drifts~\cite{ren-cvpr2007,tsai-bmvc2010,fathi-bmvc2011,sudheendra-eccv2012,suyog-eccv2014,Wen_2015_CVPR}. Furthermore, regardless of the exact input modality, the common assumption is to get the user's input \emph{first}, and then generate a segmentation hypothesis thereafter. In this sense, in video segmentation propagation, information flows first from the user to the system.
We propose to reverse this standard flow of information. Our idea is for the system itself to first hypothesize plausible object segmentations in the given frame, and then allow the human user to efficiently and interactively prioritize those hypotheses.
Such an approach stands to reduce human annotation effort, since the user can use very simple feedback to guide the system to its best hypotheses.
To this end, we introduce \emph{Click Carving}, a novel method that uses point clicks to obtain a foreground object mask for a video frame. Clicks, largely unexplored for video segmentation, are an attractive input modality due to their ease, speed, and intuitive nature (e.g., with a touch screen the user may simply point a finger). Our method works as follows. First, the system precomputes thousands of mask hypotheses based on object proposal regions. Importantly, those object proposal regions exploit both image coherence cues as well as motion boundaries computed in the video. Then, the user efficiently navigates to the best hypotheses by clicking on the boundary of the true object and observing the refined top hypotheses.
Essentially, the user's clicks ``carve'' away erroneous hypotheses whose boundaries disagree with the clicks. By continually revising its top-rated hypotheses, the system implicitly guides the user where input is most needed next. After the user is satisfied, or the maximal budget of clicks is exhausted, the system propagates the best mask hypothesis through the video with an existing propagation algorithm. For videos in the three datasets we tested, only \textcolor{black}{2-4} clicks are typically required to accurately segment the entire clip. Note that our novel idea is not so much the ``clicking'' interface itself; rather, our contribution centers on simple point supervision as a sufficient cue for semi-automatic segmentation and on the carving backend that efficiently discerns the most reliable proposals.
Aside from testing our approach with real users, we also develop several simulated user clicking models in order to systematically analyze the relative merits of different clicking strategies. For example, is it more effective to click in the object center, or around its perimeter? How should multiple clicks be spaced? Is it advantageous to place clicks in reaction to where the system currently has the greatest errors? One interesting outcome of our study is that the behavior one might assume as a default---clicking in the object's interior~\cite{Bearman15,Wang201414}---is much less effective than clicking on its boundaries. We show that boundary clicks are better able to discriminate between good and bad object proposal regions.
The results show that Click Carving strikes an excellent balance of accuracy and human effort. It is faster (requires less annotation interaction) than most existing interactive methods, yet produces better results. In extensive comparisons with state-of-the-art methods on \textcolor{black}{three challenging datasets, we show that Click Carving outperforms all similarly fast methods, and is competitive or better than those requiring 2 to 12 times the effort.} \textcolor{black}{This large reduction in annotation time through effective use of human interaction naturally leads to large savings in annotation costs. Because of the ease with which our framework can assist even non-experts in making high quality annotations, it has great promise for scaling up video segmentation. Ultimately such tools are critical for accelerating data collection in several research communities (e.g., computer vision, graphics, medical imaging), where large-scale spatio-temporal annotations are lacking and/or often left to experts.}
\section{Related Work}
\begin{figure*}[t]
\centering
\captionsetup{width=\textwidth, font={small}, skip=2pt}
\begin{tabular}{ccccc}
\includegraphics[keepaspectratio=true,scale=0.135]{images/bmx_image.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.135]{images/bmx_static_boundary.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.135]{images/bmx_static_prop.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.135]{images/bmx_motion_boundary.jpeg} &
\includegraphics[keepaspectratio=true,scale=0.135]{images/bmx_motion_prop.jpeg} \\
{\scriptsize (a) Image} & {\scriptsize (b) Static Boundaries} &{\scriptsize (c) Static Proposals} & {\scriptsize (d) Motion Boundaries} & {\scriptsize (e) Motion Proposals} \\
\end{tabular}
\caption{Generation of object region proposals using both static and dynamic cues. Best viewed on pdf. }
\label{fig:preprocess}
\end{figure*}
Unsupervised video segmentation methods use no human input, and typically produce an over-segmentation of the video that is useful for mid-level grouping. Supervoxel methods find space-time blobs cohesive in color and/or motion~\cite{grundmann-cvpr2010,xu-eccv2012,galasso-accv2012}, while point trajectory approaches find consistent motion threads beyond optical flow~\cite{brox-eccv2010,lezama-cvpr2011}. Unlike unsupervised work, we consider interactive video segmentation and our method produces spatio-temporal tubes covering the extent of the complete object.
Other methods extract ``object-like" segments in video~\cite{keysegments,ma-cvpr2012,shah-cvpr2013,rehg-iccv2013,ferrari-iccv2013}, typically by learning the category-independent properties of good regions, and employing some form of tracking. Related are the methods that generate a large number of bounding box or region \emph{proposals}~\cite{Wu_2015_CVPR,Fragkiadaki_2015_CVPR,Yu_2015_CVPR,oneata}, an idea originating in image segmentation~\cite{cpmc,APBMM2014}. The idea is to maintain high recall for the sake of downstream processing. As such, these methods typically produce many segmentation hypotheses, \textcolor{black}{100s to 1000s for today's popular datasets}. To adapt them for the object segmentation problem would require human inspection to select the best one, which is non-trivial once the video contains more than a handful. Our approach makes use of object proposals, but our idea to prioritize them with Click Carving is entirely new. \textcolor{black}{We are the first to propose using proposals for the sake of speeding up interactive segmentation, whether for images or videos.}
Semi-automatic video segmentation methods accept manually labeled frame(s) as input, and propagate the annotation to the remaining video clip~\cite{ren-cvpr2007,tsai-bmvc2010,cipolla-cvpr2010,fathi-bmvc2011,sudheendra-eccv2012,suyog-eccv2014,Wen_2015_CVPR}. Often the model consists of an MRF over (super)pixels or supervoxels, with both spatial and temporal connections. Some systems
actively guide the user in how to annotate~\cite{fathi-bmvc2011,vondrick-nips2011,sudheendra-eccv2012,karasevRS14}.
All the prior methods assume that initialization starts with the user, and all use traditional modes of user input (bounding boxes, scribbles, or outlines). In contrast, we explore the utility of clicks for video segmentation, and we propose a novel, interactive way to perform system-initiated initialization. We show our approach achieves comparable performance to drawing complete outlines, but with much less annotation effort. Following our novel user interaction stage, we rely on an existing propagation method and make no novelty claims about how the propagation stage itself is done.
\textcolor{black}{Human-in-the-loop systems have proved to be very useful in diverse computer vision tasks such as training object detectors~\cite{tropel}, counting objects~\cite{jellybean} etc. Interactive video segmentation methods also leverage user input, but unlike the above propagation methods, the user is always in the loop and engages in a back and forth until the video is adequately segmented~\cite{wang-tog2005,bai-2009,Nagaraja_2015_ICCV}. } In all existing methods, the user guides the system to generate a segmentation hypothesis, and then iteratively corrects mistakes by providing more guidance. In contrast, Click Carving pre-generates thousands of possible hypotheses and then employs user guidance to efficiently filter high quality segmentations from them. \textcolor{black}{Our approach could potentially be used in conjunction with many of these systems as well, to reduce the interaction effort.} Compared to propagation methods, the interactive methods usually have the advantage of greater precision, but at the disadvantages of greater human effort and less amenability to crowdsourcing.
Only limited work explores click supervision for image and video annotation. Clicks on objects in images can remove ambiguity to help train a CNN for semantic segmentation from weakly labeled images~\cite{Bearman15}, or to spot object instances in images for dataset collection~\cite{LinECCV14coco}. Clicks on patches are used to obtain ground truth material types in~\cite{bell15minc}.
We are aware of only two prior efforts in video segmentation using clicks, and their usage is quite different than ours. In one, a click and drag user interaction is used to segment objects~\cite{PFS2015}. A small region is first selected with a click, then dragged to traverse up in the hierarchy until the segmentation does not bleed out of the object of interest. Our user-interaction is much simpler (just a few mouse clicks or taps on the touchscreen) and our boundary clicks are discriminative enough to quickly filter good segmentations. In the other, the TouchCut system uses a single touch to segment the object using level-set techniques~\cite{Wang201414}. However, the evaluation is focused on image segmentation, with only limited results on video; our approach outperforms it.
\section{Results}\label{sec:results}
\subsection{Datasets and metrics}
We evaluate on \textcolor{black}{3} publicly available datasets: Segtrack-v2~\cite{rehg-iccv2013}, VSB100~\cite{Sundberg,NB13} and iVideoSeg~\cite{Nagaraja_2015_ICCV}. For evaluating segmentation accuracy we use the standard intersection-over-union (IoU) overlap metric between the predicted and ground-truth segmentations. A brief overview of the datasets:
\begin{itemize}
\item {\bf SegTrack v2 ~\cite{rehg-iccv2013}:} the most common benchmark to evaluate video object segmentation. It consists of 14 videos with a total of 24 objects and 976 frames. Challenges include appearance changes, large deformation, motion blur etc. Pixel-wise ground truth (GT) masks are provided for every object in all frames.
\vspace{2pt}
\item {\bf Berkeley Video Segmentation Benchmark (VSB100) ~\cite{Sundberg,NB13}:} consists of 100 HD sequences with multiple objects in each video. We use the ``train" subset of this dataset in our experiments, for a total of 39 videos and \textcolor{black}{4397} frames. This is a very challenging dataset; interacting objects and small object sizes make it difficult to segment and propagate. We use the GT annotations of multiple foreground objects provided by~\cite{PFS2015} on every 20th frame.
\vspace{2pt}
\item {\bf iVideoSeg ~\cite{Nagaraja_2015_ICCV}:} This new dataset consists of 24 videos from 4 different categories (car, chair, cat, dog). Some videos have viewpoint changes and others have large object motions. GT masks are available for 137 of all 11,882 frames.
\end{itemize}
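For reference, the IoU metric can be computed from per-frame binary masks as in the following minimal sketch (an illustration using NumPy; it is not the actual evaluation code, and the example masks are arbitrary):
\begin{verbatim}
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection-over-union between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, gt).sum() / union

# Two 100x100 masks with partial overlap.
a = np.zeros((100, 100), dtype=bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), dtype=bool); b[30:70, 30:70] = True
print(iou(a, b))  # ~0.39
\end{verbatim}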
\subsection{Methods for comparison}
We compare with state-of-the-art \textcolor{black}{methods~\cite{ferrari-iccv2013,Nagaraja_2015_ICCV,Wen_2015_CVPR,keysegments,Wang201414,grundmann-cvpr2010,rehg-iccv2013,suyog-eccv2014,godec11a}} and our own baselines. Below we organize them into 6 groups based on the amount of human annotation effort, i.e., the interaction time between the human and algorithm. In some cases, a human simply initializes the algorithm, while in others the human is always in the loop.
{\bf (1) Unsupervised:} We use the state-of-the-art method of \textbf{\cite{ferrari-iccv2013}}, which produces a single \textcolor{black}{region} segmentation result per video with zero human involvement.
\vspace{3pt}
{\bf (2) Multiple segmentation:} Most existing unsupervised methods produce multiple segmentations to achieve high recall. We consider both {\bf 1) Static object proposals (BestStaticProp):} where the best per-frame region proposal (out of approx 2000 proposals per frame) is chosen as the final segmentation for that frame, and {\bf 2) Spatio-temporal proposals~\cite{rehg-iccv2013,keysegments,grundmann-cvpr2010}:} These methods produce multiple spatio-temporal region tracks as segmentation hypotheses. To simulate a human picking the desired segmentation from the hypotheses, we use the dataset ground truth to select the most overlapping hypothesis. We use the duration of the video to estimate interaction time. \textcolor{black}{This is a lower bound on cost,} since the annotator has to at least watch the clip once to select the best segmentation. \textcolor{black}{For the static proposals, we multiply the number of frames by 2.4 seconds, the time required to provide one click~\cite{Bearman15}.}
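As an illustration, the oracle used for the static proposals can be simulated with a few lines (a sketch assuming per-frame binary proposal masks and the \texttt{iou} helper sketched earlier; names are illustrative only):
\begin{verbatim}
def best_static_proposal(proposals_per_frame, gt_masks):
    """Oracle: per frame, keep the proposal with highest IoU against GT."""
    return [max(props, key=lambda m: iou(m, gt))
            for props, gt in zip(proposals_per_frame, gt_masks)]
\end{verbatim}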
\vspace{3pt}
{\bf (3) Scribble-based:} We consider two existing methods: {\bf 1) JOTS~\cite{Wen_2015_CVPR}:} the first frame is interactively segmented using scribbles and GrabCut. The segmentation result is then propagated to the entire video. We use the timing data from the detailed study by~\cite{McGuinness2010434}, who find
it takes a human on average 66.43 seconds per image to obtain a good segmentation with scribbles. \textcolor{black}{{\bf 2) iVideoSeg~\cite{Nagaraja_2015_ICCV}:} This is a recently proposed state-of-the-art technique that uses scribbles to interactively label point trajectories. These labels are then used to segment the object of interest. We use the timing data kindly shared by the authors.}
\vspace{3pt}
{\bf (4) \textcolor{black}{Object outline} propagation:} the human outlines the object completely to initialize the propagation algorithm (typically in the first frame), which then propagates to the entire video.
\textcolor{black}{Here we use the same method for propagation~\cite{suyog-eccv2014} as in our approach.}
Timing data from~\cite{suyog-iccv2013,LinECCV14coco} indicate it typically takes 54-79 seconds to manually outline an object; we use the more optimistic 54 seconds for this baseline.
\vspace{3pt}
{\bf (5) Bounding box:} Rather than segment the object, the annotator draws a tight bounding box around it.
The baseline {\bf BBox-GrabCut} uses that box to obtain a segmentation for the video as follows. We learn Gaussian Mixture Model (GMM) based appearance models for foreground and background pixels according to the box, then apply them in a standard spatio-temporal MRF defined over pixels. The unaries are derived from the learnt GMMs, and contrast-sensitive spatial and temporal potentials are used for smoothness.
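A minimal sketch of the appearance-model part of this baseline is given below (an illustration assuming scikit-learn GMMs and a single RGB frame, with foreground pixels taken from inside the box and background pixels from outside it; the spatio-temporal MRF inference itself is omitted):
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def grabcut_unaries(frame, box, n_components=5):
    """Fit fg/bg color GMMs from a bounding box; return per-pixel unaries."""
    h, w, _ = frame.shape
    x0, y0, x1, y1 = box
    inside = np.zeros((h, w), dtype=bool)
    inside[y0:y1, x0:x1] = True
    pixels = frame.reshape(-1, 3).astype(float)
    fg = GaussianMixture(n_components).fit(pixels[inside.ravel()])
    bg = GaussianMixture(n_components).fit(pixels[~inside.ravel()])
    # Negative log-likelihoods act as unaries for the downstream MRF.
    unary_fg = -fg.score_samples(pixels).reshape(h, w)
    unary_bg = -bg.score_samples(pixels).reshape(h, w)
    return unary_fg, unary_bg
\end{verbatim}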
\vspace{3pt}
{\bf (6) 1-Click based:} We also consider baselines which perform video segmentation with a single user click. {\bf 1) TouchCut~\cite{Wang201414}:} the only prior work using clicks for video segmentation. {\bf 2) Click-GrabCut:} This is similar to BBox-GrabCut except that we take a small region around the click to learn the foreground model. The background model is learnt from a small area around image boundaries. {\bf 3) Click-STProp:} To propagate the impact of a user click to the entire video volume, we use the spatio-temporal proposals from~\cite{oneata}. We do this by selecting all proposals which enclose the click inside them. Foreground and background appearance models are learnt using the selected proposals and refined using a spatio-temporal MRF. We use the timing data from~\cite{Bearman15}, which reports that a human takes about 2.4 seconds to place a single click on the object of interest.
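The proposal-filtering step of Click-STProp can be sketched as follows (a simplified illustration that assumes each spatio-temporal proposal is stored as a sequence of per-frame binary masks; the appearance modeling and MRF refinement are omitted):
\begin{verbatim}
def proposals_enclosing_click(st_proposals, click, frame_idx):
    """Keep spatio-temporal proposals whose mask contains the clicked pixel."""
    x, y = click
    return [p for p in st_proposals if p[frame_idx][y, x]]
\end{verbatim}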
\begin{table*}[t]
\centering
\scriptsize
\captionsetup{width=\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{|c|c|c|c|c|c|c|c|c||c|}
\hline
& & Objectness & Interior & BBox & Uniform & Submod & Active & Human & BestProp \\ \hline
\multirow{3}{*}{\bf Segtrack-v2 } & Clicks & 0 & 6.29 & 2 & 4.46 & 3.83 & 3.34 & {\bf 2.46} & - \\ \cline{2-10}
& Time (sec) & 0 & 23.95 & 7 & 16.98 & 14.58 & 12.72 & {\bf 9.37} & - \\ \cline{2-10}
& IoU & 42.36 & 52.79 & 67.51 & 75.8 & 76.76 & 76.24 & {\bf 78.77} & 80.74 \\ \hline
\hline
\multirow{3}{*}{\bf VSB100 } & Clicks & 0 & 7.05 & 2 & 5.34 & 5.28 & 5.23 & {\bf 4.35} & - \\ \cline{2-10}
& Time (sec) & 0 & 30.11 & 7 & 22.81 & 22.55 & 22.33 & {\bf 18.58} & - \\ \cline{2-10}
& IoU & 28.45 & 46.98 & 58.98 & 64.2 & 65.67 & 66.91 & {\bf 69.63} & 72.82 \\ \hline
\hline
\multirow{3}{*}{\bf iVideoSeg } & Clicks & 0 & 5.02 & 2 & 3.84 & 3.29 & 3.15 & {\bf 2.84} & - \\ \cline{2-10}
& Time (sec) & 0 & 19.86 & 7 & 15.20 & 13.02 & 12.47 & {\bf 11.24} & - \\ \cline{2-10}
& IoU & 50.69 & 72.54 & 68.04 & 77.57 & 77.84 & {\bf 78.65} & 78.24 & 81.34 \\ \hline
\end{tabular}
\caption{Click-carving proposal selection quality for real users (Human), the different user click models (Interior, Uniform, Submod, Active), and a \textcolor{black}{BBox baseline}. With an average of 2-4 clicks to carve the proposal boundaries, users attain IoU accuracies very close to the upper bound (BestProp). Objectness, Interior clicks and the BBox baseline are substantially weaker. IoU measures segmentation overlap with the ground truth; perfect overlap is 100.
}
\label{results-click-carving}
\end{table*}
\begin{figure*}[t]
\captionsetup{width=\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{c|c}
{\includegraphics[keepaspectratio=true,scale=0.20]{images/qual.jpg}} &
{\includegraphics[keepaspectratio=true,scale=0.27]{images/baselines.jpeg}} \\
Visual results for Click-Carving & Visual comparisons with baselines
\end{tabular}
\caption{{\bf Left:} Qualitative results for Click Carving. The yellow-red dots show the clicks made by human annotators. The best selected segmentation boundaries are overlaid on the image (green). {\bf Right:} Comparisons with baselines: The top example shows the segmentation we obtain with a single click as opposed to applying GrabCut segmentation with a tight bounding box. The bottom example shows the discriminative power of clicking on boundaries by comparing it with a baseline which clicks in the interior regions. Best viewed on pdf. }
\label{fig:qual_result}
\end{figure*}
\subsection{Experiments}
We first test the accuracy/speed tradeoff in terms of locating the best available proposal, and compare the simulated user models. Then we present comparisons against all the existing methods and baselines.
\vspace{5pt}
\noindent {\bf Click Carving for region proposal selection:} We first present the performance of Click Carving for interactively locating the best region proposal for the object of interest. We do this for the first frame in all videos.
In all experiments, we set the total click budget to be a maximum of 10 clicks per object. For simulated users, clicks are placed sequentially according to the given user model's design, until a proposal which is within 5\% overlap of the best proposal is ranked in the top-$k$ or the click budget is exhausted. For the human user study, the user stops when they decide that they have found a good segmentation within the top-$k$ ranked proposals or have exhausted the click budget.
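This stopping rule can be summarized by the following sketch (\texttt{place\_click} and \texttt{rank\_proposals} are illustrative placeholders for a particular simulated user model and for the boundary-based ranking, respectively; they are not the actual implementation):
\begin{verbatim}
def simulate_user(proposals, gt, place_click, rank_proposals,
                  k=9, budget=10, tol=0.05):
    """Click until a near-best proposal is ranked in the top-k or budget ends."""
    best_iou = max(iou(p, gt) for p in proposals)
    clicks = []
    while len(clicks) < budget:
        clicks.append(place_click(proposals, gt, clicks))
        top_k = rank_proposals(proposals, clicks)[:k]
        if any(iou(p, gt) >= best_iou - tol for p in top_k):
            break
    return clicks
\end{verbatim}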
Table~\ref{results-click-carving} shows the results for all datasets and compares the performance with all simulated users. We compare both in terms of the number of clicks and time required, and also how close they get to the best proposal available in the pool of $\sim$2000 (BestProp). As expected, in all cases real users achieve the best segmentation performance and require far fewer clicks than all simulated users to achieve it. Our simulated Active user, which takes into account the current state of the segmentation, comes closest to matching the human's performance. Also, we see clicking uniformly on the object boundaries requires more clicks on average than the Active and Submodular users, which try to impact the largest object area with each subsequent click. The objectness baseline, which first ranks all the proposals using objectness scores and picks the best proposal among the top-$k$ ($k$=9), performs the worst. This shows that user interaction is key to picking good quality proposals among 1000s of candidates.
All users that operate by clicking on boundaries (Human, Uniform, Submod and Active), come very close to choosing the best proposal in most cases. In contrast, clicking on the interior points requires substantially more clicks---often double the time; more importantly, the best segmentation it obtains is much worse in quality than the best possible segmentation.
This supports our hypothesis that clicking on boundaries is much more discriminative in separating good proposals from the bad ones. Whereas a matching between an object proposal contour and a boundary click will rarely be accidental, several bad proposals may have the interior click point lie within them.
In fact, selecting the best proposal using an enclosing bounding box around the true object (BBox) is more effective than clicking on interior points. This is likely because a tight bounding box can eliminate a large number of proposals that extend outside its boundaries. On the other hand, an interior click cannot restrict the selected proposals to the ones which align well to the object boundaries. Our method outperforms the bounding box selection by a large margin, showing the efficacy of our approach.
On Segtrack-v2 and iVideoSeg, Click Carving requires less than 3 clicks on average to obtain a high quality segmentation. For the most challenging dataset, VSB100, we obtain good results with an average of 4.35 clicks. This shows the potential of our method to collect large amounts of segmentation data economically. The timing data reveals the efficiency and scalability of our method. Below we show how this translates to advantages for complete video segmentation.
Figure \ref{fig:qual_result} (left) shows qualitative results for Click Carving. In many cases (e.g., lions, soldier, cat), only a single click is sufficient to obtain a high-quality segmentation. Several challenging instances like the cat (bottom row) and the lion (middle row) are segmented very accurately with a single click. These objects would otherwise require a large amount of human interaction to obtain good segmentation \textcolor{black}{(say using a GrabCut-like approach).}
More clicks are typically needed when multiple objects are close-by or interacting with each other. Still, we observe that in many cases only a small number of clicks on each object results in good segmentations. For example, in the car video (top row), only 5 clicks are required to obtain final segmentations for both objects.
\textcolor{black}{Figure \ref{fig:qual_result} (right) highlights the key strengths of our method over two baselines. In the top example, we see that GrabCut segmentation applied even with a very tight bounding box fails to segment the object. On the other hand, even with a single click, our proposed approach produces very accurate segmentation. The example on the bottom shows the importance of clicking on boundaries. Clicking on the interior fails to retrieve a good proposal, because several bad proposals also contain those interior clicks. But our boundary clicks, which are highly discriminative, retrieve the best proposal quickly.}
\begin{table*}[t]
\centering
\tiny
\captionsetup{width=\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{|c|c|x{10mm}|x{10mm}|x{10mm}|x{8mm}|x{9mm}|x{8mm}|x{10mm}|x{10mm}|x{8mm}|x{6mm}|x{8mm}|}
\hline
& & \textbf{Unsup.} & \multicolumn{4}{c|}{\textbf{Multiple Segmentations}} & \textbf{Scribbles} & \textbf{\textcolor{black}{Outline}} & {\textbf{Bounding Box}} & \multicolumn{3}{c|}{\textbf{Click Based}} \\ \hline
& \textbf{\#Frames} & \textbf{\cite{ferrari-iccv2013}} & \textbf{\cite{grundmann-cvpr2010}} & \textbf{\cite{keysegments}} & \textbf{\cite{rehg-iccv2013}} & \textbf{BestStaticProp} & \textbf{\cite{Wen_2015_CVPR}} & \textbf{\cite{suyog-eccv2014}} & \textbf{BBox-GrabCut} & \textbf{Click-GrabCut} & \textbf{Click-STProp} & {\textbf{Ours}} \\ \hline
birdfall2 & 30 & 32.28 & 57.40 & 49.00 & 62.50 & 72.00 & 78.70 & 63.15 & 2.59 & 2.12 & 59.21 & 62.32 (1) \\ \hline
bird of paradise & 97 & 81.83 & 86.80 & 92.20 & 94.00 & 93.11 & 93.00 & 91.59 & 35.46 & 28.04 & 86.24 & 89.90 (1) \\ \hline
bmx - person & 36 & 51.81 & 39.20 & 87.40 & 85.40 & 86.20 & 88.90 & 82.74 & 14.99 & 7.45 & 77.81 & 81.14 (1) \\ \hline
bmx - cycle & 36 & 21.71 & 32.50 & 38.60 & 24.90 & 64.27 & 5.70 & 2.95 & 4.73 & 2.52 & 13.21 & 2.15 (3) \\ \hline
cheetah - deer & 29 & 40.32 & 18.80 & 44.50 & 37.30 & 59.61 & 66.10 & 33.15 & 7.46 & 5.30 & 20.24 & 29.87 (5) \\ \hline
cheetah - cheetah & 29 & 16.53 & 24.40 & 11.70 & 40.90 & 62.51 & 35.30 & 26.96 & 9.47 & 6.32 & 24.42 & 19.96 (1) \\ \hline
drift-1 & 74 & 52.35 & 55.20 & 63.70 & 74.80 & 85.50 & 67.30 & 69.15 & 18.57 & 14.72 & 54.45 & 68.51 (2) \\ \hline
drift-2 & 74 & 33.18 & 27.20 & 30.10 & 60.60 & 78.92 & 63.70 & 52.49 & 16.59 & 15.13 & 51.97 & 49.91 (3) \\ \hline
frog & 279 & 54.13 & 67.10 & 0.00 & 72.80 & 78.10 & 56.30 & 69.69 & 49.65 & 28.95 & 64.91 & 69.12 (4) \\ \hline
girl & 21 & 54.90 & 31.90 & 87.70 & 89.20 & 72.06 & 84.60 & 67.38 & 28.26 & 18.21 & 63.43 & 66.56 (1) \\ \hline
hummingbird-1 & 29 & 8.97 & 13.70 & 46.30 & 54.40 & 77.55 & 58.30 & 58.63 & 24.02 & 19.64 & 42.12 & 44.24 (1) \\ \hline
hummingbird-2 & 29 & 32.10 & 25.20 & 74.00 & 72.30 & 83.48 & 50.70 & 56.24 & 59.05 & 28.94 & 44.67 & 39.95 (1) \\ \hline
monkey & 31 & 64.20 & 61.90 & 79.00 & 84.80 & 85.87 & 86.00 & 73.86 & 39.93 & 28.67 & 69.14 & 72.60 (2) \\ \hline
monkeydog - monkey & 71 & 72.33 & 68.30 & 74.30 & 71.30 & 77.58 & 82.20 & 74.32 & 14.22 & 12.86 & 67.80 & 71.12 (1) \\ \hline
monkeydog - dog & 71 & 0.02 & 18.80 & 4.90 & 18.90 & 57.19 & 21.10 & 75.47 & 3.30 & 2.45 & 35.10 & 65.21 (4) \\ \hline
parachute & 51 & 76.32 & 69.10 & 96.30 & 93.40 & 90.40 & 94.40 & 87.78 & 92.92 & 85.80 & 87.78 & 86.22 (1) \\ \hline
penguin 1 & 42 & 5.09 & 72.00 & 12.60 & 51.50 & 79.98 & 94.20 & 92.09 & 16.13 & 9.87 & 14.56 & 77.41 (2) \\ \hline
penguin 2 & 42 & 2.16 & 80.70 & 11.30 & 76.50 & 87.85 & 91.80 & 79.70 & 17.09 & 14.65 & 16.34 & 76.45 (3) \\ \hline
penguin 3 & 42 & 1.86 & 75.20 & 11.30 & 75.20 & 84.17 & 91.90 & 91.62 & 13.44 & 12.34 & 14.24 & 81.43 (3) \\ \hline
penguin 4 & 42 & 2.31 & 80.60 & 7.70 & 57.80 & 82.31 & 90.30 & 76.92 & 9.44 & 9.53 & 18.19 & 73.26 (3) \\ \hline
penguin 5 & 42 & 9.95 & 62.70 & 4.20 & 66.70 & 77.48 & 76.30 & 77.12 & 15.87 & 9.54 & 14.32 & 76.54 (7) \\ \hline
penguin 6 & 42 & 18.88 & 75.50 & 8.50 & 50.20 & 83.46 & 88.70 & 80.65 & 10.79 & 10.23 & 21.34 & 80.21 (3) \\ \hline
soldier & 32 & 39.77 & 66.50 & 66.60 & 83.80 & 80.30 & 81.10 & 72.10 & 35.38 & 21.20 & 79.80 & 71.29 (1) \\ \hline
worm & 154 & 72.79 & 34.70 & 84.40 & 82.80 & 83.73 & 79.30 & 72.99 & 13.52 & 8.94 & 67.10 & 72.20 (5) \\ \hline
\hline
\textbf{\textcolor{black}{Average Accuracy}} & - & 35.24 & 51.89 & 45.26 & 65.92 & 78.48 & 71.91 & 67.86 & 23.04 & 16.81 & 46.18 & 63.65 \\ \hline
\textbf{Annotation Effort} & & - & 336.6 tracks & 10.6 tracks & 60 tracks & 120k proposals & 1 frame & 1 frame & 2 clicks & 1 click & 1 click & 2.46 clicks \\ \hline
\textbf{Annotation Time (sec)} & & 0 & 673.2 & 21.2 & 120 & \textcolor{black}{142.5} & 66.43 & 54 & 7 & 2.4 & 2.4 & 9.37 \\ \hline
\end{tabular}
\caption{Video segmentation accuracy (IoU) on Segtrack-v2 (per-video). The last column shows our result with real human users. Numbers in parens are the \# of clicks required by our method. The bottom two rows summarize the amount of human annotation effort required to obtain the corresponding segmentation performance, for all methods. Our approach leads to the best trade-off between video segmentation accuracy and human annotation effort.
}
\vspace*{0.1in}
\label{results-segtrack}
\end{table*}
\begin{table*}[t]
\centering
\tiny
\captionsetup{width=0.6\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{|c|x{15mm}|x{15mm}|x{14mm}|x{8mm}|x{8mm}|x{8mm}|}
\hline
& \textbf{Unsup.} & \textbf{\textcolor{black}{Outline}} & \textbf{Bounding Box} & \multicolumn{3}{c|}{\textbf{Click Based}} \\ \hline
& \textbf{\cite{ferrari-iccv2013}} & \textbf{\cite{suyog-eccv2014}} & \textbf{BBox-GrabCut} & \textbf{Click-GrabCut} & \textbf{Click-STProp} & {\textbf{Ours}} \\ \hline
\textbf{\textcolor{black}{Avg. Accuracy}} & 17.79 & 61.43 & 14.74 & 11.14 & 26.76 & 56.15 \\ \hline
\textbf{Annot. Effort} & - & 1 frame & 2 clicks & 1 click & 1 click & 4.35 clicks \\ \hline
\textbf{Annot. Time (sec)} & 0 & \textcolor{black}{54} & 7 & 2.4 & 2.4 & 18.58 \\ \hline
\end{tabular}
\caption{Video segmentation accuracy (IoU) on all 39 videos in VSB100, format as in Table~\ref{results-segtrack}. Our approach provides the best trade-off between video segmentation accuracy and human annotation effort.
}
\vspace*{0.1in}
\label{results-vsb}
\end{table*}
\begin{figure*}[t]
\centering
\captionsetup{width=\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{ccc}
\includegraphics[keepaspectratio=true,scale=0.30]{images/Segtrack-v2.eps} &
\includegraphics[keepaspectratio=true,scale=0.30]{images/VSB100.eps} &
\includegraphics[keepaspectratio=true,scale=0.30]{images/iVideoSeg.eps}
\end{tabular}
\caption{\textcolor{black}{Cost vs accuracy on Segtrack (left), VSB100 (center), and iVideoSeg (right). Our Click Carving based video propagation results in similar accuracy to state-of-the-art methods, but it does so with much less human effort. Click Carving offers the best trade-off between cost and accuracy. Best viewed on pdf.} }
\label{fig:cost_accuracy}
\end{figure*}
Next we discuss the results for video segmentation, where we propagate the results of Click Carving to the remaining frames in the video.
\vspace{5pt}
\noindent {\bf Video segmentation propagation on Segtrack-v2:} Table \ref{results-segtrack} shows the results on Segtrack-v2. We compare using the standard intersection-over-union (IoU) metric with a total of \textcolor{black}{10} methods which use varying amounts of human supervision. The unsupervised algorithm~\cite{ferrari-iccv2013} that uses no human input results in the lowest accuracy. Among the approaches which produce multiple segmentations, BestStaticProp and~\cite{rehg-iccv2013} have the best accuracy. This is expected because these methods are designed to have high recall, but much more effort is required to sift through the multiple hypotheses to pick the best one as the final segmentation. For example, it is prohibitively expensive to go through 2000 segmentations for each frame to get to the accuracy level of BestStaticProp. The method of~\cite{rehg-iccv2013} produces far fewer segmentations, but still requires 12x more time than our method to achieve comparable performance.
The scribble based method~\cite{Wen_2015_CVPR} achieves the best overall accuracy on this dataset, but is 6 times more expensive than our method. The propagation method of~\cite{suyog-eccv2014}, which we also use as our propagation engine, sees an increase of 4\% in accuracy when propagated from human-labeled object outlines. On the other hand, our method which is initialized from slightly imperfect, but much quicker to obtain object boundaries achieves comparable performance.
Using computer generated segmentations coupled with our Click Carving interactive selection algorithm is sufficient to obtain high performance.
Moving on to the methods that require less human supervision, i.e., bounding boxes and clicks, we see that Click Carving continues to hold advantages. In particular, BBox-GrabCut and Click-GrabCut result in poor performance, indicating that more nuanced propagation methods are needed than just relying on appearance-based segmentation alone. Click-STProp, which obtains a spatial prior by propagating the impact of a single click to the entire video volume, results in much better performance than solely appearance based methods. However, our method, which first translates clicks into accurate per-frame segmentation before propagating them, yields a \textcolor{black}{17\%} gain (\textcolor{black}{37\% relative gain}).
All these trends show that our method offers the best trade-off between segmentation performance and annotation time. Figure~\ref{fig:cost_accuracy} (left) visually depicts this trade-off. All methods which result in better segmentation accuracy than ours need substantially more human effort. Even then, the gap in performance is relatively small. On the flip side, the methods which require less annotation effort than us also result in a significant degradation in segmentation performance.
\vspace{5pt}
\noindent {\bf Video segmentation propagation on VSB100:} Next, we test on VSB100. This is an even more challenging dataset and very few existing methods have reported foreground propagation results on it. Since this dataset includes several videos that contain multiple interacting objects in challenging conditions, Click Carving tends to require more clicks (4.35 on average). Our method again outperforms all baselines which require less human effort and results in comparable performance with~\cite{suyog-eccv2014}, but at a much lower cost. Figure ~\ref{fig:cost_accuracy} (center) again reflects this trend.
\vspace{5pt}
\noindent {\bf Video segmentation propagation on iVideoSeg:} We also compare our method on the recently proposed iVideoSeg dataset~\cite{Nagaraja_2015_ICCV}. We compare with 3 methods ~\cite{grundmann-cvpr2010,godec11a,Nagaraja_2015_ICCV} out of which ~\cite{Nagaraja_2015_ICCV} is the current state-of-the-art method for interactive foreground segmentation in videos. We use the timing information provided by the authors~\cite{Nagaraja_2015_ICCV}. We compare the performance of our method on all 24 videos in the dataset (300-1000 frames per video) using the real user annotation times. Figure~\ref{fig:cost_accuracy} (right) shows the results. For all methods, each data point on the plot shows time vs. accuracy for a particular video at a particular iteration.
The methods of~\cite{Nagaraja_2015_ICCV,grundmann-cvpr2010,godec11a} run for multiple iterations, i.e., a human provides annotation on several frames, observes the results, and repeats until he/she is satisfied. This requires a human to evaluate the current video segmentation result and decide if more annotation is required. The authors provided timing and accuracy data for 4-5 iterations on each video. In contrast, our method does one-shot selection instead of iterative refinement. Our method pre-selects the frames on which to request human annotation (every 100th frame in this case). For each selected frame, we ask a human annotator to use our Click Carving method to find the best region proposal while recording their timing. The total time for the video is simply the sum of time taken for each selected frame. The video segmentation propagation is re-initialized whenever a new labeled frame is available.
We outperform both~\cite{grundmann-cvpr2010,godec11a} by a considerable margin. When compared with~\cite{Nagaraja_2015_ICCV}, our method achieves similar segmentation accuracy but with less than half the total annotation time. On average over all 24 videos, ~\cite{Nagaraja_2015_ICCV} takes 110.05 seconds to achieve an IoU score of 80.04. In comparison our method only takes 54.35 seconds to reach an IoU score of 77.68.
\vspace{5pt}
\noindent {\bf Comparison with TouchCut:} To our knowledge TouchCut~\cite{Wang201414} is the only prior work which utilizes clicks for video segmentation. In that work, the user places a click somewhere on the object, then a level-sets technique transforms the click to an object contour. This transformed contour is then propagated to the remaining frames. Very few experimental results about video segmentation are discussed in the paper, and code is not available. Therefore, we are only able to compare with TouchCut on the 3 Segtrack videos reported in their paper. Table~\ref{results-touchcut} shows the result. When initialized with a single click, our method outperforms TouchCut in 2 out of 3 videos. With 1 more click, we perform better in all 3 videos.
\begin{table}[h]
\centering
\scriptsize
\captionsetup{width=0.45\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{|c|c|c|c|}
\hline
& \textbf{TouchCut} & \textbf{Ours (1-click)} & \textbf{Ours (2-clicks)} \\ \hline
birdfall2 & 248 & 213 & 187 \\ \hline
girl & 1691 & 2213 & 1541 \\ \hline
parachute & 228 & 225 & 198 \\ \hline
\end{tabular}
\caption{Comparison with TouchCut~\cite{Wang201414} \textcolor{black}{in terms of pixel error (lower is better)}.}
\label{results-touchcut}
\end{table}
\vspace{5pt}
\noindent {\bf Qualitative results on video segmentation propagation:} Figures~\ref{fig:videoseg_result_segtrack}--\ref{fig:videoseg_result_vsb} show some qualitative results for video segmentation propagation on the 3 datasets that we used in our experiments. The left-most image in each row shows the best region proposal chosen by a human annotator using Click Carving. Subsequent images show the results of segmentation propagation, when initialized from this selected proposal.
\begin{figure*}[t]
\centering
\captionsetup{width=0.9\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{c}
\includegraphics[keepaspectratio=true,scale=0.25]{images/segtrack_supp.jpg}
\end{tabular}
\caption{Qualitative results for video segmentation on Segtrack-v2 dataset: The results using the propagation method of~\cite{suyog-eccv2014} initialized from the segmentation in the left-most image. This initialization is obtained using our Click Carving method with static and motion-based proposals. Best viewed on pdf. }
\label{fig:videoseg_result_segtrack}
\end{figure*}
\begin{figure*}[t]
\centering
\captionsetup{width=0.9\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{c}
\includegraphics[keepaspectratio=true,scale=0.25]{images/ivideoseg_supp.jpg}
\end{tabular}
\caption{Qualitative results for video segmentation on iVideoSeg dataset: The results using the propagation method of~\cite{suyog-eccv2014} initialized from the segmentation in the left-most image. This initialization is obtained using our Click Carving method with static and motion-based proposals. Best viewed on pdf. }
\label{fig:videoseg_result_ivideoseg}
\end{figure*}
\begin{figure*}[t]
\centering
\captionsetup{width=0.9\textwidth, font={scriptsize}, skip=2pt}
\begin{tabular}{c}
\includegraphics[keepaspectratio=true,scale=0.25]{images/vsb_supp.jpg}
\end{tabular}
\caption{Qualitative results for video segmentation on VSB100 dataset: The results using the propagation method of~\cite{suyog-eccv2014} initialized from the segmentation in the left-most image. This initialization is obtained using our Click Carving method with static and motion-based proposals. Best viewed on pdf. }
\label{fig:videoseg_result_vsb}
\end{figure*}
\vspace{5pt}
\noindent {\bf Conclusion:} We presented \emph{Click Carving}, a novel interactive video object segmentation technique in which only a few clicks are required to obtain accurate spatio-temporal object segmentations in videos. Our method strikes an excellent balance between accuracy and human effort, resulting in large savings in annotation time. Because it is easy to use even for non-experts, our method offers great promise for scaling up video segmentation, which can benefit several research communities.
\vspace{5pt}
\noindent {\bf Acknowledgements:} We would like to thank Naveen Shankar Nagaraja for providing us with the user annotation timing data on the iVideoSeg dataset for comparison. This research is supported in part by ONR YIP N00014-12-1-0754.
The progressive unveiling of the intricate connections that exist between information theory and statistical mechanics has allowed fundamental advances in our understanding of complex systems~\cite{thurner2018introduction}.
One of the most important methods resulting from those discoveries is the \emph{maximum entropy principle} (MEP), which unifies multiple results and procedures under a single heuristic that operationalizes Occam's razor~\cite{jaynes1957information,jaynes2003probability}. From a pragmatic perspective, the MEP can be understood as a modeling framework that is particularly well-suited for building statistical descriptions of a broad class of systems in contexts of incomplete knowledge~\cite{cofre2019comparison}. The high versatility of the MEP has allowed it to find applications in a wide range of scenarios, including the analysis of DNA motifs of transcription factor binding sites~\cite{santolini2014general}, co-variations in protein families and amino acid contact prediction~\cite{weigt2009identification,morcos2011direct}, diversity of antibody repertoires in the immune system~\cite{mora2010maximum,elhanati2014quantifying}, coordinated firing patterns of neural populations~\cite{schneidman2006weak,tang2008maximum,marre2009prediction,cofre2014exact}, collective behavior of bird flocks and mice~\cite{bialek2012statistical, cavagna2014dynamical,shemesh2013high}, the abundance and distribution of species in ecological niches~\cite{harte2011maximum,harte2014maximum}, and patterns of behavior in various complex human endeavours~\cite{lee2018partisan,lynn2019surges}.
The efficacy of the MEP rests on Shannon's entropy, which acts as an estimate of ``uncertainty'' that guides the modeling procedure. Colloquially, the MEP generates the least structured statistical model that is consistent with the available knowledge, building on that knowledge but nothing else. However, the functional form of the Shannon entropy greatly restricts the range of outputs that the MEP can offer. In particular, standard applications of the MEP can only generate Boltzmann-Gibbs distributions, which are unsuitable to describe complex systems displaying long-range correlations or other effects related to different types of statistics~\cite{tsallis2005asymptotically,jizba2004world,thurner2007entropies,tsallis2009introduction}. This important limitation has triggered various efforts to generalize the MEP by means of leveraging generalizations of Shannon's entropy, resulting in a rich array of proposals (see e.g.~\cite{tsallis1988possible,beck2003superstatistics,jizba2004observability,hanel2012generalized,jeldtoft2018group}). However, we argue that plugging a generalized entropy into the MEP framework inevitably leads to an ad hoc procedure whose value is fundamentally hindered by the heuristic nature of the MEP itself.
An alternative approach to extend the MEP is to consider it not as a stand-alone principle, but as a consequence of deeper mathematical laws. One route to do this --- that we follow in this paper --- is to regard the MEP as a direct consequence of the
geometry of statistical manifolds~\cite[Sec.III-D]{amari2001information}. In effect, by leveraging the structure of dual orthogonal projections allowed by the flat geometry associated with the Kullback–Leibler divergence~\cite{amari2000methods,amari2016information}, the seminal work of Amari established how
the standard MEP naturally emerges when considering hierarchical ``foliations'' of the manifold. This perspective not only sets the MEP on a firm mathematical basis, but further endows it with sophisticated tools from information geometry --- which can be used e.g. to disentangle the relevance of interactions of different orders within the system~\cite{schneidman2003network,olbrich2015information,rosas2016understanding}.
In this paper we show how the geometry of curved statistical manifolds naturally leads to an extension of the MEP based on the R\'enyi entropy.
In contrast to flat cases, the geometrical structure of curved statistical manifolds disrupts the standard construction of orthogonal projections based on Legendre-dual coordinates, making the analysis of foliations highly non-trivial. Nonetheless, by leveraging the rich literature on curved statistical manifolds~\cite{amari2000information,Kurose2002,matsuzoe2010statistical,amari2016information,wong2018logarithmic,scarfone2020study}, the framework put forward in this paper reveals how the geometry established by the R\'enyi divergence is suitable for establishing hierarchical foliations that, in turn, lead to a generalization of the MEP.
The results presented in this paper serve to emphasize the special place that the R\'enyi entropy has among other generalized entropies --- at least from the perspective of the MEP.
Furthermore, it provides a solid mathematical foundation for the plethora of existing applications based on the R\'enyi entropy (see e.g. Refs.~\cite{shalymov2016dynamics,bashkirov2004maximum,GeometricMutInf,DetectingPhaseTwithRenyi}). Moreover, the novel connection established between information geometry and this generalized MEP opens the door for fertile explorations combining non-Euclidean geometry methods and statistical analyses, which may lead to new insights and techniques to further deepen our understanding of complex systems.
The rest of this article is structured as follows. First, Section~\ref{sec:alpha} provides a brief introduction to information geometry, emphasising concepts that are key to our proposal. Then, Section~\ref{sec:hiearchical} develops the analysis of foliations in curved statistical manifolds, and Section~\ref{sec:maxentRenyi} establishes its relationship with a maximum R\'enyi entropy principle. Finally, Section~\ref{sec:discussion} discusses the implications of our findings and summarizes our main conclusions.
\section{Preliminaries}
\label{sec:alpha}
\subsection{The Dual Structure of Statistical Manifolds}
\label{sec:prel1}
Our exposition is focused on statistical manifolds $\mathcal{M}$, whose elements are probability distributions $p_\xi(x)$ with $x\in\boldsymbol\chi$ and $\xi\in\mathbb{R}^d$.
The geometry of such statistical manifolds
is determined by two structures: a metric tensor $g_p$,
and a torsion-free affine connection
pair $(\nabla, \nabla^{*})$ that are dual with respect to $g_p$. Intuitively, $g_p$ defines norms and angles between tangent vectors and, in turn, establishes curve length and the \emph{shortest} curves.
On the other hand, the affine connection establishes covariant derivatives of vector fields, providing the notion of parallel transport between neighbouring tangent spaces, which defines what a \emph{straight} curve is.
Traditional Riemannian geometry is built on the assumption that the shortest and the straightest curves coincide, which led to the study of metric-compatible (Levi-Civita) connections --- pivotal to the development of the theory of general relativity. However, modern approaches motivated by information geometry~\cite{amari2021information} and
gravitational theories~\cite{Vitagliano:2010sr,Vitagliano:2013rna} consider more general cases, where the metric and connections are independent from one another. In such geometries, the parallel transport operator $\Pi:T_p\mathcal{M}\to T_q\mathcal{M}$ and its dual $\Pi^*$~\footnote{The dual transport operator acts on cotangent vectors, and is defined by the condition of guaranteeing
$g_q (\Pi V,\Pi^{*} W)=g_p (V,W)$ for all $W\in T_p\mathcal{M}$ and $V\in T^*_p\mathcal{M}$.} (induced by $\nabla$ and $\nabla^*$, respectively) might differ.
The departure of $\nabla$ and $\nabla^*$ from self-duality can be shown to be proportional to Chentsov's tensor, which allows for a single degree of freedom traditionally denoted by $\alpha \in \mathbb{R}$~\cite{amari2021information}. Put simply, $\alpha$ captures the degree of asymmetry between short and straight curves, with $\alpha=0$ corresponding to metric-compatible connections where $\nabla=\nabla^*$.
An important property of the geometry of a statistical manifold ($\mathcal{M},g,\nabla, \nabla^{*}$) is its curvature, which can be of two types: the (Riemann-Christoffel) metric curvature, or the curvature associated to the connection. Both quantities capture the distortion induced by parallel transport over closed curves --- the former with respect to the Levi-Civita connection, and the latter with respect to $\nabla$ and $\nabla^*$. In the sequel we use the term ``curvature'' to refer exclusively to the latter type.
\subsection{Establishing geometric structures via divergences} \label{sub:estab_gvd}
A convenient way to establish a geometry on a statistical manifold is via \emph{divergence maps}~\cite{amari2010information}.
Divergences are smooth,
distance-like mappings of the form $\pazocal{D}:\mathcal{M}\times \mathcal{M} \to \mathbb{R}$,
which satisfy $\pazocal{D}(p||q)\ge 0$ and vanish only when $p=q$~\footnote{Divergences are in general weaker than distances, as they don't need to be symmetric in their arguments and don't need to respect the triangle inequality.}. We use the shorthand notation $\pazocal{D}[\xi;\xi'] := \pazocal{D}(p_{\xi}||q_{\xi'})$ when expressing $\pazocal{D}$ under a parametrization of $\mathcal{M}$ in terms of coordinates $\xi=(\xi^1,\dots,\xi^n)$~\cite{amari2001information}; divergences in this form are often called ``contrast functions'' (see Ref.~\cite[Sec.~11]{calin2014geometric}).
Let us see how one can naturally build a metric from a contrast function~\cite[Sec.~4]{amari2010information}. A metric $g(\xi)$ can be built from the second-order expansion of the divergence $\pazocal{D}$ as
\begin{equation}
\label{eq:FisherMetric}
g_{ij}(\xi)
= \left \langle \partial_{\xi^i},
\partial_{\xi^j} \right \rangle
= - \partial_{\xi^i , \xi^{\prime j}}
\pazocal{D}[\xi;\xi'] \big|_{\xi=\xi'}~,
\end{equation}
which is positive-definite due to the non-negativity of $\pazocal{D}$.
This construction leads to the \emph{Fisher's metric}, which is the unique metric that emerges from a broad class of divergences~\cite[Th.~5]{amari2010information}, a fact closely related to Chentsov's theorem~\cite{chentsov1982statistical,ay2015information,van2017uniqueness,dowty2018chentsov}.
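As an illustrative numerical check of Eq.~\eqref{eq:FisherMetric} (a toy sketch that is not part of the formal development), one can estimate the mixed derivative by finite differences for a one-parameter Bernoulli family, for which the Fisher information is $1/\big(\xi(1-\xi)\big)$:
\begin{verbatim}
import numpy as np

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def metric_from_divergence(div, xi, eps=1e-4):
    """g(xi) = -d^2 D[xi;xi'] / (d xi d xi') evaluated at xi = xi'."""
    mixed = (div(xi + eps, xi + eps) - div(xi + eps, xi - eps)
             - div(xi - eps, xi + eps) + div(xi - eps, xi - eps)) / (4 * eps**2)
    return -mixed

xi = 0.3
print(metric_from_divergence(kl_bernoulli, xi))  # ~4.76
print(1 / (xi * (1 - xi)))                       # Fisher information: 4.76...
\end{verbatim}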
Analogously, connections (or equivalently Christoffel symbols) emerge at the third order expansion of the
divergence as follows:
\begin{subequations}
\label{eq:connections}
\begin{align}
\label{eq:C1}
\Gamma_{ijk}(\xi) & = \left \langle \nabla_{\partial_{\xi^i}}
\partial_{\xi^j} ,\partial_{\xi^k} \right \rangle
= -\;\left . \partial_{i ,j} \partial_{k'} \pazocal{D}[\xi;\xi'] \right|_{\xi=\xi'}\!,\\
\label{eq:C2}
\Gamma_{ijk}^{*}(\xi) & = \left \langle \nabla_{\partial_{\xi^i}}^{*}
\partial_{\xi^j} ,\partial_{\xi^k} \right \rangle
= -\left . \partial_{k} \partial_{i',j'} \pazocal{D}[\xi;\xi'] \right|_{\xi=\xi'}\!,
\end{align}
\end{subequations}
where the shorthand notation $\partial_{\xi^i} =\partial_{i}$ and $\partial_{\xi'^i} =\partial_{i'}$ has been adopted for brevity.
In summary, Fisher's metric is insensitive to the choice of divergence but the resulting connections are not,
and therefore the effects of a particular $\pazocal{D}$ manifest only at third order.
Interestingly, this construction relating the metric and connections with
the second and third derivatives of a scalar potential bears a striking resemblance to K{\"a}hler structures on complex manifolds, which can be built through further constraints and are applicable to a range of inference problems~\cite{Choi_2015,zhang2013symplectic}.
The approach of building geometries based on divergences does not lack generality, as it has been shown that any geometry can be expressed by an appropriate divergence~\cite{matumoto1993any,ay2015novel}.
Of the various types of divergences explored in the literature (c.f. \cite{Liese2006Divergences} and references within), two classes are particularly important: \emph{$f$-divergences} of the form
\begin{equation}
\pazocal{D}_{f}[\xi;\xi'] = \int_{\boldsymbol\chi} p_{\xi}(x) f\left( \frac{p_{\xi}(x)}{q_{\xi'}(x)} \right)
d\mu(x)
\end{equation}
for $f(x)$ convex with $f(1)=0$, and \emph{Bregman divergences} of the form
\begin{align}
\pazocal{D}_{\phi}[\xi;\xi']
&= (\xi-\xi')\cdot \mathrm{D}\phi(\xi') - \big(\phi(\xi) - \phi(\xi')\big) \\
&= \xi\cdot\eta' - \phi(\xi) - \psi(\eta')\label{eq:legendre}
\end{align}
for $\phi(\xi)$ a concave function~\footnote{Following Ref.~\cite{wong2018logarithmic}, we use a non-standard definition of Bregman divergences based on concave (instead of convex) functions.}, with $\mathrm{D}\phi=(\partial\phi/\partial\xi_1,\dots,\partial\phi/\partial\xi_d)$ denoting the gradient of $\phi$, $\psi(\eta)=\min_\xi\big(\eta\cdot\xi-\phi(\xi)\big)$ is the Fenchel–Legendre concave conjugate of $\phi$, and $\eta$ the dual coordinates of $\xi$ such that
\begin{equation} \label{eq:scalar_pot}
\xi = \mathrm{D}\psi(\eta)
\quad\text{and}\quad
\eta = \mathrm{D}\phi(\xi)~.
\end{equation}
Each of these types of divergences has important properties from an information geometry perspective: $f$-divergences are monotonic with respect to coarse-grainings of the domain of events $\boldsymbol\chi$, while Bregman divergences enable dual structures that set the basis for orthogonal projections~\cite{Amaridivergences}.
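As a minimal illustration of the Bregman construction (a toy sketch; the quadratic potential below is an arbitrary choice and is not tied to any statistical family discussed here):
\begin{verbatim}
import numpy as np

def bregman(phi, grad_phi, xi, xi_prime):
    """Bregman divergence generated by a concave potential phi."""
    return (xi - xi_prime) @ grad_phi(xi_prime) - (phi(xi) - phi(xi_prime))

# Toy concave potential phi(xi) = -0.5*|xi|^2, whose gradient is -xi.
phi = lambda v: -0.5 * np.dot(v, v)
grad_phi = lambda v: -v

xi, xi_p = np.array([0.3, 1.0]), np.array([-0.2, 0.5])
print(bregman(phi, grad_phi, xi, xi_p))  # 0.5*|xi - xi'|^2 = 0.25
print(bregman(phi, grad_phi, xi, xi))    # 0 when xi = xi'
\end{verbatim}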
As mentioned above, the deviation of a given connection $\nabla$ from its corresponding metric-compatible (i.e. Levi-Civita) counterpart can be measured by $\alpha T$, where $T$ corresponds to the invariant \textit{Amari-Chentsov} tensor~\cite{cencov2000statistical,amari1982differential} and $\alpha \in \mathbb{R}$ is a free parameter. The invariance of $T$ implies that the value of $\alpha$ entirely determines the connection, and the corresponding geometry can be obtained from a divergence of the form
\begin{align}
\pazocal{D}_{\alpha}(p || q)&= \frac{4}{1-\alpha^2} \int_{\boldsymbol\chi}
\left \{ 1 - p^{\frac{1-\alpha}{2}}q^{\frac{1+\alpha}{2}} \right \}d\mu(x)~,
\end{align}
which is known as \emph{$\alpha$-divergence}. As important particular cases, if $\alpha=0$ then $\pazocal{D}_{\alpha}$ becomes the square of Hellinger's distance, and if $\alpha=1$ then it gives the well-known Kullback-Leibler divergence
\begin{equation}
\pazocal{D}_{\mathrm{KL}}(p||q) = \int_{\boldsymbol\chi} p(x) \log \left( \frac{p(x)}{q(x)} \right)
d\mu(x)~.
\end{equation}
It is worth noting that geometrical structures are invariant under certain types of transformations.
For example, consider a divergence $\tilde{\pazocal{D}}$ given by $\tilde{\pazocal{D}}[\xi;\xi']:=F(\pazocal{D}[\xi;\xi'])$, with $F$ a monotone and differentiable function satisfying $F(0)=0$
~\footnote{Given two divergences $\pazocal{D}$ and $\tilde{\pazocal{D}}$, there is always a function $F:\mathbb{R}^3\to\mathbb{R}$ such that
$\tilde{\pazocal{D}}[\xi;\xi']=F(\pazocal{D}[\xi;\xi'],\xi,\xi')$. Building on this fact, one can consider three levels of similarity: (i) when $F$ depends only on the first argument --- which then implies the corresponding geometries are essentially the same, (ii) when $F$ can be expressed as $F(x,y,z) = f(x) g(y,z)$ --- which implies conformal-projective equivalence (see Sec.~\ref{sec:proj}), and (iii) the more general case.}
Then, it can be shown using Eqs.~\eqref{eq:FisherMetric} and \eqref{eq:connections} that the metric and connections induced by $\pazocal{D}$ and $\tilde{\pazocal{D}}$ are related as follows:
\begin{equation}
\label{eq:class_def}
\tilde{g}=F'(0)\, g,
\quad
\tilde{\Gamma} = F'(0)\,\Gamma,
\quad
\tilde{\Gamma}^{*} = F'(0)\,\Gamma^{*}~.
\end{equation}
Therefore, $\tilde{\pazocal{D}}$ gives rise to exactly the same geometrical structure when $F'(0)=1$, and a scaled version otherwise. More general transformations between divergences and their corresponding geometries are discussed in Section~\ref{sec:proj}.
\subsection{A Pythagorean relationship in curved spaces via the R\'enyi divergence}\label{sec:IIc}
The connection induced by the KL divergence and its natural coordinates is flat (i.e. $\Gamma_{ijk}(\xi)=\Gamma^*_{ijk}(\xi)=0$).
However, this does not hold for $\alpha$-divergences when $\alpha \neq1$, which retain the same Fisher's metric but induce a connection with constant sectional curvature $\omega=(1-\alpha^2)/4$ over the whole manifold~\cite[Theorem~7]{wong2018logarithmic}.
This results in a spherical ($S^n$) geometry for $\alpha \in (0,1)$, or a hyperbolic ($H^n$) geometry for $\alpha \notin (0,1)$.
A non-zero curvature affects the relationship between geodesics~\footnote{Geodesics are the straight curves established by the connection, which in non-Riemannian geometries are not the same as the shortest curves between two points.}: if the ``$\alpha$-geodesic'' joining $p$ and $q$ is orthogonal (with respect to the Fisher metric) to the one
joining $q$ and $r$, then
\begin{align}
\label{eq:Add_prop}
\pazocal{D}_{\alpha}(p || r) = & \, \pazocal{D}_{\alpha}(p || q) + \pazocal{D}_{\alpha}(q || r) \nonumber \\ & - \frac{1-\alpha^2}{4}\pazocal{D}_{\alpha}(p || q)\pazocal{D}_{\alpha}(q || r)~,
\end{align}
resulting in a deviation from the standard ``Pythagorean relationship''
that is observed for the case of $\alpha=1$~\cite{amari2000methods}.
However, one can rewrite Eq.~(\ref{eq:Add_prop}) as
\begin{equation}
\label{eq:divsphere}
1-\omega \pazocal{D}_{\alpha}(p || r) = \big(1-\omega \pazocal{D}_{\alpha}(p || q)\big)
\big(1-\omega \pazocal{D}_{\alpha}(q || r)\big),
\end{equation}
which describes the relationship between angles on the sphere or hyperbolic space --- depending on the sign of $\omega$~\cite{amari2000methods}.
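To see the equivalence between Eq.~\eqref{eq:Add_prop} and Eq.~\eqref{eq:divsphere} explicitly, write $a=\pazocal{D}_{\alpha}(p || q)$ and $b=\pazocal{D}_{\alpha}(q || r)$ and expand the product,
\begin{equation*}
(1-\omega a)(1-\omega b) = 1 - \omega\big(a + b - \omega\, a b\big)~,
\end{equation*}
so that equating the right-hand side with $1-\omega \pazocal{D}_{\alpha}(p || r)$ and using $\omega=(1-\alpha^2)/4$ recovers Eq.~\eqref{eq:Add_prop}.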
Interestingly, Eq.~\eqref{eq:divsphere} suggests that a divergence of the form
\begin{align}
\label{eq:Renyialpha}
\mathcal{D}_{\gamma}(p || q)
:=& \frac{1}{\gamma} \log\big(1 + \gamma(1+\gamma)\pazocal{D}_{\alpha} (p||q)\big) \\
=& \frac{1}{\gamma} \log\int_{\boldsymbol\chi} p(x)^{\gamma + 1}q(x)^{-\gamma} d\mu(x) \label{eq:Renyi_div}
\end{align}
with $\alpha=-1-2\gamma$ would recover the ``Pythagorean relationship.''
In fact, $\mathcal{D}_{\gamma}$ can be recognized as the well-known R\'enyi divergence of order $\gamma+1$~\cite{wong2018logarithmic,amari2021information}, noting that we follow Ref.~\cite{valverde2019case} in adopting a shifted indexing.
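As an illustrative numerical check of Eqs.~\eqref{eq:Renyialpha} and \eqref{eq:Renyi_div} (a toy sketch using arbitrary discrete distributions and the normalized form of $\pazocal{D}_{\alpha}$; it is not part of the formal development):
\begin{verbatim}
import numpy as np

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
gamma = 0.7
alpha = -1 - 2 * gamma

# alpha-divergence for normalized discrete distributions.
d_alpha = 4 / (1 - alpha**2) * (1 - np.sum(p**((1 - alpha) / 2)
                                           * q**((1 + alpha) / 2)))
# Renyi divergence: via the alpha-divergence, and via its direct form.
d_renyi_a = np.log(1 + gamma * (1 + gamma) * d_alpha) / gamma
d_renyi_b = np.log(np.sum(p**(gamma + 1) * q**(-gamma))) / gamma
print(d_renyi_a, d_renyi_b)  # both ~0.14, identical up to round-off
\end{verbatim}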
The R\'{e}nyi divergence is an $f$-divergence with $f(x)=x^{\gamma}$ but it is not a Bregman divergence; however, one can re-cast it as a ``Bregman-like'' divergence~\cite{wong2018logarithmic}. To see this, let's consider $\tilde{p}_\xi \in\mathcal{M}$ to be a deformed exponential family distribution of the form (see Appendix~\ref{app:def_exp_dist})
\begin{equation}
\label{eq:def_exp}
\tilde{p}_\xi(x) = \big(1+ \gamma \xi \cdot h(x)\big)^{-\frac{1}{\gamma}} e^{\varphi_{\gamma}(\xi)}~,
\end{equation}
where $h(x)\in\mathbb{R}^d$ is a vector of sufficient statistics of $x$ and $\varphi_{\gamma}$ is a normalising potential given by
\begin{equation}\label{eq:def_varphi}
\varphi_{\gamma}(\xi) := -\log \int_{\boldsymbol \chi} (1+\gamma \xi \cdot h(x))^{-\frac{1}{\gamma}}d\mu(x)~.
\end{equation}
Note that Eq.~\eqref{eq:def_exp} gives a standard exponential family distribution when $\gamma\to0$. By defining $\mathcal{D}_\gamma[\xi;\xi'] :=\mathcal{D}_\gamma(\tilde{p}_\xi || \tilde{p}_{\xi'})$ to be the corresponding contrast function of the R\'enyi divergence, then one can show that~\cite[Th.13]{wong2018logarithmic}
\begin{equation}\label{eq:def_bigd}
\mathcal{D}_\gamma[\xi;\xi']
= \frac{1}{\gamma} \log(1 + \gamma \xi\cdot \eta^{\prime})
- \varphi_{\gamma}(\xi) - \psi_{\gamma}(\eta')~,
\end{equation}
which resembles Eq.~\eqref{eq:legendre} but with the factor $\xi \cdot \eta$ replaced by a logarithm. Above,
\begin{equation}
\psi_{\gamma}(\eta) :=
\min_{\xi} \Big\{ \frac{1}{\gamma} \log(1+\gamma \xi\cdot \eta) - \varphi_{\gamma}(\xi)\Big\}
\end{equation}
is a generalization of the Fenchel–Legendre transform of $\varphi_{\gamma}$, which has conjugate coordinates established by
\begin{subequations}
\begin{align}
\eta &= \frac{1}{1+\gamma \xi\cdot\mathrm{D}\varphi_{\gamma}(\xi)} \mathrm{D}\varphi_{\gamma}(\xi)
~,\label{eq:dual_var0}\\
\xi &= \frac{1}{1+\gamma \xi \cdot \mathrm{D}\psi_{\gamma}(\eta)} \mathrm{D}\psi_{\gamma}(\eta)~,
\label{eq:dual_var1}
\end{align}
\end{subequations}
with $\mathrm{D}\varphi$ denoting the Euclidean gradient of $\varphi$.
Finally, it is worth noting that
\begin{equation}
\label{eq:grad_EV}
\mathrm{D}\varphi_{\gamma} (\xi) = \mathbb{E}_{\xi}\left\{\frac{h(X)}{1+\gamma \xi \cdot h(X)} \right\}=:\mathbb{E}_{\xi}\{Z_{\xi}(h)\}~,
\end{equation}
where $X$ is a random variable that follows the distribution $p_\xi(x)$, $h(X)$ denotes the sufficient statistics of $X$, and $Z_{\xi}(h)$ is defined implicitly as the quantity within the curly brackets. Hence these generalized Fenchel-Legendre dual coordinates can be alternatively expressed as
\begin{equation}\label{eq:dual_esp}
\eta = \frac{1}{1+\gamma \xi\cdot \mathbb{E}_{\xi}\{Z_{\xi}(h)\}} \mathbb{E}_{\xi}\{Z_{\xi}(h)\}~.
\end{equation}
For the case of $\gamma=0$, Eq.~\eqref{eq:dual_esp} reduces to the well-known relationship given by $\eta = \mathbb{E}_{\xi}\{h(X)\}$ (see Appendix~\ref{app:special_cases} for further comments).
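As an illustrative numerical check of Eq.~\eqref{eq:grad_EV} (a toy sketch with a one-dimensional sufficient statistic on a small discrete domain; the particular values of $\gamma$, $\xi$, and $h$ are arbitrary):
\begin{verbatim}
import numpy as np

gamma, xi = 0.3, 0.8
h = np.arange(6, dtype=float)      # sufficient statistic h(x) = x on {0,...,5}

def varphi(xi):
    return -np.log(np.sum((1 + gamma * xi * h) ** (-1 / gamma)))

def p_tilde(xi):
    return (1 + gamma * xi * h) ** (-1 / gamma) * np.exp(varphi(xi))

eps = 1e-6
grad = (varphi(xi + eps) - varphi(xi - eps)) / (2 * eps)  # D varphi(xi)
expect = np.sum(p_tilde(xi) * h / (1 + gamma * xi * h))   # E_xi{ Z_xi(h) }
print(grad, expect)  # the two values agree
\end{verbatim}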
\begin{figure}
\includegraphics[width=8.5cm,height=6cm,keepaspectratio]{geo_prev.png}
\caption{A schematic diagram depicting the three classes of geometrical structures that arise from their $\alpha$-value. The curved (i.e. $\alpha \neq \pm1$) geometries are characterized by the $\alpha$- and R\'enyi divergences, both of which are conformally-projectively related to the KL divergence --- which in turn generates a flat geometry.}
\end{figure}
\subsection{Conformal-projective classes}\label{sec:proj}
Conformal transformations are operations over geometric structures that are angle-preserving, amounting to (pseudo) rotations and dilation of the points in the manifold. Technically, a conformal transformation on $\mathcal{M}$ is defined as an invertible map $\omega:\mathcal{M}\to \mathcal{M}$
such that the metric induced by the pull-back map $\omega_{*}:T_{\omega(p)}\mathcal{M}\to T_p\mathcal{M}$ is related to the original metric up to a scaling factor $\lambda:\mathcal{M}\to\mathbb{R}$, namely
\begin{equation}
\label{eq:conformal_metric}
g_{p}(\omega_{*}(X),\omega_{*}(Y))=\lambda(\omega(p))g_{\omega(p)}(X,Y)
\end{equation}
for all $X,Y\in T_{\omega(p)}\mathcal{M}$.
Correspondingly, two metrics $g$ and $\tilde{g}$ are said to be \emph{conformally equivalent} if they can be linked via a conformal factor $\lambda$ as in Eq.~\eqref{eq:conformal_metric}.
Due to their non-Riemannian geometry, ``structure-preserving'' geometric transformations on statistical manifolds are not fully specified by their effect on the metric, but must also specify their effect on the connections --- which may diverge from metric-dependence via Chentsov's tensor. This characterization can be done by relying on the notion of \emph{projective equivalence}: two connections $\nabla$ and $\tilde{\nabla}$ are said to be \emph{projectively equivalent} if there exists a 1-form $\nu = a_{i}(\xi) \text{d}\xi^i$ that satisfies
\begin{equation}
\Gamma^{k}_{ij}(\xi) = \tilde{\Gamma}^{k}_{ij}(\xi) + a_{i}(\xi)\delta^{k}_{j}
+ a_{j}(\xi)\delta^{k}_{i}~,
\end{equation}
with $\delta_i^j$ the Kronecker delta~\footnote{Equivalently, projective equivalence can be defined by $\nabla_X Y = \tilde{\nabla}_X Y + \nu(X)Y + \nu(Y)X$ for any smooth pair of vector fields $X$ and $Y$.}.
A convenient way to put these notions together and build conformal-projective transformations is by considering transformations over divergences.
Two divergences $\pazocal{D}$ and $\tilde{\pazocal{D}}$ are said to belong to the same \emph{conformal-projective class} if two conditions are met: (i) their induced metrics are conformally equivalent, and (ii) their induced connections are projectively equivalent. It can be shown that two divergences belong to the same conformally-projective class if and only if they satisfy
\begin{equation}
\tilde{\pazocal{D}}[\xi;\xi']=\lambda(\xi)\pazocal{D}[\xi;\xi']~,
\end{equation}
with $\lambda$ being the \emph{conformal-projective factor}~\footnote{In general $\lambda$ could depend on both $\xi$ and $\xi'$~\cite{Kurose2002,amari2021information}; however, for the purposes of this paper we restrict ourselves to consider only ``left conformal-projective factors'' (i.e. $\lambda(\xi)$).}.
Let us now study the relationship between the geometries induced by $\mathcal{D}_{\gamma}$, $\pazocal{D}_{\alpha},$ and $\pazocal{D}_{\text{KL}}$.
By considering the inverse of Eq.~\eqref{eq:Renyialpha}, one finds that the function
\begin{equation}\label{eq:F}
F(x) = \frac{e^{\gamma x} -1}{(1+\gamma)\gamma}~,
\end{equation}
establishes the diffeomorphism $F(\mathcal{D}_{\gamma}[\xi;\xi'])= \pazocal{D}_{\alpha}[\xi;\xi']$, which reveals that the R\'enyi divergence and $\alpha$-divergences
generate exactly the same geometry (as
described by Eqs.~\eqref{eq:class_def}).
Building on this fact, and leveraging the Legendre-like form of the R\'enyi entropy shown in Eq.~(\ref{eq:def_bigd}), a direct calculation shows that the action of $F$ on $\mathcal{D}_{\gamma}$ can be expressed as a Bregman divergence $\pazocal{D}_{\phi}$ scaled by a conformal-projective factor~\cite[Th.~1]{wong2019logarithmic}:
\begin{equation}
\label{eq:conformal}
F(\mathcal{D}_{\gamma}[\xi;\xi'])
= \kappa(\xi)\pazocal{D}_{\phi}[\xi;\xi']~.
\end{equation}
Above, $\phi$ is a scalar potential given by $\phi(\xi)= e^{\gamma \varphi_0(\xi)}$ with $\varphi_0(\xi)$ as given in Eq.~\eqref{eq:def_varphi}, and the conformal-projective factor $\kappa$ has the form
\begin{equation}\label{eq:kappa}
\kappa(\xi)=-\frac{1}{\gamma\phi(\xi)}~.
\end{equation}
Moreover, note that $\pazocal{D}_{\phi}$ describes a dually-flat geometry, belonging to the same equivalence class as the KL divergence.
Thus, these results together establish that R\'enyi's $\mathcal{D}_{\gamma}$, $\pazocal{D}_{\alpha},$ and $\pazocal{D}_{\text{KL}}$ belong to the same conformal-projective equivalence class.
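As a consistency check, the identity $F(\mathcal{D}_{\gamma})=\pazocal{D}_{\alpha}$ can be verified numerically on discrete distributions. The sketch below assumes the conventions $\mathcal{D}_{\gamma}(p||q)=\frac{1}{\gamma}\log\sum_x p^{1+\gamma}q^{-\gamma}$ and $\pazocal{D}_{\alpha}(p||q)=\frac{4}{1-\alpha^{2}}\big(1-\sum_x p^{(1-\alpha)/2}q^{(1+\alpha)/2}\big)$ with $\alpha=-(1+2\gamma)$; other normalisations only rescale the constants in $F$.
\begin{verbatim}
import numpy as np

# Numerical check of F(D_gamma) = D_alpha on two random discrete
# distributions.  The divergence conventions below are assumptions of
# this illustration (order-(1+gamma) Renyi divergence, alpha-divergence
# with alpha = -(1 + 2*gamma)); other normalisations rescale F.
rng = np.random.default_rng(0)
p = rng.random(8); p /= p.sum()
q = rng.random(8); q /= q.sum()

gamma = 0.7
alpha = -(1.0 + 2.0 * gamma)           # so that (1-alpha)/2 = 1+gamma

D_gamma = np.log(np.sum(p**(1 + gamma) * q**(-gamma))) / gamma
D_alpha = 4.0 / (1 - alpha**2) * (
    1 - np.sum(p**((1 - alpha) / 2) * q**((1 + alpha) / 2)))

F_of_D = (np.exp(gamma * D_gamma) - 1) / ((1 + gamma) * gamma)
assert np.isclose(F_of_D, D_alpha)     # both divergences induce the same geometry
\end{verbatim}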
To conclude, let us present a derivation of the functional form of $\kappa(\xi)$ used in Eq.~\eqref{eq:conformal} following Ref.~\cite{wong2019logarithmic}. The metric induced by $\mathcal{D}_{\gamma}[\xi;\xi']$ is given by
\begin{align}\label{eq:metrixx}
\tilde{g}_{ij}(\xi)
:= -\left .
\partial_{i , j\prime}
\mathcal{D}_{\gamma}[\xi;\xi'] \right|_{\xi=\xi'}
= \kappa (\xi)\partial_{ij}\phi(\xi)~,
\end{align}
and hence $\tilde{g}_{ij}(\xi)=\kappa(\xi) g_{ij}(\xi)$. Furthermore, its induced connection and metric curvature can be found to be
\begin{subequations}
\begin{align}
\label{eq:kappaGamma}
\tilde{\Gamma}_{ij}^{\,\, k}(\xi) &= \frac{\partial_{i}\kappa(\xi)}{\kappa(\xi)} \delta_{j}^{k} + \frac{\partial_{j}\kappa(\xi)}{\kappa(\xi)} \delta_{i}^{k}~, \\
\label{eq:kappaRiemann}
\tilde{R}_{ijk}^{\quad \, l}(\xi) &= \kappa(\xi) \left( \partial_{jk}\frac{1}{\kappa(\xi)} \delta_{i}^{l} - \partial_{ik}\frac{1}{\kappa(\xi)} \delta_{j}^{l} \right)~.
\end{align}
\end{subequations}
Hence, by introducing the 1-form $\nu = \mathrm{d}\log \kappa(\xi)$,
one can identify the affine connection induced by $\tilde{\Gamma}_{ij}^{\,\, k}(\xi)$ as being projectively flat. This 1-form --- or equivalently, the conformal factor $\kappa(\xi)$ --- can be derived from the
Riemann curvature tensor, which for spaces of constant sectional curvature takes the form $R_{ijk}^{\quad \, l}=K (g_{jk}\delta_{i}^{l}-g_{ik}\delta_{j}^{l})$, with $K \in \mathbb{R}$ corresponding to its scalar curvature. As mentioned in Section~\ref{sec:IIc}, the geometry induced by the $\alpha$-divergence has curvature $\omega=(1-\alpha^2)/4$ throughout the whole manifold, and hence its Riemann tensor can be rewritten as
\begin{equation}\label{eq:sss}
R_{ijk}^{\quad \, l}=\frac{1+\alpha}{2} (\tilde{g}_{jk}\delta_{i}^{l}-\tilde{g}_{ik}\delta_{j}^{l})~,
\end{equation}
where a factor $\frac{1-\alpha}{2}=\gamma +1$ from $\omega$ has been absorbed by the metric~\footnote{Note that the metrics coming from the $\alpha$ and R\'enyi divergences are conformally related $\tilde{g}=(\gamma + 1)g$ as seen by~\eqref{eq:Renyialpha}}. Moreover, using the fact that the Riemann tensor is left unchanged by the conformal-projective transformation (i.e. $\tilde{R}_{ijk}^{\quad \, l} = R_{ijk}^{\quad \, l}$), and recognising that $K=-\gamma$, one can use Eqs.~\eqref{eq:metrixx}, \eqref{eq:kappaRiemann} and \eqref{eq:sss} to show that
\begin{equation}\label{eq:sads}
\frac{1}{\kappa(\xi)} = -\gamma \phi(\xi) + \sum_i a_{i}\xi^{i} + b~,
\end{equation}
for some $a_{i},b \in \mathbb{R}$. Finally, as the linear terms can be absorbed in the definition of $\phi$, Eq.~\eqref{eq:sads} leads to the expression for $\kappa(\xi)$ as shown above.
\section{Orthogonal foliations in curved statistical manifolds}
\label{sec:hiearchical}
This section presents the study of orthogonal foliations in curved statistical manifolds. For simplicity of the exposition, the rest of the paper focuses on multivariate distributions of $n$ binary random variables --- i.e. distributions of the form $p(x)$ where $x=(x_1,\dots,x_n)$ with $x_i\in\{0,1\}$, and hence $\boldsymbol\chi = \{0,1\}^n$.
\subsection{Orthogonal foliations on flat-projective spaces}
\label{sec:dual_foliation}
Let us consider a parametrization $\nu$ of the manifold $\mathcal{M}$. Then, for a given $p_\nu\in\mathcal{M}$ we define
\begin{equation}\label{eq:M}
\tilde{\textbf{M}}_k\{p_\nu\} := \{ q_{\nu'}\in\mathcal{M} | \nu_i' = \nu_i \;\forall i=1,\dots,k\}~,
\end{equation}
which establishes a nested structure on the manifold of the form
\begin{align}
\label{eq:foliationM}
\{p\} = \tilde{\textbf{M}}_n\{p\}
\subset \tilde{\textbf{M}}_{n-1}\{p\}
\subset \dots
\subset \tilde{\textbf{M}}_{0}\{p\} = \mathcal{M}~.
\end{align}
The parametrization $p_\nu$ also induces a natural basis for the cotangent space at each $p\in\mathcal{M}$, which we denote by $\partial_{\nu_i}\in T^*_{p}\mathcal{M}$. To study the geometry of this basis, let's consider
the functional form of $\mathcal{D}_\gamma$ induced by $\nu$, which is given by $\mathcal{D}_\gamma[\nu ; \nu'] := \mathcal{D}_\gamma( p_\nu || p_{\nu'} )$. Then, the inner product between the basis elements $\partial_{\nu_i}$ can be calculated as
\begin{equation}
\langle \partial_{\nu_i}, \partial_{\nu'_j} \rangle
= -\partial_{\nu_i,\nu^\prime_j} \mathcal{D}_\gamma[\nu;\nu']\big|_{\nu'=\nu} = \tilde{g}^{ij}(\nu)~.
\end{equation}
The properties of $\mathcal{D}_\gamma$ guarantee that $\tilde{g}^{ij}(\nu)$ is positive-definite, and hence it has a well-defined inverse for each $\nu$, which we denote by $r^{ij}(\nu) := \big(\tilde{g}^{-1}(\nu)\big)^{ij}$.
By denoting as $\theta$ the primal coordinates
with respect to $r$, one can then define
\begin{equation}\label{eq:E}
\tilde{\textbf{E}}_k := \{ p_{\theta}\in\mathcal{M} | \theta_j=\theta_j^u,\;\forall j>k\},
\end{equation}
where $\theta^u$ denote the $\theta$-coordinates of the uniform distribution $u$. It is direct to verify that
\begin{align}
\label{eq:foliation}
\{u\}=\tilde{\textbf{E}}_0 \subset \tilde{\textbf{E}}_1 \subset \dots \subset \tilde{\textbf{E}}_n = \mathcal{M}~.
\end{align}
Interestingly, $\tilde{\textbf{E}}_k$ grows with $k$ while $\tilde{\textbf{M}}_k$ shrinks, such that for each $k$ their combined dimensions sum up to $n$ --- being enough to account for the dimensionality of $\mathcal{M}$. Furthermore, because these complementary dimensions are orthogonal, their intersection cannot be empty.
We summarize these ideas in the following definition.
\begin{definition}
For a given parametrization $\nu$ of $\mathcal{M}$ for which $\tilde{\textbf{E}}_k$ exists, then the \emph{orthogonal foliation} of $\mathcal{M}$ associated to $p_\nu$ is the collection of sets $\big\{\tilde{\textbf{M}}_k\{p_\nu\},\tilde{\textbf{E}}_k\big\}$.
\end{definition}
Please note that the bases of $T_{p}\mathcal{M}$ and $T^*_{p}\mathcal{M}$ determined by the generalized Fenchel-Legendre dual coordinates established by Eqs.~\eqref{eq:dual_var0} and \eqref{eq:dual_var1} are
not orthogonal under the inner product related to the scalar potential $\varphi$ and its conjugate if $\gamma>0$, as discussed in Appendix~\ref{app:pythagorean}. Therefore, the standard relationship between geometric duality and Fenchel-Legendre duality that holds for $\gamma=0$ is broken in curved statistical manifolds.
Nonetheless, projective-flatness allows for the metric induced by $\mathcal{D}_\gamma$ to be expressible in coordinates where the bases are manifestly orthogonal up to a conformal-projective factor, so that $\langle\partial_{\xi_i},\partial_{\eta^j}\rangle = \kappa(\theta) \delta_{i}^{j}$ with $\kappa(\theta)$ as defined in Eq.~\eqref{eq:kappa}. Then, $\theta$ and its Fenchel-Legendre conjugate established by Eq.~\eqref{eq:scalar_pot} define
a set of conformal-projective coordinates.
Crucially, orthogonal foliations satisfy a Pythagorean property, as shown by the following lemma.
\begin{lemma}\label{lemma:orto}
Given an orthogonal foliation $\{\tilde{\textbf{M}}_k\{p\},\tilde{\textbf{E}}_k\}$,
if $p\in \tilde{\textbf{M}}_k\{p\}$, $r\in \tilde{\textbf{E}}_k$, and $q \in \tilde{\textbf{M}}_k\{p\} \cap \tilde{\textbf{E}}_k$ then
\begin{equation}\label{eq:Pyth_rel}
\mathcal{D}_{\gamma}(p||r)
= \mathcal{D}_{\gamma}(p||q)
+ \mathcal{D}_{\gamma}(q||r)~.
\end{equation}
\end{lemma}
\begin{proof}
See Appendix~\ref{app:pythagorean}.
\end{proof}
It is important to note that while building orthogonal coordinates is a relatively simple construction, it does not in general guarantee a Pythagorean relationship. As a matter of fact, although the equivalence between R\'enyi's and $\alpha$-divergences
ensures that both divergences induce the same geometry, only R\'enyi's exhibits a correspondence between orthogonality on the metric and a Pythagorean relationship on the divergence (see Section~\ref{sec:IIc}).
To illustrate these ideas, let us consider a particular construction where we take $\tilde{\textbf{M}}_k$ as the set of probability distributions with fixed expectation values, denoted by $\eta$, and construct its orthogonal complement. With $\phi$ as the potential encoding this change of coordinates, we define its conjugate potential $\bar{\psi}= \min_{\xi}(\xi\cdot \eta -\phi(\xi))$. In this way, the primal coordinates $\bar{\xi}$ orthogonal to $\eta$ follow from $\mathrm{D}(\xi\cdot \eta -\phi(\xi))$, that is,
\begin{equation}
\bar{\xi}^i = \mathbb{E}_{\xi}\{h^i (x)\} -\frac{1}{\gamma \kappa(\xi)}(\mathrm{D}\log \kappa(\xi))^i~,
\end{equation}
where the first term on the right-hand side follows from $\eta^i = \mathbb{E}_{\xi}\{h^i (x)\}$. The primal coordinates $\bar{\xi}^i$ allow one to construct an orthogonal complement to $\tilde{\textbf{M}}_k$, and from Eq.~\eqref{eq:exp_fam} one finds that
\begin{equation}
\bar{\textbf{E}}_k(c_{k^+})
=
\{ p_{\bar{\xi}}(x) \in \mathcal{M} \:|\: \bar{\xi}_{k^+} = c_{k^+} \}.
\end{equation}
\subsection{Higher-order hierarchical decomposition}
Using an orthogonal foliation, we now introduce the notion of hierarchical decomposition on curved statistical manifolds.
\begin{definition}
The $k$-th order $\gamma$-projection of $p\in\mathcal{M}$ under the orthogonal foliation $\{\tilde{\textbf{M}}_k\{p\},\tilde{\textbf{E}}_k\}$ is
\begin{equation}\label{eq:def_projection}
\tilde{p}^{(k)} := \argmin_{q \in \tilde{\textbf{E}}_k} \mathcal{D}_{\gamma}(p||q)
= \argmin_{q \in \tilde{\textbf{E}}_k} \pazocal{D}_{\alpha}(p||q)~.
\end{equation}
\end{definition}
Above, the minimum under $\mathcal{D}_{\gamma}$ and $\pazocal{D}_{\alpha}$ is the same, as both divergences are related by a monotonic function, as shown by Eq.~\eqref{eq:Renyialpha}.
A useful property of the orthogonal foliation is that it enables a simple characterization of $\tilde{p}^{(k)}$ for $k>0$, as shown in the next Lemma.
\begin{lemma}\label{lemma:inc}
The $k$-th order $\gamma$-projection of $p\in\mathcal{M}$ satisfies
$\{\tilde{p}^{(k)}\} = \tilde{\textbf{E}}_k \cap \tilde{\textbf{M}}_{k}\{p\}$.
\end{lemma}
\begin{proof}
Consider $q\in\tilde{\textbf{E}}_k \cap \tilde{\textbf{M}}_{k}\{p\}$. It is direct to verify that $p,q\in\tilde{\textbf{M}}_k\{p\}$
and $q,\tilde{p}^{(k)}\in\tilde{\textbf{E}}_k$.
Then, Lemma~\ref{lemma:orto}
implies that
\begin{equation}\label{eq:asdvbg}
\mathcal{D}_{\gamma}(p||\tilde{p}^{(k)})
= \mathcal{D}_{\gamma}(p||q)
+ \mathcal{D}_{\gamma}(q||\tilde{p}^{(k)}) \geq \mathcal{D}_{\gamma}(p||q) ~.
\end{equation}
Additionally, Eq.~\eqref{eq:def_projection} and the fact that $q\in\tilde{\textbf{E}}_k$ imply that $\mathcal{D}_{\gamma}(p||q) \geq \mathcal{D}_{\gamma}(p||\tilde{p}^{(k)})$, which together with Eq.~\eqref{eq:asdvbg} show that $\mathcal{D}_{\gamma}(p||\tilde{p}^{(k)})=\mathcal{D}_{\gamma}(p||q)$. This, combined again with Eq.~\eqref{eq:asdvbg}, implies in turn that $\mathcal{D}_{\gamma}(q||\tilde{p}^{(k)})=0$, which can only be satisfied if $q=\tilde{p}^{(k)}$.
\end{proof}
Following Ref.~\cite{amari2001information}, let us consider the mixed coordinates $\nu_k =(\eta_{k^{-}};\xi_{k^{+}})$. Then, due to the duality of $\eta$ and $\xi$, one can verify that $\tilde{p}^{(k)}$ has the mixed coordinates $\tilde{\nu}_k = (\eta_{k^{-}};0)$, where $\eta_{k^{-}}$ are the constraints of order up to $k$ of $p$. Interestingly, note that the zeroth-order projection $\tilde{p}^{(0)}$ is equal to the uniform distribution $u$ for all $p\in\mathcal{M}$ and all $\gamma$.
\begin{figure}
\includegraphics[width=8.5cm,height=6cm,keepaspectratio]{folliation.png}
\caption{\textit{(left)} Orthogonal foliation of manifold $\mathcal{M}$. \textit{(right)} Projections onto $\textbf{E}_1$ leaf (associated with $\alpha=1$) and its deformation $\tilde{\textbf{E}}_1$ related to $\alpha\neq\pm1$.}
\label{fig:fol_struct}
\end{figure}
With these definitions at hand, we can prove the following result.
\begin{theorem}\label{th:main}
For a given $p\in\mathcal{M}$, the collection of the $\gamma$-projections $\tilde{p}^{(n-1)},\dots,u$ satisfy
\begin{equation}
\label{eq:div_decomp}
\mathcal{D}_{\gamma}(p||u) = \sum_{k=1}^{n}\mathcal{D}_{\gamma}(\tilde{p}^{(k)}||\tilde{p}^{(k-1)})~.
\end{equation}
\end{theorem}
\begin{proof}
Let us start by noting that both $\tilde{p}^{(n-1)}$ and $u$ belong to $\tilde{\textbf{E}}_{n-1}$, while
both $p$ and $\tilde{p}^{(n-1)}$ belong to $\tilde{\textbf{M}}_{n-1}$ due to Lemma~\ref{lemma:inc}.
Therefore, Lemma~\ref{lemma:orto} implies that
\begin{equation}
\label{eq:cool}
\mathcal{D}_{\gamma}(p||u) = \mathcal{D}_{\gamma}(p||\tilde{p}^{(n-1)})
+\mathcal{D}_{\gamma}(\tilde{p}^{(n-1)}||u)~.
\end{equation}
The rest of the proof follows by applying the same rationale recursively to $\mathcal{D}_{\gamma}(\tilde{p}^{(n-1)}||u)$.
\end{proof}
To better understand the deformation of the layers induced by $\gamma$, it is beneficial to consider the mean-field theory approach presented in Ref.~\cite{amari2001information2}. Let us consider a classic Ising model for which two layers suffice to describe the system, and focus on its projection to $E_1$. In Ref.~\cite{amari2001information2} the $m$ and $e$ projections denote the solution and naive approximations, respectively, which are both orthogonal. Moreover, the $\alpha$-projection draws the trajectory of solutions in between. In the current picture, however, the submanifolds are deformed in such a way that the $\alpha$-projection becomes orthogonal, with $\alpha=\pm 1$ left as fixed points (see Figure~\ref{fig:fol_struct}).
\section{Generalising the Maximum Entropy principle}
\label{sec:maxentRenyi}
\subsection{R\'{e}nyi's entropy and related quantities}
Consider a manifold of distributions whose support allows a flat distribution. Then, the $\gamma$-\emph{negentropy} of $p$ is defined as
\begin{equation}
\pazocal{N}_{\gamma}(p)
:= \Lambda - H_{\gamma}(p)~,
\end{equation}
with $\Lambda$ being the R\'enyi entropy of the uniform distribution, which corresponds to $\log |\bm \chi|$ for finite $\bm \chi$ or $\log n$ in the continuum, and
\begin{equation}
H_{\gamma}(p) = \frac{-1}{\gamma}\log \int_{\boldsymbol\chi} p(x;\xi)^{\gamma+1} d\mu(x)
\end{equation}
being the well-known R\'enyi entropy. This definition recovers the standard Shannon entropy and negentropy in the case $\gamma=0$~\footnote{One can interpret a continuous decrease in the constant sectional $\alpha$-curvature of the manifold manifesting as a decrease in the order of R\'enyi's entropy in statistics, eventually converging to Shannon's for $\gamma \to 0$ limit.}.
Another quantity of interest is the $\gamma$-\emph{Total Correlation}, defined as
\begin{align}
\textrm{TC}_{\gamma}(\bm X^n)
=
\sum_{i=1}^n H_{\gamma}(X_i) - H_{\gamma}
(\bm X^n),
\end{align}
where $\bm X^n := (X_1,\dots,X_n)$ is a random vector that distributes according to $p_\xi(X=x)$ with $x=(x_1,\dots,x_n)$.
This is a generalization of the well-known Total Correlation for Shannon's entropy (also known as Multi-information~\cite{studeny1998multiinformation}), which extends Shannon's mutual information to the case of 3 or more variables~\cite{rosas2019quantifying}. In particular, if $n=2$ the total correlation reduces to a R\'enyi mutual information.
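As an illustration, $\mathrm{TC}_{\gamma}$ can be evaluated by brute force for a small system; the sketch below uses a randomly generated joint distribution of three binary variables and the convention for $H_\gamma$ given above.
\begin{verbatim}
import numpy as np

# Brute-force evaluation of the gamma-Total Correlation for n=3 binary
# variables, assuming H_gamma(p) = -(1/gamma) log sum_x p(x)^(gamma+1).
# The joint distribution is randomly generated, for illustration only.
def renyi_H(p, gamma):
    return -np.log(np.sum(p**(gamma + 1))) / gamma

n, gamma = 3, 0.5
rng = np.random.default_rng(1)
p = rng.random(2**n); p /= p.sum()
p = p.reshape((2,) * n)                    # joint distribution over {0,1}^3

marginal_H = [renyi_H(p.sum(axis=tuple(j for j in range(n) if j != i)), gamma)
              for i in range(n)]
TC = sum(marginal_H) - renyi_H(p.ravel(), gamma)
print(TC)   # vanishes when p factorises into the product of its marginals
\end{verbatim}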
\subsection{A hierarchical decomposition of R\'{e}nyi's entropy}
With a hierarchical decomposition $p,\tilde{p}^{(n-1)},\dots,u$ at hand, we are now poised to address the problem of entropy decomposition based on the relevance of each order.
\begin{lemma}\label{lemma:telescopic}
Consider the $\gamma$-projections of $p\in\mathcal{M}$ under an orthogonal foliation $\{\tilde{\textbf{M}}_k\{p\},\tilde{\textbf{E}}_k\}$ such that $\tilde{\textbf{E}}_0 = \{u\}$ with $u$ the uniform distribution.
Then, the following holds for $l<k$:
\begin{equation}
\mathcal{D}_{\gamma}(\tilde{p}^{(k)}||\tilde{p}^{(l)}) = H_{\gamma}(\tilde{p}^{(l)}) - H_{\gamma}(\tilde{p}^{(k)})~.
\end{equation}
\end{lemma}
\begin{proof}
A direct application of Eq.\eqref{eq:div_decomp} shows that
\begin{equation}
\mathcal{D}_{\gamma}(\tilde{p}^{(k)}||u)
= \mathcal{D}_{\gamma}(\tilde{p}^{(k)}||\tilde{p}^{(l)})
+
\mathcal{D}_{\gamma}(\tilde{p}^{(l)}||u)~.
\end{equation}
Then, the desired result follows from re-ordering the terms and using the fact that $\mathcal{D}_{\gamma}(q||u) = \Lambda - H_{\gamma}(q)$ for any $q\in\mathcal{M}$.
\end{proof}
\begin{corollary}\label{corr:simple}
For any multivariate distribution $p$,
\begin{align}
\pazocal{N}_{\gamma}(p)
&= \mathcal{D}_{\gamma}(p||u)~, \\
\mathrm{TC}_{\gamma}(\bm X^n)
&=\mathcal{D}_{\gamma}\left( p\Big|\Big|\prod_{k=1}^n p_{X_k} \right)~.
\end{align}
\end{corollary}
Using this lemma, we can put forward our main result.
\begin{theorem}\label{teo:2}
Consider $p\in\mathcal{M}$ and an orthogonal foliation $\{\tilde{\textbf{M}}_k\{p\},\tilde{\textbf{E}}_k\}$ such that $\tilde{\textbf{E}}_0 = \{u\}$. Then,
\begin{equation}
\tilde{p}^{(k)} = \argmax_{q\in \tilde{\textbf{M}}_k\{p\}} H_{\gamma}(q)~.
\end{equation}
Additionally, the R\'enyi negentropy can be decomposed as
\begin{equation}\label{eq:negent_dec}
\pazocal{N}_{\gamma}(p) = \sum_{k=1}^n \Delta^{(k)} H_{\gamma}(p)~,
\end{equation}
with $\Delta^{(k)} H_{\gamma}(p):= H_{\gamma}\big(\tilde{p}^{(k-1)}\big)
-
H_{\gamma}\big(\tilde{p}^{(k)}\big) >0$ quantifying the relevance of the $k$-th order constraints.
\end{theorem}
\begin{proof}
Because $\tilde{p}^{(k)}\in\tilde{\textbf{M}}_k$ (see Lemma~\ref{lemma:inc}), Lemma~\ref{lemma:orto} implies that any $r\in \tilde{\textbf{M}}_k$ satisfies
\begin{equation}
\mathcal{D}_{\gamma}(r||u)
=
\mathcal{D}_{\gamma}(r||\tilde{p}^{(k)})
+
\mathcal{D}_{\gamma}(\tilde{p}^{(k)}||u)~.
\end{equation}
Therefore, $\mathcal{D}_{\gamma}(r||u)\geq \mathcal{D}_{\gamma}(\tilde{p}^{(k)}||u)$ for all $r\in\tilde{\textbf{M}}_k$, and hence it follows that
\begin{equation}
\tilde{p}^{(k)}
=
\argmin_{q\in\tilde{\textbf{M}}_k}
\mathcal{D}_{\gamma}(q||u)
= \argmax_{q\in\tilde{\textbf{M}}_k}
H_{\gamma}(q)~.
\end{equation}
Above, the first equality is due to the fact that $\tilde{p}^{(k)}\in\tilde{\textbf{M}}_k$, and the second equality uses the fact that $\mathcal{D}_{\gamma}(q||u) = \Lambda - H_{\gamma}\big(q\big)$.
To prove Eq.~\eqref{eq:negent_dec}, one can use Corollary~\ref{corr:simple} and Theorem~\ref{th:main} to show that
\begin{equation}
\pazocal{N}_{\gamma}(p) =
\mathcal{D}_{\gamma}(p||u)
=\sum_{k=1}^n \mathcal{D}_{\gamma}(\tilde{p}^{(k)}||\tilde{p}^{(k-1)})~.
\end{equation}
The desired result is then a consequence of Lemma~\ref{lemma:telescopic}.
\end{proof}
Above, $\Delta^{(k)} H_{\gamma}(p)$ accounts for the relevance of the $k$-th order interactions. In particular, the first order term accounts for all the non-interactive part:
\begin{equation}
\Delta^{(1)} H_{\gamma}(p)
= \sum_{j=1}^n
\pazocal{N}_{\gamma}(X_j)
= \sum_{j=1}^n
\Big( \log 2 - H_{\gamma}\big(X_j\big)
\Big)
\end{equation}
with $\pazocal{N}_{\gamma}(X_j)$ being the marginal negentropy of $X_j$. The remaining terms can be seen to be equal to
\begin{equation}
\sum_{k=2}^n \Delta^{(k)} H_{\gamma}(p)
= \text{TC}_{\gamma}(p)
\end{equation}
showing that the $\text{TC}_\gamma$ captures all the correlated part of the R\'enyi negentropy, following the relationship observed in Shannon's case for $\gamma=0$~(as discussed in Ref.\cite{rosas2019quantifying}).
\subsection{Maximum R\'enyi entropy distributions over constraints on average observables}
Let us now consider a collection of observables $h$ over a system of $n$ binary variables defined as
\begin{equation}
h^{i,k}(x) = \prod_{j=1}^k x_{I_i^k(j)}~,
\end{equation}
with $h^{i,k}$ being the $i$-th observable of $k$-th order, with $I_i^k(j)$ being an appropriate assignment of indices. Then, one can define the following coordinates:
\begin{equation}
\nu^{i,k} := \mathbb{E}\{h^{i,k}(x)\}~.
\end{equation}
For example, $\nu^{i,1}$ are of the form $\mathbb{E}\{x_i\}$ and $\nu^{j,2}$ of the form $\mathbb{E}\{x_r x_s\}$. Importantly, given that $x_1,\dots,x_n$ are binary variables, one can check that, once $\nu^{i,l}$ for all $i$ and $l\leq k$ are fixed, this determines all the $k$-th order marginals~\footnote{The $k$-th order marginals of $p$ are the distributions considering $k$ of the $n$ variables that compose $x$, which are obtained by marginalising the other $n-k$ variables.}. Crucially, this implies that the parameters $\nu$ as a whole determine a unique distribution $p_\nu(x)$, and hence $\nu$ is a valid parametrization of the corresponding statistical manifold~\cite{bialek2012statistical,rosas2016understanding}.
Let us now consider the family of sets $\tilde{\textbf{M}}_k$, as defined in Eq.~\eqref{eq:M}, associated with this parametrization. According to the previous discussion, $\tilde{\textbf{M}}_k\{p\}$ is the set of all distributions for $x$ that are compatible with the $k$-th order marginals. To determine the form of the corresponding $k$-th order $\gamma$-projection, we use the following lemma.
\begin{lemma}\label{lemma:asdggh}
The solution of the optimization problem
\begin{equation}\label{eq:optimis}
\argmax_{q\in\mathcal{M}} H_{\gamma}(q)
\qquad \text{s.t.}
\quad
\nu^{i,l} = \mathbb{E}_q\{h^{i,l}(x)\}
\end{equation}
for all $i$ and $l\leq k$ gives a projection of the form
\begin{equation}\label{eq:maxent_dist}
\tilde{p}_\theta^{(k)}(x) = e^{-z_\gamma(\theta)} \big(1 + \gamma \theta \cdot h(x) \big)^{1/\gamma} ~,
\end{equation}
with $\theta^{i,l}=0$ for all $l>k$, and a normalization factor given by $z_\gamma(\theta) = \log \sum_{x} \big(1 + \gamma\theta\cdot h(x) \big)^{1/\gamma}$.
\end{lemma}
\begin{proof}
Using Theorem~\ref{teo:2}, it is clear that $\tilde{p}_\theta^{(k)}$ can be obtained by finding the extrema of a Lagrangian of the form
\begin{align}
L(q, \theta_0, \{\theta_j\}) =& H_{\gamma}(q) + \theta_0 \Big(\sum_i q_i -1\Big) \nonumber \\
&+ \sum_j \theta_j \Big( \sum_k q_k h^{j}(x_k) - \nu^{j} \Big)~,
\end{align}
where $q$ is a discrete distribution and $\theta_j$ are Lagrange multipliers.
The desired result follows from imposing $\partial L/\partial q_i = 0$ and $\partial L/\partial \theta_j = 0$.
\end{proof}
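For small $n$, distributions of this form can be evaluated by brute-force enumeration, normalising numerically rather than through $z_\gamma(\theta)$. The following sketch assumes second-order observables, arbitrary illustrative values of $\theta$, and that $1+\gamma\,\theta\cdot h(x)$ remains positive.
\begin{verbatim}
import numpy as np
from itertools import product, combinations

# Brute-force evaluation of a distribution of the form of Eq. (maxent_dist)
# for n=4 binary variables, normalised numerically.  The observables h are
# all first- and second-order products of variables; the theta values are
# arbitrary illustrative numbers, not fitted to any data.
n, gamma = 4, 0.3
states = np.array(list(product([0, 1], repeat=n)))       # all 2^n configurations

h = [states[:, i] for i in range(n)]                                        # h^{i,1}
h += [states[:, r] * states[:, s] for r, s in combinations(range(n), 2)]    # h^{i,2}
h = np.array(h, dtype=float)                              # shape (n_obs, 2^n)

theta = 0.1 * np.ones(h.shape[0])                         # Lagrange multipliers
base = 1.0 + gamma * (theta @ h)                          # must remain positive
weights = np.maximum(base, 0.0) ** (1.0 / gamma)
p = weights / weights.sum()                               # the gamma-projection (sketch)

print(p.sum(), (p * h).sum(axis=1))   # normalisation and resulting moments E{h}
\end{verbatim}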
Efficient numerical methods to estimate distributions of the form specified by Eq.~\eqref{eq:maxent_dist} will be developed in a separate publication.
\section{Conclusion}\label{sec:discussion}
This paper shows how the non-Euclidean geometry of curved statistical manifolds naturally leads to a MEP that uses the R\'enyi entropy, generalising the traditional MEP framework based on Shannon's --- which takes place on flat manifolds. This generalization of the MEP has three important consequences:
\begin{itemize}
\item It highlights special geometrical properties of the R\'enyi entropy, which make it stand apart from other generalized entropies.
\item It provides a solid mathematical foundation for the numerous applications of the R\'enyi entropy and divergence.
\item It enables a range of novel methods of analysis for the statistics of complex systems.
\end{itemize}
R\'{e}nyi's entropy and divergence represent one of many routes by which the classic information-theoretic definitions can be extended. One fundamental feature of the R\'enyi divergence --- that this work thoroughly exploits --- is the correspondence that it establishes between orthogonality with respect to Fisher's metric and a Pythagorean relationship in the divergence (which does not hold in the geometry induced by e.g. the $\alpha$-divergence). This correspondence is the key property that allows us to build hierarchical foliations, despite the fact that in curved manifolds the link between geometric and Fenchel-Legendre duality is generally broken. It is relevant to highlight that the correspondence between orthogonality and the Pythagorean relationship is not guaranteed by other divergences such as the $\alpha$-divergence, which makes entropies such as Tsallis'~\footnote{For an explanation of the close relationship between the Tsallis entropy and the $\alpha$-divergence, please see~\cite{Tsallis-alphadiv}.} not well suited to extend the MEP --- at least from an information geometry perspective~\footnote{For an interesting related discussion, including thermodynamic aspects, see Ref.~\cite{scarfone2020study}}.
Considering that extensions of the R\'enyi entropy exist (e.g. Ref.~\cite{de2016geometry}),
an interesting open question is to determine the range of divergences that satisfy these properties.
These findings are in agreement with recent research that is revealing special features of the R\'enyi entropy and divergence in the context of statistical inference and learning. In particular, Refs.~\cite{esposito2019generalization,esposito2020robust} show that the R\'enyi divergence can provide bounds on the generalization error of supervised learning algorithms. Also, Ref.~\cite{jizba2019maximum} shows that the R\'enyi entropy belongs to a class of functionals that are particularly well-suited for inference and estimation. Put together, these findings suggest that the R\'enyi entropy and divergence might be capable of playing an important role in the development of future data analysis and artificial intelligence frameworks.
This work opens the door to novel data-analysis approaches to study high-order interactions. While commonly neglected, high-order statistics have recently been proven to be instrumental in a wide range of phenomena at the heart of complex systems, including the self-organising capabilities of cellular automata~\citep{rosas2018information}, gene-to-gene information flow~\cite{cang2020inferring}, neural information processing~\citep{wibral2017partial}, high-order brain functions~\citep{luppi2020synergistic1,luppi2020synergistic2}, and emergent phenomena~\citep{rosas2020reconciling,varley2021emergence}. However, exhaustive modeling of high-order effects requires an exponential number of parameters; for that reason, practical investigations need to rely on heuristic modeling methods (see e.g.~\cite{ganmor2011sparse,shimazaki2015simultaneous}). In contrast, our framework allows us to perform projections while optimising the manifold's curvature in order to best match empirical statistics. Importantly, $k$-th order projections on curved spaces lead to distributions that capture statistical phenomena of order higher than $k$ without increasing the dimensionality of the parametric family. The development of this line of research is part of our future work.
Another set of promising applications is found in condensed matter systems, where the R\'enyi entropy is often introduced as a measure of the degree of quantum entanglement. In particular, the R\'enyi entropy results from a heuristic generalization of the Von Neumann entropy, which has the important benefits of being (i) more suitable for numerical simulations~\cite{hastings2010measuring} and (ii) easier to measure in experiments~\cite{islam2015measuring}. Moreover, the R\'enyi entropy has been shown to be sensitive to features of quantum systems such as the central charge~\cite{GeometricMutInf}, and knowledge of it at all orders encodes the whole entanglement spectrum of a quantum state~\cite{ShannonRenyiQuantumSpin}.
Moreover, in strongly coupled systems, R\'enyi entropies have been essential for establishing a connection between quantum entanglement and gravity~\cite{Dong:2016fnf,Barrella:2013wja}. More recently, the R\'enyi mutual information has been taking a central role in the identification of phase transitions~\cite{GeometricMutInf,DetectingPhaseTwithRenyi,PhysRevLett.107.020402}.
The mathematical framework established in this work serves as a solid basis for these investigations, and further allows the exploration of novel application of information geometry tools in these scenarios.
It is our hope that this contribution may serve to widen the range of applicability of the MEP, while fostering theoretical and practical investigations related to the properties of curved statistical manifolds.
\begin{acknowledgments}
The authors thank Shunichi Amari for careful reading of the manuscript and a number of insightful suggestions, and Ryota Kanai and Yike Guo for supporting this research.
F.E.R. is supported by the Ad Astra Chandaria foundation.
\end{acknowledgments}
We solve the Hubbard model on the square lattice with longer range
hoppings, defined by the Hamiltonian:
\begin{equation}
H=\sum_{ij\sigma}t_{ij}c_{i\sigma}^{\dagger}c_{j\sigma}-\mu\sum_{i\sigma}n_{i\sigma}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}\label{eq:Hubbard}
\end{equation}
with $i,j$ indexing lattice sites. $c_{i\sigma}^{\dagger}$ ($c_{i\sigma}$) denote creation (annihilation)
operators, $n_{i\sigma}=c_{i\sigma}^{\dagger}c_{i\sigma}$ the
density operator, $\mu$ the chemical potential, and $U$ the on-site
Hubbard interaction. The hopping amplitudes, depicted on Fig.~\ref{fig:hoppings}, are given by
\begin{equation}
t_{ij}=\left\{ \begin{array}{cc}
t, & \mathbf{r}_{i}=\mathbf{r}_{j}\pm\mathbf{e}_{x,y}\\
t', & \mathbf{r}_{i}=\mathbf{r}_{j}\pm\mathbf{e}_{x}\pm\mathbf{e}_{y}\\
t'', & \mathbf{r}_{i}=\mathbf{r}_{j}\pm2\mathbf{e}_{x,y}\\
0, & \mathrm{otherwise.}
\end{array}\right.
\end{equation}
where $\mathbf{e}_{x,y}$ are the lattice vectors in the $x$ and
$y$ directions. The bare dispersion is therefore
\begin{align}
\varepsilon_{\mathbf{k}} & =2t(\cos k_{x}+\cos k_{y})+4t'\cos k_{x}\cos k_{y}\nonumber \\
& \;\;+2t''(\cos2k_{x}+\cos2k_{y})\label{eq:bare_dispersion}
\end{align}
When $t'=t''=0$, the half-bandwidth is $D=4|t|$, but nonzero $t',t''$
in general make the bandwidth larger. Hereinafter, we express all
quantities in units of $D$, unless stated otherwise.
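For concreteness, the dispersion of Eq.~(\ref{eq:bare_dispersion}) and the corresponding non-interacting occupancy per spin can be tabulated as in the following sketch, where the values of $t$, $t'$, $t''$, $T$ and $\mu$ are purely illustrative:
\begin{verbatim}
import numpy as np

# Tabulation of the bare dispersion, Eq. (bare_dispersion), on a periodic
# k-grid, together with the non-interacting occupancy per spin.  The values
# of t, t', t'', T and mu below are purely illustrative (units of D = 4|t|).
t, tp, tpp = -0.25, 0.075, -0.05     # i.e. t' = -0.3 t and t'' = 0.2 t
nk = 64
k = 2 * np.pi * np.arange(nk) / nk
kx, ky = np.meshgrid(k, k, indexing="ij")

eps_k = (2 * t * (np.cos(kx) + np.cos(ky))
         + 4 * tp * np.cos(kx) * np.cos(ky)
         + 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)))

T, mu = 0.02, -0.1
n_sigma = np.mean(1.0 / (np.exp((eps_k - mu) / T) + 1.0))   # occupancy per spin
\end{verbatim}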
\begin{figure}[!ht]
\centering{}\includegraphics[width=2.8in, trim=2.0cm 1.0cm 2.0cm 0.0cm]{hoppings}
\caption{Definition of the tight-binding parameters on the square lattice.\label{fig:hoppings} }
\end{figure}
\section{Formalism} \label{sec:formalism}
The main goal of this paper is to study the superconducting (SC) phase of the two-dimensional Hubbard model
within the TRILEX approach introduced in Refs.~\onlinecite{Ayral2015} and \onlinecite{Ayral2015c}.
TRILEX is based on a bosonic decoupling of the interaction and a self-consistent
approximation of the electron-boson vertex $\Lambda$ with a quantum
impurity model. The decoupling of the on-site interaction is done
by an exact Hubbard-Stratonovich transformation, leading to a model of non-interacting
electrons coupled to some auxiliary bosonic modes representing charge
and spin fluctuations.
We also study two methods which can be regarded as simplifications of the TRILEX method, namely $GW$+EDMFT\cite{Sun2002,Biermann2003,Sun2004,Ayral2012,Ayral2013,Biermann2014} and $GW$\cite{Hedin1965,Hedin1999}.
In $GW$+EDMFT, vertex corrections are neglected in the non-local part of the self-energy and polarization.
As both decay to zero, this additional approximation is negligible at very long distances.
Due to the full treatment of
the local vertex corrections, $GW$+EDMFT can capture the Mott transition, and we use it to obtain superconducting results
in the doped Mott insulator regime.
In the $GW$ method, vertex corrections are neglected altogether, and the self-energy and polarization are entirely calculated
from bold bubble diagrams. The $GW$ equations do not require the solution of an auxiliary quantum impurity model,
and are therefore less costly to solve. This additional approximation is justified only at weak coupling (see e.g. Ref.~\onlinecite{Ayral2012} for an illustration of its failure at large $U$),
and we therefore use it only in that weak-coupling regime to explore a large region of $(t',t'',T,n_\sigma)$ parameter space ($T$ denotes temperature, $n_\sigma$ the occupancy per spin).
Finally, let us stress that, in this paper, we use only
\textit{single-site} impurity models.
Cluster extensions of TRILEX are discussed in a separate work, Ref.~\onlinecite{Ayral2017c}. They naturally incorporate
the effect of short-range antiferromagnetic exchange $J$ and give a quantitative control on the accuracy of the solution.
\subsection{Superconducting Hedin equations}
In this section, we derive the Hedin equations\cite{Hedin1965,Hedin1999,Aryasetiawan2008} which give the self-energy and polarization as functions of the three-leg vertex function. The derivation holds in superconducting phases and is relevant for fluctuations not only in the charge channel\cite{Linscheid2015}, but also in the longitudinal and transversal spin channels.
\subsubsection{The electron boson action}
The starting point of the TRILEX method, as described in Ref.~\onlinecite{Ayral2015c}, is
the following electron-boson action:
\begin{align}
S_{\mathrm{eb}}[c,c^*,\phi] & =c^{*}_\mu\left[-G_{0}^{-1}\right]_{\mu\nu}c_\nu+\frac{1}{2}\phi_{\alpha}\left[-W_{0}^{-1}\right]_{\alpha\beta}\phi_{\beta}\label{eq:eb_action}\\
& \;+\lambda_{\mu\nu\alpha}c^{*}_\mu c_\nu \phi_{\alpha}\nonumber
\end{align}
where $c^{*}_{\mu}$ and $c_{\nu}$ are Grassmann fields describing fermionic
degrees of freedom, while $\phi_{\alpha}$ is a real bosonic field
describing bosonic degrees of freedom. Indices $\mu,\nu$ stand for space,
time, spin, and possibly other (e.g. band) indices $\mu\equiv(\mathbf{r}_{\mu},\tau_{\mu},\sigma_{\mu},\dots)$,
where $\mathbf{r}_{\mu}$ denotes a site of the Bravais lattice, $\tau_{\mu}$
denotes imaginary time and $\sigma_{\mu}$ is a spin index ($\sigma_{\mu}\in\{\uparrow,\downarrow\}$).
Indices $\alpha,\beta$ denote $\alpha\equiv(\mathbf{r}_{\alpha},\tau_{\alpha},I_{\alpha},\dots)$,
where $I_{\alpha}$ indexes the bosonic channels.
Repeated indices are summed over. Summation $\sum_{\mu}$ is shorthand for $\sum_{\mathbf{r}\in\mathrm{BL}}\sum_{\sigma}\int_{0}^{\beta}\mathrm{d}\tau$.
$G_{0,\mu\nu}$ (resp. $W_{0,\alpha\beta}$) is the non-interacting
fermionic (resp. bosonic) propagator.
Action (\ref{eq:eb_action}) can result from the exact Hubbard-Stratonovich
decoupling of the Hubbard interaction of Eq. (\ref{eq:Hubbard}) with
bosonic fields $\phi$, but it can also
simply describe an electron-phonon coupling problem.
In the present work, we are interested in a generalization of TRILEX able to accommodate superconducting order. To this purpose, we rederive the TRILEX equations starting from a more general action, written in terms of Nambu four-component spinors. The departure from the usual two-component Nambu-spinor formalism is necessary to allow for spin-flip electron-boson coupling in the action. Such terms do appear in the Heisenberg decoupling of the Hubbard interaction (see Section \ref{subsec:hubbard_model}).
We define a four-component Nambu-Grassmann spinor field as a column-vector
\begin{equation}\label{eq:Nambu_spinor}
\boldsymbol{\Psi}_{i}(\tau)\equiv
\left[
\begin{array}{c}
c_{\uparrow i}^*(\tau) \\ c_{\downarrow i}(\tau) \\ c_{\downarrow i}^*(\tau) \\ c_{\uparrow i}(\tau)
\end{array}
\right]
\end{equation}
where $i$ stands for the lattice-site $\mathbf{r}_i$.
In combined indices, analogously to \eqref{eq:eb_action}, a general electron-boson action can be written as
\begin{align}
S^\mathrm{Nambu}_{\mathrm{eb}}[\boldsymbol{\Psi},\phi] & =\frac{1}{2}\boldsymbol{\Psi}_{u}\left[-\boldsymbol{G}_{0}^{-1}\right]_{uv}\boldsymbol{\Psi}_{v}-\frac{1}{2}\phi_{\alpha}\left[W_{0}^{-1}\right]_{\alpha\beta}\phi_{\beta}\nonumber \\
& \;\;+\frac{1}{2}\phi_{\alpha}\boldsymbol{\Psi}_{u}\boldsymbol{\lambda}_{uv\alpha}\boldsymbol{\Psi}_{v}\label{eq:S_eb}
\end{align}
where $u,v$ is a combined index $u\equiv(\mathbf{r}_{u},\tau_{u}, a_{u},\dots)$, with $a,b,c,... \in \{0,1,2,3\}$ a Nambu index comprising the spin degree of freedom. The sum is redefined to go over all Nambu indices $\sum_{u} \equiv \sum_{\mathbf{r}\in\mathrm{BL}}\sum_{a}\int_{0}^{\beta}\mathrm{d}\tau$. Bold symbols are used for Nambu-index-dependent quantities.
This action does {\it not} depend on the conjugate field of $\boldsymbol{\Psi}$, because $\boldsymbol{\Psi}_i$ already contains all the degrees of freedom of the action \eqref{eq:eb_action} at the site $i$.
The partition function corresponding to the bare fermionic part of the action has the following form\cite{Zinn-Justin2002}
\begin{equation}
\int {\cal D}[\boldsymbol \Psi] e^{\frac{1}{2} \boldsymbol{\Psi}_u \boldsymbol{A}_{uv} \boldsymbol{\Psi}_v} = \left(\det \boldsymbol{A}\right)^\frac{1}{2}
\end{equation}
which is valid for any anti-symmetric matrix $\boldsymbol{A}$. Due to the unusual form of the action (no conjugated fields), the right-hand side is not the determinant of $\boldsymbol{A}$, but its square root, i.e. the Pfaffian.
We can redefine the propagators/correlation functions of interest as
\begin{align}
\boldsymbol{G}_{uv} & \equiv - \Big\langle\boldsymbol{\Psi}_u\boldsymbol{\Psi}_v\Big\rangle \\
W_{\alpha\beta} & \equiv -\langle(\phi_{\alpha}-\langle\phi_{\alpha}\rangle)(\phi_{\beta}-\langle\phi_{\beta}\rangle)\rangle,\label{eq:W_def_generic}\\
\boldsymbol{\chi}^{3,\mathrm{conn}}_{uv\alpha} & \equiv \Big\langle\boldsymbol{\Psi}_u\boldsymbol{\Psi}_v\phi_\alpha\Big\rangle -\Big\langle\boldsymbol{\Psi}_u\boldsymbol{\Psi}_v\Big\rangle\Big\langle\phi_\alpha\Big\rangle \label{eq:chi3conn_def}
\end{align}
The ``conn'' superscript denotes the connected part of the correlation function.
The renormalized vertex is defined by
\begin{equation}
\boldsymbol{\Lambda}_{uv\alpha} \equiv
\left[\boldsymbol{G}^{-1}\right]_{uw}\left[\boldsymbol{G}^{-1}\right]_{xv}\left[W^{-1}\right]_{\alpha\beta} \boldsymbol{\chi}^{3,\mathrm{conn}}_{wx\beta} \label{eq:Lambda_def}
\end{equation}
Actions \eqref{eq:S_eb} and \eqref{eq:eb_action} are physically equivalent, namely their partition functions coincide:
\begin{equation}
Z = \int {\cal D}[\boldsymbol{\Psi}, \phi] e^{-S_{\mathrm{eb}}^{\mathrm{Nambu}}[\boldsymbol{\Psi},\phi]}= \int {\cal D}[c, c^*, \phi] e^{-S_{\mathrm{eb}}[c, c^*,\phi]}\label{eq:Z_with_eb_Nambu}
\end{equation}
for an appropriate choice of $\boldsymbol{G}_0$ and $\boldsymbol{\lambda}$. Yet, they are not formally identical to each other, i.e. one cannot reconstruct \eqref{eq:S_eb} from \eqref{eq:eb_action} by mere relabeling $c\rightarrow\boldsymbol{\Psi}$, $\mu\nu\rightarrow uv$ (note the absence of Grassmann conjugation and the additional prefactors in the Nambu action).
Therefore, one must rederive the Hedin equations which connect the self-energy and polarization with the full propagators $\boldsymbol{G}$ and $W$ and the renormalized vertex $\boldsymbol{\Lambda}$. We present the full derivation using equations of motion in Appendix~\ref{sec:eom}; here we just present the final result:
\begin{subequations}\label{eq:Hedin_with_indices}
\begin{eqnarray}\label{eq:Hedin_Sigma}
\boldsymbol{\Sigma}_{uv}&=& -\boldsymbol{\lambda}_{uw\alpha}\boldsymbol{G}_{wx}W_{\alpha\beta}\boldsymbol{\Lambda}_{xv\beta} \\ \nonumber
&& +\frac{1}{2}\boldsymbol{\lambda}_{uv\alpha}W_{0,\alpha\beta}\langle\boldsymbol{\Psi}_{y}\boldsymbol{\lambda}_{yz\beta}\boldsymbol{\Psi}_{z}\rangle \\
\label{eq:Hedin_P}
P_{\alpha\beta}&=&\frac{1}{2}\boldsymbol{\lambda}_{uw,\alpha}\boldsymbol{G}_{xu}\boldsymbol{G}_{wv}\boldsymbol{\Lambda}_{vx,\beta}
\end{eqnarray}
\end{subequations}
Compared to the expressions in the normal case, there are extra factors $\frac{1}{2}$ in the Hartree term (second line in \eqref{eq:Hedin_Sigma}) and polarization \eqref{eq:Hedin_P}. These factors come from the fact that with four-spinors, the summation over spin is performed twice.
Note that the Hartree term can in principle have a frequency dependence if the bare electron-boson vertex has a dynamic part. On the other hand, the term beyond Hartree may also contribute to the static part of the self-energy, if the bosonic propagator and the bare electron-boson vertex contain a static part. In all the calculations in this paper, the Hartree term is static and is the sole contributor to the static part of the self-energy. We will thus henceforth omit the Hartree term, as it can be absorbed in the chemical potential.
\subsubsection{Connection to the Hubbard model\label{subsec:hubbard_model}}
In this section, we specify the bare propagators and vertices such that action \eqref{eq:S_eb} corresponds to the Hubbard model Eq.(\ref{eq:Hubbard}). We then rewrite the Hedin equations under the assumption of spatial and temporal translational symmetry.
The Hubbard-Stratonovich transformation leading from Eq.(\ref{eq:Hubbard})
to an action of the form Eq.(\ref{eq:eb_action}) relies on decomposing the Hubbard interaction
as follows
\begin{equation}
Un_{i\uparrow}n_{i\downarrow}=\frac{1}{2}\sum_{I}U^{I}n_{i}^{I}n_{i}^{I}\label{eq:Fierz_rewriting}
\end{equation}
with $n_{i}^{I}\equiv\sum_{\sigma\sigma'}c_{i\sigma}^{\dagger}\sigma_{\sigma\sigma'}^{I}c_{i\sigma'}$,
and $I$ running within $\{0,z\}$ (``Ising decoupling'') or $\{0,x,y,z\}$
(``Heisenberg decoupling'') ($\sigma^{0}$ is the $2\times2$ identity
matrix, $\sigma^{x/y/z}$ are the usual Pauli matrices). This identity
is verified, up to a density term, whenever\begin{subequations}
\begin{align}
U^{\mathrm{ch}}-U^{\mathrm{sp}} & =U\label{eq:Fierz_Ising}
\end{align}
in the Ising decoupling, or
\begin{align}
U^{\mathrm{ch}}-3U^{\mathrm{sp}} & =U\label{eq:Fierz_Heisenberg}
\end{align}
\end{subequations} in the Heisenberg decoupling. We have defined
$U^{\mathrm{ch}}\equiv U^{0}$ and $U^{\mathrm{sp}}\equiv U^{x}=U^{y}=U^{z}$.
Eqs (\ref{eq:Fierz_Ising}-\ref{eq:Fierz_Heisenberg}) leave a degree
of freedom in the choice of $U^{\mathrm{ch}}$ and $U^{\mathrm{sp}}$.
Here, the choice $U^{x}=U^{y}=U^{z}$ stems from the isotropy of the Heisenberg decoupling (contrary to the Ising decoupling); it can describe SU(2) symmetry-broken phases.
In the rest of the paper, we denote all quantities diagonal in the channel index with the channel as a superscript.
To make contact with the results of Ref.~\onlinecite{AokiPRB2015}, for $GW$ we will use the Ising decoupling with\begin{subequations}
\begin{align}
U^\mathrm{ch} = U/2, \;\;\; U^\mathrm{sp} = -U/2
\end{align}
while in TRILEX and $GW$+EDMFT (unless stated differently) we will use the Heisenberg decoupling with
\begin{align}
U^\mathrm{ch} = U/2, \;\;\;
U^\mathrm{sp} = -U/6.
\end{align}
\end{subequations}
because the AF instabilities discussed in Sec.~\ref{sec:AF_instability}, which violate the Mermin-Wagner theorem, are weaker in this scheme.
The equivalence of the action \eqref{eq:S_eb} with the Hubbard model is accomplished by setting
\begin{subequations}
\begin{equation}\label{eq:G0_matrix}
\boldsymbol{G}_{0,ij}(\tau) = \left[ \begin{array}{cccc}
0 & 0 & 0 & -G_{0,ji}(-\tau)\\
0 & 0 & G_{0,ij}(\tau) & 0 \\
0 & -G_{0,ji}(-\tau) & 0 & 0\\
G_{0,ij}(\tau) & 0 & 0 & 0\\
\end{array}\right]
\end{equation}
where $i,j$ denote lattice sites, and
\begin{align} \nonumber
G_{0,ij}(\tau) & = \sum_{i\omega,\mathbf{k}} e^{-i(\omega\tau-(\mathbf{r}_i-\mathbf{r}_j)\cdot\mathbf{k})}G_{0\mathbf{k}}(i\omega) \\
G_{0\mathbf{k}}(i\omega) & =\frac{1}{i\omega+\mu-\varepsilon_{\mathbf{k}}}\label{eq:G0_def}
\end{align}
\end{subequations}
The $4\times 4$ matrices are written in Nambu indices.
The bare vertex reads:\begin{subequations}
\begin{eqnarray}
\boldsymbol{\lambda}_{uv\alpha} &=& \delta_{\mathbf{r}_u\mathbf{r}_\alpha}\delta_{\mathbf{r}_u\mathbf{r}_v}\delta_{\tau_u\tau_\alpha}
[ \boldsymbol{\delta}_{\tau_u,\tau_v} \cdot \boldsymbol{\lambda}^{I_\alpha} ]_{a_ua_v}
\end{eqnarray}
with
\begin{eqnarray}
\boldsymbol{\delta}_{\tau_u,\tau_v} =
\left[ \begin{array}{cccc}
\delta_{\tau_u,\tau_v^+}&&& \\
&\delta_{\tau_u^+,\tau_v}&& \\
&&\delta_{\tau_u,\tau_v^+}& \\
&&&\delta_{\tau_u^+,\tau_v}
\end{array}\right]
\end{eqnarray}
and:
\begin{equation}
\boldsymbol{\lambda}^{I}=\left[\begin{array}{cccc}
& \sigma_{\uparrow\downarrow}^{I} & & \sigma_{\uparrow\uparrow}^{I}\\
-\sigma_{\uparrow\downarrow}^{I} & & -\sigma_{\downarrow\downarrow}^{I}\\
& \sigma_{\downarrow\downarrow}^{I} & & \sigma_{\downarrow\uparrow}^{I}\\
-\sigma_{\uparrow\uparrow}^{I} & & -\sigma_{\downarrow\uparrow}^{I}
\end{array}\right]
\end{equation}
\end{subequations}
Thus, this vertex is local and static. The bare bosonic propagators are also local and static, as well as diagonal in the channel index:
\begin{equation}
W^I_{0,ij}(\tau) = \delta_{ij}\delta_\tau U^I
\end{equation}
Our Hubbard lattice Nambu action reads (in explicit indices)
\begin{eqnarray} \label{eq:latt_eb_action_Psi}
& & S_{\mathrm{eb}}^{\mathrm{Nambu}}[\boldsymbol{\Psi},\phi]=\nonumber \\
& & \frac{1}{2} \sum_{i,j,a,b}\iint\mathrm{d}\tau\mathrm{d}\tau'
\boldsymbol{\Psi}_{ia}(\tau)
[-\boldsymbol{G}_{0}^{-1}]_{ia,jb}(\tau-\tau')
\boldsymbol{\Psi}_{jb}(\tau')\nonumber \\
& & +\frac{1}{2}\sum_{i}\sum_{I}\int\mathrm{d}\tau\phi_{i}^{I}(\tau)[-(U^I)^{-1}]\phi_{i}^{I}(\tau)\\ \nonumber
& & +\frac{1}{2}\sum_{i}\sum_{I}\int\mathrm{d}\tau\phi_{i}^{I}(\tau)\boldsymbol{\Psi}_{ia}(\tau)\boldsymbol{\lambda}_{ab}^{I}\boldsymbol{\Psi}_{ib}(\tau)
\end{eqnarray}
\subsubsection{Translational invariance, singlet pairing and SU(2) symmetry \label{subsec:symmetries}}
In this paper, we restrict ourselves to phases with no breaking of translational invariance.
With translational invariance in time and space, the propagators depend on frequency and momentum, and are matrices only in the Nambu index. We rewrite the Hedin equations derived above in the special case of the Hubbard action, Eq.~\eqref{eq:latt_eb_action_Psi}:
\begin{subequations} \label{eq:Nambu_Hedin}
\begin{align}
\boldsymbol{\Sigma}_{ab,\mathbf{k}}(i\omega) & =-\sum_{\mathbf{q},i\Omega}\sum_{c,d}\sum_{I}\boldsymbol{\lambda}_{ac}^{I}\boldsymbol{G}_{cd,\mathbf{k}+\mathbf{q}}(i\omega+i\Omega)\nonumber \\
& \;\;\;\;\;\times W_{\mathbf{q}}^{I}(i\Omega)\boldsymbol{\Lambda}_{db, \mathbf{kq}}^{I}(i\omega,i\Omega),\\
P_{\mathbf{q}}^{I}(i\Omega) & =\frac{1}{2}\sum_{\mathbf{k},i\omega}\sum_{a,b,c,d}\boldsymbol{\lambda}_{ac}^{I}\boldsymbol{G}_{ba,\mathbf{k}+\mathbf{q}}(i\omega+i\Omega)\nonumber \\
& \;\;\;\;\;\times\boldsymbol{G}_{cd,\mathbf{k}}(i\omega)\boldsymbol{\Lambda}_{db, \mathbf{kq}}^{I}(i\omega,i\Omega).
\end{align}
\end{subequations}
Similarly (see Appendix \ref{app:translational} for details),
\begin{eqnarray} \nonumber
\boldsymbol{\Lambda}^{I}_{\mathbf{kq},ab}(i\omega,i\Omega) &=& \sum_{cd}
[\boldsymbol{G}_{\mathbf{k}+\mathbf{q}}^{-1}(i\omega+i\Omega)]_{ac}[\boldsymbol{G}_\mathbf{k}^{-1}(i\omega)]_{db} \\ \label{eq:Lambda_translational}
&& \times \big(W^{I}_{\mathbf{q}}(i\Omega)\big)^{-1} \boldsymbol{\chi}^{3,\mathrm{conn},I}_{\mathbf{kq},cd}(i\omega,i\Omega)
\end{eqnarray}
Furthermore, we restrict ourselves to SU(2) symmetric phases, and allow only for singlet pairing,
therefore
\begin{equation}
\langle c^*_\uparrow(\tau) c^*_\uparrow(0)\rangle = \langle c^*_\downarrow(\tau) c^*_\downarrow(0)\rangle = 0
\end{equation}
We allow no emergent mixing of spin
\begin{equation}
\langle c^*_\uparrow(\tau) c_\downarrow(0)\rangle = \langle c^*_\downarrow(\tau) c_\uparrow(0)\rangle = 0
\end{equation}
These assumptions simplify the structure of the Green's function in Nambu space
\begin{align}
\boldsymbol{G}_{\mathbf{k}}(i\omega) =
\left[ \begin{array}{cccc}
& & -F_{\mathbf{k}}(i\omega)& -G_{\mathbf{k}}^{*}(i\omega) \\
&&G_{\mathbf{k}}(i\omega)& -F^{*}_{\mathbf{k}}(i\omega) \\
F_{\mathbf{k}}(i\omega)& -G_{\mathbf{k}}^{*}(i\omega)& &\\
G_{\mathbf{k}}(i\omega)&F^{*}_{\mathbf{k}}(i\omega)& & \\
\end{array}\right] \\ \nonumber
\end{align}
where the normal and anomalous Green's functions read
\begin{align}
G_{ij}(\tau-\tau')&\equiv-\langle c_{\uparrow i}(\tau) c^*_{\uparrow j}(\tau')\rangle\\
F_{ij}(\tau-\tau')&\equiv-\langle c^*_{\downarrow i}(\tau) c^*_{\uparrow j}(\tau')\rangle
\end{align}
Under the present assumptions $G_\mathbf{k}(\tau)$ is real, therefore $G_\mathbf{k}(-i\omega)=G_\mathbf{k}^*(i\omega)$.
Here note that SU(2) symmetry and lattice inversion symmetry imply $F_{ij}(\tau)=F_{ij}(-\tau)=F_{ji}(\tau)$ (this can be proven by rotating
$c_\sigma \rightarrow (-)^{\delta_{\uparrow,\sigma}} c_{\bar\sigma}$).
Therefore, if $F_{ij}(\tau)$ is real, $F_\mathbf{k}(i\omega)$ is also purely real. In this paper we consider only purely real $F_{ij}(\tau)$.
Similarly, the block structure of the self-energy is given by:
\begin{align}
\boldsymbol{\Sigma}_{\mathbf{k}}(i\omega) =
\left[ \begin{array}{cccc}
& & S^{*}_{\mathbf{k}}(i\omega)&\Sigma_{\mathbf{k}}(i\omega) \\
&&-\Sigma_{\mathbf{k}}^{*}(i\omega)& S_{\mathbf{k}}(i\omega) \\
-S^{*}_{\mathbf{k}}(i\omega)&\Sigma_{\mathbf{k}}(i\omega)& & \\
-\Sigma_{\mathbf{k}}^{*}(i\omega)&-S_{\mathbf{k}}(i\omega)& & \\
\end{array}\right] \\ \nonumber
\end{align}
$\Sigma$ and $S$ are the normal and anomalous self-energies defined by the Nambu-Dyson equation
\begin{equation}
{\boldsymbol G}^{-1}_\mathbf{k}(i\omega) = {\boldsymbol G}^{-1}_{0,\mathbf{k}}(i\omega) - {\boldsymbol\Sigma}_\mathbf{k}(i\omega)
\end{equation}
where the inverse is assumed to be the matrix inverse in Nambu indices.
Component-wise, under the present assumptions, the Nambu-Dyson equation reads
\begin{subequations}
\begin{align}
G_{\mathbf{k}}(i\omega) & =\frac{\left(G_{0\mathbf{k}}^{-1}(i\omega)-\Sigma_{\mathbf{k}}(i\omega)\right)^{*}}{|G_{0\mathbf{k}}^{-1}(i\omega)-\Sigma_{\mathbf{k}}(i\omega)|^{2}+|S_{\mathbf{k}}(i\omega)|^{2}}\label{eq:Dyson_Nambu_G}\\
F_{\mathbf{k}}(i\omega) & =\frac{-S_{\mathbf{k}}(i\omega)}{|G_{0\mathbf{k}}^{-1}(i\omega)-\Sigma_{\mathbf{k}}(i\omega)|^{2}+|S_{\mathbf{k}}(i\omega)|^{2}}\label{eq:Dyson_Nambu_F}
\end{align}
\end{subequations}
Furthermore, due to SU(2) symmetry, the full bosonic propagator will be identical in the $x$, $y$ and $z$ channels, so we define
\begin{equation}
\eta(I) = \left\{ \begin{array}{c}
\mathrm{ch}, \;\;\; I=0 \\
\mathrm{sp}, \;\;\; I=x,y,z
\end{array}
\right.
\end{equation}
and have $W^x=W^y=W^z=W^\mathrm{sp}$, and similarly for the renormalized vertex. This will simplify the calculation of the self-energy in the Heisenberg decoupling scheme, as the contribution coming from $x$ and $y$ bosons is the same as the one coming from the $z$ boson.
The bosonic Dyson equation is then always solved in only two channels
\begin{align}
W_{\mathbf{q}}^{\eta}(i\Omega) & =\frac{U^{\eta}}{1-U^{\eta}P_{\mathbf{q}}^{\eta}(i\Omega)}\label{eq:W_dyson}
\end{align}
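As an illustration of how these component-wise Dyson equations enter a numerical implementation, a minimal sketch is given below; the array names and shapes are choices of this illustration, not part of the formalism.
\begin{verbatim}
import numpy as np

# Sketch of the component-wise Nambu-Dyson equations (Dyson_Nambu_G/F) and of
# the bosonic Dyson equation (W_dyson).  G0_inv, Sigma, S, P_eta and U_eta are
# assumed to be numpy arrays indexed by (Matsubara frequency, kx, ky); array
# names and shapes are choices of this illustration, not part of the formalism.
def fermionic_dyson(G0_inv, Sigma, S):
    """Return (G, F); S is the (real) anomalous self-energy."""
    denom = np.abs(G0_inv - Sigma) ** 2 + np.abs(S) ** 2
    G = np.conj(G0_inv - Sigma) / denom
    F = -S / denom
    return G, F

def bosonic_dyson(U_eta, P_eta):
    """W^eta_q(iOmega) = U^eta / (1 - U^eta P^eta_q(iOmega))."""
    return U_eta / (1.0 - U_eta * P_eta)
\end{verbatim}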
\subsection{TRILEX, $GW$+EDMFT and $GW$ equations}
\subsubsection{Single-site TRILEX approximation for $d$-wave superconductivity\label{subsec:trilex_dwave}}
The single-site TRILEX method consists in approximating the renormalized vertex by a local quantity, obtained from an effective single-site impurity model
\begin{eqnarray} \label{eq:imp_eb_action_Psi}
& & S_{\mathrm{imp},\mathrm{eb}}^{\mathrm{Nambu}}[\boldsymbol{\Psi},\phi]=\nonumber \\
& & \frac{1}{2}\iint\mathrm{d}\tau\mathrm{d}\tau'\boldsymbol{\Psi}_{a}(\tau)[-\boldsymbol{\cal G}^{-1}]_{a,b}(\tau-\tau')\boldsymbol{\Psi}_{b}(\tau')\nonumber \\
& & +\frac{1}{2}\sum_{I}\iint\mathrm{d}\tau\mathrm{d}\tau'\phi^{I}(\tau)[-({\cal U}^{I})^{-1}](\tau-\tau')\phi^{I}(\tau')\\
& & +\frac{1}{2}\sum_{I}\int\mathrm{d}\tau\phi^{I}(\tau)\boldsymbol{\Psi}_a(\tau)\boldsymbol{\lambda}_{ab}^{I}\boldsymbol{\Psi}_{b}(\tau)\nonumber
\end{eqnarray}
Solving the TRILEX equations amounts to finding $\boldsymbol{\mathcal{G}}(i\omega)$ and
$\mathcal{U}(i\Omega)$ such that the full propagators in the effective impurity problem \eqref{eq:imp_eb_action_Psi} coincide with the local components of the ones obtained on the lattice, namely, we want to satisfy
\begin{subequations}
\begin{align}
\sum_{\mathbf{k}}\boldsymbol{G}_{\mathbf{k}}(i\omega)[\boldsymbol{\mathcal{G}},\mathcal{U}] & =\boldsymbol{G}_{\mathrm{imp}}(i\omega)[\boldsymbol{\mathcal{G}},\mathcal{U}]\label{eq:G_sc}\\
\sum_{\mathbf{q}}W_{\mathbf{q}}^{\eta}(i\Omega)[\boldsymbol{\mathcal{G}},\mathcal{U}] & =W_{\mathrm{imp}}^{\eta}(i\Omega)[\boldsymbol{\mathcal{G}},\mathcal{U}]\label{eq:W_sc}
\end{align}
\end{subequations}
where the vertex of Eq.~\eqref{eq:Nambu_Hedin} is approximated by the impurity vertex:
\begin{equation}
\boldsymbol{\Lambda}_\mathbf{kq} = \boldsymbol{\Lambda}_\mathrm{imp}[\boldsymbol{\mathcal{G}},\mathcal{U}]
\end{equation}
In this paper, we allow only strictly $d$-wave superconducting pairing. Thus
\begin{equation}
\sum_{\mathbf{k}}F_{\mathbf{k}}(i\omega)=0\label{eq:vanishing_local_F}
\end{equation}
which means that the anomalous components of the local Green's function $\boldsymbol{G}_\mathrm{loc}$ will be zero.
Therefore, at self-consistency (Eq.~\eqref{eq:G_sc}), the impurity's Green's function is normal and thus the anomalous components of the bare propagator on the impurity must vanish
\begin{equation}
\boldsymbol{\mathcal{G}}_{02/20/13/31}=0\label{eq:vanishing_anom_Weiss}
\end{equation}
This means that the impurity problem will be identical to the one in the normal-phase calculations, which can be expressed in terms of the original Grassmann fields
\begin{eqnarray} \label{eq:imp_eb_action_c}
& & S_{\mathrm{imp},\mathrm{eb}}[c^*,c,\phi]=\nonumber \\
& & \sum_\sigma\iint\mathrm{d}\tau\mathrm{d}\tau'c^*_{\sigma}(\tau)[-{\cal G}^{-1}](\tau-\tau')c_{\sigma}(\tau')\nonumber \\
& & +\frac{1}{2}\sum_{I}\iint\mathrm{d}\tau\mathrm{d}\tau'\phi^{I}(\tau)[-({\cal U}^{I})^{-1}](\tau-\tau')\phi^{I}(\tau')\\
& & +\sum_{I,\sigma,\sigma'}\int\mathrm{d}\tau\phi^{I}(\tau)c^*_\sigma(\tau)\lambda_{\sigma\sigma'}^{I}c_{\sigma'}(\tau)\nonumber
\end{eqnarray}
where the bare vertices (slim symbols denote the impurity quantities) are given by Pauli matrices $\lambda^I_{\sigma\sigma'} = \sigma^I_{\sigma\sigma'}$.
After integrating out the bosonic degrees of freedom, one obtains an electron-electron action with retarded interactions
\begin{eqnarray} \nonumber
S_{\mathrm{imp},\mathrm{ee}}[c^*,c] & =&\iint_{\tau,\tau'}\sum_{\sigma}c^{*}_{\sigma}(\tau)\left[-\mathcal{G}^{-1}(\tau-\tau')\right]c_{\sigma}(\tau')\\ \label{eq:S_imp}
&&+\frac{1}{2}\iint_{\tau\tau'}\sum_{I}n^{I}(\tau)\mathcal{U}^{I}(\tau-\tau')n^{I}(\tau')
\end{eqnarray}
This single-site impurity problem is solved using the numerically exact hybridization-expansion continuous-time quantum Monte Carlo (CTHYB or HYB-CTQMC\cite{Werner2006,Werner2007}), employing the segment algorithm.
The transverse spin-spin interaction term is dealt with in an interaction-expansion manner\cite{Otsuki2013}. See Ref.~\onlinecite{Ayral2015c} for details.
Under the present assumptions, the approximation for the renormalized vertex entering the Hedin equations Eq.~\eqref{eq:Nambu_Hedin} is
\newcommand{\Big(\Lambda_\mathrm{imp}^{\eta(I)}\Big)^* }{\Big(\Lambda_\mathrm{imp}^{\eta(I)}\Big)^* }
\newcommand{\Lambda_\mathrm{imp}^{\eta(I)} }{\Lambda_\mathrm{imp}^{\eta(I)} }
\begin{align} \label{eq:boldLambda_imp}
\boldsymbol{\Lambda}_{\mathbf{kq}}^{I}(i\omega,i\Omega) & \approx \boldsymbol{\Lambda}_{\mathrm{imp}}^{I}(i\omega,i\Omega) \\ \nonumber
= \boldsymbol{\lambda}^I \circ &
\left[
\begin{array}{cccc}
&\Lambda_\mathrm{imp}^{\eta(I)} &&\Lambda_\mathrm{imp}^{\eta(I)} \\
\Big(\Lambda_\mathrm{imp}^{\eta(I)}\Big)^* &&\Big(\Lambda_\mathrm{imp}^{\eta(I)}\Big)^* & \\
&\Lambda_\mathrm{imp}^{\eta(I)} &&\Lambda_\mathrm{imp}^{\eta(I)} \\
\Big(\Lambda_\mathrm{imp}^{\eta(I)}\Big)^* &&\Big(\Lambda_\mathrm{imp}^{\eta(I)}\Big)^* & \\
\end{array}
\right](i\omega,i\Omega)
\end{align}
where $\circ$ denotes the elementwise product $[A\circ B]_{ij} = A_{ij} B_{ij}$ (see Appendix \ref{app:Lambda_imp} for details).
We obtain $\Lambda_\mathrm{imp}^{\eta}$ from the three-point correlation function on the impurity using
\begin{align}
& \Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\label{eq:vertex_def}\\
& \equiv\frac{\tilde{\chi}^{3,\eta,\mathrm{conn}}_\mathrm{imp}(i\omega,i\Omega) }{G_{\mathrm{imp}}(i\omega)G_{\mathrm{imp}}(i\omega+i\Omega)(1-\mathcal{U}^{\eta}(i\Omega)\chi_{\mathrm{imp}}^{\eta}(i\Omega))}\nonumber
\end{align}
where
\begin{align}
& \tilde{\chi}^{3,\eta,\mathrm{conn}}_\mathrm{imp}(i\omega,i\Omega) \equiv \iint_{\tau\tau'}e^{i\tau\omega+i\tau'\Omega}\times\label{eq:chi3tilde_def}\\
& \;\;\;\times\Big(\langle c_{\uparrow}(\tau)c_{\uparrow}^{*}(0)n^{\eta}(\tau')\rangle_{\mathrm{imp}}+G_{\mathrm{imp}}(\tau)\langle n^{\eta}\rangle_{\mathrm{imp}}\Big).
\end{align}
and
\begin{align}
G_{\mathrm{imp}}(i\omega) & \equiv-\int_{0}^{\beta}\mathrm{d}\tau e^{i\tau\omega}\langle c_\uparrow(\tau)c_\uparrow^*(0)\rangle_{\mathrm{imp}}\label{eq:G_def}\\
W^\eta_{\mathrm{imp}}(i\Omega) & \equiv-\int_{0}^{\beta}\mathrm{d}\tau e^{i\tau\Omega}\langle(\phi(\tau)-\langle\phi\rangle)(\phi(0)-\langle\phi\rangle)\rangle_{\mathrm{imp}}\label{eq:W_def}\\
& =\mathcal{U}(i\Omega)-\mathcal{U}(i\Omega)\chi_{\mathrm{imp}}^{\eta}(i\Omega)\mathcal{U}(i\Omega)
\end{align}
\begin{equation}
\chi_{\mathrm{imp}}^{\eta}(i\Omega)\equiv\int_{0}^{\beta}\mathrm{d}\tau e^{i\tau\Omega}\left(\langle n^{\eta}(\tau)n^{\eta}(0)\rangle_{\mathrm{imp}}-\langle n^{\eta}\rangle_{\mathrm{imp}}^{2}\right)\label{eq:chi_def}
\end{equation}
We can now write the final expressions for the self-energy and polarization:
\begin{subequations}
\begin{eqnarray}
& &\Sigma_{\mathbf{k}}(i\omega) =\label{eq:Sigma_Hedin}\\ \nonumber
& & \;\;\;\;-\sum_{\eta}m_{\eta}\sum_{\mathbf{q},i\Omega}G_{\mathbf{k}+\mathbf{q}}(i\omega+i\Omega)W_{\mathbf{q}}^{\eta}(i\Omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\\
& & S_{\mathbf{k}}(i\omega)=\label{eq:S_Nambu}\\
& & \;\;\;\;-\sum_{\eta}(-)^{p_{\eta}}m_{\eta}\sum_{\mathbf{q},i\Omega}F_{\mathbf{k}+\mathbf{q}}(i\omega+i\Omega)W_{\mathbf{q}}^{\eta}(i\Omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber \\
& & P_{\mathbf{q}}^{\eta}(i\Omega)=\label{eq:P_Nambu}\\
& & \;\;\;\;2\sum_{\mathbf{k},i\omega}G_{\mathbf{k}+\mathbf{q}}(i\omega+i\Omega)G_{\mathbf{k}}(i\omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber \\
& & \;\;\;\;+(-)^{p_{\eta}}2\sum_{\mathbf{k},i\omega}F_{\mathbf{k}+\mathbf{q}}(i\omega+i\Omega)F_{\mathbf{k}}(i\omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber
\end{eqnarray}
\end{subequations}
with $p_{\mathrm{ch}}=1$, $p_{\mathrm{sp}}=0$, $m_{\mathrm{ch}}=1$. These equations hold in both the Heisenberg ($m_{\mathrm{sp}}=3$) and Ising ($m_{\mathrm{sp}}=1$) decoupling schemes.
In the expression for the polarization (Eq.~\eqref{eq:P_Nambu}) we have used lattice inversion symmetry and the symmetries of $\Lambda$ and $\boldsymbol{G}$. Under the present assumptions, $P$ is purely real (see Appendix \ref{app:realness_of_P} for details).
\subsubsection{$GW$+EDMFT}
The $GW$+EDMFT approximation can be regarded as a simplified version
of TRILEX where, in the calculation of the non-local ($\mathbf{r}\neq0$) part of self-energy and polarization (second line of Eqs. (\ref{eq:Sigma_Hedin_split}),(\ref{eq:S_Nambu_split}) and (\ref{eq:P_Nambu_split}) below), an additional approximation is made:
\begin{equation}
\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\approx1\label{eq:GW_EDMFT_approx}.
\end{equation}
The efficiency is gained because one need not measure the three-point correlator $\tilde{\chi}^{3,\eta,\mathrm{conn}}$ in the impurity model. The local self-energy and polarization still have vertex-corrections, but are identical to $\Sigma$ and $P$ on the impurity, which can be computed from only two-point correlators. Furthermore, the calculation of the non-local parts of the self-energy and polarization can now be performed in imaginary time, as opposed to the explicit summation over frequency needed in Eqs. (\ref{eq:Sigma_Hedin_split}),
(\ref{eq:S_Nambu_split}) and (\ref{eq:P_Nambu_split}).
\subsubsection{$GW$}
If we approximate the renormalized vertex by unity even in the calculation of the local part of self-energies,
we obtain an approximation similar to the $GW$ approximation, with the important difference that we are using a decoupling in both charge and spin channels, unlike the conventional $GW$ approaches which are limited to the charge channel.
This additional approximation eliminates the need for solving an impurity
problem, as now even the local self-energy and polarization are calculated by
the bubble diagrams Eq.~\eqref{eq:Sigma_Hedin}, \eqref{eq:S_Nambu} and \eqref{eq:P_Nambu}, simplified by Eq.~\eqref{eq:GW_EDMFT_approx}.
To summarize, the exact expressions for the self-energy and boson
polarization are compared to the approximate ones in $GW$, EDMFT,
$GW$+EDMFT, and TRILEX in Fig.~\ref{fig:selfenergy_approximations}.
\begin{figure*}[!ht]
\centering{}\includegraphics[width=5.4in,trim=0cm 0cm 0cm 0cm]{SelfEnergyTable}
\caption{ Self-energy/polarization approximations in various methods based
on a Hubbard-Stratonovich decoupling, compared to the exact expression. The renormalized
electron-boson vertex is either approximated by a local dynamical
quantity, or by the bare vertex. Orange triangle denotes the exact renormalized vertex, with full spatial dependence; gray triangle denotes the local approximation of the vertex. Colored circles denote terminals of the propagators and the vertex, and the (local) bare vertex at a given site; different colors denote different lattice sites $ijlm$. Internal site-indices are summed over, but when the vertex is local, only a single term in the summation survives.
\label{fig:selfenergy_approximations} }
\end{figure*}
\subsubsection{Normal phase calculation}
In the normal phase, a further simplification is that $F_\mathbf{k}(i\omega)=0$. Therefore, $S_\mathbf{k}(i\omega)=0$ and the Dyson equation \eqref{eq:Dyson_Nambu_G} reduces to the familiar form
\begin{align}
G_{\mathbf{k}}(i\omega) & =\frac{1}{i\omega+\mu-\varepsilon_{\mathbf{k}}-\Sigma_{\mathbf{k}}(i\omega)}\label{eq:nomral_G_dyson}
\end{align}
\section{Methods}
\label{sec:methods}
\subsection{Numerical implementation of the Hedin equations}
As shown in Ref.~\onlinecite{Ayral2015c}, it is numerically advantageous
to perform the computation in real space and to split the self-energy
and polarization in the following way:
\begin{subequations}
\begin{align}
& \Sigma_{\mathbf{r}}(i\omega)=\delta_{\mathbf{r}}\Sigma_{\mathrm{imp}}(i\omega)\label{eq:Sigma_Hedin_split}\\
& \;\;-\sum_{\eta}m_{\eta}\sum_{i\Omega}\tilde{G}_{\mathbf{r}}(i\omega+i\Omega)\tilde{W}_{\mathbf{r}}^{\eta}(i\Omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber \\
& S_{\mathbf{r}}(i\omega)=\label{eq:S_Nambu_split}\\
& \;\;\;\;-\sum_{\eta}(-)^{p_{\eta}}m_{\eta}\sum_{i\Omega}\tilde{F}_{\mathbf{r}}(i\omega+i\Omega)\tilde{W}_{\mathbf{r}}(i\Omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber \\
& P_{\mathbf{r}}^{\eta}(i\Omega)=\delta_\mathbf{r}P_{\mathrm{imp}}^{\eta}(i\Omega)\label{eq:P_Nambu_split}\\
& \;\;\;\;+2\sum_{i\omega}\tilde{G}_{\mathbf{r}}(i\omega+i\Omega)\tilde{G}_{-\mathbf{r}}(i\omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber \\
& \;\;\;\;+(-)^{p_{\eta}}2\sum_{i\omega}\tilde{F}_{\mathbf{r}}(i\omega+i\Omega)\tilde{F}_{-\mathbf{r}}(i\omega)\Lambda_{\mathrm{imp}}^{\eta}(i\omega,i\Omega)\nonumber
\end{align}
\end{subequations} where $\tilde{X}_{\mathbf{r}}(i\omega)\equiv (1-\delta_\mathbf{r})X_{\mathbf{r}}(i\omega)$. In the presence of lattice inversion symmetry, $X_\mathbf{r}=X_{-\mathbf{r}}$. The impurity's self-energy and polarization are defined as\begin{subequations}
\begin{align}
\Sigma_{\mathrm{imp}}(i\omega) & \equiv\mathcal{G}^{-1}(i\omega)-G_{\mathrm{imp}}^{-1}(i\omega)\label{eq:Dyson_G_imp}\\
P_{\mathrm{imp}}^{\eta}(i\Omega) & \equiv\left[\mathcal{U}^{\eta}(i\Omega)\right]^{-1}-\left[W_{\mathrm{imp}}^{\eta}(i\Omega)\right]^{-1}\nonumber \\
& =\frac{-\chi_{\mathrm{imp}}^{\eta}(i\Omega)}{1-\mathcal{U}^{\eta}\chi_{\mathrm{imp}}^{\eta}(i\Omega)}\label{eq:Dyson_W_imp}
\end{align}
\end{subequations}
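For a single channel $\eta$ and the normal component, the frequency sum of Eq.~\eqref{eq:Sigma_Hedin_split} can be evaluated schematically as in the Python sketch below. The grid truncation, the array layout and the omitted prefactors of the Matsubara sum (temperature and lattice normalization) are assumptions that depend on the conventions of a given implementation.
\begin{verbatim}
import numpy as np

def sigma_split(G_t, W_t, Lam, Sigma_imp, m_eta, r0=0):
    """Schematic normal self-energy from Eq. (Sigma_Hedin_split), one channel.

    G_t       : (n_r, n_w) array, nonlocal propagator G~_r(i w) (row r0 is zero)
    W_t       : (n_r, n_W) array, nonlocal boson W~^eta_r(i W)
    Lam       : (n_w, n_W) array, impurity vertex Lambda_imp^eta(i w, i W)
    Sigma_imp : (n_w,) array, impurity self-energy (local, delta_r term)
    m_eta     : channel multiplicity (m_ch = 1; m_sp = 1 or 3)
    """
    n_r, n_w = G_t.shape
    n_W = W_t.shape[1]
    Sigma = np.zeros((n_r, n_w), dtype=complex)
    for j in range(n_W):              # truncated bosonic Matsubara sum
        for i in range(n_w - j):      # keep i + j on the stored fermionic grid
            Sigma[:, i] -= m_eta * G_t[:, i + j] * W_t[:, j] * Lam[i, j]
    Sigma[r0, :] = Sigma_imp          # local part is taken from the impurity
    return Sigma
\end{verbatim}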
\subsection{Solution by forward recursion}\label{sec:forward_recursion}
In practice, the TRILEX, $GW$+EDMFT and $GW$ equations can be solved
by forward recursion:
\begin{enumerate}
\item Start with a given $\Sigma_{\mathbf{k}}(i\omega)$ and $P^\eta_{\mathbf{q}}(i\Omega)$, and (for SC phase only) $S_{\mathbf{k}}(i\omega)$
and (for TRILEX and $GW$+EDMFT only) $\Sigma_{\mathrm{imp}}(i\omega)$
and $P^\eta_{\mathrm{imp}}(i\Omega)$ (for instance set them to zero, or
use EDMFT results)
\item Compute the new $G_{\mathbf{k}}(i\omega)$ and $W_{\mathbf{q}}^{\eta}(i\Omega)$ and (for SC phase only) $F_{\mathbf{k}}(i\omega)$
from Eqs. (\ref{eq:Dyson_Nambu_G}, \ref{eq:W_dyson}, \ref{eq:Dyson_Nambu_F}).
\item (TRILEX/$GW$+EDMFT only) Impose the self-consistency condition Eq.~(\ref{eq:G_sc}, \ref{eq:W_sc}) by reversing the impurity Dyson equations
(\ref{eq:Dyson_G_imp}, \ref{eq:Dyson_W_imp}), such that\begin{subequations}
\begin{align}
\mathcal{G}(i\omega) & =\left[\left\{ \sum_{\mathbf{k}}G_{\mathbf{k}}(i\omega)\right\} ^{-1}+\Sigma_{\mathrm{imp}}(i\omega)\right]^{-1}\label{eq:Weiss_G}\\
\mathcal{U}^{\eta}(i\Omega) & =\left[\left\{ \sum_{\mathbf{q}}W_{\mathbf{q}}^{\eta}(i\Omega)\right\} ^{-1}+P_{\mathrm{imp}}^{\eta}(i\Omega)\right]^{-1}\label{eq:Weiss_U}
\end{align}
\end{subequations}
\item (TRILEX/$GW$+EDMFT only) Solve the impurity model with the above bare fermionic and bosonic propagators: compute $G_{\mathrm{imp}}$,
$\chi_{\mathrm{imp}}^{\eta}$, $\langle n^{\eta}\rangle_{\mathrm{imp}}$
and (for TRILEX only) $\tilde{\chi}^{3,\eta,\mathrm{conn}}$
and from them
$\Sigma_{\mathrm{imp}}$ (Eq.~\ref{eq:Dyson_G_imp}), $P_{\mathrm{imp}}^{\eta}$ (Eq.~\ref{eq:Dyson_W_imp}) and (TRILEX only) $\Lambda_{\mathrm{imp}}^{\eta}$ (Eq.~\ref{eq:vertex_def});
\item Compute $\Sigma_{\mathbf{k}}(i\omega)$ and $P_{\mathbf{q}}^{\eta}(i\Omega)$ and (for SC phase only) $S_{\mathbf{k}}(i\omega)$
with Eqs. (\ref{eq:Sigma_Hedin_split}, \ref{eq:P_Nambu_split}, \ref{eq:S_Nambu_split});
\item Go back to step 2 until convergence is reached.
\end{enumerate}
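For concreteness, the recursion can be summarized by the following schematic Python loop. The callables gathered in \texttt{steps} stand for the operations described above and are placeholders rather than references to an existing code base; the convergence criterion is likewise only illustrative. For a pure $GW$ calculation, the impurity-related steps are skipped and the vertex is replaced by unity.
\begin{verbatim}
import numpy as np

def forward_recursion(Sigma_k, P_q, steps, S_k=None, max_iter=100, tol=1e-6):
    """Schematic TRILEX / GW+EDMFT forward recursion (user-supplied callables)."""
    for it in range(max_iter):
        # Step 2: lattice Dyson equations for G, W (and F in the SC phase)
        G_k, F_k, W_q = steps["dyson"](Sigma_k, P_q, S_k)
        # Step 3: self-consistency -> Weiss fields, Eqs. (Weiss_G), (Weiss_U)
        G_weiss, U_weiss = steps["weiss"](G_k, W_q)
        # Step 4: solve the single-site impurity model (CT-QMC) and extract
        #         Sigma_imp, P_imp and, for TRILEX, the vertex Lambda_imp
        Sigma_imp, P_imp, Lam_imp = steps["impurity"](G_weiss, U_weiss)
        # Step 5: lattice Sigma, P (and S) from the split equations
        Sigma_new, P_new, S_new = steps["hedin"](G_k, F_k, W_q, Lam_imp,
                                                 Sigma_imp, P_imp)
        # Step 6: stop once the self-energy no longer changes
        if np.max(np.abs(Sigma_new - Sigma_k)) < tol:
            return Sigma_new, P_new, S_new
        Sigma_k, P_q, S_k = Sigma_new, P_new, S_new
    return Sigma_k, P_q, S_k
\end{verbatim}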
\subsection{Superconducting temperature $T_c$} \label{sec:lge}
In order to determine the superconducting transition temperature $T_{c}$, we solve
a linearized gap equation (LGE).
At $T=T_c$, the anomalous part of the self-energy $S$ vanishes. Linearizing Eq.~\eqref{eq:Dyson_Nambu_F}
with respect to $S$ and plugging it into Eq.~\eqref{eq:S_Nambu_split} leads to an implicit equation for $T_c$,
featuring only the normal component of the Green's function
\begin{align} \nonumber
\label{eq:trilex_lge}
S_\mathbf{r}(i\omega) & = -\sum_{\eta,i\Omega}(-)^{\delta_{\eta,\mathrm{ch}}}F_\mathbf{r}(i\omega+i\Omega)W_\mathbf{r}^{\eta}(i\Omega)\Lambda^{\mathrm{imp},\eta}(i\omega,i\Omega)\\
F_{\mathbf{k}}(i\omega_{n}) & = -S_{\mathbf{k}}(i\omega_{n})|G_{\mathbf{k}}(i\omega_{n})|^{2}
\end{align}
Using four-vector notation $k\equiv(\mathbf{k},i\omega)$, we obtain
\begin{align}
&A_{kk'} \equiv \sum_{\eta=\mathrm{ch},\mathrm{sp}} (-)^{p_\eta} m_\eta |G(k')|^2 W^\eta_{k-k'} \Lambda^{\mathrm{imp},\eta}_{k,k-k'}\\
&A_{kk'} S_{k'} = S_k
\end{align}
This is an eigenvalue problem for $S$.
In practice, it is more convenient to consider the spectrum of the operator $A$,
\begin{align}
A_{kk'} S_{k'}^\lambda &= \lambda S_k^\lambda
\end{align}
The eigenvalues $\lambda$ and the eigenvectors $S_k^\lambda$
depend on the temperature $T$.
The critical temperature $T_c$ is therefore given by $$\lambda_m(T_c) =1$$
where $\lambda_m$ is the largest eigenvalue of $A$.
In other words, $T=T_c$ when the leading eigenvalue crosses 1.
In addition, the symmetry of the superconducting instability is given by the
$k$ dependence of $S$ for the corresponding eigenvector.
In practice, we first solve the normal-phase equations, and
then solve the LGE Eq.~\eqref{eq:trilex_lge} by forward
substitution.
Starting from an initial simple $d_{x^2-y^2}$-wave form
\begin{equation}
S_{\mathbf{k}}(i\omega_{n})=(\delta_{n,0}+\delta_{n,-1})(\cos k_{x}-\cos k_{y})
\end{equation}
we use the power method \cite{MisesZAMM1929} to compute the leading eigenvalue of the operator $A$.
We do this in a selected range of temperatures
for the given parameters $(U,n,t,t',t'')$ and monitor the leading
eigenvalue $\lambda_m(T)$.
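A minimal sketch of this power iteration, for the kernel $A$ stored as a dense matrix over the flattened $(\mathbf{k},i\omega_n)$ index, could read as follows; the construction and discretization of $A$ are assumed to be done elsewhere.
\begin{verbatim}
import numpy as np

def leading_eigenvalue(A, S0, n_iter=200, tol=1e-10):
    """Power method (von Mises iteration) for the leading eigenvalue of A_{kk'}.

    A  : (N, N) array, gap-equation kernel over the flattened (k, i w_n) index
    S0 : (N,) array, initial guess, e.g. a d_{x^2-y^2} form factor
         (cos kx - cos ky) on the two lowest Matsubara frequencies
    """
    S = S0 / np.linalg.norm(S0)
    lam_old = 0.0
    for _ in range(n_iter):
        AS = A @ S
        lam = np.vdot(S, AS).real      # Rayleigh-quotient estimate of lambda_m
        S = AS / np.linalg.norm(AS)
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, S                      # lambda_m(T) and the eigenvector S_k(i w_n)
\end{verbatim}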
If we observe a $T_c$ (i.e., $\lambda_m(T)>1$), we can then use the eigenvector $S$ as an initial guess
to stabilize the superconducting solution using the algorithm from section \ref{sec:forward_recursion}.
We have also examined other irreducible representations of the symmetry group
and found that this $d$-wave representation is
the one with the highest $T_{c}$, in agreement with Refs.~\onlinecite{OtsukiHafermannPRB2014,MaierPRB2015}.
\subsection{The AF instability} \label{sec:AF_instability}
As documented in Refs. \onlinecite{Ayral2015,Ayral2015c}, the TRILEX equations
present an instability towards antiferromagnetism below some temperature $T_\mathrm{AF}$ (see also Refs \onlinecite{AokiPRB2015,OtsukiHafermannPRB2014}).
The antiferromagnetic susceptibility $\chi^\mathrm{sp}$ is related
to the propagator of the boson in the spin channel via
$$ W^\mathrm{sp}_\mathbf{q}(i\Omega) = U^\mathrm{sp} -U^\mathrm{sp}\chi^\mathrm{sp}_\mathbf{q}(i\Omega)U^\mathrm{sp}.$$
They both diverge at $T=T_\mathrm{AF}$ because the polarization becomes too large (the denominator in \eqref{eq:W_dyson} vanishes).
This instability, which is an artifact of the approximation for the two-dimensional
Hubbard model, violates the Mermin-Wagner theorem.
For many values of $t', t''$, this AF instability prevents us from
reaching the superconducting temperature $T_c$.
This AF instability also exists in conventional
cluster DMFT methods (cellular DMFT, DCA) \cite{MaierPRB2014,MaierPRL2005,KotliarCaponePRB2006}.
Yet, in most works, it is simply ignored by enforcing
a paramagnetic solution (by symmetrizing up and down spin components).
In TRILEX, however, we do not have this possibility.
Indeed, the antiferromagnetic susceptibility directly enters the
equations (via $W$), and its divergence makes it impossible to
stabilize a paramagnetic solution of the TRILEX equations
at a temperature lower than $T_\mathrm{AF}$. For a precise definition of $T_\mathrm{AF}$ in the present context, see Appendix \ref{app:fit}.
In the following, we circumvent this issue in two ways: either by extrapolating the temperature dependence
of the eigenvalue of the linearized gap equation to low temperatures, despite the AF instability (section \ref{sec:phase_diagram}, with tight-binding values $t', t''$ relevant for cuprate physics), or, in section \ref{sec:weak_coupling}, by finding other values
of $t', t''$, where the Fermi surface shape is
qualitatively similar to the cuprate case, but where the AF instability
occurs at a temperature lower than $T_c$.
\section{Results and discussion}
\label{sec:results}
\subsection{Phase diagram} \label{sec:phase_diagram}
First, using the linearized-gap equation (LGE) method described in Sect.~\ref{sec:lge}, we compute the SC phase boundary from high temperature, for $t' = -0.2t, t''=0$,
a physically relevant case for the physics of cuprates.
We set $U/D=4$ in order to be above the Mott transition threshold at half filling (we recall that for the square lattice, $U_c/D \approx 2.4$ within single-site DMFT\cite{Schafer2014}).
The results are presented in Fig.~\ref{fig:SC_edmftgw_vs_trilex}.
\begin{figure}[!ht]
\centering
\includegraphics[width=3.0in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=2]{trilex_vs_edmftgw_Tc}
\includegraphics[width=3.0in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{trilex_vs_edmftgw_Tc}
\caption{
Top panel: the leading eigenvalue of the linearized gap equation in TRILEX and $GW$+EDMFT.
Bottom panel: SC critical temperature in both methods for $U/D=4$, $(t',t'')=(-0.2t,0)$. The dashed lines represent the AF instability; see text.
}
\label{fig:SC_edmftgw_vs_trilex}
\end{figure}
The top panel presents the largest eigenvalue of the LGE as a function of temperature, for
TRILEX and $GW$+EDMFT. The calculation becomes unstable due to the AF instability before we can observe $\lambda_m>1$.
The extrapolation of $\lambda_m$ towards low temperature is not straightforward.
We use an empirical law
\begin{equation} \label{eq:lev_law}
\lambda_m(T)\approx a\exp(bT^{\gamma}+cT^{2\gamma})
\end{equation}
to fit the data and extrapolate to lower temperature.
This form can be shown (see Appendix \ref{app:fit})
to provide a very good fit to similar computations
in the DCA and DCA$^{+}$ methods, from the data of Refs.~\onlinecite{MaierPRL2006,MaierPRB2014}.
We perform the fit and extrapolation with $\gamma=0.3$ for $GW$+EDMFT and $\gamma=0.45$ for TRILEX,
and get the result for $T_c$ reported with solid lines on the bottom panel. The error bars shown are obtained by fitting and extrapolating with $\gamma$ varied in the window 0.3-0.6. The error bars coming from the uncertainty of the fit for a fixed $\gamma$ and a detailed discussion of the fitting procedure can be found
in Appendix \ref{app:fit}.
The dashed lines denote the temperature of the antiferromagnetic instability, below which no stable paramagnetic calculation can be made.
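A possible way to carry out this fit and extrapolation of Eq.~\eqref{eq:lev_law} with standard tools is sketched below; the data arrays, the initial guess and the bracketing interval of the root search are placeholders.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit, brentq

def extrapolate_Tc(T_data, lam_data, gamma):
    """Fit lambda_m(T) ~ a exp(b T^gamma + c T^(2 gamma)), Eq. (lev_law),
    then solve lambda_m(Tc) = 1 below the lowest accessible temperature."""
    def law(T, a, b, c):
        return a * np.exp(b * T**gamma + c * T**(2.0 * gamma))
    popt, _ = curve_fit(law, T_data, lam_data, p0=(1.0, -1.0, 0.0), maxfev=10000)
    # root of lambda_m(T) - 1 = 0, searched between a small T and the lowest data point
    return brentq(lambda T: law(T, *popt) - 1.0, 1e-4, T_data.min())

# Illustrative error estimate: repeat for several gamma in the window 0.3-0.6
# Tcs = [extrapolate_Tc(T_data, lam_data, g) for g in np.linspace(0.3, 0.6, 7)]
\end{verbatim}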
For all values of $\gamma$, the raw data at high temperature for both methods indicate
a similar dome shape for $T_c$ vs $\delta$,
where $\delta$ is the percentage of hole-doping, $\delta [\%]= (1-2n_\sigma)\times 100$ ($n_\sigma=\frac{1}{2}$ corresponds to half-filling).
The fact that $T_c$ vanishes at zero $\delta$ can be checked directly,
but we cannot exclude that it vanishes at a finite, small value of $\delta$.
The optimal doping in both methods is found to be around 12\%.
At half-filling, both methods recover a Mott insulating state, and $\lambda_m(T)$ is found to be very small.
We observe that TRILEX has a higher $T_c$ than $GW$+EDMFT,
showing that the effects of the renormalization of the electron-boson vertex
are non-negligible in this regime.
These results for $T_c(\delta)$ are qualitatively comparable to the results of cluster DMFT methods,
e.g. the 4-site CDMFT + ED computation of Refs~\onlinecite{KotliarCaponePRB2006, CivelliImadaPRB2016,TremblayPRB2008}, or the 8-site DCA results of Ref.~\onlinecite{GullMillisPRB2015}.
In particular, Ref.~\onlinecite{CivelliImadaPRB2016} reports a $T_c/D\approx0.0125$ at doping $\delta=13\%$ in a doped Mott insulator, which falls half-way between the TRILEX and $GW$+EDMFT results.
Furthermore, the optimal doping in Ref.~\onlinecite{KotliarCaponePRB2006} seems to coincide with our result, while in Ref.~\onlinecite{CivelliImadaPRB2016} it is somewhat bigger (around $20\%$).
We emphasize however that here we solve only a {\it single-site} quantum impurity problem,
and obtain the $d$-wave order, which is not possible in single-site DMFT for symmetry reasons.
Let us now turn to the weak-coupling regime ($U/D=1$). We present in Fig. \ref{fig:gw_sc} the SC temperature in
the $GW$ and $GW$+EDMFT approximation within the Ising decoupling (for the $\lambda(T)$ plot, see Appendix \ref{app:fit}).
Both methods give similar results, which justifies using the faster $GW$ at weak coupling.
In contrast to the larger-$U$ case, one does not obtain the
dome versus doping due to the absence of a Mott insulator at $\delta=0$.
We compare our results with the order parameter at $T=0$ obtained from a $2\times2$ CDMFT+ED calculation\cite{KotliarCaponePRB2006}.
The general trend observed is similar: optimal doping is zero, and there is a quick reduction of $T_c$ between 12 and 16\% doping.
As for the value of $T_c$, we compare to the
result presented in Ref.~\onlinecite{MaierPRB2014}. Here, a DCA$^{+}$ calculation
with a 52-site cluster impurity, at $U/D=1$,$t'=t''=0$, $\delta=10\%$, predicts
$T_c/D\approx0.06$. With the same parameters, $GW$ gives $T_c/D\approx0.21$, $GW$+EDMFT gives $T_c/D\approx0.27$,
hence overestimating $T_c$.
\begin{figure}[!ht]
\centering
\includegraphics[width=3.0in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{weak_coupling_edmftgw_vs_gw}
\caption{ Comparison of $T_c(\delta)$ in $GW$ and $GW$+EDMFT methods at weak-coupling $U/D=1$, $t'=t''=0$. The dotted line is the order parameter $\Delta$ at $T=0$ from a $2\times2$ CDMFT+ED calculation, replotted from Ref.~\onlinecite{KotliarCaponePRB2006} (scale on the right).
}
\label{fig:gw_sc}
\end{figure}
\subsection{Weak coupling} \label{sec:weak_coupling}
As explained in Sec. \ref{sec:AF_instability}, in order to study the SC phase itself,
we need to identify a dispersion for which $T_c$ is above $T_\mathrm{AF}$.
To achieve this, we first scan a large set of parameters $t',t''$ with the $GW$ approximation
at weak coupling.
Indeed, at weak coupling, we can approximate TRILEX by $GW$,
which is faster to compute (there is no quantum impurity model to solve).
We look for a $(t',t'')$ point for which not only $T_c>T_\mathrm{AF}$,
but also the shape of the Fermi surface is qualitatively compatible with cuprates.
We find a whole region of parameters where this is satisfied, and
then use these parameters in a strong-coupling computation with $GW$+EDMFT and TRILEX.
Whether a weak coupling computation is a reliable guide in the search for
$t',t''$ with maximal $T_c$ at strong coupling
remains open and would require a systematic exploration with cluster methods.
However, at least in one example (shown below), this assumption will provide us with an appropriate choice of hopping amplitudes
that allows us to stabilize a superconducting solution in the doped Mott insulator regime.
Fig. \ref{fig:Tc_vs_Tneel} presents the computation of the AF instability ($T_\mathrm{AF}$) and
the SC instability ($T_c$) in $GW$, for $U/D=1$ and various $t',t''$ ($t=-1.0$ is held fixed) and various dopings.
The temperature is taken from $0.2$ down to the lowest accessible temperature, but not below $0.01$ in cases where
the extrapolation of $\lambda(T)$ yielded no finite $T_c$. The temperature step
depends on $T$ (smaller step at lower $T$; see Appendix \ref{app:fit} for an example of raw data).
\begin{figure}[!ht]
\centering
\includegraphics[width=2.55in, trim=0.0cm 4.0cm 0.0cm 0.0cm]{Tc_vs_TNeel}
\caption{$GW$ calculation of $d$-wave $T_c$ (left panels) and $T_\mathrm{AF}$ (right panels) at $U/D=1$, $t=-1.0$,
for different values of $n$, as functions of $(t',t'')$.
$t'$ and $t''$ are sampled between (and including) $-0.7$ and $0.3$ with the
step $0.1$.
$n$ is taken between (and including) $0.38$ and $0.5$
(i.e. the half-filling) with the step $0.02$.
}
\label{fig:Tc_vs_Tneel}
\end{figure}
The first observation is that the region of high $T_c$ broadly coincides with the
region of high $T_\mathrm{AF}$. This is expected as
in $GW$ the attractive interaction comes from the spin-boson, and a high-valued
and sharply-peaked $W^\mathrm{sp}$ is clearly necessary for satisfying the gap
equation Eq.~\eqref{eq:trilex_lge} with $\lambda=1$. However, the maximum of
$T_c$ with respect to $(t',t'')$ at a fixed $n$ does not
coincide with the maximum of $T_\mathrm{AF}$, thus indicating that there
are factors other than sharpness (criticality) of the spin-boson which
contribute to the height of $T_c$.
While the maximum of $T_\mathrm{AF}$ is
found rather close to $t'=t''=0$ at all dopings, the maximum in $T_c$ starts
from $(t',t'')=(-0.6,-0.4)$ at $n=0.38$ and gradually moves as $n$ is
increased. It is only at half-filling that the two maxima are found to
coincide. Furthermore, while at around $t'=t''=0$ and $t'\approx t''$ one sees
$T_\mathrm{AF}>T_c$, this trend is gradually reversed as $t''$ is made more
and more negative, such that around $t'\approx t''+0.4$ one usually sees a
finite $T_c$ in the absence of a finite $T_\mathrm{AF}$.
In Fig. \ref{fig:Tc_vs_n_examples}, we plot $T_\mathrm{AF}$ and $T_c$ vs doping
for different values of $t',t''$. The corresponding dispersion (color map) and Fermi surfaces (gray contours; red for the maximal $T_c$)
are presented in the insets.
\begin{figure*}[!ht]
\centering
\includegraphics[width=6.4in, trim=2.0cm 1.0cm 2.0cm 2.0cm]{Tc_vs_n_examples}
\caption{$GW$ calculations at $U/D=1$, $t=-1$. Dashed lines denote $T_\mathrm{AF}$, full lines $T_c$.
Inset: color map for $\varepsilon_\mathbf{k}$. Gray contours denote bare Fermi
surfaces at examined values of doping. The red line corresponds to the Fermi
surface with maximum $T_c$.}
\label{fig:Tc_vs_n_examples}
\end{figure*}
\begin{figure}[!ht]
\centering
\includegraphics[width=3.2in, trim=1.0cm 1.0cm 1.0cm 0.0cm, page=1]{tpts_phase_diagram_sketch}
\caption{ Sketch of the $GW$ phase diagram at $U/D=1$, $t=-1.0$ based on Fig.\ref{fig:Tc_vs_Tneel}. Points A,B and C are of special interest, and are further studied at strong coupling.
}
\label{fig:tpts_phase_diagram_sketch}
\end{figure}
Finally, in Fig.\ref{fig:tpts_phase_diagram_sketch}, we summarize the observations from Fig.~\ref{fig:Tc_vs_Tneel}. The blue dot denotes the global
maximum of $T_c$ and $T_\mathrm{AF}$. The dashed gray lines denote the
directions of the slowest and quickest decay of antiferromagnetism.
The red ellipses denote the
regions of maximal $T_c$, at various dopings. The yellow region is where one finds
little antiferromagnetism, but still a sizable $T_c$. The green region
corresponds to dispersions relevant for cuprates\cite{PavariniPRL2001}.
The points A,B, and C are the dispersions that we focus on and for which we perform TRILEX and $GW$+EDMFT computations.
Pt.~B is the most relevant for the cuprates, and was analyzed in Fig.~\ref{fig:SC_edmftgw_vs_trilex}. Pt.~C has $T_\mathrm{AF}<T_c$, which allows us to converge a superconducting solution at both weak and strong coupling. We analyze it in the next subsection. Pt.~A is where we observe a maximal $T_c$ at $16\%$ doping, and we focus on it in Section \ref{sec:ptA}.
\subsection{The nature of the superconducting phase at strong coupling} \label{sec:sc_phase}
In this section, we study the dispersion C $(t, t', t'')= (-1, -0.3, -0.6)$.
In Figure \ref{fig:Tc_vs_n_examples}, we have determined that {\sl at weak coupling} ($U/D=1$), the superconducting temperature $T_c$ is larger than the
AF temperature: we can therefore reach the superconducting phase numerically (see Appendix \ref{app:sc_weak_coupling}).
It turns out that at strong coupling, the AF instability is also absent. This allows us to stabilize superconducting solutions
in the doped Mott insulator regime.
We also perform a calculation restricted to the normal phase for all parameters in order to compare results to the ones in the SC phase.
For simplicity, in this section we will
present only $GW$+EDMFT results for $U/D=4$.
In Fig.~\ref{fig:ptC_Tcs}, we show the superconducting temperatures
at $U/D=1$ and $U/D=4$. Contrary to pt.B, in pt.C strong coupling seems to strongly enhance superconductivity.
Also, the SC dome extends to higher dopings.
\begin{figure}[!ht]
\centering
\includegraphics[width=3.0in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{ptC_Tcs}
\caption{ $T_c$ for dispersions B and C at weak and strong couplings.
}
\label{fig:ptC_Tcs}
\end{figure}
In Fig.~\ref{fig:sc_phase_anomalous} we
show the results for both the anomalous self-energy and the anomalous Green's function, as well as the
imaginary part of the normal self-energy, in both the normal-phase and
superconducting solutions, in the anti-nodal and nodal regions.
The imaginary part of the normal self-energy is larger at antinodes than at nodes
and grows upon approaching the Mott insulator.
When going from the normal phase to the SC phase, the imaginary part of the self-energy
is strongly reduced at the antinode and weakly reduced at the node. The difference between the normal and SC solution (light blue area) is roughly proportional to the anomalous self-energy in the SC phase (blue line).
Note that we observe a similar phenomenon even at weak coupling (see Appendix \ref{app:sc_weak_coupling}).
\begin{figure}[!ht]
\includegraphics[width=3.0in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{strong_coupling_gap_vs_n}
\caption{Evolution of various
quantities within the superconducting dome at dispersion pt.C, using
$GW$+EDMFT, $U/D=4$, $T=0.005D$.
The $T_c$, as obtained from $\lambda_m(T)$, is denoted by the gray area.
Quantities are scaled to fit the same plot.
The gray dashed horizontal line denotes the temperature at
which the data is taken, relative to the (scaled) $T_c$. The vertical full line
denotes the end of the superconducting dome at the temperature denoted by the
dashed horizontal line, i.e. denotes the doping where all the anomalous
quantities are expected to go to zero.
}
\label{fig:sc_phase_anomalous}
\end{figure}
\begin{figure}[!ht]
\includegraphics[width=3.5in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=2]{spectral_plots_at_AN.pdf}
\includegraphics[width=3.5in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{spectral_plots_at_AN.pdf}
\caption{
Top panel: Spectral function versus frequency, at the anti-nodal wave vector, defined by $n_{\mathbf{k}_\mathrm{AN}=(\pi,k_x(\mathrm{AN}))} = 0.5$,
obtained by maximum entropy method\cite{Bryan1990} from $G_\mathbf{k}(i\omega_n)$.
$U/D=4$, $T/D=0.005$ for doping $\delta = 8,12,20,28 \%$.
Bottom panel: zoom in at low frequencies.
}
\label{fig:mem}
\end{figure}
In Fig. \ref{fig:mem}, we plot the spectral function at the antinodes at low temperature,
in the normal and in the superconducting phase.
At low doping, we observe at low energy
a pseudo-gap in the normal phase and the superconducting gap in the SC phase.
The result obtained here is qualitatively different
from the one obtained using an 8-site DCA cluster by Gull et al. \cite{GullParcolletMillisPRL2013, GullMillisPRB2015}.
In the cluster computations, the superconducting gap is smaller than the pseudogap, i.e. the
quasi-particle peak at the edge of the SC gap appears within the pseudogap.
This is not the case here.
Also, we do not see any ``peak-dip-hump'' structure.
Note that we are however using different parameters (for the hoppings $t',t''$, the interaction $U$ and the doping $\delta$).
It is not clear at this stage whether these qualitative differences
are due to this different parameter regime
or to an artifact of the single-site TRILEX method, e.g. the lack of local singlet physics in a single-site
impurity model. Further investigations with cluster-TRILEX methods are necessary in the SC phase.
\begin{figure}[!ht]
\includegraphics[width=3.5in, trim=1.0cm 0.0cm 0.0cm 0.0cm]{strong_coupling_sc_phase_color_plots.pdf}
\caption{ Color plots of various quantities in the first Brillouin zone, at the lowest Matsubara frequency. $GW$+EDMFT calculation at the pt.~C dispersion, $U/D=4$. The temperature is below $T_c$, $T/D=0.005$. All plots correspond to the superconducting phase unless stated otherwise. The three numbers defining the colorbar range correspond to the three columns (different dopings) of the figure.
}
\label{fig:SFG}
\end{figure}
In Fig. \ref{fig:SFG}, we plot various quantities at the lowest Matsubara frequency, as a function of $\mathbf{k}$.
In the first two rows we compare the anomalous self-energy and the pairing amplitude.
Both are clearly of $d$-wave symmetry. The pairing amplitude has a different
order of magnitude (see Appendix \ref{app:formalism} for an illustration of the
dependence between $F$,$G$,$\Sigma$ and $S$). In the third and fourth row we
show the imaginary part of the Green's function in the SC and normal phase.
Due to the absence of long-lived quasiparticles in
this sector, the maximum of $F_\mathbf{k}$ is moved towards the nodes, and does
not coincide with the maximum of $S_\mathbf{k}$. At small doping, the Fermi
surface in both cases becomes less sharp and more featureless, due to proximity
to the Mott insulator.
In the next two rows we show the imaginary part of the normal
self-energy. In the superconducting phase, $\mathrm{Im}\Sigma_\mathbf{k}$ is
strongly reduced only in the anti-nodal regions, and thus flattened (made more
local). In the last row, we show the non-local part of the propagator for the
spin boson. At large doping we observe a splitting of the resonance at $(\pi,\pi)$
which corresponds to incommensurate AF correlations (see
e.g. Ref.~\onlinecite{PfeutyOnufrievaPRB2002} for a similar phenomenon).
Given that the Green's function around $\mathbf{k}=(0,0)$ is quite featureless, and
that the boson is sharply peaked at zero frequency, the shape of the spin boson
around $\mathbf{q}=(\pi,\pi)$ is similar to that of the self-energy around
$\mathbf{k}=(\pi,\pi)$. This pattern is observed
at all three dopings.
\subsection{Strong-coupling $T_c$ at pt.A} \label{sec:ptA}
\begin{figure}[!ht]
\centering
\includegraphics[width=3.2in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{lambda_trends.pdf}
\includegraphics[width=3.2in, trim=0.0cm 0.0cm 0.0cm 0.0cm, page=1]{highest_Tcs.pdf}
\caption{ Top panel: evolution of the LGE leading eigenvalue $\lambda_m$ with temperature at pt.A and pt.B, in a $GW$+EDMFT calculation. Bottom panel: the extrapolated $T_c$ in both cases, including a TRILEX calculation at pt.A. }
\label{fig:ptA}
\end{figure}
At weak coupling, we have observed in section \ref{sec:weak_coupling} that the
dispersion pt.A ($(t,t',t'')=(-1,-0.5,-0.2)$) presents a pronounced maximum in
$T_c(t',t'')$ at $16\%$ doping. Here, we investigate that point at strong
coupling using $GW$+EDMFT and TRILEX and find that also at $U/D=4$, the $T_c$
is substantially higher than in pt.B and pt.C. Here $T_c$ is below
$T_\mathrm{AF}$ and the result is again based on extrapolation of $\lambda$.
The proposed fitting function in this case does not perform as well and the
extrapolation is less reliable, but $GW$+EDMFT and TRILEX are in better
agreement than in the case of pt.B. A further investigation using cluster
methods is necessary since, apart from
Refs.~\onlinecite{TremblayPRB2008,JarrellPRB2013,MaierPRB2015}, little
systematic exploration of $T_c(t',t'')$ has been performed.
\section{Conclusion}
In this work, we have generalized the TRILEX equations and their simplifications $GW$+EDMFT and $GW$ to the case of paramagnetic superconducting phases, using the Nambu formalism. We also generalized the corresponding Hedin equations.
We have then investigated within TRILEX, $GW$+EDMFT and $GW$
the doping-temperature phase diagram of the two-dimensional
single-band Hubbard model with various choices of hopping parameters.
In the case of a bare dispersion relevant for cuprates, in the doped Mott insulator regime, both TRILEX and $GW$+EDMFT yield a superconducting dome of $d_{x^2-y^2}$-wave symmetry, in qualitative agreement with earlier cluster DMFT calculations.
Let us emphasize that this was obtained at the low cost of solving a \emph{single-site} impurity model.
At weak coupling, we have performed a systematic scan of
tight-binding parameter space within the $GW$ approximation.
We have identified the region of parameter space
where superconductivity emerges at temperatures higher than antiferromagnetism.
With one of those dispersions, we studied the properties of the
superconducting phase at strong coupling with $GW$+EDMFT.
We also addressed the question of the optimal dispersion for superconductivity in
the Hubbard model at weak coupling.
At $16\%$ doping we identify a candidate dispersion for the highest $d$-wave $T_c$,
which remains to be investigated in detail at strong coupling (e.g. with cluster DMFT methods).
The next step will be to solve the recently developed cluster TRILEX methods \cite{Ayral2017c} in the SC phase.
Indeed, the single-site TRILEX method contains essentially
an Eliashberg-like equation with a decoupling boson,
and a local vertex (computed from the self-consistent impurity model)
which has no anomalous components.
The importance of anomalous vertex components and
the effect of local singlet physics (present in cluster methods)
is an important open question.
Note that the framework developed in this paper can also be used
to study more general pairings and decoupling schemes in TRILEX, e.g.
the effect of bosonic fluctuations in the particle-particle (i.e. superconducting) channel.
Finally, let us emphasize that the question of superconductivity in multi-orbital systems like iron-based superconductors
is another natural application of the TRILEX method, in particular in view of the strong AF fluctuations in these compounds.
In this multi-orbital case, being able to describe the SC phase without having to solve clusters
(which are numerically very expensive within multi-orbital cluster DMFT\cite{Nomura2015b,Semon2016})
could prove to be very valuable.
\begin{acknowledgments}
We thank M. Kitatani for useful insights and discussion.
This work is supported by the FP7/ERC, under Grant Agreement No. 278472-MottMetals. Part
of this work was performed using HPC resources from GENCI-TGCC (Grant
No. 2016-t2016056112). The CT-HYB algorithm has been implemented using the TRIQS toolbox\cite{Parcollet2014}.
\end{acknowledgments}
\section{\label{sec:level1}First-level heading}
In 1916 and 1917, Einstein proposed three fundamental radiative processes to explain light-matter interactions: spontaneous emission, stimulated emission and absorption \cite{Einstein1916,Einstein1917}. Einstein's $A_{21}$, $B_{21}$ and $B_{12}$ coefficients are typically used to describe the rate of these processes, respectively. Later, Purcell demonstrated that the spontaneous emission rate is not an immutable property of matter and that the environment can significantly modify it \cite{Purcell1946}. In recent times, there are ongoing intensive research efforts in designing nanostructured materials to control the spontaneous emission rates for applications in quantum optics, quantum computing, quantum communications, and quantum chemistry \cite{Yablonovitch1987,Lodahl2015,chikkaraddy2016single}. Most notably, metamaterials, artificial materials that may exhibit electromagnetic (EM) properties otherwise absent in natural materials, have been explored for that purpose due to their ultimate flexibility in tailoring the local optical environment. These engineered materials may feature extreme parameters such as a near-zero refractive index (NZI), and have been shown to exhibit exotic electromagnetic properties~\cite{Liberal2017,Engheta2006,Silveirinha2006,Ziolkowski2004,Vulis2019}.
As a consequence of a vanishing refractive index at frequency $\omega_Z$, the phase velocity $v_\varphi$ of an EM wave inside a near-zero index material diverges and the wavelength $\lambda$ of the wave is significantly stretched. Since the refractive index is defined as $n(\omega)=\sqrt{\varepsilon(\omega)\mu(\omega)}$, $\varepsilon(\omega)$ the relative permittivity and $\mu(\omega)$ the relative permeability, three different routes exist to achieve an NZI response: $\varepsilon$ approaches zero with arbitrary $\mu$ (i.e., epsilon-near-zero (ENZ) media) \cite{Silveirinha2006,Edwards2008}; $\mu$ approaches zero with arbitrary $\varepsilon$ (i.e., mu-near-zero (MNZ) media) \cite{Marcos2015}; and both $\varepsilon$ and $\mu$ simultaneously approach zero (i.e., epsilon-and-mu-near-zero (EMNZ) media) \cite{Vulis2019, Ziolkowski2004,Mahmoud2014,Li2015,Briggs2013}. Although all three classes of NZI media share a near-zero refractive index, they differ critically in other characteristics. For example, the normalized wave impedance $Z\left(\omega\right)=\sqrt{\mu\left(\omega\right)/\varepsilon\left(\omega\right)}$, tends to infinity in ENZ media, $Z\left(\omega_Z\right)\rightarrow\infty$, to zero in MNZ media, $Z\left(\omega_Z\right)\rightarrow 0$, and to a finite value in EMNZ media, $Z\left(\omega_Z\right)\rightarrow\left.\sqrt{\partial_{\omega}\mu\left(\omega\right)/\partial_{\omega}\varepsilon\left(\omega\right)}\right|_{\omega\rightarrow\omega_Z}$.
Similarly, the group index, $n_g\left(\omega\right)=c/v_g\left(\omega\right)$ (where $v_g\left(\omega\right)$ is the group velocity) tends to infinity in ENZ and MNZ unbounded lossless media \cite{Javani2016}, while it has a finite value, $\omega \partial_{\omega}n\left(\omega\right)$, in EMNZ media \cite{Vulis2019, Ziolkowski2004}.
Consequently, the selected class of NZI medium makes a profound impact on different optical processes, including propagation, scattering and radiation of EM waves~\cite{Liberal2017}.
Similarly, extreme material parameters impact fundamental radiative processes and their associated transition rates. Specifically, complete inhibition of spontaneous emission was predicted for three dimensional (3D) ENZ and EMNZ media \cite{Liberal2016ScAd,Liberal2018}, and two-dimensional (2D) implementations of EMNZ media \cite{Mahmoud2014}.
Typically, the suppression of spontaneous emission is justified due to the depletion of optical modes as the refractive index goes to zero.
This effect is somewhat analogous to the inhibition of spontaneous emission in photonic nanostructures exhibiting a band-gap \cite{Bykov1972,Yablonovitch1987,Yablonovitch1987,John1990,Joannopoulos2008}. However, it is distinct in that the propagation of electromagnetic waves is allowed in EMNZ media. In contrast, studies of metallic waveguides near cutoff that effectively behave as one-dimensional (1D) ENZ media reveal that the spontaneous emission rate is enhanced (theoretically diverges) in those systems \cite{Alu2009boosting,Fleury2013,Sokhoyan2013,Li2016}.
The radical difference in the predicted responses, i.e., inhibition versus enhancement, raises the question of whether these effects relate to details of the structural implementation of NZI media (e.g., microscopic coupling to a dispersive waveguide) or if they are an accurate representation of the true material response of NZI media. The latter would then imply a complex interplay between the class of NZI media (ENZ, MNZ and EMNZ) and the dimensionality of the system (1D, 2D, and 3D).
To the best of our knowledge, there is no unified framework encompassing studies of all the fundamental radiative processes for all NZI media classes (ENZ, MNZ and EMNZ) and dimensionalities (1D,2D and 3D). Here, we address this question by presenting a unified framework that provides compact expressions for the transition rates in dimension-dependent NZI media. Our results are relevant for recent experimental demonstrations of various classes of NZI media \cite{Vesseur2013,Briggs2013,Liberal2017photonic,Reshef2017direct,Luo2018coherent}.
To begin, we consider a two-level system, $\left\{\left|e\right\rangle,\left|g\right\rangle\right\}$, with transition dipole moment $\mathbf{p}=p\,\mathbf{u}_z$ embedded in a 3D unbounded lossless homogeneous material with a transition frequency $\omega$. First, we evaluate the influence of an NZI background on spontaneous emission, and then we discuss how these conclusions apply to the absorption and stimulated emission processes. To this end, we follow the macroscopic QED formalism \cite{Vogel2006} so that the Einstein coefficient $A_{21}'$, representing the spontaneous emission rate, can be written as a function of the Green's function $\bm{G}$ as follows \cite{Dung2003}:
\begin{align}
A_{21}'(\omega) &= \frac{2\omega^2}{\hbar \varepsilon_0 c^2}\,\,|\mathbf{p}|^2\, \mathbf{u}_z \cdot {\rm Im}[ \mathbf{G}(\mathbf{r}_0,\mathbf{r}_0,\omega)]\cdot \mathbf{u}_z \nonumber \\
&= \mathrm{Re} \left [ \mu(\omega) n(\omega) \right ] A_{21},
\label{Inhibition3DSpontEm}
\end{align}
where we have used
\begin{equation}
\mathbf{u}_z\cdot{\rm Im}[\mathbf{G}(\mathbf{r}_0,\mathbf{r}_0,\omega)]\cdot\mathbf{u}_z=\frac{\omega}{6\pi c} \mathrm{Re}\left[\mu(\omega)n(\omega)\right]
\end{equation}
for homogeneous media \cite{Dung2003} and $A_{21}=\omega^3|\mathbf{p}|^2/(3\pi\varepsilon_0\hslash c^3)$ is the free-space spontaneous emission coefficient.
We directly conclude from Eq.\,(\ref{Inhibition3DSpontEm}) that spontaneous emission is inhibited in all classes of unbounded lossless 3D NZI media as $n(\omega_Z)\rightarrow 0$.
Fig.~\ref{fig:Fig1SpontEM3D}a shows the inhibition of spontaneous emission for the three classes and their different behaviours around the NZI frequency $\omega_Z$.
\begin{figure*}
\includegraphics[width=\textwidth]{Spontaneous1D2D3D.png}
\caption{Spontaneous decay rate normalized to free space (Purcell factor) for (a) 3D, (b) 2D and (c) 1D homogeneous dispersive NZI media. EMNZ metamaterial with a Lorentz model ($\varepsilon(\omega)=\mu(\omega)=\frac{\omega^2-\omega_Z^2+2i\omega\Gamma}{\omega^2-\omega_r^2+2i\omega\Gamma}$, $\omega_r=0.1\omega_Z$, $\Gamma=0$ for the lossless case \cite{Supp1}) (yellow), ENZ material with $\varepsilon(\omega)$ and $\mu=1$ (blue), MNZ material with $\mu(\omega)$ and $\varepsilon=2.25$ (red). We choose $\omega_\mathrm{max}=\alpha k_B T/\hbar=\omega_Z$ where $\alpha=2.821439$ is a constant and $T=300\,$K. Inset of (a): two-level system $\left\{\left|e\right\rangle,\left|g\right\rangle\right\}$ embedded inside an unbounded, lossless and homogeneous dispersive material.}
\label{fig:Fig1SpontEM3D}
\end{figure*}
Next, we study stimulated emission and absorption by referring to the detailed balance equation \cite{Einstein1916,Einstein1917}. In our case, detailed balance means that spontaneous emission and stimulated emission are balanced by the absorption process. This principle leads to the Einstein relations for dispersive materials \cite{Loudon2000}:
\begin{equation}
\frac{A'_{21}}{B'_{21}} = \mathrm{DOS} \times \hbar \omega, \,\,\,\,\,\,\,\,\,
\frac{B'_{21}}{B'_{12}} = \frac{g_1}{g_2},
\label{eq:Einstein relations}
\end{equation}
\noindent where $\mathrm{DOS}=n^2(\omega)\,\omega^2/(\pi^2c^2v_g(\omega))$ is the density of states and $g_i$ the degeneracy of state $|i\rangle$. From here, one can derive a general expression for the Einstein $B'_{21}$ coefficient in a dispersive material:
\begin{align}
B_{21}'(\omega) &=\frac{2\pi^2 c}{\hbar^2 \omega \varepsilon_0 n^2(\omega)n_g(\omega)}|\mathbf{p}|^2 \mathbf{u}_z \cdot {\rm Im}[ \mathbf{G}(\mathbf{r}_0,\mathbf{r}_0,\omega)]\cdot \mathbf{u}_z \nonumber\\
&= \frac{Z(\omega)}{n_g(\omega)}B_{21}.
\label{Milonni2003B}
\end{align}
\noindent Using this formulation, we can evaluate the Einstein $B'_{21}$ coefficient for the different classes of NZI media:
\begin{equation}
B_{21}'(\omega=\omega_Z) = B_{21}\times
\begin{cases}
\frac{1}{n_g(\omega)}\sqrt{\frac{\frac{d\mu(\omega)}{d\omega }\Bigr|_{\large{\substack{\omega=\omega_Z}}}}{\frac{d\varepsilon(\omega)}{d\omega }\Bigr|_{\large{\substack{\omega=\omega_Z}}}}} & \text{for EMNZ materials}\\
\frac{2}{\omega_Z \frac{d\varepsilon(\omega)}{d\omega}} & \text{for ENZ materials}\\
0 & \text{for MNZ materials}
\end{cases}.
\label{Eq:B21prim}
\end{equation}
Equations\,(\ref{Milonni2003B}) and (\ref{Eq:B21prim}) show that the $B_{21}$ coefficient is modified by the background medium, as pointed out in previous works \cite{Milonni1995,Milonni2003}. This result suggests that the ratio between spontaneous and stimulated emission can be selected by changing the background material. However, one must be careful to point out that the stimulated emission rate is given by the product of $B_{21}$ and the spectral density of states $\rho(\omega,T)$ ($\Gamma_{\rm sti}=B_{21}\rho(\omega,T)$). In addition, in view of Eq.\,(\ref{eq:Einstein relations}), the absorption rate must be equal to the stimulated emission rate $\Gamma_{\rm sti}=\Gamma_{\rm abs}$ \cite{Supp1}.
Therefore, in order to elucidate the impact of NZI media on the total stimulated emission and absorption rates, we address how the spectral energy density of thermal radiation $\rho\left(\omega,T\right)$ behaves in the NZI limit. This procedure will also allow us to study thermal equilibrium radiation for a black-body at temperature $T$ immersed in NZI media. The spectral energy density of thermal radiation in a material is given by \cite{Milonni2003}
\begin{equation}
\rho(\omega,T)=\frac{\hbar\omega^3}{\pi^2 c^3}\,\frac{1}{e^{\frac{\hbar\omega}{k_BT}}-1}\,\,n^2(\omega)\,n_g(\omega).
\label{Eq:Planck}
\end{equation}
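For illustration, Eq.~\eqref{Eq:Planck} can be evaluated numerically. The sketch below uses the lossless Lorentz-model EMNZ medium of Fig.~\ref{fig:fig1} and obtains the group index by numerical differentiation; the frequency grid and parameter values are purely illustrative.
\begin{verbatim}
import numpy as np

hbar, kB, c = 1.0545718e-34, 1.380649e-23, 2.99792458e8     # SI units

def lorentz(w, wZ, wr):
    """Lossless Lorentz model of the figures: eps(w) = mu(w) = (w^2-wZ^2)/(w^2-wr^2)."""
    return (w**2 - wZ**2) / (w**2 - wr**2)

def rho_emnz(w, T, wZ, wr):
    """Spectral energy density rho(w, T), Eq. (Eq:Planck), for the EMNZ medium."""
    n = lorentz(w, wZ, wr)            # eps = mu, so n = eps (negative branch is left-handed)
    ng = np.gradient(n * w, w)        # group index n_g = d(n w)/dw, numerically
    bose = 1.0 / (np.exp(hbar * w / (kB * T)) - 1.0)
    return hbar * w**3 / (np.pi**2 * c**3) * bose * n**2 * ng

T = 300.0
wZ = 2.821439 * kB * T / hbar                  # zero-index frequency at the vacuum maximum
w = np.linspace(0.12 * wZ, 3.0 * wZ, 2000)     # grid above the resonance at wr = 0.1 wZ
rho = rho_emnz(w, T, wZ, 0.1 * wZ)             # vanishes at w = wZ where n -> 0
\end{verbatim}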
\begin{figure}
\includegraphics[width=\columnwidth]{Planckbis.png}
\caption{Spectral energy density $\rho(\omega)$ for air (brown), non-dispersive electric permittivity $\varepsilon=2.25$ (purple), EMNZ metamaterial with a Lorentz model ($\varepsilon(\omega)=\mu(\omega)=\frac{\omega^2-\omega_Z^2+2i\omega\Gamma}{\omega^2-\omega_r^2+2i\omega\Gamma}$ and $\omega_r=0.1\omega_Z$, $\Gamma=0$ for the lossless case \cite{Supp1}) (yellow), ENZ material with $\varepsilon(\omega)$ and $\mu=1$ (blue), MNZ material with $\mu(\omega)$ and $\varepsilon=2.25$ (red). The temperature is set to $T=300\,$K.}
\label{fig:fig1}
\end{figure}
Figure~\ref{fig:fig1} represents the spectral energy density (corresponding to black-body radiation) for the different classes of NZI media. We set the zero-index frequency $\omega_Z$ to be equal to the maximum frequency of the spectral energy density in vacuum $\omega_\textrm{max}=\alpha k_B T/\hbar$ where $\alpha=2.821439$ \cite{Kittel1980}. In EMNZ materials, the group index is constant but the spectral energy density is highly reduced in the NZI spectral region. For frequencies below $\omega_Z$, the allowed propagation corresponds to propagation inside a left-handed material with a refractive index close to zero \cite{Dung2003,Milonni2003}. For ENZ and MNZ media, no propagation is allowed for frequencies below $\omega_Z$ because of the imaginary refractive index. In general, since $\rho(\omega,T)$ scales as $n^2(\omega)\,n_g(\omega)$, it is reduced in the NZI spectral region and vanishes exactly at $\omega_Z$.
This effect can be intuitively explained by using a box quantization treatment. The spectral energy density of thermal radiation $\rho(\omega)$ is the product of the density of states (DOS) and the mean energy of a state at temperature $T$, $\theta(\omega,T)=\frac{\hbar \omega}{e^{\frac{\hbar\omega}{k_BT}}-1}$ \cite{Joulain2005}. The modes in a 3D box of volume $L^3$ in $k$ space are separated by $\Delta k = \pi/L$. Consequently, the number of modes in a spherical shell between $k$ and $k+dk$ is
$\pi k^2dk\left(\pi/L\right)^{-3}$, so that the density of modes scales as $n^2(\omega)\,n_g(\omega)$ \cite{Loudon2000}. When the index is near zero, the number of modes within the sphere is much lower than that in vacuum and the DOS reaches its minimum value. Therefore, the spectral energy density $\rho(\omega,T)$ is equal to zero at the NZI frequency $\omega_Z$ and, consequently, thermal radiation from a black-body immersed in such media would be inhibited.
In addition, by combining Eqs.\,(\ref{Eq:B21prim}) and (\ref{Eq:Planck}) we find that the stimulated emission rate vanishes. Therefore, although previous works pointed out the possibility of controlling stimulated emission \cite{Milonni1995,Milonni2003}, we conclude that all fundamental radiative processes are inhibited inside unbounded 3D homogeneous lossless NZI media, which can be understood as a consequence of the depletion of optical modes around the NZI frequency. The same conclusion can be obtained by directly evaluating the stimulated emission and absorption rates by means of Fermi's golden rule \cite{Supp1}, without needing to invoke the detailed balance equation or thermal equilibrium considerations.
Furthermore, we note that the inclusion of the local-field correction factors using a real-cavity model \cite{Dung2003} does not change the above conclusions (Supplemental Materials (SM) \cite{Supp1}). Moreover, including material absorption gives rise to a finite value of $A'_{21}$, directly proportional to $\mathrm{Im}[ \varepsilon]$, which can be very small \cite{Supp1}.
One has to be careful, however, in directly translating this result to systems with a lower dimensionality. It is worth noticing that the macroscopic QED formalism \cite{Dung2003,Vogel2006} used above provides a very convenient theoretical framework to evaluate radiative transitions based on the imaginary part of the dyadic Green's function. This compact formulation fails, however, to provide physical insight into how different classes of NZI media affect radiative transitions.
To address these issues, we introduce a simple and unified framework that allows us to clarify the modification of fundamental radiative processes in NZI media of dimension $d$. Our formulation is convenient as it provides the necessary physical insight to understand how radiative transitions are affected by the material parameters and number of dimensions concomitantly. This is relevant since some metamaterial implementations of NZI media often exhibit a reduced dimensionality \cite{Vesseur2013,Briggs2013,Li2016,Liberal2017photonic,Reshef2017direct,Luo2018coherent}.
We start by following the quantization procedure proposed by Milonni \cite{Milonni1995,Milonni2003}, where the two-level system can be modelled with the following Hamiltonian (see details in SM \cite{Supp1}):
\begin{equation}
\widehat{H}=\frac{\hslash \omega}{2}\widehat{\sigma}_z + \sum_{\mathbf{k},\lambda}\hslash\omega_k \widehat{a}^{\dagger}_{\mathbf{k}\lambda}\widehat{a}_{\mathbf{k}\lambda}
+ \sum_{\mathbf{k},\lambda}\hslash \left(g_{\mathbf{k}\lambda}\widehat{\sigma}^{\dagger}\widehat{a}_{\mathbf{k}\lambda} + h.c.\right),
\label{eq:H}
\end{equation}
with $\widehat{\sigma}_z=\left|e\right\rangle\left\langle e\right|-\left|g\right\rangle\left\langle g\right|$, $\widehat{\sigma}^{\dagger}=\left|e\right\rangle\left\langle g\right|$ , $\omega$ being the transition frequency of the emitter and $\omega_k$ the eigenfrequency of the mode with wavevector $\mathbf{k}$. The sums run over all optical modes of wavevector $\mathbf{k}$, polarization $\lambda$ with unit polarization vector $\mathbf{e}_{\mathbf{k}\lambda}$, and annihilation operator $\widehat{a}_{\mathbf{k}\lambda}$. The coupling between the emitter and optical modes is characterized by the coupling strength
\begin{equation}
g_{\mathbf{k}\lambda}=-i\sqrt{\frac{Z\left(\omega_k\right)}{n_g\left(\omega_k\right)}}\sqrt{\frac{\hslash\omega_k}{2\varepsilon_0 V_d}}
\,\,\mathbf{p}\cdot\mathbf{e}_{\mathbf{k}\lambda}.
\label{eq:g_k}
\end{equation}
The impact of the background medium and its dispersion properties in the light-matter coupling are described by the presence of the normalized wave impedance $Z\left(\omega_k\right)$ and the group index $n_g\left(\omega_k\right)$ in Eq.\,(\ref{eq:g_k}). $V_d$ is the $d$-dimensional quantization volume.
Next, the relevant transition rates can be computed by using Fermi's golden rule \cite{Loudon2000}. For instance, the $A_{21}'$ coefficient corresponding to the rate of spontaneous emission reduces to
\begin{equation}
A_{21}'=2\pi\sum_{\mathbf{k}\lambda}\left|g_{\mathbf{k}\lambda}\right|^2\delta\left(\omega_k-\omega\right).
\label{eq:A21}
\end{equation}
This basic equation provides an often overlooked but important physical insight. It conveys that the decay rate of a quantum emitter depends on the number of available optical modes, $\sum_{\mathbf{k}\lambda}$, and on how strongly it couples to them, $\left|g_{\mathbf{k}\lambda}\right|^2$; both factors must be taken into account in order to correctly describe the physics. Thinking in terms of how the modes are asymptotically depleted in a system (e.g., because the refractive index goes to zero) would not provide the complete physical picture if the coupling strength scales inversely in the zero-index limit. For this reason, it is in principle possible for the spontaneous emission rate to converge to zero, to infinity or to a finite value in the zero-index limit.
To further emphasize this point, we rewrite Eq.\,(\ref{eq:A21}) as the product of two factors:
$A_{21}'=G\left(\omega\right)N\left(\omega\right)$, describing (i) $G\left(\omega\right)$, how strongly the emitter couples to the optical modes as a function of the background, and (ii) $N\left(\omega\right)$, which describes the number of available modes. These factors are defined as follows:
\begin{align}
G\left(\omega\right)&=\left|\frac{g_{\mathbf{k}\lambda}}{g_{\mathbf{k}\lambda}^0}\right|^2 _{\omega_k \rightarrow \omega}
=\frac{Z\left(\omega\right)}{n_g\left(\omega\right)}
\label{eq:G}\\
N\left(\omega\right)&=
2\pi\sum_{\mathbf{k},\lambda}\left|g_{\mathbf{k}\lambda}^0\right|^2\delta\left(\omega_k-\omega\right)\nonumber\\
&=A_d\,\,\frac{\left|\mathbf{p}\right|^2}{\hslash\varepsilon_0}
\,\,\frac{\omega^d}{c^d}
\,\,\left|\mathrm{Re}\left[n(\omega)\right]\right|^{d-1}\,\,n_g\left(\omega\right),
\label{eq:N}
\end{align}
\noindent with $A_1=1/2$, $A_2=1/4$ and $A_3=1/(3\pi)$, and $g_{\mathbf{k}\lambda}^0$ is the vacuum limit of $g_{\mathbf{k}\lambda}$.
Understanding the explicit dependence on these two factors as a function of the material parameters and the number of dimensions provides a comprehensive picture of how different NZI media modify radiative processes. First, $N\left(\omega\right)$ is defined as the decay rate that would be observed if we could couple to the existing modes in the dispersive medium, but with the coupling strength for modes in vacuum. Consequently, $N\left(\omega_Z\right)$ gives a good account of the modification of the number of modes induced by the material parameters. In particular, its dependence on the background is contained within the factor $n^{d-1}\left(\omega\right)\,\,n_g\left(\omega\right)$. This scaling rule can be understood since the sum over all modes is transformed into an integral $\int_0^{\infty}\,\,k^{d-1}\,dk$. In general, the number of modes depletes as the refractive index goes to zero, and this behavior is observed to be independent of the class of NZI media. $N\left(\omega\right)$ only depends on the refractive index, and the depletion in the NZI limit is stronger for a larger number of dimensions.
A very different behavior is observed in terms of how strongly we couple to these modes. Specifically, $G\left(\omega\right)$ is defined as the magnitude square of the ratio between the coupling strength and its vacuum counterpart. Its scaling rule with respect to the background, given by $Z\left(\omega\right)/n_g\left(\omega\right)$, is \emph{independent} of the number of dimensions, but it critically depends on the class of NZI media. This behavior can be intuitively understood by noting that the interaction Hamiltonian is defined within the electric dipole approximation
$\widehat{H}_I=-\widehat{\mathbf{p}}\cdot\widehat{\mathbf{E}}$, and, therefore, $\left|g_{\mathbf{k}\lambda}\right|^2$ is proportional to the electric field intensity. Importantly, the background material modifies the strength of the electric field fluctuations per unit of energy, thus modifying the strength of how the modes couple to the emitter. Since the classical energy per mode can be written as
$U_{\mathbf{k}\lambda}=2\varepsilon_0 V\left|\mathbf{E}^{(+)}\right|^2/(Z\left(\omega_k\right)/n_g\left(\omega_k\right))$ (see SM \cite{Supp1}), it is clear that the electric field intensity per energy unit is modified by the factor $Z\left(\omega_k\right)/n_g\left(\omega_k\right)$ due to the material properties. In this manner, we find that materials with a high, or even diverging normalized wave impedance, like ENZ media, will tend to enhance radiative transitions compared to other classes of NZI media.
Ultimately, it is the product between $G\left(\omega\right)$ and $N\left(\omega\right)$ that provides the total decay rate. By combining Eqs.\,(\ref{eq:G}) and (\ref{eq:N}) we obtain the compact expression:
\begin{equation}
A_{21}' = Z(\omega)\,\,\left|\mathrm{Re}\left[n(\omega)\right]\right|^{d-1}\,\,A_{21}.
\label{A21gen}
\end{equation}
By applying this general equation to the different NZI cases, i.e. at $\omega=\omega_Z$, one can note that the inhibition of spontaneous emission is not valid for all dimensions, even if the refractive index approaches zero. In fact, depending on the interplay between the normalized wave impedance and refractive index, one can observe either suppressed, finite, or even divergent decay rates in the NZI limit (Table 1 and Figs.\,\ref{fig:Fig1SpontEM3D}b-c for 1D and 2D cases).
The Purcell factor $A_{21}'/A_{21}$ might also take a constant value (2D ENZ or 1D EMNZ media) or present a divergent behavior (1D ENZ media), in accordance with previous studies in dispersive ENZ waveguides \cite{Alu2009boosting,Fleury2013,Sokhoyan2013,Sokhoyan2015,Li2016}. One might be tempted to justify this behavior as an example of Purcell enhancement in slow-light waveguides \cite{Lodahl2015}. However, this reasoning fails to explain all NZI cases. For instance, 1D MNZ is also a slow-light waveguide, with a near-zero group velocity at the MNZ frequency, and yet at this point, spontaneous emission is inhibited (see SM Section IV for a discussion on the validity of our theory to model implementations of 1D ENZ and MNZ media with dispersive waveguides \cite{Supp1}).
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
& $A'_{21}/A_{21}$ & ENZ & MNZ & EMNZ\\
\hline
1D & $Z(\omega_Z)$ & $\infty$ &$0$ & $\sqrt{\frac{\frac{d\mu(\omega)}{d\omega }\Bigr|_{\substack{\omega=\omega_Z}}}{\frac{d\varepsilon(\omega)}{d\omega }\Bigr|_{\substack{\omega=\omega_Z}}}}$\\
\hline
2D & $Z(\omega_Z)n(\omega_Z)$ & $|\mu(\omega_Z)|$ &$0$ &$0$ \\
\hline
3D & $Z(\omega_Z)n(\omega_Z)^2$ & 0 & 0 & 0 \\
\hline
\end{tabular}
\caption{Purcell factor at $\omega_Z$ for ENZ, MNZ and EMNZ media in 1D, 2D and 3D.}
\label{Table}
\end{table}
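As a simple numerical illustration of Eq.\,(\ref{A21gen}) and Table \ref{Table}, the short Python sketch below evaluates the normalized Purcell factor $Z(\omega)\left|\mathrm{Re}\left[n(\omega)\right]\right|^{d-1}$ for near-zero constant material parameters. The relations $Z=\sqrt{\mu/\varepsilon}$ and $n=\sqrt{\varepsilon\mu}$ (in relative units) and the particular numerical values are assumptions of this illustration and are not taken from the main text.
\begin{verbatim}
import numpy as np

def purcell_factor(eps, mu, d):
    """Normalized Purcell factor A21'/A21 = Z |Re n|^(d-1), cf. Eq. (A21gen).

    eps, mu : relative permittivity and permeability at omega_Z
    d       : number of spatial dimensions (1, 2 or 3)
    Z = sqrt(mu/eps) and n = sqrt(eps*mu) in relative units are assumed here.
    """
    Z = np.sqrt(mu / eps)
    n = np.sqrt(eps * mu)
    return np.abs(Z) * np.abs(np.real(n)) ** (d - 1)

# Illustrative NZI limits approached with a small parameter delta.
delta = 1e-6
cases = {"ENZ": (delta, 1.0), "MNZ": (1.0, delta), "EMNZ": (delta, delta)}
for name, (eps, mu) in cases.items():
    rates = [purcell_factor(eps, mu, d) for d in (1, 2, 3)]
    print(name, ["%.3e" % r for r in rates])
# ENZ grows without bound in 1D, tends to |mu| in 2D and to zero in 3D;
# MNZ is suppressed in all dimensions, in agreement with Table 1.
\end{verbatim}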
The influence of the dimensionality on the absorption and stimulated emission rates can be obtained by repeating the same procedure used for spontaneous emission (see SM section III \cite{Supp1}). It confirms that these two rates are identical in all instances, and that the ratio between stimulated and spontaneous emission rates is given by the number of photons per optical mode. Therefore, we find that stimulated emission and absorption rates are always proportional to the spontaneous emission rate, as imposed by the very structure of the interaction Hamiltonian given by Eq.\,(\ref{eq:H}). It is then concluded that the ratios between the different radiative processes are fixed, and cannot be modified by changing the background medium.
In conclusion, we investigated dimension-dependent fundamental radiative processes in NZI media. Our formalism illustrates that in order to get the correct physical picture it is crucial to consider both the number of optical modes that may couple to an emitter as well as the coupling strength. These quantities are found to depend highly on the material class and the number of spatial dimensions. For example, we theoretically worked out a dimension-dependent Purcell factor leading to an inhibition of spontaneous emission in most NZI cases, but an enhanced Purcell factor inside 1D ENZ materials. Based on detailed-balance considerations, it can be readily found that other radiative processes such as stimulated emission and absorption follow the modifications induced on spontaneous emission.
\section*{Acknowledgments}
R.W.B., N.E. and E.M. acknowledge support from the Defense Advanced Research Projects Agency (DARPA) Defense Sciences Office (DSO) Nascent program and from the US Army Research Office. This work was performed while M.L. was a recipient of a Fellowship of the Belgian American Educational Foundation. M.L. and E.N.K. would like to thank Daryl Vulis and Yang Li for stimulating discussions on zero-index topics.
O.R. acknowledges the support of the Banting Postdoctoral Fellowship of the Natural Sciences and Engineering Research Council of Canada (NSERC). E.N.K. is supported by the National Science Foundation Graduate Research Fellowship Program under Grant Nos. DGE1144152 and DGE1745303. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
1,116,691,499,876 | arxiv | \section{Introduction} \label{sec:1}
We are currently witnessing an increasing integration of our energy, transportation, and cyber networks, which, coupled with the human interactions, is giving rise to a new level of complexity in the transportation network. As we move to increasingly complex emerging mobility systems, new control approaches are needed to optimize the impact on system behavior of the interplay between vehicles at different transportation scenarios, e.g., intersections, merging roadways, roundabouts, speed reduction zones. These scenarios along with the driver responses to various disturbances \cite{Malikopoulos2013} are the primary sources of bottlenecks that contribute to traffic congestion \cite{Margiotta2011}.
An automated transportation system \cite{Zhao2019} can alleviate congestion, reduce energy use and emissions, and improve safety by increasing significantly traffic flow as a result of closer packing of automatically controlled vehicles in platoons. One of the very early efforts in this direction was proposed in 1969 by Athans \cite{Athans1969} for safe and efficient coordination of merging maneuvers with the intention to avoid congestion. Varaiya \cite{Varaiya1993} has discussed extensively the key features of an automated intelligent vehicle-highway system and proposed a related control system architecture.
Connected and automated vehicles (CAVs) provide the most intriguing opportunity for enabling decision makers to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. Several research efforts have been reported in the literature on coordinating CAVs in different transportation scenarios, e.g., intersections, merging roadways, roundabouts, speed reduction zones. In 2004, Dresner and Stone \cite{Dresner2004} proposed the use of the reservation scheme to control a single intersection of two roads with vehicles traveling with similar speed in a single direction on each road.
Since then, several approaches have been proposed \cite{Dresner2008,DeLaFortelle2010} to maximize the throughput of signalized-free intersections including extensions of the reservation scheme in \cite{Dresner2004}.
Some approaches have focused on coordinating vehicles at intersections to improve travel time \cite{Yan2009}. Other approaches have considered minimizing the overlap in the position of vehicles inside the intersection, rather than arrival time \cite{Lee2012}. Kim and Kumar \cite{Kim2014} proposed an approach based on model predictive control that allows each vehicle to optimize its movement locally in a distributed manner with respect to any objective of interest.
A detailed discussion of the research efforts in this area that have been reported in the literature to date can be found in \cite{Malikopoulos2016a}.
In earlier work, a decentralized optimal control framework was established for coordinating online CAVs in different transportation scenarios, e.g., merging roadways, urban intersections, speed reduction zones, and roundabouts. The analytical solution without considering state and control constraints was presented in \cite{Rios-Torres2015}, \cite{Rios-Torres2}, and \cite{Ntousakis:2016aa} for coordinating online CAVs at highway on-ramps, in \cite{Zhang2016a} at two adjacent intersections, and in \cite{Malikopoulos2018a} at roundabouts.
The solution of the unconstrained problem was also validated experimentally at the University of Delaware's Scaled Smart City using 10 CAV robotic cars \cite{Malikopoulos2018b} in a merging roadway scenario. The solution of the optimal control problem considering state and control constraints was presented in \cite{Malikopoulos2017} at an urban intersection.
However, the policy designating the sequence in which each CAV crosses the intersection in the aforementioned approaches was based on a first-in-first-out queue, imposing limitations on the optimal solution. Moreover, no lane changing or left and right turns were considered.
In this paper, we formulate an upper-level optimization problem, the solution of which yields, for each CAV, the optimal sequence and lane to cross the intersection. The effectiveness of the solution is illustrated through simulation.
The structure of the paper is organized as follows. In Section II, we formulate the problem of vehicle coordination at an urban intersection and provide the modeling framework. In Section III, we briefly present the analytical, closed-form solution for the low-level optimization problem. In Section IV, we present the upper-level optimization problem, the solution of which yields, for each CAV, the optimal sequence and lane to cross the intersection. Finally, in Section V, we validate the effectiveness of the solution through simulation. We offer concluding remarks in Section VI.
\section{Problem Formulation} \label{sec:2}
\subsection{Modeling Framework} \label{sec:2a}
We consider CAVs at a 100\% penetration rate crossing a signalized-free intersection (Fig. \ref{fig:1}). The region at the center of the intersection, called \textit{merging zone}, is the area of potential lateral collision of the vehicles. The intersection has a \textit{control zone} inside of which the CAVs can communicate with each other and with the intersection's \textit{crossing protocol}. The \textit{crossing protocol}, defined formally in the next subsection, stores the vehicles' path trajectories from the time they enter until the time they exit the control zone. The distance from the entry of the control zone until the entry of the merging zone is $S_c$ and, although it is not restrictive, we consider it to be the same for all entry points of the control zone. We also consider the merging zone to be a square of side $S_m$ (Fig. \ref{fig:1}). Note that the length $S_c$ could be in the order of hundreds of $m$ depending on the crossing protocol's communication range capability, while $S_m$ is the length of a typical intersection. The CAVs crossing the intersection can also make a right turn of radius $R_r$, or a left turn of radius $R_l$ (Fig. \ref{fig:1}). The intersection's geometry is not restrictive in our modeling framework, and is used only to determine the total distance travelled by each CAV inside the control zone.
\begin{figure}
\centering
\includegraphics[width=3.4 in]{figures/fig1.pdf}
\caption{A signalized-free intersection.}%
\label{fig:1}%
\end{figure}
Let $\mathcal{N}(t)=\{1,\ldots,N(t)\}$, $N(t)\in\mathbb{N}$, be the set of CAVs inside the control zone at time $t\in\mathbb{R}^{+}$. Let $t_{i}^{f}$ be the assigned time for vehicle $i$ to exit the control zone.
There is a number of ways to assign $t_{i}^{f}$ for each vehicle $i$. For example, we may
impose a strict first-in-first-out queuing structure \cite{Malikopoulos2017}, where each CAV must
exit the control zone in the same order it entered the control zone. The policy, which determines the time $t_{i}^{f}$ that each vehicle $i$ exits the control zone, is the
result of an upper-level optimization problem and can aim at maximizing the throughput of the intersection. On the other hand, deriving the optimal control input (minimum acceleration/deceleration) for each vehicle $i$ from the time $t_{i}^{0}$ it enters the control zone to achieve the target $t_{i}^{f}$ can aim at minimizing its energy \cite{Malikopoulos2010a}.
In what follows, we present a two-level, joint optimization framework: (1) an upper level optimization that yields for each CAV $i\in\mathcal{N}(t)$ with a given origin (entry of the control zone) and desired destination (exit of the control zone) the sequence that will be exiting the control zone, namely, (a) minimum time $t_{i}^{f}$ to exit the control zone and (b) optimal path including the lanes that each CAV should be occupying while traveling inside the control zone; and (2) a low-level optimization that yields, for CAV $i\in\mathcal{N}(t),$ its optimal control input (acceleration/deceleration) to achieve the optimal path and $t_{i}^{f}$ derived in (1) subject to the state, control, and safety constraints.
The two-level optimization framework is used by each CAV $i\in\mathcal{N}(t)$ as follows. When vehicle $i$ enters the control zone at $t_{i}^{0}$, it accesses the intersection's \textit{crossing protocol} that includes the path trajectories, defined formally in the next subsection, of all CAVs inside the control zone. Then, vehicle $i$ solves the upper-level optimization problem and derives the minimum time $t_{i}^{f}$ to exit the control zone along with its optimal path including the appropriate lanes that it should occupy. The outcome of the upper-level optimization problem becomes the input of the low-level optimization problem. In particular, once the CAV derives the minimum time $t_{i}^{f}$, it derives its minimum acceleration/deceleration profile, in terms of energy, to achieve the exit time $t_{i}^{f}$.
The implications of the proposed optimization framework are that CAVs do not have to come to a full stop at the intersection, thereby conserving momentum and energy while also improving travel time. Moreover, by optimizing each vehicle's acceleration/deceleration, we minimize transient engine operation \cite{Malikopoulos2008b}, and thus we have additional benefits in fuel consumption.
\subsection{Vehicle Model, Constraints, and Assumptions} \label{sec:2b}
In our analysis, we consider that
each CAV $i\in\mathcal{N}(t)$ is governed by the following dynamics
\begin{equation}%
\begin{split}
\dot{p}_{i} & =v_{i}(t)\\
\dot{v}_{i} & =u_{i}(t)\\
\dot{s}_{i} & = \xi_i \cdot (v_{k}(t)-v_{i}(t))
\label{eq:model2}
\end{split}
\end{equation}
where $p_{i}(t)\in\mathcal{P}_{i}$, $v_{i}(t)\in\mathcal{V}_{i}$, and
$u_{i}(t)\in\mathcal{U}_{i}$ denote the position, speed and
acceleration/deceleration (control input) of each vehicle $i$ inside the control zone at time $t\in[t_{i}^{0}, t_{i}^{f}]$, where $t_i^0$ and $t_i^f$ are the times that vehicle $i$ enters and exits the control zone respectively; ~$s_{i}(t)\in\mathcal{S}_{i}$, with $s_{i}(t)=\xi_i\cdot\big(p_{k}(t)-p_{i}(t)\big),$ denotes the distance of vehicle $i$ from the CAV $k\in\mathcal{N}(t)$ which is physically immediately ahead of $i$ in the same lane, and $\xi_{i}$ is a reaction constant of vehicle $i$.
Let $x_{i}(t)=\left[p_{i}(t) ~ v_{i}(t) ~ s_{i}(t)\right] ^{T}$ denote the state of each vehicle $i$ taking values in $\mathcal{X}_{i}%
=\mathcal{P}_{i}\times\mathcal{V}_{i}\times\mathcal{S}_{i}$, with initial value
$x_{i}(t_{i}^{0})=x_{i}^{0}=\left[p_{i}^{0} ~ v_{i}^{0} ~s_{i}^{0}\right] ^{T},$ where $p_{i}^{0}= p_{i}(t_{i}^{0})=0$, $v_{i}^{0}= v_{i}(t_{i}^{0})$, and $s_{i}^{0}= s_{i}(t_{i}^{0})$ at the entry of the control zone. The state space
$\mathcal{X}_{i}$ for each vehicle $i$ is
closed with respect to the induced topology on $\mathcal{P}_{i}\times
\mathcal{V}_{i}\times\mathcal{S}_{i}$ and thus, it is compact.
We need to ensure that for any initial state $(t_i^0, x_i^0)$ and every admissible control $u(t)$, the system \eqref{eq:model2} has a unique solution $x(t)$ on some interval $[t_i^0, t_i^f]$.
The following observations about \eqref{eq:model2} establish the regularity conditions required both on the state equations and on the admissible controls $u(t)$ to guarantee local existence and uniqueness of solutions for \eqref{eq:model2}: a) the state equations are continuous in $u$ and continuously differentiable in the state $x$, b) the first derivative of the state equations in $x$ is continuous in $u$, and c) the admissible control $u(t)$ is continuous with respect to $t$.
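For illustration, a minimal forward-Euler simulation of the dynamics \eqref{eq:model2} is sketched below in Python; the time step, the reaction constant, the lead-vehicle speed profile, and the control input are placeholder values chosen only for this example.
\begin{verbatim}
import numpy as np

def simulate_cav(p0, v0, s0, u_fn, v_lead_fn, xi, t0, tf, dt=0.01):
    """Forward-Euler integration of dp/dt = v, dv/dt = u, ds/dt = xi*(v_k - v)."""
    ts = np.arange(t0, tf, dt)
    p, v, s = p0, v0, s0
    hist = []
    for t in ts:
        hist.append((t, p, v, s))
        u = u_fn(t)             # control input (acceleration/deceleration)
        v_k = v_lead_fn(t)      # speed of the preceding vehicle k
        dp, dv, ds = v * dt, u * dt, xi * (v_k - v) * dt
        p, v, s = p + dp, v + dv, s + ds
    return np.array(hist)       # columns: t, p, v, s

# Placeholder example: constant lead-vehicle speed and a mild deceleration.
hist = simulate_cav(p0=0.0, v0=13.0, s0=30.0,
                    u_fn=lambda t: -0.3, v_lead_fn=lambda t: 12.0,
                    xi=1.0, t0=0.0, tf=20.0)
print(hist[-1])                 # final state (t, p, v, s)
\end{verbatim}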
To ensure that the control input and vehicle speed are within a
given admissible range, the following constraints are imposed.
\begin{gather}%
u_{i,min} \leq u_{i}(t)\leq u_{i,max}, \label{speed_accel constraints} \quad\text{and}\\
0 < v_{min}\leq v_{i}(t)\leq v_{max},\label{speed}\quad\forall t\in\lbrack t_{i}%
^{0},t_{i}^{f}],
\end{gather}
where $u_{i,min}$, $u_{i,max}$ are the minimum deceleration and maximum
acceleration for each vehicle $i\in\mathcal{N}(t)$, and $v_{min}$, $v_{max}$ are the minimum and maximum speed limits respectively.
To ensure the absence of rear-end collision of two consecutive vehicles traveling on the same lane, the position of the preceding vehicle should be greater than or equal to the position of the following vehicle plus a predefined safe distance $\delta_i(t)$. Thus we impose the rear-end safety constraint
\begin{equation}
\begin{split}
s_{i}(t)=\xi_i \cdot (p_{k}(t)-p_{i}(t)) \ge \delta_i(t),~ \forall t\in [t_i^0, t_i^f].
\label{eq:rearend}
\end{split}
\end{equation}
We consider a constant time headway, instead of a constant distance, that each vehicle should keep when following other vehicles; thus, the minimum safe distance $\delta_i(t)$ is expressed as a function of the speed $v_i(t)$ and the minimum time headway between vehicle $i$ and its preceding vehicle $k$, denoted as $\rho_i$.
\begin{equation}
\begin{split}
\delta_i(t)=\gamma_i + \rho_i \cdot v_i(t),~ \forall t\in [t_i^0, t_i^f],
\label{eq:safedist}
\end{split}
\end{equation}
where $\gamma_i$ is the standstill distance (i.e., the distance between two vehicles when they both stop).
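Given speed and headway trajectories, the rear-end safety constraint \eqref{eq:rearend} with the time-headway-based minimum distance \eqref{eq:safedist} can be checked pointwise as sketched below; the default values $\gamma_i=1.5$ m and $\rho_i=1.0$ s match the simulation setup of Section \ref{sec:5}, while the synthetic trajectory is a placeholder.
\begin{verbatim}
import numpy as np

def rear_end_violations(s_traj, v_traj, gamma=1.5, rho=1.0):
    """Indices where s_i(t) < delta_i(t) = gamma_i + rho_i * v_i(t)."""
    s_traj, v_traj = np.asarray(s_traj), np.asarray(v_traj)
    return np.where(s_traj < gamma + rho * v_traj)[0]

# Synthetic example: the speed ramps up while the gap shrinks.
v = np.linspace(10.0, 16.0, 200)
s = np.linspace(20.0, 15.0, 200)
print("constraint violated at", rear_end_violations(s, v).size, "of 200 steps")
\end{verbatim}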
A lateral collision can occur if a vehicle $j\in\mathcal{N}(t)$ is cruising on a different road from $i$ inside the merging zone. In this case, the lateral safety constraint between $i$ and $j$ is
\begin{equation}
\begin{split}
s_{i}(t)=\xi_i \cdot (p_{j,i}(t)-p_{i}(t)) \ge \delta_i(t),~ \forall t\in [t_i^0, t_i^f],
\label{eq:lateral}
\end{split}
\end{equation}
where $p_{j,i}(t)$ is the distance of vehicle $j$ from the entry point that vehicle $i$ entered the control zone.
\begin{definition} \label{def:lanes}
The set of all lanes at the roads of the intersection is denoted by $\mathcal{L}:=\{1,\dots,M\}, M\in\mathbb{N}.$
\end{definition}
\begin{definition} \label{def:lanesfunction}
For each vehicle $i\in\mathcal{N}(t)$, the function $l_i(t): [t_i^0, t_i^f]\to \mathcal{L}$ yields the lane the vehicle $i$ occupies inside the control zone at time $t$.
\end{definition}
\begin{definition} \label{def:cardinal}
For each vehicle $i\in\mathcal{N}(t)$, the pair of the cardinal point that the vehicle enters the control zone and the cardinal point that the vehicle exits the control zone is denoted by $o_i$.
\end{definition}
For example, based on Definition \ref{def:cardinal}, for a vehicle $i$ that enters the control zone from the West entry (Fig. \ref{fig:1}) and exits the control zone from the South exit, $o_i=(W,S)$.
\begin{definition}\label{def:path}
For each vehicle $i\in\mathcal{N}(t)$, the function $t_{p_i,l_i}\big(p_i(t),l_i(t)\big): \mathcal{P}_i\times \mathcal{L}\to[t_i^0, t_i^f],$ is called the \textit{path trajectory} of vehicle $i$, and it yields the time when vehicle $i$ is at the position $p_i(t)$ inside the control zone and occupies lane $l_i(t)$.
\end{definition}
\begin{definition}\label{def:protocol}
The intersection's \textit{crossing protocol} is denoted by $\Pi(t)$ and includes the following information
\begin{gather}\label{eq:protocol}
\Pi(t):=\{t_{p_i,l_i}\big(p_i(t),l_i(t)\big), l_i(t), o_i, t_i^0, t_i^f\}, \\ \nonumber
\forall i\in\mathcal{N}(t), t\in\mathbb{R}^+.
\end{gather}
\end{definition}
\begin{remark} \label{rem:lanechange}
The vehicles traveling inside the control zone can change lanes either (1) in the lateral direction (e.g., move to a neighbor lane), or (2) when making a right (or a left) turn inside the merging zone. In the former case, when the vehicle changes lane it travels along the hypotenuse $dy$ of the triangle created by the width of the lane and the longitudinal displacement $dp$ if it had not changed lane. Thus, in this case, the vehicle travels an additional distance which is equal to the difference between the hypotenuse $dy$ and the longitudinal displacement $dp$, i.e., $dy-dp$.
\end{remark}
\begin{remark} \label{rem:turns}
When a vehicle is about to make a right turn it must occupy the right lane of the road before it enters the merging zone. Similarly, when a vehicle is about to make a left turn it must occupy the left lane before it enters the merging zone.
\end{remark}
In the modeling framework presented above, we impose the following assumptions:
\begin{assumption} \label{ass:lane}
The vehicle's additional distance $dy-dp$ traveled when it changes lanes in the lateral direction can be neglected.
\end{assumption}
\begin{assumption} \label{ass:noise}
Each CAV $i\in\mathcal{N}(t)$ has proximity sensors and can communicate with other CAVs and the \textit{crossing protocol} without any errors or delays.
\end{assumption}
The first assumption can be justified since we consider an intersection and the speed limit inside the control zone is relatively low, hence $dy\approx dp$. The second assumption may be strong, but it is relatively straightforward to relax it as long as the noise in the communication, measurements and delays are bounded. In this case, we can determine upper bounds on the state uncertainties as a result of sensing or communication errors and delays, and incorporate these into more conservative safety constraints.
When each vehicle $i$ with a given $o_i$ enters the control zone, it accesses the intersection's \textit{crossing protocol} and solves two optimization problems: (1) an upper-level optimization problem, the solution of which yields its path trajectory $t_{p_i,l_i}\big(p_i(t),l_i(t)\big)$ and the minimum time $t_{i}^{f}$ to exit the control zone; and (2) a low-level optimization problem, the solution of which yields its optimal control input (acceleration/deceleration) to achieve the optimal path and $t_{i}^{f}$ derived in (1) subject to the state, control, and safety constraints.
We start our exposition with the low-level optimization problem, and then we discuss the upper-level problem.
\section{Low-level optimization} \label{sec:3}
In this section, we consider that the solution of the upper-level optimization problem is given, and thus, the minimum time $t_{i}^{f}$ for each vehicle $i\in\mathcal{N}(t)$ is known, and we focus on a low-level optimization problem that yields for each vehicle $i$ the optimal control input (acceleration/deceleration) to achieve the assigned $t_{i}^{f}$ subject to the state, control, and safety constraints.
\begin{problem} \label{problem1}
Once $t_{i}^{f}$ is determined, the low-level problem for each vehicle $i\in\mathcal{N}(t)$ is to minimize the cost functional $J_{i}(u(t))$, which is the $L^2$-norm of the control input in $[t_i^0, t_i^f]$
\begin{gather}\label{eq:decentral}
\min_{u(t)\in U_i} J_{i}(u(t))= \frac{1}{2} \int_{t^0_i}^{t^f_i} u^2_i(t)~dt,\\
\text{subject to}%
:\eqref{eq:model2},\eqref{speed_accel constraints},\eqref{speed}, \eqref{eq:rearend},\nonumber\\
\text{and given }t_{i}^{0}\text{, }v_{i}^{0}\text{, }t_{i}^{f}\text{,
}p_{i}(t_{i}^{0})\text{, }p_{i}(t_{i}^{f}),\nonumber
\end{gather}
where $p_{i}(t_{i}^{0})=0$, while the value of $p_{i}(t_{i}^{f})$ for each $i\in\mathcal{N}(t)$ depends on $o_i$ and, based on Assumption \ref{ass:lane}, can take the following values (Fig. \ref{fig:1}): (1) $p_{i}(t_{i}^{f})=2 S_c + S_m$, if the CAV crosses the merging zone, (2) $p_{i}(t_{i}^{f})=2 S_c + \frac{\pi R_r}{2}$, if the CAV makes a right turn at the merging zone, and (3) $p_{i}(t_{i}^{f})=2 S_c + \frac{\pi R_l}{2}$, if the CAV makes a left turn at the merging zone.
\end{problem}
For the analytical solution of \eqref{eq:decentral}, we formulate the Hamiltonian
\begin{gather}
H_{i}\big(t, p_{i}(t), v_{i}(t), s_{i}(t), u_{i}(t)\big) \nonumber \\
=\frac{1}{2} u^{2}_{i}(t) + \lambda^{p}_{i} \cdot v_{i}(t) + \lambda^{v}_{i} \cdot u_{i}(t) +\lambda^{s}_{i} \cdot \xi_i \cdot (v_{k}(t) - v_{i}(t)) \nonumber\\
+ \mu^{a}_{i} \cdot(u_{i}(t) - u_{max})
+ \mu^{b}_{i} \cdot(u_{min} - u_{i}(t)) \nonumber\\
+ \mu^{c}_{i} \cdot u_{i}(t) - \mu^{d}_{i} \cdot u_{i}(t) \nonumber\\
+ \mu^{s}_{i} \cdot (\rho_i \cdot u_i(t) - \xi_i\big(v_{k}(t) - v_i(t)\big)) ,\label{eq:16b}
\end{gather}
where $\lambda^{p}_{i}$, $\lambda^{v}_{i}$, and $\lambda^{s}_{i}$ are the influence functions \cite{Bryson:1963}, and
$\mu^{T}$ is the vector of the Lagrange multipliers. To address this problem, the constrained and unconstrained arcs will be pieced together to satisfy the Euler-Lagrange equations and the necessary conditions of optimality.
For the case that none of the state and control constraints become active, the optimal control is \cite{Malikopoulos2019ACC}
\begin{equation}
u^{*}_{i}(t) = (a_{i} - b_{i} \cdot \xi_i) \cdot t + c_{i}, ~ t \in[t^{0}_{i}, t_i^f]. \label{eq:20}
\end{equation}
Substituting the last equation into \eqref{eq:model2} we find the optimal speed and position for each vehicle,
namely
\begin{gather}
v^{*}_{i}(t) = \frac{1}{2} (a_{i} - b_{i} \cdot \xi_i) \cdot t^2 + c_{i} \cdot t +d_{i}, ~ t \in[t^{0}_{i}, t_i^f], \label{eq:21}\\
p^{*}_{i}(t) = \frac{1}{6} (a_{i} - b_{i} \cdot \xi_i) \cdot t^3 +\frac{1}{2} c_{i} \cdot t^2 + d_{i}\cdot t +e_{i}, \label{eq:22} \\~ t \in[t^{0}_{i}, t_i^f], \nonumber
\end{gather}
where $a_{i}$, $b_{i}$, $c_{i}$, $d_{i}$ and $e_{i}$ are constants of integration that can be computed by the initial, final, and transversality conditions \cite{Malikopoulos2019ACC}.
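As a numerical sketch of how the constants of integration in \eqref{eq:20}--\eqref{eq:22} can be obtained, the Python snippet below lumps $(a_{i}-b_{i}\cdot\xi_i)$ into a single coefficient and solves the resulting linear system for the boundary conditions $p_i(t_i^0)=0$, $v_i(t_i^0)=v_i^0$, and $p_i(t_i^f)=p_i^f$, together with $u_i(t_i^f)=0$ as a stand-in transversality condition for a free final speed; the exact conditions used in \cite{Malikopoulos2019ACC} may differ, so this is only an illustration.
\begin{verbatim}
import numpy as np

def unconstrained_coeffs(t0, tf, v0, pf, p0=0.0):
    """Solve for (A, c, d, e) in u = A*t + c, v = A*t^2/2 + c*t + d,
    p = A*t^3/6 + c*t^2/2 + d*t + e, where A stands for (a_i - b_i*xi_i).

    Imposed conditions: p(t0)=p0, v(t0)=v0, p(tf)=pf, u(tf)=0
    (the last one is an assumed stand-in for the transversality condition)."""
    M = np.array([
        [t0**3 / 6, t0**2 / 2, t0,  1.0],   # p(t0) = p0
        [t0**2 / 2, t0,        1.0, 0.0],   # v(t0) = v0
        [tf**3 / 6, tf**2 / 2, tf,  1.0],   # p(tf) = pf
        [tf,        1.0,       0.0, 0.0],   # u(tf) = 0
    ])
    return np.linalg.solve(M, np.array([p0, v0, pf, 0.0]))

# Example: enter the control zone at t0=0 with 13 m/s, exit 250 m later at tf=18 s.
A, c, d, e = unconstrained_coeffs(t0=0.0, tf=18.0, v0=13.0, pf=250.0)
u = lambda t: A * t + c
print("u(t0) = %.4f m/s^2, u(tf) = %.4f m/s^2" % (u(0.0), u(18.0)))
\end{verbatim}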
\section{Upper-level optimization} \label{sec:4}
When a vehicle $i\in\mathcal{N}(t),$ with a given $o_i$, enters the control zone, it accesses the intersection's \textit{crossing protocol} and solves an upper-level optimization problem. The solution of this problem yields for $i$ the path trajectory $t_{p_i,l_i}\big(p_i(t),l_i(t)\big)$ and the minimum time $t_{i}^{f}$ to exit the control zone.
In our exposition, we seek to derive the minimum $t_{i}^{f}$ without activating any of the state and control constraints of the low-level optimization Problem \ref{problem1}. Therefore, the upper-level optimization problem should yield a $t_{i}^{f}$ such that the solution of the low-level optimization problem will result in the unconstrained case \eqref{eq:20} - \eqref{eq:22}.
There is an apparent trade-off between the two problems. The lower the value of $t_{i}^{f}$ in the upper-level problem, the higher the value of the control input in $[t_{i}^{0}, t_{i}^{f}]$ in the low-level problem.
The low-level problem is directly related to minimizing energy for each vehicle (individually optimal solution). On the other hand, the upper-level problem is related to maximizing the throughput of the intersection, thus eliminating stop-and-go driving (socially optimal solution). Therefore, seeking a solution for the upper-level problem which guarantees that none of the state and control constraints become active may be considered an appropriate compromise between the two.
For simplicity of notation, for each vehicle $i\in\mathcal{N}(t)$ we write the optimal position \eqref{eq:22} of the unconstrained case in the following form
\begin{gather}
p^{*}_{i}(t) = \phi_{i,3} \cdot t^3 +\phi_{i,2} \cdot t^2 + \phi_{i,1} \cdot t +\phi_{i,0} , ~ t\in [t_{i}^{0}, t_{i}^{f}], \label{eq:upper_p}%
\end{gather}
where $\phi_{i,3}, \phi_{i,2}, \phi_{i,1}, \phi_{i,0}\in\mathbb{R}$ are the constants of integration derived in the Hamiltonian analysis, in Section \ref{sec:3}, for the unconstrained case.
\begin{remark} \label{rem:3}
For each $i\in\mathcal{N}(t),$ the optimal position \eqref{eq:upper_p} is a continuous and differentiable function. Based on \eqref{speed}, it is also an increasing function with respect to $t\in\mathbb{R}^+$.
\end{remark}
Next, we investigate some properties of \eqref{eq:upper_p}.
\begin{lemma} \label{lem:1}
For each $i\in\mathcal{N}(t)$, the optimal position $p_i^*$ given by \eqref{eq:upper_p} is a one-to-one function.
\end{lemma}
\begin{proof}
Since, for each $i\in\mathcal{N}(t),$ $p_i^*(t)$ is an increasing function with respect to $t\in\mathbb{R}^+$ by \eqref{speed}, for any $t_1, t_2\in[t_{i}^{0}, t_{i}^{f}]$ with $t_1\neq t_2$, $p_i^*(t_{1})\neq p_i^*(t_{2}).$
\end{proof}
\begin{corollary} \label{cor:1}
Since, for each $i\in\mathcal{N}(t),$ \eqref{eq:upper_p} is a one-to-one function, there exists an inverse function $p_i^*(t)^{-1}$ such that
\begin{gather} \label{eq:upper_inversep}
p^{*}_{i}(t)^{-1} = \omega_{i,3} \cdot p^3 +\omega_{i,2} \cdot p^2 + \omega_{i,1} \cdot p +\omega_{i,0} ,
\end{gather}
where $\omega_{i,3}, \omega_{i,2}, \omega_{i,1}, \omega_{i,0}\in\mathbb{R}$ are constants that are a function of $\phi_{i,3}, \phi_{i,2}, \phi_{i,1}, \phi_{i,0}$.
\end{corollary}
\begin{remark} \label{rem:4}
For each $i\in\mathcal{N}(t),$ $t\in [t_{i}^{0}, t_{i}^{f}]$, we rewrite \eqref{eq:upper_p} as follows
\begin{gather}
p^{*}_{i}(t) = \phi_{i,3} \cdot t_i^3 +\phi_{i,2} \cdot t_i^2 + \phi_{i,1} \cdot t_i +\phi_{i,0}. \label{eq:upper_pi}%
\end{gather}
\end{remark}
\begin{lemma} \label{lem:2}
Let $p^{*}_{i}(t)^{-1}$ be the inverse function of \eqref{eq:upper_p} for each vehicle $i\in\mathcal{N}(t).$ Then the constants $\phi_{i,3}, \phi_{i,2}, \phi_{i,1}, \phi_{i,0}\in\mathbb{R}$ can be derived by $\omega_{i,3}, \omega_{i,2}, \omega_{i,1}, \omega_{i,0}\in\mathbb{R}.$
\end{lemma}
\begin{proof}
Due to space limitation the proof is omitted. However, the result is trivial.
\end{proof}
\begin{remark} \label{rem:5}
The inverse function $p_i^*(t)^{-1}=t_i(p^*(t)),$ where $t_i(p^*(t))\in[t_{i}^{0}, t_{i}^{f}]$, yields the time that vehicle $i\in\mathcal{N}(t)$ is at the position $p^{*}_{i}(t)$ inside the control zone.
\end{remark}
\begin{lemma} \label{lem3}
For each $i\in\mathcal{N}(t),$ the domain of $t_i(p^*(t))$ is the closed interval $[p_i(t_{i}^{0}),p_i(t_{i}^{f})]$.
\end{lemma}
\begin{proof}
Since, for each $i\in\mathcal{N}(t),$ $p_i^*(t)$ is an increasing function in $[t_{i}^{0}, t_{i}^{f}]$, then by the Intermediate Value Theorem, $p_i^*(t)$ takes values on the closed interval $[p_i(t_{i}^{0}),p_i(t_{i}^{f})]$.
\end{proof}
\begin{corollary} \label{cor:2}
Since $p_i^*(t)$ is a continuous and one-to-one function in $[t_{i}^{0}, t_{i}^{f}]$ for each $i\in\mathcal{N}(t),$ $t_i(p^*(t))$ is also continuous.
\end{corollary}
\begin{corollary} \label{cor:3}
For each $i\in\mathcal{N}(t),$ $p'\big(t_i(p(t)) \big)\neq 0$ for all $p\in [p_i(t_{i}^{0}),p_i(t_{i}^{f})]$. Hence, $t_i(p^*(t))$ is differentiable in $[p_i(t_{i}^{0}),p_i(t_{i}^{f})]$.
\end{corollary}
\begin{lemma} \label{lem4}
For each $i\in\mathcal{N}(t),$ $t_i(p^*(t))$ is an increasing function in $[p_i(t_{i}^{0}),p_i(t_{i}^{f})]$.
\end{lemma}
\begin{proof}
From Lemma \ref{lem3}, for each $i\in\mathcal{N}(t)$ the domain of $t_i(p^*(t))$ is $[p_i(t_{i}^{0}), p_i(t_{i}^{f})]$. Let $p_i(t_{i}^{0})<\alpha_1<\alpha_2< p_i(t_{i}^{f})$ with $t_i(p_i(t_{i}^{0})) < t_i(\alpha_1)$. If we had $t_i(p_i(t_{i}^{0})) > t_i(\alpha_2),$ then applying the Intermediate Value Theorem to the interval $[\alpha_2, p_i(t_{i}^{f})]$ would give an $\alpha_3$ with $\alpha_2<\alpha_3< p_i(t_{i}^{f})$ and $t_i(p_i(t_{i}^{0})) = t_i(\alpha_3)$, contradicting the fact that $t_i(p^*(t))$ is one-to-one on $[p_i(t_{i}^{0}),p_i(t_{i}^{f})]$.
\end{proof}
Since each vehicle $i\in\mathcal{N}(t)$ can change lanes inside the control zone, its position should be associated with the function $l_i(t)$ (Definition \ref{def:lanesfunction}) that yields the lane vehicle $i$ occupies inside the control zone at $t$.
\begin{definition} \label{def:pos_lane}
The position of each vehicle $i\in\mathcal{N}(t)$ using lane $l_i(t)=m$, $m\in\mathcal{L},$ is denoted by $p_{i,l}(t,l)$.
\end{definition}
Based on Definition \ref{def:pos_lane}, we augment the optimal position of $i\in\mathcal{N}(t)$ given by \eqref{eq:upper_p} to capture the lane that vehicle $i$ occupies, as follows
\begin{gather}
p^*_{i}(t,l) = p^*(t)\cdot {I}_{1}(l) + p^*(t)\cdot {I}_{2}(l) + \dots + p^*(t)\cdot {I}_{M}(l), \nonumber \\
t\in [t_{i}^{0}, t_{i}^{f}], \label{eq:upper_pl}%
\end{gather}
where ${I}_{m}(l)$, $m\in\mathcal{L}$, is the indicator function with ${I}_{m}(l=m)=1$, if $i$ occupies lane $m\in\mathcal{L}$ and ${I}_{m}(l\neq m)=0$ otherwise.
For each vehicle $i\in\mathcal{N}(t)$, the inverse function of \eqref {eq:upper_p} enhanced with the lane that vehicle $i$ occupies is the path trajectory (Definition \ref{def:path}) and can be written as follows
\begin{gather}
t_{p_i,l_i}\big(p_i(t),l_i(t)\big) = \omega_{i,3} \cdot p^3_{i}(t,l) +\omega_{i,2} \cdot p^2_{i}(t,l) \nonumber \\
+\omega_{i,1} \cdot p_{i}(t,l) +\omega_{i,0} . \label{eq:path_traj}
\end{gather}
The path trajectory $t_{p_i,l_i}\big(p_i(t),l_i(t)\big)$ yields the time that vehicle $i$ is at the position $p_i(t)$ inside the control zone and occupies lane $l_i(t)$, and is used as the cost function for the upper-level optimization problem.
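Since a closed-form inverse of the cubic in \eqref{eq:upper_p} is cumbersome to write out, the sketch below recovers $t_i(p)$ numerically by a root search on $[t_i^0, t_i^f]$, which is well defined because $p_i^*(t)$ is strictly increasing there (Lemma \ref{lem:1}); the polynomial coefficients are the placeholder values from the earlier low-level sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def time_at_position(p_target, coeffs, t0, tf):
    """Invert p(t) = A*t^3/6 + c*t^2/2 + d*t + e on [t0, tf] (p is increasing)."""
    A, c, d, e = coeffs
    p = lambda t: A * t**3 / 6 + c * t**2 / 2 + d * t + e
    return brentq(lambda t: p(t) - p_target, t0, tf)

# Placeholder coefficients (t0=0, tf=18, v0=13 m/s, pf=250 m) from the earlier sketch.
tf = 18.0
coeffs = (-48.0 / tf**3, 48.0 / tf**2, 13.0, 0.0)
t_entry = time_at_position(125.0, coeffs, 0.0, tf)
print("position 125 m is reached at t = %.2f s" % t_entry)
\end{verbatim}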
In the upper-level optimization problem, each vehicle $i\in\mathcal{N}(t)$ derives its optimal path trajectory which yields the minimum time $t_i^f$ that vehicle $i$ exits the control zone along with the lane $l^*\in\mathcal{L}$ that should occupy at each $p_i^*$. To formulate this problem, we need to minimize \eqref{eq:path_traj}, evaluated at $p_i(t_i^f),$ with respect to $\omega_{i,3}, \omega_{i,2}, \omega_{i,1}, \omega_{i,0}$ that determine the shape of the path trajectory of the vehicle in $[p_i^0, p_i^f]$. Note that the value of $p_{i}(t_{i}^{f})$ for each $i\in\mathcal{N}(t)$ depends on $o_i$ and, based on the Assumption \ref{ass:lane}, it can be equal to (see Fig. \ref{fig:1}): (1) $p_{i}(t_{i}^{f})=2 S_c + S_m$, if the vehicle crosses the merging zone, (2) $p_{i}(t_{i}^{f})=2 S_c + \frac{\pi R_r}{2}$, if the vehicle makes a right turn at the merging zone, and (3) $p_{i}(t_{i}^{f})=2 S_c + \frac{\pi R_l}{2}$, if the vehicle makes a left turn at the merging zone. For simplicity of notation, we denote the total distance travelled by the vehicle $i\in\mathcal{N}(t)$ in $[t_{i}^{0}, t_{i}^{f}]$ with $S_{i,total}$, thus $p_{i}(t_{i}^{f})=S_{i,total}$.
Hence, the upper-level optimization problem is formulated as follows.
\begin{problem} \label{problem2}
\begin{gather}\label{eq:decentral2}
\min_{\omega_{i,3}, \omega_{i,2}, \omega_{i,1}, \omega_{i,0}}t_{p_i,l_i}\big(S_{i,total},l_i(t)\big)\\
\text{subject to}%
: \eqref{speed_accel constraints},\eqref{speed}, \eqref{eq:rearend}, \text{and given }t_{i}^{0}\text{, }v_{i}^{0}\text{, }t_{i}^{f}\text{,
}p_{i}(t_{i}^{0})\text{, }p_{i}(t_{i}^{f}).\nonumber
\end{gather}
\end{problem}
From Lemma \ref{lem:2}, the constants $\phi_{i,3}, \phi_{i,2}, \phi_{i,1}, \phi_{i,0}\in\mathbb{R}$ corresponding to the constraints imposed through \eqref{eq:upper_p} can be derived by $\omega_{i,3}, \omega_{i,2}, \omega_{i,1}, \omega_{i,0}\in\mathbb{R}$. This is a nonlinear programming problem that each vehicle can solve using Lagrange multiplier theory.
\section{Simulation Results} \label{sec:5}
\subsection{Validation of Upper-Level Optimization} \label{sec:5a}
To evaluate the effectiveness of the solution of the proposed upper-level optimization problem, we conduct a simulation in MATLAB. The simulation setting is as follows. The intersection contains two roads, each of which has one lane per direction. The length of each direction is 300 $m$, the merging zone of the intersection is 25 $m$ by 25 $m$, and the entry of merging zone is located at 125 $m$ from the entry point for both directions. The maximum and minimum speed are 18 $m/s$ and 2 $m/s$, respectively. The maximum and minimum acceleration are 3.0 $m/s^2$ and -3.0 $m/s^2$. The safety (minimum allowed) headway is 1.0 $s$, and the standstill distance is 1.5 $m$. Six vehicles are entering into the intersection from three directions at different time steps.
Since vehicle 1 is the first vehicle in the network, it cruises through the intersection without any constraints imposed. The trajectories of all vehicles along with the safety distance are shown in Fig.~\ref{fig:distance}. Negative values of the safety distance would mean a violation of the rear-end constraint. We see from Fig.~\ref{fig:distance} that both rear-end and lateral collision constraints are satisfied. The control input (acceleration) and speed profiles for the vehicles in the network are shown in Fig.~\ref{fig:speed}. We note that for all vehicles driving through the intersection, none of the acceleration and speed constraints are activated.
\begin{figure}[!h]
\centering
\includegraphics[width=.48\textwidth]{figures/distance_v2.jpg}
\caption{Trajectories and safety distances of vehicles.}
\label{fig:distance}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=.48\textwidth]{figures/speed_v2.jpg}
\caption{Speed and control profiles of vehicles.}
\label{fig:speed}
\end{figure}
\section{Concluding Remarks}
In this paper, we formulated an upper-level optimization problem, the solution of which yields, for each CAV, the optimal sequence and lane to cross the intersection. The effectiveness of the solution was illustrated through simulation. We showed, through numerical results, that vehicles are successfully crossing an intersection without any rear-end or lateral collision. In addition, the state and control constraints did not become active for the entire trajectory for each vehicle.
While the potential benefits of full penetration of CAVs to alleviate traffic congestion and reduce energy have become apparent, different penetrations of CAVs can alter significantly the efficiency of the entire system. Therefore, future research should look into this direction.
\bibliographystyle{IEEEtran}
|
1,116,691,499,877 | arxiv | \section{Conclusion}
We applied neural network based binary segmentation for fully automated detection and localization of the ONH in OCT en face images. We showed that a spatial restriction of the input images to a specific area where the object of interest (in our case the ONH) can occur improves the resulting model performance on the ONH detection and localization task.\\
However, such a restriction always requires domain knowledge for the specific task at hand. In many cases where the object of interest is not located at exactly the same position in the image a priori, a spatial restriction to a specific area of the inputs can be achieved via spatial normalization of the images.
The question to what extent the beneficial effect of spatial normalization of the inputs is comparable to an increased model complexity and -- as a consequence thereof -- a necessarily increased number of training samples is left for future research.\\
Some experiments -- especially the scenarios with spatially unrestricted input images -- lead to oversegmentations, which have a strong influence on the Euclidean distance metric as a measure for the ONH localization accuracy. This problem can be handled in a postprocessing step by only keeping the largest connected component. The study of neural network architectures that are capable of eliminating oversegmentations from the beginning is left to future work.\\
For all experiments we used the same simple neural network architecture. When performing training and evaluation on spatially unrestricted images, the chosen network architecture does not optimally generalize from training images to validation images. When utilizing a Tversky loss, the model performance on spatially unrestricted images of the test set is inferior to that of a model of the same complexity trained on spatially restricted input images. In the case of model training with a BCE loss, the model trained on spatially unrestricted images entirely fails to segment the ONH, whereas models trained on spatially restricted input images perform comparably well.
A restriction of the inputs not only reduces the extent of false positive predictions but also improves the general segmentation accuracy, which is reflected by an increased Dice score.\\
These consistent results for two different loss functions suggest that a spatial restriction of the inputs can foster neural network training in general.
\section{Experiments}
\label{sec:experiments}
The ONH appears in OCT en face images as a brighter or darker disk compared to the surrounding structures. ONH segmentation can be formulated as a pixel-wise binary classification problem. We examine whether reducing the complexity of the segmentation task through domain knowledge, while holding the utilized model complexity constant, can improve the segmentation and detection performance. In concrete terms, we apply the same model to the full en face images and to two gradually cropped versions of the image data.
\paragraph{Data, Data Selection and Preprocessing}
For model training and evaluation we used a dataset comprising a total of 120 annotated OCT en face images with an image resolution of $2048 \times 2048$ pixels (pixel sizes $8.8 \times 8.8\mu m$), from 89 eyes of 64 diabetic retinopathy patients.
We split the data into a training set of 100 en face images (70 unique eyes from 48 patients), a validation set of 10 en face images (9 unique eyes from 7 patients),
and a test set of 10 en face images (10 unique eyes from 9 patients). For 30 patients of the training set and for 1 patient of the validation set, the dataset comprises more than one scan per eye. Mainly due to patient-related limitations of the scanning conditions, some of the scans have reduced image quality.
We performed model training on the training set. The validation set was used for hyper-parameter tuning and model selection. The test set was only used once, namely, for the final model evaluation. We split the data on the patient-level, so that scans of individual patients are exclusively contained in the training set, validation set, or test set.
The en face images represent mean gray values along the z-axis of 3D OCT scans acquired with an OCT prototype that has been developed by our group~\cite{niederleithner2020clinical}.
The system used is a swept-source OCT (SS-OCT) system utilizing a Fourier-domain mode-locking (FDML) laser with a sweep-rate (A-scan rate) of $1.68 MHz$, a central wavelength of $1060 nm$ and $75 nm$ tuning range, translating to an axial resolution of $9\mu m$ in tissue. The used field of view is $18 mm$ in diameter sampled with $2048 \times 2048$ A-scans per volume, resulting in a sampling density of around $8.8 \mu m$. With a beam diameter of $1 mm$ on the pupil, the lateral resolution on the retina is approximately $20\mu m \: (1/e^2)$, thereby fulfilling the Nyquist criterion.
Among patients, there is low variability in relative size of the ONH compared to the full en face image. Additionally, when downscaling the original en face images by a factor of 8, the ONH can still be identified. Therefore, we perform ONH segmentation and detection on downscaled en face images with image resolution of $256 \times 256$ pixels. Other than that, only the input gray values were normalized to range from 0 to 1.
\paragraph{Image size reduction for optimized machine learning performance}
Based on the inferences about the spatial distribution of the optic nerve head that we drew from data exploration and visualization on the samples of the training set, we only reduce the number of rows of the images. In our experiments we used three different sizes of 2D intensity input images and corresponding binary target images, namely images of size $160 \times 256$ pixels (\textit{moderate spatial reduction}), $96 \times 256$ pixels (\textit{significant spatial reduction}), and -- as baseline -- images with $256 \times 256$ pixels (\textit{no cropping}).
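A minimal preprocessing sketch is shown below. The choice of a vertically centered band of rows and the use of simple anti-aliased resampling are assumptions of this illustration; the exact crop offsets and resampling method are not specified above.
\begin{verbatim}
import numpy as np
from skimage.transform import resize

def preprocess_enface(enface, n_rows=160):
    """Downscale a 2048x2048 en face image to 256x256, normalize gray values
    to [0, 1], and keep a band of n_rows rows (256, 160, or 96)."""
    img = resize(enface.astype(np.float32), (256, 256),
                 anti_aliasing=True, preserve_range=True)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    start = (256 - n_rows) // 2          # assumed: band centered vertically
    return img[start:start + n_rows, :]

x = preprocess_enface(np.random.rand(2048, 2048), n_rows=96)
print(x.shape)   # (96, 256)
\end{verbatim}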
\subsection{Evaluation}
\label{sec:exp_eval}
\paragraph{Semantic segmentation model}
We evaluate the semantic segmentation performance of a standard \textit{U-Net} model with convolutional layers as main units, where the encoder and decoder comprise six and five blocks of two convolutional layers with $(16-32-64-128-256-512)$ and $(256-128-64-32-32)$ filters of size $(3 \times 3)$ pixels, respectively. The first five encoder blocks are followed by a max pooling layer. Whereas, dropout is applied on the output of the sixth encoder block with a dropout rate of $0.2$.
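A Keras sketch of the described network is given below. The encoder filter counts (16--512), the $3\times3$ kernels, max pooling after the first five encoder blocks, dropout with rate 0.2 after the sixth block, and the decoder filter counts (256--32) follow the description above; the ReLU activations, `same' padding, bilinear upsampling, and the skip-connection wiring are assumptions of this sketch.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions per encoder/decoder block.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(96, 256, 1)):
    inputs = layers.Input(shape=input_shape)
    skips, x = [], inputs
    # Encoder: six blocks; max pooling after the first five.
    for i, f in enumerate([16, 32, 64, 128, 256, 512]):
        x = conv_block(x, f)
        if i < 5:
            skips.append(x)
            x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.2)(x)           # dropout after the sixth encoder block
    # Decoder: five blocks with skip connections (upsampling choice is assumed).
    for f, skip in zip([256, 128, 64, 32, 32], reversed(skips)):
        x = layers.UpSampling2D(2, interpolation="bilinear")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, f)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # pixel-wise ONH probability
    return Model(inputs, outputs)

model = build_unet(input_shape=(96, 256, 1))
\end{verbatim}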
While keeping the model complexity constant, we perform model training with two different loss functions, namely \textbf{(1)} the \textit{binary cross entropy loss} and \textbf{(2)} the \textit{Tversky loss}, to examine whether we observe a general influence of spatial image complexity on the segmentation performance for both loss functions.
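A TensorFlow sketch of the Tversky loss is given below; the weights $\alpha$ and $\beta$ for false positives and false negatives and the smoothing constant are not reported above and are placeholders here (for $\alpha=\beta=0.5$ the loss reduces to a soft Dice loss).
\begin{verbatim}
import tensorflow as tf

def tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, smooth=1e-6):
    """Tversky loss 1 - TP / (TP + alpha*FP + beta*FN) on soft predictions."""
    y_true = tf.cast(tf.reshape(y_true, [-1]), tf.float32)
    y_pred = tf.cast(tf.reshape(y_pred, [-1]), tf.float32)
    tp = tf.reduce_sum(y_true * y_pred)
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    tversky_index = (tp + smooth) / (tp + alpha * fp + beta * fn + smooth)
    return 1.0 - tversky_index
\end{verbatim}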
\paragraph{Evaluation metrics}
We use the Dice similarity coefficient (\textit{Dice}) to evaluate the pixel-level segmentation performance as a measure for the \textbf{ONH detection accuracy} of a model.
In skewed class distributions of the data, receiver operating characteristic (ROC) curves can present an overly optimistic visualization of the model's \textit{binary classification} performance~\cite{davis2006relationship}. For the evaluation of the classification performance on data with skewed class distribution, precision-recall curves are an appropriate alternative to ROC curves~\cite{goadrich2004learning,bunescu2005comparative,craven2005markov}.
Binary pixel values of a segmentation mask that localize the optic disc in the corresponding gray value en face image represent highly imbalanced classes, i.e., the area covered by the optic disc is small relative to the total area of the en face image.
Therefore, we use \textit{precision-recall curves} to visualize the model performance on the ONH segmentation task. The \textit{average precision (aPr)} of the model
is a quantitative summary statistic for the precision-recall curve.
The Dice similarity coefficient and further quantitative performance statistics (sensitivity, specificity, and precision) are calculated at the optimal cut-off point of the precision-recall curve. We report the area under the ROC curve (AUC) values for the sake of completeness only.\\
Furthermore, we evaluate the ONH \textbf{localization performance} based on the Euclidean distance between the centroids of the pixel-level ground truth annotation mask and the corresponding predicted ONH segmentation result.
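The two headline metrics can be computed from binarized masks as sketched below; thresholding of the soft network output at the chosen precision-recall cut-off is assumed to have been done beforehand. Note that an empty predicted mask leaves the centroid distance undefined, which corresponds to the \textit{nan} entry in Table~\ref{tab:result/quant_results}.
\begin{verbatim}
import numpy as np
from scipy import ndimage

def dice_score(gt, pred):
    """Dice similarity coefficient between two binary masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    return 2.0 * np.logical_and(gt, pred).sum() / denom if denom > 0 else 1.0

def centroid_distance(gt, pred):
    """Euclidean distance (pixels) between mask centroids; nan if a mask is empty."""
    if gt.sum() == 0 or pred.sum() == 0:
        return float("nan")
    cg = np.array(ndimage.center_of_mass(gt))
    cp = np.array(ndimage.center_of_mass(pred))
    return float(np.linalg.norm(cg - cp))
\end{verbatim}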
\paragraph{Implementation details}
All models were trained for 300 epochs, and model parameters were stored at the best performing epoch on the validation set. After model selection and hyperparameter tuning, the final performance was evaluated on the test set using the learned model parameters. We utilized the stochastic optimizer Adam~\cite{kingma2014adam} during training. All experiments were performed using Python 3.8 with the TensorFlow~\cite{tensorflow2015-whitepaper} library version 2.2 and the high-level API Keras~\cite{chollet2015keras} 2.7, CUDA 11.4, and a NVIDIA Titan Xp graphics processing unit.
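For completeness, a minimal training call consistent with the setup above might look as follows; the learning rate and batch size are not reported above and are placeholders, and the model, the loss, and the preprocessed arrays are assumed to come from the previous sketches.
\begin{verbatim}
import tensorflow as tf

# model and tversky_loss come from the sketches above; x_train, y_train,
# x_val, y_val are the preprocessed en face images and binary ONH masks.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # placeholder lr
              loss=tversky_loss)
ckpt = tf.keras.callbacks.ModelCheckpoint("best_onh_unet.h5", monitor="val_loss",
                                          save_best_only=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=300, batch_size=8, callbacks=[ckpt])   # placeholder batch size
\end{verbatim}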
\subsection{Results}
Results demonstrate that the spatial restriction of the input images improves resulting model performance for both tasks, ONH segmentation and ONH localization. In our experiments, this beneficial effect is independent of the specific loss function used for training.
Detailed quantitative results are listed in Table~\ref{tab:result/quant_results}, which shows that the model trained with the Tversky loss on $96 \times 256$ input images yields the best mean Dice score of $0.9154$ on the test set, which is a measure for the overall segmentation and detection accuracy of the approach. Furthermore, it performs best with regard to the Euclidean distance (eDist of $1.2730$ pixels) between the centroids of the pixel-level ground truth annotation mask and the corresponding predicted ONH segmentation result, which is a measure of the ONH localization accuracy.
The observation that a restriction of the input images to the relevant image part improves the resulting segmentation and localization performance also holds true for the models trained with the BCE loss, with an increased Dice score of $0.8615$ and a decreased Euclidean distance of $2.6995$ compared to a model trained on $160 \times 256$ input images, which yields a Dice score of $0.8305$ and a Euclidean distance of $3.8575$. On the full-sized input images, a model trained with the BCE loss even completely fails to detect the ONH.
The benefit of model training on spatially restricted input images is also evident from the precision-recall curves shown in Figure~\ref{fig:result/PRc}. Qualitative segmentation results are shown in \Cref{fig:result/seg_result_imgs} for the three input sizes, separately for models trained with the BCE loss (A.1-3) and with the Tversky loss (B.1-3).\\
On the GPU, the mean inference times for all input image resolutions were on the order of $0.007$ sec.
\begin{table}[H]
\centering
\caption{
Evaluation results on the test set for models trained utilizing a binary cross entropy loss (\textit{bce}) or utilizing a Tversky loss (\textit{tversky}) - trained and evaluated on $256\times256$ pixel inputs, $160\times256$ pixel inputs, or $96\times256$ pixel inputs with 256, 160, or 96 input rows (\textit{ir}), respectively.
The quantitative performance statistics (sensitivity, specificity, and precision) and Dice similarity coefficient (\textit{Dice}) measuring the pixel-level segmentation performance calculated at the optimal cut-off point of the precision-recall curve, the corresponding average precision (\textit{aPr}), and the mean Euclidean distance (\textit{eDist}) between the centroids of the pixel-level ground truth annotation mask and the corresponding predicted ONH segmentation result measuring the ONH localization accuracy. Due to the strong skewness of the pixel-level class label distribution, we report the area under the receiver operating characteristic (\textit{ROC}) curve (\textit{AUC}) only for completeness.
}
\vspace{-2mm}
\begin{tabular}
{ @{}L{.07\textwidth} L{.09\textwidth} | C{0.13\textwidth} C{0.12\textwidth} C{0.11\textwidth} C{0.09\textwidth} C{0.09\textwidth} | C{0.09\textwidth} | C{0.09\textwidth} }
ir & loss & sensitivity & specificity & precision & AUC & aPr & Dice & eDist \\ \hline
256 & bce & 0.0000 & 1.0000 & 0.0000 & 0.5000 & 0.0073 & 0.0000 & nan \\
160 & bce & 0.8280 & 0.9980 & 0.8331 & 0.9986 & 0.9221 & 0.8305 & 3.8575 \\
96 & bce & 0.8382 & 0.9979 & 0.8861 & 0.9987 & 0.9447 & \textit{0.8615} & \textit{2.6995} \\ \hline
256 & tversky & 0.8921 & 0.9985 & 0.8155 & 0.9988 & 0.8291 & 0.8521 & 8.7934 \\
160 & tversky & 0.8516 & 0.9993 & 0.9392 & 0.9889 & 0.8950 & 0.8933 & 1.5614 \\
96 & tversky & 0.9303 & 0.9980 & 0.9010 & 0.9704 & 0.8705 & \textbf{0.9154} & \textbf{1.2730} \\
\end{tabular}
\normalsize
\label{tab:result/quant_results}
\vspace{-6mm}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{PRc}
\vspace{-5mm}
\caption{
ONH segmentation performance evaluation. Precision-recall curves for models based on a U-Net with $256\times256$ pixel inputs, $160\times256$ pixel inputs, or $96\times256$ pixel inputs trained with different loss functions: a) binary cross entropy (bce) loss and b) Tversky loss. Corresponding average precision (\textit{aPr}) values are given in parentheses.
}
\label{fig:result/PRc}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=1\textwidth]{segResults_test_BCE_TVERSKY_all.png}
\caption{ONH segmentation results on test set samples. Pixel-level segmentation results based on a U-Net with $256\times256$ pixel inputs (A.1 and B.1), $160\times256$ pixel inputs (A.2 and B.2), or $96\times256$ pixel inputs (A.3 and B.3). (A.1-3) Models trained utilizing a binary cross entropy (bce) loss. (B.1-3) Models trained utilizing a Tversky loss.
En face images with resulting ONH segmentation boundaries (first rows) and en face images overlayed with corresponding pixel-level segmentation results (second rows). True positives (yellow), false positives (red), false negatives (green), and true negatives (gray values of en face image). The input images are cropped to the central image region delineated by white horizontal lines overlayed on the en face images in the bottom rows.}
\label{fig:result/seg_result_imgs}
\end{figure}
\section{Introduction}
In this paper, we propose to leverage a deep neural network for automated segmentation of the ONH in 2D en face projections of 3D widefield (60 degrees) swept source OCT (SS-OCT) scans. We utilize a simple U-Net architecture for accurate ONH detection and localization, described in Section~\ref{sec:methods:sem_seg}.\\
The acquisition of retinal OCT scans is standardized. Therefore, the rough location of the ONH and of other anatomical structures (like the fovea) is known. The overall image content is only flipped on the vertical axis depending on whether the en face image has been reconstructed from a 3D OCT scan of the left or of the right eye.
Therefore, the image regions where the ONH can be located in the en face images are known a priori to a large extent.
A spatially unrestricted image has a larger field of view and therefore comprises a larger amount of image content. Images with a large field of view, in turn, need a large receptive field of the machine learning model. Both challenges require an increased machine learning model capacity and call for optimized training strategies that are typically needed to empower the machine learning model to learn a large variability of anatomical and pathological medical images.
In this work, we aim to improve classifier training without the need to simultaneously increase the computational burden in terms of model complexity, run time, memory footprint, or the need for extensive data augmentation tweaks. We propose to simply spatially restrict the data fed to the classifier during training and during inference.
In the spirit of data-centric artificial intelligence (AI), we propose to apply domain-specific data modifications instead of extensively tuning machine learning architectures and related learning strategies.
Data-centric AI is the recently evolved discipline of focusing the engineering work and research specifically on the data that is used for training a machine learning model, at least as much as on tuning the machine learning model itself. The data-centric AI discipline shifts the data engineering task from the past one-time preprocessing approach to a systematic engineering of the data, realized by an iterative \textit{data tuning} process that is an integral component of the training-validation cycle.
\paragraph{Clinical background and motivation}
The optic nerve head (ONH) is an important anatomical structure in several aspects. It constitutes a distinct anatomical structure located in the posterior segment of the eye and comprises blood vessels, ganglion cell axons, connective tissue and glia~\cite{reis2012optic}. In color fundus images as well as in OCT en face images, the area of the optic nerve head is typically depicted with distinct different brightness or color values compared to the color or gray values of the surrounding anatomical structures.
Often the detection and localization of the ONH in 2D en face view serves as a preprocessing step for subsequent image analysis steps even if the subsequent analysis is performed in the original 3D OCT volume space.
Automated localization of the ONH enables the implementation of a fully automated image processing pipeline where an ONH mask is required as an integral component.
Here, we apply supervised machine learning and deep neural networks for accurate fully automated ONH localization.
\paragraph{Related Work}
In contrast to applying convolutional neural networks (CNN) on image patches to perform segmentation of a full image through densely extracting and classifying those image parts, deep learning based semantic segmentation approaches, such as \textit{fully convolutional networks (FCN)} proposed by Long et al.~\cite{long2015fully} or \textit{DeepLab} proposed by Chen et al.~\cite{chen2015semantic}, yield higher segmentation accuracy with a reduced computational cost.
Ronneberger et al.~\cite{ronneberger2015u} introduced the U-Net model architecture and won the \textit{International Symposium on Biomedical Imaging (ISBI)} cell tracking challenge (2015) by training this network on transmitted light microscopy images. Today the U-Net is one of the state-of-the-art deep learning architectures for a wide range of image segmentation tasks.
The U-Net architecture enables fast end-to-end training and inference from input images to dense label maps of the same resolution. Furthermore, it can be trained from only a few training samples~\cite{ronneberger2015u}.
The most closely related work to our approach was presented by Fard and Bagherinia~\cite{fard2019automatic}, who developed an automatic optic nerve head detection algorithm in widefield OCT using a U-Net and a subsequent template matching strategy in order to find the ONH center, based on a $4\,$mm diameter disc-shaped binary mask applied to the U-Net output. As 3-channel input to the U-Net, an OCT en face image, a vessel-enhanced OCT en face image, and an OCT contrast map were used. The approach is evaluated based on the \textit{ONH center} localization accuracy.
Our work differs in the following aspects: although the Euclidean distance between the ground-truth and the predicted ONH center is also used in our work as a performance metric, \textbf{(i)} we do not principally aim to predict the ONH center but primarily tackle an ONH segmentation task, \textbf{(ii)} instead of relying on a 3-channel input, only a plain 2D OCT en face image is used as the single network input, and \textbf{(iii)} no template matching strategy has to be applied to the output of the neural network prediction as a postprocessing step. Both the reduced computational cost for the computation of the inputs and the end-to-end learning with a direct mapping from an input image to the corresponding prediction of the pixel-level ONH region during inference render our approach overall simpler and faster to train and to apply to new data.
\paragraph{Contribution}
We propose to adopt a data-centric AI strategy, namely to simply restrict the spatial complexity (i.e., the size of the input images) to the relevant image regions instead of excessively tuning the utilized model architecture, model complexity, data augmentation strategies, or the related training strategies.
Since the patient positioning and the image acquisition of the underlying 3D OCT scan are standardized and the variability of the ONH location relative to other anatomical structures in the retina is limited, we can perform this spatial complexity reduction by simply cropping the input images.
By cropping the input images to the region of the most probable location of the ONH, the variability of the image content is reduced as well.\\
We evaluate the effect of different input image sizes on the resulting segmentation accuracy and, by using different training objectives, study whether the achievable model performance generally improves irrespective of the specific training strategy.
Experiments (Section~\ref{sec:experiments}) on annotated 2D en face projections of 3D OCT scans show that the proposed approach segments and localizes the ONH in 2D en face images with high accuracy.
To the best of our knowledge, this is the first published work on fully automated detection and localization of the ONH in 2D en face projections of 3D widefield SS-OCT scans that tunes the model performance primarily via data engineering and that additionally only utilizes a single neural network without the need for refinement in a postprocessing step.
\section{Data-centric segmentation of the optic nerve head}
In this work we focus on the benefits and effects of data-centric engineering. We utilize a machine learning segmentation model to perform and evaluate the proposed approach.
We apply semantic segmentation~\cite{mostajabi2015feedforward,noh2015learning,long2015fully,chen2015semantic,zheng2015conditional} to perform pixel-level classification in a single pass in order to detect and localize the object of interest in an image. As opposed to performing image segmentation based on an image-level classification on small image patches, semantic segmentation is very run-time efficient.
In the following, we describe the data, the utilized semantic segmentation approach, and the corresponding training objectives in more detail.
\subsection{Data representation}
\label{sec:methods:representations}
The data comprises $N$ tuples of medical images and pixel-level ground truth annotations
$\langle \mathbf{X}_n, \mathbf{Y}_n \rangle$, with $n=1,2,\dots ,N$, where $\mathbf{X}_n \in \mathbb{R}^{a \times b}$ is an intensity image of size $a \times b$ and
$\mathbf{Y}_n \in \{0,1\}^{a \times b}$ is a binary image of the same size specifying the pixel-level presence of the object of interest.\\
The data is divided into disjoint sets, used for training, validation, or testing of the model.
\subsection{Data-centric optimization}
\label{sec:methods:dataCentric}
For data-centric performance optimization of the utilized machine learning model, we reduce the spatial size of both the intensity input image $\mathbf{X}_n \in \mathbb{R}^{a \times b}$ and the binary target image $\mathbf{Y}_n \in \{0,1\}^{a \times b}$.\\
We crop the 2D intensity input images $\mathbf{X}_n$ and the corresponding binary target images $\mathbf{Y}_n$ to the size $\dot{a} \times \dot{b}$, with $\dot{a} \leq a$ and $\dot{b} \leq b$ so that for training of the machine learning model we use tuples of intensity input images $\mathbf{X}_n \in \mathbb{R}^{\dot{a} \times \dot{b}}$ and corresponding binary target images $\mathbf{Y}_n \in \{0,1\}^{\dot{a} \times \dot{b}}$.
During inference, inputs of the same (reduced) image size $\dot{a} \times \dot{b}$ have to be fed to the machine learning model.
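As an illustration, a minimal sketch of this spatial restriction is given below, assuming NumPy arrays and a centered crop window; the helper name \texttt{center\_crop} is hypothetical and not part of the described pipeline, in which the crop window would instead be placed on the a priori known ONH region.
\begin{verbatim}
import numpy as np

def center_crop(image, target, crop_h, crop_w):
    """Crop an intensity image X (a x b) and its binary target Y
    to a window of size crop_h x crop_w (a_dot x b_dot)."""
    a, b = image.shape
    assert crop_h <= a and crop_w <= b
    top, left = (a - crop_h) // 2, (b - crop_w) // 2
    return (image[top:top + crop_h, left:left + crop_w],
            target[top:top + crop_h, left:left + crop_w])
\end{verbatim}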
\subsection{Semantic segmentation methodology}
\label{sec:methods:sem_seg}
\paragraph{U-Net based semantic segmentation}
We leverage a \textbf{\textit{U-Net}} architecture to learn the mapping $M: \mathbf{X} \mapsto \mathbf{Y}$ from intensity images $\mathbf{X}$ to corresponding binary images $\mathbf{Y}$ of dense pixel-level class labels by training a deep neural network $M$. During testing, the model $M$ yields images $\mathbf{\hat{Y}}$ of dense pixel-level predictions for unseen testing images $\mathbf{X}_u$.
The U-Net architecture comprises a contracting path (\textit{encoder}) and an expanding path (\textit{decoder}), which are jointly trained. The encoder transforms the input image into a low-dimensional abstract representation of the image content. This low-dimensional embedding is fed into the \textit{decoder}, which maps this representation to corresponding dense class label predictions at the full input resolution. Convolutional layers are the main processing units of the encoder and the decoder.
The utilization of encoder features in the decoder, referred to as \textit{skip connections}, has been one of the main contributions of the U-Net architecture proposed by Ronneberger et al.~\cite{ronneberger2015u}.
Based on tuples of intensity images and corresponding dense target labels, the semantic segmentation model parameters of the encoder and of the decoder are updated in every update iteration (\textit{end-to-end training}).
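For illustration only, a drastically reduced two-level variant of such an encoder-decoder with skip connections is sketched below, assuming PyTorch; the framework and the exact configuration used in this work are not implied by this sketch, and input sizes are assumed divisible by four.
\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convolutions with ReLU; padding keeps the spatial size
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)      # one logit per pixel

    def forward(self, x):
        e1 = self.enc1(x)                      # encoder level 1
        e2 = self.enc2(self.pool(e1))          # encoder level 2
        b = self.bottleneck(self.pool(e2))     # low-dimensional embedding
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)   # logits; apply a sigmoid for probabilities
\end{verbatim}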
\subsection{Training objectives}
\label{sec:methods:ojectives}
We train networks separately with two objective functions, the \textit{binary cross entropy loss} and the \textit{Tversky loss}. At each pixel location, both loss functions penalize the deviation of the network prediction from the ground truth binary class labels.
\subsubsection{Binary cross entropy loss}
The (pixel-level) binary cross entropy loss is computed between class probabilities $\hat{Y}_i$ and target labels $Y_i$:
\begin{equation}\label{eqn:binary_ce_loss}
L_i = -Y_i \log(\hat{Y}_i) - (1-Y_i) \log(1-\hat{Y}_i),
\end{equation}
where $\hat{Y}_i = M(X_i)$ is the sigmoidal output of the neural network $M$ for the $i$-th pixel of image $X$, obtained from the corresponding network logit $z_i$ via
\begin{equation}\label{eqn:sigmoid}
\sigma(z_i) = \frac{1}{1 + e^{-z_i}}.
\end{equation}
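A minimal sketch of this pixel-wise loss, assuming NumPy arrays of logits $z_i$ and binary targets $Y_i$; the clipping constant only avoids $\log(0)$ and is not part of Eq.~(\ref{eqn:binary_ce_loss}).
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(logits, targets, eps=1e-7):
    # pixel-wise binary cross entropy, averaged over the image
    y_hat = np.clip(sigmoid(logits), eps, 1.0 - eps)   # avoid log(0)
    return np.mean(-targets * np.log(y_hat)
                   - (1.0 - targets) * np.log(1.0 - y_hat))
\end{verbatim}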
\subsubsection{Tversky loss}
For binary segmentation problems, the \textit{Dice similarity coefficient (DSC)}~\cite{sorensen1948method,dice1945measures} quantifies the ``similarity'' of the predicted and the true segmentation and is defined by:
\begin{equation}\label{eqn:dice_score}
DSC = \frac{2 \cdot t^+}{2 \cdot t^+ + f^+ + f^-},
\end{equation}
where $t^+$ is the number of \textit{true positives}, $f^+$ is the number of \textit{false positives}, and $f^-$ is the number of \textit{false negatives}. Values of DSC range from 0.0 (worst classification) to 1.0 (perfect classification).
The Tversky index~\cite{tversky1977features} $T$ can be interpreted as a generalization of the Dice similarity coefficient, obtained by introducing a parameter $\beta$ that weights false positives against false negatives (for $\beta=0.5$ the index reduces to the DSC):
\begin{equation}\label{eqn:tversky_index}
T = \frac{t^+}{t^+ + \beta \cdot f^+ + (1-\beta) \cdot f^-}.
\end{equation}
For binary segmentation problems, the Tversky index can not only be utilized to evaluate the performance of a trained model on the test set, but also as the basis for an objective function during training. We can define a \textit{Tversky loss} $T_L$ objective function that a model is trained to minimize:
\begin{equation}\label{eqn:ce_loss}
T_L = 1.0 - \frac{t^+ + \epsilon}{t^+ + \beta \cdot f^+ + (1-\beta) \cdot f^- + \epsilon}
\end{equation}
between the real-valued network predictions $\hat{Y}_i$ and the binary target labels $Y_i$, where the counts $t^+$, $f^+$, and $f^-$ are accumulated over all pixels $i$ of the image and $\epsilon$ is a small real-valued constant to prevent numerical instabilities.
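A minimal sketch of this loss, assuming NumPy arrays and an illustrative value of $\beta$; the value used in the experiments is not implied here.
\begin{verbatim}
import numpy as np

def tversky_loss(y_hat, y, beta=0.7, eps=1e-7):
    # soft counts over all pixels: real-valued predictions y_hat in [0, 1],
    # binary targets y
    tp = np.sum(y_hat * y)                 # t+
    fp = np.sum(y_hat * (1.0 - y))         # f+
    fn = np.sum((1.0 - y_hat) * y)         # f-
    return 1.0 - (tp + eps) / (tp + beta * fp + (1.0 - beta) * fn + eps)
\end{verbatim}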
\section*{Acknowledgements}
This work has received funding from the European Commission H2020 program initiated by the Photonics Public Private Partnership (MOON number 732969).
\bibliographystyle{splncs}
\part{Biophysics: quantitative methods for biology}
\chapter*{Motivations}\label{introbio}
\addcontentsline{toc}{chapter}{Motivations}
The application of quantitative methods to biology is presently the object of a large
number of theoretical studies.
Theoretical study and modeling of biological phenomena are not a substitute for biological in vivo
investigation. Instead, they are a very ``economical'' way to formulate quantitative relations between
relevant quantities and to make predictions with them.
Nowadays, modeling biological phenomena corresponds to the approach that has been adopted numerous
times in other domains of science, especially physics.
Weren't Kepler's three laws a model? Of course they were.
Kepler was not aware of more fundamental and general laws to use (namely Newtonian mechanics and
gravity), he just formulated a quantitative description of his observations.
The Bohr-Sommerfeld quantization of adiabatic invariants was a model for the old quantum theory.
In these examples the model preceded the theory and somehow helped formulating it. Of course, the
model can follow the theory and somehow simplify it, as the Ising model for magnetism
simplifies a full quantum mechanical approach to the problem.
The power of computers makes it possible to develop theoretical tools and models to elaborate and
speculate on the vast amount of data accumulated on the genome and on the proteome.
Strongly motivated by this, in collaboration with Dr. F. Musso of the University of Burgos,
in 2006 I introduced a model of evolution based on a population of
``Turing machines''. Each machine is actually defined by a finite number of ``states'' that
form its own code or genome. This code undergoes stochastic evolution with certain rates
that implement different aspects of genome mutation. Then, a performance based selection process
creates a new generation of machines with increased performance. The process is repeated for a large
number of generations.
The goal of this model is to explore features of Darwinian evolution with full control
of the parameters that participate in it. Indeed, in silico evolution and mathematical modeling
are ideal environments to test evolutionary hypotheses that are otherwise difficult to test in the real
biological environment.
In a similar spirit, in 2007 I co-founded the Gemini team in Annecy, with C.~Lesieur, biologist,
and other collaborators from the mathematical-physical background of the ``Federation de
recherche Modelisation, Simulation, Interactions Fondamentales''.
The team agreed to work on questions of assembly in oligomeric proteins\footnote{Oligomeric proteins
are those whose native state, or functional state, is the aggregate of two or more polypeptidic chains.}.
The team biologist had already approached the subject
with standard experimental tools but needed theoretical methods
to describe the problem in general terms through the analysis of
a larger number of cases.
A mathematician, L. Vuillon, is part of the Gemini team. This is a signal that the project on the
oligomeric proteins has a sufficiently high degree of ``synthesis'', in the Greek sense,
to be effective in biology and to require a theoretical development as a complex system.
The approach that we carry on is inductive, based on the systematic analysis of structure
data of oligomeric proteins. I will present it later.
The research is revealing features related to the process of the interface formation
and to the process of the assembly of different chains.
The inductive reasoning, which goes from specific observations to broader generalizations and theories,
is not always applied in mathematical and theoretical physics.
Indeed, it is typical of the periods when a paradigm, or theory, is missing but
a scenario is emerging from observations, which forces one to move toward a more accurate and complete
understanding.
Deductive reasoning is common when the paradigm is established and can be applied to predict
a variety of phenomena.
Modelization is intermediate because the construction of the model often
comes from empirical knowledge but the model allows one to deduce further effects.
I hope the next two chapters on biophysics will suggest why a theoretical physicist, like me,
is engaging in research on evolution and proteome, and how he can help in the inductive reasoning
towards a paradigm.
\chapter{Modelling Darwinian evolution\label{c:turing}}
Darwinian evolution is today the paradigm that unifies paleontological records with modern biology. It creates a bridge between the microscopic view (genome, proteome) and
the macroscopic features of the living organisms (phenotype). The phenotypic, or macroscopic,
mechanism of Darwinian evolution is natural selection, namely a differential scrutiny of the
phenotypes by the environment. The microscopic mechanism is genome mutation. In neodarwinism, it
is very important to appreciate the fact that even if the genome is a physical memory that is
transmitted from the parent(s) to the offspring, the phenotype of a single organism
is not inherited: if a person loses a leg, their children will still
have both of their legs.
Notice that the physical memory or genome is also under the control of selection, so it is
correct to say that the genome is part of the phenotype.
The opposite is false: the phenotype is not contained in the genome,
otherwise it would be automatic to inherit acquired characters, as in Lamarckism.
The basic functioning of the genome\footnote{Here a possible distinction between genome and DNA is not needed.} is to record long sequences of four letters that later on
can be mapped into amino acid sequences by a known mapping that biologists call
``the genetic code''. This means that the biological
function of proteins, intrinsically three-dimensional and based on physico-chemical properties
of atomic aggregates, can be described by a discrete and finite amount of information.
It is unavoidable to imagine that amount of information as an algorithm that, executed
with given rules, realizes some tasks. This analogy between genome and algorithm pervades
the whole domain of artificial life \cite{fogel2006}, namely the attempt to realize
in silico organisms that exploit the main features of living organisms\footnote{More or less, one can summarize them as: existence of a
separation interior/exterior, existence of a metabolism, response to external stimuli,
self-identical reproduction.}.
The need for modeling Darwinian evolution, or else the need for creating an artificial life,
comes from several directions, all related to the difficulty
of making quantitative experimental studies:
we cannot rewind the Earth history to study past organisms ``alive'',
the fossil record is incomplete, in vitro evolution experiments are long and expensive and
only few of them have successfully been done.
Moreover, the biggest difficulty is to keep apart the different causes that produce the observed
evolutionary dynamics.
In this scenario, in silico evolution can help precisely where observation biology fails:
in silico experiments can be done on today's calculus grids, the control on the parameters is
complete and the full record of the evolution is available.
I cannot avoid a comparison with cosmology: we know a single universe, we cannot rewind its
history and observational data on far objects (in remote time and in space) are few. The difference
is that artificial cosmological experiments are just impossible while artificial evolution
experiments are possible and will slowly be realized. In this sense, biological evolution
is much more affordable than cosmology.
From Maynard Smith, one reads: ``...we badly need a comparative biology. So far,
we have been able to study only one evolving system and we cannot wait for interstellar flight
to provide us with a second. If we want to discover generalizations about evolving systems, we
will have to look at artificial ones.''
Modeling evolution definitely does not aim to replace observational or experimental biology
but aims to help find ``universal'' features in the evolutionary dynamics and in
the mechanisms of mutation, selection. After all, universality is the way critical phenomena
are studied in statistical mechanics: specific details do not participate in determining
universal features.
This point has been particularly stressed in my recent article \cite{FM3}.
The existence of a small genome within a much larger phenotype and the basic functioning by the
genetic code are strategic for modelization purposes because they allow one to investigate at
least some of the features of evolution without paying too much attention to the whole organism
and its proteome, focusing just on algorithmic features.
\section{The model}
The idea is to have a population of algorithms that evolve with generations. At each generation,
each algorithm undergoes mutation (possibly in several flavours). The
algorithm is then executed, which corresponds to the life of the organism. The output represents
the phenotype and the interaction with the environment, therefore selection acts on it.
The selection, precisely as in the biological case, evaluates differentially two or more phenotypes
retaining the best fitted for reproduction. This creates an artificial
life cycle, as in Figure~\ref{insilico}, that can be repeated a large number of times,
to study the evolution of the population features.
\begin{figure}
\begin{pgfpicture}{-3cm}{2.3cm}{11cm}{9cm}
\pgfputat{\pgfxy(1,8.5)}{\pgfbox[center,center]{initial population}}
\pgfputat{\pgfxy(1.5,7)}{\pgfbox[center,center]{mutation, recombination}}
\pgfputat{\pgfxy(1.5,6.5)}{\pgfbox[center,center]{(genotype)}}
\pgfputat{\pgfxy(9,7)}{\pgfbox[center,center]{performance based selection}}
\pgfputat{\pgfxy(9,6.5)}{\pgfbox[center,center]{(phenotype)}}
\pgfputat{\pgfxy(9,4.5)}{\pgfbox[center,center]{offspring}}
\pgfputat{\pgfxy(1,4.63)}{\pgfbox[center,center]{\small termination}}
\pgfputat{\pgfxy(1,4.33)}{\pgfbox[center,center]{\small conditions}}
\pgfputat{\pgfxy(1,2.8)}{\pgfbox[center,center]{end}}
{\color{blue}
\pgfrect[stroke]{\pgfxy(-0.5,8.2)}{\pgfxy(3,0.7)}
\pgfrect[stroke]{\pgfxy(-0.5,6.2)}{\pgfxy(4,1.2)}
\pgfrect[stroke]{\pgfxy(6.8,6.2)}{\pgfxy(4.4,1.2)}
\pgfrect[stroke]{\pgfxy(7.1,4.15)}{\pgfxy(3.8,0.7)}
\pgfrect[stroke]{\pgfxy(0.5,2.5)}{\pgfxy(1,0.5)}
\pgfline{\pgfxy(0,4.5)}{\pgfxy(1,5.5)}\pgfline{\pgfxy(1,5.5)}{\pgfxy(2,4.5)}
\pgfline{\pgfxy(2,4.5)}{\pgfxy(1,3.5)}\pgfline{\pgfxy(1,3.5)}{\pgfxy(0,4.5)}
\pgfline{\pgfxy(1,8.2)}{\pgfxy(1,7.4)}
\pgfline{\pgfxy(3.5,7)}{\pgfxy(6.8,7)}
\pgfline{\pgfxy(9,6.2)}{\pgfxy(9,4.9)}
\pgfline{\pgfxy(1,6.2)}{\pgfxy(1,5.5)}
\pgfline{\pgfxy(2,4.5)}{\pgfxy(7.1,4.5)}
\pgfline{\pgfxy(1,3.5)}{\pgfxy(1,3.)}
\pgfsetendarrow{\pgfarrowtriangle{3pt}}
\pgfline{\pgfxy(1,8)}{\pgfxy(1,7.7)}
\pgfline{\pgfxy(4.7,7)}{\pgfxy(5.1,7)}
\pgfline{\pgfxy(9,6.2)}{\pgfxy(9,5.5)}
\pgfline{\pgfxy(4.8,4.5)}{\pgfxy(4.2,4.5)}
\pgfline{\pgfxy(1,5.5)}{\pgfxy(1,5.85)}
\pgfline{\pgfxy(1,3.5)}{\pgfxy(1,3.25)}
}
\end{pgfpicture}
\caption{Basic flow of evolutionary computations or artificial life cycle. Notice the distinction
of genotype and phenotype.\label{insilico}}
\end{figure}
In my Turing machines model, the algorithms are precisely the Turing machines. This choice was mainly
motivated by the generality attained by the Turing machine language, in spite of a very simple set
of basic instructions. This aspect is extremely important as it allows some theoretical
investigation of the model. Many other
formalizations of algorithms have actually been adopted in evolutionary computation \cite{fogel2006}.
Turing machines are abstract symbol-manipulating devices that implement a ``one-point''
discrete evolution law. Given a finite alphabet or list of symbols $\mathcal{A}$ and given
the total number of internal ``states'' $N_{\text{t}}$, one defines a Turing
machine by giving the evolution law $Q$
\begin{equation}
(r,\mathbf{s})\mathop{\mapsto}^Q (r',d',\mathbf{s}')\,,\qquad r,r'\in \mathcal{A}\,,\qquad
d'\in \{R,L\}\,,\qquad \mathbf{s},\mathbf{s}'\in \{1,2, \ldots N_{\text{t}}\}
\end{equation}
It is the set of actions that the machine performs,
determined by the value read on the tape $r=r(t)$, at a position $x_0(t)$, and by the internal state
of the machine $\mathbf{s}=\mathbf{s}(t)$. These variables depend on the execution time, so
$r'=r(t+1)$ and so on. Notice that the mapping $Q$ does not depend on the old displacement right/left
$d(t)$ but it produces the new displacement $d'=d(t+1)$.
The mapping $Q$ is the core of the Turing machine and can be represented by the triplets
``write,displace,call'' $(w,d,c)$ acting on a tape, as in table~\ref{t:somma} and in
figure~\ref{f:turing}:
\begin{enumerate}
\item \textbf{write:} it writes a new symbol at position $x_0(t)$,
\item \textbf{displace:} it moves by one cell to the right (R) or to the left (L),
\item \textbf{call:} it changes its internal state.
\end{enumerate}
An initial configuration is given by assigning the function $T(x,0)\in\mathcal{A}\,,\ \forall\ x \in \mathbb{Z}$.
Recursively, a new configuration $T(x,t+1)\in\mathcal{A}\,, \ \forall\ x \in \mathbb{Z}$
is computed from $T(x,t)$ with the mapping $Q$ in such a way that only a single mutated position
$x_0(t)$ can exist at each time
$$
\begin{array}{c}
T(x,t+1)=T(x,t) \quad \forall\ x\neq x_0(t) \\[3mm]
T(x_0(t),t+1) \mbox{ could be } \neq T(x_0(t),t)
\end{array}
$$
and such that $x_0(t+1)=x_0(t)\pm 1$. Here I will use the binary set of symbols
$\mathcal{A}=\{0,1\}$. This choice is mainly dictated by the simplicity of coding it offers.
Representing $T(x,t)\ \forall\ x $ by a tape of cells at position $x\in\mathbb{Z}$, one has the
familiar representation of figure~\ref{f:turing}.
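As a minimal sketch, such a machine can be executed in Python as follows; the dictionary encoding of the triplets, the finite tape and the step limit are choices made only for this illustration (the simulations described below use a tape of 300 cells and a limit of 4000 steps).
\begin{verbatim}
def run_tm(program, tape, max_steps=4000):
    """program[(state, symbol)] = (write, move, next_state);
    move is +1 (Right) or -1 (Left), next_state 0 means Halt."""
    state, pos = 1, 0
    for _ in range(max_steps):          # hard limit on the execution time
        if state == 0 or not 0 <= pos < len(tape):
            break                       # halted or fell off the finite tape
        write, move, nxt = program[(state, tape[pos])]
        tape[pos] = write
        pos += move
        state = nxt
    return tape

# a one-state machine that copies the read symbol, moves right and halts
# (this is the seed machine used at generation zero in the simulations below)
seed = {(1, 0): (0, +1, 0), (1, 1): (1, +1, 0)}
print(run_tm(seed, [0] * 10))
\end{verbatim}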
\begin{figure}[h]
\begin{pgfpicture}{-3cm}{0.5cm}{8cm}{4.0cm}
{\color{blue}
\pgfline{\pgfxy(0.5,1)}{\pgfxy(10.5,1)}
\pgfline{\pgfxy(0.5,2)}{\pgfxy(10.5,2)}
\pgfline{\pgfxy(1,1)}{\pgfxy(1,2)}
\pgfline{\pgfxy(2,1)}{\pgfxy(2,2)}
\pgfline{\pgfxy(3,1)}{\pgfxy(3,2)}
\pgfline{\pgfxy(4,1)}{\pgfxy(4,2)}
\pgfline{\pgfxy(5,1)}{\pgfxy(5,2)}
\pgfline{\pgfxy(6,1)}{\pgfxy(6,2)}
\pgfline{\pgfxy(7,1)}{\pgfxy(7,2)}
\pgfline{\pgfxy(8,1)}{\pgfxy(8,2)}
\pgfline{\pgfxy(9,1)}{\pgfxy(9,2)}
\pgfline{\pgfxy(10,1)}{\pgfxy(10,2)}}
{\color{red}
\pgfline{\pgfxy(3.5,2.1)}{\pgfxy(4.3,3.1)}
\pgfline{\pgfxy(3.5,2.1)}{\pgfxy(2.7,3.1)}
\pgfline{\pgfxy(4.3,3.1)}{\pgfxy(4.3,3.6)}
\pgfline{\pgfxy(2.7,3.1)}{\pgfxy(2.7,3.6)}
}
{\color{blue}
\pgfsetdash{{1pt}{1ex}}{0pt}
\pgfline{\pgfxy(-0.1,1)}{\pgfxy(0.5,1)}
\pgfline{\pgfxy(-0.1,2)}{\pgfxy(0.5,2)}
\pgfline{\pgfxy(10.5,1)}{\pgfxy(11.3,1)}
\pgfline{\pgfxy(10.5,2)}{\pgfxy(11.3,2)}}
\pgfputat{\pgfxy(1.5,1.5)}{\pgfbox[center,center]{0}}
\pgfputat{\pgfxy(2.5,1.5)}{\pgfbox[center,center]{1}}
\pgfputat{\pgfxy(3.5,1.5)}{\pgfbox[center,center]{1}}
\pgfputat{\pgfxy(4.5,1.5)}{\pgfbox[center,center]{1}}
\pgfputat{\pgfxy(5.5,1.5)}{\pgfbox[center,center]{0}}
\pgfputat{\pgfxy(6.5,1.5)}{\pgfbox[center,center]{1}}
\pgfputat{\pgfxy(7.5,1.5)}{\pgfbox[center,center]{1}}
\pgfputat{\pgfxy(8.5,1.5)}{\pgfbox[center,center]{0}}
\pgfputat{\pgfxy(9.5,1.5)}{\pgfbox[center,center]{1}}
\pgfputat{\pgfxy(3.5,0.7)}{\pgfbox[center,center]{$x_0(t)$}}
\pgfputat{\pgfxy(3.5,2.8)}{\pgfbox[center,center]{$\mathbf{s}(t)$}}
\pgfputat{\pgfxy(3.5,3.4)}{\pgfbox[center,center]{$Q$}}
\end{pgfpicture}
\caption{\label{f:turing}Graphical representation of a Turing machine at time $t$, in the internal state
$\mathbf{s}(t)$, located on the $x_0(t)$ cell of an infinite tape.}
\end{figure}
\begin{table}
$$
\begin{array}{|c|c|c|c|}
\hline \raisebox{-5mm}[7mm][4mm]{\begin{pgfpicture}{0cm}{0cm}{15mm}{10mm}
\pgfline{\pgfxy(1.7,0.1)}{\pgfxy(-0.2,1.2)} \pgfputat{\pgfxy(0.4,0.4)}{\pgfbox[center,center]{read}} \pgfputat{\pgfxy(1.1,1)}{\pgfbox[center,center]{\bf state}}\end{pgfpicture}} & {\bf 1} & {\bf 2} & {\bf 3} \\
\hline 0 & 1-\mbox{Right}-\bf 2 & 0-\mbox{Left}-\bf 3 & \_-\_-\_ \\
\hline 1 & 1-\mbox{Right}-\bf 1 & 1-\mbox{Right}-\bf 2 & 0-\_-\mbox{\bf Halt} \\ \hline
\end{array}
$$
\caption{\label{t:somma}Table of states of a Turing machine that performs the sum of two positive
numbers represented by ``sticks'': $\ \ldots 0\ 1\ 1\ 1\ 0\ 0\ $ represents the number three and so on.
Missing entries are irrelevant and can be fixed arbitrarily. Here and in the following, the states of the
machine are written in bold character, to ease the reading.}
\end{table}
Considering that the general characterization of Turing machines is not needed here, for computational
reasons the tape is taken to be of finite length, usually fixed to $L=300$ cells. In some simulations
the tape has been ``periodized'', by identifying the cell after the last one with the first one.
Periodic boundary conditions were also used in lattice models.
Moreover, as it is extremely easy to generate machines that run forever, a maximum of 4000
temporal steps is imposed. When a machine reaches it, it is stopped and its tape is taken without
further modification.
The simulations start with a population of $npop=300$ Turing machines each with just one state of
the following form
$$
\begin{array}{|c|c|}
\hline
&\bf 1 \\ \hline
0 & 0-\mbox{R}-\mbox{\bf Halt} \\ \hline
1 & 1-\mbox{R}-\mbox{\bf Halt} \\ \hline
\end{array}
$$
and go on for $ngen=50000$ generations or more.
At each generation every TM undergoes the following three processes, in the order:
\begin{enumerate}
\item (insertion) states-increase,
\item (point) mutation,
\item selection and reproduction.
\end{enumerate}
In the states-increase process, with a probability $p_{\text{i}}$, the TM passes from $\mathbf N_{\text{t}}$ to
$\mathbf N_{\text{t}}+\mathbf 1$ states by the addition of the further state
$$
\begin{array}{|c|c|}
\hline & \mathbf N_{\text{t}}+\bf 1\\
\hline 0 & 0-\mbox{R}-\mbox{\bf Halt}\\
\hline 1 & 1-\mbox{R}-\mbox{\bf Halt}\\ \hline
\end{array}
$$
This state will be initially non-coding since it cannot be called by any other state. Indeed,
the Turing machine cannot call a state that does not exist. The only way this state can be activated
is if a mutation in an already coding state changes the state call to $\mathbf N_{\text{t}}+\mathbf 1$.
Notice that, when called, this particular state does not affect the tape but halts the machine.
Consequently, the activation of this state is mainly harmful or neutral and it can be advantageous
only in exceptional cases; therefore, the TM can benefit from the added states only if they are
mutated before their activation.
This form of mutation vaguely resembles DNA insertion.
During point mutation, all the entries of each state of the TM can be randomly changed with a
probability $p_{\text{m}}$. The new entry is randomly chosen among all corresponding permitted values
excluded the original one. The permitted values are:
\begin{itemize}
\item 0 or 1 for the ``write'' entries;
\item Right, Left for the ``move'' entries;
\item The \textbf{Halt} state or an integer from {\bf 1} to the number of states $\mathbf N_{\text{t}}$ of
the machine for the ``call'' entries.
\end{itemize}
This mechanism of mutation is reminiscent of the biological point mutation.
Notice that the states-increase process is actually a form of mutation.
Here it has been chosen to keep the two
mutations separate in order to differentiate their roles. Other biological mechanisms like
translocation, inversion, deletion, etc.\ are not implemented.
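A minimal sketch of these two mutation processes, reusing the dictionary encoding of the previous sketch (the helper names are illustrative):
\begin{verbatim}
import random

def states_increase(program, n_states, p_i):
    """With probability p_i append state N_t+1, which copies the read
    symbol, moves right and halts; it stays non-coding until called."""
    if random.random() < p_i:
        n_states += 1
        program[(n_states, 0)] = (0, +1, 0)
        program[(n_states, 1)] = (1, +1, 0)
    return program, n_states

def point_mutation(program, n_states, p_m):
    """Mutate every entry of every triplet with probability p_m, drawing
    the new value among the permitted ones, the original one excluded."""
    for key, (w, d, c) in program.items():
        if random.random() < p_m:
            w = 1 - w                   # the only alternative symbol
        if random.random() < p_m:
            d = -d                      # Right <-> Left
        if random.random() < p_m:
            c = random.choice([s for s in range(n_states + 1) if s != c])
        program[key] = (w, d, c)        # state 0 stands for Halt
    return program
\end{verbatim}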
In the selection and reproduction phase a new population is created from the actual one (old population).
The number of offspring of a TM is determined by its ``performance'' and, to a minor extent, by chance.
The performance\footnote{The word ``performance'' is preferred to ``fitness'' as this last one
indicates two different concepts in biology and in the field of algorithms. The word ``fitness'' will
be used in the biological sense.} of a TM is a function
that measures how well the output tape of the machine reproduces a given ``goal'' tape starting from a
prescribed input tape. It is computed in the following way. The performance is initially set to zero.
Then the output tape and the goal tape are compared cell by cell. The performance is increased by one
for any $1$ on the output tape that has a matching $1$ on the goal tape and it is decreased by 3 for any
$1$ on the output tape that matches a $0$ on the goal tape.
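As a sketch, this scoring reads:
\begin{verbatim}
def performance(output_tape, goal_tape):
    # +1 for a 1 on the output matching a 1 on the goal,
    # -3 for a 1 on the output facing a 0 on the goal
    score = 0
    for out, goal in zip(output_tape, goal_tape):
        if out == 1:
            score += 1 if goal == 1 else -3
    return score
\end{verbatim}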
As a selection process, I use what in the field of evolutionary algorithms is known as ``tournament
selection of size 2 without replacement''. In it, two TMs are randomly extracted from the old
population and run on the
input tape. At the end, a performance value is assigned to each machine on the basis of its
output tape. The performance values are compared and the machine which scores higher creates two copies
of itself in the new population, while the other is eliminated. This reproduction is fully asexual.
If the performance values are equal, each TM creates a copy of itself in the new population.
The two TMs that were chosen for the tournament are eliminated from the old population and the
process restarts until the exhaustion of the old population.
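A minimal sketch of this selection step, reusing the \texttt{run\_tm} and \texttt{performance} helpers of the previous sketches; a full implementation would create independent (deep) copies of the offspring before mutating them.
\begin{verbatim}
import random

def tournament_selection(population, input_tape, goal_tape):
    old = population[:]
    random.shuffle(old)                # random extraction of the pairs
    new_population = []
    while len(old) >= 2:
        a, b = old.pop(), old.pop()
        sa = performance(run_tm(a, input_tape[:]), goal_tape)
        sb = performance(run_tm(b, input_tape[:]), goal_tape)
        if sa > sb:
            new_population += [a, a]   # winner leaves two offspring
        elif sb > sa:
            new_population += [b, b]
        else:
            new_population += [a, b]   # tie: one copy of each
    return new_population
\end{verbatim}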
The goal tapes are chosen according to the criterion of providing two difficult and qualitatively
different tasks for a TM. The distribution of the ``1'' on the goal tape has to be
extremely non-regular since a periodic distribution would provide a very easy task for a TM.
In the various simulations several goal tapes have been used. The tape ``primes'' has ``1'' on the
cell positions corresponding to prime numbers, with $1$ included for convenience, and zeros elsewhere:
$$
\begin{array}{l}
1110101000.1010001010.0010000010.1000001000.1010001000.0010000010.1000001000.1010000010.\\
0010000010.0000001000.1010001010.0010000000.0000001000.1000001010.0000000010.1000001000.\\
0010001000.0010000010.1000000000.1010001010.0000000000.1000000000.0010001010.0010000010.\\
1000000000.1000001000.0010000010.1000001000.1010000000.0010000000.
\end{array}
$$
In the previous expression I inserted a dot every ten cells to facilitate the reading.
The second goal tape $\pi$ is given by the binary expression of the decimal part of $\pi$,
namely $(\pi-3)_{\text{bin}}$:
$$
\begin{array}{l}
0010010000.1111110110.1010100010.0010000101.1010001100.0010001101.0011000100.1100011001.\\
1000101000.1011100000.0011011100.0001110011.0100010010.1001000000.1001001110.0000100010.\\
0010100110.0111110011.0001110100.0000001000.0010111011.1110101001.1000111011.0001001110.\\
0110110010.0010010100.0101001010.0000100001.1110011000.1110001101.
\end{array}
$$
Notice that while for prime numbers the ``1'' become progressively rarer so that the task becomes
progressively more difficult, in the case of the digits of $\pi$ they are more or less equally
distributed.
Another difference is that prime numbers are always odd (with the exception of 2); therefore, in the
goal tape
two ``1'' are separated by at least one ``0''. On the contrary, the digits of $\pi$ can form clusters
of ``1'' of arbitrary length; this feature is actually visible only in very long tapes of
thousands of cells or more and is not important here.
According to the definition, the maximal possible value for the performance is 63 for the prime
numbers and 125 for the digits of $\pi$.
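For illustration, the ``primes'' goal tape can be generated as follows; the first thirty cells reproduce the beginning of the tape shown above.
\begin{verbatim}
def primes_goal_tape(length):
    """1 at cell positions that are prime (position 1 included for
    convenience), 0 elsewhere."""
    def is_prime(n):
        return n >= 2 and all(n % k for k in range(2, int(n ** 0.5) + 1))
    return [1 if pos == 1 or is_prime(pos) else 0
            for pos in range(1, length + 1)]

print(''.join(map(str, primes_goal_tape(30))))
# prints: 111010100010100010100010000010
\end{verbatim}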
In \cite{chevalier}, the objective of gathering ``1'' on the left side of the output tape was introduced,
simulating the process of resource accumulation. The actual definition of the score is involved
so it will not be written here.
\section{Results}
\begin{figure}[!ht]
\includegraphics[viewport=30 10 438 368,clip,scale=0.7]{Pi3d.pdf}\hspace*{-4mm}
\includegraphics[viewport=66 53 318 348,clip,scale=0.77]{Picontour.pdf}\\
\hspace*{2mm}
\includegraphics[viewport=60 60 468 318,clip,scale=0.7]{Pi_p_i.pdf}
\includegraphics[viewport=80 60 468 318,clip,scale=0.7]{Pi_p_m.pdf}\\[2mm]
\begin{tikzpicture}
\useasboundingbox (0,0) rectangle (16,0.3);
\draw (6.3,8) node{$\log_{10} p_{\text{m}}$};\draw (0.5,8.7) node {$\log_{10} p_{\text{i}}$};
\draw (13.5,0.7) node {$\log_{10} p_{\text{m}}$}; \draw (4,0.7) node {$\log_{10} p_{\text{i}}$};
\draw (13.5,7.5) node {$\log_{10} p_{\text{m}}$};
\begin{pgfrotateby}{\pgfdegree{90}}
\pgfputat{\pgfxy(3.7,-0.5)}{\pgfbox[center,center]{performance}}
\pgfputat{\pgfxy(12.7,0.2)}{\pgfbox[center,center]{performance}}
\pgfputat{\pgfxy(11.7,-9.5)}{\pgfbox[center,center]{performance}}
\pgfputat{\pgfxy(3.7,-9.5)}{\pgfbox[center,center]{performance}}
\end{pgfrotateby}
\draw (3.8,7) node {(a)}; \draw(13.5,7) node {(b)};\draw (3.5,0.) node {(c)};\draw (13.5,0.) node {(d)};
\end{tikzpicture}
\caption{\label{plotpi}For the goal $\pi$, panel (a) shows the 3D plot of the best performance
value in the population, averaged over the ten different seeds of the simulation, as a function of the
states-increase rate $p_{\text{i}}$ and of the mutation rate $p_{\text{m}}$. The three orthogonal projections of (a)
are also shown.}
\end{figure}
In \cite{FM1} the dependence of the performance on the external parameters $p_{\text{m}},p_{\text{i}}$ was studied with
both of the goal tapes indicated above.
The two series of simulations show very similar features. The most evident effect is that
having a large amount of non-coding states speeds up evolution and allows the population to reach larger values of the
performance, as in Figure~\ref{plotpi}.
It is important to remember that when new states are added by the states-increase process,
they are and remain non-coding until activation by point mutation.
Of course, the model has a bias toward the growth of the number of states, because no deletion is
introduced and no cost for large genomes is used. This bias is on the total number of states, not
on the actual number of coding triplets. The latter is not biased, it can both increase and decrease.
This bias
\begin{enumerate}
\item \textbf{implies} that the total number of states $N_{\text{t}}$ cannot decrease but
\item \textbf{does not imply}
that the performance grows faster if $N_{\text{t}}$ is large. For this reason, there is no need to
add deletion or metabolic costs for large genomes.
\end{enumerate}
The total number of triplets (each state contributes two triplets; with a slight abuse of notation it is still denoted $N_{\text{t}}$ below) is approximately
\begin{equation}
N_{\text{t}} \approx 2(1+ ngen \cdot p_{\text{i}})
\end{equation}
If $N_{\text{c}}$ is the number of coding triplets, the ratio $N_{\text{c}}/N_{\text{t}}$ has been measured and it is of the order of
few percent, often less, so approximately $N_{\text{nc}}=N_{\text{t}}-N_{\text{c}}\approx N_{\text{t}}$ is the number of non-coding triplets.
The ratio $N_{\text{c}}/N_{\text{t}}$ is observed to decrease with the growth of the performance. This enhances the effect
of ``reservoir'' of the non-coding triplets: simulations show that the performance grows faster if $N_{\text{nc}}$
is large, Figure~\ref{plotpi} (a) and (c), Figure~\ref{f:storia}. This means that the non-coding triplets
are used to explore new strategies.
While the phenomenon in itself is not totally unexpected, its amplitude and persistence surprise.
Several simulations, in part not yet published, seem to indicate that
the phenomenon continues at higher $p_{\text{i}}$ with a ratio $N_{\text{c}}/N_{\text{t}}< 0.5$, namely with an enormous
excess of non-coding versus coding triplets.
It is important to stress that $N_{\text{t}}$ is positively selected, namely it is larger than the value attained
in the absence of selection (random evolution). Also, $N_{\text{nc}}$ is larger than in the random choice case.
As the bias is present with and without selection, the effect on $N_{\text{t}}$ is not produced by the bias.
It is a real effect, indirectly produced by selection. It is indirect because the algorithms of
selection do not act on $N_{\text{t}}$.
Besides numerical investigations, the model allows one to perform some analytical evaluations.
The spirit of the papers \cite{FM2, FM3} has been precisely to develop a mathematical description of
the mutation-selection dynamics and complete it with numerical data.
The mutation-selection dynamics is the set of rules that are used by the Turing machines evolving
population during simulations. They can be treated mathematically, thus showing the presence of
an error threshold. This is a value of the mutation probability $p_{\text{m}}^{\star}$
such that if $p_{\text{m}}> p_{\text{m}}^{\star}$ the highest performance class degrades faster than it is generated;
said otherwise, the occupation number of the highest performance class reduces to zero in such a
way that the population performance decreases. Degradation is due to harmful mutations, which are
very frequent events. Generation is due to rare good mutations and mainly to the selection mechanism,
that favours the replication of high performance individuals.
Moreover, the population evolves toward the error threshold; this means that, granted a sufficiently
large number of generations, the population will occupy all the performance classes up to the
error threshold.
\begin{figure}[!h]
\hspace{22mm}\includegraphics[angle=90,scale=0.6]{storia_fitness_pi.pdf}
\caption{\label{f:storia}The evolution of the performance with the generations,
at various values of the state-increase rate. Notice the overlap of the four highest lines, possibly
related to the presence of a plateau in the plot of the performance versus $p_{\text{i}}$. This interpretation
is reasonable but still uncertain and difficult to prove because of computational time.}
\end{figure}
\begin{figure}[!h]
\hspace*{22mm}\includegraphics[scale=0.6]{stati_codificanti.pdf}
\caption{\label{f:ncodif}The average number of coding triplets $\langle N_{\text{c}}\rangle$ at the end of
the simulation ($ngen=50000$) is shown as a function of $p_{\text{m}}$, for all the values of $p_{\text{i}}$. The value $N_{\text{c}}$
is taken for the best performing machine in the population and is averaged over the seeds. The black thick
line on the right represents the critical number of coding triplets $N_{\text{c}}^{\star}$, as a function of
$p_{\text{m}}^{\star}$, extracted from (\ref{errort}).}
\end{figure}
The error threshold evaluated in \cite{FM3} is
\begin{equation}\label{errort}
p_{\text{m}}^{\star}=1-2^{-\frac{1}{3N_{\text{c}}^{\star}}}
\end{equation}
Both of these effects are shown in Figure~\ref{f:ncodif}. The black thick line represents
the relation between the critical number of coding triplets $N_{\text{c}}^{\star}$ and the
error threshold $p_{\text{m}}^{\star}$. It is never crossed by the averaged population data.
Clearly, the error threshold equation expresses quite a general feature of systems evolving by
random mutation and performance based selection: the effect of random mutations is to put an
upper bound on the size of the genome. The only way to escape this fate is to reduce the effect of
mutations by using repair mechanisms, optimization, a small coding part in a large non-coding genome...
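As a numerical illustration of Eq.~(\ref{errort}), the following sketch evaluates the threshold for a few illustrative values of $N_{\text{c}}^{\star}$; the threshold decreases as the coding part grows, consistently with the upper bound on the genome size just discussed.
\begin{verbatim}
def error_threshold(n_c):
    # p_m* = 1 - 2**(-1/(3 N_c)): above this mutation rate the highest
    # performance class degrades faster than it is regenerated
    return 1.0 - 2.0 ** (-1.0 / (3.0 * n_c))

for n_c in (5, 20, 50, 100):
    print(n_c, error_threshold(n_c))
\end{verbatim}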
Other effects studied with the Turing machine model include the extinction time of the machines and the
evolutive effect of punctuated equilibria.
\section{Discussion}
The Turing machines model was introduced by myself and F. Musso in 2007.
From the first paper \cite{FM1} on, the model has been used for mathematical evaluations and several
numerical investigations \cite{FM2, FM3, chevalier}. One of the next challenges will be to introduce
recombination and study the evolution and maintenance of sexual reproduction, a very general
reproductive form in nature that is still theoretically poorly understood. Work in this
direction has been carried out during the internship of my master student \cite{chevalier}.
At the very beginning of this Turing machines program, it wasn't clear to me if the model was just
a personal exercise or if it could be of some use. The positive answer came later, by comparing
with other evolutionary models and also by appreciating that the TM model could be developed
in several directions, for example the one already cited of sexual reproduction.
There are not many other evolutionary models designed to study Darwinian evolution, the most famous one
being Avida \cite{Avida} with its ancestor Tierra \cite{Tierra}. They both are elegant and complete
platforms that create an in silico life. Organisms are programs that live on computer grids and
compete for resources: CPU time and memory.
Simplicity is not a feature of these models, nor is parsimony in terms of computer
resources.
A model that is much closer to TM has been developed by some members of the Institute of complex
systems in Lyon \cite{aevol}. Their model ``Aevol'' has both a genome and a proteome therefore
it implements the transcription/translation mechanism.
All these models have in common the idea that the genome is an algorithm that is created by random
mutations and is selected on the basis of its performance, as in Figure~\ref{insilico}, measured
by the ability to realize some task: self-replication in the Avida platform, evaluation of
some complex mathematical function in Aevol and in the Turing machines model.
Of course, they all have an incomparably lesser degree of complexity than living organisms.
More importantly, they do not try to describe the DNA by some ``close'' description
or the protein functions by some catalytic process. Is this a real difficulty?
Probably it would be of interest to describe a ``realistic'' genome, with four bases, the genetic code,
the mRNA and amino acid sequences. One could try to simulate the evolution of very basic enzymatic
functions in bacteria. To do that, one should find the ``functional site'' of a protein from its
amino acid sequence, with one of the prediction tools that are known to work.
In perspective, this project offers an interesting development but, so far, it has not been realized.
Instead, all the cited models claim that a certain algorithm can simulate a genome; they
claim that a mathematical function can represent the phenotype and work with these simplifying
hypotheses.
Given that, the problem of knowing if realistic results can emerge from non-realistic models is
conceptually extremely important. I think the answer that all these authors implicitly assume is very
deep and smart:
no matter the details of the model, results are universal if universal hypotheses have been formulated.
``Universal'' is employed in the sense of Kadanoff and Wilson, as it is used in statistical physics and
in the renormalization group.
Therefore, having different models is important because the comparison of their features and
predictions leads to an understanding of which features of evolution are universal.
\chapter{Protein assembly}
A large number of proteins become biologically functional only after association of a
number of amino acid chains. The ensuing structures are called oligomers. In addition to folding, the
oligomers need to assemble, which takes place through the formation of interface areas
that are mutually interacting.
From \cite{goodsell}, one learns that 20\% of the proteins in Escherichia coli are monomeric, the rest
oligomeric/polymeric, with a clear preference for dimers (38\%) and tetramers (21\%).
Amongst oligomers, the large majority is homooligomeric. Few are true polymers. In general,
polymers differ from oligomers by their variable stoichiometric number, that can take values
of hundred to millions of subunits.
In the Protein Data Bank (PDB), only 20\% of the recorded proteins are oligomers.
It has been noticed, however, that the PDB over-represents small monomers,
because of the difficulties in protein crystallization; thus the value of 20\% is an underestimate.
Given all these data, the importance of investigations of oligomeric proteins is apparent.
Folding and assembly are two processes that occur in oligomeric proteins after ribosomal synthesis.
It is believed that in most cases at least a partial folding is required before the assembly
can start. The reason for this is that assembly requires the encounter of at least two parts that are
in solution in the cell; this process is diffusion limited and can be quite lengthy. In spite of this,
it is known that sometimes a ``fly-casting'' mechanism takes place in which assembly comes very early
and the several subunits fold together only after assembly.
Thus, the two processes cannot be considered as separated and independent from each other.
Moreover, in the
first case, it is reasonable to expect at least a partial rearrangement of the structure after assembly.
Even if the microscopic description of folding and assembly follows the principles of
molecular dynamics and molecular mechanics approaches, it would be of great value to obtain a more
macroscopic understanding. Some of the relevant questions are now indicated.
What differentiates two amino acid sequences, one incapable of association, the other capable?
Namely, given an unknown sequence, can one predict if it will give rise to a monomer or to an oligomer?
Is it possible to predict, from the sequence, which amino acids will constitute the interface?
What will be the interface three-dimensional form?
These questions are not so different from those of protein folding. For example, given the sequence,
it is possible to do secondary structure predictions because it has been shown that
certain groups of a few amino acids have particular propensities for one or another possible
secondary structure. For example, this has led to the formulation of the Chou-Fasman rule.
These predictions, however, are not free of errors.
In a similar fashion, it is reasonable to imagine that groups of amino acids or perhaps certain
secondary structure elements have a propensity to form interfaces or not. Or else, there is a propensity for
interfaces of a specific geometric form.
Are there propensities for a preferred association mechanism?
Notice that examples are known where sequences with 90\% of identity follow different association
patterns. This means that few key amino acids can actually decide the association mechanisms
and, why not, the folding itself.
These and other questions motivate the present studies on oligomeric proteins.
It is important to focus on the interfaces. In an oligomeric protein the interface has a high degree
of specificity and is very stable. Indeed, the mutual recognition of the two sides of the interface
is extremely accurate. It was recognised early that this happens if the interface
is made of many weak ``contacts'' \cite{crane}, namely hydrogen bonds, and if the geometric and
chemical arrangements of atoms on the
two parts are complementary \cite{chothia}. Indeed, strong contacts, as in an ionic bond,
would be able to attract and fix
several different molecules, so they would be non-specific.
Absence of complementarity would increase the interatomic distance thus reducing the strength of
the contacts and possibly creating space for spurious molecules of water.
The project that I will detail in the next sections focuses on the interfaces of trimeric and higher
stoichiometry proteins. Using experimental approaches, it has been observed that few residues,
located on the interface of a protein oligomer, are crucial for its assembly. Some of them control
the formation of interfaces (association
steps) while others control the stability of the oligomer (maintenance of the associated state)
\cite{luke}. These key residues are not necessarily conserved among proteins of identical function
or even of similar fold \cite{ngling}.
This could mean that the few residues dedicated to protein assembly would have to be identified
experimentally, for each particular case. Alternatively, a theoretical approach could reveal
which features characterize interfaces, by a systematic investigation of the
known three dimensional structures of protein interfaces. There are about 4000 cases deposited on the
PDB data bank, from the trimeric to the dodecameric stoichiometry.
The aim is to identify key residues involved in the different steps of the protein assembly
and possibly to derive some of the basic principles that manage protein assembly.
To this purpose, I have created a series of programs (Gemini) that sort out the protein interfaces
and describe them
as interaction networks (graphs). The interface structure is thus efficiently coded into
graphs that allow one to identify (or at least propose) the chemical links responsible for the
interface's formation. At present, 3000 cases have been screened. The programs have been
successfully tested on known protein interfaces.
\section{Gemini}
Gemini is the common name of a series of programs and database utilities that have been created
to investigate properties of the interfaces of oligomers: presently, the most important are
GeminiDistances, GeminiRegions, GeminiGraph and GeminiData \cite{FL1}.
These programs come from the need to make systematic the analysis of oligomeric interfaces in
three-dimensional protein structures. The main criterion followed has been to propose a framework of
the amino acid interactions involved in an interface so that their role in providing the interface with its
specificity and in regulating the mechanism of assembly can be addressed, for example by comparing
protein interfaces of similar geometry.
The objective is to find all pairs of atoms (one atom per chain) located at distances small enough
for intermolecular interaction, and to reduce this set of interaction pairs to a minimum: the
smallest set that still describes the protein interface.
\subsection{GeminiDistances}
This program has the main goal to recognize the interface between two adjacent chains M and M+1
in an oligomeric protein from its 3D structure.
A first screening is done on the backbone $\alpha$ carbons of adjacent chains: all pairs of amino acids
(one per chain), whose C$\alpha$ are separated by a distance lower than a given cut-off,
fixed to cut1=20 \AA, are retained for the next step, the others are discarded. This has
the sole goal
of speeding up the calculation and is justified by the observation that the maximal theoretical
length of an amino acid is about 8 \AA. With smaller distance cut-offs (e.g. 10 \AA), some of
the amino acids of the interfaces were missed.
In the second screening all the atoms of the amino acids previously retained are examined and the pairs
at distance lower than cut2=5 \AA\ are kept to form the so-called \textit{raw interface}.
This $5$ \AA\ distance covers the range of distances that corresponds to weak chemical bonds involved
in interfaces: van der Waals, electrostatics, hydrogen bonds. Notice that these cut-offs can be
freely modified. The presence of the second cut-off makes the raw interface de facto independent
of the first one: values of cut1 of 17, 20, 25 \AA\ and higher give identical results.
The raw interface is a long list of pairs of atoms that may form chemical bonds. For example, the
interface of the heptamer co-chaperon 10, produced by Mycobacter tuberculosis (PDB code: 1HX5),
has 328 pairs of atoms selected in the raw interface. Because the aim of GeminiDistances is to propose a
framework with a minimum of interactions, it is necessary to add another constraint to deselect a
maximum number of pairs. The deselection is performed by a
\textit{symmetrization} procedure which only retains a single interaction per atom, the one involving
the closest partner, even for atoms having more than one partner on the adjacent chain. Precisely,
for each atom of M, in the raw interface, only the closest atom on M+1 is retained, yielding a set of
pairs $L1$. Similarly, for each atom of M+1, in the raw interface, only the closest one on M is
retained to form a second set of pairs $L2$. The pairs common to both lists, $L1 \cap L2$,
form the interface used for the investigations of this paper, also called \textit{symmetrized interface}.
In other words, a pair of atoms $(i, j)$ is in the interface if both $i$ is the closest to $j$
and $j$ is the closest to $i$.
The symmetrization makes the symmetrized interface almost cut-off independent. Indeed, values in the
range cut2=4.5 to 6 \AA\ have been explored. In the former case, some interactions are lost and
the raw interface forms a subset of the raw interface obtained with cut2=5 \AA. Vice versa, in the
latter case the raw interface is bigger. After symmetrization, one observes remarkably small
variations: on average, they do not exceed 10\% of the interface in the indicated range for cut2.
Variations are even smaller if only amino acids and not atoms are considered.
It is important to keep in mind that the symmetrization discards many atoms at distances for which
a chemical interaction is plausible. Therefore, the output generated by GeminiDistances may miss
atoms, that will be called \textit{false negative}. It may also select atoms which are not
chemically the most plausible, indicated as \textit{false positive}. But the selection of the most
chemically plausible interactions is a more
difficult task than the geometrical selection performed by GeminiDistances. A more chemical
selection would necessarily be slower and might not be more accurate. Such a method may
be better for a case-to-case study, but the symmetrization is more appropriate for comparing the
interfaces of many oligomers.
For example, from the 328 pairs of atoms selected for the raw interface of 1HX5, only 18 pairs
remain after symmetrization. In a more coarse grained interpretation, the atoms
of the symmetrized interface are replaced by the amino acids they belong to. This \textit{amino acids
interface} is used by the next program GeminiRegions.
GeminiDistances is written in C and runs in less than 0.2 s for an average size protein,
on a normal desktop computer.
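A minimal sketch of the symmetrization step, assuming NumPy arrays of atomic coordinates for two adjacent chains; the C$\alpha$ pre-screening with cut1 is omitted here, since it only serves to speed up the computation.
\begin{verbatim}
import numpy as np

def symmetrized_interface(coords_m, coords_m1, cut2=5.0):
    """Keep a pair (i, j), i on chain M and j on chain M+1, only if their
    distance is below cut2 AND each atom is the nearest partner of the
    other one (mutual nearest neighbours).  coords_*: (n_atoms, 3)."""
    d = np.linalg.norm(coords_m[:, None, :] - coords_m1[None, :, :], axis=2)
    raw = np.argwhere(d < cut2)          # raw interface pairs
    nearest_on_m1 = d.argmin(axis=1)     # list L1: closest j for each i
    nearest_on_m = d.argmin(axis=0)      # list L2: closest i for each j
    return [(i, j) for i, j in raw
            if nearest_on_m1[i] == j and nearest_on_m[j] == i]
\end{verbatim}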
\subsection{GeminiRegions, GeminiGraph}
This program separates the amino acids interface, given by GeminiDistances, into regions,
understood as elementary interaction networks between the amino acids of two adjacent chains.
Many criteria can be used: the one adopted so far considers that amino acids in a region must be
``close'' along the sequence, in addition to being close in space, as considered in the construction of
the interface itself.
Another interesting criterion that is implemented in Gemini is based on connected components
in graph theory. According to it, atoms are grouped if there are paths that connect them by steps
shorter than a given distance. This criterion ignores the sequential nature of the proteins.
This C++ program runs in the infinitesimal time of 2 ms per protein.
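To fix ideas, a minimal sketch of the connected-component criterion (in Python, for illustration only; the actual GeminiRegions program is written in C++, and all names are mine) could read:
\begin{verbatim}
import numpy as np
from collections import deque

def regions_by_connected_components(coords, link_dist=5.0):
    """coords: (n, 3) array of interface-atom coordinates.
    Returns the regions as lists of atom indices, i.e. the connected
    components of the graph whose edges join atoms closer than link_dist."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    adj = (d <= link_dist) & ~np.eye(n, dtype=bool)    # adjacency matrix
    seen, regions = set(), []
    for start in range(n):
        if start in seen:
            continue
        queue, component = deque([start]), []
        seen.add(start)
        while queue:                                   # breadth-first search
            i = queue.popleft()
            component.append(i)
            for j in map(int, np.flatnonzero(adj[i])):
                if j not in seen:
                    seen.add(j)
                    queue.append(j)
        regions.append(sorted(component))
    return regions
\end{verbatim}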
By construction, a region, or an interaction network, contains the interactions expressed by the
pairs of the amino acids that form the interface; this corresponds to the notion of graph. In
mathematics, a graph is a set of vertices (here the amino acids) connected by a set of links
(here the weak chemical bonds).
Therefore, it is natural to introduce the following graphical representation, done
with the program GeminiGraph.
Vertices are the amino acids; for the reader's ease, those involved in a weak chemical bond are
symbolized by a cross ``$\times$'' whereas those not involved in weak chemical bonds are
symbolized by a dot ``$\cdot$''. Dash-dotted lines indicate backbone-backbone interactions,
solid lines indicate side chain-backbone or side chain-side chain interactions.
Amino acid type and number are indicated. See Figures~\ref{1efi-1-10} and~\ref{1ek9-1-48}.
\begin{figure}\begin{center}
\includegraphics[height=5cm]{5-1EFI-1-10.pdf}
\end{center}
\caption{\label{1efi-1-10}The largest region of the pentamer 1EFI (subunit B of the heat-labile
enterotoxin of Escherichia coli) is represented here. The ladder structure formed by backbone-backbone
interactions is present in most of the interfaces formed by the alignment of two parallel or
antiparallel $\beta$ strands.}
\end{figure}
\begin{figure}[t]\begin{center}\vspace*{-7mm}
\includegraphics[height=7cm]{1ek9.pdf}\\[-7mm]
\includegraphics[height=5cm]{3-1ek9-1-48.pdf}
\caption{\label{1ek9-1-48}The top image highlights the $\alpha$ and $\beta$ structures of the interface
of the trimeric membrane protein TolC (PDB: 1EK9) of Escherichia coli.
The bottom image contains the Gemini graph of the interface.
The right part shows the same ladder structure as Figure~\ref{1efi-1-10}; the left part is instead characterized by more separated amino acids, with a distance along the sequence that oscillates
between 3 and 4 residues,
and by the presence of ``V''-shaped structures with an angular separation of 3-4 residues. Indeed,
this left part is an interface formed by two $\alpha$-helices; in fact, it is a big $\alpha$-coiled-coil
interface of 12 $\alpha$-helices wound up in a big helix.}
\end{center}
\end{figure}
The interaction networks have been extensively compared with known cases in the literature, with
good agreement on the amino acids involved in protein interfaces. The comparison shows that
Gemini detects rather accurately the amino acids geometrically and chemically involved in the
interfaces. The chemical accuracy is particularly remarkable since the GeminiDistances selection is
based on the geometry of the interface and no chemical selection is done.
This recalls the concept Crick formulated while observing $\alpha$-coiled-coil interfaces: the analysis
of the geometry of a protein interface leads to its chemical specificity \cite{crick}.
\section{Developments}
There are basically three lines of development that emerge from the Gemini interaction networks. I
will briefly present them.
Firstly, the interaction networks must be analysed and systematically compared, looking for
patterns. Many parameters can matter. The polarity of the residues has been preliminarily studied in
\cite{poster}. Some statistics on the length of the side chain and the differential use of the amino
acids have been presented in \cite{risson} and, previously, in \cite{ivan}.
This research continues with the main focus on the interfaces formed by the
alignment of two parallel or antiparallel $\beta$ strands. Nearly 60 representatives have been
collected for this geometry, which I will call $2\beta$.
Indeed, the ladder structure of Figure~\ref{1efi-1-10} is observed extremely frequently in this type
of interface but not in other interface geometries; it is thus a candidate distinctive feature.
This is a good example of the patterns that I would like to trace in the Gemini graphs: features that
make it possible to distinguish geometries and to characterize their chemical properties.
The preliminary analyses cited indicate that the amino acids are ``flexible'': they adapt to play
different roles. This suggests that specific features will not appear at the amino acid level but
possibly at the lower level of atomic groups in the side chain.
Moreover, the investigations point toward the joint structure, in which both sides
of an interface matter: patterns or elementary blocks must appear in an interface, not
just in a sequence.
Secondly, interaction networks can be used to propose amino acid substitutions and to test the
effects with in vitro experiments. Moreover, the very principles that I have adopted in designing
the Gemini programs can be tested against these experiments.
Given a network of interactions, it is reasonable to expect that the effect of a mutation will
differ according to which amino acid
is modified: little effect is expected on amino acids marked by a dot, a big effect
for amino acids with many connections.
The experimental part is the responsibility of C. Lesieur and is performed at the facilities of the BioPark
of Archamps.
One indication that has been found is that some interfaces are ``active'' even in the absence of
the rest of the chain. This means that the subunits can recognize each other even when the rest of
the chain has been removed.
An opposite result would have indicated that the whole chain is always needed for the assembly,
thus showing a marginal role of the interface.
Thirdly, one can imagine simulating the process of association of two subunits. For this, a software
package is available, Simulation of Diffusional Association (SDA) \cite{sda}. This software implements the
Langevin equation for Brownian motion and allows one to trace Brownian trajectories of two
molecules in water. Statistics on the trajectories produce the association rates, namely the
number of encounters per unit time, and the residence time. Clearly, these simulations can replace in vitro
association experiments.
The interest, for the Gemini team's research, is to test artificially created interfaces
and study their interactions. My student J. Zrimi devoted his internship to simulating
the association of different subunits of three proteins \cite{jihad, hystid}, leading
to the confirmation of a role of the four histidines in the association of these proteins.
I am directly involved in the first and third of these projects, the second being based on experimental
manipulations.
\section{Discussion}
The Annecy team groups the competences and the facilities to work on both the theoretical
and the experimental aspects of protein assembly, to understand the mechanisms of assembly, the
sequence-structure-interface relationship, and the structural determinants of the interface geometry.
I created the team and I am chiefly responsible for the theoretical part.
The most important result has been the creation of the programs Gemini \cite{gemini}, of which I am the
main author (80\%). For the graphical part, I asked for the collaboration of my internship student
\cite{mottin}.
The creation of these programs was, more than just writing lines of code, the search for the
correct ideas to translate three-dimensional all-atom information into a synthetic description of
the most relevant interactions. This process lasted for more than one year. Of course,
the principles implemented in Gemini have been discussed at length with C.~Lesieur and also with
L. Vuillon.
I have tutored several internship students. In particular, J. Zrimi spent a five-month
internship here. He was a Master 2 student of the Master ``Production et Valorisation des Substances
Naturelles et Biopolym\'eres'' at the Faculty of Sciences and Techniques of Marrakech.
He will come back to our laboratory for his PhD studies.
A very important event, of which I am promoter and co-organizer, is the conference
``Theoretical approaches for the genome and the proteins'', TAGp2010, which will take place in
Annecy-Le-Vieux in October 2010.
This conference follows two previous meetings, held in 2006 and 2008, that were focused on the genome.
\chapter*{Preface}
The text that I present in the following pages aims at giving some flavour of the research I have
carried out since my degree in physics, obtained in 1995. I have tried to give these notes the style
of a comprehensible presentation of the ideas that have animated my research, with emphasis
on the unity of the development; the single steps are presented here in a correlated view.
The calculation details are usually available in my original papers, and have therefore been
omitted here, also in view of space and time constraints.
For many years, I have been interested in quantum integrable systems. They are physical models
with very special properties that allow observable quantities to be evaluated by exact calculations.
Indeed, exact calculations are seldom possible in theoretical physics; for this reason,
it is instructive to be able to perform exact evaluations in some specific model.
B.~Sutherland entitled his book ``Beautiful models'' \cite{suth} to express the elegant physical
and mathematical properties of integrable models.
Thus, in the Introduction to Part I, I define
and present, in a few examples, a number of basic properties of quantum integrable systems.
These examples will be used in the following three chapters, where I describe the work
I did on the Destri-de Vega equations (first chapter), on the thermodynamic Bethe ansatz
(second chapter), and on the Hubbard model (third chapter).
In each chapter I also give one or a few proposals for the future, to show that the respective
field is an active domain of research.
Of course, I had to make a choice of the subjects presented and, necessarily, others were excluded
to keep the text to a readable size.
In particular, I regret that I could mention very little of the physical combinatorics of TBA
quasi-particles, work that I have carried out with P. Pearce and that would have required several
additional pages.
Around 2006, I started to follow lectures and seminars delivered by people who, coming
from a theoretical
physics background, were starting to work on the genome, proteins and cells.
Two colleagues of my laboratory, L. Frappat and P. Sorba, were working on a quantum group
model for the genetic code. I was curious: how can someone even think of applying quantum groups or,
let us say, integrable systems to the genetic code? Now I know that, beyond the application of
the apparatus of theoretical physics to biology, it is important to find the new
ideas, the new equations, the new models that are needed to better capture the properties of biological
systems. In Part II of this text I will clarify this attitude, especially with the
motivations on page~\pageref{introbio}.
After a series of lectures by M. Caselle, I started to experiment with an evolutionary model
based on Turing machines. The model, created by a colleague of mine, F. Musso, and myself,
will be presented in Chapter~\ref{c:turing}.
Near the end of 2007, Paul Sorba was contacted by a biologist interested in finding
theoretical physicists for a collaboration. This was an unusual request, so Paul organized a meeting
with C. Lesieur to listen to her research and projects. I immediately agreed to
participate and the team ``Gemini'' was created. A few months later a regular collaboration
was under way,
especially after my primitive but successful attempts to use the art of computer programming
to search for protein interfaces.
The two projects on biophysics are now my main research activities, and the time I dedicate to
integrable systems has been considerably reduced.
I think the changes I made in my activities reflect more than a personal event and highlight
the new horizons theoretical physics is called to explore.
\part{Integrable models}
\chapter{Introduction}
The study of integrable models is the study of physical systems that are too elegant to be true but too
physical to be useless.
Take water in a shallow canal ($\phi$ is the wave amplitude) and you will find the well-known example
of the Korteweg-de Vries equation (KdV, formalized in 1895; the first observation of solitary waves in a canal was made by J. Scott Russell in 1834 and reported in 1844)
\begin{equation}\label{KdV}
\partial_t\phi+\partial^3_x\phi+6\phi\partial_x\phi=0.
\end{equation}
This equation is nonlinear, thus different waves are expected to interact with each other.
Its speciality is that it admits ``solitonic'' solutions, namely wave packets in which each component
emerges undistorted after a scattering event. This rare property is reminiscent of free wave motion, in which
different wave components move independently; it is normally destroyed
as soon as interactions are switched on, unless some special constraints
forbid the distortion. For the sake of precision, notice that the KdV equation also has ``normal''
dispersive waves.
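For concreteness, the simplest solitonic solution of (\ref{KdV}) is the well-known one-soliton profile, a bump travelling undistorted at speed $v>0$ (taller solitons move faster):
\begin{equation}
\phi(x,t)=\frac{v}{2}\,\mathrm{sech}^2\!\left(\frac{\sqrt{v}}{2}\,(x-vt-x_0)\right),\qquad v>0.
\end{equation}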
The wave propagation conserves an infinite number of integrals of motion. This makes clearer
the presence of the constraints that force the unusual solitonic behaviour.
It is a general theorem of Hamiltonian mechanics that if a (classical) system of coordinates
$q_i,p_i\,,\quad i=1,\ldots,N$ and Hamiltonian $H(q,p)$ possesses $N$ independent functions
$I_i(q,p)$ such that
\begin{equation}\label{involuz}
\{H,I_i\}=0=\{I_i,I_j\}
\end{equation}
then there exist $N$ action-angle variables $\phi_i,I_i$.
The Hamiltonian is a function of the $I_i$ only, $H(I_i)$, and the equations of motion can be
explicitly solved by just one integration.
This is the origin of the name ``integrable models''.
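As the simplest illustration of the theorem ($N=1$), consider the harmonic oscillator:
\begin{equation}
H=\frac{p^2}{2}+\frac{\omega^2 q^2}{2}\,,\qquad
q=\sqrt{\frac{2I}{\omega}}\,\sin\phi\,,\quad p=\sqrt{2I\omega}\,\cos\phi
\quad\Longrightarrow\quad H=\omega I\,,
\end{equation}
so that $\dot\phi=\partial H/\partial I=\omega$ and the motion follows from a single trivial integration, $\phi(t)=\phi(0)+\omega t$.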
This theorem is lost when $N \rightarrow \infty$ or for a quantum system, but somehow its ``spirit''
remains: the presence of several integrals of motion, as in (\ref{involuz}), over-constrains the
scattering parameters of waves or particles and special behaviours appear.
In $1+1$ quantum field theory, this has been made precise by showing \cite{zam1979, parke} the absence
of particle production and the factorization of the scattering matrix when there are at least two
local conserved charges that are integrals of Lorentz tensors of rank two or higher.
This theorem has very strong consequences. It implies for example that the scattering is elastic, namely the set of incoming
momenta coincides with the set of outgoing momenta. As an example,
the factorization is written here for a four-particle scattering
\begin{equation} \label{fattor}
S_{i_{1}...i_{4}}^{j_{1}...j_{4}}=
\sum_{k_{1}k_{2}k_{3}k_{4}l_{1}l_{2}l_{3}l_{4}}
S_{i_{1}i_{2}}^{k_{1}k_{2}} S_{k_{1}i_{3}}^{l_{1}k_{3}}
S_{k_{2}k_{3}}^{l_{2}l_{3}} S_{l_{1}i_{4}}^{j_{1}k_{4}}
S_{l_{2}k_{4}}^{j_{2}l_{4}} S_{l_{3}l_{4}}^{j_{3}j_{4}}
\end{equation}
but the generalization is simple \cite{zam1979}. The sum goes over internal indices
as in Figure~\ref{f:scatt}.
\begin{figure}
\hspace*{50mm}\begin{tikzpicture}
\draw (0.2,4.3) -- (5,1.8);\draw(0,4.3) node{$i_1$};\draw(5.2,1.8) node{$j_1$};
\draw (1.,4.7) -- (2.8,0);\draw(0.9,4.9) node{$i_2$};\draw(2.8,-0.3) node{$j_2$};
\draw (3,3.9) -- (0.5,0.1);\draw(3.1,4.1) node{$i_3$};\draw(0.4,-0.1) node{$j_3$};
\draw (5,2.5) -- (-0.1,0.5);\draw(5.3,2.5) node{$i_4$};\draw(-0.3,0.3) node{$j_4$};
\draw(0.5,3.8) node{$\theta_1$}; \draw(1.,4.2) node{$\theta_2$};
\draw(3,3.3) node{$\theta_3$}; \draw(4.5,2.7) node{$\theta_4$};
\draw(2,3.7) node{$k_1$}; \draw(1.3,2.9) node{$k_2$};
\draw(2.42,2.55) node{$k_3$}; \draw(3.8,1.6) node{$k_4$};
\draw(3.7,2.7) node{$l_1$}; \draw(2.3,1.8) node{$l_2$};
\draw(1.3,1.9) node{$l_3$}; \draw(1.8,0.9) node{$l_4$};
\end{tikzpicture}
\caption{\label{f:scatt}A four-particle scattering: $i_1+i_2+i_3+i_4\rightarrow j_1+j_2+j_3+j_4$.
Incoming and outgoing momenta do coincide. Time flows downward.}
\end{figure}
The message is clear: an $N$-particle scattering
factorizes into $N(N-1)/2$ two-particle interactions. This means that a scattering event always
decomposes into independent two-particle events, without multi-particle effects.
Internal indices can only appear when there are particles within the same mass multiplet,
otherwise the conservation of momenta forces the conservation of the type of particle
$i_1=k_1=l_1=j_1$ and so on. This means that particle annihilation or creation are forbidden
outside a mass multiplet.
The factorization comes from the presence of higher rank integrals of motion and from the peculiar
property of a two-dimensional plane that non-parallel lines always meet \cite{parke}.
In a Minkowski space, integrals of motion that are integrals of Lorentz tensors act by
parallel-shifting trajectories. For example, they parallel-shift the lines in Figure~\ref{f:scatt}.
In $1+1$ dimensions, two non-parallel straight trajectories always have a crossing point, but in higher
dimensions a parallel shift can suppress the crossing point.
This geometrical fact indicates that the constraints imposed in higher dimensions are stronger than
in $1+1$ and the theory will be a free one, as shown by Coleman and Mandula \cite{colemanmandula}.
Therefore, factorization is a very strong property that has no equal in a general field theory.
The example of the sine-Gordon model \cite{zam1979}, for which the full S-matrix is known, will be presented later.
In order to construct a scattering formalism, we need to use asymptotic states, namely the so-called
IN states ($t \rightarrow -\infty$) and OUT states ($t \rightarrow \infty$).
A two-particle scattering then takes the form
\begin{equation} \label{commut}
|A_{i}(\theta_{1}) A_{j}(\theta_{2})\rangle_{\text{in}} =\sum_{k,l} S_{ij}^{kl}(\theta_1-\theta_2)
|A_{k}(\theta_{1}) A_{l}(\theta_{2})\rangle_{\text{out}}
\end{equation}
where it has been taken into account that Lorentz boosts shift
rapidities\footnote{$E=m\cosh \theta\,,\quad p=m\sinh \theta\,,\quad \sinh \theta=\frac{v}{\sqrt{1-v^2}}$}
by a constant amount, so the amplitude depends only on the difference.
By parallel-shifting lines in Figure~\ref{f:scatt}, it is possible to appreciate that there are two
possible factorizations for a $3\rightarrow 3$ particle scattering. Their consistency
implies the following equation known as Yang-Baxter equation or factorization equation
\begin{equation} \label{YB}
\sum_{k_1,k_2,k_3}S_{i_{1}i_{2}}^{k_{1}k_{2}}(\theta_{1}-\theta_2)
S_{k_{1}i_{3}}^{j_{1}k_{3}}(\theta_{1}-\theta_3)
S_{k_{2}k_{3}}^{j_{2}j_{3}}(\theta_{2}-\theta_3)=\sum_{k_1,k_2,k_3}
S_{i_{2}i_{3}}^{k_{2}k_{3}}(\theta_{2}-\theta_3)
S_{i_{1}k_{3}}^{k_{1}j_{3}}(\theta_{1}-\theta_3) S_{k_{1}k_{2}}^{j_{1}j_{2}}(\theta_{1}-\theta_2)
\end{equation}
This equation characterizes quantum integrability. It first appeared
in the lattice case as the star-triangle relation obtained in the context of the Ising and
six-vertex models (see for example \cite{baxter}).
In the lattice context, scattering amplitudes are replaced by Boltzmann weights.
A fully general definition of integrable theories is difficult because integrable models are found
in a variety of cases and contexts from lattice models to continuum theories, from classical to
quantum dynamics.
Therefore, rather than trying to give a general definition, I prefer to indicate the most relevant
features. Indeed,
the three key ingredients of an integrable theory, all apparent both in the
KdV and in the sine-Gordon case (see below), are:
\begin{enumerate}
\item[P1] incoming parameters of waves or particles are left unchanged by the scattering event, apart from time shifts,
\item[P2] there are infinite integrals of motion in involution,
\item[P3] a Yang-Baxter equation holds.
\end{enumerate}
The first one expresses the conservation of the incoming momenta. The second characterization generalizes the
original notion of integrability for classical Hamiltonian systems with finite degrees of freedom.
The third property expresses the mathematics of integrability.
\section{The sine-Gordon model}
The sine-Gordon model will be used here as a complete example of several ``integrable'' ideas.
Later it will be used to introduce the nonlinear integral equation of type
Kl\"umper-Pearce-Destri-de Vega.
The Lagrangian density is
\begin{equation}\label{senogordone}
\mathcal{L}[\phi]=
\frac12\ \partial_{\mu}\phi\ \partial^{\mu}\phi+\frac{\mu^2}{\beta^2} (\cos\beta\phi-1)
\end{equation}
and will be considered in 1+1 dimensions (signature of the metric $(1,-1)$).
The corresponding equation of motion is
\begin{equation}\label{senogordone2}
\frac{\partial^2\phi}{\partial t^2} -\frac{\partial^2\phi}{\partial x^2}
=-\frac{\mu^2}{\beta} \sin\beta \phi
\end{equation}
At small $\beta$ this model appears as a deformation of the Klein-Gordon equation, in which $\mu$ plays
the role of a mass. Expanding the cosine in the Lagrangian (or the sine in the equation of motion),
the coupling first appears at fourth order, through a term proportional to $\beta^2\phi^4$ as written
explicitly in (\ref{senogordoneappr}) below, while $\beta\rightarrow 0$ reproduces precisely the Klein-Gordon equation.
The sine-Gordon equation (\ref{senogordone2}) admits solitonic solutions, satisfying property P1,
which come in three distinct types\footnote{It is usual to interpret as equivalent those fields that
differ by multiples of $2\pi/\beta$.}
\begin{enumerate}
\item the solitons, characterized by $\phi(+\infty,t)-\phi(-\infty,t)=\frac{2\pi}{\beta} m$, $m>0$,
integer;
\item the antisolitons, $\phi(+\infty,t)-\phi(-\infty,t)=\frac{2\pi}{\beta} m$, $m<0$, integer;
\item the breathers, with $\phi(+\infty,t)=\phi(-\infty,t)=0$.
\end{enumerate}
Solutions that combine an arbitrary number of these three elementary types do exist and they are
all known \cite{faddeevtak}. They all behave as indicated in property P1. Precisely for this reason,
one can think of the soliton as an entity ``in its own right'': it is recognizable and well identified
even if it participates in a multicomponent wave.
The names soliton and antisoliton suggest that these two waves are distinct because they have
opposite sign of the ``topological charge'' $\phi(+\infty,t)-\phi(-\infty,t)$. Since the breather has
zero topological charge, it can be interpreted as a bound state of a soliton and an antisoliton.
The single soliton state at rest is
\begin{equation}
\phi_s(x)=\frac{4}{\beta}\ \text{atan} \exp (\mu x )
\end{equation}
and the single antisoliton is simply given by $\phi_a=-\phi_s$. By a Lorentz boost, the single soliton
at speed $u$ is $\phi_s((x-ut)/\sqrt{1-u^2})$.
An example of soliton-antisoliton state is given by
\begin{equation}\label{santis}
\phi_{sa}(x,t)=\frac{4}{\beta}\ \text{atan} \frac{\sinh \frac{\mu ut}{\sqrt{1-u^2}}}{{u\cosh\frac{\mu x}{\sqrt{1-u^2}}}}
\end{equation}
This state is not the breather (see later). Indeed, at large $|t|$ this state decomposes into a soliton and
an antisoliton solution travelling in opposite directions (and not bound)
\begin{eqnarray}\label{decomp}
\phi_{sa}(x,t)&\xrightarrow[t\rightarrow -\infty]{}& \phi_s\Big(\frac{x+ut}{\sqrt{1-u^2}}+\log u\Big)+
\phi_a\Big(\frac{x-ut}{\sqrt{1-u^2}}-\log u\Big)\\
\phi_{sa}(x,t)&\xrightarrow[t\rightarrow \infty]{}& \phi_s\Big(\frac{x+ut}{\sqrt{1-u^2}}-\log u\Big)+
\phi_a\Big(\frac{x-ut}{\sqrt{1-u^2}}+\log u\Big)\nonumber
\end{eqnarray}
Notice that each wave maintains its initial speed, as indicated by property P1, just experiencing a
phase shift of $-2\sqrt{1-u^2}\,(\log u)/u>0$, since $0< u< 1$. The phase shift is positive, namely the
two interacting waves come out ahead of their asymptotic free motion.
This advance indicates an attraction, consistently with the idea that solitons and antisolitons
have opposite charge.
The simplest breather-like solution
\begin{equation}
\phi_{b}(x,t)=\frac{4}{\beta}\ \text{atan} \frac{\sin \frac{\mu ut}{\sqrt{1+u^2}}}{{u\cosh\frac{\mu x}{\sqrt{1+u^2}}}}
\end{equation}
is a time periodic solution that takes its name from the fact that it resembles a mouth that opens and closes.
Curiously, it can be formally obtained from the soliton-antisoliton state (\ref{santis}) by rotating to an
imaginary speed $u\rightarrow i\,u$.
This type of solution can be interpreted as a bound state of a soliton and an antisoliton because it has zero
topological charge and because the soliton and antisoliton can attract each other.
It is significantly different from the soliton-antisoliton state because, asymptotically, it does not
decompose into two infinitely separated waves, as the soliton-antisoliton state (\ref{decomp}) does.
Finally, the sine-Gordon model admits an infinite number of conserved integrals of motion in involution
(property P2).
The sine-Gordon model discussed so far is strictly classical, namely the field $\phi$ is a real function
of space and time. Nevertheless, the model can be quantized, with fields becoming operators on a Hilbert
space, leading to a scattering theory of quantum particles.
Notice that the various solutions given so
far do not survive the limit $\beta\rightarrow 0$, namely they are not
perturbative solutions of the Klein-Gordon equation (see the remark after (\ref{senogordone2})).
The coupling $\beta$ is not very important in the classical theory and could be removed
by redefinition of the field and the space-time coordinates. On the contrary, in the quantum theory,
it will play a true physical role.
In \cite{coleman1975},\cite{dashen1975},\cite{korepin1975} some interesting steps of
the quantization procedure are performed.
In particular, the need to remove ultraviolet divergences and the existence of a lower
bound for the ground state energy lead to observe that outside the range
\begin{equation}\label{range}
0\leq \beta^2 \leq 8\pi
\end{equation}
the theory seems not to be well defined, the Hamiltonian lacking a lower bound.
Within this range, the theory describes two charge-conjugate particles,
which carry the same names as their classical counterparts, soliton and antisoliton,
and other particles corresponding to the breathers.
Within this interval, the theory turns out to coincide with the massive Thirring model,
in the sector with an even number of solitons plus antisolitons (``even sector''):
\begin{equation}\label{thirring}
\mathcal{L}[\psi]=
\bar\psi i\gamma_{\mu}\partial^{\mu}\psi -m_F\bar{\psi}\psi -\frac12\ g\ j_{\mu} j^{\mu}\,,
\quad \mbox{with} \quad j_{\mu}=\bar{\psi} \gamma_{\mu}\psi
\end{equation}
The equivalence of the two models is better stated by saying that they have the same correlation
functions in the even sector, provided the respective coupling constants are identified by
\begin{equation}\label{coupling}
\frac{\beta^2}{4\pi}=\frac{1}{1+g/\pi}
\end{equation}
Another useful coupling will be
\begin{equation}\label{gamma}
\gamma=\frac{\beta^2}{1-\frac{\beta^2}{8\pi}}
\end{equation}
Notice that there is no equivalence outside the even sector because the soliton does not correspond
to the fermion \cite{mand1975}: the transformation between the two is highly nonlocal.
In other words, there are states of sine-Gordon that do not exist in Thirring and vice versa.
The relation between the Thirring and sine-Gordon couplings reveals that the special point
$g=0$ or $\beta^2=4\pi$ describes a free massive Dirac theory. This free point
separates two distinct regimes
\begin{equation}\begin{array}{lc@{\hspace{7mm}}c@{\hspace{7mm}}c}
\mbox{repulsive} & -\frac{\pi}2<g<0 & 8\pi>\beta^2 >4\pi & \infty>\gamma>8\pi\\
\mbox{attractive} & 0<g<\infty & 4\pi>\beta^2 >0 & 8\pi>\gamma>0\\
\end{array}\label{repattr}
\end{equation}
The repulsive regime is so called because no bound state of the Thirring fermions or sine-Gordon bosons
is observed. Vice versa, in the attractive regime the quantum fields corresponding to the classical
breathers describe bound states between solitons and antisolitons.
The attractive regime includes small values of $\beta$, where the theory is close to a $\phi^4$ theory,
cf. (\ref{senogordone}), but with an unusual attractive sign
\begin{equation}\label{senogordoneappr}
\mathcal{L}[\phi]=
\frac12\ \partial_{\mu}\phi\ \partial^{\mu}\phi-\frac{\mu^2}{2} \phi^2+\frac{\mu^2}{4!} \beta^2\phi^4\ldots
\end{equation}
The mass of the breathers is given by the exact expression
\begin{equation}\label{breathermass}
M_n=2M\sin \frac{n \gamma}{16}\,,\qquad n=1,2,\ldots,<\frac{8\pi}{\gamma}\,,
\end{equation}
where $M$ is the mass of the soliton. In the repulsive regime, no integer is in the range, indicating
that breathers do not exist; this mass formula makes sense in the attractive regime only.
The interpretation of breathers as bound states comes also from the fact that the breather masses
are below the threshold $M_n<2 M$.
These breathers originate in the quantization of the classical breather solutions and, from
(\ref{senogordoneappr}),
they correspond to the perturbation of the Klein-Gordon particles.
Indeed, given that the soliton mass at leading order is
\begin{equation}\label{solitonmass}
M=\frac{8\mu}{\gamma}
\end{equation}
the smallest breather mass $M_1$, in the weak coupling limit $\beta^2\rightarrow 0$, is
\begin{equation}
M_1= 2\frac{8\mu}{\gamma} \frac{\gamma}{16}=\mu
\end{equation}
So, the lowest breather originates in the perturbation of the Klein-Gordon boson. Notice that the
breather is a bound state while the Klein-Gordon model has no bound states at all. This is true
even for the $n$th breather
\begin{equation}\label{breathern}
M_n=n\, \mu
\end{equation}
so the Klein-Gordon free multiparticle states become bound states in sine-Gordon.
Unlike the breather, the soliton does not emerge from the Klein-Gordon theory:
its mass (\ref{solitonmass}) diverges in this limit, so this particle is considered decoupled from
the theory.
The relations (\ref{coupling}, \ref{repattr}) indicate a strong/weak duality between
sine-Gordon and Thirring: strong interactions in one model correspond to weak interactions in the
other. Can we see a physical trace of this? Yes, for example in the weak sine-Gordon regime
$\beta^2\rightarrow 0$. Indeed, the $n$th breather appears to be a
bound state of $n$ Klein-Gordon particles (\ref{breathern}). It is a stable state that owes
its stability to the strong fermionic coupling $g \gg 0$ of Thirring. Moving to higher
values of $\beta^2$, the fermionic coupling decreases, therefore we expect fewer and fewer
stable breathers, consistently with the mass expression (\ref{breathermass}).
In the same weak regime $\beta^2\rightarrow 0$, the soliton is decoupled from the
theory as it has an almost infinite mass (\ref{solitonmass}). It is strongly coupled in the
repulsive regime where its mass is small.
The most important feature is that the quantum sine-Gordon model is still integrable.
This was first seen by showing that conservation laws do survive perturbative quantization.
Now this result is known beyond perturbation theory \cite{sasaki1987} and guarantees the
necessary conditions, discussed above, for the factorization of scattering (\ref{fattor}). Thus, all
the properties P1, P2, P3 hold.
Particle annihilation and creation are forbidden. Consequently, all the bound states in (\ref{breathermass})
are stable particles even when there are breather states above the creation threshold
$M_n>2\, M_1$. This happens for some $n$ and for sufficiently small $\gamma$. Notice that in the attractive
regime the lowest breather is always the lightest particle.
Moving toward the repulsive regime, one observes that the $n$th breather disappears into a
soliton-antisoliton state when $8\pi/\gamma$ is a positive integer
\begin{equation}
\lim_{\gamma\rightarrow (\frac{8\pi}{n})^{-}} M_n =2\, M
\end{equation}
The lowest breather disappears at the free fermion point $\gamma=8\pi$.
If one can show the existence of conserved charges as required by the factorization theorem,
the two-particle scattering amplitudes can be evaluated on the basis of their symmetries.
In other words, the Yang-Baxter equation (\ref{YB}) supplemented
with the usual analytic properties (poles from the mass spectrum), unitarity and crossing symmetry,
is (often) enough to find the scattering amplitudes. This avoids a much
lengthier calculation based on the evaluation of Feynman diagrams to all orders.
For the sine-Gordon model, this has been done in \cite{zam1979}.
The notation in (\ref{commut}) is now used to write down the amplitudes. For the solitonic part only,
there are three
$2\rightarrow2$ particle processes
\begin{eqnarray}
A_s+A_s&\rightarrow& A_s+A_s \nonumber \\
A_s+A_{\bar{s}}&\rightarrow& A_s+A_{\bar{s}} \label{scattssbar}\\
A_{\bar{s}}+A_{\bar{s}}&\rightarrow& A_{\bar{s}}+A_{\bar{s}} \nonumber
\end{eqnarray}
where $A_s$ ($A_{\bar{s}}$) indicates a soliton (antisoliton) momentum state.
Charge conjugation symmetry forces the first and the last processes to
have identical amplitudes. Using $\theta=\theta_1-\theta_2$, we have
\newcommand{\ket}[1]{|{#1}\rangle}
\begin{eqnarray}\nonumber
\ket{A_s(\theta_1)A_s(\theta_2)}_{\text{in}} &=& S(\theta)\, \ket{A_s(\theta_1)A_s(\theta_2)}_{\text{out}}\\
\ket{A_s(\theta_1)A_{\bar s}(\theta_2)}_{\text{in}} &=&
S_T(\theta)\, \ket{A_s(\theta_1)A_{\bar s}(\theta_2)}_{\text{out}}+
S_R(\theta)\, \ket{A_{\bar s}(\theta_1)A_s(\theta_2)}_{\text{out}} \\ \nonumber
\ket{A_{\bar s}(\theta_1)A_{\bar s}(\theta_2)}_{\text{in}} &=& S(\theta)\,
\ket{A_{\bar s}(\theta_1)A_{\bar s}(\theta_2)}_{\text{out}}
\end{eqnarray}
There are just three independent amplitudes to be determined, which we organize in a $4\times 4$
matrix to be used in the Yang-Baxter equation (\ref{YB})
\begin{equation}
S=\begin{pmatrix} S(\theta)\\ &S_T(\theta)&S_R(\theta)\\
&S_R(\theta)&S_T(\theta)\\&&& S(\theta)\end{pmatrix}
\end{equation}
Notice that, in the Yang-Baxter equation, the conservation of the set of momenta forbids amplitudes describing
particles with different masses from mixing with each other. In sine-Gordon, there are just two particles with identical mass,
the soliton and the antisoliton, with interactions listed in (\ref{scattssbar}).
This means that the scattering processes involving a breather do not mix with those in
(\ref{scattssbar}). As the breathers have different masses, the following processes
are of pure transmission, reflection being forbidden
\begin{eqnarray}\nonumber
A_s+B_n &\rightarrow& A_s+B_n\\
A_{\bar s}+B_{n} &\rightarrow& A_{\bar s}+B_{n}\\
B_{m}+B_{n} &\rightarrow& B_{m}+B_{n}\nonumber
\end{eqnarray}
Full knowledge of the scattering amplitudes is not needed here. The soliton part is given by
\begin{equation}\label{matriceS}
S=\begin{pmatrix} -i\sinh\Big(\frac{8\pi}{\gamma}(i\pi-\theta)\Big)\\
& -i\sinh\Big(\frac{8\pi}{\gamma}\theta\Big) & \sin \frac{8\pi^2}{\gamma}\\
& \sin \frac{8\pi^2}{\gamma} & -i\sinh\Big(\frac{8\pi}{\gamma}\theta\Big) \\
&&&-i\sinh\Big(\frac{8\pi}{\gamma}(i\pi-\theta)\Big)
\end{pmatrix} U(\theta)
\end{equation}
where $U(\theta)$ is a known factor.
The expression of the scattering matrix will be useful soon, in relation to the six-vertex model.
\section{The six-vertex model}
This is a two-dimensional classical statistical mechanics model in which interactions are associated
with a vertex: the four bonds surrounding a vertex fix the Boltzmann weight associated with it.
In the present model the possible vertices are those shown here:
\begin{center}\includegraphics[scale=0.5]{seivertici.pdf}\end{center}
A bond can therefore be in either one of two states, which will be indicated by ``0'' or ``1''
(0 associated with up and right, 1 associated with down and left).
Initially, this model was introduced as a two-dimensional idealization of an ice crystal and called the ice-type model.
Indeed, the vertex represents the oxygen atom and the four bonds connected to it represent two covalent bonds and
two hydrogen bonds. The arrows indicate to which oxygen the hydrogen atom is closer, thus differentiating the
covalent bonds from the hydrogen bonds.
The Boltzmann weights for a vertex are nonnegative values indicated by
$w_{1},w_{2},w_{3},w_{4},w_{5},w_{6}$. Hereafter I will put
$w_{1}=w_{2}=a$, $w_{3}=w_{4}=b$ and $w_{5}=w_{6}=c$, as in \cite{baxter}.
Their product over all the lattice vertices is summed
over all configurations to build up the partition function
\begin{equation}
\mathcal{Z}
= \sum_{{\rm conf}}\, w_{1}^{N_{1}}\, w_{2}^{N_{2}}\,w_{3}^{N_{3}}\, w_{4}^{N_{4}}\,w_{5}^{N_{5}}\, w_{6}^{N_{6}}
\label{partizione}
\end{equation}
where $N_i$ is the number of occurrences of the type $i$ vertex in the lattice.
Periodic boundary conditions will now be used in the vertical and horizontal directions.
The expression for the partition function takes a useful form if one introduces the transfer matrix
and the so called R matrix.
The transfer matrix {\bf T} is a $2^{N}\times 2^{N}$ matrix that describes how the system
``evolves'' from a row to the next one of the lattice. The R matrix somehow summarizes the possible
behaviours on a single vertex or lattice site. On a given row, the vertical bond at site $i$ is associated
with a local vector space $V_i=\mathbb{C}^2$. Also, $A$ represents an auxiliary space $A=\mathbb{C}^2$.
The R matrix is a $4\times 4$ matrix acting on $A\otimes V_i$ (or else on $A\otimes A$)
\begin{eqnarray}\label{rmatrix}
R &=&\left(\begin{array}{cccc} a&&&\\&b&c\\ &c&b\\&&&a\end{array}\right) \\
&& \hspace{5mm} {\scriptstyle 00\hspace{2.2mm} 01\hspace{2.2mm} 10\hspace{2.2mm} 11} \nonumber
\end{eqnarray}
where the lower line indicates how the entries are interpreted with respect to the two possible
bond configurations.
The transfer matrix acts on the physical vector space $\mathcal{V}$ and is a product of R
matrices
\begin{equation}\label{trasf}\ba{l}
\mathcal{V}=\displaystyle \mathop{V}_{\scriptscriptstyle 1}\otimes\mathop{V}_{\scriptscriptstyle 2}
\otimes\ldots\otimes \mathop{V}_{\scriptscriptstyle N}\\[4mm]
{\bf T}:\mathcal{V} \rightarrow \mathcal{V}\\[4mm]
{\bf T}=\displaystyle\mathop{\text{Tr}}_A\ R_{A\,1}R_{A\,2}\ldots R_{A\,N}
\end{array}\end{equation}
where the trace is taken on the auxiliary space $A$\footnote{Here, the
standard notation of lattice integrable systems is used, in which the lower
indices of the matrix do not indicate its entries but the spaces on which the matrix acts
(namely the auxiliary space and one of the lattice sites along the row, enumerated from 1 to $N$).}.
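In components (with a convention, assumed here, in which $R^{\,a' i'}_{\ a\, i}$ denotes the matrix element of $R$ between incoming states $a,i$ and outgoing states $a',i'$ of $A\otimes V_k$), the trace over $A$ simply chains the auxiliary indices cyclically:
\begin{equation}
{\bf T}^{\,i'_1\ldots i'_N}_{\ i_1\ldots i_N}
=\sum_{a_1,\ldots,a_N} R^{\,a_1 i'_1}_{\ a_2 i_1}\,R^{\,a_2 i'_2}_{\ a_3 i_2}\cdots R^{\,a_N i'_N}_{\ a_1 i_N}\,.
\end{equation}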
The partition function can be written as the trace of the product of $M$ transfer matrices
(if $M$ is the number of rows of the lattice)
\begin{equation}
\mathcal{Z} = {\rm Tr}\,{\rm {\bf T}}^{M}
\label{transfer}
\end{equation}
having now taken the trace on $\mathcal{V}$, namely on all the horizontal sites. It turns out that
if the Boltzmann weights have an appropriate form, the transfer matrix generates an integrable system.
The following parametrization does the job
\newcommand{\gamma_{\text{6v}}}{\gamma_{\text{6v}}}
\begin{equation}
\label{R_matrix_entries}
a=a(\theta)=\sinh \frac{\gamma_{\text{6v}}}{\pi}(\theta+i\pi),\quad b=b(\theta)=
\sinh\frac{\gamma_{\text{6v}}}{\pi} \theta ,\quad
c=c(\theta)=i\sin \gamma_{\text{6v}}
\end{equation}
and gives an R matrix function of the spectral parameter\footnote{The
spectral parameter is a complex number that is used to describe a sort of off-shell physics; usually it is fixed
to a specific value or interval to construct a physical model.}
$\theta$, $R(\theta)$, and also of the coupling $\gamma_{\text{6v}}$. The R~matrix satisfies a Yang-Baxter equation
(see later).
It turns out that this parametrization is very much the same as in (\ref{matriceS}), except for
the identification of the couplings, which requires some care and will be done later.
This means that the integrable sine-Gordon model and the six-vertex model have something in common.
Anticipating a later discussion, one can use the six-vertex model as a lattice regularization of sine-Gordon.
In other words, sine-Gordon appears as a certain continuum limit of the six-vertex model, provided
a mass scale is introduced.
The disadvantage of the parametrization (\ref{R_matrix_entries}) is that it introduces complex Boltzmann
weights, which looks
odd in statistical mechanics. However, this is not a serious problem: first because the
statements concerning integrability hold for arbitrary complex parameters, second because it is easy
to get a real transfer matrix, simply by using an imaginary value for $\theta$.
The Yang-Baxter equation satisfied by the R matrix (\ref{R_matrix_entries}) is
\begin{equation}\label{YBlattice}
R_{12}(\lambda-\mu)R_{13}(\lambda)R_{23}(\mu)=R_{23}(\mu)R_{13}(\lambda)R_{12}(\lambda-\mu)
\end{equation}
where $\lambda,\mu$ are arbitrary complex spectral parameters. Thus, property P3 above also holds for
the lattice case. From this equation, a
very general construction shows that the transfer matrix forms a commuting family
\begin{equation}
[{\bf T}(\theta),{\bf T}(\theta')]=0
\end{equation}
for arbitrary values of the spectral parameters. Now, any expansion of the transfer matrix
produces commuting objects. In particular,
it is possible to evaluate the logarithmic derivative of the transfer matrix
\begin{equation}\label{hamilt}
H=\mathcal{A}\left. \frac{d\log{\bf T}(\theta)}{d\theta}\right|_{\frac{i\pi}{2}}
\end{equation}
and all the higher derivatives. The operators obtained with this procedure are local
and mutually commuting, therefore we conclude that property P2 is satisfied for the lattice model.
The logarithmic derivative is manageable, at least for the six-vertex model, and leads to a
very interesting expression (the overall factor $\mathcal{A}$ is easy to evaluate but not very important)
\begin{eqnarray}\nonumber
H&=&-\sum_{i=1}^{N-1}\Big[
\sigma_i^1\sigma_{i+1}^1+\sigma_i^2\sigma_{i+1}^2+\Delta \big(1+\sigma_i^3\sigma_{i+1}^3)\Big]\\
\Delta&=&\cos\gamma_{\text{6v}} \label{xxz}
\end{eqnarray}
The $\sigma_i^{j}\,,\ j=1,2,3$, are Pauli matrices acting on site $i$; by definition,
matrices acting on different sites always commute.
This one-dimensional lattice quantum Hamiltonian is known as the XXZ model. The more general version,
with three different coefficients and in three spatial dimensions, goes back to
W. Heisenberg (1928) as a natural physical description of magnetism in solid state physics.
Indeed, the Heisenberg idea was to consider, on each lattice site, a quantum magnetic
needle of spin $\frac{1}{2}$ fully free to rotate.
The magnetic needle is assumed sensitive to the nearest neighbor needles
with the simplest possible coupling of magnetic dipoles.
At $\gamma_{\text{6v}}=0$ it is fully isotropic with rotational $su(2)$ symmetry.
As soon as $\gamma_{\text{6v}}\neq 0$ is introduced, the model acquires an anisotropy.
The XXX Hamiltonian is free of couplings apart from the overall sign.
Given the present sign choice, it is apparent that adjacent parallel spins
lower the energy. This explains the name ``ferromagnetic'' attributed to
the Hamiltonian in (\ref{xxz}), if $\Delta=1$. The Hamiltonian with opposite sign is known as ``antiferromagnetic''.
The presence of $\Delta\neq 1$ in the XXZ model spoils this distinction
because the ferromagnetic or antiferromagnetic behaviour depends on the
coupling, and the name cannot be attached to the Hamiltonian itself but to the phases it describes.
The phases of the two models are indicated in table~\ref{xxzcoupl}.
\begin{table}\begin{center}
\begin{tabular}{c|l@{\hspace*{12mm}}l}
&XXZ& six-vertex \\ \hline
$\Delta >1$ & ferromagnetic & ferroelectric, vertex: one of 1,2,3,4\\
$-1<\Delta<1$ & \multicolumn{2}{l}{critical case, multi-degenerate ground state}\\
$\Delta <-1$ & antiferromagnetic & antiferroelectric, vertices 5 and 6 alternate
\end{tabular}\end{center}
\caption{\label{xxzcoupl}The thermodynamic phases of the XXZ and six-vertex model are indicated.
The XXZ model being defined by the Hamiltonian, the phases are understood at zero temperature, while
the six-vertex phases are read from the transfer matrix and contain a temperature, hidden in the parameters $a,b,c$.
Moreover, one-dimensional models do not break symmetries: phase transitions can only occur at zero temperature.}
\end{table}
Finally, just as the XXZ model is embedded in the XYZ model, the six-vertex model is embedded in the more general eight-vertex model,
which is still integrable.
The XXZ model is presented in the review \cite{defk} and also in the book \cite{suth}.
\subsection{The Baxter T-Q relation\label{s:baxter}}
\noindent The one-dimensional XXZ model has the merit of having inaugurated the studies of quantum
integrable systems and of the methods known as Bethe ansatz, in the celebrated Bethe paper \cite{bethe}.
Indeed, his idea was to guess (ansatz) the appropriate eigenfunctions of the Hamiltonian (\ref{xxz})
from a trial form and then show that the guess is correct.
This approach is called the coordinate Bethe ansatz and produces a set of constraints
on the parameters of the wave function known as Bethe equations.
Starting from the Bethe equations, Baxter \cite{baxter} was able to show the existence of a T-Q relation
(\ref{TQbordo}) for the transfer matrix.
Afterwards, he could reverse the approach and, with a more direct construction, he derived the T-Q relation
from the Yang-Baxter equation, thereby obtaining the Bethe equations from the T-Q relation. The presentation
here will follow this second approach.
Following Baxter, the transfer matrix satisfies a functional equation~\cite{baxter}
for periodic boundary conditions on a row of $N$ sites. This means that there exists a matrix $\mbox{\boldmath $Q$}(u)$
such that
\begin{equation}\label{TQperiod}
\mbox{\boldmath $T$}(u)\mbox{\boldmath $Q$}(u)=f\left(u+\frac{\lambda}{2}\right)
\mbox{\boldmath $Q$}(u-\lambda)+f\left(u-\frac{\lambda}{2}\right)\mbox{\boldmath $Q$}(u+\lambda)
\end{equation}
where we have used
\begin{equation}
f(u)=\left(\frac{\sin u}{\sin \lambda}\right)^{N}
\end{equation}
The coupling and spectral parameters are related to the previous ones by
\begin{equation}
\lambda=\gamma_{\text{6v}}\,,\qquad u=-i\frac{\gamma_{\text{6v}}}{\pi}\theta
\end{equation}
The new operator $\mbox{\boldmath $Q$}(u)$ forms a family of matrices that commute with each other and with the
transfer matrix $[\mbox{\boldmath $Q$}(u),\mbox{\boldmath $Q$}(v)]=[\mbox{\boldmath $Q$}(u),\mbox{\boldmath $T$}(v)]=0$. This implies that the same functional equation (\ref{TQperiod})
holds true also for the eigenvalues $T(u)$ and $Q(u)$. Moreover, all these operators have the
same eigenvectors independent of $u$. The eigenvalues of $Q$ are given by
\begin{equation}
Q(u)=\prod_{j=1}^M \sin(u-u_j) \label{BaxQ}
\end{equation}
where $u_j$ are the Bethe roots and appear now as zeros of the eigenvalues of $\mbox{\boldmath $Q$}(u)$.
The Bethe ansatz equations result from imposing that the transfer matrix eigenvalues on the left-hand side are entire
functions. Indeed, when $Q(u)=0$, since $T(u)$ is entire, the right-hand side must vanish. This forces the
constraints (Bethe equations)
\begin{equation}\label{betheTQ}
\left(\frac{\sin\left(u+\frac{\lambda}{2}\right)}{\sin\left(u-\frac{\lambda}{2}\right)}\right)^{N}=-
\frac{Q(u+\lambda)}{Q(u-\lambda)}
\end{equation}
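Written out with (\ref{BaxQ}) and evaluated at a Bethe root $u=u_j$, the constraint takes the familiar product form (the $k=j$ factor on the right cancels the overall minus sign):
\begin{equation}
\left(\frac{\sin\left(u_j+\frac{\lambda}{2}\right)}{\sin\left(u_j-\frac{\lambda}{2}\right)}\right)^{N}=
\prod_{k\neq j}^{M}\frac{\sin\left(u_j-u_k+\lambda\right)}{\sin\left(u_j-u_k-\lambda\right)}\,,\qquad j=1,\ldots,M.
\end{equation}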
The T-Q relation (\ref{TQperiod}) shows that the columns of $\mbox{\boldmath $Q$}(u)$ are eigenvectors of $\mbox{\boldmath $T$}$, so this
equation actually provides information on both eigenvalues and eigenvectors.
The Bethe equations have a finite number of solutions
in the periodicity strip
\begin{equation}
\text{Re} (u)\in [0,\pi]
\end{equation}
This is easily seen because they can be transformed into algebraic equations in the new variables
$$z_j=\exp(i\ u_j)$$
Notice that the lattice model also has a finite number of states: indeed,
$$
\dim \mathcal{V}=2^N
$$
This is the size of the transfer matrix and is also the number of expected
solutions of the Bethe equations. Indeed, it has been possible to show
that the Bethe equations have the correct number of solutions and that the corresponding Bethe eigenvectors form a basis for $\mathcal{V}$,
see \cite{yy1966_2}, \cite{kirillov1987} and references therein. This is referred to as the
``completeness'' of the Bethe ansatz.
One feature observed is that Bethe roots satisfy a Pauli-like principle,
in the sense that they are all distinct: there is no need to consider
solutions with $u_j=u_k$ for different $j\neq k$.
\section{Conformal field theories\label{sect:CFT}}
Conformal field theories (CFT) are scale-invariant quantum field theories.
They were introduced for two main reasons, one being the study of continuous phase transitions
and the other the interest in describing quantum strings on their world sheet,
a two-dimensional surface in a ten-dimensional space. This second point strongly motivated
the treatment of the two-dimensional case, initiated in the fundamental work
\cite{bpz}. Curiously, the subsequent development of the two-dimensional case was instead
much more oriented toward statistical mechanics. Two-dimensional conformal field theories are very close to
the subject of
quantum integrability because they also are integrable theories and, often, they appear
in certain limits of lattice or continuum integrable theories.
These topics and some connections between conformal field theory and integrability
will be discussed later, in relation to several of the investigations that I have
carried on: nonlinear Destri-de Vega equations, thermodynamic Bethe ansatz equations and so on.
Four-dimensional conformal field theories are studied in the Maldacena gauge/string duality
framework. In particular, the $\mathcal{N}=4$ superconformal gauge theory
appears in relation to some integrable theories, after the work \cite{MZ}, in which
the XXX model appeared. Just as the paper \cite{bpz} created the bridge
between integrable models and two-dimensional field theories, \cite{MZ} inaugurated
the interchange between (some) aspects of integrable models and four-dimensional
superconformal field theories.
In the two-dimensional case, the generators of the conformal symmetry are the modes
of the Virasoro algebra (\( {\cal V} \))
\begin{equation}
\label{virasoro}
[L_{m},L_{n}]=(m-n)L_{m+n}+\frac{c}{12}(m^{3}-m)\delta _{m+n,0}
\end{equation}
where the constant \( c \) that appears in the central extension term is called
\emph{conformal anomaly} or often \emph{central charge}.
For a physical theory on the Minkowski plane or on a cylinder geometry in which the space is
periodic and the time flows in the infinite direction, the algebra of the full conformal group
is the tensor product
of two copies of (\ref{virasoro}) $ {\mathcal{V}}\otimes {\bar{\mathcal{V}}}$.
For other geometries, it can be different. For example, on a strip (finite space,
infinite time) there is a single copy.
According to this, the Hamiltonian is \( L_{0}+\bar{L}_{0} \) for the plane
or cylinder and is $L_0$ in a strip.
We need this distinction because, later on, we will use both types of space-time.
Somehow, the presence of two copies in the plane and in the cylinder with periodic space is justified
because there are two types of movers: left and right movers, namely massless particles
moving at the speed of light toward left or right\footnote{Conformal invariance implies that
particles are massless and move at the speed of light.}.
In a strip, corresponding to a finite space with spatial borders, left and right movers cannot propagate independently,
thus just one copy remains.
All the states of a CFT must lie in some irreducible
representation of the algebra (\ref{virasoro}).
Physical representations must have the Hamiltonian spectrum
bounded from below, i.e. they must contain a so called \emph{highest weight
state}
(HWS) \( |\Delta \rangle \) for which \begin{equation}
\label{hws}
L_{0}|\Delta \rangle =\Delta |\Delta \rangle \qquad ,\qquad L_{n}|\Delta
\rangle =0\qquad ,\qquad n>0
\end{equation}
These representations are known as \emph{highest weight representations} (HWR).
The irreducible representations of \( {\cal V} \) are labelled by two numbers,
namely the central charge \( c \) and the conformal dimension \( \Delta \).
We shall denote the HWRs of
\( {\mathcal{V}} \) by \( {\mathcal{V}}_{c}(\Delta ) \). For a given theory,
the Hilbert space \( {\mathcal{H}} \) of the theory is built up of all possible representations \(
{\mathcal{V}}_{c}(\Delta ) \) with the same \( c \), each one with a certain multiplicity:
\begin{equation}
\label{hilbert}
{\mathcal{H}}=\bigoplus _{\Delta ,\bar{\Delta }}{\mathcal{N}}_{\Delta
,\bar{\Delta }}{\mathcal{V}}_{c}(\Delta )\otimes
{\bar{\mathcal{V}}}_{c}(\bar{\Delta })
\end{equation}
If a certain \( {\mathcal{V}}_{c}(\Delta )\otimes
{\bar{\mathcal{V}}}_{c}(\bar{\Delta }) \)
does not appear, then simply \( {\mathcal{N}}_{\Delta ,\bar{\Delta }}=0 \).
The numbers \( {\mathcal{N}}_{\Delta ,\bar{\Delta }} \) count the multiplicity
of each representation in \( {\mathcal{H}} \), therefore they must always
be non negative integers. They are not fixed by conformal invariance alone
as they depend on the geometry and on possible boundary conditions.
Every HWS (\ref{hws}) in the theory
can be put in one-to-one correspondence with a field through the formula \(
|\Delta,\bar{\Delta}\rangle =A_{\Delta,\bar{\Delta}}(0,0)|0,0\rangle \),
where the vacuum \( |0,0\rangle \) is projective (i.e. \(
L_{0},\bar{L}_{0},L_{\pm 1},\bar{L}_{\pm 1} \))
invariant. In particular the HWS (\ref{hws}) correspond to some fields \( \phi
_{\Delta ,\bar{\Delta }}(z,\bar{z}) \)
that transform under the conformal group as \begin{equation}
\label{primary}
\phi _{\Delta ,\bar{\Delta }}(z,\bar{z})=\left( \frac{\partial z'}{\partial
z}\right) ^{\Delta }\left( \frac{\partial \bar{z}'}{\partial \bar{z}}\right)
^{\bar{\Delta }}\phi _{\Delta,\bar{\Delta } }(z',\bar{z}')
\end{equation}
They are called \emph{primary fields}. Non primary fields (secondaries) do have
much more involved transformations. A basis for the states
can be obtained by applying strings of \( L_{-n},\ n>0, \) to \( |\Delta \rangle
\).
The commutation relations imply \begin{equation}
\label{secondaries}
L_{0}L_{-n}^{k}|\Delta \rangle =(\Delta +nk)L_{-n}^{k}|\Delta \rangle \qquad (n>0)
\end{equation}
Therefore \( L_{0} \) eigenvalues organize the space \(
{\mathcal{V}}_{c}(\Delta ) \)
(called a \emph{module}) so that the states lie on a {}``stair{}'' whose
\( N \)-th step (called the \( N \)-th \emph{level}) has \( L_{0}=\Delta +N
\)\begin{equation}
\label{levels}
\begin{array}{ccc}
\mbox {states} & \mbox {level} & L_{0}\\
...... & ... & ...\\
L_{-3}|\Delta \rangle \, ,\, L_{-2}L_{-1}|\Delta \rangle \, ,\,
L_{-1}^{3}|\Delta \rangle & 3 & \Delta +3\\
L_{-2}|\Delta \rangle \, ,\, L_{-1}^{2}|\Delta \rangle & 2 & \Delta +2\\
L_{-1}|\Delta \rangle & 1 & \Delta +1\\
|\Delta \rangle & 0 & \Delta
\end{array}
\end{equation}
All the fields corresponding to the HWR \( {\mathcal{V}}_{c}(\Delta ) \) are
said to be in the \emph{conformal family} \( [\phi _{\Delta }] \) generated
by the primary field \( \phi _{\Delta } \).
For the following, the most important conformal models will be those known
as minimal models, characterized by the central charge
\begin{equation}\label{minimal}
c=1-\frac{6}{p(p+1)}
\end{equation}
with $p=3,4,\ldots$
These models are all unitary and all have a finite number of primary fields.
The first one, $c=\frac12$, is the universality class of the Ising model. The next one,
$c=\frac7{10}$, is the tricritical Ising model, namely an Ising model with vacancies.
Then we find the universality class of the three-state Potts model, and so on.
The limit $p\rightarrow \infty$ is also a CFT; it is one point of the class of the free massless boson
with $c=1$.
Indeed, $c=1$ is a wide class of unitary conformal field theories, all derived from a free massless
boson compactified in the following way
\begin{equation}
\phi \equiv \phi +2\pi m R\,,\quad m\in\mathbb{Z}
\end{equation}
and radially quantized. A full description of this theory would be very long. A sketch is presented
in \cite{tesidott}. The theory turns out to be characterized by
certain vertex operators
\begin{equation}
\label{vertex_operators}
V_{(n,\, m)}(z,\overline{z})=:\exp i (p_{+} \phi (z)+p_{-}\bar{\phi }(\overline{z})): ,
\qquad p_{\pm}=\frac{n}{R}\pm \frac12 m R.
\end{equation}
with conformal weight $\Delta=p_{+}^2/2,\,\bar{\Delta}=p_{-}^2/2$.
Each pair $(n,m)$ describes a different sector of the theory; its states are obtained by the
action
of the modes of the fields, $\partial _z \phi$ and $\bar{\partial_z}\bar{\phi}$, in a standard
Fock space construction.
It is important to stress that a particular \( c=1 \) CFT is specified by giving
the spectrum of the quantum numbers \( (n,m) \) (and the compactification radius
\( R \)) such that the corresponding set of vertex operators (and their descendants)
forms a \emph{closed and local} operator algebra. The locality requirement is
equivalent to the fact that the operator product expansion of any two such
local operators is single-valued in the complex plane of \( z \).
By this requirement of locality, it was proved in \cite{km93} that there are
only two maximal local subalgebras of vertex operators: \( {\cal A}_{b} \), purely bosonic,
generated by the vertex operators
\begin{equation}
V_{(n,\, m)}:\, n,\, m\in \mathbb {Z}
\end{equation}
and \( {\cal A}_{f} \), fermionic, generated by
\begin{equation}
V_{(n,\, m)} : \, n\in \mathbb{Z},\, m\in 2\mathbb{Z} \quad \mbox{ or }\quad n\in \mathbb{Z}+\frac{1}{2},\,
m\in 2\mathbb{Z}+1 .
\end{equation}
Other sets of vertex operators can be built, but the product of two of them gives a nonlocal expression
(namely the operator product expansion is multi-valued).
\begin{figure}[h]
\hspace{50mm}\includegraphics[scale=0.5]{foursectors.pdf}
\caption{The family of vertex operators $ V_{(n,\, m)}$ with $ n\in \mathbb Z/2$
and $ m\in \mathbb Z$. Sectors \textbf{I} and \textbf{II} are ${\cal A}_{b}$, namely the
ultraviolet limit of the sine-Gordon model. Sectors \textbf{I} and \textbf{III} are ${\cal A}_{f}$,
that defines the UV behaviour of massive Thirring. Sector \textbf{I} is the one where sine-Gordon
and massive Thirring are equivalent.
\textbf{IV} is a sector of non mutually local vertex operators.\label{4sectors.eps}}
\end{figure}
The sine-Gordon model appears as an integrable perturbation of the $c=1$ free boson by an operator with
scaling
dimensions $(\Delta,\bar{\Delta})=(\beta^2/8\pi,\beta^2/8\pi)$. The corresponding unperturbed
algebra is ${\cal A}_{b}$, while the algebra ${\cal A}_{f}$ can be perturbed to give rise to the
massive Thirring model. The compactification radius and the sine-Gordon coupling are related by
$$R^2=\frac{4\pi}{\beta^2}\,.$$
\section{Perturbed conformal field theory}
We may define a quantum field theory as a deformation of a conformal field theory by some
operators \cite{Zam-adv}, i.e. perturb the action of a CFT as in the following expression
\begin{equation}
\label{pcft-action}
S[\Phi_i]=S_{CFT}+\sum _{i=1}^{n}\lambda _{i}\int d^{2}x\ \Phi _{i}(x)
\end{equation}
Of course, the class of two-dimensional field theories is larger than the one described by
this action. Nevertheless, this class of \emph{perturbed conformal field theories} has a
special role because it describes the vicinity of critical points in the theory of critical
phenomena.
The main goal is to be able to compute off-critical correlation functions by
\begin{equation}
\langle X\rangle =\int {\cal D}\varphi \, Xe^{-S[\Phi ]}=\int {\cal
D}\varphi \, X \exp \big[-S_{CFT}-\lambda \int d^{2}x\ \Phi (x)\big]
\end{equation}
Indeed, expanding in powers of \( \lambda \) one can express \( \langle X\rangle \)
as a series of conformal correlators (in principle computable by conformal field theory
techniques).
The perturbed theory is especially important if it maintains the integrability of
the conformal point. If so, the perturbed theory has a factorized scattering.
The possible perturbing fields are classified with respect to their renormalization group action
as
\begin{itemize}
\item \textbf{relevant} if \( \Delta <1 \). If such a field perturbs a conformal
action,
it creates exactly the situation described above, i.e. the theory starts to
flow along a renormalization group trajectory going to some IR destination.
\item \textbf{irrelevant} if \( \Delta >1 \). Such fields correspond to
perturbations which describe the neighborhood of nontrivial IR fixed points.
It is more appropriate to refer to them as \emph{attraction} fields because the perturbation
is not able to move the theory off the critical point; the flow always returns to it.
We shall not deal with this case in the following, but the interested reader
may consult, for example, \cite{Fev-Quattr-Rav} to see some possible
applications of this situation.
\item \textbf{marginal} if \( \Delta =1 \). Their classification requires investigation of
derivatives of the beta function.
\end{itemize}
\section{Conclusion}
A typical phenomenon of integrable systems emerges, namely the fact that
different models can transform into one another under certain conditions and
can even be shown to be equivalent.
Firstly, the equivalence of the bosonic sine-Gordon model with the fermionic massive Thirring model
has been presented. Then, the correspondence between the six-vertex model and the XXZ chain
has been shown. Moreover, these lattice models share the same R or S matrix as the sine-Gordon
model.
At this point, it is natural to expect that a proper continuum limit on the
six-vertex model could produce sine-Gordon; indeed, this is the case and will be discussed later
in the context of the nonlinear integral equation.
This ``game'' of models related to one another can be pushed further. For example,
if $\Delta=0$, the XXZ model reduces to the XX model, which can also be written as a lattice
free fermion (one fermionic species).
If $\Delta\rightarrow \pm \infty$, one obtains the one-dimensional Ising model.
More importantly for what follows, the XXX model emerges in the strong coupling limit of the
Hubbard model, which is a lattice quantum model with two fermionic species (spin up and
spin down).
The deep reason for these strong connections between different models is that, for a given
size of the R matrix, there are very few solutions of the Yang-Baxter equation. In other words,
there are very few classes of integrability, classified by the solutions of the Yang-Baxter equation.
\chapter{A nonlinear equation for the Bethe ansatz\label{c:nlie}}
\section{Light-cone lattice\label{s:lightcone}}
In this section I present a lattice regularization of the sine-Gordon model
which is particularly suitable for studying finite size effects. It is well known to
lattice theorists that the same continuum theory can often be obtained as
the limit of many different lattice theories.
This means that there are many possible regularizations of the same theory,
and it is customary to choose the lattice action possessing
the properties that best fit calculational needs. In the present context
the main goal is to have a lattice discretization of the sine-Gordon model that preserves
the property of integrability. The following light-cone lattice construction
is one way (not the unique one!) to achieve this goal.
In two dimensions, the most obvious approach would be to use a rectangular lattice with
axes corresponding to space and time directions. Here, a different approach \cite{ddv87}
is adopted where space-time is discretized along light-cone directions.
Light-cone coordinates in Euclidean or Minkowski space-time are
\begin{equation}
x_{+}=x+t\,,\quad x_{-}=x-t
\end{equation}
When discretized, they define a light-cone lattice of ``events'' as in figure \ref{lcl1}.
\begin{figure}[h]
\hspace*{50mm}\includegraphics[scale=0.7]{lightcone1.pdf}
\caption{Light-cone lattice with periodic boundary conditions in space direction;
states are associated to edges, enumerated from 1 to $2N$.
\label{lcl1}}
\end{figure}
Then, on an infinite lattice, any rational value not greater than \( 1 \) is permitted as particle speed.
The shortest displacement of the particle (one lattice
spacing) is realized at light speed \( \pm 1 \) and corresponds, from the statistical
point of view, to nearest neighbor interactions. Particles are therefore massless
and can be right-movers (R) or left-movers (L) only.
Smaller speeds can be obtained with displacements longer than the fundamental cell and
correspond to interactions beyond nearest neighbors. They will not be used here.
With only nearest neighbor interactions, the evolution from one row to the next one,
as in figure \ref{lcl1}, is governed by a transfer matrix.
Here there are four of them. Two act on the light-cone, $U_{R}$ and $U_{L}$,
the first one shifting the state of the system one step forward-right, the other one
step forward-left. The remaining $U$ and $V^2$ act in time and space directions respectively
\begin{equation}
U=e^{-i\,\alpha\,H}=U_RU_L\,,\qquad V^2=e^{-i\,\alpha\,P}=U_RU_L^{\dagger}
\end{equation}
so they actually correspond to the Hamiltonian (forward shift) and the total momentum (right shift).
Their action is pictorially suggested in figure~\ref{lcl2}. Moreover, the following relations hold
\begin{equation}\label{EP}
U_R=e^{-i\,\frac{\alpha}{2}(H+P)}\,,\qquad U_L=e^{-i\,\frac{\alpha}{2}(H-P)}
\end{equation}
\begin{figure}[h]
\hspace*{18mm}\includegraphics[scale=0.7]{lightcone2.pdf}
\caption{The action of the operators $V$ and $U_R$ is illustrated. The operator $U_L$
acts as $U_R$ but with leftward movement.\label{lcl2}}
\end{figure}
Much more details are given in \cite{tesidott} and on the original papers cited there.
I now construct the alternating transfer matrix
\begin{equation}\label{atm}
\mathbf{T}(\theta,\Theta,\gamma_{\text{6v}})=\mathop{\mbox{Tr}}_A
L_{1}(\theta-\Theta)L_{2}(\theta+\Theta)\ldots L_{2N-1}(\theta-\Theta)L_{2N}(\theta+\Theta)
\end{equation}
where the operators $L_i$ are associated to a vertex. To remove ambiguity, one has to associate
odd numbers to the lower vertices and even numbers to the upper vertices, compare with Figure~\ref{lcl1}.
$\Theta$ is, for the moment, a free parameter. Its presence corresponds to making the lattice
model inhomogeneous, so that the interaction on each site is tuned by the
inhomogeneity. This does not spoil integrability.
The standard construction of (\ref{trasf}) suggests taking
\begin{equation}
L_i(\theta)=R_{Ai}(\theta)
\end{equation}
using the R matrix of the six-vertex model (\ref{R_matrix_entries}).
The forward-right and the forward-left operators are obtained by
\begin{equation}
U_R=\mathbf{T}(\Theta,\Theta,\gamma_{\text{6v}})\,,\qquad
U_L=\mathbf{T}(-\Theta,\Theta,\gamma_{\text{6v}})
\end{equation}
One can interpret these expressions by noticing that to switch from a right mover to a left mover
one has to change the sign of rapidity.
The transfer matrices depend on $\Theta$ and on the coupling
$\gamma_{\text{6v}}$, whose values are not yet specified.
The methods of Bethe ansatz can be used to diagonalize these operators, as in
subsection~\ref{s:baxter}.
The specific case is treated in \cite{devega89} and gives the following results.
The eigenvalues of the transfer matrix are given by
\begin{equation}\label{eigenT}
\begin{array}{c}
\displaystyle T (\theta ,\Theta )=\left[ a(\theta -\Theta )\, a(\theta +\Theta )\right] ^{N}\prod ^{M}_{j=1}\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\frac{\pi }{2}+\vartheta _{j}+\theta \right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\frac{\pi }{2}-\vartheta _{j}-\theta \right] }+\\[7mm]
+\displaystyle \left[ b(\theta -\Theta )\, b(\theta +\Theta )\right] ^{N}\prod ^{M}_{j=1}\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\frac{3\pi }{2}-\vartheta _{j}-\theta \right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ -i\frac{\pi }{2}+\vartheta _{j}+\theta \right] }
\end{array}
\end{equation}
provided the values of $ \vartheta _{j} $ satisfy the set of coupled nonlinear
equations called Bethe equations
\begin{equation}
\label{bethe}
\left( \displaystyle\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}+\Theta +\displaystyle\frac{i\pi }{2}\right] \sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\Theta +\displaystyle\frac{i\pi }{2}\right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}+\Theta -\displaystyle\frac{i\pi }{2}\right] \sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\Theta -\displaystyle\frac{i\pi }{2}\right] }\right) ^{N}=-\prod _{k=1}^{M}\displaystyle\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\vartheta _{k}+i\pi \right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\vartheta _{k}-i\pi \right] }
\end{equation}
where $M$ can be any integer from $0$ to $N$ included. The complex numbers $\{ \vartheta _{j} \}$ are called \emph{Bethe roots}. These equations are a modification of (\ref{betheTQ}).
Because of the periodicity
\begin{equation}
\label{periodicita'}
\vartheta _{j}\, \rightarrow \, \vartheta _{j}+\displaystyle\frac{\pi ^{2}}{\gamma_{\text{6v}} }i
\end{equation}
further analyses can be restricted to a strip around the real axis
\begin{equation}
\label{strisciafisica}
\vartheta _{j}\in \mathbb {R}\times i\, \left] -\displaystyle\frac{\pi ^{2}}{2\gamma_{\text{6v}} },\displaystyle\frac{\pi ^{2}}{2\gamma_{\text{6v}} }\right] .
\end{equation}
Details on Bethe equations and Bethe roots were given in subsection~\ref{s:baxter}.
Another form of the Bethe equations can be obtained by taking the logarithm of the previous one.
It is important to fix and consistently use a determination of the logarithm: here the fundamental one
will be used. The equations become
\begin{eqnarray}
2\pi I_j&=&
N\log \displaystyle\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}+\Theta +\frac{i\pi }{2}\right]}{\sinh \frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}+\Theta -\displaystyle\frac{i\pi }{2}\right]}+N\log\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\Theta +\frac{i\pi }{2}\right] }{ \sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\Theta -\frac{i\pi }{2}\right] }
\nonumber \\
&-&\sum _{k=1}^{M}\log\displaystyle\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\vartheta _{k}+i\pi \right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ \vartheta _{j}-\vartheta _{k}-i\pi \right] }\label{bethelog}
\end{eqnarray}
where their nature as quantization conditions is now explicit: the $I_j$ are quantum numbers,
taken half-integer or integer according to the parity of the number of Bethe roots
\begin{equation}
I_j\in \mathbb{Z}+\frac{1+M}{2}
\end{equation}
The energy \( E \) and momentum \( P \) of a state can be read out from (\ref{EP}) and (\ref{eigenT})
\begin{equation}
\label{autovalori}
\displaystyle e^{i\displaystyle\frac{\alpha}{2}(E\pm P)}=\prod ^{M}_{j=1}\frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\displaystyle\frac{\pi }{2}-\Theta \pm \vartheta _{j}\right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\displaystyle\frac{\pi }{2}+\Theta \mp \vartheta _{j}\right] }
\end{equation}
or by the same equation in logarithmic form
\begin{equation}\label{autovalorilog}
E\pm P=-i\frac{2}{\alpha}\sum ^{M}_{j=1}\log \frac{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\displaystyle\frac{\pi }{2}-\Theta \pm \vartheta _{j}\right] }{\sinh \displaystyle\frac{\gamma_{\text{6v}} }{\pi }\left[ i\displaystyle\frac{\pi }{2}+\Theta \mp \vartheta _{j}\right] }
\end{equation}
The logarithmic forms reveal an interesting aspect of the Bethe ansatz namely that energy,
momentum, spin (see later) and all the higher integrals of motion have
an additive structure in which Bethe roots resemble rapidities of independent particles
\begin{equation}\label{quasipart}
I_l=\sum^{M}_{j=1} f_l(\vartheta_j)
\end{equation}
called ``quasiparticles''. Quasiparticles are usually distinct from physical particles. They are degrees of
freedom that do not appear in the Hamiltonian (\ref{hamilt}) but are ``created'' by the Bethe ansatz
and incorporate the effects of the interactions. Indeed, their dispersion relation is neither Galilean nor
relativistic\footnote{Galilean: $E=\frac{p^2}{2m}$, relativistic $E=\sqrt{p^2 c^2+m^2 c^4}$.}.
Quasiparticles can have complex ``rapidities''.
With this quasiparticle nature of Bethe roots in mind, the left hand side of the Bethe equations
(\ref{bethe}) is precisely the $j$th momentum term that one can extract from (\ref{autovalorilog}).
The right hand side represents the interaction of pairs of quasi-particles.
In this Bethe ansatz description, the third component of the spin is given by
\begin{equation}\label{spinz}
S_z=N-M
\end{equation}
where the reference state for the algebraic Bethe ansatz
is taken with all spins up or all spins down and it is described by $M=0$ in (\ref{bethe}):
it is the ferromagnetic state.
Then, every Bethe root corresponds to overturning a spin: it is a ``magnon'', because it
carries a unit of ``magnetization''. It is also called spin wave. It is the smallest
``excitation'' of the ferromagnetic state.
When $M=N$, all roots are real and one has the antiferromagnetic state, that is an ordered state
with zero total spin but with a nontrivial local spin organization. Here, with the present sign
conventions, it is also the ground state.
Its actual expression is complicated. Just to give an idea of what it can look like,
in the simplest case of a homogeneous ($\Theta=0$) XXX ($\gamma_{\text{6v}}=0$) model for a two-site chain
it is
\begin{equation}
|\uparrow\downarrow\rangle-|\downarrow\uparrow\rangle
\end{equation}
while for four sites it is
\begin{equation}
|\uparrow\uparrow\downarrow\downarrow\rangle+|\downarrow\uparrow\uparrow\downarrow\rangle+
|\downarrow\downarrow\uparrow\uparrow\rangle+|\uparrow\downarrow\downarrow\uparrow\rangle-2
(|\uparrow\downarrow\uparrow\downarrow\rangle+|\downarrow\uparrow\downarrow\uparrow\rangle)
\end{equation}
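The four-site combination can be checked by brute force; the sketch below (assuming the
antiferromagnetic convention $H=\sum_i \vec S_i\cdot\vec S_{i+1}$ with periodic boundary conditions,
which is not spelled out here) builds the $16\times16$ Hamiltonian in Python and prints the
ground-state components in the $S_z=0$ sector.
\begin{verbatim}
import numpy as np
from functools import reduce

# spin-1/2 operators for a single site
sx = np.array([[0., 1.], [1., 0.]]) / 2
sy = np.array([[0., -1j], [1j, 0.]]) / 2
sz = np.array([[1., 0.], [0., -1.]]) / 2
id2 = np.eye(2)

def site_op(op, i, n):
    """Embed a one-site operator at site i of an n-site chain."""
    factors = [id2] * n
    factors[i] = op
    return reduce(np.kron, factors)

n = 4
H = sum(site_op(s, i, n) @ site_op(s, (i + 1) % n, n)
        for i in range(n) for s in (sx, sy, sz))
H = H.real   # the sy*sy terms combine to real entries, so H is real symmetric

vals, vecs = np.linalg.eigh(H)
gs = vecs[:, 0]                     # ground state, energy vals[0]
print("ground-state energy:", round(vals[0], 6))
for idx, amp in enumerate(gs):
    if abs(amp) > 1e-8:
        conf = "".join("u" if (idx >> (n - 1 - k)) & 1 == 0 else "d" for k in range(n))
        print(conf, round(float(amp) / max(abs(gs)), 3))
\end{verbatim}
The printed amplitudes reproduce the $1:-2$ pattern of the state quoted above, up to an overall sign
and normalization.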
Excitations of the antiferromagnetic state are: (1) ``real Bethe holes'', namely real positions
corresponding to real roots in the antiferromagnetic state but excluded in the excitation,
(2) complex roots.
In what follows, only the antiferromagnetic ground state and its excitations will be considered,
because it has one important property: in the thermodynamic limit
\( N\, \rightarrow \, \infty \) it can be interpreted as a Dirac sea and its excitations,
holes and complex roots, behave as particles.
For later convenience, the coupling constant \( \gamma_{\text{6v}} \) is expressed
in terms of a different variable \( p \):
\begin{equation}\label{pp}
p=\frac{\pi }{\gamma_{\text{6v}} }-1,\qquad 0<p<\infty
\end{equation}
and I will work in the range of $0<\gamma_{\text{6v}}<\pi $. Notice that, in (\ref{xxz}) and in
table~\ref{xxzcoupl}, this is the choice that corresponds to the critical regime.
In this new parameter, the strip becomes
\begin{equation}
\label{striscia}
\vartheta _{j}\in \mathbb {R}\times i\, \left] -\displaystyle\frac{\pi (1+p)}{2},
\displaystyle\frac{\pi (1+p)}{2}\right] .
\end{equation}
This new parameter is related to the sine-Gordon ones by
\begin{equation}\ba{c}
p=\frac{\beta^2}{8\pi-\beta^2}\\
0<p<1 \quad \mbox{attractive regime};\qquad 1<p <\infty\quad \mbox{repulsive regime}
\end{array}\end{equation}
see also (\ref{repattr}). With this parameter, the relation between the six-vertex and sine-Gordon coupling is
\begin{equation}
\gamma=8\pi p=8\pi\left(\frac{\pi}{\gamma_{\text{6v}}}-1\right)
\end{equation}
Thus, the XXX chain is characterized by $\gamma_{\text{6v}}=0$, which means $\beta^2=8\pi$. This is the strongest point in the
repulsive regime of sine-Gordon. After that point, the quantum sine-Gordon model seems to lose its meaning.
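Since several equivalent parameters are in use ($\gamma_{\text{6v}}$, $p$, $\beta^2$, $R$), a small
conversion helper may prevent confusion; the minimal sketch below only encodes the relations quoted
above, namely (\ref{pp}), $p=\beta^2/(8\pi-\beta^2)$ and $R^2=4\pi/\beta^2$.
\begin{verbatim}
import math

def sg_parameters(gamma_6v):
    """Six-vertex anisotropy (0 < gamma_6v < pi) -> sine-Gordon parameters,
    using p = pi/gamma_6v - 1, beta^2 = 8*pi*p/(1+p), R^2 = 4*pi/beta^2."""
    p = math.pi / gamma_6v - 1
    beta2 = 8 * math.pi * p / (1 + p)   # equivalently 8*(pi - gamma_6v)
    R2 = 4 * math.pi / beta2
    regime = "attractive (p<1)" if p < 1 else "repulsive (p>=1)"
    return p, beta2, R2, regime

# gamma_6v = pi/2 is the free fermion point: p = 1, beta^2 = 4*pi
for g in (0.3, math.pi / 2, 2.5):
    p, beta2, R2, regime = sg_parameters(g)
    print(f"gamma_6v={g:.3f}: p={p:.3f} beta^2={beta2:.3f} R^2={R2:.3f} {regime}")
\end{verbatim}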
\section{A nonlinear integral equation from the Bethe ansatz}
In this section the fundamental nonlinear integral equation driving sine-Gordon scaling
functions will be presented. In the literature it is known under several names, following the different
formulations that have been given: Kl\"umper-Batchelor-Pearce equation or Destri-de Vega equation. It
has also been indicated by the nonspecific acronym NLIE (nonlinear integral equation).
It was first obtained in \cite{klumper} for the vacuum (antiferromagnetic ground state)
scaling functions of XXZ then, with different methods, in \cite{ddv95}
for XXZ and sine-Gordon. I will follow the Destri-de Vega approach applied to the sine-Gordon model.
The treatment of excited states was pioneered in \cite{fioravanti} and refined
in \cite{ddv97,noi PL1}, arriving at the final form in \cite{noiNP,tesidott}.
It is important to stress that this formalism is equivalent to the Bethe equations but
is especially suited to the antiferromagnetic regime. In general, it adapts to regimes
where the number of Bethe roots is of the order of the size of the system.
Indeed, the key idea is to sum up a macroscopically large number of Bethe roots
for the ground state or for the reference state
and replace them by a small number of holes to describe deviations near
the reference state, as holes in a Dirac sea.
\subsection{Counting function\label{section:count-funct}}
First, it is possible to write the Bethe equations (\ref{bethelog}) in terms of a
\emph{counting function} \( Z_{N}(\vartheta ). \)
I introduce the function
\[
\phi _{\nu }(\vartheta )=i\log \frac{\sinh \frac{1}{p+1}(i\pi \nu +\vartheta
)}{\sinh \frac{1}{p+1}(i\pi \nu -\vartheta )},\qquad \phi _{\nu }(-\vartheta
)=-\phi _{\nu }(\vartheta )\]
The oddness condition on the analyticity strip around the real axis fixes a precise
logarithmic determination. The counting function is defined by
\begin{equation}
\label{def.Zn}
\displaystyle Z_{N}(\vartheta )=N[\phi _{1/2}(\vartheta +\Theta )+\phi
_{1/2}(\vartheta -\Theta )]-\sum _{k=1}^{M}\phi _{1}(\vartheta -\vartheta
_{k})
\end{equation}
The logarithmic form of the Bethe equations (\ref{bethelog}) takes now
a simple form in terms of the counting function
\begin{equation}
\label{quantum}
\displaystyle Z_{N}(\vartheta _{j})=2\pi I_{j}\, ,\quad I_{j}\in \mathbb
{Z}+\frac{1+\delta }{2},\quad \delta =(M) \text{mod}\, 2=(N-S_z) \text{mod}\, 2\in
\left\{ 0,1\right\}
\end{equation}
Notice that the counting function is not independent of the Bethe roots.
Said differently, one cannot separate the construction of $Z_N$ from the
solution of (\ref{quantum}).
Now it is possible to give a formal definition of ``holes'':
they are solutions of (\ref{quantum}) that do not appear in (\ref{def.Zn}).
I will not make use of nonreal holes.
Bethe roots and holes are zeros of the equation
\begin{equation}
\label{zero-condition}
1+(-1)^{\delta }e^{iZ_{N}(\vartheta _{j})}=0
\end{equation}
once the counting function is known. Moreover, they are simple zeros, because Bethe roots and holes
exclude each other.
\subsection{Classification of Bethe roots and counting
equation\label{classificazione}}
From Bethe Ansatz it is known that a solution of (\ref{bethelog}), namely a Bethe state,
is uniquely characterized
by the set of quantum numbers \( \left\{ I_{j}\right\} _{j=1,...,M}\, ,\quad 0\leq
M\leq N \)
that appear in (\ref{quantum}). Notice that \( M\leq N \) means \( S_z\geq 0. \)
Bethe roots can either be real or appear in complex conjugate pairs. Complex conjugate pairs
grant the reality of the energy, momentum and transfer matrix.
In the specific case (\ref{bethe}), there is another possibility, due to the periodicity
(\ref{periodicita'}): if a complex solution has imaginary part \( \text{Im}\,
\vartheta =\frac{\pi }{2}(p+1) \), it can appear alone (its complex conjugate is not required).
A root with this value of the imaginary part is called \emph{self-conjugate root}.
From the point of view of the counting function, a more precise classification of roots is required:
\begin{itemize}
\item \emph{real roots}; they are real solutions of (\ref{quantum}); their number is \( M_{R} \);
\item \emph{holes}; real solutions of (\ref{quantum}) that do appear in the
ground state but not in the excitation under examination; in practice, they are real solutions
of (\ref{quantum}) that do not enter the counting function (\ref{def.Zn});
their number is \( N_{H} \);
\item \emph{special roots or holes} (special objects); they are real roots or
holes whose counting function derivative \( Z_{N}'(\vartheta _{j}) \) is negative,
contrasted with normal roots or holes, whose derivative is positive;
their number is \( N_{S} \); they must be counted both as {}``normal{}''
and as {}``special{}'' objects;
\item \emph{close pairs}; complex conjugate solutions with imaginary part in
the range \( 0<|\text{Im}\, \vartheta |<\pi \min (1,p) \); this range is dictated by the first
singularity (essential singularity) of the function $\phi_1(\theta)$, when moving off the real axis;
their number is \( M_{C} \);
\item \emph{wide roots in pairs}: complex conjugate solutions with imaginary
part belonging to the range \( \pi \min (1,p)<|\text{Im}\, \vartheta |<\pi \frac{p+1}{2} \)
namely after the first singularity of $\phi_1(\theta)$;
\item \emph{self-conjugate roots}: complex roots with imaginary part \( \text{Im}\,
\vartheta =\pi \frac{p+1}{2} \); they are wide roots but lack a complex conjugate partner,
so they appear singly; their number is \( M_{SC} \).
\end{itemize}
The total number of wide roots, appearing in pairs or singly, is \( M_{W} \).
The following notation will sometimes be used, for later convenience, to
indicate
the position of the solutions: \( h_{j} \) for holes, \( y_{j} \) for special
objects, \( c_{j} \) for close roots, \( w_{j} \) for wide roots.
Complex roots with imaginary part larger than that of the self-conjugate roots are not
required
because of the periodicity of the Bethe equations (\ref{strisciafisica}).
A graphical representation of
the various types of solutions is given in figure \ref{radici.eps}.
\begin{figure}[h]
{\par\centering
\resizebox*{0.9\textwidth}{0.37\textheight}{\includegraphics{roots_class.pdf}}
\par}
\caption{The different types of roots and holes and their position in the complex
plane. We denote $\mu=\pi \min (1,p) $.
The red line at \protect\( \protect\frac{\pi ^{2}}{2\gamma_{\text{6v}} }=
\protect\frac{\pi}{2}(p+1)\protect\)
is the self-conjugate one. Close roots are located within the blue lines, wide roots lie outside.
The holes are indicated by the circles on the real axis.\label{radici.eps}}
\end{figure}
The function \( Z_{N} \) (\ref{def.Zn}) has a number of branch point singularities produced by the
presence of the logarithms.
The largest horizontal strip containing the real axis and free of singularities is
bounded by the singularities of the various terms \( \phi _{\nu }(\vartheta ) \).
The strip is largest when no complex roots are present; otherwise
it is narrower, because the imaginary part of the complex roots in
\(\phi _{\nu }(\vartheta-\vartheta_k ) \) displaces the position of the singularities.
An important property follows from this classification: the \( Z_{N} \)
function is \emph{real analytic} in a strip that contains the real axis
\begin{equation}
\label{analiticita_{r}eale}
Z_{N}\left( \vartheta ^{*}\right) =\left( Z_{N}(\vartheta )\right) ^{*}
\end{equation}
By considering asymptotic values of \( \phi _{\nu }(\vartheta ) \) and \(
Z_{N}(\vartheta ) \)
for \( \vartheta \rightarrow \pm \infty \), it is possible to obtain an
equation
relating the numbers of all the various types of roots. I refer the reader
interested in the details of the derivation to \cite{ddv97,tesidott}.
Here I only mention the final result, in the form where the continuum limit
\( N\rightarrow \infty \), \( \alpha\rightarrow 0 \) with \( L=N\alpha \) finite, has
already been taken\begin{equation}
\label{counting-eq}
N_{H}-2N_{S}=2S_z+M_{C}+2\, \theta (p-1)\, M_{W}
\end{equation}
where \( \theta (x) \) is the step function: \( \theta (x)=0 \) for \( x<0 \)
and \( \theta (x)=1 \) for \( x>0 \).
Recall that \( S_z \) is a nonnegative integer and the right hand side only contains nonnegative
values. From this, it turns out that \( N_{H} \) is even (\( M_{C} \) is the number of
close roots, and is even).
This \emph{counting equation} expresses the fact that
the Bethe equations have a finite number of solutions only. There are also other constraints,
once $N$ is fixed:
\begin{equation}
S_z\leq N\,,\qquad M_C+M_W\leq N\,,\qquad N_H\leq N.
\end{equation}
Moreover, the various types of roots/holes do respect the mutual exclusion principle.
This means that, in order to accommodate complex roots, one has to ``create'' space by inserting holes,
or vice versa.
Notice also that in the attractive regime the wide roots do not participate in
the counting, and that at the free fermion point $p=1$, or $\beta^2=4\pi$,
they exist only as self-conjugate roots, namely there are no wide roots in pairs. This suggests that
the role of wide roots is different in the two regimes.
The most important fact is that the number of real roots does not appear in
this equation: they have been replaced by the number of holes. This, together with what will be
explained in the next paragraph, allows one to consider the real roots as a sea of particles, or Dirac sea,
and all other types of solutions, holes and complex roots, as excitations above it.
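As an aside, the counting equation lends itself to a tiny consistency check; the sketch below simply
encodes (\ref{counting-eq}) as written, with the example configurations chosen to match the states
discussed in the following sections.
\begin{verbatim}
def counting_eq_holds(N_H, N_S, S_z, M_C, M_W, p):
    """Continuum counting equation: N_H - 2 N_S = 2 S_z + M_C + 2 theta(p-1) M_W."""
    theta = 1 if p > 1 else 0
    return N_H - 2 * N_S == 2 * S_z + M_C + 2 * theta * M_W

# two holes, no complex roots: two-soliton state, S_z = 1
print(counting_eq_holds(N_H=2, N_S=0, S_z=1, M_C=0, M_W=0, p=1.5))   # True
# two holes and one close pair: soliton-antisoliton state, S_z = 0
print(counting_eq_holds(N_H=2, N_S=0, S_z=0, M_C=2, M_W=0, p=1.5))   # True
# two holes and one self-conjugate (wide) root, repulsive regime, S_z = 0
print(counting_eq_holds(N_H=2, N_S=0, S_z=0, M_C=0, M_W=1, p=1.5))   # True
\end{verbatim}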
\subsection{Nonlinear integral equation\label{section:NLIE_1}}
\noindent Let \( \hat{x} \) be a real solution of the Bethe equation. Thanks
to Cauchy's integral formula, a holomorphic function \( f(x) \)
admits the following representation
\begin{equation}
\label{cauchy}
\displaystyle f(\hat{x})=\oint _{\Gamma _{\hat{x}}}\frac{d\mu }{2\pi
i}\frac{f(\mu )}{\mu -\hat{x}}=\oint _{\Gamma _{\hat{x}}}\frac{d\mu }{2\pi
i}f(\mu )\frac{(-1)^{\delta }e^{iZ_{N}(\mu )}iZ_{N}'(\mu )}{1+(-1)^{\delta
}e^{iZ_{N}(\mu )}}
\end{equation}
where \( \Gamma _{\hat{x}} \) is an anti-clockwise simple path encircling \( \hat{x} \), namely
one of the real roots or holes, and avoiding all the others, see (\ref{zero-condition}).
In the region where \( \phi _{1}(\vartheta ) \) is holomorphic, (\ref{cauchy}) can be used to write
an expression for all the real roots and real holes
\begin{equation}
\label{integr_{g}amma}
\begin{array}{c}
\displaystyle \sum ^{M_{R}+N_{H}}_{k=1}\phi _{1}(\vartheta -x_{k})=\sum
^{M_{R}+N_{H}}_{k=1}\oint _{\Gamma _{x_{k}}}\frac{d\mu }{2\pi i}\phi
_{1}(\vartheta -\mu )\frac{(-1)^{\delta }e^{iZ_{N}(\mu )}iZ_{N}'(\mu
)}{1+(-1)^{\delta }e^{iZ_{N}(\mu )}}=\\
\displaystyle =\oint _{\Gamma }\frac{d\mu }{2\pi i}\phi _{1}(\vartheta -\mu
)\frac{(-1)^{\delta }e^{iZ_{N}(\mu )}iZ_{N}'(\mu )}{1+(-1)^{\delta
}e^{iZ_{N}(\mu )}}
\end{array}
\end{equation}
The sum over the contours has been merged into a unique curve \( \Gamma \)
encircling all the real solutions \( {x_{k}} \) (real roots plus holes), and avoiding the complex
Bethe solutions as in Figure~\ref{curvagamma.eps}; this is possible because they are finite
in number.
\begin{figure}[h]
\begin{center}\includegraphics[scale=0.7]{curvagamma.pdf}\end{center}
\caption{The integration curve encircles real solutions only. The crosses represent roots while
the circles represent holes. \label{curvagamma.eps}}
\end{figure}
Clearly the \( \Gamma \) curve must be contained in the strip \[
0<\eta _{+},\eta _{-}<\min \{\pi ,\pi p,|\text{Im}\, c_{k}|\, \forall \, k\}\]
Without loss of generality, assume that \( \eta _{+}=\eta _{-}=\eta \), and
deform \( \Gamma \) to the contour of the strip characterized by \( \eta \).
The regions at \( \pm \infty \) do not contribute because, as the lattice size is finite, those
regions are free of roots or holes. Moreover, in those regions, $ Z_N' $ vanishes,
therefore the integral can be evaluated just on the lines \( \mu =x\pm i\eta \),
where \( x \) is real. After algebraic manipulations involving integrations
by parts and convolutions (for details see \cite{tesidott}) one arrives
at a nonlinear integral equation for the counting function \( Z_{N}(\vartheta) \)
\begin{equation}
\begin{array}{rl}
\displaystyle Z_{N}(\vartheta ) &
\displaystyle =2\,N\arctan \frac{\sinh \vartheta }{\cosh
\Theta }+\sum ^{N_{H}}_{k=1}\chi (\vartheta -h_{k})-2\sum ^{N_{S}}_{k=1}\chi
(\vartheta -y_{k})-\\
& \displaystyle -\sum ^{M_{C}}_{k=1}\chi (\vartheta -c_{k})
-\sum ^{M_{W}}_{k=1}\chi (\vartheta -w_{k})_{II}+\\
& \displaystyle +2\, \text{Im} \int ^{\infty
}_{-\infty }d\rho\, G(\vartheta -\rho -i\eta )\log \left( 1+(-1)^{\delta
}e^{iZ_{N}(\rho +i\eta )}\right)
\end{array}\end{equation}
The kernel
\begin{equation}
\label{funzioneG}
G(\vartheta )=\frac{1}{2\pi }\int ^{+\infty }_{-\infty }dk\, e^{ik\vartheta
}\frac{\sinh \frac{\pi (p-1)k}{2}}{2\sinh \frac{\pi pk}{2}\, \cosh \frac{\pi
k}{2}}
\end{equation}
presents a singularity at the same place where \( \phi _{1}(\vartheta ) \)
does: $\vartheta=i\pi\min\{1,p\}$.
An analytic continuation outside the fundamental strip \( 0<|\text{Im}\, \vartheta |<\pi \min \{1,p\} \)
(I determination region) must take this fact into account.
The source terms are given by \[
\chi (\vartheta )=2\pi \int _{0}^{\vartheta }dx\ G(x)\]
and
\begin{equation}
\chi (\vartheta )_{II}=\left\{
\begin{array}{ll}
\chi (\vartheta )+\chi \left( \vartheta -i\,\pi\ \mathrm{sign}\left( \text{Im}\,\vartheta \right) \right) \, ,& p>1\, ,\\
\chi (\vartheta )-\chi \left( \vartheta -i\,p\,\pi\ \mathrm{sign}\left( \text{Im}\,\vartheta \right) \right) \, , & p<1\, .
\end{array}
\right.
\end{equation}
is a modification of the source term due to the analytic continuation beyond the
strip \( 0<|\text{Im}\, \vartheta |<\pi \min\{1,p\} \), i.e. into the so-called II
determination region.
This equation, together with the quantization condition (\ref{quantum}), is equivalent to the
original Bethe ansatz (\ref{bethe}).
The NLIE for $Z_N$ is not independent of the Bethe roots: it and the quantization conditions must be
solved at the same time.
Once the Bethe roots are known, one can use them in eqs.~(\ref{autovalori})
to compute the energy and momentum of a given state.
\subsection{Continuum limit\label{section:limite_cont.}}
Although this NLIE is already a valuable tool for the lattice model itself,
its importance becomes essential when the continuum limit is taken.
The continuum limit has the objective of transforming a lattice system into a continuum model.
As already mentioned, one has to take the lattice size \( N\rightarrow \infty \) (that would be
the normal thermodynamic limit of statistical mechanics) and the lattice spacing \( \alpha\rightarrow 0 \)
simultaneously, in such a way that the product \( L=N\,\alpha \) stays finite.
In this way one obtains a continuum theory in a finite-size space, namely a cylindrical geometry.
However, the lattice spacing does not appear in the Boltzmann weights or in the
transfer matrix (\ref{atm}, \ref{R_matrix_entries}), so we have no way to use it directly.
Moreover, one can convince oneself, by explicit calculations, that if the
limit is taken by keeping the \( \Theta \) parameter fixed, the lattice NLIE
blows up to infinity and loses its meaning. This reflects the fact that the number
of roots increases as \( N \) in the thermodynamic limit. However, as shown in ref.\cite{ddv87},
if one assumes a dependence of \( \Theta \) on \( N \) of the form
\begin{equation}
\label{teta_{n}}
\displaystyle \Theta \approx \log \frac{4N}{{\mathcal{M}}L}
\end{equation}
it is possible to get a finite limit out of the lattice NLIE. This limit is exactly
the one that was used in \cite{ddv87} to turn a lattice fermion field into the continuum Thirring
fermion field.
Notice that sending \( \Theta \rightarrow \infty \)
in this way naturally introduces a renormalized physical mass \( {\mathcal{M}} \). This is the
deep reason for the use of the light-cone lattice and of the inhomogeneity in
(\ref{atm}). In other words, without the inhomogeneity, the continuum system would be a
critical one, massless, with central charge $c=1$ (\cite{klumper}).
The \emph{continuum counting function} is defined by: \begin{equation}
\label{def_{Z}}
\displaystyle Z(\vartheta )=\lim _{N\, \rightarrow \, \infty }Z_{N}(\vartheta )
\end{equation}
and appears in a continuum NLIE
\begin{equation}
\label{nlie-cont}
\displaystyle Z(\vartheta )=l\sinh \vartheta +g(\vartheta |\vartheta
_{j})+2 \text{Im} \int ^{\infty }_{-\infty }dx\,G(\vartheta -x-i\eta )\log
\left( 1+(-1)^{\delta }e^{iZ(x+i\eta )}\right)
\end{equation}
where \( l={\cal M}L \). The first term on the right hand side is a momentum term.
The second one, \( g(\vartheta |\vartheta _{j}) \), is a source term, in the sense that it adapts
to the different combinations of roots and holes
\begin{equation}\label{source}
\displaystyle g(\vartheta |\vartheta _{j})=\sum ^{N_{H}}_{k=1}\chi (\vartheta
-h_{k})-2\sum ^{N_{S}}_{k=1}\chi (\vartheta -y_{k})-\sum ^{M_{C}}_{k=1}\chi
(\vartheta -c_{k})-\sum ^{M_{W}}_{k=1}\chi (\vartheta -w_{k})_{II}
\end{equation}
The positions of the sources \( \{\vartheta _{j}\}\equiv \{h_{j}\, ,\, y_{j}\, ,\, c_{j}\, ,\, w_{j}\} \)
are fixed by the Bethe quantization conditions
\begin{equation}\label{quantum2}
Z(\vartheta _{j})=2\pi I_{j}\quad ,\quad I_{j}\in \mathbb Z+\frac{1+\delta }{2}
\end{equation}
The parameter \( \delta \) can be either 0 or 1. On the lattice it was determined by the total number
of roots, which has now become infinite. Restrictions on it will appear later.
The vacuum state, or Hamiltonian ground state, corresponds to the choice \( \delta =0 \).
With a procedure analogous to the one sketched above, it is possible to produce integral expressions
for energy and momentum. Starting from (\ref{autovalori}), one has to isolate an extensive term,
proportional to \( N \), to be subtracted. The remaining finite part of energy and momentum takes the
form
\begin{eqnarray}
\nonumber
E-E_{bulk} & =&{\cal M}\left(\sum ^{N_{H}}_{j=1}\cosh h_{j}-2\sum ^{N_{S}}_{j=1}\cosh y_{j}-\sum
^{M_{C}}_{j=1}\cosh c_{j}+ \sum _{j=1}^{M_{W}}(\cosh w_{j})_{II}-\right.\\
& &\displaystyle \left. -\int ^{\infty
}_{-\infty }\frac{dx}{2\pi }2\,\text{Im}\left[ \sinh (x+i\eta )\log (1+(-1)^{\delta
}e^{iZ(x+i\eta )})\right] \right) \label{energy}\\
P & =& {\cal M}\left(\sum ^{N_{H}}_{j=1}\sinh h_{j}-2
\sum^{N_{S}}_{j=1}\sinh y_{j}-\sum ^{M_{C}}_{j=1}\sinh c_{j}+
\sum _{j=1}^{M_{W}}(\sinh w_{j})_{II}-\right. \nonumber \\
& &\displaystyle \left. -\int ^{\infty}_{-\infty }\frac{dx}{2\pi }2\, \text{Im}\left[ \cosh (x+i\eta )
\log (1+(-1)^{\delta }e^{iZ(x+i\eta )})\right]\right) \label{momentum}
\end{eqnarray}
Therefore, energy and momentum can be evaluated once the counting function \( Z(\vartheta ) \) and the
source positions \( \vartheta _{j} \) have been obtained from (\ref{nlie-cont}, \ref{quantum2}).
All these equations are exact; no approximation has been introduced in deriving them.
In practice, these equations can be treated analytically in certain limits ($l\rightarrow 0$ or
$l\rightarrow \infty$) and numerically for intermediate values, where we actually lack a closed formula
for them. Numerical computations are done without any approximation other than the technical ones
introduced by machine truncation.
In calculations, one starts from an initial guess for the counting function,
\( Z^{(0)}(\vartheta ) \), uses it in (\ref{quantum2}) to get the root/hole positions,
then evaluates a new \( Z^{(1)}(\vartheta ) \) from (\ref{nlie-cont}) and so on, up to the required
precision.
This iterative procedure is conceptually very simple and converges well, as one can
easily estimate. Indeed, the following term appears inside the integral
\begin{equation}\label{Zapprox}
e^{iZ(x+i\eta )}\approx e^{iZ-\eta\, l\,\cosh x}
\end{equation}
The presence of a negative $\cosh$ in the exponent makes the effective support of the integral compact and
the integral itself subdominant with respect to all the other terms, speeding up the convergence of
the iteration, especially for large $ l $.
So, at least for the cases where only holes are considered, with no complex roots and no special roots,
finding numerical solutions can be quite easy. The whole procedure takes a few seconds of
computing on a typical Linux/Intel platform, without resorting to any supercomputer or other
technically advanced tool.
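To make the iterative scheme concrete, here is a minimal numerical sketch for the vacuum state
($\delta=0$, no sources). To keep it short and self-contained, the true kernel (\ref{funzioneG}) is
replaced by a simple stand-in kernel $1/(2\pi\cosh\vartheta)$, so the numbers produced are \emph{not}
the sine-Gordon ones; what is illustrated is the structure of the iteration on the shifted contour
$\mathrm{Im}\,\vartheta=\eta$ and the fast convergence driven by the damping (\ref{Zapprox}).
\begin{verbatim}
import numpy as np

# Vacuum NLIE (delta = 0, no sources) iterated on the contour Im(theta) = eta.
X, npts = 12.0, 801
x = np.linspace(-X, X, npts)
dx = x[1] - x[0]
eta, l, delta = 0.3, 3.0, 0            # contour shift, size l = M*L, sector

def G(z):
    # stand-in kernel (analytic for |Im z| < pi/2), NOT the sine-Gordon kernel
    return 1.0 / (2.0 * np.pi * np.cosh(z))

K0 = G(x[:, None] - x[None, :]) * dx               # G(y - x), real shift
K2 = G(x[:, None] - x[None, :] + 2j * eta) * dx    # G(y - x + 2 i eta)

drive = l * np.sinh(x + 1j * eta)      # driving term on the contour
Z = drive.copy()                       # initial guess Z^(0)
for it in range(500):
    logterm = np.log(1.0 + (-1) ** delta * np.exp(1j * Z))
    Z_new = drive + (K0 @ logterm - K2 @ np.conj(logterm)) / 1j
    if np.max(np.abs(Z_new - Z)) < 1e-10:
        Z = Z_new
        break
    Z = Z_new

# finite-size part of the energy, in units of the mass M (vacuum case of the
# energy expression above, with all source sums absent)
logterm = np.log(1.0 + (-1) ** delta * np.exp(1j * Z))
E = -np.imag(np.sum(np.sinh(x + 1j * eta) * logterm)) * dx / np.pi
print(f"stopped after {it + 1} iterations, E(l={l}) = {E:.6f}")
\end{verbatim}
The same loop accommodates excited states by adding the source term $g(\vartheta|\vartheta_j)$ to the
driving term and solving the quantization conditions (\ref{quantum2}) for the positions $\vartheta_j$
at every step.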
When complex roots are present, things are much more complicated and the computation time increases
dramatically. Also, convergence at small $l$ can be problematic
because those real roots or holes that are closer to the origin can become ``special''. \label{lespecial}
In practice,
they emanate one or two ``supplementary'' sources as a consequence of a local change of sign of $Z'(x)$.
They have not been extensively treated. A similar phenomenon has been discussed in \cite{FPR2}, in the
framework of the thermodynamic Bethe ansatz.
The limit procedure described here is mathematically consistent, but the question is whether, from the
physical point of view, it describes a consistent quantum theory and allows for a meaningful
physical interpretation.
The first indication comes from the emergence, in \cite{ddv87}, of the fermionic massive Thirring
fields from the six-vertex diagonal alternating lattice of section~\ref{s:lightcone}.
This indicates that the procedure points toward a sine-Gordon/massive Thirring model.
Before going on, an important remark must be made about the allowed values for
the XXZ spin \( S_z \). From (\ref{spinz}), only nonnegative integer values can be given to \( S_z \).
As shown in \cite{noiPL2}, one is led to include also the half-integer choice for \( S_z \),
in order to describe the totality of the spectrum. This choice does not seem justified on the light-cone
lattice of section~\ref{s:lightcone}, because it would require adding one column of points to the
lattice, thus spoiling periodicity. Most probably the way to introduce it would be by inserting a twist
in the seam or some other nontrivial boundary condition.
In any case, half-integer values of $S_z$ are necessary to describe odd numbers of particles and
seem fully consistent with the rest of the model.
At this point the following physical scenario appears.
\begin{enumerate}
\item The physical vacuum, or ground state, of the continuum theory comes from the antiferromagnetic
state of the lattice so is characterized by the absence of sources, holes or complex roots.
\item All the sources are excitations above this vacuum.
\item This theory describes at least the sine-Gordon and the massive Thirring model on a cylinder; the
circumference describes a finite space of size \( L \); the infinite direction is time;
the topological charge is \( 2S_z \), and \( S_z \) can take nonnegative integer or half-integer values.
\item This theory describes also states that are not in sine-Gordon or in massive Thirring.
\end{enumerate}
As already observed, the real roots have disappeared from the counting equation in the continuum
limit. They actually become a countable set and are taken into account by the integral term, both in
the NLIE and in the energy-momentum expressions. They can be interpreted as a sort of Dirac sea on
which holes and complex roots build particle excitations. Of course,
the presence of holes or complex roots distorts the Dirac sea too, through the source term
\( g(\vartheta |\vartheta _{j}) \).
Observe that it has been indicated that only nonnegative values of \( S_z \) are required to describe
the whole Hilbert space of the theory. Indeed the lattice theory is assumed charge-conjugation
invariant so negative values of $S_z$, namely states with negative topological charge, have the same
energy and momentum as their charge conjugate states.
The assumption that all the sine-Gordon and all the massive Thirring states can be described by the
NLIE is far from trivial and still lacks a mathematical proof, even if all the analyses
done so far are consistent with it.
\subsection{The infrared limit of the NLIE and the particle scattering\label{s:IR}}
The first task in order to understand the physics underlying the NLIE is to
characterize the scattering of the model, by reconstructing the S-matrix. As we started from an
integrable model, we assume that the continuum one remains integrable. We will find that this
hypothesis is extremely reasonable because it is related to the structure of the source term
$g(\theta|\theta_i)$.
Indeed, the function \( \chi \) can be written as
\begin{equation}
\chi (\vartheta )=-i\log S(\vartheta )
\end{equation}
where \( S(\vartheta ) \) is the soliton-soliton scattering amplitude
in sine-Gordon theory (\ref{matriceS}), if the parameters are fixed as in (\ref{pp}).
This means that the exponentiation of the source term (\ref{source}) is a product of
sine-Gordon two-particle scattering amplitudes, as it appears in the factorization theorem.
One has to remember that the theory has been constructed on a cylinder, therefore the connection with the
factorization theorem can emerge only in the limit where the circumference becomes infinite.
In this limit, the cylinder becomes indistinguishable from a plane. Here the only external parameter
is the dimensionless ``size'' $l=\mathcal{M} L$. It will be pushed to infinity, $l\rightarrow \infty$.
This can be interpreted as a very large volume or a very large mass, thus explaining the name
infrared (IR) limit.
In this limit, the integral terms in (\ref{nlie-cont}) and in (\ref{energy}, \ref{momentum})
vanish exponentially fast, so they can be dropped and one remains with the momentum and the source term.
Indeed, one can estimate that
\begin{equation}\label{stima}
\log(1+(-1)^{\delta}e^{iZ(x+i\eta )})\approx \log(1+(-1)^{\delta}e^{iZ-\eta\, l\,\cosh x})\approx
(-1)^{\delta}e^{iZ-\eta\, l\,\cosh x}
\end{equation}
The presence of a negative $\cosh$ at the exponent produces an exponentially fast decay for large $l$
in the integral terms.
Consider first a state with \( N_{H} \) holes only and XXZ spin \( S_z=N_{H}/2 \).
\begin{equation}
\label{IR-holes}
Z(\vartheta )=l\sinh \vartheta +\sum _{j=1}^{N_{H}}\chi (\vartheta
-h_{j})\quad ,\quad Z(h_{j})=2\pi I_{j}
\end{equation}
This equation is the quantization of momenta in a box, for a system of particles. Indeed, by
exponentiation one has
\begin{equation}
\exp(i\,l\sinh h_k) \prod_{j=1}^{N_{H}} S(h_k-h_j)=\pm 1
\end{equation}
where the sign depends on the parity of the quantum numbers.
This leads to the interpretation of holes as solitons with rapidities \( h_{j} \). This is further evidenced by
considering the energy and momentum expressions
\begin{equation}
E={\cal M}\sum _{j=1}^{N_{H}}\cosh h_{j}\,\qquad P={\cal M}\sum _{j=1}^{N_{H}}\sinh h_{j}
\end{equation}
which are the energy and momentum of \( N_{H} \) free particles of mass \( {\cal M} \).
The identification with the particular element \( S(\theta) \) of the S-matrix
forces one to assign to these solitons a topological charge $Q=+1$ each, which is consistent
with the interpretation that \( Q=2S_z\). An analogous interpretation is possible in terms of pure
antisolitons, reflecting the charge conjugation invariance of the theory.
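For a concrete instance of (\ref{IR-holes}): at the free fermion point $p=1$ the kernel
(\ref{funzioneG}) vanishes identically, hence $\chi\equiv 0$ and the quantization reduces to
$l\sinh h_j=2\pi I_j$. The minimal sketch below solves it for the two-hole state with
$I=\pm\frac12$ and evaluates energy and momentum in units of ${\cal M}$.
\begin{verbatim}
import math

def two_hole_state(l, quantum_numbers=(0.5, -0.5)):
    """Free fermion point p = 1: chi vanishes, so l*sinh(h_j) = 2*pi*I_j."""
    holes = [math.asinh(2 * math.pi * I / l) for I in quantum_numbers]
    E = sum(math.cosh(h) for h in holes)   # energy in units of the mass M
    P = sum(math.sinh(h) for h in holes)   # momentum in units of the mass M
    return holes, E, P

for l in (1.0, 5.0, 20.0):
    holes, E, P = two_hole_state(l)
    print(f"l={l:5.1f}: h1={holes[0]:+.4f} h2={holes[1]:+.4f} "
          f"E/M={E:.4f} P/M={P:.4f}")
\end{verbatim}
As $l\to\infty$ the rapidities go to zero and $E\to 2{\cal M}$, i.e. two solitons at rest; at general
$p$ the interaction term $\chi(h_1-h_2)$ must be included and the system solved numerically.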
When considering two holes and a complex pair, the source terms can be arranged,
thanks to some identities satisfied by the functions \( \chi \), in the form\[
Z(\vartheta _{i})=l\sinh \left( \vartheta _{i}\right) -i\log S_{+}(\vartheta
_{i}-\vartheta _{j})=2\pi I_{i}\, \, \, ,\, \, \, i\neq j\, ,\, \, i\, ,\, j=1,\, 2\]
where \[
S_{+}(\vartheta )=\frac{\sinh \left( \frac{\vartheta +i\pi }{2p}\right)
}{\sinh \left( \frac{\vartheta -i\pi }{2p}\right) }S(\vartheta )\]
which is the scattering amplitude of a soliton on an antisoliton in the
parity-even channel. The quantum numbers \( I_{+},I_{-} \) of the two complex roots are
constrained to be \( I_{\pm }=\mp \frac{1}{2} \) for consistency of the IR
limit. This state has \( S_z=0 \), with topological charge $Q=0$.
There is an analogous parity-odd channel in sine-Gordon \cite{zam1979}, with an $S_{-}$ amplitude.
It is realized by the state with two holes and a self-conjugate root.
In the same way, it has been possible to treat more complex cases, with different combinations of
roots \cite{noiNP2}. See also \cite{tani} for details of the calculation.
In the attractive regime one has also to consider the breather particles that appear as
soliton-antisoliton bound states. It turns out that the breathers are represented by self-conjugate
roots (1st breather) or by arrays of wide roots (higher breathers).
Thus, the whole scattering theory of sine-Gordon can be reconstructed in the IR limit,
thanks to the structure of the source term. It is now difficult to argue that the NLIE
does not describe sine-Gordon.
\subsection{UV limit and vertex operators\label{s:UV}}
It is interesting to study the opposite limit \( l\rightarrow 0 \), where one expects to see a conformal
field theory; indeed, in this limit the masses vanish and scale invariance appears. This reproduces
the UV limit of sine-Gordon/massive Thirring, namely the \( c=1 \) conformal field theory described in
section~\ref{sect:CFT} and in the Figure~\ref{4sectors.eps}.
The UV calculations are usually more difficult to perform than the IR ones, as they
require splitting the NLIE into two independent ``left'' and ``right'' parts, called
\emph{kink equations}. They correspond to the left and right movers of a two-dimensional conformal field theory. A similar manipulation is done on the energy and momentum expressions, which can finally
be expressed in closed form, thanks to a lemma presented in \cite{ddv95}. For the details,
the reader is invited to consult the thesis \cite{tesidott}, where all the calculations are shown in
detail. In the present text, I give only the main results and the physical insight they imply.
A first important result is that the \( c=1 \) CFT quantum number \( m \) of the vertex operators
(\ref{vertex_operators}), which is identified with the UV limit of the topological charge,
can be related unambiguously to the XXZ spin by \( \pm m=2S_z \). Of course,
the \( \pm \) reflects the charge conjugation invariance of the theory. Then,
by examining the states already ``visited'' at the IR limit, one can establish a bridge
between particle states and vertex operators of \( c=1 \) theory.
\begin{enumerate}
\item The \textbf{vacuum} state has no sources, namely no holes or complex roots: only the sea of real
roots is present.
There are two possible choices: \( \delta =0 \) or \( 1 \). The result of the UV calculation
gives\[
\begin{array}{lll}
\text{for } \delta =0:\quad & \Delta ^{\pm }=0 & \text{i.e. } \mathbb I\\
\text{for } \delta =1:\quad & \Delta ^{\pm }=\frac{1}{8R^{2}} & \text{i.e. } V_{(\pm 1/2,0)}
\end{array}\]
i.e. the physical vacuum is the one with \( \delta =0 \). The other state
belongs to sector IV, which does not describe a local CFT, as in Figure~\ref{4sectors.eps}.
\item The \textbf{two-soliton} state, described by two holes, with the smallest quantum numbers, gives
\begin{enumerate}
\item for \( \delta =0 \) and \( I_{1}=-I_{2}=\frac{1}{2} \)
\( \Longrightarrow \)
\( \Delta ^{\pm }=\frac{R^{2}}{2}\, \, \mathrm{i}.e.\, \, V_{(0,2)} \).
\item for \( \delta =1 \) and \( I_{1}=-I_{2}=1 \) \( \Longrightarrow \)
a \( V_{(\pm 1/2,2)} \) descendant, not in the UV sine-Gordon spectrum, as it also belongs
to sector IV.
\end{enumerate}
\item The \textbf{symmetric soliton-antisoliton} state (two holes and a self-conjugate root),
\( \delta =1 \), \( I_{1}=-I_{2}=1 \) and $ I_{c}^{\pm }=0 $
\( \Longrightarrow \) \( \Delta ^{\pm }=\frac{1}{2R^{2}}\, \, \mathrm{i}.e.\,
\, V_{(\pm 1,0)} \)
\item The \textbf{antisymmetric soliton-antisoliton} state (two holes and a complex pair) \\
~~\( \delta =0 \), \(
I_{1}=I_{c}^{-}=-I_{2}=-I_{c}^{+}=\frac{1}{2} \)
\( \Longrightarrow \) \( \Delta ^{\pm }=\frac{1}{2R^{2}}\, \, \mathrm{i}.e.\,
\, V_{(\pm 1,0)} \)\\
It is obvious that these last two give two linearly independent combinations of the operators
\( V_{(\pm 1,0)} \), one with even, the other with odd parity.
\item The \textbf{one hole} state with \( I=0 \), \( \delta =1 \) \( \Longrightarrow \)
\( \Delta ^{\pm }=\frac{R^{2}}{8} \) i.e. the vertex operator \( V_{(0,1)} \), belongs to sector II.
For \( \delta =0 \) there are two minimal rapidity states with \( I=\pm \frac{1}{2} \). They are
identified with the operators \( V_{(\pm 1/2,\pm 1)} \). As these states belong to sector
III, they are of fermionic nature and actually one identifies them with the
components of the Thirring fermion.
\end{enumerate}
These examples, taken all together, suggest the following choice of \( \delta \) to discriminate between
sine-Gordon and massive Thirring states
\begin{equation}
\label{regola_d'oro}
\begin{array}{cc}
2S_z+\delta +M_{SC}\in 2\mathbb Z & \quad \text{for\, Sine-Gordon}\\
\delta +M_{SC}\in 2\mathbb Z & \quad \text{for\, massive\, Thirring}
\end{array}
\end{equation}
where \( M_{SC} \) is the number of self-conjugate roots. This selects the sectors I and II for
sine-Gordon states, and the sectors I and III for the Thirring ones, as in section \ref{sect:CFT}.
The NLIE also describes sector IV, which does not contain local operators.
The correct interpretation of the Coleman equivalence of the sine-Gordon and Thirring models is that the
even topological charge sectors are identical; the difference between the two models shows up only in
the odd topological charge sectors, for which the content of Thirring must be fermionic while that of
sine-Gordon must be bosonic.
To conclude these remarks, I briefly add a comment about the special objects that were introduced in the
classification of roots but not really used later. I need to recall their
definition: they are real roots or holes \( y_{i} \) having \( Z'(y_{i})<0 \).
Now, the function \( Z \) is asymptotically monotonically increasing. Indeed its
behaviour for \( \vartheta \rightarrow \pm \infty \) is dominated by the
term \( l\sinh \vartheta \), which is obviously monotonically increasing. Also,
for \( l \) large, this term dominates everywhere. Therefore in the IR the function \( Z \)
is surely monotonic and no special objects can appear. However, these global asymptotic estimates
can fail at small $l$ and finite $\vartheta$. In that case the derivative $Z'(x)$ can become locally
negative. Thus, a real root or hole with negative derivative becomes ``special'' (and splits into three
objects).
At the critical value \( l_{crit} \) of the parameter \( l \), at which the derivative becomes negative
moving from the IR towards the UV, the convergence of the iterative procedure breaks down, thus revealing
that some singularity has been encountered.
For the scaling function to be consistently continued beyond this singularity, one needs
to modify the NLIE by adding exactly the contributions that have been called
special objects. A more careful analysis \cite{tesidott} reveals that these singularities
are produced by the logarithm in the convolution term going off its fundamental determination:
``special'' objects are an artifact of the description by a counting function; they do not
exist in the Bethe equations.
A treatment of these objects can be found in \cite{tesidott}. See also the discussion at
page~\pageref{lespecial}. I do not know of successful numerical calculations in the presence of
special objects. In \cite{FPR2} a similar case occurred in the TBA formalism, in the presence of boundary
interactions, and was treated numerically because it was localized in some asymptotic region.
\section{Discussion}
In this chapter I have introduced the formalism of Destri-de Vega to study the sine-Gordon model
on a cylinder with finite-size space and infinite time. The presentation proposed here
has mainly the purpose of showing that this method is effective in treating finite size effects
in quantum field theory
and in creating a bridge from a massive field theory on Minkowski space-time (visible in the IR limit)
to a conformal field theory.
As is typical in treating integrable models, different systems meet on the way: lattice systems,
scattering theory and conformal field theory all participate in the scenario described by the
NLIE.
The formalism was introduced by Kl\"umper et al. in \cite{klumper} and by Destri and de Vega in
\cite{ddv95}. Afterwards, a number of other people participated in its development. In particular, my
involvement characterized my PhD years from 1996 to 1999, with the Bologna group.
I directly contributed to four papers, \cite{noi PL1}, \cite{noiNP}, \cite{noiPL2}, \cite{noiNP2} and
I wrote my PhD thesis on this subject \cite{tesidott}.
The main steps of my contribution are
\begin{itemize}
\item The whole formulation was revisited and corrected.
\item The study of the IR and UV limits was done systematically.
\item The spectrum of the continuum theory was carefully described, using UV, IR and numerical calculations,
also adding the odd particle sector.
\item Many cases were studied numerically, to gain a complete control of the whole region that separates the IR and the UV.
\item The results were compared with perturbative calculations done with the truncated conformal
space approach,
giving a confirmation of the methods.
\item The introduction of a twist allowed the description of the restrictions of sine-Gordon, namely the
perturbations of conformal minimal models by the thermal operator. These are massive theories
that are described by the same sine-Gordon NLIE after the introduction of an appropriate twist.
\end{itemize}
Later, other groups profited from this NLIE to investigate a variety of models. An unexpected
development will be presented in chapter~\ref{c:hubbard}; it concerns integrability-related
problems in gauge theory (especially $\mathcal{N}=4$ SYM).
Are there other things to be done? Even if the degree of difficulty is very high, the gain would be
great if one could succeed in describing the eigenvectors of the continuum theory, or
some correlation functions.
Their knowledge is important because they enter into the evaluation of many physical
quantities (scattering amplitudes, magnetic susceptibility) whose values can be compared with experiments.
\chapter{Thermodynamic Bethe Ansatz}
The first treatment of Bethe equations in their thermodynamic limit was done by Yang and Yang
in \cite{yy1966}, for a system of nonrelativistic bosons interacting by a repulsive delta-function,
on a line. The Bethe equations were obtained from the model Hamiltonian by the coordinate Bethe
ansatz calculation, one of the several implementations of the Bethe ansatz methods.
The objective of that treatment was to evaluate the thermodynamic limit of the
Bethe equations. The authors succeeded and were able to obtain the partition function of this
interacting gas. They also showed the analyticity of the partition function in the
temperature and the chemical potential, indicating the absence of phase transitions.
This method was generalized by Al.B. Zamolodchikov \cite{zam89} for relativistic particles
interacting with a factorized scattering matrix. His goal was to create a contact between a
factorized scattering theory with a given S matrix and its ultraviolet limit, typically
a conformal field theory in two dimensions. The author reasonably assumes that, at a finite
temperature, the equilibrium states of the particles in a box are described by some
Bethe-like wave functions (asymptotic wave function),
precisely as in the Yang and Yang approach. A quantization condition inspired by the Bethe equations
(\ref{betheTQ}, \ref{bethe}) is thus imposed.
Indeed, in the structure of the Bethe equations we have recognized a momentum term and an interaction term,
see near (\ref{quasipart}). Zamolodchikov assumes
that the interaction term is given by a factor of S matrix amplitudes and the momentum is the
usual relativistic one. Notice, however, that
the true Bethe wave functions obtained within Bethe ansatz calculations involve ``quasiparticles'', as
described near (\ref{quasipart}), while the scattering theory involves physical particles.
For example, the Bethe roots can be complex while physical particle rapidities are always real.
The assumption works and the Zamolodchikov ``thermodynamic Bethe ansatz'' has been applied
to a variety of models. It makes it possible to study the theory obtained by
perturbing a conformal field theory with a relevant operator, under the condition that the
perturbation maintains integrability.
Later, Pearce and Kl\"umper in \cite{pearceklumper1991} introduced another approach to
``calculate analytically the finite-size corrections and scaling dimensions of
critical lattice models'' (quoted from \cite{pearceklumper1991}).
These authors do not make use of Bethe equations; instead, they start from the
transfer matrix of an integrable lattice model and, knowing the arrangement of the zeros of its
eigenvalues, are able to solve certain identities satisfied by the transfer matrix itself.
Then, one can evaluate the continuum limit of lattice models.
In the following, I will mainly discuss this last approach.
All of these approaches have triggered several further investigations.
Indeed, the TBA serves as an interface between conformally invariant theories and massive or massless
integrable theories, in particular when these massive or massless theories are obtained as
deformations of the conformal ones.
\section{Lattice TBA}
I start by defining a family of models on a square lattice of $N$ horizontal cells (faces)
and with many rows (their number will not be used) using the following diagrammatic
representation for the double row transfer matrix \cite{BPO1996}
\begin{equation}\label{transferm}
\mbox{\boldmath $D$}(N,u,\xi)_{\mbox{\footnotesize\boldmath$\sigma\,\sigma'$}}
=\!\!\sum_{\tau_{0},\dots,\tau_{N-1}}
\raisebox{-17.1mm}[19.8mm][15mm]{
\begin{tikzpicture}[scale=1.38]
\draw (1,0) -- (7,0) ;
\draw (1,-1) -- (7,-1) ;
\draw (1,1) -- (7,1) ;
\foreach \i in {1,...,7}{
\draw (\i,-1) -- (\i,1) ;
}
\foreach \i in {1.5,...,3.5,6.5}{
\draw (\i,-0.5) node {$u$};
\draw (\i,0.5) node {$\lambda-u$};
}
\foreach \i in {0,...,3}{
\draw (\i,-0.2)+(1.2,0) node {$\tau_{\i}$};
}
\foreach \i in {1,...,3}{
\draw (\i,-1.2)+(1,0) node {$\sigma_{\i}$};
}
\foreach \i in {1,...,3}{
\draw (\i,1.2)+(1,0) node {$\sigma_{\i}^{\prime}$};
}
\draw (6,-1.2) node {$\sigma_{N-1}$};
\draw (5.7,-.2) node {$\tau_{N-1}$};\draw (6.8,-.2) node {$\tau_{N}$};
\draw (6,1.2) node {$\sigma_{N-1}^{\prime}$};
\draw (0.2,-1)--(1,0)--(0.2,1)--cycle [style=dashed];
\draw (0.2,-1)--(1,-1) [style=dotted];
\draw (0.2,1)--(1,1) [style=dotted];
\draw (7,-1.2)node{$1$};\draw (7.2,0)node{$2$};\draw (7,1.2)node{$1$};
\draw(0.5,0) node{$\begin{matrix}\lambda\!-\!u\\[-1mm]\xi\end{matrix}$};
\draw (0.2,-1.2)node{$r$};\draw (0.2,1.2)node{$r$};
\end{tikzpicture}
}
\end{equation}
I use a double row transfer matrix \cite{skly} because, in the presence of boundary interactions,
it is needed to
ensure integrability. In fact, it is a transfer matrix that acts from row $i$ to row $i+2$, while
a more standard single row transfer matrix would act from $i$ to $i+1$.
The diagrammatic representation is a means to simplify the notation and to reduce the use of indices;
it is also quite effective for setting up a sort of ``graphical algebra'', useful to verify
a number of properties \cite{BPO1996}. An entry of the transfer matrix,
$\mbox{\boldmath $D$}(N,u,\xi)_{\mbox{\footnotesize\boldmath$\sigma\,\sigma'$}}$,
is obtained by multiplying the Boltzmann weights of
each cell and summing on the indices indicated, namely summing on the central row of sites.
The dashed triangle is a Boltzmann weight associated to
the three corresponding border sites; it actually introduces an interaction that is fully localized
on the border of the lattice. As the actual lattice is just the one formed by the square cells,
the triangle has been represented with dashed lines to indicate that it is not a lattice cell.
The Boltzmann weights associated
to one face are
\begin{equation}\label{face}
W\!\left(\,\begin{array}{@{}cc|@{~}}d&c\\ a&b\end{array}\,u\right)=
\raisebox{-9mm}[10mm][9mm]{
\begin{tikzpicture}
\draw (0,0) --(1,0)--(1,1)--(0,1)--cycle;
\draw (0.5,0.5) node{$u$};
\draw (-0.2,-0.2) node{$a$};
\draw (1.2,-0.2) node{$b$};
\draw (1.2,1.2) node{$c$};
\draw (-0.2,1.2) node{$d$};
\end{tikzpicture}
}
\ =\ \frac{\sin(\lambda-u)}{\sin\lambda}\,\delta_{a,c}+
\frac{\sin u}{\sin\lambda}\,\sqrt{\frac{\sin (a\lambda) \sin (c\lambda)}{\sin (b\lambda) \sin
(d\lambda)}}\,\delta_{b,d}
\end{equation}
and those associated to the boundary interactions are
\begin{equation}
B^{r,1}\!\left(\,r\pm1 \, \begin{array}{c|} r \\r \end{array}\;u, \, \xi \right)=
\raisebox{-16mm}[10mm][14mm]
{\begin{tikzpicture}
\draw (0,0)--(0.8,1)--(0,2)--cycle[style=dashed];
\draw(0.3,1) node{$\begin{matrix}u\\[-1mm]\xi\end{matrix}$};
\draw(1.3,1)node{$r\pm 1$};
\draw(0,-0.2)node{$r$};\draw(0,2.2)node{$r$};
\end{tikzpicture}
} \label{triangle}
=\sqrt{\frac{\sin(r\pm1)\lambda}{\sin r \lambda}} \quad
\frac{\sin(\xi \pm u) \sin(r\lambda+\xi \mp u)}{\sin^2 \lambda}
\end{equation}
where the integers associated to every vertex are called heights and must satisfy the adjacency rule
of the Dynkin diagram $A_L$: adjacent sites must have heights differing by 1, and the heights
take values from 1 to $L$. I also use the crossing parameter
\begin{equation}
\lambda=\frac{\pi}{L+1}
\end{equation}
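For example, for $L=3$ (the Ising case $A_3$) one has $\lambda=\pi/4$ and heights $1,2,3$, while for
$L=4$ (the tricritical Ising model $A_4$) one has $\lambda=\pi/5$ and heights $1,2,3,4$, adjacent heights
differing by one.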
There are more general forms for these weights and also periodic boundary conditions
are possible. Here I have chosen those best suited to the present development. Indeed,
they are critical weights, namely they describe the system at its
critical temperature. Moreover, the right boundary weight is kept fixed.
In these lattice models the interaction is characterized by the four sites around a face so
nearest neighbors and next-to-nearest neighbors interact. On the contrary, in the six-vertex and
XXZ models the interaction was between nearest neighbors only. The phase diagram has been
studied in \cite{ABF} by using corner transfer matrix techniques. It is common to name these models
after the authors of that paper: ABF models. More precisely, I treat here the $A_L$ models, named after
the adjacency rules that are used.
These models have a number of useful properties. Indeed, from direct inspection of
(\ref{face}) and (\ref{triangle}) it appears that the transfer matrix is an entire function
of the spectral parameter $u$, that means that its entries are free of poles or other
singularities in the whole complex plane.
Transfer matrices at different values of the spectral parameter commute, thus ensuring integrability
\begin{equation}
[\mbox{\boldmath $D$}(N,u,\xi),\mbox{\boldmath $D$}(N,u',\xi)]=0
\end{equation}
Indeed, with a standard construction, one can expand in $u$ and generate integrals of motion.
In spite of a different formulation based on faces instead of vertices, the present $A_L$ models
are very deeply related to the six-vertex one.
The original T-Q relations of Baxter and the six-vertex Bethe ansatz
can be modified to hold for the present case.
The double row transfer matrix satisfies a T-Q functional equation that is fully similar to
the one that holds in the periodic case (\ref{TQperiod}), with small modifications to account for
the boundary
\begin{equation}
s(2u)\,\D\left(u+\frac{\lambda}{2}\right)\mbox{\boldmath $Q$}(u)=s(2u+\lambda)\ f\!\left(u+\frac{\lambda}{2}\right)
\mbox{\boldmath $Q$}(u-\lambda)+s(2u-\lambda)\ f\!\left(u-\frac{\lambda}{2}\right)\mbox{\boldmath $Q$}(u+\lambda)\label{TQbordo}
\end{equation}
where I have used
\begin{equation}
f(u)=\left(\frac{\sin u}{\sin \lambda}\right)^{2N}\,,\qquad s(u)=\frac{\sin u}{\sin \lambda}
\end{equation}
The double row transfer matrix is the one in (\ref{transferm}) but, when not strictly necessary, I omit
the dependence on $N$ and $\xi$.
These expressions hold for a fixed border on the left and the right of (\ref{transferm}), namely when
$r=1$ and $\tau_0=r+1=2$.
In this case the boundary coupling $\xi $ disappears. Of course, they can be modified
to include other cases. The goal here is not to reach the largest generality but to show how
the methods work and how the TBA appears in lattice systems. For this reason, I follow the
approach of \cite{FPW} in order to introduce the fusion and the TBA hierarchy.
The new operator $\mbox{\boldmath $Q$}(u)$ is a family of matrices that commute with each other and with the
transfer matrix $[\mbox{\boldmath $Q$}(u),\mbox{\boldmath $Q$}(v)]=[\mbox{\boldmath $Q$}(u),\D(v)]=0$. This implies that the same functional equation
holds true for the eigenvalues $D(u)$ and $Q(u)$. The eigenvalues of $Q$ are given by
\begin{equation}
Q(u)=\prod_{j=1}^n \frac{\sin(u-u_j) \sin(u+u_j)}{\sin^2 \lambda}\label{BaxQ2}
\end{equation}
where $u_j$ are the Bethe roots. The Bethe ansatz equations result from imposing
that the transfer matrix eigenvalue is an entire function. Indeed, at a zero of $Q(u)$, since $D(u)$ is entire,
the right hand side of (\ref{TQbordo}) must vanish. This forces the following Bethe equations
\begin{equation}\label{TQBethe}
\frac{\sin(2u+\lambda)}{\sin(2u-\lambda)}\
\left(\frac{\sin\left(u+\frac{\lambda}{2}\right)}{\sin\left(u-\frac{\lambda}{2}\right)}\right)^{2N}=-
\frac{Q(u+\lambda)}{Q(u-\lambda)}
\end{equation}
It is convenient to shift the transfer matrix
\begin{equation}
\tilde{\mbox{\boldmath $D$}}(u)=\mbox{\boldmath $D$}(u+\frac{\lambda}{2})
\end{equation}
With this convention, the transfer matrix is Hermitian on the line $\text{Re}(u)=0$. Given the position
of its zeros, that will be presented later, the relevant analyticity strip is
\begin{equation}
-\lambda<\text{Re} (u)<\lambda
\end{equation}
Using a standard notation, I let $\D(u)=\D_0^1$
\begin{equation}
\D_k^{q}=\D^{q}(u+k\lambda),\quad \mbox{\boldmath $Q$}_k=\mbox{\boldmath $Q$}(u+k\lambda),\quad
s_k(u)={s(2u+k\lambda)},\quad
f_k(u)=(-1)^{N}{s(u+k\lambda)}^{2N}
\end{equation}
For ``historical'' reasons, this notation is customary here: the upper index is not an exponent
but just an index.
The T-Q relation implies that the eigenvalues $D(u)$ are determined by the eigenvalues $Q(u)$ in the
compact form
\begin{equation}
\tilde{D}(u)=\tilde{D}_0=\frac{s_1f_{1/2}Q_{-1}+s_{-1}f_{-1/2}Q_1}{s_0 Q_0}
\end{equation}
\subsection{Fusion hierarchy and TBA hierarchy}
\label{sec:functional}
From the T-Q relation (\ref{TQbordo}) one can construct a hierarchy of models namely
a set of transfer matrices $\mbox{\boldmath $D$}^q$ recursively defined one after the other.
The process, called fusion, was introduced in \cite{kul81} as a method of obtaining new solutions to
the Yang-Baxter equation, by combining R~matrices of a known solution.
Fusion for the ABF models has been done in \cite{date1986} and studied in detail in \cite{bazres1989},
in the case of periodic boundary conditions. The construction holds for the open boundary case
for which the fusion has been done in \cite{BPO1996}.
For what follows, the presence of the hierarchy has very important consequences because it
imposes a very particular organization of the zeros of the transfer matrix eigenvalues in
the complex plane of the spectral parameter $u$. In the end, this will make it possible to ``solve'' the eigenvalue
problem.
From \cite{BPO1996}, the fusion hierarchy for the transfer matrices is
\begin{equation}
s_{q-2}s_{2q-1}\D_0^{q}\D_q^{1}=s_{q-3}s_{2q}f_q\D_0^{q-1}
+s_{q-1}s_{2q-2}f_{q-1}\D_0^{q+1},\qquad q=1,2,\ldots,L-1\label{FusionHier}
\end{equation}
This equation is obtained for the eigenvalues, by extracting the transfer matrix eigenvalue from
(\ref{TQbordo}) and multiplying it by itself after a shift; the procedure is then repeated.
The second term on the right hand side is the higher fusion level $q+1$ and is fixed by
the levels $q,\ q-1,\ 1$, so that level $q+1$ can be extracted once the lower levels are known.
This expresses the idea of hierarchy\footnote{It is helpful to read all
these expressions by just ignoring all the coefficients, namely the factors $s_q$ and $f_q$.}.
The initial conditions of the recurrence are
\begin{equation}\label{initial}
\D_0^{-1}=0,\qquad \D_0^{0}=f_{-1}\mbox{\boldmath $I$},
\end{equation}
and there is a closure condition: the fusion process is bounded above by $L$, the largest
value of the ``heights'' located on the corners of a lattice face
\begin{equation}
\mbox{\boldmath $D$}_0^{L}=0\label{closure}
\end{equation}
This fusion makes sense because the obtained higher fusion level transfer matrices still have
the remarkable properties of the original $q=1$ transfer matrix. Indeed,
they are entire functions and they commute with each other. This last property implies that
they all have the same set of eigenvectors.
Then, they can be interpreted as transfer matrices of new lattice models.
Starting with the fusion hierarchy (\ref{FusionHier}), one can use induction to derive the
T-system of functional equations~\cite{KP92,BPO1996}
\begin{equation}\label{funct}
s_{q-2}s_{q}\D_0^{q}\D_1^{q}=
s_{-2}s_{2q}f_{-1}f_q\mbox{\boldmath $I$}+s_{q-1}^2\D_0^{q+1}\D_1^{q-1},\qquad q=1,2,\ldots,L-1
\end{equation}
For $q=1$, the rightmost term vanishes because of the initial conditions and what remains is just
an inversion identity. If we further define
\begin{equation}\label{deft}
\mbox{\boldmath $d$}_0^{q}=\frac{s_{q-1}^2\D_1^{q-1}\D_0^{q+1}}{s_{-2}s_{2q}f_{-1}f_q},\qquad q=1,2,\ldots, L-2
\end{equation}
then the inversion identity hierarchy (T-system) can be put in the form of a
Y-system~\cite{KP92,BPO1996}
\begin{equation}
\mbox{\boldmath $d$}_0^{q}\mbox{\boldmath $d$}_1^{q}=\big(\mbox{\boldmath $I$}+\mbox{\boldmath $d$}_0^{q+1}\big)\big(\mbox{\boldmath $I$}+\mbox{\boldmath $d$}_1^{q-1}\big)\label{Ysystem}
\end{equation}
with closure
\begin{equation}
\mbox{\boldmath $d$}_0^{0}=\mbox{\boldmath $d$}_0^{L-1}=0
\end{equation}
For later convenience, I define the shifted transfer matrices
\begin{equation}\label{shifted}
\tilde{\mbox{\boldmath $D$}}^{q}(u)=\mbox{\boldmath $D$}^{q}\Big(u+\frac{2-q}{2}\,\lambda\Big)\,,\qquad
\tilde{\mbox{\boldmath $d$}}^{q}(u)=\mbox{\boldmath $d$}^{q}\Big(u+\frac{1-q}{2}\,\lambda\Big)
\end{equation}
so that the $Y$-system takes now the form
\begin{equation} \label{ysyst2}
\tilde{\mbox{\boldmath $d$}}^q\big(u-\frac{\lambda}2\big)\
\tilde{\mbox{\boldmath $d$}}^q\big(u+\frac{\lambda}2\big)=
\big(1+\tilde{\mbox{\boldmath $d$}}^{q-1}(u)\big)\big(1+\tilde{\mbox{\boldmath $d$}}^{q+1}(u)\big)
\end{equation}
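As an illustration, for the tricritical Ising model $A_4$ ($L=4$) only the two functions
$\tilde{\mbox{\boldmath $d$}}^1$ and $\tilde{\mbox{\boldmath $d$}}^2$ survive and the closure condition makes the $Y$-system completely explicit
\begin{equation}
\tilde{\mbox{\boldmath $d$}}^1\big(u-\frac{\lambda}2\big)\,\tilde{\mbox{\boldmath $d$}}^1\big(u+\frac{\lambda}2\big)=1+\tilde{\mbox{\boldmath $d$}}^2(u)\,,\qquad
\tilde{\mbox{\boldmath $d$}}^2\big(u-\frac{\lambda}2\big)\,\tilde{\mbox{\boldmath $d$}}^2\big(u+\frac{\lambda}2\big)=1+\tilde{\mbox{\boldmath $d$}}^1(u)
\end{equation}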
The transfer matrices are entire functions of $u$ and periodic
\begin{equation}
\mbox{\boldmath $D$}^{q}(u)=\mbox{\boldmath $D$}^{q}(u+\pi)\,,\qquad \tilde{\mbox{\boldmath $d$}}^{q}(u)=\tilde{\mbox{\boldmath $d$}}^{q}(u+\pi)
\end{equation}
They also are real if $u$ is real and satisfy a ``crossing symmetry''
\begin{equation}
\mbox{\boldmath $D$}^{q}(u)=\mbox{\boldmath $D$}^{q}\big((2-q)\lambda-u\big),\qquad
\mbox{\boldmath $d$}^{q}(u)=\mbox{\boldmath $d$}^{q}\big((1-q)\lambda-u\big)
\end{equation}
The advantage of shifting these matrices is that the shifted matrices have the same
analyticity strip
\begin{equation}\label{phystrip}
-\lambda< \text{Re}(u)<\lambda
\end{equation}
that means that within a periodicity strip $(\text{Re}(u),\text{Re}(u)+\pi)$ we will use only
the analyticity strip (\ref{phystrip}) to solve for the eigenvalues.
This choice is related to the position of the zeros and will be presented later.
Lastly, the asymptotic values $\tilde{d}^q(+ i\infty)$ were computed in \cite{KP92}
\begin{equation}\label{asym1}
\tilde{d}^q(+i\infty)=
\frac{\sin [q\theta]\,\sin[(q+2)\theta]}{\sin^2\theta}
\;=\;\frac{\sin^2 (q+1)\theta}{\sin^2\theta}-1,
\qquad \theta=\frac{s\pi}{L+1}
\end{equation}
Functional equations such as the T-system and Y-system hold for the periodic boundary case, the open case,
and off-criticality. They are a genuine feature of this type of lattice model.
The same hierarchy holds for the six-vertex model but it is unlimited so there is no truncation.
Notice that my goal is to find the eigenvalues of the nonfused double row transfer matrix $\mbox{\boldmath $D$}(u)$.
The T-Q relation (\ref{TQbordo}) offers us one way: solve the Bethe equations (\ref{TQBethe}),
put the Bethe roots in (\ref{BaxQ2}) and use (\ref{TQbordo}) to find the eigenvalues $D(u)$.
The fusion hierarchy offers another path: solve the whole family of TBA functional equations
(\ref{ysyst2}) at all orders of fusion; this gives $\tilde{d}^1(u)$, which appears in the rightmost term of
(\ref{funct}) for $q=1$; this equation can then be inverted to find $D^1(u)=D(u)$, as wanted.
The two paths, albeit mathematically equivalent, offer different levels of difficulty.
Solving or at least classifying all solutions of the Bethe equations can be a hard task, given that
Bethe roots are complex numbers.
It turns out that it is simpler, in this case, to follow the second approach. This will require
controlling the zeros of the fused transfer matrices.
\subsection{Functional equations: Y-system, TBA}
In the previous section I have presented several systems of functional equations for the
transfer matrices. I try now to motivate their relevance in relation to the Zamolodchikov
approach to thermodynamic Bethe ansatz.
The Y-system in the form (\ref{ysyst2}) can also be written as
\begin{equation}\label{ysyst3}
\tilde{\mbox{\boldmath $d$}}^q\big(u-\frac{\lambda}2\big)\
\tilde{\mbox{\boldmath $d$}}^q\big(u+\frac{\lambda}2\big)=\prod_{r}
\big(1+\tilde{\mbox{\boldmath $d$}}^{r}(u)\big)^{A_{qr}}
\end{equation}
where $A_{qr}=\delta_{q,r-1}+\delta_{q,r+1}$ is the incidence matrix of the Dynkin diagram $A_{L-2}$.
This equation has now the form of the one obtained by Zamolodchikov in \cite{zam91} for the RSOS
scattering theories. These theories are obtained by perturbing a unitary conformal field theory
of the minimal series (\ref{minimal}) by the operator $\phi_{1,3}$. This operator is relevant
and it preserves integrability namely the perturbed theory is still integrable.
There are two possible directions of perturbation, according to the sign of the coupling.
One gives rise to a massive model $A_m^{(-)}$ whose scattering matrix factorizes according to the
factorization theorem discussed in the Introduction.
The other direction of the perturbation gives a massless theory $A_m^{(+)}$.
The name ``RSOS theories'' for these models comes from the fact that their two particle scattering
matrix, apart from overall factors, is the R-matrix of the ABF models of \cite{ABF}, which are also called
RSOS.
For these theories in the massive regime, Zamolodchikov wrote the thermodynamic Bethe ansatz
equations in \cite{zam91} for the ground state and showed that they are particular solutions of
a Y-system with the same structure as (\ref{ysyst3}). The approach of Zamolodchikov does not describe
excitations above the ground state, but it is reasonable to imagine that the Y-system,
more than the thermodynamic Bethe ansatz, describes the symmetries of the model and all its
excited states. Y-systems emerge in most (or possibly all) of the thermodynamic Bethe ansatz equations
that have been derived: see \cite{zamplb91} but also one of the 135 papers citing it, for example
\cite{Serban:2010sr}, which is a recent review on integrability in AdS/CFT, containing many recent
citations to Y-systems.
Now, it is curious to see if the Y-system can emerge, in field theory, in a direct way that avoids
the tortuous method of \cite{zam91}. Indeed, in the series of papers opened with \cite{blz1},
the authors formulated a continuum version of the transfer matrix, of the T-Q relation and of the
subsequent functional equations for the minimal series $\mathcal{M}_{2,2n+3}$. This latter is
a family of non-unitary conformal field theories with central charge $c=1-3\frac{(2n+1)^2}{2n+3}.$
The formulation has been also modified in \cite{blz_exc} to describe the perturbation by the operator
$\phi_{1,3}$. So, the Y-systems can be obtained constructively from the conformal field
theory operators, at the critical point and off criticality.
\subsection{Zeros of the eigenvalues of the transfer matrix\label{s:zeros}}
The conclusion of section~\ref{sec:functional} was that one should solve the Y-system (\ref{ysyst2})
to obtain the eigenvalues of the transfer matrix. Before engaging in this calculation, one
has to know the analytic properties of the transfer matrices, especially in relation to zeros
(and poles, if any).
The eigenvalues $\tilde{D}^q(u)$ of the transfer matrices have many zeros in the complex plane.
They can be easily counted from (\ref{transferm}), considering that each face contributes
a trigonometric factor $\sin(a+u)$ (for some $a$) and the triangular face contributes two such
factors. This means that $\tilde{D}^q(u)$ is a polynomial in $z=\exp (iu)$ and $\bar{z}=\exp(-iu)$
of degree $2N+2$ so we have to expect $4N+4$ zeros in a periodicity strip.
This counting can be slightly different in presence of other boundary conditions.
Many analytical and numerical observations, and also the ``string hypothesis'' formulated in
\cite{bazres1989} for the Bethe roots, indicate that the zeros have a peculiar structure within the
strip indicated in (\ref{phystrip}) as analyticity strip. Indeed, zeros can be on the middle axis
of the strip, such that their real part vanishes
\begin{equation}\label{onestring}
\text{Re} (u)=0
\end{equation}
(these are called 1-strings) or on the border of the strip. In that case they appear as pairs
\begin{equation}\label{twostring}
\text{Re}(u_1)=-\text{Re}(u_2)=\lambda\,,\qquad \text{Im}(u_1)=\text{Im}(u_2)
\end{equation}
and are called 2-strings, in the usual Bethe ansatz meaning of ``string''.
This pattern holds for all transfer matrices, with $q=1,2,\ldots,L-2$.
The matrix $\mbox{\boldmath $D$}^{L-1}$ is proportional to the identity so its strip is empty.
Moreover, these are always simple zeros, because the Bethe roots are pairwise distinct.
Numerical examples showing this peculiar behaviour of 1- and 2-strings have been presented in
a number of papers, starting from the one \cite{klpe91}
in which the methods were established. This pattern holds in the presence of boundary interactions
\cite{OPW}, \cite{FPR} and with off-critical transfer matrices \cite{PCA}, with massive or
massless perturbations.
In \cite{FPW} there is a whole ``art gallery'' of images of the zeros of the transfer matrix eigenvalues
for the critical $A_5$ model with fixed boundaries.
\begin{figure}[t]
\hfill\includegraphics[width=0.3\linewidth]{mechanismA.pdf}
\hfill\includegraphics[width=0.3\linewidth]{mechanismB.pdf}
\hfill\includegraphics[width=0.3\linewidth]{mechanismC.pdf}
\caption{\label{ABCmech} Three states for the tricritical Ising model are shown.
The arrows indicate three different mechanisms of displacement of the zeros triggered by the change
of the boundary coupling constant from one to another conformal boundary condition
${\cal B}_{(2,1)} \mapsto {\cal B}_{(1,2)}$. Note that this is the reverse of the physical flow which
actually goes from the UV point $(1,2)$ to the infrared point $(2,1)$.
The states are
A, $(0|0)\,\mapsto\,(0)_{+}$;
B, $(1|0)\,\mapsto\,(0)_{-}$;
C, $(0\,0\,0|1)\,\mapsto\,(0\,0\,0\,|0\,0)_{-}$. In this image, taken from \cite{FPR}, the shift
(\ref{shifted}) is not used. Strip 1 is on the left, strip 2 on the right.}
\end{figure}
In Figure~\ref{ABCmech} there are three examples of states from the tricritical
Ising model $A_4$. As it has just two levels of fusion, $q=1,2$, it has two analyticity strips.
The transfer matrices were not shifted so strips are not centered as in (\ref{phystrip}).
In the figure, arrows indicate the dynamics of the zeros, namely their displacement after
tuning a boundary coupling.
In some cases, often because of the normalization factors in (\ref{FusionHier}), zeros at
intermediate positions occur, such as
\begin{equation}
\text{Re}(u)=\pm \frac{\lambda}{2}
\end{equation}
I will not consider these cases here. Actually, they do not modify the general ideas that follow,
especially because they do not have dynamics: they appear in a fixed number, they
show up in all states and have a fixed position.
In addition to (\ref{onestring}, \ref{twostring}),
several numerical investigations show that the relative order of the zeros in the
$L-2$ fundamental strips (\ref{phystrip}) is sufficient to identify a common eigenstate of the
transfer matrices $\mbox{\boldmath $D$}^{1}$ to $\mbox{\boldmath $D$}^{L-2}$. Indeed, the state is fixed by the order of appearance of
1- and 2-strings, from the real axis to the asymptotic region. As an example, in Figure~\ref{ABCmech}
the state A is given by:
\begin{itemize}
\item strip 1: four 2-strings followed, moving up from the real axis, by one 1-string
\item strip 2: a single 1-string
\end{itemize}
This means that such an eigenstate is uniquely characterized by topological
information and that the corresponding geometrical data, namely actual positions of zeros, can be
inferred by other means. The relative order between different strips has no relevance.
Eigenstates/eigenvalues are thus classified by combinatorial rules.
This phenomenon is at the origin of the nomenclature of the states
by a set of \textit{non-negative quantum numbers} $\{I_k^{(q)}\}$; considering the upper half-plane,
in each strip the lowest 1-string carries label $k=1$ and the labels increase counting upward; given the $k$-th
1-string of strip $q$, $I_k^{(q)}$ is the number of 2-strings above it. This definition allows an
easy reconstruction of the sequence of 1- and 2-strings in the strip (see also Figure~\ref{ABCmech}).
Indicating with $m_q,\,n_q$ the number of 1-strings, 2-strings respectively in strip $q$
(above the real axis), one clearly has
\begin{equation}
n_q\geq I^{(q)}_1\geq I^{(q)}_2 \geq \ldots \geq I^{(q)}_{m_q} \geq 0
\end{equation}
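As a concrete illustration (not referring to a specific eigenstate of the model), suppose that in strip $q$
the content read upward from the real axis is: 2-string, 1-string, 2-string, 2-string, 1-string. Then
$m_q=2$ and $n_q=3$; the lowest 1-string has two 2-strings above it, so $I_1^{(q)}=2$, while the upper one
has none, so $I_2^{(q)}=0$, in agreement with the inequality above.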
It is important to mention that, even if the general scheme proposed here is only true
``asymptotically'' (large $N$), deviations are very small and already with a lattice of
six faces the general picture appears. Deviations consist in the displacement of the zeros from
their asymptotic positions (\ref{onestring}) or (\ref{twostring}). This phenomenon is also observed
in the Bethe ansatz framework: the ``string hypothesis'' is asymptotic and there are deviations
in a finite size lattice.
The number of zeros is expected to grow with the lattice size, according to the counting done at
the beginning of this section. Notice first that $N$ grows in steps of 2, because of the adjacency
rules. Given a certain set of zeros with their relative order, the addition of an even number of
new columns to
the lattice has the only effect of adding 2-strings to strip 1, near the real axis.
In other words, the real axis of the first strip is a source of 2-strings when $N$ increases.
The definition of the quantum numbers has been chosen consistently with this: the quantum numbers
of a state do not depend on $N$.
The addition of new 2-strings in the first strip pushes the other zeros (in all strips)
farther and farther from the real axis. This trend is very important for the scaling limit that will
be used to get contact with the conformal theory.
Indicating a 1-string position by $\frac{i\ y_k^{(q)}}{L+1}$, the vanishing of the transfer matrix
eigenvalue is
\begin{equation}
\tilde{d}^q\left(\frac{i\ y_k^{(q)}}{L+1}\right)=0\,,\qquad y_k^{(q)}>0
\end{equation}
The leading behaviour with $N$ is
\begin{equation}\label{leadingzero}
y_k^{(q)}=y_k^{(q)}(N) \sim \hat{y}_k^{(q)}+\log N
\end{equation}
where $\hat{y}_k^{(q)}$ is the asymptotic position.
Even if the relative orders and positions of different strips do not matter, there are constraints
between the composition of the different strips. They originate in the Y-system and will be
presented later.
\subsection{Solving the Y-system}
Aiming at finding the eigenvalues of the transfer matrix, the functional system (\ref{ysyst2}) has
been ``solved'' in \cite{klpe91} by taking the Fourier transform of the logarithmic derivative. This
method requires control of
the zeros and poles within the analyticity strip (\ref{phystrip}). Afterwards, the method
was adapted to several other cases, as in \cite{OPW} for fixed boundaries, in \cite{FPR}, \cite{FPR2}
and \cite{io} for interacting boundaries, in \cite{PCA} for off-critical lattices and finally in
\cite{FPW} for the whole series of $A_L$ models and for the XXX model.
Therefore, there is no need to repeat all the long calculations here.
The final result will be written after taking a scaling limit or continuum limit
\begin{equation}\label{scaling}
\hat{d}^q(x)=\lim_{N\rightarrow \infty} \tilde{d}^q\left(N,\frac{i \,x }{L+1}\right)
\end{equation}
where the matrices of (\ref{shifted}) are used. The lattice size has been indicated and a new spectral
parameter $x$ has been introduced, rotated and dilated with respect to $u$\footnote{Notice that in
the presence of boundary or bulk interactions, the corresponding parameters usually need to be scaled
with $N$.}.
The size $N$ appears in the limit (\ref{scaling}) first explicitly, counting the number of faces,
second implicitly, by the ``scaling'' positions of the zeros (\ref{leadingzero}).
There are two important reasons to require the continuum limit: first, the positions of the zeros
in section~\ref{s:zeros} are only ``asymptotically'' correct, therefore exact results can be obtained in the
limit only; second, one is usually interested in comparing with conformal field theory data.
Conformal field theory is defined on a continuous space, not on the lattice.
From \cite{FPW}, the transfer matrix eigenvalues are
\begin{equation}
-\frac12 \log D(N,u)=-N \log[\kappa_{\text{bulk}}(u)]-\frac12 \log[\kappa_{\text{bound}}(u)]-
\frac12 \log\tilde{D}_{\text{finite}}\big(N,u-\frac{\lambda}2\big)
\end{equation}
where the function $\kappa_{\text{bulk}}$ contains the main bulk contribution and is independent of $N$,
the function $\kappa_{\text{bound}}$ contains a boundary free energy and also is independent of $N$.
These functions are given in \cite{NepomechiePearce}; they will not be used here.
$\tilde{D}_{\text{finite}}$ is finite for large $N$ but still depends on it and will be given later.
The factor $\frac12$ has the physical meaning of normalizing to a row-to-row transfer matrix.
This produces the following lattice partition function\footnote{The lattice has $M$ rows and $N$
columns with periodic boundary conditions in the vertical direction and open boundary conditions
in the horizontal one, according to (\ref{transferm}).}
\begin{eqnarray}\label{partition}
&&Z(N,M,u)=\mbox{Tr }\big(\mbox{\boldmath $D$}(u)^{M/2}\big) \\ &&
=\exp \Big[ -NM \log[\kappa_{\text{bulk}}(u)] -\frac12 M \log[\kappa_{\text{bound}}(u)] \Big]
\mbox{ Tr }\big[\tilde{\mbox{\boldmath $D$}}_{\text{finite}}\big(N,u-\frac{\lambda}2\big)^{M/2}\big] \nonumber
\end{eqnarray}
The solution of the T-system leads to
\begin{eqnarray}\label{solY}
& &-\frac12 \log \tilde{D}_{\text{finite}}(N,0) \\ \nonumber
& &= -\int_{-\infty}^{\infty}dy\ \frac{\log(1+\tilde{d}^{1}\big(N,\frac{i\ y}{L+1}\big))}{4\pi\cosh y}
-\sum_{k=1}^{m_1}\log\tanh\frac{y_k^{(1)}(N)}2 + \mbox{higher order corrections}\\
& &=\frac{\pi}{N} E+\mbox{higher order corrections in }\frac{1}{N} \nonumber
\end{eqnarray}
with $E$ independent by $N$ and given by
\begin{equation}\label{sc_energ}
E=-\frac{1}{\pi^2}\int_{-\infty}^{\infty}dy\ e^{-y}\ \log\big(1+\hat{d}^{1}(y)\big)+
2\sum_{k=1}^{m_1}\frac{1}{\pi}\ e^{-\hat{y}_k^{(1)}}
\end{equation}
Of course, $E$ depends on the configuration of the zeros in the various strips.
I can now express the last factor of the partition function at the isotropic point $u=\frac{\lambda}{2}$
as
\begin{equation}
\chi(\hat{q})=\mbox{ Tr }\big[\tilde{\mbox{\boldmath $D$}}_{\text{finite}}(N,0)^{M/2}\big] =
\sum_{\text{configurations}} \exp \big(-\frac{\pi\ M}{N} E\big)=
\sum_{\text{configurations}} \hat{q}^E
\end{equation}
with
\begin{equation}
\hat{q}=e^{-\frac{\pi\ M}{N}}
\end{equation}
This $\hat{q}$ is a geometrical parameter that measures the ratio between the number of rows and
columns in the lattice.
When the system is at criticality, the function $\chi$ is the conformal partition function.
Soon I will make the connection between the ``energy'' $E$ and conformal energies of the two
dimensional conformal theory.
In (\ref{sc_energ}), two ingredients are still missing: the transfer matrix $\hat{d}^1$ and the zeros
$\hat{y}^{(1)}_k$. Thanks to the limit (\ref{scaling}), I can give them an expression. This is not possible
before the limit, namely on the lattice, because the positions indicated in section \ref{s:zeros} are
only asymptotically correct.
The Y-system provides the missing ingredients by the thermodynamic Bethe ansatz equations
\begin{eqnarray}
\log \hat{d}^{q}(x)&=&-4\ \delta_{1,q}\ e^{-x}+
\log\prod_{j=1}^{L-2}\prod_{k=1}^{m_j} \Big[\tanh\frac{x-\hat{y}_k^{(j)}}{2}\Big]^{A_{q,j}}
\nonumber\\
&+&\sum_{j=1}^{L-2} A_{q,j}\,\int_{-\infty}^{\infty}dy\ \frac{\log(1+\hat{d}^{j}(y))}
{2\pi\cosh (x-y)}\,,
~\qquad q=1,\ldots,L-2 \label{sc_tba}
\end{eqnarray}
and the quantization conditions
\begin{eqnarray}\label{sc_psi}
\hat{\Psi}^{q}(x)&=&4\delta_{1,q}\, e^{-x}+i\sum_{r=1}^{L-2}{A_{q,r}}\sum_{k=1}^{m_r}
\log\tanh\big(\frac{x-\hat{y}_k^{(r)}}{2}-i\frac{\pi}{4}\big) \\
&-&\sum_{r=1}^{L-2} A_{q,r} \diagup\hspace{-4mm}\int_{-\infty}^{\infty} dy\,
\frac{\log(1+\hat{d}^{r}(y))}{2\pi\sinh(x-y)}\nonumber\\[3mm]
\hat{\Psi}^{q}(\hat{y}_k^{(q)})&=&\pi\,n_k^{(q)}=\pi[ 1+2(I_k^{(q)}+m_q-k)]
\end{eqnarray}
where a new family of quantum numbers, $n_k^{(q)}$, has been introduced; they are odd integers.
These two sets of equations close the problem. Indeed, by simultaneously solving the TBA
equations and the quantization conditions one obtains the transfer matrix eigenvalues in the center of the
strips (\ref{phystrip}) and the positions of the zeros. Those with label $q=1$ enter the energy
expression (\ref{sc_energ}), namely the original transfer matrix eigenvalues.
Equations (\ref{sc_tba}) and (\ref{sc_psi}) are now exact and describe the whole spectrum of the
transfer matrix.
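Once the system (\ref{sc_tba}), (\ref{sc_psi}) has been solved for $\hat{d}^1$ on a rapidity grid and for the
1-string positions, the evaluation of $E$ from (\ref{sc_energ}) is immediate. The following minimal sketch,
in Python with a plain Riemann sum and purely illustrative names (it is not taken from the original papers),
shows this final step.
\begin{verbatim}
import numpy as np

def scaled_energy(y, d1, y1_strings):
    # E = -(1/pi^2) int dy e^{-y} log(1 + d^1(y)) + (2/pi) sum_k e^{-y_k^{(1)}},
    # cf. (sc_energ); y is a uniform rapidity grid, d1 the samples of hat{d}^1,
    # y1_strings the 1-string positions in the first strip.
    dy = y[1] - y[0]
    integral = np.sum(np.exp(-y) * np.log1p(d1)) * dy
    return -integral / np.pi**2 + (2.0 / np.pi) * np.sum(np.exp(-np.asarray(y1_strings)))
\end{verbatim}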
The Y-system controls the structure of the equation, namely the appearance of the integral with a
difference-like integration kernel and the hyperbolic cosine.
The equations still show the full $A_{L-2}$ structure of the Y-system (\ref{ysyst3}), in which
a strip is coupled with the two neighboring ones.
The specific form of the driving term $-4\ \delta_{1,q}\ e^{-x}$ comes from the initial condition
for $\mbox{\boldmath $D$}^0$ in (\ref{initial}) and indicates how the degrees of freedom with fusion index 1
participate to the energy.
The hyperbolic tangent for the zeros and the convolution kernel $1/\cosh (x-y)$ are specific to a
critical lattice and they become elliptic functions in the off-critical case. They are
mainly fixed by the form of the left side of the Y-system.
In the presence of boundary interactions, one also adds a specific term as in \cite{FPR2}, \cite{io}.
The positivity condition
\begin{equation}\label{positivita}
1+\hat{d}^{j}(x)>0
\end{equation}
holds on the whole real axis, therefore the integral terms are always real. The zero terms can
produce an imaginary part, corresponding to taking the logarithm of a negative value of
$\hat{d}^q(x)$.
The equations (\ref{sc_tba}) are legitimately called thermodynamic Bethe ansatz because they
correspond to the equations of Zamolodchikov \cite{zam91} for the RSOS scattering theories.
His equations are obtained for the massive case with periodic boundary conditions, while here
the model lives on an open strip and the bulk is critical.
More importantly, the equations of Zamolodchikov only hold for the ground state while the present
lattice approach provides all the states.
Notice that the Zamolodchikov equations are usually expressed in terms of the pseudo-energy
$\epsilon_q(x)=-\log \hat{d}^q(x)$.
In the general case, there is no explicit solution of these functional equations. For this reason,
numerical solutions have sometimes been used. Near a critical regime, however, it is possible
to obtain a closed form for the energy.
Indeed, after long calculations, one has a very short form for the conformal energy
\begin{equation}\label{energy3}
E=-\frac{c_L}{24}+ \frac14 \boldsymbol{m}^{T} C \boldsymbol{m}
+\sum_{q=1}^{L-2}\sum_{k=1}^{m_q} I_{k}^{(q)}
\end{equation}
where the central charge appears as a function of $L$
\begin{equation}
c_L=1-\frac{6}{L(L+1)}
\end{equation}
and $\boldsymbol{m}^T=(m_1,m_2,\ldots,m_{L-2})$ is a vector with the number of 1-strings. In this case, a
completely explicit expression for the energies has been obtained. Notice that the solution
of the TBA system is still unknown. One arrives at the expression for the energy by manipulation,
not by actually solving the TBA or quantization conditions.
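For instance, for the tricritical Ising model ($L=4$) one finds $c_4=1-\frac{6}{4\cdot5}=\frac{7}{10}$,
the well known tricritical Ising central charge, so the constant term in (\ref{energy3}) is $-\frac{7}{240}$.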
In noncritical cases, one can arrive at equations like (\ref{energy3}) only in the UV and IR
limits already described in sections~\ref{s:IR} and \ref{s:UV}.
\section{NLIE versus TBA}
Two formalisms have been introduced. One, directly derived from the Bethe ansatz equations,
leads to a Kl\"umper-Batchelor-Pearce-Destri-de Vega equation. The other, derived from functional
equations for the transfer matrix, leads to the (full spectrum) thermodynamic Bethe ansatz
equations. Both produce a single exact nonlinear integral equation, or a system of them, of Fredholm type,
allowing the evaluation of energy, momenta and other observables.
Both equations have the structure
\begin{eqnarray}
f(x)&=&\nu(x)+ \chi(x,\theta_i)+K\star \log(1+\exp(\alpha f))\nonumber \\
\nu(x)&=&\mbox{driving term }\label{struttura}\\
\chi(x,\theta_i)&=& \mbox{sources, to be fixed by quantization conditions} \nonumber
\end{eqnarray}
with quantization conditions given by
\begin{equation}\label{zero}
f(\hat{\theta}+\theta_i)=I_i\,,\qquad I_i\in\mathbb{Z}
\end{equation}
($\hat{\theta}$ is a fixed shift that appears in the TBA case but not in the DdV one).
The function $f$ itself can carry an index, thus participating in a system of coupled equations.
The two approaches complement each other, precisely as the $\mbox{\boldmath $T$}$ and $\mbox{\boldmath $Q$}$ operators
complement each other in the Baxter T-Q relation (\ref{TQperiod}). Indeed, the main message is that
the $\mbox{\boldmath $Q$}$ operator is behind the Kl\"umper-Batchelor-Pearce-Destri-de Vega approach while the $\mbox{\boldmath $T$}$
operator gives rise to the TBA equations.
In spite of the many analogies, the two approaches offer different paths to evaluate the main
observables.
The choice of using one or the other is mainly dictated by the available initial information:
Bethe roots/holes or zeros of the transfer matrix eigenvalues.
Notice that the Kl\"umper-Pearce-Destri-de Vega equation is exact in both the finite lattice and
the continuum limit while the TBA equations exist only in the continuum limit.
Indeed, the first approach does not assume a ``string hypothesis'' for the zeros/holes
while the second one uses the notion that zeros have a fixed real part. This is true in the
large size limit and is very much like a ``string hypothesis'' for the transfer matrices. Actually,
it is even more than a hypothesis as it has been widely tested.
Another major difference is in the number of equations. For the sine-Gordon case, one Destri-de Vega
equation describes the spectrum. A corresponding TBA set would couple an infinite number of
equations. This is better understood if one defines a counting function (see (\ref{def.Zn})) by
\begin{equation}
\mathcal{F}=\exp(i Z(u))=\frac{Q(u+\lambda)f(u-\frac{\lambda}2)\sin(u-\lambda)}
{Q(u-\lambda)f(u+\frac{\lambda}2)\sin(u+\lambda)}\ \frac{\kappa(u)}{\kappa(\lambda-u)}
\end{equation}
such that Bethe equations (\ref{TQBethe}) reduce to
\begin{equation}
\mathcal{F}(u_j)=-1
\end{equation}
The function $\kappa(u)$ is the solution of
\begin{eqnarray}
\kappa(u)\kappa(u+\lambda)&=&f_{-1}\ f_1\ s_{-2}\ s_2\nonumber \\
\kappa(u)&=&\kappa(\lambda-u)
\end{eqnarray}
and is given in \cite{NepomechiePearce}. This function plays mainly a normalization role and is not
very important for the present purposes.
Notice that in the definition of the counting function the rightmost factor is actually 1.
The first case of the T-system (\ref{funct}) can be written as
\begin{equation}
1+d^1(u)=\frac{s_{-1}s_1}{f_{-1}\ f_1\ s_{-2}\ s_2}D^1_0\ D^1_1
\end{equation}
therefore, using the T-Q relation (\ref{TQbordo}), one gets
\begin{equation}\label{ddvtba}
1+d^1(u)=\left[1+\mathcal{F}(u+\textstyle\frac{\lambda}2)\right]
\left[1+\frac{1}{\mathcal{F}(u-\frac{\lambda}2)}\right]
\end{equation}
Now it is obvious that the TBA approach has to solve for the whole hierarchy, here
represented by the left hand side, while the counting function of the DdV equation (right hand side)
is just related to the first fusion level.
This last equation is very important because it makes the bridge between the two formalisms.
Here it has been written for the $A_L$ models but, taking its limit $\lambda\rightarrow 0$ and rescaling
$u$ as done in the second part of \cite{FPW}, one adapts it to the XXX model. In that case the
hierarchy has no truncation, which means that $q$ is not bounded
above.
The six-vertex untruncated hierarchy holds for the sine-Gordon model; this makes clear that the TBA
equations for sine-Gordon form an infinite system, as previously indicated.
I have derived (\ref{ddvtba}) following the paper \cite{janosarpad}.
The Zamolodchikov scattering formulation of TBA equations \cite{zam91} was based on a dressed Bethe
ansatz, namely a Bethe ansatz based on physical particles and not on quasi-particles;
that formulation was not able to treat excited states. An interesting but extremely lengthy
method to describe excitations was formulated by Dorey and Tateo \cite{doreytateo}.
It is based on analytic continuations of the ground state TBA equations in the dimensionless
parameter $r=MR$, which is the product of the mass of the fundamental particle and the
space size. This parameter enters the TBA equations as
\begin{equation}
\nu(x)=r\cosh x
\end{equation}
and represents the momentum of a particle with (real) rapidity $x$. Analytic continuation
along a path that encloses the singularities of the equation and returns to the real axis produces
equations for the excited states:
pictorially, moving around branch point singularities lets one change the Riemann sheet and access a new
excitation level.
The difficulty of this approach is in finding systematic classifications of these singularities.
Clearly, the lattice formulation given here has no such difficulty and all excitations
are easily described.
The work of Bazhanov, Lukyanov and Zamolodchikov \cite{blz1}, \cite{blz_exc} was motivated by the
need of describing excitations in a more systematic way by constructing a Y-system \textit{ab-initio}
for a quantum field theory formulation of transfer matrices and $\mbox{\boldmath $Q$}$ operators.
They could get the Y-system. The methods shown in this chapter are an efficient way to
solve the Y-system, for the ground state and all the excitations.
There is another important difference, in the role of the function $f(x)$. Indeed, in the TBA
case, where $\alpha=-1$, the function $\exp (- f)$
indicates how the energy is distributed among the degrees of freedom and is real, as it
is a transfer matrix eigenvalue.
In the Kl\"umper-Pearce-Destri-de Vega case, $f$ is a counting function namely it controls the
density of Bethe roots (indicated with $\rho$)
\begin{equation}
\frac{dZ(u)}{du}\sim 2\pi \frac{I_{j+1}-I_j}{u_{j+1}-u_j}\sim 2\pi \rho(u)
\end{equation}
and is especially related to the momenta of the particles. In this case $\alpha=i$, so $\exp(i f)$ is a complex
function.
\section{Integrals of motion}
The DdV equation and the TBA equations allow the evaluation of energy, momenta and other
integrals of motion.
For the TBA equations, the expressions for the higher integrals of motion have been obtained in
\cite{fevgrinza}, thus providing explicit
expressions in the case of the tricritical Ising model with boundary perturbations
\begin{equation}\label{integrals}
C_n\ I_{2n-1}(\xi)
=\frac{2}{2n-1}\sum_{k=1}^{m_1}e^{-(2n-1)y_k^{(1)}} +(-1)^{n}\int_{-\infty}^{\infty}\frac{dy}{\pi}
\log(1+\hat{d}^1(y,\xi))\ e^{-(2n-1)y}
\end{equation}
The constant is taken from \cite{blz1}
\begin{equation}
C_n=2^{2-n}\ 3^{1 - 2 n}\ 5^{1 - n}\ \frac{(10 n-7)!!}{n!\, (4 n-2)!} \ \pi
\end{equation}
The case $n=1$ gives the energy $E=I_1(\xi)$ as in (\ref{sc_energ}).
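As a check, for $n=1$ the constant reduces to $C_1=2\cdot 3^{-1}\cdot 5^{0}\,\frac{3!!}{1!\,2!}\,\pi=\pi$,
and comparing (\ref{integrals}) at $n=1$ with (\ref{sc_energ}) indeed gives $C_1 I_1(\xi)=\pi E$,
hence $I_1(\xi)=E$.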
The TBA equations and quantization conditions are as in (\ref{sc_tba}) and (\ref{sc_psi}) with
$L=4$ and with the addition, on the right hand side of the equation for $\log \hat{d}^q(x)$, of the boundary
interaction term $\log g_q(x,\xi)$.
In particular, for the boundary flow $(1,2)\rightarrow (1,1)$ the choice is
\begin{equation}
g_1(x,\xi)=\tanh\frac{x+\xi}{2}\,,\qquad g_2(x,\xi)=1
\end{equation}
with $\xi=-\infty$ corresponding to the boundary condition (1,2), namely an unstable UV point,
and $\xi=+\infty$ to the (1,1), a stable IR point. This perturbative flow is triggered by the
boundary operator $\phi_{1,3}$, namely by an operator that acts on the border of the strip.
Notice also that $\hat{t}_1$ and $\hat{g}_1$ of \cite{fevgrinza} correspond to
$\hat{d}^2$ and $\hat{g}_2$ of the present paper; analogously $\hat{t}_2$ and $\hat{g}_2$ correspond to
$\hat{d}^1$ and $\hat{g}_1$.
These equations are strategically important to fix the correspondence with basis vectors in the conformal
field theory. Indeed, the difficulty with TBA equations is that they provide expressions for the
energy but no indication of the states. In conformal field theory, given the high amount of symmetry,
typically many states have the same energy. For example, in a standard cylinder quantization,
the vacuum sector of the tricritical Ising model has level degeneracies
\begin{equation}
1,\ 0,\ 1,\ 1,\ 2,\ 2,\ 4,\ \ldots
\end{equation}
How can one match lattice states and conformal field theory states?
The first few conformal field theory conserved charges are given in \cite{sasaki1987}, obtained
after quantization of classical integrals of motion of the modified KdV and sine-Gordon models.
This derivation is very important because it is naturally related to $\phi_{1,3}$ perturbations and
to the structure of the Y-system (\ref{ysyst2}). The integrals of motion are
\begin{eqnarray}
\mbox{\boldmath $I$}_1&=&L_0 -\frac{c}{24} \nonumber \\
\mbox{\boldmath $I$}_3&=&2\sum_{n=1}^{\infty}L_{-n}L_n +L_0^2-\frac{2+c}{12}L_0 +\frac{c(5c+22)}{2880}\label{cftintegrals}\\
\mbox{\boldmath $I$}_5&=&\sum_{m,n,p}:\!L_n L_m L_p\!:\delta_{0,m+n+p}+\frac{3}{2}\sum_{n=1}^{\infty}L_{1-2n}L_{2n-1}
+\sum_{n=1}^{\infty}\left(\frac{11+c}{6}n^2-\frac{c}{4}-1 \right)L_{-n}L_n \nonumber \\
&&-\frac{4+c}{8} L_0^2 +\frac{(2+c)(20+3c)}{576}L_0 - \frac{c(3c+14)(7c+68)}{290304}\nonumber
\end{eqnarray}
Expressions for the subsequent cases quickly become very complicated.
In terms of the generators of the Virasoro algebra, the space of states is built by linear
superpositions of the states (\ref{levels}).
A conceptually simple (but technically very difficult) problem of linear algebra is to find
common eigenstates of the integrals of motion on the Virasoro basis (\ref{levels}). For the first
few levels, an explicit expression has been evaluated in \cite{fevgrinza}, together with the
corresponding eigenvalues, providing a list of eigenvalues $I_n^{\text{CFT}}$. At higher energy,
the eigenvalues are given by solving algebraic equations of degree equal to the degeneracy.
Using the eigenvalues (\ref{integrals}), one has another list $I_n^{\text{TBA}}$.
Matching the two lists creates a one-to-one dictionary that in \cite{fevgrinza} was appropriately called
\textit{lattice-conformal dictionary}. The wording is inspired by \cite{melzer}.
I'm not aware of closed expressions for the integrals (\ref{integrals}), like the energy expression
(\ref{energy3}), even if I believe they should exist.
For this reason, numerical evaluations have been used. Notice that, even if $I_n^{\text{TBA}}$ and
$I_n^{\text{CFT}}$ are evaluated numerically, the matching is exact because the spectrum is discrete,
as one can appreciate looking at the values given for the vacuum sector in table~\ref{t:lista}.
\openin1=dati_I3_11cft.tex
\openin2=dati_I3_11tba.tex
\openin3=dati_I5_11cft.tex
\openin4=dati_I5_11tba.tex
\newcommand\ra{\read1 to \datoa \datoa}
\newcommand\rb{\read2 to \datob \datob}
\newcommand\rc{\read3 to \datoc \datoc}
\newcommand\rd{\read4 to \datod \datod}
\renewcommand{\v}{|0\rangle}
\begin{sidewaystable}
\caption{Comparison of the eigenvalues of $\mbox{\boldmath $I$}_3$ and $\mbox{\boldmath $I$}_5$ from conformal field theory and from TBA in the vacuum sector of the tricritical Ising model. The left column contains the level degeneracy (l.d.) as indicated in the conformal character: $d q^{l}$ \label{t:lista}}
$$
\begin{array}{@{~~}r@{\hspace{5mm}}r@{~\longleftrightarrow~}l@{~~~~~~}l@{~~~~~~}l@{~~~~~~}l@{~~~~~~}l}
\hline \rule{0mm}{6mm}
\text{l.d.} & \multicolumn{2}{l}{\text{lattice-conformal dictionary}}
&\ra&\rb&\rc&\rd\\[2mm] \hline
\rule{0mm}{8mm} 1 & (~) &\v &\ra&\rb&\!\!\!\!\rc&\!\!\!\!\rd \\[8mm]
1q^2 & (00) & L_{-2}\v &\ra&\rb&\rc&\rd \\[8mm]
1q^3 & (10) & L_{-3}\v &\ra&\rb&\rc&\rd \\[8mm]
2q^4 & (20) & 3(\frac{4+\sqrt{151}}{5} L_{-4}+2\, L_{-2}^2)\v &\ra&\rb&\rc&\rd \\[3mm]
& (11) & 3(\frac{4-\sqrt{151}}{5} L_{-4}+2\, L_{-2}^2)\v &\ra&\rb&\rc&\rd\\[8mm]
2q^5 & (30) &(\frac{7+\sqrt{1345}}{2} L_{-5}+20\, L_{-3}L_{-2})\v &\ra&\rb&\rc&\rd \\[3mm]
& (21) &(\frac{7-\sqrt{1345}}{2} L_{-5}+20\, L_{-3}L_{-2})\v &\ra&\rb&\rc&\rd \\[8mm]
4q^6 & (40) & (11.124748\, L_{-6}+9.6451291\, L_{-4}L_{-2} &\ra&\rb&\rc&\rd \\
& \multicolumn{2}{l}{\hspace{27mm}+4.4320186\, L_{-3}^2+ L_{-2}^3)\v } &&&& \\[4mm]
& (31) &(-4.9655743\, L_{-6} + 2.3354391\, L_{-4}L_{-2} &\ra&\rb&\rc&\rd \\
& \multicolumn{2}{l}{\hspace{27mm}+0.71473858\, L_{-3}^2+ L_{-2}^3)\v} &&&& \\[4mm]
& (22) &(0.66457527\, L_{-6} - 1.2909210\, L_{-4}L_{-2} &\ra&\rb&\rc&\rd\\
& \multicolumn{2}{l}{\hspace{27mm}-1.2605013\, L_{-3}^2+L_{-2}^3)\v} &&&& \\[4mm]
& (0000|00) &(-1.6612491\, L_{-6} - 4.0646472\, L_{-4}L_{-2} &\ra&\rb&\rc&\rd \\
& \multicolumn{2}{l}{\hspace{27mm}+ 1.4118691\, L_{-3}^2 + L_{-2}^3)\v} &&&&\\[3mm] \hline
\end{array}
$$
\end{sidewaystable}
\closein1\closein2\closein3\closein4
One can wonder about the fate of such integrals of motion when a
relevant perturbation is switched on. In the present case, where the $\phi_{1,3}$
boundary perturbation is concerned, the TBA formulation is preserved because, as already discussed,
this perturbation generates flows for which the Y-system and the functional equations still hold.
This is well known on the lattice side.
It is also known that at special points of the sine-Gordon coupling, one describes the $\phi_{1,3}$
perturbations of the minimal models (\ref{minimal}) therefore it is natural to expect
that the quantities (\ref{cftintegrals}) are compatible with such a perturbation, being derived from
the sine-Gordon integrals of motion.
Among the possible families of involutive integrals of motion allowed in the CFT, these
are the ones whose ranks (or Lorentz spins, namely the indices $n$) are predicted to be preserved
by the counting argument in \cite{Zam-adv}. The perturbed operators are given in \cite{blz_exc}.
Numerical investigations of $I_3(\xi)$ were done in \cite{fevgrinza}.
The expressions (\ref{integrals}) actually work for all the other models described by (\ref{sc_tba}).
I would like to mention that very similar expressions hold in the DdV formalism \cite{marcodavide}.
These authors also do not find a closed form for (\ref{integrals}), except at a free fermion point.
In the Ising case $A_3$ a closed form is known in terms of poly-logarithms \cite{nigro}, but
this is a free fermion!
\section{Numerical considerations}
The form of equations (\ref{struttura}) is particularly suited to be solved by iteration. Indeed,
starting from an initial guess
\begin{equation}
f_0=\nu(x)
\end{equation}
one iterates by
\begin{equation}\label{itera}
f_{k+1}(x)=\nu(x)+ \chi(x,\theta_i)+K\star \log(1+\exp(\alpha f_k))
\end{equation}
up to the required precision. This looks very easy and, sometimes, it is so. The difficulties come
when there are sources to fix, in particular when they lie outside the real axis, in which case
one also experiences a big increase of the computational time.
The difficulty is due to the mixing of the ``functional'' problem, namely finding a function $f$,
with the ``sources'' problem, namely finding the sources. Then, one has to iterate at the same time
on $f$ and on $\theta_k$.
The numerical approach has been used in many occasions. In \cite{FPR2}, there is a wide discussion
in relation to the TBA case. See also the paper \cite{FFGR2} for the DdV case.
Here I would like to discuss the question of convergence.
Does the iteration (\ref{itera}) converge? Does it converge to the correct solution, if
there are multiple solutions?
The {\em contraction mapping theorem} states that if a mapping $\mathcal{M}: V \rightarrow V$ on a
complete metric space $V$ is a contraction, then there exists a unique fixed point
$f^{*}=\mathcal{M}(f^{*})$ and
all the sequences obtained under iteration starting from an arbitrary initial point $f\in V$
converge to the fixed point.
In practice, the derivative $|\mathcal{M}'(f)|$ measures the strength of the contraction and the
rate of convergence. The case $|\mathcal{M}'(f)|<1$ is a contraction while $|\mathcal{M}'(f)|>1$
is a dilatation. Values close to 0 converge quickly, values close to 1 converge slowly.
If one could show that the mapping $f_{k+1}=\mathcal{M}(f_k)$ is a contraction, then the answer to the
previous questions would be affirmative. Unfortunately, the mapping is not easy to evaluate.
Restricting to the TBA case, one can take some steps forward. By varying $f$ in (\ref{itera})
one has
\begin{equation}\label{itera2}
\delta f_{k+1}(x)=\int \frac{1}{2\pi\cosh(x-y)}\ \frac{-\exp(-f_k)}{1+\exp(-f_k)} \delta f_k(y)\ dy
\end{equation}
By the integral
$$
\int_{-\infty}^{\infty} \frac{1}{\cosh t}\ dt=\pi \,,
$$
the first factor in (\ref{itera2}) integrates (in $y$) to $\frac12$, therefore ``on average'' the map is a contraction.
If $f_k$ is real, as happens in the ground state, the last fraction has absolute value smaller
than 1. This suggests that the
integration acts globally as a contraction with factor smaller than $\frac12$, therefore the iteration
converges to the unique solution.
Numerical calculations have shown that, in the absence of sources, the convergence of the iteration
equations is usually fast. In the $L=4$ case, namely the tricritical Ising model, 30 iterations are
sufficient to reach 9 significant digits, which confirms the estimate of a contraction factor of $1/2$
\begin{equation}
2^{-30}\sim 10^{-9}
\end{equation}
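To make the scheme concrete, here is a minimal sketch, in Python, of the source-free iteration (\ref{itera})
for a toy single-node system with the kernel $1/(2\pi\cosh(x-y))$ and a massive driving term $\nu(x)=r\cosh x$;
the grid parameters and the value of $r$ are illustrative choices and the model is not one of the specific
cases discussed above. The printed ratio of successive corrections gives an empirical estimate of the
contraction factor.
\begin{verbatim}
import numpy as np

# Source-free iteration f_{k+1} = nu + K * log(1 + exp(-f_k)),
# with K(x,y) = 1/(2 pi cosh(x-y)) and nu(x) = r cosh(x).
cutoff, npts, r = 20.0, 2001, 0.1
x = np.linspace(-cutoff, cutoff, npts)
dx = x[1] - x[0]
nu = r * np.cosh(x)
K = dx / (2.0 * np.pi * np.cosh(x[:, None] - x[None, :]))  # discretized kernel

f, prev = nu.copy(), None          # initial guess f_0 = nu
for k in range(1, 200):
    f_new = nu + K @ np.log1p(np.exp(-f))
    change = np.max(np.abs(f_new - f))
    f = f_new
    if prev is not None:
        print(k, change, change / prev)  # ratio ~ contraction factor, bounded by about 1/2
    if change < 1e-12:
        break
    prev = change
\end{verbatim}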
In the presence of sources, the intuitive evaluation breaks down because one also has to iterate on the
source positions. This is related to the fact that $f_k$ can acquire an imaginary part that is a multiple of $\pi$,
although $1+\exp(-f)>0$ as in (\ref{positivita}).
In numerical calculations, one immediately observes the need to iterate longer, up to
a hundred times when several sources are considered. Moreover, in \cite{FPR2} it was pointed out that
certain algorithms to fix the sources do not converge. The problem does not appear for the function
$f$ itself. When the algorithms for the sources do not converge, they appear to be dilatations, in which case
the iteration moves away from the fixed point and the solution must be found by other means.
This fact is curious and could become more serious in other models with a different kernel or source
structure, up to the point of compromising the unicity of the solution.
In the models considered here, a possible, although incomplete, argument goes as follows. In (\ref{sc_psi})
one neglects
the integrals, which often appear to be much smaller than the source terms. Then, an equation with
$q>1$ looks like
\begin{equation}
\exp(-i\ \pi\ n_k^{(q)})=
\prod_{j=1}^{m_{q-1}} \tanh\Big(\frac{\hat{y}_k^{(q)}-\hat{y}_j^{(q-1)}}{2}-i\frac{\pi}{4}\Big)
\prod_{h=1}^{m_{q+1}} \tanh\Big(\frac{\hat{y}_k^{(q)}-\hat{y}_h^{(q+1)}}{2}-i\frac{\pi}{4}\Big)
\end{equation}
and $\hat{y}_k^{(q)}$ can be extracted for example by
\begin{gather}
\tanh \Big(\frac{\hat{y}_k^{(q)}-\hat{y}_1^{(q-1)}}{2}-i\frac{\pi}{4}\Big)=Y(\hat{y}_k^{(q)})\\
Y(\hat{y}_k^{(q)})\mathop{=}^{\text{def}}\exp(-i\ \pi\ n_k^{(q)})
\prod_{j=2}^{m_{q-1}} \coth\Big(\frac{\hat{y}_k^{(q)}-\hat{y}_j^{(q-1)}}{2}-i\frac{\pi}{4}\Big)
\prod_{h=1}^{m_{q+1}} \coth\Big(\frac{\hat{y}_k^{(q)}-\hat{y}_h^{(q+1)}}{2}-i\frac{\pi}{4}\Big)
\end{gather}
Notice that $Y()$ is a function of modulus one. Inverting the $\tanh$ leads to the iterative form
\begin{equation}\label{iter}
\hat{y}_k^{(q)}=\hat{y}_1^{(q-1)}+i\frac{\pi}{2}+\log\frac{1+Y(\hat{y}_k^{(q)})}{1-Y(\hat{y}_k^{(q)})}=\mathcal{Y}_k^{(q)}(\hat{y}_k^{(q)})
\end{equation}
The derivative is
\begin{equation}
\frac{d\mathcal{Y}_k^{(q)}}{d\hat{y}_k^{(q)}}=\frac{2Y'(\hat{y}_k^{(q)})}{1-(Y(\hat{y}_k^{(q)}))^2}
\end{equation}
If this derivative is larger than 1, (\ref{iter}) becomes a dilatation. Evaluating it is not easy.
If the sources can be arranged in such a way that $Y(\hat{y}_k^{(q)})$ is sufficiently close to $\pm 1$,
the denominator can be made close to zero.
This probably means that the derivative can be quite large, so that the map is a dilatation, but this
analysis is not conclusive.
Several numerical investigations have shown that there are source arrangements for which iterative algorithms
such as (\ref{iter}) do not converge at all. In these cases, other methods are needed. For example, one first
estimates the interval in which the source is expected and then locates it by the bisection method.
Of course, this uses much more computer time than an iterative method.
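The bisection fallback is elementary; a schematic Python version (with a made-up monotone test function, only to fix the ideas) is:
\begin{verbatim}
import numpy as np

# Schematic bisection fallback: solve g(y) = y - calY(y) = 0 on an interval
# [a, b] that is expected to bracket the source (g changes sign there).
def bisect(g, a, b, tol=1e-12, max_iter=200):
    ga, gb = g(a), g(b)
    assert ga*gb < 0, "the interval must bracket the zero"
    for _ in range(max_iter):
        m = 0.5*(a + b)
        gm = g(m)
        if abs(gm) < tol or (b - a) < tol:
            return m
        if ga*gm < 0:
            b, gb = m, gm
        else:
            a, ga = m, gm
    return 0.5*(a + b)

# made-up monotone test function, only to show the mechanics
print(bisect(lambda y: np.tanh(y) - 0.3, -5.0, 5.0))
\end{verbatim}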
In \cite{FPR2} a similar situation was described in relation to a boundary function $g(x,\xi)$.
The treatment given here is complementary to that one and, in some sense, is more general because
it does hold even when boundary parameters are absent.
In the frame of the DdV equation, in \cite{noiNP} and \cite{noiNP2} the case of ``special holes''
was left unsolved: in that case, the functional equation itself seems to fail to converge or,
better, it converges to a meaningless result, possibly for the reasons indicated here.
The lesson of this analysis is that
iterations can give rise to unexpected problems and this could have consequences for the uniqueness
of the solution. On the other hand, in my numerical TBA calculations, I never observed problems
of uniqueness.
\section{Discussion}
I have introduced the thermodynamic Bethe ansatz method, sketched its lattice derivation
and discussed the relevance of the Y-systems to summarize the symmetries of the model.
I have presented the integrals of motion. The methods of DdV and TBA have been compared and numerical
considerations have been shown.
Personally, I have worked on TBA equations for a fair amount of time (2000 to 2006). My contribution
has been important.
\begin{itemize}
\item I have derived and treated TBA equations for the boundary flows of the tricritical Ising model.
\item I have established the lattice-conformal dictionary.
\item I extended the TBA to all $A_L$ models and also to the XXX model.
\item I analyzed how the zeros move in consequence of boundary flows.
\item I derived the conformal characters (conformal partition functions) from TBA.
\item I have done extensive high precision numerical calculations with TBA equations, in
the presence of many sources and of several coupled equations.
\item I worked on the physical combinatorics of quasi-particles. They form a lattice gas whose
partition function is the conformal character.
\end{itemize}
The first development that I propose is to make systematic the lattice-conformal dictionary.
This is probably related to the realization of a lattice Virasoro algebra. It will lead to a
better understanding of the space of states.
The second is related to the quasi-particles and the physical combinatorics. In \cite{fp},
I have already experimented with algebraic formalisms to express these quasi-particles.
The formulation was primitive but other authors worked on it and proposed more effective
formalisms, see \cite{mathieu} and the papers that followed it.
I think the clarification of an algebraic formalism for the quasi-particles would help
to work on the space of states of the minimal models and their perturbations.
\chapter{Hubbard model and integrability in $\mathcal{N}=4$ SYM\label{c:hubbard}}
\newcommand{\cc}[1]{c_{#1}^{\phantom{\dagger}}}
\newcommand{\cd}[1]{c_{#1}^{\dagger}}
\newcommand{\nn}[1]{n^{\phantom{\dagger}}_{#1}}
\newcommand{^{\uparrow}}{^{\uparrow}}
\newcommand{^{\downarrow}}{^{\downarrow}}
\newcommand{\mathcal{V}}{\mathcal{V}}
\newcommand{\mathcal{W}}{\mathcal{W}}
\newcommand{\wt}[1]{\widetilde{#1}}
\newcommand{\mbox{${\mathbb M}$}}{\mbox{${\mathbb M}$}}
\newcommand{\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}}{\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}}
\newcommand{\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}}{\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}}
The Hubbard model was introduced in order to investigate strongly correlated
electrons in matter \cite{Hubbard,Gutzwiller} and, since then, it has been widely studied,
essentially due to its connection with condensed matter physics.
It has been used to describe the Mott metal-insulator transition \cite{Mott,Hubbard3}, high critical
temperature $T_c$ superconductivity \cite{Anderson,Affleck}, band magnetism \cite{Lieb}
and chemical properties of aromatic molecules \cite{heilieb}.
The literature on the Hubbard model being rather large, I refer to the books \cite{Monto,EFGKK} and
references therein. Exact results have been mostly obtained in the case of the
one-dimensional model, which enters the framework of our study. In
particular, the 1D model Hamiltonian eigenvalues have been obtained by means of the coordinate
Bethe Ansatz by Lieb and Wu \cite{LiebWu}.
One of the main motivations for the present study of the Hubbard model and
its generalisations is the fact that it has unexpectedly appeared in the context of $\mathcal{N}=4$ super
Yang-Mills theory.
This is a superconformal gauge theory in four dimensions, conjectured to be dual to
a string theory in a $\mbox{AdS}_5\times\mbox{S}^5$ background, a ten dimensional space.
Indeed, it was noticed in \cite{Rej:2005qt} that the Hubbard model at half-filling,
when treated perturbatively in the coupling, reproduces the long-ranged integrable spin chain
of \cite{Beisert:2004hm} as an effective theory. It thus provides a localisation of the long-ranged
spin chain model and gives a potential solution to the problem of describing interactions which are
longer than the length of the spin chain. The Hamiltonian of this chain was conjectured in
\cite{Beisert:2004hm} to be an all-order description of the dilatation operator of $\mathcal{N}=4$ super
Yang-Mills in the $SU(2)$ sector. That is, the energies of the spin chain were conjectured
to be proportional to the anomalous dimensions of the gauge theory operators in this sector.
Later, it was shown that, starting with the fourth
loop terms, the Hubbard model is incomplete in describing the dilatation operator
\cite{bes2006}, certain highly nontrivial phase factors being required.
But this wasn't the end of the story!
The full factorized scattering matrix of the gauge theory has been studied and Beisert has shown the
relation of this
S-matrix with the Shastry R-matrix of the Hubbard model \cite{beis2006}. This means that the integrable
structure of the Hubbard model enters in the conjectured integrable structure of the SYM theory.
In this chapter, I will present two different approaches to the Hubbard model, one based on the
Kl\"umper-Batchelor-Pearce-Destri-de Vega method \cite{FFGR, FFGR2} and one based on
R-matrices \cite{DFFR}.
The first one has led to the evaluation of energies for the antiferromagnetic state.
It also allows one to control the order of the limits of strong coupling and large lattice size.
The large size of the model is easily treated at all values of the coupling.
This is important as in the SYM frame it corresponds to very long monomials of local operators, totally
inaccessible with ordinary diagrammatic techniques.
For the second approach, in 2005-2006 I thought there might be the possibility that some integrable extension
of the Hubbard model could be put in relation with other subsectors of the $\mathcal{N}=4$ super Yang-Mills theory,
given that the Hubbard model itself had been observed in relation to the $SU(2)$ sector.
Here I will discuss a general approach to construct a number of supersymmetric
Hubbard models. Each of these models can be treated perturbatively and thus
gives rise to an integrable long-ranged spin chain in the high coupling limit.
Other symmetric or supersymmetric generalizations of the Hubbard model have been
constructed, see e.g. \cite{EKS}. These approaches mainly concern high
$T_c$ superconductivity models. They essentially use the $gl(1|2)$ or $gl(2|2)$ superalgebras,
which appear as the symmetry algebras of the Hamiltonian of the model. The approach
I have adopted in \cite{DFFR}
however is different, being based on transfer matrices and quantum inverse scattering framework.
It ensures the integrability of the model and allows one to obtain local Hubbard-like
Hamiltonians for general $gl(N|M)$ superalgebras. After a Jordan-Wigner transformation, these
Hamiltonians appear to describe one or more families of charged and chargeless fermions.
\section{The Hubbard model\label{sect:Hubbard}}
The Hubbard model, introduced in \cite{Hubbard,Gutzwiller}, describes hopping electrons on a lattice,
with an ultralocal repulsive potential that implements a screened Coulomb repulsion, with $U>0$.
The 1-dimensional Hamiltonian is given by
\begin{equation}\label{oldHubb}
H=-t \sum _{i=1}^L \sum _{\rho=\uparrow , \downarrow}
\left(e^{i\phi} \cd{\rho,i}\cc{\rho,i+1}+e^{-i\phi}\cd{\rho,i+1}\cc{\rho,i}\right)
+U \sum_{i=1}^L \big(1-2\nn{\uparrow,i}\big)\big(1-2\nn{\downarrow,i}\big)
\end{equation}
where $\cc{},\cd{} $ are usual fermionic operators, $i$ indicates the lattice site and $\rho$ is the
``spin orientation''. I will always use periodic boundary conditions.
The physical idea behind this Hamiltonian is that the metallic positive ions create the
crystalline structure. Each ion puts up to two electrons in the conductive band. Ions are much
heavier than electrons, so for most investigations they can be considered static; thus the lattice
has no dynamics.
Electrons in the conductive band repel each other (of course!) but they also experience major
screening effects.
Indeed, an electron feels the repulsion of the other electrons but also the strong, periodic
attraction by the ions.
This makes the Coulomb repulsion short-ranged. In the Hubbard model, the electronic repulsion is
modeled with an ultralocal term: electrons interact only if they are on the same site.
Pauli exclusion then implies
that they interact only if they have opposite spin. Pauli exclusion also implies that
a maximum of $2L$ electrons can be accommodated in the lattice, in which case it is ``fully
filled''. I will often use the ``half filled'' case, which contains precisely $L$ electrons.
The phase $\phi$ in the Hamiltonian represents a uniform magnetic field. For many purposes, one
can put it to zero. In the approach of \cite{FFGR2} this phase was introduced to fit with
the Hubbard model used in \cite{Rej:2005qt}.
There are some features that can be explored without much computational effort. If $U=0$, the Hamiltonian
describes free fermions (electrons). The first term in (\ref{oldHubb}) describes hopping between
nearest neighbor sites in such a way that
electrons can freely move around, yielding a conductor. On the other hand, when
$U$ becomes very large, it appears that the total energy is lower if one can make negative the
contribution from the potential term
$\big(1-2\nn{\uparrow,i}\big)\big(1-2\nn{\downarrow,i}\big)$, namely if on each site there is just one
electron. At half filling and large $U$, the ground state has precisely this form with one electron
per
site, no empty sites and no doubly occupied sites. At zero temperature and large $U$ this state is
fully frozen because overturning a spin would require an amount of energy of $U$ to create a
state with a doubly occupied site. This ``frozen'' state describes a Mott insulator namely a system
whose conductive band is not empty but where the Coulomb interaction forbids any electronic
displacement. At positive temperature, the system is always conductive because thermal excitations
can provide the amount of energy needed to create vacancies and double occupancies.
The large $U$ regime is the spin chain limit. Indeed, the Hubbard model looks very close to a
Heisenberg XXX model: one (quantum) spin per site, up or down.
Notice that if the lattice is not half filled, there is conduction whatever the value of $U$.
The underlying algebraic structure leads to superalgebras. As a first instance, I consider a single
fermion
\begin{equation}\label{fermi}
\{\cc{},\cd{}\}=\mbox{\boldmath $I$} \,,\qquad n=\cd{} \cc{}
\end{equation}
where $\mbox{\boldmath $I$}$ is the identity operator, $n$ is the number operator and $\{,\}$ is the anticommutator.
The operators $\cc{},\cd{},\mbox{\boldmath $I$},n$ form a realization of a $gl(1|1)$ superalgebra.
One way to see this is to write down the whole set of ``commutation rules''
\begin{equation}
[n,c]=-c\,,\qquad [n,\cd{}]=\cd{}\,,\qquad [X,\mbox{\boldmath $I$}]=0 \mbox{ ~for~ } X\in\{n,c,\cd{}\}
\end{equation}
These and (\ref{fermi}) can be realized by the two dimensional matrices of $gl(1|1)$
$$
E_{12}=\begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}=c \,,\quad
E_{21}=\begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix}=\cd{} \,,\quad
E_{11}=\begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix}=n\,,\quad
E_{22}=\begin{pmatrix} 0 & 0\\ 0 & 1\end{pmatrix}=\mbox{\boldmath $I$}-n\,.
$$
On each site of the Hubbard lattice there are two ``spin polarizations'' so on each site
there is a $gl(1|1)\oplus gl(1|1)$ superalgebra and, on the whole lattice, the fermionic structure
\begin{equation}
\{\cc{\rho,i},\cd{\rho',j}\}=\delta_{\rho,\rho'}\delta_{i,j} \, \qquad
\{\cc{\rho,i},\cc{\rho',j}\}=\{\cd{\rho,i},\cd{\rho',j}\}=0
\end{equation}
is $L$-times the tensor product of the one site structure.
I can easily represent the fermionic structure by a graded tensor product of the matrices
\begin{equation}
E_{12;\rho,i}=\cc{\rho,i} \,,\quad
E_{21;\rho,i}=\cd{\rho,i} \,,\quad
E_{22;\rho,i}=\nn{\rho,i}=\cd{\rho,i}\cc{\rho,i} \,,\quad
E_{11;\rho,i}=1-\nn{\rho,i}=\cc{\rho,i}\cd{\rho,i} \label{JW}
\end{equation}
When it occurs, the second pair of labels $\rho,i$ indicates the spin polarization $\rho$
and the site $i$.
The matrices $E_{12}\,,E_{21}$ are taken of fermionic character (they satisfy anticommutation relations whatever their spin and space labels are) and $E_{11}\,,E_{22}$ are taken of bosonic character (they always enter commutation relations whatever their spin and space labels are).
The relation (\ref{JW}) is a graded Jordan-Wigner transformation\footnote{The ordinary
Jordan-Wigner transformation is
$\displaystyle\cd{\uparrow,i}=\sigma^{-}_{\uparrow,i} \prod _{k>i} \sigma^{z}_{\uparrow,k}$
for the up polarization; an additional term occurs for the down polarisation.}
and respects periodic boundary conditions\footnote{The standard one violates periodicity.}.
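The fermionic structure described here can be checked mechanically. The following Python sketch (an illustration of mine, which uses the standard non-graded Jordan-Wigner string and open boundary conditions, precisely to sidestep the boundary subtlety mentioned in the footnotes) builds the $2L$ modes, verifies the canonical anticommutation relations and assembles the Hamiltonian (\ref{oldHubb}) with $\phi=0$:
\begin{verbatim}
import numpy as np
from functools import reduce

# Toy check of the fermionic structure: standard (non-graded) Jordan-Wigner
# string for 2L modes, ordered (up,1),(down,1),(up,2),(down,2),...
L = 3
n_modes = 2*L
sz = np.diag([1., -1.])
a2 = np.array([[0., 1.], [0., 0.]])      # 2x2 lowering block
I2 = np.eye(2)

def c_op(m):
    """Annihilation operator of mode m, with the Jordan-Wigner string."""
    factors = [sz]*m + [a2] + [I2]*(n_modes - m - 1)
    return reduce(np.kron, factors)

c = [c_op(m) for m in range(n_modes)]    # c[2*i]: up at site i, c[2*i+1]: down
Id = np.eye(2**n_modes)
for a in range(n_modes):                 # canonical anticommutation relations
    for b in range(n_modes):
        assert np.allclose(c[a] @ c[b].T + c[b].T @ c[a], Id*(a == b))
        assert np.allclose(c[a] @ c[b] + c[b] @ c[a], 0.0)

# Hubbard Hamiltonian (phi = 0; open boundaries, to avoid the boundary string)
t, U = 1.0, 2.0
n = [ci.T @ ci for ci in c]
H = sum(-t*(c[2*i+s].T @ c[2*(i+1)+s] + c[2*(i+1)+s].T @ c[2*i+s])
        for i in range(L-1) for s in (0, 1))
H = H + U*sum((Id - 2*n[2*i]) @ (Id - 2*n[2*i+1]) for i in range(L))
N_up = sum(n[2*i] for i in range(L))
print(np.allclose(H, H.T), np.allclose(H @ N_up, N_up @ H))   # True True
\end{verbatim}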
I now rewrite the Hubbard Hamiltonian in the matrix language
\begin{equation}\label{hubb}
H = -t \sum _{i=1}^L \sum _{\rho=\uparrow , \downarrow}
\left( E_{21;\rho,i}\ E_{12;\rho,i+1}+E_{21;\rho,i+1}\ E_{12;\rho,i}\right)
+U \sum_{i=1}^L
\big(E_{11;\uparrow,i}-E_{22;\uparrow,i}\big)\big(E_{11;\downarrow,i}-E_{22;\downarrow,i}\big)
\end{equation}
and I split it into the sum of the two polarizations
\begin{gather}
H= H_{\text{XX}}^{\uparrow} +H_{\text{XX}}^{\downarrow} +U \sum_{i=1}^L
\big(E_{11;\uparrow,i}-E_{22;\uparrow,i}\big)\big(E_{11;\downarrow,i}-E_{22;\downarrow,i}\big)\;; \label{hubbspin}\\
H_{\text{XX}}^{\rho} = -t \sum _{i=1}^L
\left( E_{21;\rho,i}\ E_{12;\rho,i+1}+E_{21;\rho,i+1}\ E_{12;\rho,i}\right)\;. \nonumber
\end{gather}
Taking one polarization of the kinetic term one easily sees
\begin{gather}
E_{21;\rho,i}\ E_{12;\rho,i+1}+E_{21;\rho,i+1}\ E_{12;\rho,i}=
\frac12 \Big[ E_{x;\rho,i} \ E_{x;\rho,i+1}+ E_{y;\rho,i}\ E_{y;\rho,i+1} \Big]\\[3mm]
E_{x;\rho,i}=\begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix}_{\rho,i} \,,\quad
E_{y;\rho,i}=\begin{pmatrix} 0 & -i\\ i & 0\end{pmatrix}_{\rho,i} \notag
\end{gather}
the appearance of two (graded) XX spin chain Hamiltonians\footnotemark,
one for each polarisation, within the Hubbard model.
\footnotetext{At this point it should be clear that the difference between graded and non
graded cases appears when boundary effects are observed; the thermodynamic limit usually ignores such terms, being sensitive to bulk contributions only.}
It turns out that the breaking of (\ref{hubbspin}) into the Hamiltonian of two XX models plus a
potential term allows one to generalise this model to higher algebraic structures by maintaining
integrability\footnote{The flux $\phi$ does not affect integrability properties.}.
Exact investigations on the Hubbard model required many years of work.
A first hint of integrability came from the coordinate Bethe Ansatz solution obtained by Lieb and
Wu \cite{LiebWu} in 1968 but a full understanding of it by an R-matrix satisfying a Yang-Baxter
equation came much later. An R-matrix was first constructed by Shastry \cite{shastry,JWshas}
and Olmedilla et al.~\cite{Akutsu}, by coupling the R-matrices of two independent $XX$ models,
through a term depending on the coupling constant $U$ of the Hubbard potential.
The proof of the Yang-Baxter relation for the R-matrix was given by Shiroishi and Wadati \cite{shiro2}
in 1995.
The construction of the R-matrix was then generalised to the $gl(N)$ case by Maassarani et al.,
first for the XX model \cite{maasa} and then for the $gl(N)$ Hubbard model
\cite{maasa2,maasa3}. Later, I will use this approach to generalize to $gl(N|M)$ models.
The Lieb-Wu equations \cite {LiebWu, Rej:2005qt} for the Hubbard model are,
in the half-filling case,
\begin{eqnarray}
e^{i\hat{k}_jL}&=&\prod _{l=1}^M \frac {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)-\frac {i}{2}} {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)+\frac {i}{2}} \nonumber \\
\prod _{j=1}^L \frac {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)+\frac {i}{2}} {u_l - \frac {2t}{U}\sin
(\hat{k}_j+\phi)-\frac {i}{2}}&=& \mathop{\prod _ {m=1}}_{m \not=l}^M \frac
{u_l-u_m+i}{u_l-u_m-i} \, , \label {lw}
\end{eqnarray}
where $M$ is the number of down spins; here they are modified to include the phase. The spectrum
of the Hamiltonian is then given in terms of the momenta $\hat{k}_j$ by the
{\it dispersion relation}
\begin{equation}\label{energia}
E=-2 t \sum _{j=1}^{L} \cos (\hat{k}_j+\phi) \,.
\end{equation}
Starting from these ``Bethe equations'', I will present two coupled nonlinear integral equations
for the antiferromagnetic state of the model. These equations are derived in \cite{FFGR2}
in the same framework as the Kl\"umper-Batchelor-Pearce-Destri-de Vega approach of chapter~\ref{c:nlie}.
For completeness, it is important to point out that the thermodynamics (infinite length
$L$, but finite temperature) of the Hubbard model has been studied in \cite {KB,JKS} by means of three
nonlinear integral equations. That approach was based on the equivalence of the quantum
one-dimensional Hubbard model with the classical two-dimensional Shastry model. The work
presented here was oriented towards the understanding of the gauge theory: the objective was to obtain energies of
the Hubbard model at zero temperature but at any value of the lattice size $L$, therefore the
approach of \cite {KB,JKS} was not appropriate.
There are some features of (\ref{lw}) that deserve some attention. In the large coupling limit
(large $U$), the second set of equations decouples from the first one and coincides with the
XXX Bethe equations (\ref{betheTQ}), once the limit $\lambda\rightarrow 0$ has been taken.
This is consistent with the argument, given earlier, that the spin chain limit of the Hubbard model
is the XXX model.
In the opposite limit, $U=0$, the second group becomes useless because the first group is enough to fix
$\hat{k}_j$ and the energy. The first group reduces to
\begin{equation}
e^{i\hat{k}_jL}=1
\end{equation}
which is the box quantization of free particles. The momenta are all different, as usual in the Bethe
ansatz, therefore the particles are fermions. Indeed, in this limit the Hamiltonian describes free
fermions.
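As a toy check of this limit, the quantized momenta and the corresponding half-filled ground-state energy are immediate to compute (illustrative values $t=1$, $L=12$, $\phi=0$):
\begin{verbatim}
import numpy as np

# Toy check of the U = 0 limit (phi = 0): k_j = 2*pi*n_j/L and the half-filled
# ground state fills the L lowest levels of the free dispersion -2*t*cos(k).
t, L = 1.0, 12
k = 2*np.pi*np.arange(L)/L
levels = np.sort(np.repeat(-2*t*np.cos(k), 2))   # factor 2: the two spin states
print(levels[:L].sum())                          # free-fermion ground-state energy
\end{verbatim}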
From this analysis, one can see that the Lieb-Wu equations describe the phenomenon of spin-charge
separation. Indeed, the momenta $\hat{k}_j$ are as many as the electrons, so they carry charge.
Instead, the ``rapidities'' $u_{\ell}$ are as many as the down spins, so they carry spin.
In the spin limit the quasiparticles described by $\hat{k}_j$ disappear from the equations,
while at the free fermion point $U=0$ it is the opposite.
\subsection{$\mathcal{N}=4$ super Yang-Mills and AdS/CFT}
This superconformal field theory in its planar limit, namely the limit of an infinite number
of colors, is probably an integrable theory. It seems related to the Hubbard model, as it was
first observed in
\cite{Rej:2005qt}. The following relation between coupling constants was proposed
\begin{equation}\label{couplings}
\frac{t}{U}=\frac{g}{\sqrt{2}}=\frac{\sqrt{\lambda}}{4\pi^2}\,,\qquad U=-\frac{1}{g^2}=
-\frac{8\pi^2}{\lambda}
\end{equation}
where $\lambda$ is the 't Hooft coupling of the theory and $g$ is related to the SYM coupling.
The Hubbard lattice must be taken half-filled.
The energy (\ref{energia}) is related to the anomalous dimensions $\gamma_{\text{SYM}}$
of the super Yang-Mills operators in the scalar sector $SU(2)$ by
\begin{equation}
\gamma_{\text{SYM}}=\frac{\lambda}{8\pi ^2}E
\end{equation}
The lattice size $L$ is identified with
the ``length'' of the operators in terms of the fundamental scalar fields of the theory.
This theory is believed to be dual to a type II string theory
on a $AdS_5\times S^5$ background. This and other dualities between quantum field theory and string
theory are known as AdS/CFT dualities, after Maldacena \cite{maldacena}. The duality has a very
nice and curious feature: it exchanges strong and weak couplings. As strong coupling calculations
are usually difficult, the duality makes them accessible \textit{via} weak coupling calculations
in the dual theory.
After the important work of Minahan and Zarembo \cite{MZ}, there has been an explosion of research
in this domain. The AdS/CFT duality has been enriched with tools and new calculation methods
by recognizing that there are integrable models on both sides of the duality.
\section{Universal Hubbard models}
Following the methods of \cite{maasa} and \cite{martins}, it has been possible to generalize the
Hubbard model to include more general symmetries than the original $SU(2)$ one.
In a first stage, the XX model is generalized to (almost) arbitrary vector
spaces and symmetries. Secondly, two copies of the XX model are ``glued'' to form a Hubbard model.
This is the usual construction of the R-matrix of the Hubbard model.
I will use the standard notation in which the lower index indicates the space on which the operator
acts. For example, to $A\in \mbox{End}(V)$, I associate the operator $A_{1}=A\otimes \mbox{\boldmath $I$}$ and
$A_{2}=\mbox{\boldmath $I$}\otimes A$ in $\mbox{End}(V)\otimes \mbox{End}(V)$. More
generally, when considering expressions in $\mbox{End}(V)^{\otimes k}$,
$A_{j}$, $j=1,\ldots,k$ will act as the identity in all spaces $\mbox{End}(V)$ except the $j^{th}$ one.
To deal with superalgebras, I will also need a $\mathbb{Z}_{2}$ grading $[.]$ on
$V$, such that $[v]=0$ will be associated with bosonic states
and $[v]=1$ with fermionic ones.
The construction of a universal XX model is mainly based on general properties
of projectors and permutations. The needed projectors $\pi,\wt\pi$ select a proper subspace of $V$
\begin{eqnarray}
\pi:\ V\to\ W\,,\quad
\wt\pi=\mbox{\boldmath $I$}-\pi:\ V\to\ \wt{W} \mbox{~~~with~~~} V=W\oplus\wt{W}
\label{def:univpi}
\end{eqnarray}
In the tensor product of two vector spaces I take the possibly graded permutation
\begin{eqnarray}
P_{12}:
\begin{cases} V\otimes V \ \to\ V\otimes V\\
v_{1}\otimes v_{2}\ \to\ (-1)^{[v_{1}][v_{2}]}\, v_{2}\otimes v_{1}
\end{cases}
\end{eqnarray}
and also $\Sigma_{12}$
\begin{eqnarray}
\Sigma_{12} &=&
\pi_{1}\,\wt\pi_{2}+\wt\pi_{1}\,\pi_{2}
\label{def:univSigma}
\end{eqnarray}
It is easy to show that $\Sigma_{12}$ is also a projector in $V\otimes V$:
$\left(\Sigma_{12}\right)^2=\Sigma_{12}$.
The operator $C$ will also be used later
\begin{equation}
C = \pi-\wt\pi\,.
\label{eq:opC}
\end{equation}
It obeys $C^{2}=\mbox{\boldmath $I$}$ and is related to $\Sigma_{12}$ through the equalities
\begin{equation}
\Sigma_{12}=\frac12(1-C_{1}C_{2}) \mbox{ ~and~ }
\mbox{\boldmath $I$}\otimes\mbox{\boldmath $I$}-\Sigma_{12}=\frac12(1+C_{1}C_{2})
\label{eq:univSig-C}
\end{equation}
From the previous operators, one can construct an R-matrix acting on $V\otimes V$ and with spectral
parameter $\lambda$
\begin{equation}
R_{12}(\lambda) = \Sigma_{12}\,P_{12} + \Sigma_{12}\,\sin\lambda +
(\mbox{\boldmath $I$}\otimes\mbox{\boldmath $I$}-\Sigma_{12})\,P_{12}\,\cos\lambda
\label{def:univRXX}
\end{equation}
Several properties of the R-matrix are given in \cite{DFFR}, \cite{FFR}. The most important is
the Yang--Baxter equation
\begin{gather}
R_{12}(\lambda_{12})\,R_{13}(\lambda_{13})\,R_{23}(\lambda_{23}) =
R_{23}(\lambda_{23})\,R_{13}(\lambda_{13})\,R_{12}(\lambda_{12})
\qquad\notag \\[2mm]
\mbox{where~ } \lambda_{ij} = \lambda_i-\lambda_j.
\label{eq:univYBE}
\end{gather}
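Properties such as (\ref{eq:univYBE}) and regularity are easy to verify numerically on small examples. The following Python sketch (my own toy check, for the simplest non-graded choice $\pi=E_{1,1}$ on $V=\mathbb{C}^2$, namely the $gl(2)$ XX example discussed below) confirms both to machine precision:
\begin{verbatim}
import numpy as np

# Toy check of (eq:univYBE) and of regularity for the simplest non-graded
# choice pi = E_{11} on V = C^2, so Sigma = diag(0,1,1,0) and P is the swap.
I2, I4 = np.eye(2), np.eye(4)
P = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=float)
Sigma = np.diag([0., 1., 1., 0.])

def R(lam):
    return Sigma @ P + np.sin(lam)*Sigma + np.cos(lam)*(I4 - Sigma) @ P

P23 = np.kron(I2, P)                       # swap of the last two spaces
R12 = lambda lam: np.kron(R(lam), I2)      # R acting on spaces 1,2 of V x V x V
R23 = lambda lam: np.kron(I2, R(lam))      # R acting on spaces 2,3
R13 = lambda lam: P23 @ R12(lam) @ P23     # R acting on spaces 1,3

l1, l2, l3 = 0.37, -0.81, 1.24
lhs = R12(l1-l2) @ R13(l1-l3) @ R23(l2-l3)
rhs = R23(l2-l3) @ R13(l1-l3) @ R12(l1-l2)
print(np.max(np.abs(lhs - rhs)))           # ~1e-16: the Yang-Baxter equation holds
print(np.allclose(R(0.0), P))              # regularity: R(0) is the permutation
\end{verbatim}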
With a very standard construction, from the R-matrix one constructs the ($L$ sites) transfer matrix
(\ref{trasf})
\begin{equation}
t_{1\ldots L}(\lambda) = \mathop{\mbox{strace}}_{0}R_{01}(\lambda)\,R_{02}(\lambda)\ldots R_{0L}(\lambda)
\end{equation}
by taking the supertrace in the auxiliary space. So far, the calculation has been very general and no
special properties of the space $V$ are required. Now, if $V$ has infinite dimension, it is
necessary to assume the existence of a trace or supertrace with the cyclic property.
If $V$ has finite dimension, the trace always exists.
The relation (\ref{eq:univYBE}) implies that the transfer matrices commute for
different values of the spectral parameter, thus guaranteeing integrability.
Since the R-matrix is regular (namely at $\lambda=0$ it is a permutation), logarithmic derivatives
at $\lambda=0$ give local operators as in (\ref{hamilt}). The first one can be chosen as the XX-Hamiltonian
\begin{gather}
H=t_{1\ldots L}(0)^{-1}\, \frac{dt_{1\ldots L}}{d\lambda}(0) =\sum_{j=1}^{L} H_{j,j+1}
\label{eq:univXXHam}\\
\mbox{with}\quad H_{j,j+1}=P_{j,j+1}\,\Sigma_{j,j+1} \nonumber
\end{gather}
where periodic boundary conditions have been used, i.e. the site $L+1$ is identified with the first one.
For example, the original XX model (related to the algebra $gl(2)$) is obtained without gradation
with local vector space $V=\mathbb{C}^2$ and $2\times 2$ matrices
\begin{equation} \label{example}
\pi=E_{1,1}\,, \quad \wt{\pi}=\mbox{\boldmath $I$}-\pi = E_{2,2}
\end{equation}
Then the Hamiltonian is the XX model
$$
H=\sum_{j=1}^{L} \left( E_{12;j} E_{21;j+1}+E_{21;j} E_{12;j+1} \right)=
\sum_{j=1}^{L} \left( \sigma^{+}_{j} \sigma^{-}_{j+1}+\sigma^{-}_{j} \sigma^{+}_{j+1} \right)=
\frac12 \sum_{j=1}^{L} \left( \sigma^{x}_{j} \sigma^{x}_{j+1}+\sigma^{y}_{j} \sigma^{y}_{j+1} \right)
$$
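A quick numerical check (same toy non-graded choice as above) confirms that the two-site density $P_{12}\,\Sigma_{12}$ of (\ref{eq:univXXHam}) indeed coincides with the XX coupling just written:
\begin{verbatim}
import numpy as np
# Toy check: for the choice (example), the two-site density P*Sigma of
# (eq:univXXHam) coincides with E_{12} x E_{21} + E_{21} x E_{12}.
E12 = np.array([[0., 1.], [0., 0.]]); E21 = E12.T
P = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=float)
Sigma = np.diag([0., 1., 1., 0.])
print(np.allclose(P @ Sigma, np.kron(E12, E21) + np.kron(E21, E12)))   # True
\end{verbatim}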
For this reason, (\ref{eq:univXXHam}) defines generalized XX models that in \cite{FFR} were called
\textit{universal}. With the same choice (\ref{example}) but using a grading such that the index
1 is bosonic and the index 2 is fermionic, the $gl(1|1)$ XX model has Hamiltonian
\begin{equation}
H=\sum_{j=1}^{L} \left( -E_{12;j} E_{21;j+1}+E_{21;j} E_{12;j+1} \right)=
\sum_{j=1}^{L}\left(\cd{j}\cc{j+1}+\cd{j+1}\cc{j}\right)
\end{equation}
because the matrices $E_{12}$ and $E_{21}$ are both ``fermionic''; they anticommute on different sites
so the fermionic realization (\ref{JW}) can be used. The index $\rho$ here is not necessary.
The relation between the XX model and the Hubbard model is now more clear.
``Gluing'' two possibly different universal XX models produces a generalized integrable
Hubbard model. The $R$-matrices of two universal XX models are distinguished by the arrow
$R^{\uparrow}_{12}(\lambda)$ and $R^{\downarrow}_{12}(\lambda)$.
The Hubbard-like $R$-matrix has two spectral parameters $\lambda_1\,,\lambda_2$ and is constructed by
tensoring on each site an ``up'' and a ``down'' copy
\begin{equation}
R_{12}(\lambda_{1},\lambda_{2}) =
R^{\uparrow}_{12}(\lambda_{12})\,R^{\downarrow}_{12}(\lambda_{12}) +
\frac{\sin(\lambda_{12})}{\sin(\lambda'_{12})} \,\tanh(h'_{12})\,
R^{\uparrow}_{12}(\lambda'_{12})\,C^{\uparrow}_{1}\,
R^{\downarrow}_{12}(\lambda'_{12})\,C^{\downarrow}_{1}
\label{R-XXfus}
\end{equation}
where $\lambda_{12}=\lambda_{1}-\lambda_{2}$ and $\lambda'_{12}=\lambda_{1}+\lambda_{2}$.
Moreover, $h'_{12}=h(\lambda_{1})+h(\lambda_{2})$ and the choice of the function $h(\lambda)$
is fixed within the proof of the Yang-Baxter equation.
Indeed, when the function $h(\lambda)$ is given by $\sinh(2h)=U\, \sin(2\lambda)$
for some free parameter $U$, the R-matrix (\ref{R-XXfus}) obeys the Yang-Baxter equation:
\begin{eqnarray}
R_{12}(\lambda_{1},\lambda_{2})\,
R_{13}(\lambda_{1},\lambda_{3})\,
R_{23}(\lambda_{2},\lambda_{3})
&=&
R_{23}(\lambda_{2},\lambda_{3})\,
R_{13}(\lambda_{1},\lambda_{3})\,
R_{12}(\lambda_{1},\lambda_{2})\,.
\end{eqnarray}
Notice that, this time, the equation is not of difference type.
As remarked in \cite{DFFR} the proof relies only on some intermediate properties
that are not affected by the choice of the fundamental projectors (\ref{def:univpi}).
The proof follows the steps of the original proof by Shiroishi \cite{shiro2} for the Hubbard model.
The same proof has been used for general $gl(N)$ algebras in \cite{EFGKK}.
The Hubbard R-matrix is regular but not symmetric; it satisfies unitarity.
A commuting family of transfer matrices is obtained by fixing one of the two spectral parameters
\begin{equation}\label{redmonodromy}
t_{1\ldots L}(\lambda)= \mathop{\mbox{str}_{0}}
R_{01}(\lambda,\mu)\ldots R_{0L}(\lambda,\mu) \Big|_{\mu=0}\,.
\end{equation}
Any other choice for $\mu$ is possible but, at least in view of obtaining a local Hamiltonian,
it does not give new information.
The `reduced' R-matrices that enter in the previous equation take a particularly simple factorised form
\begin{equation}
R_{12}(\lambda,0) =
\,R^{\uparrow}_{12}(\lambda)\,R^{\downarrow}_{12}(\lambda)\,I^{\uparrow\downarrow}_{1}(h)
\end{equation}
where
\begin{equation}
I^{\uparrow\downarrow}_{1}(h) = \mbox{\boldmath $I$}\otimes\mbox{\boldmath $I$}+\tanh(\frac{h}{2})\,C^{\uparrow}_{1}\,C^{\downarrow}_{1}
\end{equation}
and one arrives at a Hubbard-like Hamiltonian
\begin{equation} \label{eq:HubHam}
H = \sum_{j=1}^{L}H_{j,j+1} = \sum_{j=1}^{L} \Big[
\Sigma^{\uparrow}_{j,j+1}\,P^{\uparrow}_{j,j+1}
+\Sigma^{\downarrow}_{j,j+1}\,P^{\downarrow}_{j,j+1}
+U\,C^{\uparrow}_{j}\,C^{\downarrow}_{j} \Big]
\end{equation}
where periodic boundary conditions hold. Clearly, the up and down components are put in interaction only
by the potential term $U \,C^{\uparrow}_{j}\,C^{\downarrow}_{j} $.
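To make the construction concrete, the following Python sketch (an illustration of mine, not taken from \cite{DFFR}; it uses two non-graded $gl(2)$ layers, hence a Hubbard-like model rather than the genuine graded Hubbard model) assembles the Hamiltonian (\ref{eq:HubHam}) on a small periodic lattice and checks that it is Hermitian and conserves the layer-wise particle numbers:
\begin{verbatim}
import numpy as np
from functools import reduce

# Toy construction of (eq:HubHam) with two NON-graded gl(2) layers, on L sites
# with periodic boundary conditions; factors are ordered (up,1),(down,1),...
L = 3
n_fac = 2*L
I2 = np.eye(2)
Eij = {(a, b): np.outer(np.eye(2)[a], np.eye(2)[b]) for a in range(2) for b in range(2)}
P2 = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=float)
Sigma = np.diag([0., 1., 1., 0.])       # pi x pi~ + pi~ x pi, with pi = E_{11}
C = np.diag([1., -1.])                  # C = pi - pi~

def one_site(A, p):
    fac = [I2]*n_fac; fac[p] = A
    return reduce(np.kron, fac)

def two_site(A, p, q):
    """Embed a 4x4 operator acting on factors p and q into all the factors."""
    A = A.reshape(2, 2, 2, 2)           # indices: (p_out, q_out, p_in, q_in)
    out = np.zeros((2**n_fac, 2**n_fac))
    for a in range(2):
        for b in range(2):
            for r in range(2):
                for s in range(2):
                    if A[a, b, r, s] != 0:
                        fac = [I2]*n_fac
                        fac[p], fac[q] = Eij[(a, r)], Eij[(b, s)]
                        out += A[a, b, r, s]*reduce(np.kron, fac)
    return out

U = 1.7
H = np.zeros((2**n_fac, 2**n_fac))
for j in range(L):
    jn = (j + 1) % L
    H += two_site(Sigma @ P2, 2*j, 2*jn)            # up-layer XX term
    H += two_site(Sigma @ P2, 2*j + 1, 2*jn + 1)    # down-layer XX term
    H += U*one_site(C, 2*j) @ one_site(C, 2*j + 1)  # potential C^up_j C^down_j
N_up = sum(one_site(Eij[(1, 1)], 2*j) for j in range(L))
print(np.allclose(H, H.T), np.allclose(H @ N_up, N_up @ H))   # True True
\end{verbatim}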
The tensor product of the up and down component is represented in Figure~\ref{updown}.
\begin{figure}[h]
\begin{center}\begin{tikzpicture}[baseline,scale=1]
\draw(1.6,0) node{$\begin{array}{ccccccc}(\mathcal{A}\!\!&\!\!\otimes\!\!&\!\!\mathcal{A})&
\otimes&(\mathcal{A}\!\!&\!\!\otimes\!\!&\!\!\mathcal{A}) \\[-1mm]
\phantom{(}\scriptstyle \uparrow\!\! &&\!\! \scriptstyle \downarrow && \phantom{)}\scriptstyle \uparrow\!\! &&\!\!
\scriptstyle \downarrow \\[-2mm] &\textcolor{blue}{1} &&&& \hspace*{-3.5mm} \textcolor{blue}{2}\hspace*{-3.5mm}
\end{array}$};
\draw[color=red,line width=1.3pt] (0,0.6) -- (0,1.3)--(2.6,1.3)--(2.6,0.6) ;
\draw[color=green,line width=1.3pt] (1,0.6) -- (1,1.1)--(3.4,1.1)--(3.4,0.6) ;
\end{tikzpicture}\end{center}
\caption{\label{updown}This scheme shows the coupling between two universal XX models.
The blue indices represent the sites 1 and 2 of the Hubbard model. On each site there is one
XX up and one XX down. $\mathcal{A}$
represents the local vector space $V$ or the local algebra $\mbox{End}(V)$ if vectors or
matrices are considered, respectively. }
\end{figure}
Operators ``up'', such as $R^{\uparrow}_{12}(\lambda)$, act on the first and third spaces and are the
identity on the others, while operators ``down'', such as $R^{\downarrow}_{12}(\lambda)$, act on the
second and fourth.
The Hubbard model itself is obtained by two graded $gl(1|1)$ models
as in (\ref{example}). Its local (one site) vector space is
\begin{equation}
V=V^{\uparrow}\otimes V^{\downarrow}\,,\qquad V^{\rho}=\mathbb{C}^2
\end{equation}
The universal Hamiltonian (\ref{eq:HubHam}) has the same structure as the Hubbard model. What can make
the dynamics different is the fact that the projectors $\pi,\wt\pi$ seem to introduce several types of
particles.
These models were introduced in relation to their symmetries. In \cite{DFFR, FFR} it has been shown
that the transfer matrix admits as symmetry (super)algebra the direct sum of the symmetry algebras
of the XX components
\begin{equation}\label{simmetria}
\mathcal{S}=\mbox{End}(W^{\uparrow})\oplus \mbox{End}(\wt{W}^{\uparrow})
\oplus \mbox{End}(W^{\downarrow})\oplus \mbox{End}(\wt{W}^{\downarrow})
\end{equation}
(the up R-matrix commutes with the down generators and vice versa).
The generators of the symmetry have the form of the sum of local matrices acting on a single site at
a time
\begin{equation}
\mbox{${\mathbb M}$}=\mbox{${\mathbb M}$}^{\uparrow}+\mbox{${\mathbb M}$}^{\downarrow}\,,\qquad \mbox{${\mathbb M}$}^{\uparrow}=\sum_{j=1}^{L}\mbox{${\mathbb M}$}^{\uparrow}_{j} \,,\qquad
\mbox{${\mathbb M}$}^{\downarrow} =\sum_{j=1}^{L}\mbox{${\mathbb M}$}^{\downarrow}_{j}
\end{equation}
where $\mbox{${\mathbb M}$}^\sigma\in \mbox{End}(W^{\sigma})\oplus \mbox{End}(\wt{W}^{\sigma})$.
They commute with the monodromy/transfer matrix and with the Hamiltonian.
In this formalism, given that $W^{\uparrow}=\wt W^{\uparrow}=W^{\downarrow}=\wt W^{\downarrow}=\mathbb{C}$,
the Hubbard model seems to have just the symmetry algebra
\begin{equation}\label{simm}
gl(1)\oplus gl(1)\oplus gl(1)\oplus gl(1)
\end{equation}
where each term is a single operator
\begin{equation}\label{simmH}
\hat{E}_{1,1;\uparrow}=\sum_{j=1}^{L} E_{1,1;\uparrow j}\,,\qquad
\hat{E}_{1,1;\downarrow}=\sum_{j=1}^{L} E_{1,1;\downarrow j}\,,\qquad
\hat{E}_{2,2;\uparrow}=\sum_{j=1}^{L} E_{2,2;\uparrow j}\,,\qquad
\hat{E}_{2,2;\downarrow}=\sum_{j=1}^{L} E_{2,2;\downarrow j}
\end{equation}
These operators count the number of ``particles''. This is more visible after a Jordan-Wigner
transformation (\ref{JW}): indeed the operator
\begin{equation}
\hat{E}_{2,2;\uparrow}=\sum_{j=1}^{L} n_{\uparrow, j}
\end{equation}
counts how many up fermions are in a given state. Similarly, the operator $\hat{E}_{2,2;\downarrow}$
counts the number of down fermions. From the local (=on site) identity
$ E_{1,1}+ E_{2,2}=\mbox{\boldmath $I$}$, the following sums give the lattice size
\begin{equation}
\hat{E}_{1,1;\uparrow}+\hat{E}_{2,2;\uparrow}=\hat{E}_{1,1;\downarrow}+\hat{E}_{2,2;\downarrow}=L
\end{equation}
so in the symmetry algebra (\ref{simm}) there is an amount of redundancy.
It is well known that the Hubbard symmetry algebra is $su(2)$ and becomes $su(2)\oplus su(2)$ if
the number of sites is even.
Indeed, the cases where $V^{\sigma}$ is two dimensional are special because, in addition to the list of
generators contained in (\ref{simmetria}, \ref{simmH}), they have new generators given by
\begin{equation}
S^{\pm}_j=\sigma^{\pm}_{\uparrow j}\otimes\sigma^{\mp}_{\downarrow j}\,,\qquad
\mathcal{M}^{\pm}_j=\sigma^{\pm}_{\uparrow j}\otimes\sigma^{\pm}_{\downarrow j}\,.
\end{equation}
To be precise, the first commutes with the Hamiltonian in all cases and promotes the Hubbard symmetry
algebra to $su(2)$, the third generator being
$S^3=\frac12(\hat{E}_{2,2;\uparrow}-\hat{E}_{2,2;\downarrow})$.
The operators $\mathcal{M}^{\pm}_j$ commute only if $L$ is even, enhancing the symmetry to
$su(2)\oplus su(2)$. In that case, the third generator is
$\mathcal{M}^3=\frac12(\hat{E}_{2,2;\uparrow}+\hat{E}_{2,2;\downarrow})$.
Unfortunately, this enhanced symmetry does not extend to higher dimensional cases.
Some enlargement of the symmetry appears at large coupling in perturbative calculations
but does not survive at higher orders.
\subsubsection{$gl(2|2) \oplus gl(2|2) $ Hubbard Hamiltonian }
This model implements two identical copies (up and down) of an XX both with
\begin{equation}
\pi=E_{11}+E_{33}\,,\qquad \wt\pi=E_{22}+E_{44}
\end{equation}
and with indices $3,4$ of fermionic nature. Using a graded Jordan-Wigner transformation one arrives
at a fermionic form for the Hamiltonian
\begin{eqnarray}\label{hamiltgl22}
H &\!\!=\!\!& \sum_{i=1}^{L} \; \Big\{ \;
\sum_{\sigma=\uparrow,\downarrow} \big( \cd{\sigma,i} \cc{\sigma,i+1}
+ \cd{\sigma,i+1} \cc{\sigma,i} \big)
\big( c_{\sigma,i}'^\dagger c_{\sigma,i+1}' + c_{\sigma,i+1}'^\dagger c_{\sigma,i}' + 1
- n_{\sigma,i}' - n_{\sigma,i+1}' \big) \nonumber \\
&& + \, U (1-2n_{\uparrow,i})(1-2n_{\downarrow,i}) \; \Big\}
\end{eqnarray}
where the factor
\begin{equation}
\mathcal{N}'_{\sigma,i,i+1}=\big( c_{\sigma,i}'^\dagger
c_{\sigma,i+1}' + c_{\sigma,i+1}'^\dagger c_{\sigma,i}' + 1 - n_{\sigma,i}'
- n_{\sigma,i+1}' \big)
\end{equation}
multiplies an ordinary Hubbard hopping term; only unprimed particles enter into the potential.
There are four types of fermionic particles, respectively generated by
$\cd{\uparrow,i}\,,\cd{\downarrow,i}\,,c_{\uparrow,i}'^\dagger \,,c_{\downarrow,i}'^\dagger$
so that they define a 16 dimensional vector space on each site
\begin{equation}
V_{\uparrow,i}\otimes V_{\downarrow,i}\otimes V_{\uparrow,i}' \otimes V_{\downarrow,i}'
\end{equation}
with each $V=\mathbb{C}^2$.
The corresponding numbers of particles are conserved.
The factor $\mathcal{N}'_{\sigma,i,i+1}$ acts on the four dimensional space of the primed modes on the sites $i,i+1$; its eigenvalues
can be easily obtained and are $\pm 1$, each with two-fold multiplicity. This means that it cannot vanish,
$\mathcal{N}'_{\sigma,i,i+1}\neq 0$.
Moreover, if no primed particles are present, $\mathcal{N}'_{\sigma,i,i+1}=1\,, ~\forall ~\sigma,i$.
The same is true if the lattice is fully filled with primed particles
in which case $\mathcal{N}'_{\sigma,i,i+1}=-1$ therefore two of the sectors
described by this Hamiltonian are equivalent to the ordinary Hubbard model. A Russian-doll
structure appears: if the projectors are well chosen, a larger model contains
the smaller ones.
If there are only primed particles, the energy vanishes but not the momentum. This actually means that
primed particles do not have a dynamics independent of the unprimed ones. This fact is curious and I
am not aware of other cases in which it has been observed.
If the potential term is interpreted as a Coulomb repulsion, then only unprimed particles carry electric
charge, so primed particles are neutral.
The compound objects formed by ~$\cd{\sigma,i}\,c_{\sigma,i}'{^\dagger}$~ are rigid: no other term in
the Hamiltonian can destroy them. In this sense, there are four types of carriers, with the same
charge but different behaviours:
two are the elementary objects $\cd{\sigma,i}$ in two polarisations
$\sigma=\uparrow\,,\downarrow$,
two are the compound objects, in two polarisations.
\definecolor{verdone}{rgb}{0.1,0.55,0.1}
\newcommand{{\color{verdone}$\bullet$}}{{\color{verdone}$\bullet$}}
\newcommand{{\color{red}$\bullet$}}{{\color{red}$\bullet$}}
\newcommand{{\tiny\color{black}$\bullet$}}{{\tiny\color{black}$\bullet$}}
\begin{figure}[h]
\begin{tabular}{c@{$\longleftrightarrow$\hspace{3mm}}c@{\hspace{20mm}}c@{$\longleftrightarrow$\hspace{3mm}}c
@{\hspace{20mm}}c@{\hspace{3mm}$\longleftrightarrow$\hspace{3mm}}c}
\begin{tabular}{c}{\color{verdone}$\bullet$}\\[-3.4mm] {\color{red}$\bullet$} \end{tabular}~~{\tiny\color{black}$\bullet$} & {\tiny\color{black}$\bullet$} ~~\begin{tabular}{c}{\color{verdone}$\bullet$}\\[-3.4mm] {\color{red}$\bullet$} \end{tabular} &
{\color{red}$\bullet$} ~~ {\color{verdone}$\bullet$} & {\color{verdone}$\bullet$} ~~ {\color{red}$\bullet$} & {\tiny\color{black}$\bullet$} ~~~~{\color{red}$\bullet$} & {\color{red}$\bullet$} ~~~~ {\tiny\color{black}$\bullet$} \\[6mm]
No: ~~ {\color{red}$\bullet$} ~~ {\color{verdone}$\bullet$} ~~~~ & {\tiny\color{black}$\bullet$} ~~\begin{tabular}{c}{\color{verdone}$\bullet$} \\[-3.4mm] {\color{red}$\bullet$}\end{tabular} &
No: ~~ {\tiny\color{black}$\bullet$} ~~ {\color{verdone}$\bullet$}~~ & {\color{verdone}$\bullet$} ~~ {\tiny\color{black}$\bullet$}
\end{tabular}
\caption{The different elementary processes that are described in (\ref{hamiltgl22}); unprimed particles
are charged {\color{red}$\bullet$}, primed particles are neutral {\color{verdone}$\bullet$}. The compound object has both the colors.
The two lower processes cannot exist, namely the compound object cannot be created or destroyed and
the neutral particle alone is static.}
\end{figure}
The study of the two-particle scattering matrix has been done in \cite{FFR}, with a preliminary account
of the Bethe equations. There are general features that emerge. First the vacuum state is chosen as
(other choices are possible)
\begin{equation}
\Omega= \mathop{(e_1^{\uparrow}\otimes e_1^{\downarrow})}_{\scriptscriptstyle 1} \otimes
\mathop{(e_1^{\uparrow}\otimes e_1^{\downarrow})}_{\scriptscriptstyle 2} \otimes \dots
\mathop{(e_1^{\uparrow}\otimes e_1^{\downarrow})}_{\scriptscriptstyle L}
\end{equation}
where the index behind the tensor product labels the lattice sites. All other states are considered
excitations above it. From the projector $\pi$ one has to remove the part that projects on the vacuum
so the operator $\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}$ projects on the subspace $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$
\begin{equation}
\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}=\pi-E_{1,1}\,,\qquad \raisebox{.12ex}{$\stackrel{\circ}{W}$}{}=\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}\ V=\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}\ W
\end{equation}
Particles are classified by the type, according to the various subspaces
$\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}^{\uparrow},\wt{W}^{\uparrow},\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}^{\downarrow},\wt{W}^{\downarrow}$.
Within the universal XX models, all particles satisfy the exclusion principle, namely they
cannot appear on the same site. If two particles are both from $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$ or both from
$\wt{W}$, they reflect each other; if one is from $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$ and one from $\wt{W}$,
they traverse each other, while still remaining on different sites.
In the universal Hubbard models, the coupling activates a sort of electrostatic interaction
felt only by particles of opposite ``polarisation''. Indeed, the potential term in (\ref{eq:HubHam})
squares to the identity (\ref{eq:opC}), so on one site it has eigenvalues $\pm U$.
Which sign occurs is dictated by the membership to $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$ or $\wt{W}$ according to the rule:
with $U>0$, particles of equal type (both in $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$ or both in $\wt{W}$) repel each other with an amplitude
$-1$, while particles of different type (one from $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$ and one from $\wt{W}$) attract each other with an
amplitude that is just a phase.
Observe that the vacuum itself is in the repulsive case, so actually the only
``visible'' effect is the attractive one.
The most important interaction comes when a particle from $\wt{W}^{\uparrow}$ and one from $\wt{W}^{\downarrow}$
meet at a point. This gives rise to the usual transmission and reflection amplitudes of the
Hubbard model, $T(p_{1},p_{2}), R(p_{1},p_{2})$. Notice that they are the same for all particles.
The two-particle S-matrix, directly taken from \cite{FFR}, is
\begin{eqnarray*}
S_{12}(p_{1},p_{2}) &=& S^{X\uparrow}_{12}(p_{1},p_{2})
+S^{X\downarrow}_{12}(p_{1},p_{2})+S^{\updownarrow}_{12}(p_{1},p_{2})
+S^{H}_{12}(p_{1},p_{2})\\[1.2ex]
S^{X\rho}_{12}(p_{1},p_{2}) &=&
e^{-ip_{1}}\,\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\rho}\otimes \wt\pi^{\rho}
+e^{ip_{2}}\,\wt\pi^{\rho}\otimes \raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\rho}
-P_{12}\Big(
\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\rho}\otimes \raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\rho} +\wt\pi^{\rho}\otimes \wt\pi^{\rho}\Big)
\,,\ \rho=\uparrow,\downarrow\qquad\\
S^\updownarrow_{12}&=&
\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\uparrow}\otimes (\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\downarrow}+\wt\pi^{\downarrow})+
(\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\downarrow}+\wt\pi^{\downarrow})\otimes \raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\uparrow}+
\raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\downarrow}\otimes \wt\pi^{\uparrow}
+\wt\pi^{\uparrow}\otimes \raisebox{-.12ex}{$\stackrel{\circ}{\pi}$}{}^{\downarrow}
\\
S^{H}_{12}(p_{1},p_{2}) &=& \Big(T(p_{1},p_{2})\,\mbox{\boldmath $I$}\otimes\mbox{\boldmath $I$}
+R(p_{1},p_{2})\,P_{12}\Big)\,
\Big(\wt\pi^{\uparrow}\otimes \wt\pi^{\downarrow}
+\wt\pi^{\downarrow}\otimes \wt\pi^{\uparrow} \Big)
\\
T(p_{1},p_{2}) &=&
\frac{\sin(p_{1})-\sin(p_{2})}{\sin(p_{1})-\sin(p_{2})-2iU}\\
R(p_{1},p_{2}) &=&
\frac{2iU}{\sin(p_{1})-\sin(p_{2})-2iU} \ =\ T(p_{1},p_{2}) -1
\end{eqnarray*}
In summary, the generalizations of the Hubbard model describe new aspects, mainly in relation to the
$\pi$ projector and $\raisebox{.12ex}{$\stackrel{\circ}{W}$}{}$ space. They were not present in Hubbard because its space of states is
too small.
The generalized models can describe many different fermionic particles, all living in the same lattice, some charged and some chargeless.
The core of the interactions within $\wt{W}$ remains the same
as in Hubbard, with the same amplitudes.
The exposition on the generalizations of the Hubbard model stops here.
\section{A system of two non-linear integral equations for the Hubbard model}
Following the methods of Chapter~\ref{c:nlie}, the system of equations for the Hubbard model
is introduced. The full derivation is given in the original paper \cite{FFGR2}.
The main purpose of this work was to study
certain super Yang-Mills operators; this has conditioned some choices, such as the systematic use of the
phase $\phi$, which is related to a global magnetic field; of course, the whole construction holds for the
Hubbard model itself.
Looking at the Lieb-Wu equations (\ref {lw}), I define the function
\begin{equation}
\Phi (x,\xi)=i \ln \frac {i\xi +x}{i\xi -x} \, , \label {Phi}
\end{equation}
with the branch cut of $\ln(z)$ along the real negative $z$-axis in such a way that
$-\pi < \arg z <\pi$. Then, I introduce the gauge transformation which amounts to adding the
magnetic flux
\begin{equation}
k_j=\hat{k}_j+\phi \,.
\end{equation}
Using the following counting functions
\begin{eqnarray}
W(k)&=&L(k-\phi) -\sum _{l=1}^M \Phi \left (u_l-\frac {2t}{U}\sin k, \frac{1}{2} \right ) \, ,
\label {Wdef} \\
Z(u)&=&\sum _{j=1}^L \Phi \left (u - \frac {2t}{U}\sin k_j, \frac{1}{2} \right ) -
\sum _{m=1}^M \Phi \left (u-u_m, 1 \right ) \, , \label {Zdef}
\end{eqnarray}
the Lieb-Wu equations take the form of quantisation conditions for the Bethe roots $\{k_j,u_l\}$,
\begin{eqnarray}
W(k_j)&=&\pi (M +2 I^w_j) \, , \\
Z(u_l)&=&\pi (M-L+1+2I^z_l) \, .
\end{eqnarray}
From now on, the treatment focuses on the highest energy state, consisting of the maximum number
$M=L/2$ of real roots $u_l$ and of $L$ real roots $k_j$. For simplicity, it is useful
to restrict the calculation to the case $M\in 2\mathbb{N}$ (the remaining
case $M\in 2\mathbb{N}+1$ is a simple modification), which obviously implies
$L\in 4\mathbb{N}$.
With the integral definition of the Bessel function $J_0(z)$,
\begin{equation}
J_0(z)=\int _{-\pi}^{\pi} \frac {dk}{2\pi}\, e^{i\,z\sin k}\,,
\end{equation}
and with the following shorthand notations
\begin{equation}
L_W(k)= {\mbox {Im}}\ln \left [1-e^{iW(k+i0)}\right ] \, , \quad
L_Z(x)= {\mbox {Im}}\ln \left [1+e^{iZ(x+i0)}\right ] \, ,
\end{equation}
the first of the two nonlinear integral equations for the counting functions is
\begin{eqnarray}
Z(u)&=&L \int _{-\infty}^{\infty} \frac {dp}{2p} \sin (pu) \frac
{J_0\left ( \frac {2tp}{U}\right )}{\cosh \frac {p}{2}}+2 \int
_{-\infty}^{\infty} dy \ G(u-y) \ {\mbox {Im}}\ln \left
[1+e^{iZ(y+i0)}\right ]- \nonumber \\
&-&\frac {2t}{U}\int _{-\pi}^{\pi} dk \cos k \frac {1}{\cosh \left
( \pi u - \frac {2t\pi}{U}\sin k \right ) } \
{\mbox {Im}}\ln \left [1-e^{iW(k+i0)}\right ] \, , \label {Zeq4}
\end{eqnarray}
where $G(x)$ is the same kernel function that appears in the spin $1/2$ XXX chain and in the BDS
Bethe Ansatz\footnote{The Beisert, Dippel, Staudacher model was a deformation of the XXX Bethe
equations introduced to describe all loops in the $SU(2)$ sector of SYM, see \cite{FFGR}.}, as in eq. 2.24 of \cite{FFGR},
\begin{equation}
G(x)=\int _{-\infty}^{\infty} \frac {dp}{2\pi} e^{ipx} \frac
{1}{1+e^{|p|}} \, . \label {Gxxx}
\end{equation}
The first line of the NLIE for $Z$ (\ref {Zeq4}) coincides with the NLIE (eq. 3.15 of \cite{FFGR})
for the counting function of the highest energy state of the BDS model. The second
line of (\ref {Zeq4}) is the genuine contribution of the Hubbard model.
The second nonlinear integral equation is
\begin{eqnarray}
W(k)&=&L\left[ (k-\phi) + \int _{-\infty}^{\infty} \frac {dp}{p}
\sin \left ({\frac
{2tp}{U}\sin k }\right ) \frac {J_0\left (\frac {2tp}{U}\right )}{1+e^{|p|}}\right]
- \nonumber \\
&-&\int _{-\infty}^{\infty}dx \, \frac {1}{\cosh \left ( \frac
{2t\pi}{U}\sin k-\pi x \right ) } \,
{\mbox {Im}}\ln \left[1+e^{iZ(x+i0)}\right ]- \label {Weq2}\\
&-& \frac {4t}{U} \int _{-\pi}^{\pi} dh \ G \left ( \frac {2t}{U}
\sin h-\frac {2t}{U}\sin k \right ) \cos h \mbox{ Im} \ln
\left[1-e^{iW(h+i0)}\right ] \, . \nonumber
\end{eqnarray}
The two equations (\ref{Zeq4}, \ref{Weq2}) are coupled by integral
terms and are completely equivalent to the Bethe equations for the
highest energy state.
The eigenvalues of the Hamiltonian (\ref {oldHubb}) on the Bethe states are given by (\ref{energia}),
which can now be expressed in terms of the counting functions.
I use the Bessel function
\begin{equation}
J_1(z)=\frac {1}{2\pi i}\int _{-\pi}^{\pi} dk \sin k ~ e^{iz
\sin k} \, . \label {J1}
\end{equation}
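Both integral representations are easy to check numerically; a toy verification at an arbitrary argument is:
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1

# Toy numerical check of the two integral representations, at an arbitrary z
z = 1.7
k = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dk = 2*np.pi/4096
J0_num = np.sum(np.exp(1j*z*np.sin(k)))*dk/(2*np.pi)
J1_num = np.sum(np.sin(k)*np.exp(1j*z*np.sin(k)))*dk/(2*np.pi*1j)
print(abs(J0_num - j0(z)), abs(J1_num - j1(z)))   # both at machine precision
\end{verbatim}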
The highest energy eigenvalue is expressed in terms of the counting functions $Z$ and $W$ as follows
\begin{eqnarray}
E&=&-2t\left\{ L \int _{-\infty}^{\infty} \frac {dp}{p} \frac
{J_0\left (\frac
{2tp}{U}\right ) J_1\left (\frac {2tp}{U}\right ) }{e^{|p|}+1}
+ \int _{-\infty}^{\infty} dx \left [ \int _{-\infty}^{\infty}
\frac
{dp}{2\pi}\frac {e^{ipx}}{\cosh \frac {p}{2}}i J_1\left (\frac
{2tp}{U}\right ) \right ] L_Z(x) - \right. \nonumber \\
&-& \frac{2t}{U} \left. \int _{-\pi}^{\pi} \frac {dh}{\pi} L_W(h)
\cos h \left [ \int _{-\infty}^{\infty} \frac {dp}{i}
e^{i\frac {2tp}{U}\sin h } \frac {J_1\left (\frac
{2tp}{U}\right ) }{e^{|p|}+1} \right ]-
\int _{-\pi}^{\pi} \frac {dh}{\pi} L_W(h)
\sin h \right\} \nonumber \\
&\equiv& E_L+E_Z+E_{W1}+E_{W2} \,, \quad \text{and}\quad E_W\equiv
E_{W1}+E_{W2}. \label {Eexp}
\end{eqnarray}
The first line of (\ref {Eexp}), namely $E_L+E_Z$, coincides
formally with the expression of the highest energy of the BDS
chain as given in equation (3.24) of \cite{FFGR}. However, in this case $Z$ satisfies a NLIE which
is different from that of the BDS model. On the other hand, the second line, i.e. $E_W=E_{W1}+E_{W2}$,
is a completely new contribution.
This system of equations can be extended to include all excitations. Indeed, one has to include
appropriate sources for the real holes and for all the complex roots that can appear, precisely as
in (\ref{struttura}). The main goal of the papers \cite{FFGR, FFGR2} was, however, to use the
nonlinear integral equations for a careful investigation of several limits: large volume (large $L$
expansion, namely a thermodynamic limit), strong ($U\rightarrow +\infty$) and weak coupling
($U\rightarrow 0$) and possibly to study the effect of interchanging the order of the limits.
At the beginning of this work (2005-2006) the belief was that the Hubbard model could represent
anomalous dimensions of the $\mathcal{N}=4$ super Yang-Mills theory.
Within this correspondence between
Hubbard and super Yang-Mills, the study of the large volume limit corresponds to studying
very long super Yang-Mills operators.
In Figure~\ref{ws}, the energy (\ref{Eexp}) is plotted as function of the coupling constant, for
a lattice of 12 sites.
\begin{figure}[h]\hspace*{3mm}
\includegraphics[width=0.95\linewidth]{antif_12_ws.pdf}
\caption{\label{ws} The behaviour of the energies for Hubbard and
BDS model from small to strong coupling is plotted here
for a lattice of 12 sites. The left branches of the curves are obtained
by solving numerically the NLIE, while the right branches are plotted using the
strong coupling expansion from the NLIE. In the small
picture there is a zoom of the region where the branches overlap. The approximate match is due to numerical errors in the left curve.}
\end{figure}
\section{Discussion}
I have presented two different works on the subject of the Hubbard model in relation to the
$\mathcal{N}=4$ super Yang-Mills theory. Both of them have been developed in the years
2005-2007 and were amongst the first attempts to use integrability techniques in the gauge theory.
The integrable generalizations of the Hubbard model were introduced in \cite{DFFR}. In \cite{FFR}
I have started to work on the derivation of the Bethe equations. The scattering matrix
is fully presented in that article. The full set of Bethe equations has been obtained more recently
by colleagues of mine \cite{fomin}.
The work on the nonlinear integral equations for the Hubbard model in \cite{FFGR2} was actually
the continuation
of a work presented in \cite{FFGR} on the XXX model with excitations of type hole,
on the BDS Bethe ansatz \cite{Beisert:2004hm} and on the $SO(6)$ spin chain.
I treated these models in the frame of the integrability within the $\mathcal{N}=4$ super
Yang-Mills theory. Today we know that these models are at best approximations of the correct
Bethe equations \cite{beis2006}. In spite of this, it was important to start working with
the methods presented here.
The work of \cite{FFGR2} has shown to the SYM community that the methods of nonlinear integral
equations are effective in treating certain questions starting from the Bethe equations.
For this reason, my co-authors are still active in the field. They have treated a number of new
cases, including models with non-compact symmetry groups, large number of holes, etc.
\cite{davidemarcopaolo}.
After these publications, my research activities have taken a new direction, that will be presented
in the next chapters.
\section{Introduction}
As a significant methodology for generative modeling, autoencoders map the data in the sample space $\mathcal{X}$ to a low-dimensional manifold embedded in a latent space $\mathcal{Z}\subset\mathbb{R}^C$.
Typically, an autoencoder is specified by an encoder $f:\mathcal{X}\mapsto\mathcal{Z}$ mapping the data to latent codes in $\mathcal{Z}$, a predefined or learnable prior distribution $p_\mathcal{Z}$ on $\mathcal{Z}$, and a decoder $h:\mathcal{Z}\mapsto\mathcal{X}$ mapping the latent codes back to $\mathcal{X}$.
By learning these modules, the autoencoder minimizes the discrepancy between the data distribution $p_\mathcal{X}$ and the model distribution $p_{h(\mathcal{Z})}$~\cite{kingma2013auto,tolstikhin2018wasserstein}.
Compared with other generative modeling strategies like generative adversarial networks (GANs)~\cite{goodfellow2014generative} and generative flows~\cite{kingma2018glow}, autoencoders can represent observed data explicitly in the latent space.
Therefore, besides generating high-dimensional data like images~\cite{xu2020learning} and texts~\cite{wang2019topic}, autoencoders have been widely used to learn data representations for other downstream tasks, $e.g.$, data clustering and classification.
However, most existing autoencoders are designed for data lying in a common space.
They are often inapplicable for complicated structured data sampled from incomparable spaces, such as a collection of arbitrarily-sized unaligned graphs ($i.e.$, the graphs have different numbers of nodes and the correspondence between their nodes is unknown).
The variational graph autoencoder (VGAE)~\cite{kipf2016variational} and its variants~\cite{pan2018adversarially,wang2016structural} obtain node-level embeddings rather than a global graph representation.
Recently, some models apply attention-based pooling layers to aggregate the node embeddings as the graph representation~\cite{vinyals2016order,li2016gated,luise2018differential}.
However, these models often require side information to explore the clustering structure of the graphs, and they seldom consider the reconstructive and generative power of the graph representations.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/scheme_gnae.pdf}
\caption{An illustration of our graphon autoencoder.}
\label{fig:scheme}
\end{figure}
To overcome the challenges above, we propose a novel \textbf{Graphon Autoencoder (GNAE)}.
Leveraging the theory of graphon~\cite{lovasz2012large}, we induce graphons ($i.e.$, two-dimensional symmetric Lebesgue measurable functions) from observed graphs and represent the node attributes associated with each graph as the signals defined on the graphons, such that the attributed graphs become the induced graphons and signals, which are in the same functional space.
As illustrated in Figure~\ref{fig:scheme}, our GNAE essentially achieves a Wasserstein autoencoder~\cite{tolstikhin2018wasserstein} for the graphons.
The encoder of our GNAE is an aggregation of Chebyshev graphon filters, which can be implemented as a graph neural network (GNN) for the induced graphons.
It outputs the latent representations of the induced graphons (or, equivalently, the observed graphs).
The posterior distribution of the latent representations is regularized by their prior distribution.
The decoder of our GNAE is a graphon factorization model, which can reconstruct graphons from the latent representations and sample graphs with arbitrary sizes.
For each induced graphon, its latent representation corresponds to the coefficients of the graphon factors in the decoder, which explicitly indicates its similarity to the factors.
Therefore, our GNAE achieves an interpretable and scalable generative model for graphs.
We develop an efficient algorithm to achieve our GNAE.
For each reconstructed graphon, we sample graphs and calculate their distributions conditioned on the reconstructed graphon and the input graphon, respectively.
Taking the KL divergence between the conditional distributions as the underlying distance, we minimize the Wasserstein distance between the data and model distributions and learn our GNAE by a reward-augmented maximum likelihood (RAML) estimation method~\cite{norouzi2016reward}.
This algorithm avoids dense matrix multiplications and the backpropagation through the fused Gromov-Wasserstein (FGW) distance~\cite{vayer2020fused} between graphons, and thus has low computational complexity.
Experiments show that our GNAE performs well on representing and generating graphs, with good generalizability ($i.e.$, generating graphs with various sizes but similar structures) and transferability ($i.e.$, training on one graph set and testing on others).
\section{Proposed Model}
\subsection{From attributed graphs to graphons with signals}
Mathematically, a graphon is a two-dimensional symmetric Lebesgue measurable function, denoted as $g:\Omega^2\mapsto [0, 1]$, where $\Omega$ is a measure space with a probability measure $\mu_{\Omega}$.
Typically, we often set $\Omega=[0, 1]$ and $\mu_{\Omega}$ as a uniform distribution on $\Omega$.
Associated with the graphon, we can define an $M$-dimensional signal on it~\cite{morency2021graphon}, which is denoted as $s:\Omega\mapsto \mathbb{R}^M$.
Denote the graphon space as $\mathcal{G}$ and the signal space as $\mathcal{S}$, respectively.
For arbitrary $g_1,g_2\in\mathcal{G}$, their $\delta_p$ distance~\cite{lovasz2012large} is $\delta_p(g_1, g_2):=\inf_{\phi \in \mathcal{F}_{\Omega}}\|g_1 - g_2^{\phi}\|_{p}$, where $\mathcal{F}_{\Omega}$ denotes the set of measure-preserving maps from $\Omega$ to itself, $g_2^{\phi}(u,v):=g_2(\phi(u),\phi(v))$, and $\|g\|_{p}:=(\int_{\Omega^2} | g(u, v)|^p dudv)^{\frac{1}{p}}$.
Typically, we set $p=1$ or $2$.
If $\delta_{p}(g_1, g_2)=0$, we say $g_1$ and $g_2$ are equivalent, denoted as $g_1\cong g_2$.
The work in~\cite{borgs2008convergent,gao2019graphon} shows that the quotient space $(\widehat{\mathcal{G}}, \delta_{p})$, where $\widehat{\mathcal{G}}:=\mathcal{G}\setminus \cong$, is a well-defined metric space.
$\delta_{p}$ is widely used in practice because of its computability.
Especially, the work in~\cite{janson2013graphons,xu2021learning} indicates that the $\delta_p$ distance is equivalent to the order-$p$ Gromov-Wasserstein (GW) distance~\cite{memoli2011gromov}:
\begin{definition}
For arbitrary $g_1,g_2\in\mathcal{G}$, their order-$p$ Gromov-Wasserstein distance is
\begin{eqnarray}
\begin{aligned}
d_{\text{gw}}(g_1, g_2):=\sideset{}{_{\pi\in\Pi(\mu_{\Omega},\mu_{\Omega})}}\inf\Bigl(\int_{\Omega^2\times\Omega^2}|g_1(u,u')-g_2(v,v')|^p d\pi(u,v)d\pi(u',v')\Bigr)^{\frac{1}{p}},
\end{aligned}
\end{eqnarray}
where $\Pi(\mu_{\Omega},\mu_{\Omega}):=\{\pi\geq 0 | \int_{u\in\Omega}d\pi(u,v)=\mu_{\Omega}, \int_{v\in\Omega}d\pi(u,v)=\mu_{\Omega}\}$.
\end{definition}
For $\mathcal{S}$, we can apply the Wasserstein distance as its metric:
\begin{definition}
For arbitrary $s_1,s_2\in\mathcal{S}$, their order-$p$ Wasserstein distance is
\begin{eqnarray}\label{eq:d_w}
\begin{aligned}
d_{\text{w}}(s_1, s_2) := \sideset{}{_{\pi\in\Pi(\mu_{\Omega},\mu_{\Omega})}}\inf\Bigl(\int_{\Omega^2}\|s_1(u)-s_2(v)\|_p^p d\pi(u,v)\Bigr)^{\frac{1}{p}}.
\end{aligned}
\end{eqnarray}
\end{definition}
\textbf{Sampling graphs:}
A graphon is a nonparametric graph generative model.
We can sample graphs with arbitrary sizes from a graphon by the following steps:
\begin{eqnarray}\label{eq:generate_graph}
\begin{aligned}
\text{1)}~\text{for}~n=1,..,N,~v_n\sim \mu_{\Omega};~\text{2)}~a_{nn'}\sim \text{Bernoulli}(g(v_n, v_{n'}));~\text{3)}~\bm{s}_n\sim \mathcal{N}(s(v_n), \sigma).
\end{aligned}
\end{eqnarray}
The first step is sampling $N$ nodes independently from $\mu_{\Omega}$.
The second step generates an adjacency matrix $\bm{A}=[a_{nn'}]\in \{0, 1\}^{N\times N}$, whose elements are sampled from the Bernoulli distributions determined by the graphon.
When a signal is available, we can sample the attributes associated with the nodes, denoted as $\bm{S}=[\bm{s}_n]\in\mathbb{R}^{N\times M}$, from the distributions determined by the signal, $e.g.$, the Gaussian distributions in (\ref{eq:generate_graph}).
For convenience, we denote $G(\bm{A}, \bm{S})$ as the sampled graph.
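For illustration, the sampling procedure in (\ref{eq:generate_graph}) can be sketched in a few lines of Python/NumPy. The sketch below is not the implementation used in our experiments; the callables \texttt{graphon} and \texttt{signal} are assumed to evaluate $g$ and $s$ on NumPy arrays.
\begin{verbatim}
import numpy as np

def sample_graph(graphon, signal, n_nodes, sigma=0.1, seed=None):
    """Sample an attributed graph G(A, S) from a graphon g and signal s."""
    rng = np.random.default_rng(seed)
    # Step 1: sample node positions v_1, ..., v_N uniformly from [0, 1].
    v = rng.uniform(0.0, 1.0, size=n_nodes)
    # Step 2: Bernoulli edges with probabilities g(v_n, v_n').
    probs = graphon(v[:, None], v[None, :])
    upper = np.triu(rng.random((n_nodes, n_nodes)) < probs, k=1)
    A = (upper | upper.T).astype(int)   # symmetric adjacency, no self-loops
    # Step 3: Gaussian node attributes centred at s(v_n).
    mean = signal(v)
    S = mean + sigma * rng.standard_normal(mean.shape)
    return A, S
\end{verbatim}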
\textbf{Inducing graphons:}
We induce a graphon and a signal from an attributed graph as follows.
\begin{definition}[Induced Graphon]\label{def:step}
For a graph $G(\bm{A},\bm{S})$, where $\bm{A}=[a_{nn'}]\in \{0, 1\}^{N\times N}$ and $\bm{S}=[\bm{s}_n]\in\mathbb{R}^{N\times M}$, we can induce a graphon and its corresponding signal as two step functions:
\begin{eqnarray}\label{eq:induced_graphon}
\begin{aligned}
g_{\mathcal{P}}(v,v')=\sideset{}{_{n,n'=1}^{N}}\sum a_{nn'}1_{\mathcal{P}_n}(v)1_{\mathcal{P}_{n'}}(v'),~\text{and}~s_{\mathcal{P}}(v)=\sideset{}{_{n=1}^{N}}\sum \bm{s}_{n}1_{\mathcal{P}_n}(v),~
\forall~v,v'\in\Omega,
\end{aligned}
\end{eqnarray}
where $\mathcal{P}=\{\mathcal{P}_n\}_{n=1}^{N}$ represents $N$ equitable partitions of $\Omega$, $i.e.$, $\cup_{n}\mathcal{P}_n=\Omega$ and $|\mathcal{P}_n|=|\mathcal{P}_{n'}|$ for all $n\neq n'$.
The indicator $1_{\mathcal{P}_n}(v)=1$ if $v\in\mathcal{P}_{n}$, otherwise it equals $0$.
\end{definition}
Obviously, $g_{\mathcal{P}}\in\mathcal{G}$ and $s_{\mathcal{P}}\in\mathcal{S}$.
The step function approximation lemma~\cite{chan2014consistent} shows that, for graphs sampled from a graphon $g$, the average of their induced graphons provides a consistent estimator of $g$.
The estimation error decreases as the number and the size of the graphs increase.
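A minimal NumPy sketch of Definition~\ref{def:step} is given below (our own illustrative code, not part of the released implementation); the induced step functions are returned as dense grids on a uniform discretization of $[0,1]$.
\begin{verbatim}
import numpy as np

def induce_graphon(A, S, resolution=1000):
    """Induce the step functions (g_P, s_P) from an attributed graph G(A, S)."""
    N = A.shape[0]
    # Assign each of the `resolution` grid points of [0, 1) to one of the
    # N equitable partition cells P_1, ..., P_N.
    grid = np.arange(resolution) / resolution
    cell = np.minimum((grid * N).astype(int), N - 1)
    g_P = A[np.ix_(cell, cell)].astype(float)  # step-function graphon on the grid
    s_P = S[cell]                              # step-function signal on the grid
    return g_P, s_P
\end{verbatim}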
\subsection{A graphon autoencoder in functional space}
The graphons and their associated signals, denoted as $\{(g,s)\}$, can be viewed as samples in a functional space $(\mathcal{X},d_{\mathcal{X}},\mathbb{P})$.
Here, $\mathcal{X}=\mathcal{G}\times \mathcal{S}$, $d_{\mathcal{X}}$ is an underlying distance defining the discrepancy between different samples,\footnote{Note that the underlying distance may not be a strict metric in practice.} which will be introduced below and discussed in-depth in Section~\ref{sec:learning}, and $\mathbb{P}$ represents the set of probability measures defined on $\mathcal{X}$.
By inducing graphons with signals, we can represent the arbitrarily-sized unaligned graphs as the samples in the same space. Accordingly, existing machine learning techniques like autoencoders become applicable.
In particular, our graphon autoencoder (GNAE) can be viewed as a Wasserstein autoencoder of the graphons, which consists of an encoder $f:\mathcal{X}\mapsto\mathcal{Z}$, a decoder $h:\mathcal{Z}\mapsto\mathcal{X}$, and a learnable latent prior distribution $p_{\mathcal{Z}}$.
Given attributed graphs, we obtain a set of induced graphons and associated signals, denoted as $\{(g_{\mathcal{P}}, s_{\mathcal{P}})\}$ and learn the autoencoder to minimize the order-1 Wasserstein distance between the (unknown) data distribution $p_{\mathcal{X}}$ and the model distribution $p_{h(\mathcal{Z})}$, $i.e.$, $\min d_{\text{w}}(p_\mathcal{X}, p_{h(\mathcal{Z})})$, where $p_\mathcal{X},p_{h(\mathcal{Z})}\in\mathbb{P}$.
According to Theorem 1 in~\cite{tolstikhin2018wasserstein}, we relax the optimization problem as
\begin{eqnarray}\label{eq:wae}
\begin{aligned}
\sideset{}{_{f,h,p_{\mathcal{Z}}}}\min~\mathbb{E}_{\bm{x}\sim p_\mathcal{X}}\mathbb{E}_{\bm{z}\sim q_{\mathcal{Z}|\mathcal{X}; f}}[d_{\mathcal{X}}(\bm{x}, h(\bm{z}))] + \gamma d(q_{\mathcal{Z};f},p_\mathcal{Z}),
\end{aligned}
\end{eqnarray}
where each $\bm{x}=(g_{\mathcal{P}},s_{\mathcal{P}})$ represents a tuple of the graphon and signal induced from the corresponding observed graph.
$d_{\mathcal{X}}(\bm{x},h(\bm{z}))$ represents the reconstruction error of the sample $\bm{x}$.
$q_{\mathcal{Z};f}=\mathbb{E}_{\bm{x}\sim p_\mathcal{X}}[q_{\mathcal{Z}|\bm{x}; f}]$ is the expected latent posterior conditioned on different samples, which is required to be close to the latent prior $p_{\mathcal{Z}}$ under the metric $d$.
Parameter $\gamma$ achieves a trade-off between the reconstruction loss and the regularizer.
As shown in~(\ref{eq:wae}), our GNAE learns the encoder, the decoder, and the latent prior distribution jointly.
These modules are implemented as follows.
\begin{figure}[t]
\centering
\subfigure[Aggregated graphon filters (encoder)]{
\includegraphics[height=3.4cm]{figures/encoder.pdf}\label{fig:encoder}
}
\subfigure[Graphon factorization model (decoder)]{
\includegraphics[height=3.4cm]{figures/decoder.pdf}\label{fig:decoder}
}
\caption{Illustrations of the encoder and the decoder of our GNAE.}
\end{figure}
\textbf{Latent prior distribution} Following the work in~\cite{xu2020learning}, we set the latent prior $p_{\mathcal{Z}}$ as a learnable Gaussian mixture model (GMM) and implement the regularizer in (\ref{eq:wae}) as the sliced fused Gromov-Wasserstein (SFGW) distance, $i.e.$, $d(q_{\mathcal{Z};f},p_\mathcal{Z}):=d_{\text{sfgw}}(q_{\mathcal{Z};f},p_\mathcal{Z})$.
This configuration helps us to learn latent representations with clustering structures.
\textbf{Encoder} The encoder of our GNAE is designed as an aggregation of Chebyshev graphon filters.
Given a graphon $g$ and its corresponding signal $s$, we achieve our encoder as follows:
\begin{eqnarray}\label{eq:filter}
\begin{aligned}
&\bm{z} = f((g,s))=\text{MLP}\Bigl(\int_{\Omega}\sideset{}{_{j=0}^{J}}\sum \theta_j(s^{(j)}(v))dv\Bigr),~\text{where}\\
&s^{(0)}(v) = s(v),~s^{(1)}(v) = L_g(s^{(0)}(v)) = \int_{\Omega}g(u,v) (s^{(0)}(v) - s^{(0)}(u))du,~\text{and}\\
&s^{(j)}(v) = 2L_g(s^{(j-1)}(v))-s^{(j-2)}(v)~\text{for}~j>1.
\end{aligned}
\end{eqnarray}
In the $j$-th step, the filtering result $s^{(j)}$ is obtained by applying a Laplacian filter to $s^{(j-1)}$ ($i.e.$, $L_g(s^{(j-1)}(v))$) and treating $s^{(j-2)}$ as an offset.
$\theta_j(\cdot)$ is a linear projection, mapping each $s^{(j)}(v)$ to $\mathbb{R}^{D}$.
We obtain the aggregated filtering result by accumulating and integrating the results of different steps.
Finally, we apply a multi-layer perceptron (MLP) network to derive a $C$-dimensional latent representation.
Figure~\ref{fig:encoder} illustrates the scheme of our encoder.
For general graphons and signals, we can implement the filtering process above based on the Fourier transform~\cite{bracewell1966fourier}, whose computational complexity is high.
Fortunately, for the induced graphon and signal $(g_{\mathcal{P}},s_{\mathcal{P}})$, we can implement the graphon filters by a Chebyshev spectral graph convolutional (ChebConv) network~\cite{defferrard2016convolutional}.
For $j>0$, if $\Omega=[0,1]$ and $\mathcal{P}=\{\mathcal{P}_n\}_{n=1}^{N}$ are equitable partitions, we have
\begin{eqnarray}\label{eq:gcn}
\begin{aligned}
\int_{\Omega}g_{\mathcal{P}}(u,v) (s_{\mathcal{P}}^{(j-1)}(v) - s_{\mathcal{P}}^{(j-1)}(u))du
=\frac{1}{N}\sideset{}{_{n=1}^{N}}\sum(\bm{L}\bm{S}^{(j-1)})_{n}1_{\mathcal{P}_n}(v),~\text{for}~v\in\Omega,
\end{aligned}
\end{eqnarray}
where $\bm{L}=\text{diag}(\bm{A1})-\bm{A}$ is the Laplacian graph matrix, $\bm{S}^{(j-1)}=[\bm{s}_n^{(j-1)}]$ is a matrix of the $(j-1)$-th signal, and each $\bm{s}_n^{(j-1)}$ corresponds to the signal in the partition $\mathcal{P}_n$.
Accordingly, $(\bm{L}\bm{S}^{(j-1)})_{n}$ is the $n$-th row of $\bm{L}\bm{S}^{(j-1)}$.
Plugging (\ref{eq:gcn}) into (\ref{eq:filter}), we can derive the latent representation as $\bm{z}=\text{MLP}\bigl(\sum_{n=1}^{N}\sum_{j=0}^{J}\frac{1}{N^{j+1}}(\theta_j(\bm{S}^{(j)}))_{n}\bigr)$, which can be implemented as a ChebConv network followed by an average pooling layer and an MLP.
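As a concrete illustration, the ChebConv-style implementation of (\ref{eq:filter}) and (\ref{eq:gcn}) can be sketched in PyTorch as follows. This is our own simplified sketch for a single graph with the Laplacian recursion written out explicitly and $J\geq 1$ assumed; the normalization constants and the exact architecture used in our experiments may differ.
\begin{verbatim}
import torch
import torch.nn as nn

class GraphonEncoder(nn.Module):
    """Aggregated Chebyshev graphon filters for one induced graphon."""

    def __init__(self, in_dim, hid_dim, latent_dim, order):
        super().__init__()
        assert order >= 1
        self.order = order
        self.theta = nn.ModuleList(
            [nn.Linear(in_dim, hid_dim, bias=False) for _ in range(order + 1)])
        self.mlp = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, latent_dim))

    def forward(self, A, S):
        # A: (N, N) dense adjacency matrix, S: (N, M) node attributes.
        N = A.shape[0]
        L = torch.diag(A.sum(dim=1)) - A             # graph Laplacian
        s_prev2, s_prev1 = S, (L @ S) / N            # s^(0) and s^(1)
        out = self.theta[0](s_prev2) + self.theta[1](s_prev1)
        for j in range(2, self.order + 1):
            s_j = 2.0 * (L @ s_prev1) / N - s_prev2  # Chebyshev recursion
            out = out + self.theta[j](s_j)
            s_prev2, s_prev1 = s_prev1, s_j
        return self.mlp(out.mean(dim=0))             # average pooling + MLP
\end{verbatim}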
\textbf{Decoder}
For each $\bm{z}=[z_1,..,z_C]$ derived from the encoder, we apply a factorization model to reconstruct the corresponding graphon and signal.
Specifically, the decoder consists of $C$ graphon factors, denoted as $\{(\tilde{g}_c, \tilde{s}_c)\}_{c=1}^{C}$.
Each graphon factor corresponds to two step functions defined as (\ref{eq:induced_graphon}) shows.
Accordingly, the reconstructed sample can be represented as $h(\bm{z})=(\hat{g},\hat{s})$, where
\begin{eqnarray}\label{eq:reconstruct}
\begin{aligned}
\hat{g}=\sideset{}{_{c=1}^{C}}\sum \tilde{z}_c \tilde{g}_c,~\text{and}~\hat{s}=\alpha(\sideset{}{_{c=1}^{C}}\sum \tilde{z}_c \tilde{s}_c),~\text{with}~\tilde{z}_c =\text{softmax}_c(\bm{z})=\frac{\exp(z_c)}{\sum_{c'}\exp(z_{c'})}.
\end{aligned}
\end{eqnarray}
We reparameterize each $\tilde{g}_c$ as $\sigma(b_c)$, where $b_c(u,v):\Omega^2\mapsto \mathbb{R}$ is a step function with unbounded output range and $\sigma(\cdot)$ is a sigmoid function, and let the latent representation pass through a softmax layer.
This setting ensures that $\{\tilde{g}_c\}_{c=1}^{C}$ and $\hat{g}$ lie in the space $\mathcal{G}$.
The function $\alpha(\cdot)$ depends on the type of the original signal, which can be ReLU, softmax, sigmoid, $etc$.
As shown in Figure~\ref{fig:decoder}, the graphon factors may have different partitions.
Applying the inclusion-exclusion principle~\cite{jukna2011extremal}, we have
\begin{proposition}\label{prop:partition}
Suppose that $\{\tilde{g}_c:[0,1]^2\mapsto [0,1]\}_{c=1}^{C}$ are 2D step functions, each of which has $N_c$ equitable partitions.
Denote $\{\mathcal{L}_c\}_{c=1}^{C}$ as the sets of the landmarks indicating the partitions, where $\mathcal{L}_c=\{\frac{1}{N_c},...,\frac{N_c-1}{N_c}\}$.
For the $\hat{g}$ derived by (\ref{eq:reconstruct}), the number of its partitions is
$|\mathcal{P}|=|\cup_{c=1}^{C}\mathcal{L}_c| + 1=\sum_{\emptyset\neq \mathcal{C}\subseteq\{1,..,C\}} (-1)^{|\mathcal{C}|+1}|\cap_{c\in\mathcal{C}}\mathcal{L}_c| + 1$.
If all the $N_c$'s are distinct prime numbers, $|\mathcal{P}|=\sum_{c=1}^{C}|\mathcal{L}_c|+1=\sum_{c=1}^{C}(N_c-1)+1$.
\end{proposition}
Proposition~\ref{prop:partition} shows that the number of partitions of the reconstructed graphon can be much larger than that of any individual graphon factor, which increases the capacity of our model.
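The decoder in (\ref{eq:reconstruct}) can be sketched as the following PyTorch module. This is our own simplified sketch: every factor is evaluated on a common uniform grid before mixing, $\alpha(\cdot)$ is fixed to ReLU, and the factor sizes and other details are hypothetical.
\begin{verbatim}
import torch
import torch.nn as nn

class GraphonFactorDecoder(nn.Module):
    """Graphon factorization decoder with C learnable step-function factors."""

    def __init__(self, factor_sizes, attr_dim, resolution=120):
        super().__init__()
        self.resolution = resolution
        # b[c] parameterizes the graphon factor via sigmoid(b[c]); s[c] is its signal.
        self.b = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(n, n)) for n in factor_sizes])
        self.s = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(n, attr_dim)) for n in factor_sizes])

    def _grid_index(self, n_parts):
        # Map `resolution` uniform grid points of [0, 1) to the n_parts cells.
        return torch.clamp(
            (torch.arange(self.resolution) * n_parts) // self.resolution,
            max=n_parts - 1)

    def forward(self, z):
        w = torch.softmax(z, dim=-1)          # mixture weights
        g_hat, s_hat = 0.0, 0.0
        for c in range(len(self.b)):
            idx = self._grid_index(self.b[c].shape[0])
            g_c = torch.sigmoid(self.b[c])[idx][:, idx]   # up-sampled factor
            g_hat = g_hat + w[c] * g_c
            s_hat = s_hat + w[c] * self.s[c][idx]
        return g_hat, torch.relu(s_hat)       # alpha(.) chosen as ReLU here
\end{verbatim}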
\section{Learning algorithm}\label{sec:learning}
\subsection{The fused Gromov-Wasserstein distance between graphons}\label{ssec:distance}
Besides the modules above, the key of our GNAE is the underlying distance $d_{\mathcal{X}}$.
A straightforward way is implementing $d_{\mathcal{X}}$ as the fused Gromov-Wasserstein (FGW) distance~\cite{vayer2020fused}:
\begin{definition}
For $\bm{x}_1,\bm{x}_2\in\mathcal{X}$, where $\bm{x}_1=(g_1,s_1)$ and $\bm{x}_2=(g_2,s_2)$, their order-$p$ fused Gromov-Wasserstein distance, denoted as $d_{\text{fgw}}(\bm{x}_1, \bm{x}_2)$, is
\begin{eqnarray*}
\begin{aligned}
\inf_{\pi\in\Pi(\mu_{\Omega},\mu_{\Omega})}\Bigl(\int_{\Omega^2\times\Omega^2}|g_1(u,u')-g_2(v,v')|^p d\pi(u,v)d\pi(u',v')+\int_{\Omega^2}\|s_1(u)-s_2(v)\|_p^p d\pi(u,v)\Bigr)^{\frac{1}{p}},
\end{aligned}
\end{eqnarray*}
\end{definition}
The FGW distance combines the GW distance between graphons and the Wasserstein distance between signals, enforcing them to share the same optimal transport plan $\pi(u,v)$.
It is a metric for the quotient space $\widehat{\mathcal{X}}:=\mathcal{X}\setminus \cong$ when $p=1$ and a semi-metric when $p>1$~\cite{vayer2020fused}.
When the graphons and signals are formulated as step functions, we can compute the FGW distance by solving an optimization problem with finitely many variables.
\begin{proposition}\label{prop1}
Given $\bm{x}_{1,\mathcal{P}}=(g_{1,\mathcal{P}},s_{1,\mathcal{P}})$ and $\bm{x}_{2,\mathcal{Q}}=(g_{2,\mathcal{Q}},s_{2,\mathcal{Q}})$, where
\begin{eqnarray*}
&g_{1,\mathcal{P}}(v,v')=\sideset{}{_{n,n'=1}^{N}}\sum g_{1,nn'}1_{\mathcal{P}_n}(v)1_{\mathcal{P}_{n'}}(v'),~g_{2,\mathcal{Q}}(v,v')=\sideset{}{_{m,m'=1}^{M}}\sum g_{2,mm'}1_{\mathcal{Q}_m}(v)1_{\mathcal{Q}_{m'}}(v'),\\
&s_{1,\mathcal{P}}(v)=\sideset{}{_{n=1}^{N}}\sum\bm{s}_{1,n} 1_{\mathcal{P}_n}(v),~s_{2,\mathcal{Q}}(v)=\sideset{}{_{m=1}^{M}}\sum\bm{s}_{2,m} 1_{\mathcal{Q}_m}(v)
\end{eqnarray*}
are step functions, we have
\begin{eqnarray}\label{eq:d_fgw2}
\begin{aligned}
d_{\text{fgw}}(\bm{x}_{1,\mathcal{P}}, \bm{x}_{2,\mathcal{Q}})=\sideset{}{_{\bm{T}\in\Pi(\bm{\mu}_{\mathcal{P}},\bm{\mu}_{\mathcal{Q}})}}\min ( \langle\bm{D}_g, \bm{T}\otimes\bm{T}\rangle + \langle\bm{D}_s, \bm{T}\rangle )^{\frac{1}{p}},
\end{aligned}
\end{eqnarray}
where $\bm{D}_g=[|g_{1,nn'}-g_{2,mm'}|^p]\in\mathbb{R}^{N^2\times M^2}$, $\bm{D}_s=[\|\bm{s}_{1,n}-\bm{s}_{2,m}\|_p^p]\in\mathbb{R}^{N\times M}$, $\otimes$ represents Kronecker product, and $\Pi(\bm{\mu}_{\mathcal{P}},\bm{\mu}_{\mathcal{Q}})=\{\bm{T}\geq\bm{0}|\bm{T}\bm{1}=\bm{\mu}_{\mathcal{P}},\bm{T}^{\top}\bm{1}=\bm{\mu}_{\mathcal{Q}}\}$ with $\bm{\mu}_{\mathcal{P}}=[\frac{|\mathcal{P}_1|}{|\Omega|},..,\frac{|\mathcal{P}_N|}{|\Omega|}]$ and $\bm{\mu}_{\mathcal{Q}}=[\frac{|\mathcal{Q}_1|}{|\Omega|},..,\frac{|\mathcal{Q}_M|}{|\Omega|}]$.
\end{proposition}
Although (\ref{eq:d_fgw2}) is computable, its computational complexity is as high as $\mathcal{O}(N^4)$ for graphons with $N$ partitions (or $\mathcal{O}(N^3)$ if $p=2$~\cite{peyre2016gromov}).
Worse still, because the graphons reconstructed by our GNAE are non-sparse 2D step functions, some commonly-used acceleration strategies like the sliced FGW distance~\cite{xu2020learning} and sparse matrix multiplications~\cite{titouan2019optimal,xu2019scalable} become inapplicable.
Additionally, the underlying distance is parameterized by the GNAE model, so we have to consider the gradient of the optimal transport matrix $\bm{T}$ with respect to the model parameters, which is expensive in both time and memory~\cite{xie2020hypergradient}.
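For reference, computing (\ref{eq:d_fgw2}) for two step-function graphons can be done with the POT library (\texttt{ot}), as sketched below. Note that POT weights the structure and feature terms by a trade-off parameter \texttt{alpha} rather than taking their plain sum, so the value matches (\ref{eq:d_fgw2}) only up to that reweighting; the wrapper and its name are our own.
\begin{verbatim}
import numpy as np
import ot  # POT: Python Optimal Transport

def fgw_between_step_graphons(g1, S1, g2, S2, alpha=0.5):
    """FGW-style distance between two induced graphons with signals.

    g1: (N, N) and g2: (M, M) step-function values, S1: (N, d) and
    S2: (M, d) signal values; the partition measures are uniform.
    """
    N, M = g1.shape[0], g2.shape[0]
    p = np.full(N, 1.0 / N)
    q = np.full(M, 1.0 / M)
    D_s = ot.dist(S1, S2, metric='sqeuclidean')   # feature cost matrix
    fgw2 = ot.gromov.fused_gromov_wasserstein2(
        D_s, g1, g2, p, q, loss_fun='square_loss', alpha=alpha)
    return float(np.sqrt(max(fgw2, 0.0)))
\end{verbatim}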
\subsection{The KL divergence between conditional graph distributions}\label{ssec:raml}
Facing the issues above, we need to explore a surrogate of the FGW distance when learning our GNAE model.
A competitive choice is the KL divergence between the distributions of graphs conditioned on $h(\bm{z})$ and $\bm{x}$, denoted as $d_{\text{KL}}(q(G|\bm{x}), p(G|h(\bm{z})))$.
Specifically, given a reconstructed sample, $i.e.$, $h(\bm{z})=(\hat{g},\hat{s})$, we sample a set of attributed graphs, denoted as $\mathcal{Y}=\{G(\widehat{\bm{A}},\widehat{\bm{S}})\}$.
Each $G(\widehat{\bm{A}},\widehat{\bm{S}})$ has $K$ nodes.
According to (\ref{eq:generate_graph}), the likelihood of $G(\widehat{\bm{A}},\widehat{\bm{S}})$ is
\begin{eqnarray}\label{eq:pG}
\begin{aligned}
&p(G|h(\bm{z})) =p(\mathcal{V})p(\widehat{\bm{A}}|\hat{g},\mathcal{V})p(\widehat{\bm{S}}|\hat{s},\mathcal{V})
=\frac{1}{|\Omega|^{K}}\sideset{}{_{k,k'=1}^{K}}\prod p(\hat{a}_{kk'}|\hat{g}(v_k,v_{k'}))
\sideset{}{_{k=1}^{K}}\prod p(\hat{\bm{s}}_{k}|\hat{s}(v_k))
\\
&\propto\sideset{}{_{k,k'=1}^{K}}\prod \hat{g}(v_{k},v_{k'})^{\hat{a}_{kk'}}(1-\hat{g}(v_{k},v_{k'}))^{1-\hat{a}_{kk'}}\sideset{}{_{k=1}^{K}}\prod \exp\Bigl(-\frac{\|\hat{\bm{s}}_{k} - \hat{s}(v_k)\|_2^2}{2M\sigma^2}\Bigr),
\end{aligned}
\end{eqnarray}
where $\mathcal{V}=\{v_1,..,v_K\}$ and each $v_k$ is sampled independently from $\mu_{\Omega}$, so that $p(\mathcal{V})=\frac{1}{|\Omega|^{K}}$ when $\mu_{\Omega}$ is a uniform distribution.
Additionally, we leverage an exponentiated payoff distribution~\cite{norouzi2016reward} to approximate the probability of each $G(\widehat{\bm{A}},\widehat{\bm{S}})$ conditioned on the observed graphon $\bm{x}$:
\begin{eqnarray}\label{eq:qG}
\begin{aligned}
q(G|\bm{x}) = \frac{\exp(r(\hat{\bm{x}}_{G},\bm{x}))}{\sum_{G'\in\mathcal{Y}}\exp(r(\hat{\bm{x}}_{G'},\bm{x}))}=\frac{\exp(-d_{\text{fgw}}(\hat{\bm{x}}_{G},\bm{x}) / \tau)}{\sum_{G'\in\mathcal{Y}}\exp(-d_{\text{fgw}}(\hat{\bm{x}}_{G'},\bm{x}) / \tau)},
\end{aligned}
\end{eqnarray}
where $\hat{\bm{x}}_{G}$ is the graphon (and the signal) induced from the graph $G$, and $r(\hat{\bm{x}}_{G},\bm{x})=-\frac{d_{\text{fgw}}(\hat{\bm{x}}_{G},\bm{x})}{\tau}$ is the reward function implemented as the negative order-2 FGW distance between $\hat{\bm{x}}_{G}$ and $\bm{x}$.
Parameter $\tau$ controls the smoothness of $q(G|\bm{x})$.
In our work, we set $\tau$ adaptively as $\min_{G\in\mathcal{Y}} d_{\text{fgw}}(\hat{\bm{x}}_G,\bm{x})$.
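A NumPy sketch of the two conditional distributions is given below: \texttt{log\_p\_graph} evaluates the (unnormalized) log-likelihood in (\ref{eq:pG}) for one sampled graph, and \texttt{reward\_weights} computes the exponentiated payoff weights of (\ref{eq:qG}) with the adaptive $\tau$. The function names and interfaces are our own illustrative choices.
\begin{verbatim}
import numpy as np

def log_p_graph(A_hat, S_hat, nodes, g_hat, s_hat, sigma=0.1):
    """Unnormalized log p(G | h(z)) for a sampled graph G(A_hat, S_hat)."""
    # g_hat / s_hat evaluate the reconstructed graphon and signal at `nodes`.
    P = np.clip(g_hat(nodes[:, None], nodes[None, :]), 1e-6, 1.0 - 1e-6)
    log_bernoulli = A_hat * np.log(P) + (1 - A_hat) * np.log(1.0 - P)
    M = S_hat.shape[1]
    log_gauss = -np.sum((S_hat - s_hat(nodes)) ** 2, axis=1) / (2.0 * M * sigma ** 2)
    return log_bernoulli.sum() + log_gauss.sum()

def reward_weights(fgw_dists):
    """q(G | x) over a set of sampled graphs, with tau = min FGW distance."""
    d = np.asarray(fgw_dists, dtype=float)
    tau = max(d.min(), 1e-8)
    logits = -d / tau
    logits -= logits.max()                 # for numerical stability
    w = np.exp(logits)
    return w / w.sum()
\end{verbatim}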
\subsection{Reward-augmented maximum likelihood estimation}
Leveraging $d_{\text{KL}}(q(G|\bm{x}), p(G|h(\bm{z})))$, we develop an efficient learning algorithm with much lower computational complexity.
Specifically, we can rewrite $d_{\text{KL}}(q(G|\bm{x}), p(G|h(\bm{z})))$ as
\begin{eqnarray}\label{eq:kld}
\begin{aligned}
d_{\text{KL}}(q(G|\bm{x}), p(G|h(\bm{z})))=-\mathbb{E}_{G\sim q(G|\bm{x})}[\log p(G|h(\bm{z}))] + \mathbb{E}_{G\sim q(G|\bm{x})}[\log q(G|\bm{x})],
\end{aligned}
\end{eqnarray}
where the second term is the entropy of the sampled graphs, which is a constant with respect to the model.
Plugging (\ref{eq:kld}) into (\ref{eq:wae}), we learn our GNAE by
\begin{eqnarray}\label{eq:gnae}
\begin{aligned}
\sideset{}{_{f,h,p_{\mathcal{Z}}}}\min-\mathbb{E}_{\bm{x}\sim p_\mathcal{X}}\mathbb{E}_{\bm{z}\sim q_{\mathcal{Z}|\mathcal{X}; f}}\mathbb{E}_{G\sim q(G|\bm{x})}[\log p(G|h(\bm{z}))] + \gamma d_{\text{sfgw}}(q_{\mathcal{Z};f},p_\mathcal{Z}).
\end{aligned}
\end{eqnarray}
This optimization problem can be solved by the reward-augmented maximum likelihood (RAML) method~\cite{norouzi2016reward}.
The scheme of our learning algorithm is shown in Algorithm~\ref{alg:raml}.
In principle, when a sampled graph is close to the original input graph, the FGW distance between its induced graphon and the input $\bm{x}$ will be small.
Accordingly, the log-likelihood of the graph ($i.e.$, $\log p(G|h(\bm{z}))$) will be assigned a large weight ($i.e.$, $q(G|\bm{x})$), and the model will be updated to increase this likelihood.
\begin{algorithm}[t]
\caption{Learning a GNAE by RAML}
\label{alg:raml}
\begin{algorithmic}[1]
\INPUT A set of graphons and signals induced from observed attributed graphs, denoted as $\mathcal{X}$.
\OUTPUT An encoder $f$, a decoder $h$, and a latent prior $p_{\mathcal{Z}}(\bm{z})=\frac{1}{T}\sum_t\mathcal{N}(\bm{z};\bm{\mu}_{t},\text{diag}(\bm{\sigma}_t^2))$.
\STATE \textbf{for} each epoch
\STATE \quad\textbf{for} each batch $\{\bm{x}_n\}_{n=1}^{N_b}\subset \mathcal{X}$
\STATE \quad\quad\textbf{for} $n=1,..,N_b$
\STATE \quad\quad\quad Samples of $q_{\mathcal{Z};f}$: $\bm{z}_n=f(\bm{x}_n)$.
\STATE \quad\quad\quad Samples of $p_{\mathcal{Z}}$: $t\sim \text{Categorical}(\frac{1}{T})$, and $\bm{z}'_n\sim \mathcal{N}(\bm{\mu}_{t},\text{diag}(\bm{\sigma}_t^2))$.
\STATE \quad\quad\quad Sample attributed graphs $\{G_i(\widehat{\bm{A}}_i,\widehat{\bm{S}}_i)\}_{i=1}^{I}$ from $h(\bm{z}_n)$ and induce $\{\hat{\bm{x}}_{G_i}\}_{i=1}^{I}$ by (\ref{eq:induced_graphon}).
\STATE \quad\quad\quad Compute $\{d_{\text{fgw}}(\hat{\bm{x}}_{G_i},\bm{x}_n)\}_{i=1}^{I}$ by (\ref{eq:d_fgw2}) and obtain $\{q(G_i|\bm{x}_n)\}_{i=1}^{I}$ by (\ref{eq:qG}).
\STATE \quad\quad\quad Compute $\{p(G_i|h(\bm{z}_n))\}_{i=1}^{I}$ by (\ref{eq:pG}).
\STATE \quad\quad Calculate $\mathcal{L}=-\sum_{n=1}^{N_b}\sum_{i=1}^{I}q(G_i|\bm{x}_n)\log p(G_i|h(\bm{z}_n)) + \gamma d_{\text{sfgw}}(q_{\mathcal{Z};f}, p_{\mathcal{Z}})$.
\STATE \quad\quad Update the model by Adam optimizer~\cite{kingma2014adam}.
\end{algorithmic}
\end{algorithm}
Although our learning algorithm still involves computing FGW distances, it has obvious advantages in computational complexity compared to using $d_{\text{fgw}}(\bm{x},h(\bm{z}))$ as the underlying distance directly.
Firstly, the number of each sampled graph's nodes ($i.e.$, $K$) can be much smaller than that of the reconstructed graphon's partitions ($i.e.$, $N$).
Additionally, we replace dense reconstructed graphons with sparse adjacency matrices of the sampled graphs with the help of the Bernoulli sampling.
Therefore, it is relatively easy to compute the $d_{\text{fgw}}(\hat{\bm{x}}_{G_i},\bm{x}_n)$ in Line 7 of Algorithm~\ref{alg:raml}: its computational complexity is $\mathcal{O}(EK)$, where $E$ is the number of edges of the graph inducing $\bm{x}_n$.
Moreover, the gradient corresponding to the first term of $\mathcal{L}$ is $-\sum_{n=1}^{N_b}\sum_{i=1}^{I}q(G_i|\bm{x}_n)\nabla\log p(G_i|h(\bm{z}_n))$.
Here, $q(G_i|\bm{x}_n)$ is treated as a constant, so the corresponding FGW distance is not involved in the backpropagation, which greatly reduces the time and memory cost.
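For completeness, the per-batch loss in Line 9 of Algorithm~\ref{alg:raml} can be sketched in PyTorch as follows (our own sketch; the sliced FGW regularizer is treated as a precomputed scalar \texttt{sfgw\_term}). The \texttt{detach} call reflects the fact that the weights $q(G_i|\bm{x}_n)$ are constants for backpropagation.
\begin{verbatim}
import torch

def raml_batch_loss(log_p, q_weights, sfgw_term, gamma=0.1):
    """Weighted negative log-likelihood plus the latent regularizer.

    log_p[n, i]    : log p(G_i | h(z_n)), kept in the autograd graph.
    q_weights[n, i]: q(G_i | x_n), treated as a constant.
    """
    recon = -(q_weights.detach() * log_p).sum()
    return recon + gamma * sfgw_term
\end{verbatim}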
\section{Connections to Existing Work}
\textbf{Autoencoders}
The principle of autoencoders is to minimize the discrepancy between the data and model distributions.
The variational autoencoder (VAE)~\cite{kingma2013auto} and its variants~\cite{tomczak2018vae,wang2019topic} apply the KL divergence as the discrepancy and learn a probabilistic autoencoder by maximizing the evidence lower bound (ELBO).
The Wasserstein autoencoders (WAEs)~\cite{tolstikhin2018wasserstein,kolouri2018sliced} minimize a relaxed form of the Wasserstein distance to learn a deterministic autoencoder.
Both these two strategies lead to a learning task including a reconstruction loss of observed data and a regularizer penalizing the distance between the prior and the posterior (or the mixture of different posterior distributions) in the latent space.
The prior can be a predefined normal distribution or a learnable mixture model~\cite{takahashi2019variational,xu2020learning}.
The commonly-used distances between the prior and the posterior include KL divergence, maximum mean discrepancy, GAN-based loss, FGW distance, $etc$.
\textbf{Generative graph modeling}
The early graph models like the Erd{\H{o}}s-R{\'e}nyi graph~\cite{erdHos1960evolution} simulate large graphs to yield certain statistical properties but cannot capture complicated mechanisms of real-world graphs.
Recently, the GNN-based graph generative models have been widely used, which can be categorized into two classes.
The first class learns node-level embeddings and estimates edges based on the pairs of the embeddings~\cite{kipf2016variational,kipf2017semi,niepert2016learning,xu2018powerful}, which works well on link prediction~\cite{zhang2018link} and conditional graph generation~\cite{yang2019conditional}.
The second class applies various pooling layers~\cite{ying2018hierarchical,vinyals2016order,li2016gated} to obtain graph embeddings and then leverages recurrent neural networks to generate nodes and edges in an autoregressive manner~\cite{you2018graph,shi2019graphaf,jin2020hierarchical,dai2020scalable}.
Besides the GNN-based models, the Gromov-Wasserstein factorization (GWF) model~\cite{xu2020gromov} reconstructs each graph as a weighted GW barycenter~\cite{peyre2016gromov} of learnable graph factors, which achieves encouraging performance on graph clustering.
Following the GWF model, the graph dictionary learning (GDL) model~\cite{vincent2021online} leverages a linear factorization to reconstruct graphs, which has lower complexity than the GWF model.
However, the models above seldom consider the clustering structure or the distribution of the graph embeddings they learn.
\textbf{Graphon-based graph models}
A graphon is a nonparametric graph model, which has been widely used in network modeling~\cite{avella2018centrality,gao2019graphon} and optimization~\cite{parise2018graphon}.
To infer graphons from observed graphs, many methods have been proposed, $e.g.$, the stochastic block approximation (SBA) methods~\cite{airoldi2013stochastic,channarond2012classification,chan2014consistent} and the low-rank approximation methods~\cite{keshavan2010matrix,chatterjee2015matrix,xu2018rates}.
These methods require well-aligned graphs, an assumption that is questionable in practice.
The work in~\cite{xu2021learning} relaxes this requirement, learning the graphon and aligning the observed graphs alternately by solving a GW barycenter problem~\cite{peyre2016gromov}.
All the methods above are based on the weak regularity lemma~\cite{lovasz2012large}, approximating graphons by 2D step functions.
Recently, the work in~\cite{ruiz2020graphon,ruiz2020graphon2} bridges the gap between graphon-based signal processing and graph neural networks, which inspires the design of our encoder.
However, existing methods either learn graphons to generate graphs or leverage graphons to process the information of nodes.
None of them consider learning the distribution of graphons as we did.
\textbf{The novelties of our GNAE} To our knowledge, our graphon autoencoder makes the first attempt to build a WAE in the graphon space, which provides a new algorithmic framework for graphon distribution modeling and graph generation.
Our GNAE extends the GNN-based model and the factorization model to the functional space of graphons.
The encoder achieves graphon filtering, whose GNN-based implementation is a special case for induced graphons.
The decoder improves the GDL model by leveraging graphon factors with different partitions.
Combining these two strategies in the framework of autoencoders, our GNAE inherits their advantages and achieves better interpretability and capacity.
\section{Experiments}\label{sec:exp}
\subsection{Graph representation and classification}
To demonstrate the usefulness of our GNAE model, we test it on six public graph datasets and compare it with state-of-the-art methods on graph modeling.
The datasets we used can be categorized into three classes:
The \textbf{MUTAG} and the \textbf{PTC-MR} in~\cite{kriege2012subgraph} contain molecules with categorical node attributes;
the \textbf{PROTEIN} and the \textbf{ENZYMES} in~\cite{borgwardt2005protein} contain proteins with continuous node attributes;
and the \textbf{IMDB-B} and the \textbf{IMDB-M} in~\cite{yanardag2015deep} contain social networks without node attributes.
These datasets can be downloaded from \url{https://chrsmrrs.github.io/datasets/}~\cite{Morris+2020}.
For the datasets without node attributes, we treat the local degree profiles~\cite{cai2018simple} of nodes as the attributes.
The baselines include: ($i$) the kernel-based methods, $e.g.$, Random Walk Kernel (\textbf{RWK})~\cite{gartner2003graph}, Shortest Path Kernel (\textbf{SPK})~\cite{borgwardt2005protein}, Graphlet Kernel (\textbf{GK})~\cite{shervashidze2009efficient}, Weisfeiler-Lehman Sub-tree Kernel (\textbf{WLK})~\cite{shervashidze2011weisfeiler}, Deep Graph Kernel (\textbf{DGK})~\cite{yanardag2015deep}, Multi-Scale Laplacian Kernel (\textbf{MLGK})~\cite{kondor2016multiscale}, and Fused Gromov-Wasserstein Kernel (\textbf{FGWK})~\cite{titouan2019optimal};
($ii$) the GNN-based methods, $e.g.$, \textbf{sub2vec}~\cite{adhikari2018sub2vec}, \textbf{graph2vec}~\cite{narayanan2017graph2vec}, and the state-of-the-art InfoGraph method~\cite{sun2019infograph} that uses Graph Isomorphismic Network (GIN)~\cite{xu2018powerful} and Differentiable Pooling (DP)~\cite{ying2018hierarchical} as its backbone model, respectively (\textbf{InfoGraph$_{\text{GIN}}$} and \textbf{InfoGraph$_{\text{DP}}$});
($iii$) the factorization models (FMs), $e.g.$, the Gromov-Wasserstein factorization (\textbf{GWF})~\cite{xu2020gromov} and the Graph Dictionary Learning (\textbf{GDL})~\cite{vincent2021online}.
We reproduce the baselines either based on the code released by the authors or our own implementations and set their hyperparameters according to the released code or the corresponding references.
When implementing our GNAE model, we consider two variants: applying the FGW distance directly as the underlying distance and learning the GNAE model by alternating optimization (\textbf{GNAE}$_{\textbf{FGW}}$), or applying the KL divergence of graph distributions as the underlying distance and learning the model by the proposed RAML (\textbf{GNAE}$_{\textbf{RAML}}$).
For the GNAE models, the settings of their hyperparameters are given in Appendix.
We test the methods above on graph classification.
For each kernel-based method, we train a kernel SVM classifier~\cite{chang2011libsvm}.
For other methods, we learn graph representations explicitly in an unsupervised way and train an SVM classifier based on the representations.
The SVM classifier of each method is trained based on 10-fold cross-validation, and we use the same random seed to split data and select the most suitable SVM kernel function manually.
Table~\ref{tab:class} lists the mean and the standard deviation of the classification accuracy achieved by the methods on each dataset.
We can find that the performance of our GNAE models is at least comparable to that of the state-of-the-art methods ($e.g.$, MLGK, FGWK and InfoGraph).
In particular, the proposed GNAE$_{\text{RAML}}$ achieves top-5 accuracy on four of the six datasets, as InfoGraph$_{\text{GIN}}$ does.
Note that for the GNN-based methods, the dimension of their graph representations is over $100$.
However, our GNAE models achieve competitive results based on representations with a much lower dimension ($\leq 30$ for all the datasets).
For the challenging ENZYMES dataset, our GNAE methods do not work well.
A potential reason for this phenomenon is the model misspecification issue --- the node attributes in this dataset are sparse and have high dynamic ranges, so the smoothed signal model we applied may not be able to describe and reconstruct such attributes well.
\begin{table}[!t]
\centering
\caption{Comparison on classification accuracy ($\%$).}
\label{tab:class}
\begin{small}
\begin{threeparttable}
\begin{tabular}{
@{\hspace{1pt}}c@{\hspace{2pt}}|
@{\hspace{2pt}}l@{\hspace{3pt}}|
@{\hspace{3pt}}c@{\hspace{3pt}}
@{\hspace{3pt}}c@{\hspace{3pt}}
@{\hspace{3pt}}c@{\hspace{3pt}}
@{\hspace{3pt}}c@{\hspace{3pt}}
@{\hspace{3pt}}c@{\hspace{3pt}}
@{\hspace{3pt}}c@{\hspace{3pt}}|
@{\hspace{3pt}}c@{\hspace{1pt}}
}
\hline\hline
Category
&Method
&MUTAG
&PTC-MR
&PROTEIN
&ENZYMES
&IMDB-B
&IMDB-M
&\# in Top5\\ \hline
\multirow{6}{*}{Kernels}
&RWK
&83.72$_{\pm \text{1.50}}$
&57.85$_{\pm \text{1.30}}$
&73.95$_{\pm \text{0.59}}$
&28.52$_{\pm \text{1.83}}$
&50.70$_{\pm \text{0.26}}$
&34.65$_{\pm \text{0.19}}$
&0\\
&SPK
&\textbf{85.22}$_{\pm \text{2.43}}$
&58.24$_{\pm \text{2.44}}$
&\textbf{74.93}$_{\pm \text{0.86}}$
&38.87$_{\pm \text{3.01}}$
&55.60$_{\pm \text{0.22}}$
&37.99$_{\pm \text{0.30}}$
&2\\
&GK
&81.66$_{\pm \text{2.11}}$
&57.26$_{\pm \text{1.41}}$
&71.10$_{\pm \text{1.08}}$
&30.36$_{\pm \text{4.84}}$
&65.90$_{\pm \text{0.98}}$
&43.89$_{\pm \text{0.38}}$
&0\\
&WLK
&80.72$_{\pm \text{3.00}}$
&57.97$_{\pm \text{0.49}}$
&73.01$_{\pm \text{1.09}}$
&54.69$_{\pm \text{3.27}}$
&\textbf{72.30}$_{\pm \text{3.44}}$
&46.35$_{\pm \text{0.46}}$
&1\\
&DGK
&\textbf{87.44}$_{\pm \text{2.72}}$
&60.08$_{\pm \text{2.55}}$
&74.27$_{\pm \text{1.12}}$
&53.22$_{\pm \text{1.01}}$
&66.90$_{\pm \text{0.56}}$
&44.55$_{\pm \text{0.52}}$
&1\\
&MLGK
&\textbf{87.94}$_{\pm \text{1.61}}$
&\textbf{62.23}$_{\pm \text{1.39}}$
&\textbf{75.86}$_{\pm \text{0.99}}$
&61.89$_{\pm \text{1.17}}$
&66.60$_{\pm \text{0.25}}$
&41.17$_{\pm \text{0.03}}$
&3\\
&FGWK\tnote{*}
&\textbf{88.13}$_{\pm \text{4.22}}$
&\textbf{62.98}$_{\pm \text{5.27}}$
&72.20$_{\pm \text{3.81}}$
&\textbf{71.48}$_{\pm \text{2.96}}$
&63.50$_{\pm \text{4.01}}$
&46.27$_{\pm \text{3.85}}$
&3\\ \hline
\multirow{4}{*}{GNNs}
&sub2vec
&60.88$_{\pm \text{9.89}}$
&59.99$_{\pm \text{6.38}}$
&54.29$_{\pm \text{5.20}}$
&45.25$_{\pm \text{2.80}}$
&55.30$_{\pm \text{1.54}}$
&36.67$_{\pm \text{0.83}}$
&0\\
&graph2vec
&83.15$_{\pm \text{9.25}}$
&60.17$_{\pm \text{6.86}}$
&72.96$_{\pm \text{1.89}}$
&\textbf{71.65}$_{\pm \text{3.10}}$
&71.10$_{\pm \text{0.54}}$
&\textbf{50.44}$_{\pm \text{0.87}}$
&2\\
&InfoGraph$_{\text{GIN}}$
&\textbf{89.13}$_{\pm \text{1.01}}$
&61.65$_{\pm \text{1.43}}$
&\textbf{74.88}$_{\pm \text{4.31}}$
&39.52$_{\pm \text{3.99}}$
&\textbf{73.90}$_{\pm \text{0.87}}$
&\textbf{49.29}$_{\pm \text{0.53}}$
&\textbf{4}\\
&InfoGraph$_{\text{DP}}$\tnote{*}
&84.28$_{\pm \text{3.94}}$
&\textbf{62.26}$_{\pm \text{4.55}}$
&73.50$_{\pm \text{2.91}}$
&\textbf{61.93}$_{\pm \text{4.64}}$
&68.50$_{\pm \text{5.07}}$
&44.79$_{\pm \text{3.33}}$
&2\\\hline
\multirow{2}{*}{FMs}
&GWF
&78.25$_{\pm \text{3.67}}$
&\textbf{61.87}$_{\pm \text{2.53}}$
&73.19$_{\pm \text{1.97}}$
&\textbf{72.11}$_{\pm \text{4.00}}$
&60.90$_{\pm \text{2.68}}$
&39.97$_{\pm \text{1.35}}$
&2\\
&GDL\tnote{*}
&78.18$_{\pm \text{2.37}}$
&60.32$_{\pm \text{1.35}}$
&74.29$_{\pm \text{3.60}}$
&\textbf{71.15}$_{\pm \text{3.19}}$
&\textbf{71.70}$_{\pm \text{1.10}}$
&\textbf{49.12}$_{\pm \text{0.49}}$
&3\\ \hline
\multirow{2}{*}{\textbf{Ours}}
&GNAE$_{\text{FGW}}$
&79.53$_{\pm\text{5.79}}$
&61.43$_{\pm\text{4.28}}$
&\textbf{75.32}$_{\pm\text{2.88}}$
&48.00$_{\pm\text{6.36}}$
&\textbf{72.50}$_{\pm\text{4.30}}$
&\textbf{47.30}$_{\pm\text{1.97}}$
&3\\
&GNAE$_{\text{RAML}}$
&79.76$_{\pm\text{3.88}}$
&\textbf{61.75}$_{\pm\text{6.29}}$
&\textbf{75.78}$_{\pm\text{3.42}}$
&50.70$_{\pm\text{4.14}}$
&\textbf{73.10}$_{\pm\text{3.75}}$
&\textbf{46.67}$_{\pm\text{3.33}}$
&\textbf{4}\\
\hline\hline
\end{tabular}
\begin{footnotesize}
\begin{tablenotes}
\item[1] The methods marked by ``*'' are implemented by ourselves.
\item[2] For each dataset, the bold numbers are the five highest accuracy (top-5 results).
\end{tablenotes}
\end{footnotesize}
\end{threeparttable}
\end{small}
\end{table}
\begin{figure}[t]
\centering
\subfigure[Training time]{
\includegraphics[height=2.7cm]{figures/runtime2.pdf}\label{fig:time}
}
\subfigure[Typical graphs and graphon factors]{
\includegraphics[height=2.7cm]{figures/factors_graphs.pdf}\label{fig:factors}
}
\caption{For the IMDB-B dataset: (a) Comparisons for FGW-based methods on their runtime. (b) Illustrations of typical graphs and the graphon factors learned by our GNAE$_{\text{RAML}}$.}
\end{figure}
Our GNAE$_{\text{RAML}}$ is more efficient than its competitors that apply FGW distance ($i.e.$, GWF, GDL, and GNAE$_{\text{FGW}}$).
Suppose that all the models contain $C$ graph (or graphon) factors with comparable sizes, denoted as $\mathcal{O}(N)$.
Given a graph with $N$ nodes and $E$ edges, the GWF reconstructs it by the FGW barycenter of its factors~\cite{xu2020gromov}, which needs to compute $C$ FGW distances iteratively.
Therefore, its computational complexity is at least $\mathcal{O}(CN^3)$.
Both the GNAE$_{\text{FGW}}$ and the GDL apply linear factorization models, so they only need to compute one FGW distance between the input and the reconstruction, whose complexity is $\mathcal{O}(|\mathcal{P}|^2N)$ and $\mathcal{O}(N^3)$, respectively.
Here, $\mathcal{P}$ is the partitions of the graphon reconstructed by the GNAE$_{\text{FGW}}$.
Proposition~\ref{prop:partition} shows that $|\mathcal{P}|\geq N$, so the GNAE$_{\text{FGW}}$ is slightly slower than the GDL.
Our GNAE$_{\text{RAML}}$ samples $I$ small graphs and computes $I$ FGW distances, each of which is a pair of two sparse matrices.
Denote the number of nodes in each small graph as $K$.
The computational complexity of our GNAE$_{\text{RAML}}$ is $\mathcal{O}(IEK)$.
Because $E\ll N^2$, $K\ll N$, and we set $I=\mathcal{O}(C)$, our GNAE$_{\text{RAML}}$ owns the lowest computational complexity.
Figure~\ref{fig:time} shows the training time per epoch of different models on the IMDB-B dataset, which verifies the analysis above: our GNAE$_{\text{RAML}}$ is about $4\times$ faster than the GDL and GNAE$_{\text{FGW}}$, and about $12\times$ faster than the GWF.
Note that because the implementation of the GDL does not support GPU computing, we test all the methods on a single core of a CPU (Core i7 2.5GHz) for fairness.
Figure~\ref{fig:factors} visualizes some typical graphs in the IMDB-B dataset and the graphon factors learned by our GNAE$_{\text{RAML}}$ ($i.e.$, $\{\tilde{g}_c\}_{c=1}^{15}$).
We can find that the IMDB-B graphs are formed by communities connected through one or two central nodes.
The graphon factors we learned reflect this topological property of the graphs, which further demonstrates the rationality of our GNAE model.
\subsection{Generalizability and transferability on social network modeling}
Our GNAE can generate graphons from graph representations and sample graphs with different sizes but similar topological structures.
Again, take the IMDB-B dataset as an example.
For this dataset, the average number of nodes per graph is 19.77.
Given a GNAE trained on this dataset, we sample graph representations from the learned prior distribution and generate graphons with the decoder of the GNAE, $i.e.$, $\hat{g}=h(\bm{z})$ with $\bm{z}\sim p_{\mathcal{Z}}$.
Based on $\hat{g}$, we sample graphs with different sizes, as shown in Figure~\ref{fig:generate}.
We can find that the generated graphs have similar structures, each containing two communities connected by a few key nodes.
Note that this topological structure is typical for the real IMDB-B graphs (as shown in Figure~\ref{fig:factors}).
This experimental result demonstrates that our GNAE has potential as a graph generator with strong generalizability, which is especially suitable for social network modeling and simulation.
\begin{figure}[t]
\centering
\includegraphics[height=2.5cm]{figures/imdb_sampling.pdf}
\caption{Illustrations of the graphs sampled from the generated graphon on the left. From left to right, the number of nodes for each graph is $20, 40, 60, 80$, respectively.}
\label{fig:generate}
\end{figure}
Another advantage of our GNAE is its transferability, which is seldom considered by existing work.
In particular, we can train a GNAE on a dataset and use it to represent the graphs in a related but different dataset.
For example, both the IMDB-B and the IMDB-M are movie collaboration datasets.
Each graph in these two datasets is an ego-network of an actor/actress, which indicates his/her collaborations with other actors/actresses~\cite{yanardag2015deep}.
The IMDB-B contains 1000 ego-networks driven by two genres (\textit{Action} and \textit{Romance}), while the IMDB-M contains 1500 ego-networks driven by three genres (\textit{Comedy}, \textit{Romance} and \textit{Sci-Fi}).
Obviously, these two datasets have different structures but share some information.
To demonstrate the transferability of our model, we first train a GNAE model on one dataset.
Then, without any fine-tuning, we leverage the model to represent the graphs in the other dataset.
Finally, we train and test an SVM classifier based on the representations and record the classification accuracy achieved by 10-fold cross-validation.
Table~\ref{tab:trans} shows the performance of our GNAE and the strongest baseline InfoGraph$_{\text{GIN}}$ in the transfer learning scenarios.
We can find that the performance of InfoGraph$_{\text{GIN}}$ drops considerably in the transfer learning setting.
In contrast, our GNAE shows good transferability: its performance only degrades slightly.
It captures the structural information shared by the two datasets, which makes the model transferable.
\begin{table}[t]
\centering
\caption{Comparison on the classification accuracy ($\%$) achieved by transfer learning}
\begin{small}
\begin{tabular}{
@{\hspace{1pt}}c|
c@{\hspace{3pt}}|
@{\hspace{3pt}}c@{\hspace{3pt}}|
@{\hspace{3pt}}c@{\hspace{3pt}}|
@{\hspace{3pt}}c@{\hspace{1pt}}}
\hline\hline
\multirow{2}{*}{Method} & \multicolumn{4}{c}{Training $\rightarrow$ Testing}\\
\cline{2-5}
&
IMDB-B $\rightarrow$ IMDB-B &
IMDB-M $\rightarrow$ IMDB-B &
IMDB-M $\rightarrow$ IMDB-M &
IMDB-B $\rightarrow$ IMDB-M \\
\hline
InfoGraph$_{\text{GIN}}$ &
73.90$_{\pm \text{0.87}}$ &
66.10$_{\pm \text{1.90}}$ &
49.29$_{\pm \text{0.53}}$ &
45.29$_{\pm \text{1.28}}$\\
GNAE$_{\text{RAML}}$ &
73.60$_{\pm \text{3.80}}$ &
70.70$_{\pm \text{3.49}}$ &
46.93$_{\pm \text{3.14}}$ &
46.20$_{\pm \text{3.50}}$\\
\hline\hline
\end{tabular}
\end{small}
\label{tab:trans}
\end{table}
\section{Conclusion and Future Work}\label{sec:conclusion}
We proposed a novel graphon autoencoder associated with an efficient learning algorithm.
It is a pioneering attempt towards an interpretable and scalable graph generative model.
Currently, the main advantages of our GNAE, $e.g.$, its generalizability and transferability, are demonstrated on social network modeling.
However, as shown in Table~\ref{tab:class}, we need to improve the GNAE model for other graph types like proteins, molecules, and more complicated heterogeneous graphs and hypergraphs.
Additionally, we will explore other potential substitutes for the FGW distance to further improve the efficiency of our learning algorithm.
\section{Algorithms for \cshitarg{H}{}}\label{sec:algo-col}
In this section we develop algorithmic upper bounds
for \cshitarg{H}{} on graphs of bounded treewidth.
We start with a simple observation that essentially reduces
the problem to $H$ being a connected graph.
\begin{lemma}\label{lem:cshit:conn}
Let $(G,\sigma)$ be a \cshitarg{H}{} instance.
Then, a set $X \subseteq V(G)$ hits all
$\sigma$-$H$-subgraphs of $G$
if and only if
there exists a connected component $C$ of $H$
such that, if we define $V_C = \sigma^{-1}(C) \subseteq V(G)$,
then $X \cap V_C$ hits all $\sigma|_{V_C}$-$C$-subgraphs of $G[V_C]$.
\end{lemma}
\begin{proof}
The right-to-left implication is immediate. In the other direction,
by contradiction, assume that for each connected component $C$
of $H$ there exists a $\sigma|_{V_C}$-$C$-subgraph $\pi_C$ in $G[V_C]$ that is not hit by $X$.
Then, as each vertex of $H$ has its own color,
$\bigcup_C \pi_C$ is a $\sigma$-$H$-subgraph in $G$ that is not hit by $X$.
\end{proof}
Hence, Lemma~\ref{lem:cshit:conn} allows us to solve the \cshitarg{H}{} problem
only for connected graphs $H$: in the general case, we solve \cshitarg{C}
on $(G[V_C],\sigma|_{V_C})$ for each connected component $C$ of $H$ and return the smallest of the obtained solutions, as sketched below.
In the remainder of this section we consider only connected graphs $H$.
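For illustration, this reduction can be phrased as the following Python sketch, where $G$ is a \texttt{networkx} graph, \texttt{solve\_connected} is an assumed solver for a connected pattern (for example, the algorithms of the theorems below), and \texttt{components} maps each connected component of $H$ to its set of colors; the interface is our own.
\begin{verbatim}
def solve_via_components(G, sigma, components, solve_connected):
    """Reduce the hitting problem for disconnected H to its connected components."""
    best = float('inf')
    for C, colors in components.items():
        # V_C = sigma^{-1}(colors of C); restrict the instance to G[V_C].
        V_C = [v for v in G.nodes if sigma[v] in colors]
        G_C = G.subgraph(V_C)
        sigma_C = {v: sigma[v] for v in V_C}
        # By the lemma, the optimum for H is the minimum over the components.
        best = min(best, solve_connected(G_C, sigma_C, C))
    return best
\end{verbatim}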
We now resolve two simple special cases: when $H$ is a path or a clique.
\begin{theorem}\label{thm:cshit:poly}
\cshitarg{H}{} is polynomial-time solvable in the case
when $H$ is a path.
\end{theorem}
\begin{proof}
Let $h = |V(H)|$ and let $a_1,a_2,\ldots,a_h$ be the consecutive vertices on the path $H$.
For an input $(G,\sigma)$ to \cshitarg{H}{}, construct an auxiliary directed graph $G'$ as follows.
Take $V(G') = V(G) \cup \{s,t\}$, where $s$ and $t$ are two new vertices.
For each $1 \leq i < h$ and every edge $uv \in E(G)$
with $\sigma(u) = a_i$ and $\sigma(v) = a_{i+1}$, add an arc $(u,v)$ to $G'$.
Moreover, for each $u \in \sigma^{-1}(a_1)$ add an arc $(s,u)$ and for each
$u \in \sigma^{-1}(a_h)$ add an arc $(u,t)$.
Observe that the family of $\sigma$-$H$-subgraphs of $G$ is in one-to-one correspondence
with directed paths from $s$ to $t$ in the graph $G'$.
Hence, to compute a set $X$ of minimum size that hits all $\sigma$-$H$-subgraphs of $G$
it suffices to compute a minimum vertex cut between $s$ and $t$ in the graph $G'$, i.e., a minimum-size set of vertices of $V(G)$ whose removal disconnects $s$ from $t$. This can be done in polynomial time via the standard vertex-splitting reduction and any maximum flow algorithm.
\end{proof}
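The reduction in the proof can be sketched with the \texttt{networkx} library as follows; vertex deletions are modelled by the standard vertex-splitting construction (each vertex becomes an arc of capacity one, all other arcs are uncapacitated and hence effectively of infinite capacity), and the function name and interface are our own.
\begin{verbatim}
import networkx as nx

def min_hitting_set_for_path(G, sigma, path_colors):
    """Minimum X hitting all sigma-H-subgraphs of G when H is a path.

    G is an undirected networkx graph, sigma maps V(G) to colors and
    path_colors = [a_1, ..., a_h] lists the colors along the path H.
    """
    rank = {c: i for i, c in enumerate(path_colors)}
    D = nx.DiGraph()
    D.add_nodes_from(['s', 't'])
    for u in G.nodes:
        D.add_edge((u, 'in'), (u, 'out'), capacity=1)  # deleting u cuts this arc
        if sigma[u] == path_colors[0]:
            D.add_edge('s', (u, 'in'))                 # no capacity = infinite
        if sigma[u] == path_colors[-1]:
            D.add_edge((u, 'out'), 't')
    for u, v in G.edges:
        for a, b in ((u, v), (v, u)):
            i, j = rank.get(sigma[a]), rank.get(sigma[b])
            if i is not None and j is not None and j == i + 1:
                D.add_edge((a, 'out'), (b, 'in'))      # consecutive colors a_i, a_{i+1}
    cut_value, (S, _) = nx.minimum_cut(D, 's', 't')
    # The cut only uses splitting arcs, i.e., it corresponds to a vertex set X.
    return {u for u in G.nodes if (u, 'in') in S and (u, 'out') not in S}
\end{verbatim}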
\begin{theorem}\label{thm:cshit:clique}
A \cshitarg{H}{} instance $(G,\sigma)$
can be solved in time $2^{\mathcal{O}(t)} |V(G)|$ in the case
when $H$ is a clique and $t$ is the treewidth of $G$.
\end{theorem}
\begin{proof}
We perform an absolutely standard dynamic programming algorithm, essentially using the folklore fact
that any clique in $G$ needs to be completely contained in a bag of the decomposition.
Recall that we may assume that we are given a nice tree decomposition
$(\ensuremath{\mathtt{T}},\beta)$ of $G$, of width \emph{less} than $t$.
For a node $w$, a \emph{state} is a set ${\widehat{X}} \subseteq \beta(w)$.
For a node $w$ and a state ${\widehat{X}}$, we say
that a set $X \subseteq \alpha(w)$ is \emph{feasible}
if $G[\gamma(w) \setminus ({\widehat{X}} \cup X)]$ does not contain any $\sigma$-$H$-subgraph.
Define $T[w,{\widehat{X}}]$ to be the minimum size of a feasible set for $w$ and ${\widehat{X}}$.
Note that $T[\mathtt{root}(\ensuremath{\mathtt{T}}),\emptyset]$ is the answer to the input
\cshitarg{H}{} instance. We now show how to compute the values $T[w,{\widehat{X}}]$
in a bottom-up manner in the tree decomposition $(\ensuremath{\mathtt{T}},\beta)$.
\medskip
\noindent\textbf{Leaf node.} Observe that
$\emptyset$ is the unique valid state for a leaf node $w$,
and $T[w,\emptyset] = 0$.
\medskip
\noindent\textbf{Introduce node.}
Consider now an introduce node $w$ with child $w'$, and a unique vertex
$v \in \beta(w) \setminus \beta(w')$.
Furthermore, consider a single state ${\widehat{X}}$ at node $w$.
If $v \in {\widehat{X}}$, then clearly a set $X$ is feasible for $w$ and ${\widehat{X}}$
if and only if it is feasible for $w'$ and ${\widehat{X}} \setminus \{v\}$.
Hence, $T[w,{\widehat{X}}] = T[w',{\widehat{X}} \setminus \{v\}]$ in this case.
Consider now the case $v \notin {\widehat{X}}$. If there is a
$\sigma$-$H$-subgraph in $G[\beta(w) \setminus {\widehat{X}}]$ then clearly no
set is feasible for $w$ and ${\widehat{X}}$, and $T[w,{\widehat{X}}] = +\infty$.
Otherwise, since $H$ is a clique and $v$ does not have any neighbor in $\alpha(w)$, there is no $\sigma$-$H$-subgraph of
$G[\gamma(w)]$ that uses both $v$ and a vertex of $\alpha(w)$.
Consequently, $T[w,{\widehat{X}}] = T[w',{\widehat{X}}]$ in the remaining case.
\medskip
\noindent\textbf{Forget node.}
Consider now a forget node $w$ with child $w'$,
and a unique vertex $v \in \beta(w') \setminus \beta(w)$.
Let ${\widehat{X}} \subseteq \beta(w)$ be any state.
We claim that $T[w,{\widehat{X}}] = \min(T[w',{\widehat{X}}], 1+T[w',{\widehat{X}} \cup \{v\}])$.
In one direction, it suffices to observe that, for any set $X$ feasible for $w$ and ${\widehat{X}}$,
if $v \in X$, then $X \setminus \{v\}$ is feasible
for $T[w',{\widehat{X}} \cup \{v\}]$, and otherwise, if $v \notin X$,
then $X$ is feasible for $T[w',{\widehat{X}}]$.
In the other direction, note that any feasible set for $w'$ and ${\widehat{X}}$
is also feasible for $w$ and ${\widehat{X}}$, whereas for every feasible set $X$ for $w'$ and ${\widehat{X}} \cup \{v\}$,
$X \cup \{v\}$ is feasible for $w$ and ${\widehat{X}}$.
\medskip
\noindent\textbf{Join node.}
Let $w$ be a join node with children $w_1$ and $w_2$, and let ${\widehat{X}}$ be a state for $w$.
We claim that $T[w,{\widehat{X}}] = T[w_1,{\widehat{X}}] + T[w_2,{\widehat{X}}]$.
Indeed, note that, since $H$ is a clique, any $\sigma$-$H$-subgraph of $G[\gamma(w) \setminus {\widehat{X}}]$
has its image entirely contained in $\gamma(w_1)$ or entirely contained in $\gamma(w_2)$.
Consequently, if $X_i$ is feasible for $w_i$ and ${\widehat{X}}$, for $i=1,2$,
then $X_1 \cup X_2$ is feasible for $w$ and ${\widehat{X}}$. In the other direction,
it is straightforward that for every feasible set $X$ for $w$ and ${\widehat{X}}$,
$X \cap \alpha(w_i)$ is feasible for $w_i$ and ${\widehat{X}}$, $i=1,2$.
\end{proof}
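A compact, memoized Python sketch of this dynamic program is given below. The representation of the nice tree decomposition as nested dictionaries and all function names are our own choices for illustration; the rainbow-clique test inside a bag is done by brute force, which is acceptable since bags have size at most $t$.
\begin{verbatim}
import itertools

def solve_clique_hitting(G, sigma, clique_colors, root):
    """DP over a nice tree decomposition when H is a clique, one vertex per color.

    `root` is the root of a nice tree decomposition, given as nested dicts
    with keys 'type' ('leaf'/'introduce'/'forget'/'join'), 'bag' (frozenset
    of vertices), 'vertex' (for introduce/forget) and 'children'.
    """
    colors = set(clique_colors)

    def has_colored_clique(vertices):
        # Is there a sigma-H-subgraph inside `vertices`?  Since H is a clique
        # with distinct colors, we search for a rainbow clique of size |colors|.
        cands = [v for v in vertices if sigma[v] in colors]
        for comb in itertools.combinations(cands, len(colors)):
            if ({sigma[v] for v in comb} == colors and
                    all(G.has_edge(a, b)
                        for a, b in itertools.combinations(comb, 2))):
                return True
        return False

    memo = {}

    def T(node, X_hat):
        key = (id(node), X_hat)
        if key in memo:
            return memo[key]
        kind = node['type']
        if kind == 'leaf':
            val = 0
        elif kind == 'introduce':
            v, child = node['vertex'], node['children'][0]
            if v in X_hat:
                val = T(child, X_hat - {v})
            elif has_colored_clique(node['bag'] - X_hat):
                val = float('inf')
            else:
                val = T(child, X_hat)
        elif kind == 'forget':
            v, child = node['vertex'], node['children'][0]
            val = min(T(child, X_hat), 1 + T(child, X_hat | {v}))
        else:  # join node
            c1, c2 = node['children']
            val = T(c1, X_hat) + T(c2, X_hat)
        memo[key] = val
        return val

    return T(root, frozenset())
\end{verbatim}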
We now move to the proof of Theorem~\ref{thm:intro:cshit:algo}.
\begin{theorem}[Theorem~\ref{thm:intro:cshit:algo} restated]\label{thm:cshit:algo}
A \cshitarg{H}{} instance $(G,\sigma)$
can be solved in time $2^{\mathcal{O}(t^{\mu(H)})} |V(G)|$
in the case when $H$ is connected and is not a clique,
where $t$ is the treewidth of $G$.
\end{theorem}
\begin{proof}
Recall that we may assume that we are given a nice tree decomposition
$(\ensuremath{\mathtt{T}},\beta)$ of $G$, of width \emph{less} than $t$,
and a labeling $\Lambda: V(G) \to \{1,2,\ldots,t\}$ that is injective on each bag.
We now define a state that will be used in the dynamic programming algorithm.
A \emph{state} at node $w \in V(\ensuremath{\mathtt{T}})$ is a pair $({\widehat{X}},\mathbb{C})$
where ${\widehat{X}} \subseteq \beta(w)$ and $\mathbb{C}$ is a family of separator $t$-chunks,
where each chunk $\mathbf{c}$ in $\mathbb{C}$:
\begin{enumerate}
\item uses only labels of $\Lambda^{-1}(\beta(w) \setminus {\widehat{X}})$;
\item the mapping $\pi: \ensuremath{\partial} \mathbf{c} \to \beta(w) \setminus {\widehat{X}}$ that
maps a vertex of $\ensuremath{\partial} \mathbf{c}$ to a vertex with the same label is a homomorphism
of $H[\ensuremath{\partial} \mathbf{c}]$ into $G$ (in particular, it respects colors).
\end{enumerate}
Observe that, as $|\ensuremath{\partial} \mathbf{c}| \leq \mu(H)$ for any separator chunk $\mathbf{c}$,
there are $\mathcal{O}(t^{\mu(H)})$ possible separator $t$-chunks, and hence
$2^{\mathcal{O}(t^{\mu(H)})}$ possible states for a fixed node $w$.
The intuitive idea behind a state is that, for node $w \in V(\ensuremath{\mathtt{T}})$ and state $({\widehat{X}},\mathbb{C})$
we investigate the possibility of the following: for a solution $X$ we are looking for, it holds that
${\widehat{X}} = X \cap \beta(w)$ and
the family $\mathbb{C}$ is exactly the set of possible separator
chunks of $H$ that are subgraphs of $G \setminus X$, where the subgraph relation is defined as on
$t$-boundaried graphs and $G \setminus X$ is equipped with $\ensuremath{\partial} G \setminus X = \beta(w) \setminus X$
and labeling $\Lambda|_{\beta(w) \setminus X}$.
The difficult part of the proof is to show that this information is sufficient, in particular,
it suffices to keep track only of separator chunks, and not all chunks of $H$.
We emphasize here that the intended meaning of the set $\mathbb{C}$ is that it represents
separator chunks present in the entire $G \setminus X$, not $G[\gamma(w)] \setminus X$.
That is, to be able to limit ourselves only to separator chunks, we need to encode some prediction
for the future in the state. This makes our dynamic programming algorithm
rather non-standard.
Let us proceed to the formal definition of the dynamic programming table.
For a bag $w$ and a state $\mathbf{s} = ({\widehat{X}},\mathbb{C})$ at $w$ we define the graph $G(w,\mathbf{s})$
as follows. We first take the $t$-boundaried graph $(G[\gamma(w)] \setminus {\widehat{X}}, \Lambda|_{\beta(w) \setminus {\widehat{X}}})$,
and then, for each chunk $\mathbf{c} \in \mathbb{C}$ we add a disjoint copy of $\mathbf{c}$ to $G(w,\mathbf{s})$ and identify the pairs
of vertices with the same label in $\ensuremath{\partial} \mathbf{c}$ and in $\beta(w) \setminus {\widehat{X}}$.
Note that $G[\gamma(w)] \setminus {\widehat{X}}$ is an induced subgraph of $G(w,\mathbf{s})$: by the properties
of elements of $\mathbb{C}$, no new edge has been introduced between two vertices of $\beta(w)$.
We make $G(w,\mathbf{s})$ a $t$-boundaried graph in a natural way: $\ensuremath{\partial} G(w,\mathbf{s}) = \beta(w) \setminus {\widehat{X}}$
with labeling $\Lambda|_{\ensuremath{\partial} G(w,\mathbf{s})}$.
For each bag $w$ and for each state $\mathbf{s} = ({\widehat{X}},\mathbb{C})$
we say that a set $X \subseteq \alpha(w)$ is \emph{feasible} if
$G(w,\mathbf{s}) \setminus X$ does not contain any $\sigma$-$H$-subgraph, and for
any separator $t$-chunk $\mathbf{c}$ of $H$, if there is a $\mathbf{c}$-subgraph in
$G(w,\mathbf{s}) \setminus X$ then $\mathbf{c} \in \mathbb{C}$.
We would like to compute the value $T[w,\mathbf{s}]$ that equals a minimum size of a feasible set $X$.
We remark that a reverse implication to the one in the definition of a feasible set $X$
(i.e., all chunks of $\mathbb{C}$ are present in $G(w,\mathbf{s}) \setminus X$)
is straightforward for any $X$:
we have explicitly glued all chunks of $\mathbb{C}$ into $G(w,\mathbf{s})$.
Hence, $T[w,\mathbf{s}]$ asks for a minimum
size of set $X$ whose deletion not only deletes all $\sigma$-$H$-subgraphs,
but also makes $\mathbb{C}$ a ``fixed point'' of an operation
of gluing $G[\gamma(w)] \setminus (X \cup {\widehat{X}})$ along the boundary $\beta(w) \setminus {\widehat{X}}$.
Observe that, if $\beta(w) = \emptyset$, there is only one state $(\emptyset,\emptyset)$ valid
for the node $w$. Hence, $T[\texttt{root}(\ensuremath{\mathtt{T}}),(\emptyset,\emptyset)]$ is the
answer to \cshitarg{H}{} on $(G,\sigma)$.
In the rest of the proof we focus on computing values $T[w,\mathbf{s}]$ in a bottom-up
fashion in the tree $\ensuremath{\mathtt{T}}$.
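Before the case analysis, the following Python-style sketch fixes the overall control flow of the computation.
It is an illustration only: the four transition routines are placeholders for the computations described in the subsequent paragraphs, and all the names are ours; only the bottom-up traversal and the final lookup are meant literally.
\begin{verbatim}
# Sketch of the bottom-up dynamic programming over the nice tree
# decomposition T.  The routines below are placeholders; the text
# describes how each value is actually computed.

def states(w):                    raise NotImplementedError  # pairs (X_hat, C) at w
def introduce_value(table, w, s): raise NotImplementedError
def forget_value(table, w, s):    raise NotImplementedError
def join_value(table, w, s):      raise NotImplementedError

def solve(nodes_in_postorder, root):
    # nodes_in_postorder: nodes of T listed children-first
    table = {}
    for w in nodes_in_postorder:
        for s in states(w):
            if w.kind == "leaf":           # empty bag: only the state (empty, empty)
                table[w, s] = 0
            elif w.kind == "introduce":
                table[w, s] = introduce_value(table, w, s)
            elif w.kind == "forget":
                table[w, s] = forget_value(table, w, s)
            else:                          # join node
                table[w, s] = join_value(table, w, s)
    return table[root, (frozenset(), frozenset())]
\end{verbatim}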
\medskip
\noindent\textbf{Leaf node.} As already observed, there is only one valid state for empty bags.
Hence, for each leaf node $w$, we may set $T[w,(\emptyset,\emptyset)] = 0$.
\medskip
\noindent\textbf{Introduce node.} Consider now an introduce node $w$ with child $w'$, and the unique vertex
$v \in \beta(w) \setminus \beta(w')$.
Furthermore, consider a single state $\mathbf{s} = ({\widehat{X}},\mathbb{C})$ at node $w$.
There are two cases, depending on whether $v \in {\widehat{X}}$ or not.
Observe that if $v \in {\widehat{X}}$ then $\mathbf{s}' = ({\widehat{X}} \setminus \{v\}, \mathbb{C})$ is a valid state
for the node $w'$ and, moreover, $G(w,\mathbf{s}) = G(w',\mathbf{s}')$.
Hence, $T[w,\mathbf{s}] = T[w',\mathbf{s}']$.
The second case, when $v \notin {\widehat{X}}$, is more involved.
Let $\mathbb{C}' \subseteq \mathbb{C}$ be the set of all those chunks in $\mathbb{C}$
that do not use the label $\Lambda(v)$. Observe that $\mathbf{s}' = ({\widehat{X}},\mathbb{C}')$ is a valid
state for the node $w'$. In what follows we prove that $T[w,\mathbf{s}] = T[w',\mathbf{s}']$
unless some corner case happens.
Consider first the graph $G^\circ := G(w,\mathbf{s}) \setminus \alpha(w)$ (i.e., we glue all chunks
of $\mathbb{C}$, but only to $G[\beta(w) \setminus {\widehat{X}}]$ instead of
$G[\gamma(w) \setminus {\widehat{X}}]$; this corresponds to taking the deletion set $X$ as large as possible) with $\ensuremath{\partial} G^\circ = \beta(w)\setminus {\widehat{X}}$.
If there exists a $\sigma$-$H$-subgraph of $G^\circ$
or a separator $t$-chunk $\mathbf{c}$ that is a subgraph
of $G^\circ$, but does not belong to $\mathbb{C}$, then clearly there is no
feasible set $X$ for $w$ and $\mathbf{s}$.
In this case we set $T[w,\mathbf{s}] = +\infty$.
Observe that the graph $G^\circ$ has size $t^{\mathcal{O}(\mu(H))}$, and hence we can check
if the aforementioned corner case happens in time polynomial in $t$.
We now show that in the remaining case $T[w,\mathbf{s}] = T[w',\mathbf{s}']$.
Consider first a feasible set $X$ for node $w$ and state $\mathbf{s}$.
We claim that $X$ is also feasible for $w'$ and $\mathbf{s}'$; note that $\alpha(w) = \alpha(w')$.
Observe that $G(w',\mathbf{s}')$ is a subgraph of $G(w,\mathbf{s})$, thus, in particular, $G(w',\mathbf{s}') \setminus X$ does not contain any $\sigma$-$H$-subgraph.
Let $\mathbf{c}$ be any separator $t$-chunk that is a subgraph of $G(w',\mathbf{s}') \setminus X$.
By the previous argument, we have that $\mathbf{c}$ is a subgraph of $G(w,\mathbf{s}) \setminus X$ as well.
Since $X$ is feasible for $w$ and $\mathbf{s}$, we have that $\mathbf{c} \in \mathbb{C}$. Since $\mathbf{c}$ is a subgraph of $G(w',\mathbf{s}') \setminus X$, it does not use the label $\Lambda(v)$
and we infer $\mathbf{c} \in \mathbb{C}'$. Thus, $X$ is feasible for $w'$ and $\mathbf{s}'$, and, consequently, $T[w,\mathbf{s}] \geq T[w',\mathbf{s}']$.
In the other direction, consider a feasible set $X$ for node $w'$ and state $\mathbf{s}'$.
We would like to show that $X$ is feasible for $w$ and $\mathbf{s}$.
We start with the following structural observation about minimal separators.
\begin{myclaim}\label{cl:H-sep}
Let $\mathbf{c}$ be a separator chunk in $H$.
Let $a_1,a_2 \in \mathrm{int} \mathbf{c}$ and assume $Z \subseteq V(\mathbf{c})$ is an $a_1a_2$-separator in $\mathbf{c}$.
Then there exists a separator chunk $\mathbf{c}'$ that (a) contains $a_1$ or $a_2$ in its interior, (b) whose vertex set is a proper subset of the vertex set of $\mathbf{c}$,
and (c) such that $\ensuremath{\partial} \mathbf{c}' \subseteq Z \cup \ensuremath{\partial} \mathbf{c}$.
\end{myclaim}
\begin{proof}
Since $\mathbf{c}$ is a separator chunk, there exists a connected component $B$ of $H \setminus \ensuremath{\partial} \mathbf{c}$ that
is vertex-disjoint with $\mathbf{c}$, and such that $N_H(B) = \ensuremath{\partial} \mathbf{c}$.
Since $Z$ is an $a_1a_2$-separator in $\mathbf{c}$, and $a_1,a_2 \in \mathrm{int} \mathbf{c}$, we have that $Z' := Z \cup \ensuremath{\partial} \mathbf{c}$ is an $a_1a_2$-separator in $H$.
Then we may find a set $S \subseteq Z'$ that is a minimal $a_1a_2$-separator in $H$. For $i=1,2$, let $A_i$ be the connected component of $H \setminus S$ that contains $a_i$.
Since $A_1$ and $A_2$ are vertex-disjoint, and $H[B]$ is connected, $B$ can intersect at most one of these sets.
Without loss of generality we may assume $A_1 \cap B = \emptyset$.
As $N_H(B) = \ensuremath{\partial} \mathbf{c}$, we infer that $A_1 \cap \ensuremath{\partial} \mathbf{c} = \emptyset$ and, consequently, $A_1 \subseteq \mathrm{int} \mathbf{c}$.
Moreover, $a_2 \notin N_H[A_1]$, and hence $A_1 \subsetneq \mathrm{int} \mathbf{c}$.
Since $N_H(A_1) = S$, $\mathbf{c}' := \mathbf{c}[A_1]$ is a separator chunk that satisfies all the requirements of the claim.
\renewcommand{\qed}{\cqedsymbol}\maybeqed\end{proof}
We first prove the condition for feasibility with respect to separator $t$-chunks.
\begin{myclaim}\label{cl:introduce-chunk}
For any separator $t$-chunk $\mathbf{c}$, if there exists a $\sigma$-$\mathbf{c}$-subgraph in $G(w,\mathbf{s}) \setminus X$, then $\mathbf{c} \in \mathbb{C}$.
\end{myclaim}
\begin{proof}
By contradiction, assume now that there exists a separator $t$-chunk $\mathbf{c}$ such that $\mathbf{c} \notin \mathbb{C}$,
but there exists a $\sigma$-$\mathbf{c}$-subgraph $\pi$ in $G(w,\mathbf{s}) \setminus X$.
Without loss of generality assume that $\mathbf{c}$ has the minimum possible number of vertices, and, for fixed $\mathbf{c}$,
the image of $\pi$ contains the minimum possible number of vertices of $\alpha(w) \cup (V(G(w,\mathbf{s})) \setminus V(G(w',\mathbf{s}')))$.
Since $\mathbf{c}$ is a separator chunk, there exists a connected component $B$ of $H \setminus \ensuremath{\partial} \mathbf{c}$ that is vertex-disjoint
with $\mathbf{c}$ and such that $N_H(B) = \ensuremath{\partial} \mathbf{c}$. Let $b \in B$ be any vertex.
If the image of $\pi$ does not contain any vertex of $\alpha(w)$, then $\pi$ is a $\sigma$-$\mathbf{c}$-subgraph in $G^\circ$, and $\mathbf{c} \in \mathbb{C}$ by the previous steps.
Hence, there exists a vertex $a_1 \in \mathrm{int} \mathbf{c}$ such that $\pi(a_1) \in \alpha(w)$.
Let $A_1$ be the connected component of $H[\pi^{-1}(\alpha(w))]$ that contains $a_1$.
Observe that $N_H(A_1)$ separates $a_1$ from $b$ in $H$ and $N_H(A_1) \subseteq V(\mathbf{c})$.
Hence, in $H$ there exists a minimal $a_1b$-separator $S \subseteq N_H(A_1)$. Let $A_1'$ be the connected
component of $H \setminus S$ that contains $A_1$. Since $N_H(B) = \ensuremath{\partial} \mathbf{c}$ and $S \subseteq N_H(A_1)\subseteq V(\mathbf{c})$, we infer that $A_1' \subseteq \mathrm{int} \mathbf{c}$.
By minimality of $S$ we have $N_H(A_1') = S$, and hence $\mathbf{c}' = \mathbf{c}[A_1']$ is a separator
chunk. Moreover, $\pi(N_H(A_1'))=\pi(S) \subseteq \pi(N_H(A_1)) \subseteq \beta(w')$, and we may equip $\mathbf{c}'$
with a labeling $x \mapsto \Lambda(\pi(x))$ for any $x \in S$, constructing a separator $t$-chunk.
Assume first $\mathbf{c}' \neq \mathbf{c}$. Then $\mathbf{c}'$ has strictly fewer vertices than $\mathbf{c}$ (as $A_1' \subseteq \mathrm{int} \mathbf{c}$).
By the choice of $\mathbf{c}$, we have $\mathbf{c}' \in \mathbb{C}$.
Since $v$ does not have any neighbors in $\alpha(w)$, we have that $v\notin \pi(N_H(A_1))$, so in particular $v\notin \pi(S)$. Hence in fact $\pi(S) \subseteq \beta(w')$, and the label $\Lambda(v)$ is not used in $\mathbf{c}'$. Therefore $\mathbf{c}' \in \mathbb{C}'$.
Consequently, we can modify $\pi$
by remapping the vertices of $A_1'$ to the copy of $\mathbf{c}'$ that has been glued into $G(w',\mathbf{s}')$, obtaining
a $\mathbf{c}$-subgraph of $G(w,\mathbf{s}) \setminus X$ with strictly fewer vertices in $\alpha(w) \cup (V(G(w,\mathbf{s})) \setminus V(G(w',\mathbf{s}')))$, a contradiction. Note here that the color constraints ensure that the vertices we remap $A_1'$ to are not used by $\pi$, and thus the remapped $\pi$ is still injective.
We are left with the case $\mathbf{c}' = \mathbf{c}$, that is, $A_1' = \mathrm{int} \mathbf{c}'$. In particular, this implies $v \notin \pi(\ensuremath{\partial} \mathbf{c})$.
If the image of $\pi$ does not contain any vertex of $V(G(w,\mathbf{s})) \setminus V(G(w',\mathbf{s}'))$, then
$\pi$ is a $\mathbf{c}$-subgraph in $G(w',\mathbf{s}') \setminus X$
and, by the feasibility of $X$ for $w'$ and $\mathbf{s}'$, we have $\mathbf{c} \in \mathbb{C}' \subseteq \mathbb{C}$, a contradiction.
Hence, there exists a vertex $a_2 \in V(\mathbf{c})$ such that $\pi(a_2) \notin V(G(w',\mathbf{s}'))$.
Observe that $Z := \pi^{-1}(\beta(w'))\cap V(\mathbf{c})$ is a separator between $a_1$ and $a_2$ in $\mathbf{c}$. We apply Claim~\ref{cl:H-sep} and obtain a chunk $\mathbf{c}''$.
Note that $\pi(\ensuremath{\partial} \mathbf{c}'') \subseteq \beta(w')$, as $v \notin \pi(\ensuremath{\partial} \mathbf{c})$.
Hence, we may treat $\mathbf{c}''$ as a separator $t$-chunk with labeling $x \mapsto \Lambda(\pi(x))$,
and a restriction of $\pi$ is a $\mathbf{c}''$-subgraph in $G(w,\mathbf{s}) \setminus X$.
Since $\mathbf{c}''$ has strictly fewer vertices than $\mathbf{c}$, by the choice of $\mathbf{c}$ we have $\mathbf{c}'' \in \mathbb{C}$.
Moreover, $v \notin \pi(Z \cup \ensuremath{\partial} \mathbf{c})$ and, consequently, $\Lambda(v)$ is not used in $\mathbf{c}''$ and $\mathbf{c}'' \in \mathbb{C}'$.
Hence, we may modify $\pi$ by remapping all vertices of $\mathrm{int} \mathbf{c}''$ to the copy of $\mathbf{c}''$, glued onto $\beta(w')$ in the process of constructing $G(w',\mathbf{s}')$; again, the color constraints ensure that this remapping preserves injectivity of $\pi$.
In this manner we obtain
a $\sigma$-$\mathbf{c}$-subgraph of $G(w,\mathbf{s}) \setminus X$ with strictly fewer vertices in $\alpha(w) \cup (V(G(w,\mathbf{s})) \setminus V(G(w',\mathbf{s}')))$, a contradiction.
\renewcommand{\qed}{\cqedsymbol}\maybeqed\end{proof}
We now move to the second property of a feasible set.
\begin{myclaim}\label{cl:introduce-H}
There are no $\sigma$-$H$-subgraphs in $G(w,\mathbf{s})\setminus X$.
\end{myclaim}
\begin{proof}
By contradiction, assume there exists a $\sigma$-$H$-subgraph $\pi$ in $G(w,\mathbf{s}) \setminus X$.
Without loss of generality pick $\pi$ that minimizes the number of vertices of $\pi(V(H))$ that belong to $\alpha(w)$.
As $G^\circ$ does not contain any $\sigma$-$H$-subgraph, there exists $a \in V(H)$ such that $\pi(a) \in \alpha(w)$.
Since $X$ is feasible for $w'$ and $\mathbf{s}'$, there exists $b \in V(H)$ such that $\pi(b) \notin V(G(w',\mathbf{s}'))$.
Observe that $\beta(w') \setminus {\widehat{X}}$ separates $\alpha(w) \setminus X$ from $G(w,\mathbf{s}) \setminus \gamma(w')$ in the graph $G(w,\mathbf{s}) \setminus X$.
Hence, there exists a minimal $ab$-separator $S$ in $H$ such that $\pi(S) \subseteq \beta(w')$. Since $H$ is connected, $S \neq \emptyset$.
Let $A$ be the connected component of $H \setminus S$ that contains $a$.
Note that $\mathbf{c}[A]$ is a separator chunk in $H$ with $\ensuremath{\partial} \mathbf{c}[A] = N_H(A) = S$.
Define $\lambda: S \to \{1,2,\ldots,t\}$ as $\lambda(x) = \Lambda(\pi(x))$ for any $x \in S$.
With this labeling, $\mathbf{c}[A]$ becomes a separator $t$-chunk, which we denote by $\mathbf{c}$; then $\pi|_{N_H[A]}$ is a $\sigma$-$\mathbf{c}$-subgraph in $G(w,\mathbf{s}) \setminus X$.
By Claim~\ref{cl:introduce-chunk}, $\mathbf{c} \in \mathbb{C}$.
Hence, we can modify $\pi$ by remapping all vertices of $\mathrm{int} \mathbf{c}$ to the copy of $\mathbf{c}$ that has been
glued into $G(w,\mathbf{s})$ in the process of its construction; again, the color constraints ensure that this remapping preserves injectivity of $\pi$.
In this manner we obtain a $\sigma$-$H$-subgraph in $G(w,\mathbf{s}) \setminus X$ with strictly fewer vertices of $\pi(V(H)) \cap \alpha(w)$, a contradiction to the choice of $\pi$.
\renewcommand{\qed}{\cqedsymbol}\maybeqed\end{proof}
Claims~\ref{cl:introduce-chunk} and~\ref{cl:introduce-H} conclude the proof of the correctness of computations at an introduce node.
\medskip
\noindent\textbf{Forget node.}
Consider now a forget node $w$ with child $w'$, and a unique vertex $v \in \beta(w') \setminus \beta(w)$.
Let $\mathbf{s} = ({\widehat{X}},\mathbb{C})$ be a state for $w$; we are to compute $T[w,\mathbf{s}]$.
We shall identify a (relatively small) family $\mathbb{S}$ of states for $w'$ such that
\begin{enumerate}
\item for any $\mathbf{s}' = ({\widehat{X}}',\mathbb{C}') \in \mathbb{S}$ and for any set $X'$ feasible for $w'$ and $\mathbf{s}'$, the set $X' \cup (\{v\} \cap {\widehat{X}}')$ is feasible for $w$ and $\mathbf{s}$;
\item for any set $X$ feasible for $w$ and $\mathbf{s}$, there exists $\mathbf{s}' = ({\widehat{X}}',\mathbb{C}') \in \mathbb{S}$ such that $X \cap \{v\} = {\widehat{X}}' \cap \{v\}$ and
$X \setminus \{v\}$ is feasible for $w'$ and $\mathbf{s}'$.
\end{enumerate}
Using such a claim, we may conclude that
$$T[w,\mathbf{s}] = \min_{\mathbf{s}' = ({\widehat{X}}',\mathbb{C}') \in \mathbb{S}} T[w',\mathbf{s}'] + |{\widehat{X}}' \cap \{v\}|.$$
We now proceed to the construction of $\mathbb{S}$.
First, observe that $\mathbf{s}' := ({\widehat{X}} \cup \{v\}, \mathbb{C})$ is a valid state for $w'$ and, moreover, $G(w',\mathbf{s}') = G(w,\mathbf{s}) \setminus \{v\}$.
Hence, for any $X' \subseteq \alpha(w')$ we have that $X'$ is feasible for $\mathbf{s}'$ if and only if $X' \cup \{v\}$ is feasible for $\mathbf{s}$.
Thus, it is safe to include $\mathbf{s}'$ in the family $\mathbb{S}$ (it satisfies the first property of $\mathbb{S}$) and it fulfils the second property
for all feasible sets $X$ containing $v$.
Second, we add to $\mathbb{S}$ all states $\mathbf{s}' = ({\widehat{X}}',\mathbb{C}')$ for the node $w'$ that satisfy the following: ${\widehat{X}}' = {\widehat{X}}$
and $\mathbb{C}$ is exactly the family of those chunks $\mathbf{c}\in \mathbb{C}'$ that do not use the label $\Lambda(v)$.
Observe that for any such state, $G(w',\mathbf{s}')$ is a supergraph of $G(w,\mathbf{s})$, with the only difference
that in $G(w',\mathbf{s}')$ the vertex $v$ receives label $\Lambda(v)$. We infer that any set $X'$ that is feasible for $w'$ and $\mathbf{s}'$
is also feasible for $w$ and $\mathbf{s}$.
This finishes the description of the family $\mathbb{S}$. It remains to argue that for every set $X$ that is feasible for $w$ and $\mathbf{s}$ and does not contain $v$,
there exists a state $\mathbf{s}'$ added in the second step such that $X$ is feasible for $w'$ and $\mathbf{s}'$.
To this end, for a fixed such $X$ define $\mathbb{C}'$ as follows: a separator $t$-chunk $\mathbf{c}$ belongs to $\mathbb{C}'$ if and only
if there is a $\mathbf{c}$-subgraph in a $t$-boundaried graph $G^1 := (G(w,\mathbf{s}) \setminus X, \Lambda|_{\beta(w') \setminus {\widehat{X}}})$.
We emphasize that this definition
differs from the standard definition of a $t$-boundaried graph $G(w,\mathbf{s}) \setminus X$ on the vertex $v$: $v \in \ensuremath{\partial} G^1$ and it has label $\Lambda(v)$.
Observe that, since $G^1$ differs from $G(w,\mathbf{s}) \setminus X$ (treated as a $t$-boundaried graph) only on the labeling of $v$,
we have that $\mathbb{C}$ is exactly the family of chunks of $\mathbb{C}'$ that do not use the label $\Lambda(v)$.
Moreover, note that $\mathbf{s}' := ({\widehat{X}},\mathbb{C}')$ is a valid state for $w'$, and hence $\mathbf{s}' \in \mathbb{S}$.
We now argue that $X$ is feasible for $w'$ and $\mathbf{s}'$.
To this end, consider any (possibly $t$-boundaried) subgraph $H'$ of $H$ and a $\sigma$-$H'$-subgraph $\pi$ in $G(w',\mathbf{s}') \setminus X$.
Observe that $G^1$ is a subgraph of $G(w',\mathbf{s}') \setminus X$, as $\mathbb{C} \subseteq \mathbb{C}'$.
Pick $\pi$ that minimizes the number of vertices of $\pi(V(H'))$ that do not belong to $V(G^1)$.
We claim that there is no such vertex.
Assume otherwise, and let $a \in V(H')$ be such that $\pi(a) \notin V(G^1)$. Thus, $\pi(a)$ lies in the interior
of some chunk $\mathbf{c} \in \mathbb{C}'$ that was glued onto $\beta(w')$ in the process of constructing $G(w',\mathbf{s}')$.
By the definition of $\mathbb{C}'$, there exists a $\sigma$-$\mathbf{c}$-subgraph $\pi_\mathbf{c}$ in $G^1$.
Define $\pi'$ as follows: $\pi'(c) = \pi_\mathbf{c}(c)$ if $c$ belongs to $\mathbf{c}$, and $\pi'(c) = \pi(c)$ otherwise. Again, the color constraints ensure that $\pi'$ is injective.
Observe that $\pi'$ is also a $\sigma$-$H'$-subgraph in $G(w',\mathbf{s}') \setminus X$ with a strictly smaller number of vertices in $\pi'(V(H')) \setminus V(G^1)$, a contradiction to the choice of $\pi$.
We infer that, for any (possibly $t$-boundaried) subgraph $H'$ of $H$, there exists a $\sigma$-$H'$-subgraph $\pi$ in $G(w',\mathbf{s}') \setminus X$ if and only if it exists in $G^1$.
This implies that $X$ is feasible for $w'$ and $\mathbf{s}'$, and concludes the description of the computation in the forget node.
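The forget transition thus reduces to a minimum over the child states compatible with $\mathbf{s}$.
The sketch below restates the recurrence above in code; it is illustrative only: chunks are treated as opaque hashable objects, and \texttt{uses\_label} is a hypothetical predicate testing whether a separator $t$-chunk uses a given label.
\begin{verbatim}
# Illustrative restatement of the forget-node recurrence.  A state is a pair
# (X_hat, C): X_hat a frozenset of bag vertices, C a frozenset of separator
# t-chunks (opaque hashable objects).  uses_label(c, l) is a hypothetical
# predicate: does chunk c use label l on its boundary?

def forget_value(table, w_child, v, label_v, s, child_states, uses_label):
    X_hat, C = s
    # Case 1: the forgotten vertex v is deleted, contributing +1.
    best = 1 + table[w_child, (X_hat | {v}, C)]
    # Case 2: v is kept; a child state is compatible if discarding from its
    # chunk family all chunks that use label_v yields exactly C.
    for (X_hat_child, C_child) in child_states:
        if X_hat_child != X_hat:
            continue
        if frozenset(c for c in C_child if not uses_label(c, label_v)) == C:
            best = min(best, table[w_child, (X_hat_child, C_child)])
    return best
\end{verbatim}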
\medskip
\noindent\textbf{Join node.}
Let $w$ be a join node with children $w_1$ and $w_2$, and let $\mathbf{s} = ({\widehat{X}},\mathbb{C})$ be a state for $w$.
Observe that $\mathbf{s}$ is a valid state for $w_1$ and $w_2$ as well.
We claim that $T[w,\mathbf{s}] = T[w_1,\mathbf{s}] + T[w_2,\mathbf{s}]$.
To prove it, first consider a set $X$ feasible for $w$ and $\mathbf{s}$. Define $X_i = X \cap \alpha(w_i)$ for $i=1,2$; note that $X = X_1 \uplus X_2$.
Observe that $G(w_i,\mathbf{s}) \setminus X_i$ is a subgraph of $G(w,\mathbf{s}) \setminus X$ for every $i=1,2$.
Hence, $X_i$ is feasible for $w_i$ and $\mathbf{s}$ and, consequently, $T[w,\mathbf{s}] \geq T[w_1,\mathbf{s}] + T[w_2,\mathbf{s}]$.
In the other direction, let $X_1$ be a feasible set for $w_1$ and $\mathbf{s}$, and let $X_2$ be a feasible set for $w_2$ and $\mathbf{s}$.
We claim that $X := X_1 \cup X_2$ is feasible for $w$ and $\mathbf{s}$; note that such a claim would conclude the description of the computation at a join node.
To this end, let $\pi$ be a $\sigma$-$H'$-subgraph in $G(w,\mathbf{s}) \setminus X$, where $H'=H$ or $H'$ is some separator $t$-chunk in $H$.
In what follows we prove, by induction on $|V(H')|$,
that there exists a $\sigma$-$H'$-subgraph in $G(w_1,\mathbf{s}) \setminus X_1$ or in $G(w_2,\mathbf{s}) \setminus X_2$. Note that this claim is sufficient to prove that $X$ is feasible
for $w$ and $\mathbf{s}$.
Fix $H'$. Without loss of generality, pick $\pi$ that minimizes the number of vertices in $\pi(V(H')) \cap \alpha(w)$.
If there exists $i \in \{1,2\}$ such that $\pi(V(H'))$ does not contain any vertex of $\alpha(w_i)$, then $\pi$ is a $\sigma$-$H'$-subgraph of $G(w_{3-i},\mathbf{s}) \setminus X_{3-i}$, and we are done.
Hence, assume for each $i \in \{1,2\}$ there exists $a_i \in V(H')$ such that $\pi(a_i) \in \alpha(w_i)$.
Observe that $Z := \pi^{-1}(\beta(w))$ separates $a_1$ from $a_2$ in $H'$.
First, we focus on the case $H' = \mathbf{c}$ being a separator $t$-chunk.
For a fixed chunk $\mathbf{c}$, we apply Claim~\ref{cl:H-sep} to the vertices $a_1,a_2$ and the set $Z$, obtaining a chunk $\mathbf{c}'$.
By the inductive assumption, there exists a $\sigma$-$\mathbf{c}'$-subgraph in $G(w_i,\mathbf{s}) \setminus X_i$ for some $i=1,2$.
Hence, $\mathbf{c}' \in \mathbb{C}$, and we may modify $\pi$ by redirecting the vertices of $\mathrm{int} \mathbf{c}'$ to the copy of $\mathbf{c}'$
that has been glued into $G(w_i,\mathbf{s})$ in the process of its construction.
In this manner, we obtain a $\sigma$-$H'$-subgraph of $G(w,\mathbf{s}) \setminus X$ with strictly fewer vertices in $\alpha(w)$,
a contradiction to the choice of $\pi$.
Second, we focus on the case $H' = H$.
Let $S \subseteq Z$ be a minimal $a_1a_2$-separator in $H$.
Let $A_1$ be the connected component of $H \setminus S$ that contains $a_1$.
Note that $\mathbf{c}[A_1]$ with the labeling $x \mapsto \Lambda(\pi(x))$ for $x \in S$ is a separator $t$-chunk; denote it by $\mathbf{c}$.
Then $\pi|_{N_H[A_1]}$ is a $\sigma$-$\mathbf{c}$-subgraph in $G(w,\mathbf{s}) \setminus X$.
By the induction hypothesis, there exists a $\sigma$-$\mathbf{c}$-subgraph in $G(w_i,\mathbf{s}) \setminus X_i$ for some $i=1,2$ and, consequently, $\mathbf{c} \in \mathbb{C}$.
Modify $\pi$ by redirecting the vertices of $\mathrm{int} \mathbf{c}$ to the copy of $\mathbf{c}$ that has been glued into $G(w,\mathbf{s})$ in the process of its construction.
In this manner we obtain a $\sigma$-$H$-subgraph in $G(w,\mathbf{s}) \setminus X$ with strictly fewer vertices in $\alpha(w)$, a contradiction to the choice of $\pi$.
Thus, we have shown that $X$ is a feasible set for $w$ and $\mathbf{s}$, concluding the proof of the correctness of the computation at join node.
Since each node has $2^{\mathcal{O}(t^{\mu(H)})}$ states, and there are $\mathcal{O}(t|V(G)|)$ nodes in $\ensuremath{\mathtt{T}}$, the time bound of the computation follows.
This concludes the proof of Theorem~\ref{thm:cshit:algo}.
\end{proof}
\section{General algorithm for \shitarg{H}{}}\label{sec:std:algo}
In this section we present an algorithm for \shitarg{H}{} running in time
$2^{\mathcal{O}(t^{\mu^\star} \log t)} |V(G)|$,
where $t$ is the width of the tree decomposition we are working on.
The general idea is the natural one.
\begin{definition}[profile]
The \emph{profile} of a $t$-boundaried graph $(G,\lambda)$ is the set $\mathbb{P}^{G,\lambda}$
of all $t$-slices that are subgraphs of $(G,\lambda)$.
A \emph{$(\leq p)$-profile} of $(G,\lambda)$ is the profile of $(G,\lambda)$, restricted to all $t$-slices that have at most $p$ vertices.
\end{definition}
For each node $w$ of the tree decomposition,
for each set ${\widehat{X}} \subseteq \beta(w)$,
and for each family $\mathbb{P}$ of $t$-slices, we would like to find the minimum size
of a set $X \subseteq \alpha(w)$ such that, if we treat $G[\gamma(w)]$ as a $t$-boundaried
graph with $\ensuremath{\partial} G[\gamma(w)] = \beta(w)$ and labeling $\Lambda|_{\beta(w)}$, then
the profile of $G[\gamma(w) \setminus (X \cup {\widehat{X}})]$ is exactly $\mathbb{P}$.
However, as the number of $t$-slices can be as large as $t^{|H|}$, we have too many choices for the potential profile $\mathbb{P}$.
\subsection{The witness graph}
The essence of the proof
is to show that each ``reasonable'' choice of $\mathbb{P}$
can be encoded as a \emph{witness graph} of size essentially $\mathcal{O}(t^{\mu^\star})$.
Such a claim would give a $2^{\mathcal{O}(t^{\mu^\star} \log t)}$ bound on the number of possible
witness graphs, and provide a good bound on the size of state space.
For technical reasons, we need to slightly generalize the notion of a profile, so that
it encapsulates also the possibility of ``reserving'' a small set $Y$ of vertices from the boundary for other pieces of the graph $H$.
\begin{definition}[extended profile]
For a $t$-boundaried graph $(G,\lambda)$, \emph{an extended profile}
consists of a $(\leq |V(H)|-|Y|)$-profile $\mathbb{P}^{G,\lambda}_Y$ of the graph $(G \setminus Y,\lambda|_{\ensuremath{\partial} G \setminus Y})$ for every set $Y \subseteq \ensuremath{\partial} G$ of size at most $|V(H)|$.
\end{definition}
\begin{definition}[witness-subgraph, equivalent]
Let $(G_1,\lambda_1)$ and $(G_2,\lambda_2)$ be two $t$-boundaried graphs. We say that $(G_1,\lambda_1)$ is a \emph{witness-subgraph} of $(G_2,\lambda_2)$,
if $\ensuremath{\partial} G_1 = \ensuremath{\partial} G_2$, $\lambda_1 = \lambda_2$, and, moreover, for every $Y \subseteq \ensuremath{\partial} G_1$ of size at most $|V(H)|$ we have
$\mathbb{P}^{G_1,\lambda_1}_Y \subseteq \mathbb{P}^{G_2,\lambda_2}_Y$.
We say that two $t$-boundaried graphs are \emph{equivalent}, if one is the witness-subgraph of the other, and vice-versa (i.e., their boundaries, labelings, and extended profiles are equal).
\end{definition}
Let us emphasize that, maybe slightly counterintuitively, a witness-subgraph is not necessarily a subgraph; the `sub' term corresponds to admitting a subfamily of $t$-slices as subgraphs.
In the next lemma we show that every $t$-boundaried graph has a small equivalent one, showing us that there is only a small number
of reasonable choices for an (extended) profile in the dynamic programming algorithm.
\begin{lemma}\label{lem:witness}
Assume $H$ contains a connected component that is not a clique.
Then, for any $t$-boundaried graph $(G,\lambda)$ there exists an equivalent $t$-boundaried graph $({\widehat{G}},\lambda)$ such
that (a) $({\widehat{G}},\lambda)$ is a subgraph of $(G,\lambda)$, (b) $\ensuremath{\partial} G = \ensuremath{\partial} {\widehat{G}}$ and $G[\ensuremath{\partial} G] = {\widehat{G}}[\ensuremath{\partial} {\widehat{G}}]$, and (c)
${\widehat{G}} \setminus E({\widehat{G}}[\ensuremath{\partial} {\widehat{G}}])$ contains $\mathcal{O}(t^{\mu^\star})$ vertices and edges.
\end{lemma}
\begin{proof}
We define ${\widehat{G}}$ by a recursive procedure. We start with ${\widehat{G}} = G[\ensuremath{\partial} G]$.
Then, for every $t$-chunk $\mathbf{c} = (H',\lambda')$,
we invoke a procedure
$\mathtt{enhance}(\mathbf{c},\emptyset)$. The procedure $\mathtt{enhance}(\mathbf{c},X)$, for $X \subseteq V(G)$,
first tries to find a $\mathbf{c}$-subgraph $\pi$ in
$(G \setminus X,\lambda)$. If there is none, the procedure terminates.
Otherwise, it first adds all edges and vertices of $\pi(\mathbf{c})$ to ${\widehat{G}}$ that
are not yet present there.
Second, if $|X| < |V(H)|$, then it recursively invokes $\mathtt{enhance}(\mathbf{c}, X \cup \{v\})$
for each $v \in \pi(\mathbf{c})$.
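For concreteness, the recursion behind the construction of ${\widehat{G}}$ can be written as follows.
In this illustrative sketch the embedding search is abstracted into a caller-supplied routine \texttt{find\_embedding} (a hypothetical interface) that, given a forbidden set $X$, returns the vertex and edge sets of some $\mathbf{c}$-subgraph of $(G \setminus X, \lambda)$, or \texttt{None}; the snippet only exposes the branching structure of $\mathtt{enhance}$.
\begin{verbatim}
# Sketch of enhance(c, X).  find_embedding(X) is a caller-supplied
# (hypothetical) routine returning (vertices, edges) of some c-subgraph of
# (G \ X, lambda), or None.  h_size stands for |V(H)|; hatG_vertices and
# hatG_edges are the sets being built up for hat{G}.

def enhance(X, find_embedding, h_size, hatG_vertices, hatG_edges):
    found = find_embedding(frozenset(X))
    if found is None:
        return
    image_vertices, image_edges = found
    hatG_vertices.update(image_vertices)      # add the found copy to hat{G}
    hatG_edges.update(image_edges)
    if len(X) < h_size:
        for v in image_vertices:              # branch: forbid one more vertex
            enhance(X | {v}, find_embedding, h_size, hatG_vertices, hatG_edges)

# The construction starts from G[boundary(G)] and calls
# enhance(frozenset(), ...) once for every t-chunk c.
\end{verbatim}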
We first bound the size of the constructed graph ${\widehat{G}}$.
There are at most $2^{|V(H)|} t^{\mu^\star}$ choices for the chunk,
since a chunk $\mathbf{c}$ is defined by its vertex set, and there are at most $t^{\mu^\star}$ labelings
of its boundary.
The procedure $\mathtt{enhance}(\mathbf{c}, X)$ at each step adds at most $|V(H)|+|E(H)|$ new vertices and edges to ${\widehat{G}}$,
and branches into at most $|V(H)|$ directions. The depth of the recursion is bounded
by $|V(H)|$. Hence, in total at most $2^{|V(H)|}t^{\mu^\star}\cdot (|V(H)|+|E(H)|)\cdot |V(H)|^{|V(H)|}=\mathcal{O}(t^{\mu^\star})$ edges and vertices are added to ${\widehat{G}}$,
except for the initial graph $G[\ensuremath{\partial} G]$ (recall that we consider $H$ to be a fixed graph and the constants hidden by the big-$\mathcal{O}$ notation can depend on $H$).
It remains to argue that $({\widehat{G}},\lambda)$ is equivalent to $(G,\lambda)$. Clearly, since $({\widehat{G}},\lambda)$
is a subgraph of $(G,\lambda)$, the implication in one direction is trivial.
In the other direction, we start with the following claim.
\begin{myclaim}\label{cl:witness:chunk}
For any set $Z \subseteq V(G)$ of size at most $|V(H)|$, and for any
$t$-chunk $\mathbf{c}$,
if there exists a $\mathbf{c}$-subgraph
in $(G \setminus Z, \lambda)$ then there exists also one in $({\widehat{G}} \setminus Z,\lambda)$.
\end{myclaim}
\begin{proof}
Let $\pi$ be a $\mathbf{c}$-subgraph in $(G \setminus Z,\lambda)$.
Define $X_0 = \emptyset$. We will construct sets $X_0 \subsetneq X_1 \subsetneq \ldots$,
where $X_i \subseteq Z$ for every $i$, and
analyse the calls to the procedure $\mathtt{enhance}(\mathbf{c}, X_i)$ in the process of constructing ${\widehat{G}}$.
Assume that $\mathtt{enhance}(\mathbf{c}, X_i)$ has been invoked at some point during the construction;
clearly this is true for $X_0 = \emptyset$. Since we assume $X_i \subseteq Z$,
there exists a
$\mathbf{c}$-subgraph in $(G \setminus X_i,\lambda)$ --- $\pi$ is one such example.
Hence, $\mathtt{enhance}(\mathbf{c}, X_i)$ has found a $\mathbf{c}$-subgraph $\pi_i$, and added its image to ${\widehat{G}}$.
If $\pi_i$ is a $\mathbf{c}$-subgraph also in $({\widehat{G}} \setminus Z,\lambda)$, then we are done.
Otherwise, there exists $v_i \in Z \setminus X_i$ that is also present in the image of $\pi_i$.
In particular, since $|Z| \leq |V(H)|$, we have $|X_i| < |V(H)|$ and the call
$\mathtt{enhance}(\mathbf{c}, X_i \cup \{v_i\})$ has been invoked. We define $X_{i+1} := X_i \cup \{v_i\}$.
Since the sizes of sets $X_i$ grow at each step, for some $X_i$, $i \leq |Z|$, we reach
the conclusion that $\pi_i$ is a $\mathbf{c}$-subgraph of $({\widehat{G}} \setminus Z,\lambda)$, and the claim is proven.
\renewcommand{\qed}{\cqedsymbol}\maybeqed\end{proof}
Fix now a set $Y \subseteq V(G)$ and a $t$-slice $\mathbf{p}$ with labeling $\lambda_\mathbf{p}$
and with $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Let $\pi$
be a $\mathbf{p}$-subgraph of $(G \setminus Y,\lambda)$.
Let $A_1,A_2,\ldots,A_r$ be the connected components of $H[\mathrm{int} \mathbf{p}]$.
Define $H_i = N_H[A_i]$, and observe that each $H_i$ is a chunk
with $\ensuremath{\partial} H_i = N_H(A_i) \subseteq \ensuremath{\partial} \mathbf{p}$.
We define $\lambda_i = \lambda_\mathbf{p}|_{\ensuremath{\partial} H_i}$ to obtain a $t$-chunk $\mathbf{c}_i = (H_i,\lambda_i)$.
By the properties of a $t$-slice, each vertex of $\mathbf{p}$ is present in at least
one graph $\mathbf{c}_i$, and vertices of $\ensuremath{\partial} \mathbf{p}$ may be present in more than one.
We now inductively define injective
homomorphisms $\pi_0,\pi_1,\ldots,\pi_r$ such that
$\pi_i$ maps the subgraph of $\mathbf{p}$ induced by $\ensuremath{\partial} \mathbf{p} \cup \bigcup_{j \leq i} A_j$
to $({\widehat{G}} \setminus Y,\lambda)$, and does not use any vertex
of $\bigcup_{j > i}\pi(A_j)$. Observe that $\pi_r$ is
a $\mathbf{p}$-subgraph of $({\widehat{G}} \setminus Y,\lambda)$.
Hence, this construction will conclude the proof of the lemma.
For the base case, recall that $\pi(\ensuremath{\partial} \mathbf{p}) \subseteq \ensuremath{\partial} G = \ensuremath{\partial} {\widehat{G}}$ and define $\pi_0 = \pi|_{\ensuremath{\partial} \mathbf{p}}$.
For the inductive case, assume that
$\pi_{i-1}$ has been constructed for some $1 \leq i \leq r$.
Define
$$Z_i = Y \cup \pi(\ensuremath{\partial} \mathbf{p} \setminus \ensuremath{\partial} H_i) \cup \bigcup_{j < i}\pi_{i-1}(A_j) \cup \bigcup_{j > i} \pi(A_j).$$
Note that since $\pi$ and $\pi_{i-1}$ are injective and $Y$ is disjoint with $\pi(\ensuremath{\partial} \mathbf{p})$, then we have that $Z_i\cap \pi(\ensuremath{\partial} \mathbf{p}) = \pi(\ensuremath{\partial} \mathbf{p} \setminus \ensuremath{\partial} H_i)$.
This observation and the inductive assumption on $\pi_{i-1}$ imply that the mapping $\pi|_{V(H_i)}$ does not use any vertex of $Z_i$.
Thus, $\pi|_{V(H_i)}$ is a $\mathbf{c}_i$-subgraph in $(G \setminus Z_i, \lambda)$.
Observe moreover that $|Z_i| \leq |Y|+|V(\mathbf{p})|\leq |V(H)|$.
By Claim~\ref{cl:witness:chunk}, there exists a $\mathbf{c}_i$-subgraph $\pi_i'$ in
$({\widehat{G}} \setminus Z_i,\lambda)$. Observe that, since $\pi_i'$ and $\pi_{i-1}$
are required to preserve labelings on boundaries of their preimages,
$\pi_i := \pi_i' \cup \pi_{i-1}$ is a function and a homomorphism.
Moreover, by the definition of $Z_i$, $\pi_i$ is injective
and does not use any vertex of $\bigcup_{j > i}\pi(A_j)$.
Hence, $\pi_i$ satisfies all the required conditions, and the inductive construction is completed.
This concludes the proof of the lemma.
\end{proof}
\subsection{The dynamic programming algorithm}\label{ss:std-algo}
Using Lemma~\ref{lem:witness}, we now define states of the dynamic programming algorithm
on the input tree decomposition $(\ensuremath{\mathtt{T}},\beta)$.
For every node $w \in V(\ensuremath{\mathtt{T}})$, a \emph{state}
is a pair $\mathbf{s} = ({\widehat{X}},{\widehat{G}})$ where ${\widehat{X}} \subseteq \beta(w)$
and ${\widehat{G}}$ is a graph with $\mathcal{O}(t^{\mu^\star})$ vertices and edges
such that $\beta(w) \setminus {\widehat{X}} \subseteq V({\widehat{G}})$ and ${\widehat{G}}[\beta(w) \setminus {\widehat{X}}] = G[\beta(w) \setminus {\widehat{X}}]$.
We treat ${\widehat{G}}$ as a $t$-boundaried graph with $\ensuremath{\partial} {\widehat{G}} = \beta(w) \setminus {\widehat{X}}$
and labeling $\Lambda|_{\beta(w) \setminus {\widehat{X}}}$.
We say that
a set $X \subseteq \alpha(w)$ is \emph{feasible} for $w$ and $\mathbf{s}$ if
$(G[\gamma(w) \setminus (X \cup {\widehat{X}})], \Lambda|_{\beta(w) \setminus {\widehat{X}}})$
is a witness-subgraph of
$({\widehat{G}},\Lambda|_{\beta(w) \setminus {\widehat{X}}})$.
For every $w$ and every state $\mathbf{s}$, we would like to compute $T[w,\mathbf{s}]$,
the minimum possible size of a feasible set $X$.
Note that by Lemma~\ref{lem:witness} the answer to the input \shitarg{H}{} instance is the minimum value of $T[\mathtt{root}(\ensuremath{\mathtt{T}}),(\emptyset, {\widehat{G}})]$
where ${\widehat{G}}$ ranges over all graphs with $\mathcal{O}(t^{\mu^\star})$ vertices and edges that do not contain the $t$-slice $(H,\emptyset)$ as a subgraph.
Hence, it remains to show how to compute the values $T[w,\mathbf{s}]$ in a bottom-up manner in the tree decomposition.
Observe that, if we have two $t$-boundaried graphs $(G_1,\lambda_1)$ and $(G_2,\lambda_2)$ of size
$t^{\mathcal{O}(1)}$ each (e.g., they are parts of a state, or they were obtained from Lemma~\ref{lem:witness}),
a brute-force algorithm checks the relation of being a witness-subgraph in $t^{\mathcal{O}(1)}$ time.
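Spelled out, this brute-force test looks as follows.
The enumeration of $t$-slices and the subgraph test itself are abstracted into caller-supplied routines (\texttt{slices} and \texttt{has\_subgraph} are hypothetical interfaces, e.g.\ exhaustive search over injective label-respecting mappings); the point of the sketch is only the quantification over $Y$ and over slices.
\begin{verbatim}
from itertools import combinations

# Sketch of the brute-force test whether (G1, lambda) is a witness-subgraph of
# (G2, lambda).  'boundary' is the common boundary, 'slices' lists all t-slices
# on at most |V(H)| vertices (each with an attribute num_vertices), and
# has_subgraph(G, Y, p) is a hypothetical exhaustive subgraph test in G \ Y.

def is_witness_subgraph(G1, G2, boundary, slices, h_size, has_subgraph):
    for k in range(min(h_size, len(boundary)) + 1):
        for Y in combinations(list(boundary), k):
            Y = frozenset(Y)
            for p in slices:
                if p.num_vertices > h_size - k:
                    continue
                # every slice present in G1 \ Y must also occur in G2 \ Y
                if has_subgraph(G1, Y, p) and not has_subgraph(G2, Y, p):
                    return False
    return True
\end{verbatim}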
We start the description with the following auxiliary definition.
Let $P \subseteq V(H)$. Observe that $N_H[\mathrm{int} H[P]] \subseteq P$ and $H[N_H[\mathrm{int} H[P]]]$
is the unique inclusion-wise maximal slice with vertex set being a subset of $P$.
We call the slice $H[N_H[\mathrm{int} H[P]]]$ the \emph{core slice} of $P$, and the remaining
vertices $P \setminus N_H[\mathrm{int} H[P]]$ the \emph{peelings} of $P$.
Observe that $\mathrm{int} H[N_H[\mathrm{int} H[P]]] = \mathrm{int} H[P]$,
$\ensuremath{\partial} H[N_H[\mathrm{int} H[P]]] \subseteq \ensuremath{\partial} H[P]$, and $\ensuremath{\partial} H[P] \setminus \ensuremath{\partial} H[N_H[\mathrm{int} H[P]]]$ equals the peelings of $P$.
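The core slice and the peelings can be computed directly from the definition; the following self-contained Python snippet (an illustration only, with a toy graph of our own choosing) does exactly that for a graph stored as an adjacency dictionary.
\begin{verbatim}
# Core slice and peelings of a vertex set P in H, directly from the
# definition.  H is an adjacency dictionary: vertex -> set of neighbours.
# int H[P] consists of the vertices of P all of whose neighbours lie in P;
# the core slice is H[N_H[int H[P]]].

def core_slice_and_peelings(H, P):
    P = set(P)
    interior = {v for v in P if H[v] <= P}       # int H[P]
    core_vertices = set(interior)                # will become N_H[int H[P]]
    for v in interior:
        core_vertices |= H[v]
    peelings = P - core_vertices
    return core_vertices, peelings

# Toy example: the path a-b-c-d with P = {a, c, d}.  The interior of H[P]
# is {d}, the core slice has vertex set N_H[{d}] = {c, d}, and a is a peeling.
H = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(core_slice_and_peelings(H, {"a", "c", "d"}))   # ({'c', 'd'}, {'a'})
\end{verbatim}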
In the description, for brevity, we use $\emptyset$ to denote not only an empty set,
but also an empty graph. Moreover, for a $t$-boundaried graph $(G,\lambda)$
and a set $Y \subseteq V(G)$, we somewhat abuse the notation and write
$(G \setminus Y,\lambda)$
for the $t$-boundaried graph $(G \setminus Y,\lambda|_{\ensuremath{\partial} G \setminus Y})$.
We assume we are given a \emph{nice} tree decomposition $(\ensuremath{\mathtt{T}},\beta)$ of the
input graph $G$, and a labeling $\Lambda:V(G) \to \{1,2,\ldots,t\}$ that
is injective on every bag.
We now describe how to conduct computations in each of the four types of nodes
in the tree decomposition $(\ensuremath{\mathtt{T}},\beta)$.
In all cases, it will be clear from the description
that all computations for a single node $w$ can be done
in time polynomial in $t$ and the number of states per one node,
and hence we do not further discuss the time complexity of the algorithm.
\medskip
\noindent\textbf{Leaf node.}
It is immediate that $\emptyset$ is feasible for every leaf node $w$
and every state $\mathbf{s}$ at $w$, and hence $T[w,\mathbf{s}] = 0$ is the correct value.
\medskip
\noindent\textbf{Introduce node.}
Consider now an introduce node $w$ with child $w'$, and the unique vertex
$v \in \beta(w) \setminus \beta(w')$.
Furthermore, consider a single state $\mathbf{s} = ({\widehat{X}},{\widehat{G}})$ at node $w$;
we are to compute $T[w,\mathbf{s}]$.
First, consider the case $v \in {\widehat{X}}$.
Then, it is straightforward to observe that $\mathbf{s}' := ({\widehat{X}} \setminus \{v\}, {\widehat{G}})$
is a state for $w'$, and the families of feasible sets for state $\mathbf{s}$ and $w$
and for state $\mathbf{s}'$ and $w'$ are equal. Thus, $T[w,\mathbf{s}] = T[w',\mathbf{s}']$ and we are done.
Consider now the case $v \notin {\widehat{X}}$.
Define $\lambda := \Lambda|_{\beta(w) \setminus {\widehat{X}}}$ and $\lambda' := \Lambda|_{\beta(w') \setminus {\widehat{X}}}$.
We compute a family $\mathbb{S}$ of states at the node $w'$ and define $T[w,\mathbf{s}] = \min_{\mathbf{s}' \in \mathbb{S}} T[w',\mathbf{s}']$.
We are going to prove that
\begin{enumerate}
\item for every $\mathbf{s}' \in \mathbb{S}$, and every $X$ that is feasible for $w'$ and $\mathbf{s}'$, $X$ is also feasible for $w$ and $\mathbf{s}$;\label{p:std:intro1}
\item for every $X$ that is feasible for $w$ and $\mathbf{s}$, there exists $\mathbf{s}' \in \mathbb{S}$ such that $X$ is also feasible for $w'$ and $\mathbf{s}'$.\label{p:std:intro2}
\end{enumerate}
Observe that such properties of $\mathbb{S}$ will imply the correctness of the computation of $T[w,\mathbf{s}]$. We now proceed to the construction of $\mathbb{S}$.
We iterate through all states $\mathbf{s}' = ({\widehat{X}}',{\widehat{G}}')$ for the node $w'$ and insert $\mathbf{s}'$ into $\mathbb{S}$ if the following conditions hold.
First, we require ${\widehat{X}} = {\widehat{X}}'$. Second, we construct a graph ${\widehat{G}}'_v$ by adding the vertex $v$ to ${\widehat{G}}'$,
and making $v$ adjacent to all vertices $u \in \ensuremath{\partial} {\widehat{G}}' = \beta(w') \setminus {\widehat{X}}$ for which $vu \in E(G)$.
Observe that ${\widehat{G}}'_v [\beta(w) \setminus {\widehat{X}}] = {\widehat{G}}[\beta(w) \setminus {\widehat{X}}]= G[\beta(w) \setminus {\widehat{X}}]$, and $v$ has exactly the same neighbourhood in ${\widehat{G}}'_v$ and in ${\widehat{G}}$.
To include $\mathbf{s}'$ in $\mathbb{S}$, we require that $({\widehat{G}}'_v, \lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
We first argue about Property~\ref{p:std:intro1}.
Let $\mathbf{s}' = ({\widehat{X}}, {\widehat{G}}') \in \mathbb{S}$ and let $X$ be feasible for $w'$ and $\mathbf{s}'$; we are to prove that $X$ is also feasible for $w$ and $\mathbf{s}$.
To this end, consider a set $Y \subseteq \beta(w) \setminus {\widehat{X}}$ and a $t$-slice $\mathbf{p}$ such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Let $\pi$ be a $\mathbf{p}$-subgraph in $(G[\gamma(w) \setminus (X \cup {\widehat{X}} \cup Y)], \lambda)$.
Let $P = \pi^{-1}(\gamma(w'))$; note that in particular $v\notin \pi(P)$. Let $\mathbf{p}'$ be the core slice of $P$, and let $Q$ be the peelings of $P$.
Observe that for every $a \in \ensuremath{\partial} P = Q \cup \ensuremath{\partial} \mathbf{p}'$ we have $\pi(a) \neq v$ and, moreover, either $a \in \ensuremath{\partial} \mathbf{p}$ or $\pi(a) \in N_{G[\gamma(w)]}(v)$.
In both cases, $\pi(a) \in \beta(w') \setminus ({\widehat{X}} \cup Y)$
and we may define $\lambda_P(a) := \Lambda(\pi(a))$. Note that with the labeling $\lambda_P|_{\ensuremath{\partial} \mathbf{p}'}$, the slice $\mathbf{p}'$ becomes a $t$-slice,
and $\pi|_{V(\mathbf{p}')}$ is a $\mathbf{p}'$-subgraph of $(G[\gamma(w') \setminus (X \cup {\widehat{X}} \cup Y \cup \pi(Q))],\lambda')$.
Since $X$ is feasible for $w'$ and $\mathbf{s}'$, and $|Y| + |\pi(Q)| + |V(\mathbf{p}')| \leq |Y| + |V(\mathbf{p})| \leq |V(H)|$,
we have that there exists a $\mathbf{p}'$-subgraph $\pi'$ in $({\widehat{G}}' \setminus (Y \cup \pi(Q)),\lambda')$.
As no vertex of $Y$, $\pi(Q)$ nor $v$ belongs to the image of $\pi'$, and vertices of $Q$ are not adjacent to the vertices of $\mathrm{int} \mathbf{p}'$ in $H$,
a direct check shows that $\pi' \cup \pi|_{V(\mathbf{p}) \setminus V(\mathbf{p}')}$ is a $\mathbf{p}$-subgraph of $({\widehat{G}}'_v \setminus Y,\lambda)$.
Recall that we require that $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
Consequently, there exists a $\mathbf{p}$-subgraph of $({\widehat{G}} \setminus Y,\lambda)$.
Since the choice of $Y$ and $\mathbf{p}$ was arbitrary, $X$ is feasible for $w$ and $\mathbf{s}$.
For Property~\ref{p:std:intro2}, let $X$ be a feasible set for $w$ and $\mathbf{s}$.
Let ${\widehat{G}}'$ be the witness graph, whose existence is guaranteed by Lemma~\ref{lem:witness} for the graph $(G[\gamma(w') \setminus (X \cup {\widehat{X}})], \lambda')$.
Define $\mathbf{s}' = ({\widehat{X}}, {\widehat{G}}')$. Observe that $\mathbf{s}'$ is a valid state for $w'$.
By definition of ${\widehat{G}}'$, the set $X$ is feasible for $w'$ and $\mathbf{s}'$.
It remains to show that $\mathbf{s}' \in \mathbb{S}$, that is, that $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
To this end, consider a set $Y \subseteq \beta(w) \setminus {\widehat{X}}$ and a $t$-slice $\mathbf{p}$ such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Let $\pi$ be a $\mathbf{p}$-subgraph in $({\widehat{G}}'_v \setminus Y,\lambda)$.
Similarly as before, let $P = \pi^{-1}(V({\widehat{G}}'))$; again, note that $v\notin \pi(P)$. Let $\mathbf{p}'$ be the core slice of $P$, and let $Q$ be the peelings of $P$.
Observe that for every $a \in \ensuremath{\partial} P = Q \cup \ensuremath{\partial} \mathbf{p}'$ we have $\pi(a) \neq v$ and, moreover, either $a \in \ensuremath{\partial} \mathbf{p}$ or $\pi(a) \in N_{{\widehat{G}}'_v}(v)$.
In both cases, $\pi(a) \in \beta(w') \setminus ({\widehat{X}} \cup Y)$,
and we may define $\lambda_P(a) := \Lambda(\pi(a))$. Note that with the labeling $\lambda_P|_{\ensuremath{\partial} \mathbf{p}'}$, the slice $\mathbf{p}'$ becomes a $t$-slice,
and $\pi|_{V(\mathbf{p}')}$ is a $\mathbf{p}'$-subgraph of $({\widehat{G}}' \setminus (Y \cup \pi(Q)),\lambda')$.
Since $|Y| + |\pi(Q)| + |V(\mathbf{p}')| \leq |Y| + |V(\mathbf{p})| \leq |V(H)|$,
by the properties of the witness graph guaranteed by Lemma~\ref{lem:witness}, we have that there exists a $\mathbf{p}'$-subgraph $\pi'$ of
$(G[\gamma(w') \setminus (X \cup {\widehat{X}} \cup Y \cup \pi(Q))], \lambda')$.
As no vertex of $Y$, $\pi(Q)$ nor $v$ belongs to the image of $\pi'$, and vertices of $Q$ are not adjacent to the vertices of $\mathrm{int} \mathbf{p}'$ in $H$,
a direct check shows that $\pi' \cup \pi|_{V(\mathbf{p}) \setminus V(\mathbf{p}')}$ is a $\mathbf{p}$-subgraph of $(G[\gamma(w) \setminus (X \cup {\widehat{X}} \cup Y)],\lambda)$.
Since $X$ is feasible for $w$ and $\mathbf{s}$, there exists a $\mathbf{p}$-subgraph of $({\widehat{G}} \setminus Y, \lambda)$.
As the choice of $Y$ and $\mathbf{p}$ was arbitrary, $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$, and $\mathbf{s}' \in \mathbb{S}$.
This finishes the proof of the correctness of computations at an introduce node.
\medskip
\noindent\textbf{Forget node.}
Consider now a forget node $w$ with child $w'$, and the unique vertex $v \in \beta(w') \setminus \beta(w)$.
Let $\mathbf{s} = ({\widehat{X}},{\widehat{G}})$ be a state for $w$; we are to compute $T[w,\mathbf{s}]$.
Define $\lambda := \Lambda|_{\beta(w) \setminus {\widehat{X}}}$ and $\lambda' := \Lambda|_{\beta(w') \setminus {\widehat{X}}}$.
First, observe that $\mathbf{s}^v := ({\widehat{X}} \cup \{v\},{\widehat{G}})$ is also a valid state for the node $w'$.
Note that $X \cup \{v\}$ is feasible for $\mathbf{s}$ if and only if $X$ is feasible for $w'$ and $\mathbf{s}^v$:
the question of feasibility of $X \cup \{v\}$ for $w$ and $\mathbf{s}$ and $X$ for $w'$ and $\mathbf{s}^v$ in fact inspects the same subgraph of $G$.
Consequently, we take $1+T[w',\mathbf{s}^v]$ as one candidate value for $T[w,\mathbf{s}]$.
In the remainder of the computations for the forget node we identify a family $\mathbb{S}$ of valid states for the node $w'$.
We prove that
\begin{enumerate}
\item for every $\mathbf{s}' \in \mathbb{S}$, and every $X$ that is feasible for $w'$ and $\mathbf{s}'$, $X$ is also feasible for $w$ and $\mathbf{s}$;\label{p:std:forget1}
\item for every $X$ that is feasible for $w$ and $\mathbf{s}$, and such that $v \notin X$, there exists $\mathbf{s}' \in \mathbb{S}$
such that $X$ is also feasible for $w'$ and $\mathbf{s}'$.\label{p:std:forget2}
\end{enumerate}
This claim will imply that
$$T[w,\mathbf{s}] = \min(1+T[w',\mathbf{s}^v], \min_{\mathbf{s}' \in \mathbb{S}} T[w',\mathbf{s}']).$$
The family $\mathbb{S}$ is defined as follows. We iterate through all valid states $\mathbf{s}' = ({\widehat{X}}',{\widehat{G}}')$ for the node $w'$.
First, we require ${\widehat{X}}' = {\widehat{X}}$. Second, we define the graph ${\widehat{G}}'_v$ as the graph ${\widehat{G}}'$ with the label $\Lambda(v)$ of the node $v$
forgotten, that is, ${\widehat{G}}'_v$ and ${\widehat{G}}'$ are equal as simple graphs and $\ensuremath{\partial} {\widehat{G}}'_v = \ensuremath{\partial} {\widehat{G}}' \setminus \{v\} = \beta(w) \setminus {\widehat{X}}$.
To include $\mathbf{s}'$ in $\mathbb{S}$, we require that $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
For Property~\ref{p:std:forget1},
let $\mathbf{s}' = ({\widehat{X}}, {\widehat{G}}') \in \mathbb{S}$ and let $X$ be feasible for $w'$ and $\mathbf{s}'$; we are to prove that $X$ is also feasible for $w$ and $\mathbf{s}$.
To this end, consider a set $Y \subseteq \beta(w) \setminus {\widehat{X}}$ and a $t$-slice $\mathbf{p}$ such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Let $\pi$ be a $\mathbf{p}$-subgraph in $(G[\gamma(w) \setminus (X \cup {\widehat{X}} \cup Y)], \lambda)$.
Note that $\pi$ is also a $\mathbf{p}$-subgraph in $(G[\gamma(w') \setminus (X \cup {\widehat{X}} \cup Y)], \lambda')$.
Since $X$ is feasible for $w'$ and $\mathbf{s}'$, there exists a $\mathbf{p}$-subgraph $\pi'$ in $({\widehat{G}}' \setminus Y,\lambda')$.
As $\mathbf{p}$ does not use the label $\Lambda(v)$, by the definition of ${\widehat{G}}'_v$, $\pi'$
is also a $\mathbf{p}$-subgraph of $({\widehat{G}}'_v \setminus Y,\lambda)$.
Since $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$, there exists a $\mathbf{p}$-subgraph
in $({\widehat{G}}\setminus Y,\lambda)$.
Since the choice of $Y$ and $\mathbf{p}$ was arbitrary, $X$ is feasible for $w$ and $\mathbf{s}$ and Property~\ref{p:std:forget1} is proven.
For Property~\ref{p:std:forget2}, let $X$ be a feasible set for $w$ and $\mathbf{s}$, and assume $v \notin X$.
Let ${\widehat{G}}'$ be the witness graph, whose existence is guaranteed by Lemma~\ref{lem:witness}, for the graph $(G[\gamma(w') \setminus (X \cup {\widehat{X}})],\lambda')$.
Define $\mathbf{s}' = ({\widehat{X}}, {\widehat{G}}')$. Observe that $\mathbf{s}'$ is a valid state for $w'$.
By definition of ${\widehat{G}}'$, the set $X$ is feasible for $w'$ and $\mathbf{s}'$.
It remains to show that $\mathbf{s}' \in \mathbb{S}$, that is, that $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
To this end, consider a set $Y \subseteq \beta(w) \setminus {\widehat{X}}$ and a $t$-slice $\mathbf{p}$ such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Let $\pi$ be a $\mathbf{p}$-subgraph in $({\widehat{G}}'_v \setminus Y,\lambda)$.
By the definition of ${\widehat{G}}'_v$, $\pi$ is also a $\mathbf{p}$-subgraph of $({\widehat{G}}' \setminus Y,\lambda')$.
By the properties of the witness graph of Lemma~\ref{lem:witness}, there exists a $\mathbf{p}$-subgraph $\pi'$ of $(G[\gamma(w') \setminus (X \cup {\widehat{X}} \cup Y)],\lambda')$.
As $\mathbf{p}$ does not use the label $\Lambda(v)$, $\pi'$ is also a $\mathbf{p}$-subgraph of $(G[\gamma(w) \setminus (X \cup {\widehat{X}} \cup Y)],\lambda)$.
Since $X$ is feasible for $w$ and $\mathbf{s}$, there exists a $\mathbf{p}$-subgraph of $({\widehat{G}} \setminus Y,\lambda)$.
As the choice of $Y$ and $\mathbf{p}$ was arbitrary, $({\widehat{G}}'_v,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$ and, consequently, $\mathbf{s}' \in \mathbb{S}$.
This finishes the proof of the correctness of the computations at the forget node.
\medskip
\noindent\textbf{Join node.}
Let $w$ be a join node with children $w_1$ and $w_2$, and let $\mathbf{s} = ({\widehat{X}},{\widehat{G}})$
be a state for $w$. Define $\lambda = \Lambda|_{\beta(w) \setminus {\widehat{X}}}$.
Our goal is to define a family $\mathbb{S}$ of pairs of states $(\mathbf{s}_1,\mathbf{s}_2)$
such that $\mathbf{s}_i$ is a valid state for the node $w_i$ for $i=1,2$, and:
\begin{enumerate}
\item for every $(\mathbf{s}_1,\mathbf{s}_2) \in \mathbb{S}$, and every
pair of sets $X_1,X_2$, such that $X_i$ is feasible for $w_i$ and $\mathbf{s}_i$, $i=1,2$, the set $X := X_1 \cup X_2$ is feasible for $w$ and $\mathbf{s}$;\label{p:std:join1}
\item for every $X$ that is feasible for $w$ and $\mathbf{s}$, there exists a pair $(\mathbf{s}_1,\mathbf{s}_2) \in \mathbb{S}$
such that the set $X_i := X \cap \alpha(w_i)$ is feasible for $w_i$ and $\mathbf{s}_i$, $i=1,2$.\label{p:std:join2}
\end{enumerate}
This claim will imply that
$$T[w,\mathbf{s}] = \min_{(\mathbf{s}_1,\mathbf{s}_2) \in \mathbb{S}} T[w_1,\mathbf{s}_1] + T[w_2,\mathbf{s}_2].$$
The family $\mathbb{S}$ is defined as follows. We iterate through all pairs $(\mathbf{s}_1,\mathbf{s}_2)$
such that $\mathbf{s}_i = ({\widehat{X}}_i,{\widehat{G}}_i)$ is a valid state for the node $w_i$, $i=1,2$.
First, we require ${\widehat{X}} = {\widehat{X}}_1 = {\widehat{X}}_2$.
Second, we define the graph ${\widehat{G}}_1 \oplus {\widehat{G}}_2$ as a disjoint union of the graphs ${\widehat{G}}_1$
and ${\widehat{G}}_2$ with the boundaries $\ensuremath{\partial} {\widehat{G}}_1 = \ensuremath{\partial} {\widehat{G}}_2 = \beta(w) \setminus {\widehat{X}}$ identified.
To include $(\mathbf{s}_1,\mathbf{s}_2)$ into $\mathbb{S}$, we require that $({\widehat{G}}_1 \oplus {\widehat{G}}_2, \lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
For Property~\ref{p:std:join1}, let $(\mathbf{s}_1,\mathbf{s}_2) \in \mathbb{S}$, $\mathbf{s}_i = ({\widehat{X}},{\widehat{G}}_i)$ for $i=1,2$.
Recall that $X_i$ is feasible for $w_i$ and $\mathbf{s}_i$, $i=1,2$, and $X = X_1 \cup X_2$; we are to prove that $X$ is feasible for $w$ and $\mathbf{s}$.
To this end, let $Y \subseteq \beta(w) \setminus {\widehat{X}}$ and let $\mathbf{p}$ be a $t$-slice such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Assume there exists a $\mathbf{p}$-subgraph $\pi$ in $(G[\gamma(w) \setminus (X \cup {\widehat{X}} \cup Y)],\lambda)$.
First, let $P_1 = \pi^{-1}(\gamma(w_1))$.
Let $\mathbf{p}_1$ be the core slice of $P_1$, and let $Q_1$ be the peelings of $P_1$.
Observe that for every $a \in \ensuremath{\partial} P_1 = Q_1 \cup \ensuremath{\partial} \mathbf{p}_1$ we have $\pi(a) \in \beta(w) \setminus ({\widehat{X}} \cup Y)$, and
we may define $\lambda_1(a) := \Lambda(\pi(a))$. Note that with the labeling $\lambda_1|_{\ensuremath{\partial} \mathbf{p}_1}$, the slice $\mathbf{p}_1$ becomes a $t$-slice,
and $\pi|_{V(\mathbf{p}_1)}$ is a $\mathbf{p}_1$-subgraph of $(G[\gamma(w_1) \setminus (X_1 \cup {\widehat{X}} \cup Y \cup \pi(Q_1))],\lambda)$.
Since $X_1$ is feasible for $w_1$ and $\mathbf{s}_1$, and $|Y| + |\pi(Q_1)| + |V(\mathbf{p}_1)| \leq |Y| + |V(\mathbf{p})| \leq |V(H)|$,
we have that there exists a $\mathbf{p}_1$-subgraph $\pi_1$ in $({\widehat{G}}_1 \setminus (Y \cup \pi(Q_1)),\lambda)$.
As no vertex of $Y \cup \pi(Q_1)$ belongs to the image of $\pi_1$, and vertices of $Q_1$ are not adjacent to the vertices of $\mathrm{int} \mathbf{p}_1$ in $H$,
a direct check shows that $\pi_\circ := \pi_1 \cup \pi|_{V(\mathbf{p}) \setminus V(\mathbf{p}_1)}$ is a $\mathbf{p}$-subgraph of $(({\widehat{G}}_1 \oplus G[\gamma(w_2) \setminus (X_2 \cup {\widehat{X}})]) \setminus Y,\lambda)$.
We now perform the same operation for $w_2$ and $\pi_\circ$. That is, let $P_2 = \pi_\circ^{-1}(\gamma(w_2))$.
Let $\mathbf{p}_2$ be the core slice of $P_2$, and let $Q_2$ be the peelings of $P_2$.
Again, we have $\pi_\circ(\ensuremath{\partial} P_2) \subseteq \beta(w) \setminus ({\widehat{X}} \cup Y)$
and we define $\lambda_2(a) := \Lambda(\pi_\circ(a))$ for $a \in \ensuremath{\partial} P_2$, turning $\mathbf{p}_2$ into a $t$-slice.
The mapping $\pi_\circ|_{V(\mathbf{p}_2)}$ is a $\mathbf{p}_2$-subgraph of $(G[\gamma(w_2) \setminus (X_2 \cup {\widehat{X}} \cup Y \cup \pi_\circ(Q_2))],\lambda)$.
Since $X_2$ is feasible for $w_2$ and $\mathbf{s}_2$, and $|Y| + |\pi_\circ(Q_2)| + |V(\mathbf{p}_2)| \leq |Y| + |V(\mathbf{p})| \leq |V(H)|$,
we have that there exists a $\mathbf{p}_2$-subgraph $\pi_2$ in $({\widehat{G}}_2 \setminus (Y \cup \pi_\circ(Q_2)),\lambda)$.
Similarly as before, a direct check shows that $\pi_2 \cup \pi_\circ|_{V(\mathbf{p}) \setminus V(\mathbf{p}_2)}$ is a $\mathbf{p}$-subgraph of $(({\widehat{G}}_1 \oplus {\widehat{G}}_2) \setminus Y,\lambda)$.
Recall that we require that $({\widehat{G}}_1 \oplus {\widehat{G}}_2,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
Consequently, there exists a $\mathbf{p}$-subgraph in $({\widehat{G}} \setminus Y,\lambda)$.
Since the choice of $Y$ and $\mathbf{p}$ was arbitrary, $X$ is feasible for $w$ and $\mathbf{s}$.
For Property~\ref{p:std:join2}, let $X$ be a feasible set for $w$ and $\mathbf{s}$.
For $i=1,2$, recall that $X_i = X \cap \alpha(w_i)$ and
let ${\widehat{G}}_i$ be the witness graph whose existence is guaranteed by Lemma~\ref{lem:witness}
for the graph $(G[\gamma(w_i) \setminus (X_i \cup {\widehat{X}})],\lambda)$.
Observe that $\mathbf{s}_i := ({\widehat{X}},{\widehat{G}}_i)$ is a valid state for the node $w_i$.
Moreover, by definition, $X_i$ is feasible for $w_i$ and $\mathbf{s}_i$.
To finish the proof of Property~\ref{p:std:join2}, it suffices to show
that $(\mathbf{s}_1,\mathbf{s}_2) \in \mathbb{S}$, that is, $({\widehat{G}}_1 \oplus {\widehat{G}}_2,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$.
To this end, consider a set $Y \subseteq \beta(w) \setminus {\widehat{X}}$ and a $t$-slice $\mathbf{p}$ such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$.
Let $\pi$ be a $\mathbf{p}$-subgraph in $({\widehat{G}}_1 \oplus {\widehat{G}}_2,\lambda)$.
Let $P_1 = \pi^{-1}(V({\widehat{G}}_1))$, let $\mathbf{p}_1$ be the core slice of $P_1$, and let $Q_1$ be the peelings of $P_1$.
Observe that for every $a \in \ensuremath{\partial} P_1 = Q_1 \cup \ensuremath{\partial} \mathbf{p}_1$ we have $\pi(a) \in \beta(w) \setminus ({\widehat{X}} \cup Y)$,
and we may define $\lambda_1(a) := \Lambda(\pi(a))$. Note that with the labeling $\lambda_1|_{\ensuremath{\partial} \mathbf{p}_1}$, the slice $\mathbf{p}_1$ becomes a $t$-slice,
and $\pi|_{V(\mathbf{p}_1)}$ is a $\mathbf{p}_1$-subgraph of $({\widehat{G}}_1 \setminus (Y \cup \pi(Q_1)),\lambda)$.
As $|Y| + |\pi(Q_1)| + |V(\mathbf{p}_1)| \leq |Y| + |V(\mathbf{p})| \leq |V(H)|$,
by the properties of the witness graph of Lemma~\ref{lem:witness},
there exists a $\mathbf{p}_1$-subgraph $\pi_1'$ of $(G[\gamma(w_1) \setminus (X_1 \cup {\widehat{X}} \cup Y \cup \pi(Q_1))],\lambda)$.
As no vertex of $Y \cup \pi(Q_1)$ belongs to the image of $\pi_1'$, and vertices of $Q_1$ are not adjacent to the vertices of $\mathrm{int} \mathbf{p}_1$ in $H$,
a direct check shows that $\pi_\circ := \pi_1' \cup \pi|_{V(\mathbf{p}) \setminus V(\mathbf{p}_1)}$
is a $\mathbf{p}$-subgraph of $((G[\gamma(w_1) \setminus (X_1 \cup {\widehat{X}})] \oplus {\widehat{G}}_2) \setminus Y,\lambda)$.
We now perform a similar operation for $w_2$ and $\pi_\circ$.
Let $P_2 = \pi_\circ^{-1}(V({\widehat{G}}_2))$, let $\mathbf{p}_2$ be the core slice of $P_2$, and let $Q_2$ be the peelings of $P_2$.
Since $\pi_\circ(\ensuremath{\partial} P_2) \subseteq \beta(w) \setminus ({\widehat{X}} \cup Y)$, we define $\lambda_2(a) := \Lambda(\pi_\circ(a))$ for $a \in \ensuremath{\partial} P_2$, turning $\mathbf{p}_2$ into a $t$-slice.
The mapping $\pi_\circ|_{V(\mathbf{p}_2)}$ is a $\mathbf{p}_2$-subgraph of $({\widehat{G}}_2 \setminus (Y \cup \pi_\circ(Q_2)),\lambda)$.
As $|Y| + |\pi_\circ(Q_2)| + |V(\mathbf{p}_2)| \leq |Y| + |V(\mathbf{p})| \leq |V(H)|$,
by the properties of the witness graph of Lemma~\ref{lem:witness},
there exists a $\mathbf{p}_2$-subgraph $\pi_2'$ of $(G[\gamma(w_2) \setminus (X_2 \cup {\widehat{X}} \cup Y \cup \pi_\circ(Q_2))],\lambda)$.
Similarly as before,
a direct check shows that $\pi_2' \cup \pi_\circ|_{V(\mathbf{p}) \setminus V(\mathbf{p}_2)}$
is a $\mathbf{p}$-subgraph of $(G[\gamma(w) \setminus (X \cup {\widehat{X}} \cup Y)],\lambda)$.
Since $X$ is feasible for $w$ and $\mathbf{s}$, there exists a $\mathbf{p}$-subgraph of $({\widehat{G}} \setminus Y,\lambda)$.
As the choice of $Y$ and $\mathbf{p}$ was arbitrary, $({\widehat{G}}_1 \oplus {\widehat{G}}_2,\lambda)$ is a witness-subgraph of $({\widehat{G}},\lambda)$ and $(\mathbf{s}_1,\mathbf{s}_2) \in \mathbb{S}$.
This finishes the proof of the correctness of computations at a join node.
\medskip
This finishes the description of the dynamic programming algorithm
for Theorem~\ref{thm:std:algo}, and concludes its proof.
\section{Boring proofs}\label{sec:boring}
\begin{proof}[of Lemma~\ref{lem:preprocess}]
First, preprocess $\Phi$ as follows:
\begin{enumerate}
\item Simplify all clauses that contain repeated variables: delete all clauses
that contain both a variable and its negation, and remove duplicate literals
from the remaining clauses.
\item As long as there exists a variable $x$ that appears only positively or only
negatively, fix the evaluation of $x$ that satisfies all clauses containing $x$
and simplify the formula.
\item As long as there exists a clause $C$ with only one literal, fix the evaluation
of this literal that satisfies $C$ and simplify the formula.
\item If some clause becomes empty in the process, return a dummy unsatisfiable clean formula.
\end{enumerate}
Observe that after this preprocessing the size of $\Phi$ has not increased,
while each remaining variable appears at least twice.
Second, replace every variable $x$ with a cycle of implications
$x_1 \Rightarrow x_2 \Rightarrow \ldots \Rightarrow x_{s(x)} \Rightarrow x_1$, where
$s(x)$ is the number of appearances of $x$ in $\Phi$.
More formally, for every variable $x$:
\begin{enumerate}
\item introduce $s(x)$ new variables $x_1, x_2, \ldots, x_{s(x)}$, and replace
each occurrence of $x$ in $\Phi$ with a distinct variable $x_i$; and
\item introduce $s(x)$ new clauses $x_i \Rightarrow x_{i+1}$ (i.e., $\neg x_i \vee x_{i+1}$)
for $i=1,2,\ldots,s(x)$, where $x_{s(x)+1} = x_1$.
\end{enumerate}
Observe that, after this replacement, each variable $x_i$ appears exactly three times in
the formula $\Phi$: positively in the implication $x_{i-1} \Rightarrow x_i$,
negatively in the implication $x_i \Rightarrow x_{i+1}$ (with the convention $x_0 = x_{s(x)}$
and $x_{s(x)+1} = x_1$), and the third time in the place of one former occurrence of the variable $x$.
Moreover, as after the first step each variable appears at least twice in $\Phi$,
for every former
variable $x$ we have $s(x) \geq 2$ and no new clause contains twice the same variable.
Finally, note that the second step increases the size of the formula by a constant factor.
The lemma follows.
\end{proof}
\section{Conclusions and open problems}\label{sec:conc}
Our preliminary study of the treewidth parameterization of the \shitarg{H}{}
problem revealed that its parameterized complexity is highly involved. Whereas for the more graspable colored version we obtained essentially tight bounds, a large gap between lower and upper bounds remains for the standard version.
In particular, the following two questions arise:
\begin{itemize}
\item Can the running time of Theorem~\ref{thm:std:algo} be improved so that the exponent involves the factor $t^{\mu(H)}$ instead of $t^{\mu^\star(H)}$?
\item Is there a relatively general symmetry-breaking assumption on $H$ that would allow us to show a $2^{o(t^{\mu(H)})}$ lower bound in the absence of colors?
\end{itemize}
In a broader view, let us remark that
the complexity of the treewidth parameterization of \emph{minor-hitting} problems
is also currently highly unclear. Here, for a minor-closed graph class $\mathcal{G}$
and input graph $G$, we seek the minimum size of a set $X \subseteq V(G)$ such that
$G \setminus X \in \mathcal{G}$, or, equivalently, $X$ hits all
minimal forbidden minors of $\mathcal{G}$.
A straightforward dynamic programming algorithm has double-exponential dependency on the
width of the decomposition.
However, it was recently shown that, for $\mathcal{G}$ being the class
of planar graphs, a $2^{\mathcal{O}(t \log t)} |V(G)|$-time algorithm exists~\cite{planarization}.
Can this result be generalized to more graph classes?
\section{Introduction}\label{sec:intro}
The ``optimality programme'' is a thriving trend within parameterized
complexity, which focuses on pursuing tight bounds on the time
complexity of parameterized problems. Instead of just determining
whether the problem is fixed-parameter tractable, that is, whether the
problem with a certain parameter $k$ can be solved in time $f(k)\cdot
n^{\mathcal{O}(1)}$ for some computable function $f(k)$, the goal is to
determine the best possible dependence $f(k)$ on the parameter $k$.
For several problems, matching upper and lower bounds have been
obtained on the function $f(k)$. The lower bounds are under the
complexity assumption Exponential Time Hypothesis (ETH), which roughly
states that $n$-variable 3SAT cannot be solved in time $2^{o(n)}$;
see, e.g., the survey of Lokshtanov et al.~\cite{lms:survey}.
One area where this line of research was particularly successful is the
study of fixed-parameter algorithms parameterized by the treewidth of
the input graph and understanding how the running time has to depend
on the treewidth. Classic results on model checking monadic
second-order logic on graphs of bounded treewidth, such as Courcelle's
Theorem, provide a unified and generic way of proving fixed-parameter
tractability of most of the tractable cases of this
parameterization~\cite{ArnborgLS91,courcelle}. While these results
show that certain problems are solvable in time $f(t)\cdot n$ on graphs of
treewidth $t$ for some function $f$, the exact function $f(t)$ resulting from this approach is usually hard to determine and far from
optimal. To get reasonable upper bounds on $f(t)$, one typically
resorts to constructing a dynamic programming algorithm, which often is straightforward, but tedious.
The question whether the straightforward dynamic programming
algorithms for bounded treewidth graphs are optimal received
particular attention in 2011. On the hardness side, Lokshtanov, Marx
and Saurabh proved that many natural algorithms are probably
optimal~\cite{lms:known,lms:slightly}. In particular, they showed that
there are problems for which the $2^{\mathcal{O}(t\log t)} n$ time algorithms
are best possible, assuming ETH.
On the algorithmic
side, Cygan et al.~\cite{cut-and-count} presented a new technique,
called {\em{Cut\&Count}}, that improved the running time
of the previously known (natural) algorithms for many connectivity
problems. For example, previously only $2^{\mathcal{O}(t\log t)}\cdot n^{\mathcal{O}(1)}$
algorithms were known for \textsc{Hamiltonian Cycle} and
\textsc{Feedback Vertex Set}, which was improved to $2^{\mathcal{O}(t)}\cdot
n^{\mathcal{O}(1)}$ by Cut\&Count. These results indicated that not only
is proving tight bounds for algorithms on tree decompositions within
our reach, but also that such research may lead to surprising algorithmic
developments. Further work includes derandomization
of Cut\&Count in~\cite{cut-and-count-derand1,FominLS14}, an attempt to
provide a meta-theorem to describe problems solvable in
single-exponential time~\cite{cut-and-count-logic}, and a new
algorithm for \textsc{Planarization}~\cite{planarization}.
We continue here this line of research by investigating a family of
subgraph-hitting problems parameterized by treewidth and find
surprisingly tight bounds for a number of problems. An interesting
conceptual message of our results is that, for every integer
$c\ge 1$, there are fairly natural problems where the best possible
dependence on treewidth is of the form $2^{\mathcal{O}(t^c)}$.
\paragraph{Studied problems and motivation}
In our paper we focus on the following generic \shitarg{H}{} problem: for a pattern graph $H$
and an input graph $G$, what is the minimum size of a set $X \subseteq V(G)$ that hits
all subgraphs of $G$ that are isomorphic to $H$?
(Henceforth we call them \emph{$H$-subgraphs} for brevity.)
This problem generalizes a few other problems studied in the literature,
for example \textsc{Vertex Cover} (for $H = P_2$)~\cite{lms:known},
or finding the largest induced subgraph
of maximum degree at most $\Delta$ (for $H = K_{1,\Delta+1}$)~\cite{max-deg-vd}. We also study the following \emph{colorful} variant \cshitarg{H}{}, where the input graph $G$
is additionally equipped with a coloring $\sigma : V(G) \to V(H)$, and we are only interested
in hitting $H$-subgraphs where every vertex matches its color.
A direct source of motivation for our study is the work of Pilipczuk~\cite{cut-and-count-logic}, which attempted to describe graph problems admitting fixed-parameter algorithms with running time of the form $2^{\mathcal{O}(t)}\cdot |V(G)|^{\mathcal{O}(1)}$, where $t$ is the treewidth of $G$.
The proposed description is a logical formalism where one can quantify existence of some vertex/edge sets,
whose properties can be verified ``locally'' by requesting satisfaction of a formula of modal logic in every vertex.
In particular, Pilipczuk argued that the language for expressing local properties needs to be somehow modal,
as it should not be able to discover cycles in a constant-radius neighborhood of a vertex.
This claim was supported by a lower bound: unless ETH fails, for any constant $\ell\ge 5$, the problem of finding the minimum size of a set that hits all the cycles $C_\ell$ in a graph of treewidth $t$ cannot be solved in time $2^{o(t^2)}\cdot |V(G)|^{\mathcal{O}(1)}$. Motivated by this result, we think that it is natural to investigate the complexity of hitting subgraphs for more general patterns $H$, instead of just cycles.
We may see the colorful variant as an intermediate step towards full
understanding of the complexity of \shitarg{H}{}, but it is also an
interesting problem on its own. It often turns out that the
colorful variants of problems are easier to investigate, while their
study reveals useful insights; a remarkable example is the
kernelization lower bound for \textsc{Set Cover} and related
problems~\cite{colors-and-ids}. In our case, if we allow colors, a
major combinatorial difficulty vanishes: when the algorithm keeps
track of different parts of the pattern $H$ that appear in the graph
$G$, and combines a few parts into a larger one, the coloring $\sigma$
ensures that the parts are vertex-disjoint. Hence,
the colorful variant is easier to study, whereas at the same time it
reveals interesting insight into the standard variant.
\paragraph{Our results and techniques}
In the case of \cshitarg{H}{}, we obtain tight bounds for the complexity
of the treewidth parameterization. First, note that, in the presence
of colors, one can actually solve \cshitarg{H}{} for each connected
component of $H$ independently; hence, we may focus only on connected
patterns $H$. Second, we observe that there are two special cases. If
$H$ is a path then \cshitarg{H}{} reduces to a maximum flow/minimum cut
problem, and hence is polynomial-time solvable. If $H$ is a clique,
then any $H$-subgraph of $G$ needs to be contained in a single bag of
any tree decomposition, and there is a simple $2^{\mathcal{O}(t)} |V(G)|$-time
algorithm, where $t$ is the treewidth of $G$.
Finally, for the remaining cases we show that the dependence on
treewidth is tightly connected to the value of $\mu(H)$, the maximum
size of a minimal vertex separator in $H$ (a separator $S$ is minimal
if there are two vertices $x,y$ such that $S$ is an $xy$-separator,
but no proper subset of $S$ is). We prove the following matching upper
and lower bounds.
\begin{theorem}\label{thm:intro:cshit:algo}
A \cshitarg{H}{} instance $(G,\sigma)$
can be solved in time $2^{\mathcal{O}(t^{\mu(H)})} |V(G)|$
in the case when $H$ is connected and is not a clique,
where $t$ is the treewidth of $G$.
\end{theorem}
\begin{theorem}\label{thm:intro:lb:col}
Let $H$ be a graph that contains a connected component that is neither a path nor a clique.
Then, unless ETH fails, there does
not exist an algorithm that, given a \cshitarg{H}{} instance $(G,\sigma)$
and a tree decomposition of $G$ of width $t$, resolves $(G,\sigma)$
in time $2^{o(t^{\mu(H)})} |V(G)|^{\mathcal{O}(1)}$.
\end{theorem}
In every theorem of this paper, we treat $H$ as a fixed graph of constant size, and hence the factors hidden in the $\mathcal{O}$-notation may depend on the size of $H$.
In the absence of colors, we give preliminary results showing that the parameterized
complexity of the treewidth parameterization of \shitarg{H}{} is more involved than
the one of the colorful counterpart.
In this setting, we are able to relate the dependence on treewidth only to a larger parameter of the graph $H$.
Let $\mu^\star(H)$ be the maximum size of $N_H(A)$, where $A$ iterates
over connected subsets of $V(H)$ such that $N_H(N_H[A]) \neq \emptyset$, i.e.,
$N_H[A]$ is not a whole connected component of $H$.
Observe that $\mu(H) \leq \mu^\star(H)$ for any $H$.
First, we were able to construct a counterpart of Theorem~\ref{thm:intro:cshit:algo}
only with the exponent $\mu^\star(H)$.
\begin{theorem}\label{thm:std:algo}
Assume $H$ contains a connected component that is not a clique.
Then, given a graph $G$ of treewidth $t$,
one can solve \shitarg{H}{} on $G$ in time $2^{\mathcal{O}(t^{\mu^\star(H)} \log t)} |V(G)|$.
\end{theorem}
We remark that for \cshitarg{H}{}, an algorithm with running time
$2^{\mathcal{O}(t^{\mu^\star(H)})} |V(G)|$
(as opposed to $\mu(H)$ in the exponent in Theorem~\ref{thm:intro:cshit:algo})
is rather straightforward: in the state of
dynamic programming one needs to remember, for every subset $X$ of the bag of size at most $\mu^\star(H)$, all forgotten connected parts of $H$ that are attached to $X$ and not hit
by the constructed solution. To decrease the exponent to $\mu(H)$, we introduce a
``prediction-like'' definition of a state of the dynamic programming,
leading to a highly involved proof of correctness.
For the problem without colors, however, even an algorithm with the exponent $\mu^\star(H)$ (Theorem~\ref{thm:std:algo})
is far from trivial. We cannot limit ourselves to keeping track of forgotten
connected parts of the graph $H$ independently of each other, since in the absence of colors
these parts may not be vertex-disjoint and, hence, we would not be able to reason
about their union in later bags of the tree decomposition.
To cope with this issue, we show that the set of forgotten
(not necessarily connected) parts of the graph $H$ that are subgraphs of $G$
can be represented as a \emph{witness graph} with $\mathcal{O}(t^{\mu^\star(H)})$ vertices and edges.
As there are only $2^{\mathcal{O}(t^{\mu^\star(H)} \log t)}$ possible graphs of this size,
the running time bound follows.
We also observe that the bound of $\mathcal{O}(t^{\mu^\star(H)})$ on the size of the witness graph
is not tight for many patterns $H$. For example, if $H$ is a path, then
we are able to find a witness graph with $\mathcal{O}(t)$ vertices and edges, and the algorithm of Theorem~\ref{thm:std:algo}
runs in time $2^{\mathcal{O}(t \log t)} |V(G)|$.
From the lower bound perspective, we were not able to prove an analogue of
Theorem~\ref{thm:intro:lb:col} in the absence of colors. However, there is a good reason
for that: we show that for any fixed $h \geq 2$ and $H = K_{2,h}$,
the \shitarg{H}{} problem is solvable in time $2^{\mathcal{O}(t^2 \log t)} |V(G)|$
for a graph $G$ of treewidth $t$.
This should be put in contrast with $\mu^\star(K_{2,h}) = \mu(K_{2,h}) = h$.
Moreover, the lower bound of $2^{o(t^h)}$ can be proven if we break the symmetry
of $K_{2,h}$ by attaching a triangle to each of the two degree-$h$ vertices of $K_{2,h}$ (obtaining a graph
we denote by $H_h$; see Figure~\ref{fig:Hh}).
\begin{theorem}\label{thm:lb:Hh}
Unless ETH fails, for every $h \geq 2$ there does
not exist an algorithm that, given a \shitarg{H_h} instance $G$
and a tree decomposition of $G$ of width $t$, resolves $G$
in time $2^{o(t^h)} |V(G)|^{\mathcal{O}(1)}$.
\end{theorem}
This indicates that the optimal dependency on $t$ in an algorithm
for \shitarg{H}{} may heavily rely on the symmetries of $H$, and may
be more difficult to pinpoint.
\paragraph{Organization of the paper}
After setting notation in Section~\ref{sec:prelims},
we prove Theorem~\ref{thm:std:algo} in Section~\ref{sec:std:algo},
with a special emphasis on the existence of the witness graph at the beginning of the section.
We discuss the special cases of \shitarg{H}{} in Section~\ref{sec:std-discussion}.
The proofs of results for the colorful variant,
namely Theorems~\ref{thm:intro:cshit:algo} and~\ref{thm:intro:lb:col},
are provided in Sections~\ref{sec:algo-col} and~\ref{sec:lb-col}, respectively.
Section~\ref{sec:conc} concludes the paper.
\section{Lower bound for \cshitarg{H}{}}\label{sec:lb-col}
In this section we prove a tight lower bound
for \cshitarg{H}{}. The proofs are inspired by the approach of~\cite{cut-and-count-logic}
for the lower bound for \shitarg{C_\ell}.
In our constructions, we often use the following basic operation.
Let $(G,\sigma)$ be an $H$-colored graph constructed so far.
We pick some induced subgraph $H[Z]$ of $H$ and ``add a copy of $H[Z]$ to $G$''.
By this, we mean that we take a disjoint union of $G$ and $H[Z]$, and color
$H[Z]$ (extend $\sigma$ to the copy of $H[Z]$) naturally: a vertex $d \in Z$ receives
color $d$.
After this operation, we often identify some vertices of the new copy $H[Z]$ with some
old vertices of $G$. However, we always identify pairs of vertices of the same color,
thus $\sigma$ is defined naturally after the identification.
We first start with a simple single-exponential lower bound
that describes the case when $H$ is a forest.
\begin{theorem}\label{thm:lb:col-vc}
Let $H$ be a graph that contains a connected component that is not a path.
Then, unless ETH fails, there does
not exist an algorithm that, given a \cshitarg{H}{} instance $(G,\sigma)$
and a tree decomposition of $G$ of width $t$, resolves $(G,\sigma)$
in time $2^{o(t)} |V(G)|^{\mathcal{O}(1)}$.
\end{theorem}
\begin{proof}
We reduce from the well-known \textsc{Vertex Cover} problem.
We show a polynomial-time algorithm that, given a graph $G_0$,
outputs a \cshitarg{H}{} instance $(G,\sigma)$ together with a tree
decomposition of $G$ of width $|V(G_0)| + \mathcal{O}(1)$
such that the minimum possible size
of a vertex cover in $G_0$ equals the minimum possible size of a solution to $(G,\sigma)$ minus
$|E(G_0)|$.
As a $2^{o(n)}$-time algorithm for \textsc{Vertex Cover}
would contradict ETH~\cite{vc-subexp}, this will imply the statement of the theorem.
Let $C$ be a connected component of $H$ that is not a path.
It is easy to verify that in such a component there always exist at least three vertices
that are not cutvertices; let $a$, $b$ and $c$ be any three of them.
We construct an instance $(G,\sigma)$ as follows.
We start with $V(G) = V(G_0)$, with each vertex of $V(G_0)$ colored $a$.
Then, for each edge $uv \in E(G_0)$ we add three copies $C_{uv},C_{uv,u}$ and $C_{uv,v}$
of the graph $H[C]$
and identify the following pairs of vertices:
\begin{itemize}
\item the vertex $a$ in the copy $C_{uv,u}$ with the vertex $u$,
\item the vertex $a$ in the copy $C_{uv,v}$ with the vertex $v$,
\item the vertices $b$ in the copies $C_{uv,u}$ and $C_{uv}$, and
\item the vertices $c$ in the copies $C_{uv,v}$ and $C_{uv}$.
\end{itemize}
Define $D_{uv}$ to be the vertex set of all copies of $H[C]$ introduced for the edge $uv$.
Let $\pi_{uv}, \pi_{uv,u}$ and $\pi_{uv,v}$ be the (naturally induced)
injective homomorphisms from $H[C]$ to $G[C_{uv}], G[C_{uv,u}]$ and $G[C_{uv,v}]$, respectively.
Let $(G',\sigma')$ be the instance constructed so far, and observe
that all values of $\sigma'$ lie in $C$.
Finally, we add to $G$ a large number (at least $|E(G_0)| + |V(G_0)| + 1$) of copies of $H \setminus C$.
This completes the description of the instance $(G,\sigma)$.
Observe that each connected component of $G \setminus V(G_0)$ is of size at most
$3|V(H)|-4 = \mathcal{O}(1)$.
Hence, it is straightforward to provide a tree decomposition of $G$ of width $|V(G_0)| + \mathcal{O}(1)$.
We now argue about the correctness of the reduction. First, let $Z$ be a vertex cover of $G_0$.
Define $X \subseteq V(G)$ as follows: first set $X := Z$ and then, for each edge $uv \in E(G_0)$,
if $u \in Z$ then add to $X$ the vertex $c$ in the copy $C_{uv}$
and otherwise add the vertex $b$ in the copy $C_{uv}$.
Clearly, $|X| = |Z| + |E(G_0)|$. We claim that $X$ hits all $\sigma'$-$H[C]$-subgraphs of $G'$,
and, consequently by Lemma~\ref{lem:cshit:conn}, hits all $\sigma$-$H$-subgraphs of $G$.
Let $\pi$ be any $\sigma'$-$H[C]$-subgraph of $G'$.
Recall that none of the vertices $a$, $b$ and $c$ is a cutvertex of $H[C]$.
As each vertex of $V(G_0)$ is colored $a$, at most one such vertex can be used in the image of $\pi$.
We infer that there exists
a single edge $uv \in E(G_0)$ such that $\pi(C) \subseteq D_{uv}$, as otherwise $\pi(a)$ is a cutvertex of $\pi(H[C])$.
Moreover, as the vertices $b$ and $c$ in the copy $C_{uv}$ are cutvertices of $G[D_{uv}]$,
we have that $\pi(C)$ is contained in one of the sets $C_{uv}$, $C_{uv,u}$ or $C_{uv,v}$.
However, each of these sets is of size $|C|$, and each has non-empty intersection
with $X$. The claim follows.
In the other direction, let $X \subseteq V(G)$ be such that $X$ hits
all $\sigma$-$H$-subgraphs of $G$. We claim that there exists a vertex cover of $G_0$
of size at most $|X| - |E(G_0)|$. As $G$ contains a large number of copies of $H \setminus C$,
the set $X \cap V(G')$ needs to hit all $\sigma'$-$H[C]$-subgraphs of $G'$.
In particular, $X$ hits $\pi_{uv}$ for every $uv \in E(G_0)$. For every $uv \in E(G_0)$
pick one $x_{uv} \in X \cap C_{uv}$; note that $x_{uv}$ are pairwise distinct
as the sets $C_{uv}$ are pairwise disjoint.
Denote $Y = X \setminus \{x_{uv} : uv \in E(G_0)\}$.
Construct a set $Z \subseteq V(G_0)$ as follows: for each $y \in Y$
\begin{enumerate}
\item if $y \in V(G_0)$, insert $y$ into $Z$;
\item if $y \in C_{uv,u} \setminus \{u\}$ for some $uv \in E(G_0)$, insert $u$ into $Z$;
\item if $y \in C_{uv,v} \setminus \{v\}$ for some $uv \in E(G_0)$, insert $v$ into $Z$;
\item otherwise, do nothing.
\end{enumerate}
Observe that each $y \in Y$ gives rise to at most one vertex in $Z$. Hence,
$|Z| \leq |Y| = |X| - |E(G_0)|$.
We finish the proof of the theorem by showing that
$Z$ is a vertex cover of $G_0$.
Consider any $uv \in E(G_0)$. The vertex $x_{uv}$ cannot be simultaneously equal
to both the vertex $b$ and the vertex $c$ in the copy $C_{uv}$; by symmetry, assume
$x_{uv}$ does not equal the vertex $b$ in the copy $C_{uv}$.
Hence, $x_{uv}$ does not hit $\pi_{uv,u}$ and, consequently, there exists $y \in Y$
that hits $\pi_{uv,u}$. By the construction of $Z$, the vertex $y$ forces $u \in Z$.
As the choice of $uv$ was arbitrary, $Z$ is a vertex cover of $G_0$ and the theorem is proven.
\end{proof}
We are now ready to present the main lower bound construction.
\begin{theorem}[Theorem~\ref{thm:intro:lb:col} restated]\label{thm:lb:col}
Let $H$ be a graph that contains a connected component that is neither a path nor a clique.
Then, unless ETH fails, there does
not exist an algorithm that, given a \cshitarg{H}{} instance $(G,\sigma)$
and a tree decomposition of $G$ of width $t$, resolves $(G,\sigma)$
in time $2^{o(t^{\mu(H)})} |V(G)|^{\mathcal{O}(1)}$.
\end{theorem}
\begin{proof}
The case $\mu(H) = 1$ is proven by Theorem~\ref{thm:lb:col-vc}, so in the remainder
of the proof we focus on the case $\mu := \mu(H) \geq 2$.
We show a polynomial-time algorithm that, given a clean $3$-CNF formula $\Phi$ with $n$
variables,
outputs a \cshitarg{H}{} instance $(G,\sigma)$ together with a tree decomposition of $G$
of width $\mathcal{O}(n^{1/\mu})$ and an integer $k$, such that $\Phi$ is satisfiable
if and only if there exists a set $X \subseteq V(G)$ of size at most $k$
that hits all $\sigma$-$H$-subgraphs of $G$.
By Lemma~\ref{lem:preprocess}, this would in fact give a reduction from an arbitrary
$3$-CNF formula, and hence conclude the proof of the theorem by Theorem~\ref{thm:spars}.
Let $a,b \in V(H)$ and $S \subseteq V(H)$ be such that $S$ is a minimal $ab$-separator in $H$ and $|S| = \mu$.
We pick vertices $a$ and $b$ in such a manner that neither of them is a cutvertex in $H$;
observe that this is always possible.
Let $A,B$ be the connected components of $H \setminus S$ that contain $a$ and $b$, respectively.
Finally, let $D$ be the connected component of $H$ that contains both $a$ and $b$.
We first develop two auxiliary gadgets for the construction.
The first gadget, an $\alpha$-\emph{OR-gadget} for $\alpha \in \{a,b\}$, is constructed as follows.
Let $c$ and $d$ be two arbitrary vertices of $S$.
We take three copies $D^1,D^\circ,D^2$ of $H[D]$ and identify:
\begin{enumerate}
\item the vertex $c$ in the copies $D^1$ and $D^\circ$, and
\item the vertex $d$ in the copies $D^2$ and $D^\circ$.
\end{enumerate}
The vertices $\alpha$ (recall $\alpha \in \{a,b\}$)
in the copies $D^1$ and $D^2$ are called the \emph{attachment points}
of the OR-gadget; let us denote them by $\alpha^1$ and $\alpha^2$, respectively.
For any graph $G$ and coloring $\sigma: V(G) \to V(H)$,
and for any two vertices $u,v \in V(G)$ of color $\alpha$, by \emph{attaching an OR-gadget}
to $u$ and $v$ we mean the following operation: we create a new copy of the $\alpha$-OR-gadget,
and identify the attachment vertices $\alpha^1$ and $\alpha^2$ with $u$ and $v$, respectively.
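This operation can be made concrete as in the Python sketch below (an illustration only, not used in the proof; graphs are assumed to be stored as dictionaries mapping vertices to neighbour sets, colorings as dictionaries, and the function names and fresh vertex names are our own bookkeeping).
\begin{verbatim}
# Attaching an alpha-OR-gadget to vertices u, v of a colored graph (G, sigma).
# HD is the induced subgraph H[D]; alpha, c, d are its distinguished vertices.
from itertools import count

fresh = count()  # source of fresh vertex names for the copies

def add_copy(G, sigma, HD, identify):
    """Add a disjoint copy of HD to G, except that a vertex h listed in
    identify is glued onto the existing vertex identify[h]; every copied
    vertex keeps the color of the H-vertex it copies."""
    name = {h: identify.get(h, ('copy', next(fresh), h)) for h in HD}
    for h in HD:
        G.setdefault(name[h], set())
        sigma.setdefault(name[h], h)
        for h2 in HD[h]:
            G[name[h]].add(name[h2])
            G.setdefault(name[h2], set()).add(name[h])
    return name

def attach_or_gadget(G, sigma, HD, alpha, c, d, u, v):
    copy1 = add_copy(G, sigma, HD, {alpha: u})   # D^1, its alpha glued onto u
    copy2 = add_copy(G, sigma, HD, {alpha: v})   # D^2, its alpha glued onto v
    # D^circ shares its vertex c with D^1 and its vertex d with D^2
    add_copy(G, sigma, HD, {c: copy1[c], d: copy2[d]})
\end{verbatim}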
The following claim summarizes the properties of an $\alpha$-OR-gadget.
\begin{myclaim}\label{cl:OR-gadget}
Let $(G',\sigma')$ be a colored graph created by attaching an $\alpha$-OR-gadget
to $u$ and $v$ in a colored graph $(G,\sigma)$. Let $\Gamma$ be the vertex set of the gadget
(including $u$ and $v$).
Then
\begin{enumerate}
\item any set $X \subseteq V(G')$ that hits all $\sigma'$-$H[D]$-subgraphs of $G'$
needs to contain at least two vertices of $\Gamma$, including at least one vertex
of $\Gamma \setminus \{u,v\}$;
\item there exist sets $X^u, X^v \subseteq \Gamma$, each of size $2$, such that $u \in X^u$,
$v \in X^v$, and both these sets hit all $\sigma'$-$H[D]$-subgraphs of $G'$
that contain at least one vertex of $\Gamma \setminus \{u,v\}$.
\end{enumerate}
\end{myclaim}
\begin{proof}
For the first claim, observe that $X$ needs to contain a vertex $x \in D^\circ \subseteq \Gamma \setminus \{u,v\}$.
If $x$ is not equal to $c$ in the copy $D^\circ$, then $X$ needs to additionally contain a vertex
of $D^1$. Symmetrically, if $x$ is not equal to $d$ in the copy $D^\circ$, then
$X$ needs to additionally contain a vertex of $D^2$.
For the second claim, let $X^u$ consist of $u$ and the vertex $d$ in the copy $D^\circ$,
and let $X^v$ consist of $v$ and the vertex $c$ in the copy $D^\circ$.
Let $\pi$ be any $\sigma'$-$H[D]$-subgraph of $G'$ such that $\pi(D)$ contains
a vertex of $\Gamma \setminus \{u,v\}$.
We argue that $\pi$ is hit by $X^u$; the argumentation for $X^v$ is symmetrical.
Since neither $a$ nor $b$ is a cutvertex of $H$, and both
$u$ and $v$ have the same color in $\sigma$, we have $\pi(D) \subseteq \Gamma$.
If $\pi(d) \in D^\circ$ then we are done, so $\pi(d) \in D^1$.
Since $H[D]$ is connected, $\pi(c) \in D^1$. Moreover, we now have two options for $\pi(a)$: either $u$, or the vertex $a$ in the copy $D^\circ$. If it were not the case that $\pi(a)=u$, then $c$ would be a cutvertex in $H$ separating $a$ from $b$. However, since $S$ also contains $d$, which is different from $c$, this would contradict the minimality of $S$.
Consequently, $\pi(a)=u$ and $X^u$ hits $\pi$.
\renewcommand{\qed}{\cqedsymbol}\maybeqed\end{proof}
The second gadget, an $\alpha$-$r$-cycle for $\alpha \in \{a,b\}$ and integer $r \geq 2$,
is constructed as follows.
We first take $r$ vertices $\alpha^1,\alpha^2,\ldots,\alpha^r$, each colored $\alpha$.
Then, we attach an $\alpha$-OR-gadget to the pair $\alpha^i,\alpha^{i+1}$ for every $1 \leq i \leq r$
(with the convention $\alpha^{r+1} = \alpha^1$).
For any graph $G$ and coloring $\sigma: V(G) \to V(H)$,
and for any sequence of pairwise distinct vertices $u^1,u^2,\ldots,u^r$, each colored $\alpha$,
by \emph{attaching an $\alpha$-$r$-cycle} to $u^1,u^2,\ldots,u^r$ we mean the following
operation: we create a new copy of the $\alpha$-$r$-cycle and identify $u^i$
with $\alpha^i$ for every $1 \leq i \leq r$.
The following claim summarizes the properties of an $\alpha$-$r$-cycle.
\begin{myclaim}\label{cl:cycle}
Let $(G',\sigma')$ be a colored graph created by attaching a $\alpha$-$r$-cycle
to $u^1,u^2,\ldots,u^r$ in a colored graph $(G,\sigma)$.
Let $\Gamma$ be the vertex set of the gadget (including all vertices $u^i$).
Then
\begin{enumerate}
\item any set $X \subseteq V(G')$ that hits all $\sigma'$-$H[D]$-subgraphs of $G'$
needs to contain at least $r + \lceil r/2 \rceil$ vertices of $\Gamma$,
including at least $r$ vertices of $\Gamma \setminus \{u^i: 1 \leq i \leq r\}$;
\item for every set $I \subseteq \{1,2,\ldots,r\}$ that contains either $i$ or $i+1$
for every $1 \leq i < r$, and contains either $1$ or $r$,
there exist sets $X^I \subseteq \Gamma$ of size $|I|+r$ such that $u^i \in X^I$ whenever $i \in I$,
and $X^I$ hits all $\sigma'$-$H[D]$-subgraphs of $G'$
that contain at least one vertex of $\Gamma \setminus \{u^i: 1 \leq i \leq r\}$.
\end{enumerate}
\end{myclaim}
\begin{proof}
For the first claim, apply the first part of Claim~\ref{cl:OR-gadget}
to each introduced $\alpha$-OR-gadget:
$X$ needs to contain at least one vertex that is not an attachment vertex
in each $\alpha$-OR-gadget between $u^i$ and $u^{i+1}$ ($r$ vertices in total)
and, moreover, at least two vertices in each $\alpha$-OR-gadget.
For the second claim, construct $X^I$ as follows: first take all vertices $u^i$ for $i \in I$
and then, for each $1 \leq i \leq r$, insert into $X^I$ the set $X^{u^i}$
from Claim~\ref{cl:OR-gadget}, if $i \in I$, and the set $X^{u^{i+1}}$ otherwise.
Observe that, in the second step, each index $i$ gives rise to exactly one new vertex
of $X^I$, and hence $|X^I| = |I| + r$. Moreover, the required hitting property of $X^I$
follows directly from Claim~\ref{cl:OR-gadget}.
\renewcommand{\qed}{\cqedsymbol}\maybeqed\end{proof}
Armed with the aforementioned gadgets,
we now proceed to the construction of the instance $(G,\sigma)$.
Let $s$ be the smallest positive integer such that $s^\mu \geq 3n$. Observe that
$s = \mathcal{O}(n^{1/\mu})$. We start our construction by introducing
a set $M$ of $s\mu$ vertices $w_{i,c}$, $1 \leq i \leq s$, $c \in S$.
We define $\sigma(w_{i,c}) = c$.
The set $M$ is the central part of the constructed graph $G$.
In particular,
in our reduction each connected component of $G \setminus M$ will be of constant size,
yielding immediately the promised tree decomposition.
To each clause $C$ of $\Phi$, and to each literal $l$ in $C$, assign a function
$f_{C,l}: S \to \{1,2,\ldots,s\}$ such that $f_{C,l} \neq f_{C',l'}$ for $(C,l) \neq (C',l')$.
Observe that this is possible due to the assumption $s^\mu \geq 3n$ and the fact
that $\Phi$ is clean.
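Concretely, since $\Phi$ is clean there are exactly $3n$ pairs $(C,l)$, and pairwise distinct functions can be obtained by reading off the $\mu$ values of $f_{C,l}$ from the base-$s$ representation of the index of $(C,l)$, as in the Python sketch below (an illustration only; the ordering of $S$ and the function name are ours).
\begin{verbatim}
# Assigning pairwise distinct functions f_{C,l} : S -> {1,...,s} to the
# at most s^mu occurrences (C,l), via base-s digits of the occurrence index.

def assign_functions(pairs, S, s):
    """pairs: list of occurrences (C, l); S: list of the mu vertices of S."""
    assert len(pairs) <= s ** len(S)
    f = {}
    for index, (C, l) in enumerate(pairs):
        digits = []
        for _ in S:                 # mu base-s digits, each shifted into {1,...,s}
            digits.append(index % s + 1)
            index //= s
        f[(C, l)] = dict(zip(S, digits))
    return f
\end{verbatim}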
For each variable $x$ of $\Phi$, proceed as follows.
First, for each clause $C$ and literal $l \in \{x, \neg x\}$,
we introduce a copy $D_{x,C,l}$ of $H[N[A]]$ and identify
every vertex $c \in S$ in the copy $D_{x,C,l}$ with the vertex $w_{f_{C,l}(c),c}$.
Let $a_{x,C,l}$ be the vertex $a$ in the copy $D_{x,C,l}$.
Second, introduce a new dummy vertex $a_x$, colored $a$.
Finally, attach an $a$-$4$-cycle to vertices $a_{x,C_1,l}, a_{x,C_2,\neg l}, a_{x,C_3,l}, a_x$;
recall that $x$ appears exactly three times in $\Phi$, twice positively and once
negatively or twice negatively and once positively.
For each clause $C$ of $\Phi$, proceed as follows.
First, for each literal $l$ in $C$ introduce a copy $D_{C,l}$ of $H[N[B]]$ and identify
every vertex $c \in S$ in the copy $D_{C,l}$ with the vertex $w_{f_{C,l}(c),c}$.
Let $b_{C,l}$ be the vertex $b$ in the copy $D_{C,l}$.
Second,
we attach a $b$-$r_C$-cycle to the vertices $b_{C,l}$, where $2 \leq r_C \leq 3$ is the number
of literals in $C$.
We define $k = 12n-m$, where $n$ is the number of variables in $\Phi$ and $m$ is the number
of clauses.
Finally, perform the following two operations.
First, for each connected component $L$ of $H[D] \setminus S$ that is different from $A$ and $B$,
and for each function $f: N(L) \to \{1,2,\ldots,s\}$, create a copy $D_{L,f}$ of $H[N[L]]$ and
identify each vertex $c \in N(L) \subseteq S$ with the vertex $w_{f(c),c}$.
Let $(G',\sigma')$ be the colored graph constructed so far.
Second, introduce a large number (at least $k+1$) of disjoint copies of $H \setminus D$
into the graph $G$. This concludes the construction of the \cshitarg{H}{} instance $(G,\sigma)$.
Observe that each connected component of $G \setminus M$ is of constant size.
Thus, it is straightforward to provide a tree decomposition of $G$ of width
$s\mu + \mathcal{O}(1)= \mathcal{O}(n^{1/\mu})$.
Hence, it remains to argue about the correctness of the construction.
In one direction, let $\phi$ be a satisfying assignment of $\Phi$.
Define a set $X \subseteq V(G)$ as follows.
\begin{enumerate}
\item For each variable $x$, include into $X$
the set $X^I$ from Claim~\ref{cl:cycle} for the $a$-$4$-cycle created for the variable $x$
that contains the vertices $a_{x,C,l}$ for clauses $C$ where $l$ is evaluated
to true by $\phi$,
and the vertex $a_x$ if there is only one such clause.
Note that the construction of the attachment points of the $a$-$4$-cycle created for $x$
ensures that $X^I$ contains exactly two non-consecutive attachment points, and hence
$|X^I| = 6$.
\item For each clause $C$, pick one literal $l$ that is satisfied by $\phi$
in $C$, and include into $X$ the set $X^I$ from Claim~\ref{cl:cycle}
for the $b$-$r_C$-cycle created for $C$ that contains $b_{C,l'}$ for all $l' \neq l$.
Observe that we include exactly $2r_C-1$ vertices into $X$ for the clause $C$.
\end{enumerate}
Since $\Phi$ is clean, we observe that
\begin{equation}\label{eq:lb-col}
|X| = 6n + \sum_{\mathrm{clause\ }C} (2r_C-1) = 6n + 2\cdot 3n - m = 12n-m = k.
\end{equation}
Consider any $\sigma$-$H$-subgraph $\pi$ of $G$.
If $\pi(V(H))$ contains a vertex of some $\alpha$-$r$-cycle that is not an attachment vertex,
then $\pi$ is hit by $X$ by Claim~\ref{cl:cycle}.
Otherwise, observe that the color constraints imply that $\pi(S) \subseteq M$.
Consequently, there exists $f: S \to \{1,2,\ldots,s\}$ such that
$\pi(c) = w_{f(c),c}$ for each $c \in S$.
The only vertices of $G$ that are both adjacent to $M$ and have colors from $A$
belong to the copies $D_{x,C,l}$. Similarly, the only vertices
of $G$ that are both adjacent to $M$ and have colors from $B$ belong
to the copies $D_{C,l}$.
As $N_H(A) = N_H(B) = S$ and both $H[A]$ and $H[B]$ are connected, we infer that there exists a clause $C$ and literal $l \in C$
corresponding to a variable $x$,
such that $f = f_{C,l}$, and $\pi$ maps $A$ to $D_{x,C,l}$ and $B$ to $D_{C,l}$.
However, observe that
if $l$ is satisfied by $\phi$, then $X$ contains the vertex $a_{x,C,l} \in D_{x,C,l}$,
and otherwise $l$ does not satisfy $C$ and $X$ contains $b_{C,l} \in D_{C,l}$.
Consequently, $\pi$ is hit by $X$.
As the choice of $\pi$ was arbitrary, we infer that $X$ hits all $\sigma$-$H$-subgraphs
of $G$.
In the other direction, let $X$ be a set of at most $k$ vertices of $G$
that hits all $\sigma$-$H$-subgraphs.
As $G$ contains at least $k+1$ disjoint copies of $H \setminus D$, we infer that
$X \cap V(G')$ hits all $\sigma'$-$H[D]$-subgraphs of $G'$.
By Claim~\ref{cl:cycle}, $X$ contains at least $6$ vertices of each $a$-$4$-cycle introduced
for every variable $x$, and at least $2r_C-1$ vertices of each $b$-$r_C$-cycle
introduced for every clause $C$ (because $2r_C-1=r_C+\lceil r_C/2\rceil$ for $2\leq r_C\leq 3$). However, as these gadgets
are vertex disjoint, by similar calculations as in~\eqref{eq:lb-col} we infer that
these numbers are tight: $X$ contains \emph{exactly} $6$ vertices in each $a$-$4$-cycle,
\emph{exactly} $2r_C-1$ vertices in each $b$-$r_C$-cycle
and no more vertices of $G$.
In particular, for each variable $x$, $X$ contains either all vertices
$a_{x,C,x}$ for clauses $C$ where $x$ appears positively,
or all vertices $a_{x,C,\neg x}$ for clauses $C$ where $x$ appears negatively.
Define an assignment $\phi$ as follows: for every variable $x$,
we set $\phi(x)$ to true if $X$ contains all vertices $a_{x,C,x}$,
and to false otherwise. We claim that $\phi$ satisfies $\Phi$.
To this end, consider a clause $C$. As $X$ contains only $2r_C-1$ vertices in the $b$-$r_C$-cycle
constructed for $C$, by Claim~\ref{cl:cycle} there exists a literal $l \in C$ such that
$b_{C,l} \notin X$. Let $x$ be the variable of $l$.
Let us construct a $\sigma$-$H$-subgraph $\pi$ of $G$ as follows:
\begin{enumerate}
\item $\pi|_S = f_{C,l}$;
\item $\pi|_{N_H[A]}$ maps $N_H[A]$ to $D_{x,C,l}$;
\item $\pi|_{N_H[B]}$ maps $N_H[B]$ to $D_{C,l}$;
\item for every component $L$ of $H[D] \setminus S$ that is not equal to $A$ nor $B$,
$\pi|_{N_H[L]}$ maps $N_H[L]$ to $D_{L,f_{C,l}|_{N_H(L)}}$;
\item $\pi|_{V(H) \setminus D}$ maps $H \setminus D$ to any of its copies in $G$.
\end{enumerate}
It is straightforward to verify that $\pi$ is a $\sigma$-$H$-subgraph of $G$.
Moreover, $\pi(V(H))$ contains only two vertices of the introduced $\alpha$-$r$-cycles:
$a_{x,C,l}$ and $b_{C,l}$.
Since $X$ cannot contain any vertex outside these $\alpha$-$r$-cycles,
we infer that $a_{x,C,l} \in X$.
Consequently, $\phi$ sets $l$ to true, and thus satisfies $C$.
This finishes the proof of the correctness of the reduction,
and concludes the proof of the theorem.
\end{proof}
\section{Preliminaries}\label{sec:prelims}
\paragraph{Graph notation}
In most cases, we use standard graph notation.
A graph $P_n$ is a path on $n$ vertices, a graph $K_n$ is a complete graph on $n$ vertices, and a graph $K_{a,b}$ is a complete bipartite graph with $a$ vertices on one side, and $b$ vertices on the other side.
A \emph{$t$-boundaried graph} is a graph $G$ with a prescribed (possibly empty) \emph{boundary}
$\ensuremath{\partial} G \subseteq V(G)$ with $|\ensuremath{\partial} G|\leq t$, and an injective function
$\lambda_G: \ensuremath{\partial} G \to \{1,2,\ldots,t\}$. For a vertex $v \in \ensuremath{\partial} G$
the value $\lambda_G(v)$ is called the \emph{label} of $v$.
A \emph{colored graph} is a graph $G$ with a function $\sigma:V(G) \to \mathbb{L}$,
where $\mathbb{L}$ is some finite set of colors.
A graph $G$ is $H$-colored, for some other graph $H$, if $\mathbb{L} = V(H)$.
We also say in this case that $\sigma$ is an $H$-coloring of $G$.
A \emph{homomorphism} from graph $H$ to graph $G$ is a function $\pi : V(H) \to V(G)$
such that $ab \in E(H)$ implies $\pi(a)\pi(b) \in E(G)$.
In the $H$-colored setting, i.e., when $G$ is $H$-colored, we also require that $\sigma(\pi(a)) = a$ for any $a \in V(H)$
(every vertex of $H$ is mapped onto appropriate color).
The notion extends also to $t$-boundaried graphs:
if both $H$ and $G$ are $t$-boundaried, we require that
whenever $a \in \ensuremath{\partial} H$ then $\pi(a) \in \ensuremath{\partial} G$ and $\lambda_G(\pi(a)) = \lambda_H(a)$. Note, however, that we allow a vertex of $V(H) \setminus \ensuremath{\partial} H$ to be mapped onto a vertex of $\ensuremath{\partial} G$.
An \emph{$H$-subgraph of $G$} is any injective homomorphism $\pi: V(H) \to V(G)$.
Recall that in the $t$-boundaried setting, we require that the labels are preserved,
whereas in the colored setting, we require that the homomorphism respects colors.
In the latter case, we call it a \emph{$\sigma$-$H$-subgraph of $G$} for clarity.
We say that a set $X \subseteq V(G)$ \emph{hits} a ($\sigma$-)$H$-subgraph $\pi$
if $X \cap \pi(V(H)) \neq \emptyset$.
The (\textsc{Colorful}) \shitarg{H}{} problem asks for a minimum possible size
of a set that hits all ($\sigma$-)$H$-subgraphs of $G$.
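To make these notions concrete, the following Python sketch (an illustration only; graphs are assumed to be stored as dictionaries mapping each vertex to the set of its neighbours, and the function name is ours) checks whether a given mapping $\pi$ is a $\sigma$-$H$-subgraph of $G$.
\begin{verbatim}
# Checking whether pi : V(H) -> V(G) is a sigma-H-subgraph of G, i.e., an
# injective homomorphism that respects the H-coloring sigma of G.
# Graphs are dictionaries: vertex -> set of neighbours.

def is_colored_H_subgraph(H, G, sigma, pi):
    if set(pi) != set(H):                     # pi is defined on all of V(H)
        return False
    if len(set(pi.values())) != len(pi):      # pi is injective
        return False
    if any(sigma[pi[a]] != a for a in H):     # every vertex lands in its color class
        return False
    return all(pi[b] in G[pi[a]]              # every edge of H maps to an edge of G
               for a in H for b in H[a])

# toy example: H is a triangle, G is a colored copy of it
H = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
G = {'u': {'v', 'w'}, 'v': {'u', 'w'}, 'w': {'u', 'v'}}
sigma = {'u': 1, 'v': 2, 'w': 3}
assert is_colored_H_subgraph(H, G, sigma, {1: 'u', 2: 'v', 3: 'w'})
\end{verbatim}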
\paragraph{(Nice) tree decompositions}
For any nodes $w,w'$ in a rooted tree $\ensuremath{\mathtt{T}}$, we say that $w'$ is a \emph{descendant} of $w$
(denoted $w' \preceq w$) if $w$ lies on the unique path between $w'$ and $\texttt{root}(\ensuremath{\mathtt{T}})$,
the root of $\ensuremath{\mathtt{T}}$.
A \emph{tree decomposition} of a graph $G$ is a pair $(\ensuremath{\mathtt{T}},\beta)$, where
$\ensuremath{\mathtt{T}}$ is a rooted tree, and $\beta : V(\ensuremath{\mathtt{T}}) \to 2^{V(G)}$ is a mapping satisfying:
\begin{itemize}
\item for each vertex $v \in V(G)$, the set $\{w \in V(\ensuremath{\mathtt{T}})\ |\ v \in \beta(w)\}$ induces a nonempty and connected subtree of~$\ensuremath{\mathtt{T}}$,
\item for each edge $e \in E(G)$, there exists $w \in V(\ensuremath{\mathtt{T}})$ such that $e \subseteq \beta(w)$.
\end{itemize}
The width of a decomposition $(\ensuremath{\mathtt{T}},\beta)$ equals $\max_{w \in V(\ensuremath{\mathtt{T}})} |\beta(w)|-1$,
and the treewidth of a graph is the minimum possible width of its decomposition.
For a tree decomposition $(\ensuremath{\mathtt{T}},\beta)$, we define two auxiliary mappings:
\begin{align*}
\gamma(w) &= \bigcup_{w' \preceq w} \beta(w'), & \alpha(w) &= \gamma(w) \setminus \beta(w).
\end{align*}
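For illustration, the following Python sketch (our own bookkeeping, assuming the rooted tree $\ensuremath{\mathtt{T}}$ is given by children lists and $\beta$ by a dictionary of bags) computes $\gamma(w)$ and $\alpha(w)$ for all nodes by a single bottom-up traversal.
\begin{verbatim}
# Computing gamma(w) and alpha(w) for every node of a rooted tree decomposition.
# children: node -> list of child nodes,   beta: node -> set of graph vertices.

def gamma_alpha(children, beta, root):
    gamma, alpha = {}, {}

    def visit(w):                        # recursive post-order traversal
        g = set(beta[w])                 # gamma(w) is the union of the bags of
        for w2 in children[w]:           # all descendants of w (w included)
            g |= visit(w2)
        gamma[w] = g
        alpha[w] = g - set(beta[w])      # alpha(w) = gamma(w) \ beta(w)
        return g

    visit(root)
    return gamma, alpha

# toy decomposition of the path a-b-c with bags {a,b} and {b,c}
children = {0: [1], 1: []}
beta = {0: {'a', 'b'}, 1: {'b', 'c'}}
gamma, alpha = gamma_alpha(children, beta, 0)
assert gamma[0] == {'a', 'b', 'c'} and alpha[0] == {'c'}
\end{verbatim}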
In our dynamic programming algorithms, it is convenient to work on the so-called
\emph{nice tree decompositions}. A tree decomposition $(\ensuremath{\mathtt{T}},\beta)$ is \emph{nice} if
$\beta(\texttt{root}(\ensuremath{\mathtt{T}})) = \emptyset$ and each node $w \in V(\ensuremath{\mathtt{T}})$ is of one of the following four
types:
\begin{description}
\item[leaf node] $w$ is a leaf of $\ensuremath{\mathtt{T}}$ and $\beta(w) = \emptyset$.
\item[introduce node] $w$ has exactly one child $w'$, and there exists a vertex $v \in V(G) \setminus \beta(w')$ such that $\beta(w) = \beta(w') \cup \{v\}$.
\item[forget node] $w$ has exactly one child $w'$, and there exists a vertex $v \in \beta(w')$,
such that $\beta(w) = \beta(w') \setminus \{v\}$.
\item[join node] $w$ has exactly two children $w_1,w_2$ and $\beta(w) = \beta(w_1) = \beta(w_2)$.
\end{description}
It is well known (see e.g.~\cite{nice-decomp})
that any tree decomposition of width $t$ can be transformed, without increasing its width,
into a nice decomposition with $\mathcal{O}(t|V(G)|)$ nodes.
Hence, by an application of the recent $5$-approximation for treewidth~\cite{tw-apx}, in all our algorithmic results we implicitly assume that we are given
a nice tree decomposition of $G$ with $\mathcal{O}(t|V(G)|)$ nodes and of width \emph{less} than $t$,
so that each bag is of size at most $t$.
(This shift of the value of $t$ by one is irrelevant for the complexity bounds,
but makes the notation much cleaner.)
Moreover, we may assume that we also have a function $\Lambda: V(G) \to \{1,2,\ldots,t\}$
such that, for each node $w \in V(\ensuremath{\mathtt{T}})$, $\Lambda|_{\beta(w)}$ is injective.
(Observe that it is straightforward to construct $\Lambda$ in a top-down manner.)
Consequently, we may treat each graph $G[\gamma(w)]$ as a $t$-boundaried
graph with $\ensuremath{\partial} G[\gamma(w)] = \beta(w)$ and labeling $\Lambda|_{\beta(w)}$.
\begin{figure}[t]
\centering
\includegraphics{fig-params}
\caption{Red vertices denote a slice (left), chunk (centre) and separator chunk (right) in a graph $H$ being a path.
The light-red vertices belong to the boundary.}
\label{fig:params}
\vskip -0.4cm
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics{fig-ex-path}
\caption{In a path $P_h$, $h\geq 5$, we have $\mu(P_h) = 1$ (a separator chunk is depicted green)
whereas $\mu^\star(P_h) = 2$ (a corresponding chunk is depicted red).}
\end{subfigure} \quad
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics{fig-ex-star}
\caption{In a double star we have $\mu(H) = \mu^\star(H) = 1$ (a chunk is depicted red).}
\end{subfigure}\\\vspace{2mm}%
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics{fig-ex-2star}
\caption{In a subdivided star $\mu^\star(H)$ equals the degree of the center (a corresponding chunk is depicted red), whereas
$\mu(H) = 1$ as in all trees on at least three vertices.}
\end{subfigure}\quad
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics{fig-ex-clique}
\caption{In a clique without one edge, $\mu(H) = \mu^\star(H) = |V(H)|-2$ (a corresponding separator chunk is depicted red).}
\end{subfigure}\\\vspace{2mm}%
\begin{subfigure}{.4\textwidth}
\centering
\includegraphics{fig-ex-biclique}
\caption{In a biclique $K_{a,b}$, $a,b \ge 2$, we have $\mu(H) = \mu^\star(H) = \max(a,b)$ (a corresponding separator chunk is depicted red).}
\end{subfigure}
\caption{Examples of graphs with the values of $\mu(H)$ and $\mu^\star(H)$.
In each example, the vertices with lighter color belong to the boundary of a corresponding chunk.
\label{fig:examples}}
\end{figure}
\paragraph{Important graph invariants, chunks, and slices}
For two vertices $a,b \in V(H)$, a set $S \subseteq V(H) \setminus \{a,b\}$
is an \emph{$ab$-separator} if $a$ and $b$ are not in the
same connected component of $H \setminus S$.
The set $S$ is additionally a \emph{minimal $ab$-separator}
if no proper subset of $S$ is an $ab$-separator.
A set $S$ is a \emph{minimal separator} if it is a minimal $ab$-separator
for some $a,b \in V(H)$.
For a graph $H$, by $\mu(H)$ we denote the maximum size
of a minimal separator in $H$.
For an induced subgraph $H' = H[D]$, $D \subseteq V(H)$,
we define the boundary $\ensuremath{\partial} H' = N_H(V(H) \setminus D)$ and the interior $\mathrm{int} H' = D \setminus \ensuremath{\partial} H[D]$; thus $V(H')=\ensuremath{\partial} H'\uplus \mathrm{int} H'$.
Observe that $N_H(\mathrm{int} H') \subseteq \ensuremath{\partial} H'$; the inclusion can be proper, as it is possible that a vertex of $\ensuremath{\partial} H'$ has no neighbor in $\mathrm{int} H'$.
An induced subgraph $H'$ of $H$ is a \emph{slice} if $N_H(\mathrm{int} H') = \ensuremath{\partial} H'$,
and a \emph{chunk} if additionally $H[\mathrm{int} H']$ is connected.
For a set $A \subseteq V(H)$, we use $\mathbf{p}[A]$ ($\mathbf{c}[A]$) to denote the unique slice (chunk) with interior $A$ (if it exists).
The intuition behind this definition is that, when we consider some bag $\beta(w)$
in a tree decomposition, a slice is a part of $H$ that may already be present in $G[\gamma(w)]$
and we want to keep track of it.
If a slice (chunk) $\mathbf{p}$ is additionally equipped with an injective labeling $\lambda_\mathbf{p} : \ensuremath{\partial} \mathbf{p} \to \{1,2,\ldots,t\}$,
then we call the resulting $t$-boundaried graph a \emph{$t$-slice} (\emph{$t$-chunk}, respectively).
By $\mu^\star(H)$ we denote the maximum size of $\ensuremath{\partial} \mathbf{c}$, where $\mathbf{c}$ iterates over all chunks of $H$.
We remark here that both $\mu(H)$ and $\mu^\star(H)$
are positive only for graphs $H$ that contain at least one connected
component that is not a clique, as otherwise there are no chunks with nonempty boundary nor minimal separators in $H$.
Observe that if $S$ is a minimal $ab$-separator in $H$,
and $A$ is the connected component of $H \setminus S$ that
contains $a$, then $N_H(A) = S$ and $\mathbf{c}[A]$ is a
chunk in $H$ with boundary $S$. Consequently,
$\mu(H) \leq \mu^\star(H)$ for any graph $H$.
A chunk $\mathbf{c}$ for which $\ensuremath{\partial} \mathbf{c}$ is a minimal separator in $H$
is henceforth called a \emph{separator chunk}.
Equivalently, a chunk $\mathbf{c}$ is a separator chunk if and only if there exists
a connected component $B$ of $H \setminus \mathbf{c}$ such that $N_H(B) = \ensuremath{\partial} \mathbf{c}$.
Observe also that $\mu(H)$ equals the maximum boundary size over all separator chunks in $H$.
See also Figures~\ref{fig:params} and~\ref{fig:examples} for an illustration.
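For small patterns $H$, both invariants can be evaluated directly from the definitions. The Python sketch below (an illustration only, with running time exponential in $|V(H)|$; graphs are dictionaries mapping vertices to neighbour sets) computes $\mu(H)$ using the characterization that $S$ is a minimal separator if and only if $H \setminus S$ has at least two connected components whose neighbourhood is exactly $S$, and computes $\mu^\star(H)$ by iterating over connected sets $A$ for which $N_H[A]$ is not a whole connected component.
\begin{verbatim}
# Brute-force evaluation of mu(H) and mu*(H) for a small pattern H.
# H is a dictionary: vertex -> set of neighbours (an assumed representation).
from itertools import combinations

def nbh(H, A):
    """Open neighbourhood N_H(A) of a vertex set A."""
    return {v for a in A for v in H[a]} - set(A)

def components(H, A):
    """Connected components of H[A]."""
    A, comps = set(A), []
    while A:
        stack, comp = [next(iter(A))], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(H[v] & A)
        comps.append(comp)
        A -= comp
    return comps

def mu_and_mustar(H):
    V = set(H)
    mu = mustar = 0
    # mu(H): S is a minimal separator iff H - S has at least two
    # connected components C with N_H(C) = S.
    for r in range(len(V) + 1):
        for S in map(set, combinations(V, r)):
            full = [C for C in components(H, V - S) if nbh(H, C) == S]
            if len(full) >= 2:
                mu = max(mu, len(S))
    # mu*(H): maximum |N_H(A)| over connected A such that N_H[A]
    # is not a whole connected component of H.
    for r in range(1, len(V) + 1):
        for A in map(set, combinations(V, r)):
            if len(components(H, A)) == 1 and nbh(H, A | nbh(H, A)):
                mustar = max(mustar, len(nbh(H, A)))
    return mu, mustar

# a path on five vertices: mu(P_5) = 1 and mu*(P_5) = 2
P5 = {i: {i - 1, i + 1} & {0, 1, 2, 3, 4} for i in range(5)}
assert mu_and_mustar(P5) == (1, 2)
\end{verbatim}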
\paragraph{Exponential Time Hypothesis and SAT instances}
Our lower bounds are based on the \emph{Exponential Time Hypothesis} (ETH)
of Impagliazzo and Paturi~\cite{IP01}.
We do not need here its formal definition, but instead we rely on the following corollary
of the celebrated Sparsification Lemma~\cite{IPZ01}.
\begin{theorem}[\cite{IPZ01}]\label{thm:spars}
Unless ETH fails, there does not exist an algorithm that can resolve satisfiability of a
$n$-variable $m$-clause $3$-CNF formula in time $2^{o(n+m)}$.
\end{theorem}
In our reductions we start from a slightly preprocessed $3$-SAT instances.
We say that a $3$-CNF formula $\Phi$ is \emph{clean} if
each variable of $\Phi$ appears exactly three times, at least once
positively and at least once negatively, and each clause of $\Phi$ contains
two or three literals and does not contain twice the same variable.
\begin{lemma}\label{lem:preprocess}
Given any $3$-CNF formula $\Phi$, one can in polynomial time compute
an equivalent clean formula $\Phi'$ of size linearly bounded in the size of $\Phi$, such that $\Phi'$ is satisfiable if and only if $\Phi$ is.
\end{lemma}
\begin{proof}
First, preprocess $\Phi$ as follows:
\begin{enumerate}
\item Simplify all clauses that contain repeated variables: delete all clauses
that contain both a variable and its negation, and remove duplicate literals
from the remaining clauses.
\item As long as there exists a variable $x$ that appears only positively or only
negatively, fix the evaluation of $x$ that satisfies all clauses containing $x$
and simplify the formula.
\item As long as there exists a clause $C$ with only one literal, fix the evaluation
of this literal that satisfies $C$ and simplify the formula.
\item If some clause becomes empty in the process, return a dummy unsatisfiable clean formula.
\end{enumerate}
Observe that after this preprocessing the size of $\Phi$ has not increased,
while each variable now appears at least twice.
Second, replace every variable $x$ with a cycle of implications
$x_1 \Rightarrow x_2 \Rightarrow \ldots \Rightarrow x_{s(x)} \Rightarrow x_1$, where
$s(x)$ is the number of appearances of $x$ in $\Phi$.
More formally, for every variable $x$:
\begin{enumerate}
\item introduce $s(x)$ new variables $x_1, x_2, \ldots, x_{s(x)}$, and replace
each occurrence of $x$ in $\Phi$ with a distinct variable $x_i$; and
\item introduce $s(x)$ new clauses $x_i \Rightarrow x_{i+1}$ (i.e., $\neg x_i \vee x_{i+1}$)
for $i=1,2,\ldots,s(x)$, where $x_{s(x)+1} = x_1$.
\end{enumerate}
Observe that, after this replacement, each variable $x_i$ appears exactly three times in
the formula $\Phi$: positively in the implication $x_{i-1} \Rightarrow x_i$,
negatively in the implication $x_i \Rightarrow x_{i+1}$ (with the convention $x_0 = x_{s(x)}$
and $x_{s(x)+1} = x_1$), and the third time in the place of one former occurrence of the variable $x$.
Moreover, as after the first step each variable appears at least twice in $\Phi$,
for every former
variable $x$ we have $s(x) \geq 2$ and no new clause contains twice the same variable.
Finally, note that the second step increases the size of the formula only by a constant factor.
The lemma follows.
\end{proof}
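For concreteness, the second step of the above proof can be implemented as in the following Python sketch (an illustration only; clauses are lists of non-zero integers in the usual DIMACS convention, variables are numbered $1,\ldots,n$, and the function name is ours).
\begin{verbatim}
# Second step of the proof: replace every variable x, occurring s(x) >= 2 times,
# by fresh variables x_1,...,x_{s(x)} tied together by a cycle of implications.
# Clauses are lists of non-zero integers; literal v > 0 is "x_v", -v is "not x_v".

def equalize_occurrences(clauses, num_vars):
    occurrences = {}            # variable -> list of (clause index, literal index)
    for ci, clause in enumerate(clauses):
        for li, lit in enumerate(clause):
            occurrences.setdefault(abs(lit), []).append((ci, li))

    new_clauses = [list(c) for c in clauses]
    next_var = num_vars
    for x, occs in occurrences.items():
        copies = []
        for ci, li in occs:     # one fresh copy x_i per occurrence of x
            next_var += 1
            copies.append(next_var)
            sign = 1 if new_clauses[ci][li] > 0 else -1
            new_clauses[ci][li] = sign * next_var
        s = len(copies)
        for i in range(s):      # the implication x_i => x_{i+1}, cyclically
            new_clauses.append([-copies[i], copies[(i + 1) % s]])
    return new_clauses, next_var

# e.g. equalize_occurrences([[1, -2], [-1, 2], [1, 2]], 2) replaces each of the
# two variables (each occurring three times) by a cycle of three implications
\end{verbatim}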
\section{Discussion on special cases of \shitarg{H}{}}\label{sec:std-discussion}
As announced in the introduction, we now discuss a few special cases
of \shitarg{H}{}.
\subsection{Hitting a path}
First, let us consider $H$ being a path, $H = P_h$ for some $h \geq 3$.
Note that $\mu(P_h) = 1$, while $\mu^\star(P_h) = 2$ for $h\geq 5$.
Observe that in the dynamic programming algorithm of the previous section
we have that $G[\gamma(w) \setminus (X \cup X_w)]$ does not contain an $H$-subgraph
and, hence, the witness graph obtained through Lemma~\ref{lem:witness}
does not contain an $H$-subgraph as well.
However, graphs excluding $P_h$ as a subgraph have treedepth (and hence also treewidth) bounded by $h$
(since any of their depth-first search trees has depth bounded by $h$).
Using this insight, we can derive the following
improvement of Lemma~\ref{lem:witness}, that
improves the running time of Theorem~\ref{thm:std:algo}
to $2^{\mathcal{O}(t \log t)} |V(G)|$ for $H$ being a path.
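Before stating the lemma, let us make the observation about depth-first search explicit: consecutive vertices on the stack of a depth-first search are adjacent, so a DFS tree of depth at least $h$ immediately exhibits a $P_h$-subgraph. The following Python sketch (an illustration only; graphs are dictionaries mapping vertices to neighbour sets, and the function name is ours) returns such a path whenever the DFS tree it builds has depth at least $h$.
\begin{verbatim}
# If some depth-first search tree of G has depth at least h, then the vertices
# currently on the DFS stack form a path on h vertices, i.e., a P_h-subgraph.
# Graphs are dictionaries: vertex -> set of neighbours.

def dfs_path_witness(G, h):
    """Return a path on h vertices of G found along a DFS tree, or None."""
    visited, stack = set(), []    # stack = root-to-current-vertex path of the DFS tree

    def dfs(v):                   # recursion depth never exceeds h
        visited.add(v)
        stack.append(v)
        if len(stack) >= h:       # consecutive stack entries are adjacent in G
            return list(stack[-h:])
        for w in G[v]:
            if w not in visited:
                witness = dfs(w)
                if witness is not None:
                    return witness
        stack.pop()
        return None

    for v in G:
        if v not in visited:
            witness = dfs(v)
            if witness is not None:
                return witness
    return None
\end{verbatim}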
\begin{lemma}\label{lem:witness-path}
Assume $H$ is a path.
Then, for any $t$-boundaried graph $(G,\lambda)$ that does not contain an $H$-subgraph,
there exists a witness graph as in Lemma~\ref{lem:witness} with $\mathcal{O}(t)$ vertices and edges.
\end{lemma}
\begin{proof}
In this proof, by \emph{witness graph} we mean any graph $({\widehat{G}},\lambda)$
that satisfies the requirements of Lemma~\ref{lem:witness},
for fixed input $(G,\lambda)$ and a graph $H$ being a path.
A witness graph $({\widehat{G}},\lambda)$ is \emph{minimal} if, for every $v \in V({\widehat{G}}) \setminus \ensuremath{\partial} {\widehat{G}}$
the graph $({\widehat{G}} \setminus \{v\},\lambda)$ is not a witness graph.
We claim that, in the case $H = P_h$, every minimal witness graph ${\widehat{G}}$ has only $\mathcal{O}(t)$
vertices and edges, assuming $G$ does not contain any $H$-subgraph.
Clearly, this claim will prove Lemma~\ref{lem:witness-path}.
Fix a minimal witness graph $({\widehat{G}},\lambda)$.
Since $H=P_h$ is connected, observe the following: any connected component $C$ of ${\widehat{G}}$ needs to contain
at least one vertex of $\ensuremath{\partial} {\widehat{G}}$, as otherwise $({\widehat{G}} \setminus C,\lambda)$
is a witness graph as well, a contradiction.
For each connected component $C$ of ${\widehat{G}}$, we fix some depth-first search
spanning tree $T_C$ of ${\widehat{G}}[C]$, rooted in some vertex $r_C \in \ensuremath{\partial} {\widehat{G}} \cap C$.
Since $G$ (and thus ${\widehat{G}}$) does not contain an $H$-subgraph,
the depth of $T_C$ is less than $h = \mathcal{O}(1)$. Since $T_C$ is a depth-first search tree, all the edges of $C$ connect vertices that are in ancestor-descendant relation in $T_C$; in other words, $C$ is a subgraph of the ancestor-descendant closure of $T_C$.
Consider any $v \in C$.
Let $v_1,v_2,\ldots,v_s$ be the children of $v$ in the tree $T_C$
and let $T_i$ be the subtree of $T_C$ rooted at $v_i$.
Without loss of generality, assume that there exists $0 \leq r \leq s$ such that
a tree $T_i$ contains a vertex of $\ensuremath{\partial} {\widehat{G}}$ if and only if $i > r$.
We claim that $r \leq h^4 = \mathcal{O}(1)$.
Let us first verify that this claim proves that $|C| = \mathcal{O}(|\ensuremath{\partial} {\widehat{G}} \cap C|)$, and hence summing over the components $C$ will conclude the proof of Lemma~\ref{lem:witness-path}.
Since $T_C$ has depth less than $h$, and there are $|\ensuremath{\partial} {\widehat{G}} \cap C|$ vertices in $C$ that belong to $\ensuremath{\partial} {\widehat{G}}$, we infer that in $T_C$ there are at most $h|\ensuremath{\partial} {\widehat{G}} \cap C|$ vertices $u$ such that the subtree rooted at $u$ contains some vertex of $\ensuremath{\partial} {\widehat{G}}$. However, if we consider any vertex $u$ such that the subtree rooted at $u$ does not contain any vertex of $\ensuremath{\partial} {\widehat{G}}$, then this subtree must be of size $\mathcal{O}(h^{4h})$: its depth is less than $h$, and every vertex has at most $h^4$ children. Therefore, the tree $T_C$ contains at most $h|\ensuremath{\partial} {\widehat{G}} \cap C|$ vertices $u$ whose subtrees contain vertices of $\ensuremath{\partial} {\widehat{G}}$, and each of these vertices has at most $h^4$ subtrees rooted at its children that are free from $\ensuremath{\partial} {\widehat{G}}$, and thus have size $\mathcal{O}(h^{4h})$. We infer that $|V(C)|=|V(T_C)|\leq h|\ensuremath{\partial} {\widehat{G}} \cap C|\cdot (1+h^4\cdot \mathcal{O}(h^{4h}))=\mathcal{O}(|\ensuremath{\partial} {\widehat{G}} \cap C|)$, because $h$ is considered a constant.
In order to prove that $r\leq h^4$, let us mark some of the trees $T_i$, $1 \leq i \leq r$.
Let $r_C = w_0, w_1, \ldots, w_p = v$ be the path between $r_C$ and $v$ in the tree $T_C$; note that we have $p<h$.
For any $0 \leq a < b \leq p$, and for any $3 \leq l < h$,
mark any $h$ trees $T_i$, $1 \leq i \leq r$, such that ${\widehat{G}}[\{w_a,w_b\} \cup V(T_i)]$
contains a path between $w_a$ and $w_b$ with exactly $l$ vertices.
(If there are fewer than $h$ such trees, we mark all of them.)
For any $0 \leq a \leq p$, and for any $2 \leq l < h$,
mark any $h$ trees $T_i$, $1 \leq i \leq r$, such that ${\widehat{G}}[\{w_a\} \cup V(T_i)]$
contains a path with endpoint $w_a$ and with exactly $l$ vertices.
(Again, if there are fewer than $h$ such trees, we mark all of them.)
We claim that all trees $T_i$, $1 \leq i \leq r$ are marked;
as we have marked fewer than $h^4$ trees, this will prove that $r \leq h^4$ and thus conclude the proof of the lemma.
By contradiction, without loss of generality assume that $T_1$ is not marked.
We claim that $({\widehat{G}} \setminus T_1,\lambda)$ is a witness graph as well.
Consider any $Y \subseteq V(G)$ and $t$-slice $\mathbf{p}$ such that $|Y| + |V(\mathbf{p})| \leq |V(H)|$ and assume there
exists a $\mathbf{p}$-subgraph in $(G \setminus Y,\lambda)$.
By the definition of a witness graph, there exists a $\mathbf{p}$-subgraph
in $({\widehat{G}} \setminus Y,\lambda)$.
Let $\pi$ be such a subgraph that contains the minimum possible number of vertices of $T_1$
in its image. We claim that there are none, and $\pi$ is a
$\mathbf{p}$-subgraph in $({\widehat{G}}\setminus (Y \cup T_1),\lambda)$ as well.
Assume the contrary. Since $T_1$ does not contain any vertex of $\ensuremath{\partial} {\widehat{G}}$,
there exists a subpath $H'$ of $\mathbf{p}$, with $|V(H')| = l$ for some $l<h$,
such that either (a) both endpoints of $H'$ are mapped by $\pi$
to some $w_a,w_b$, $0 \leq a < b \leq p$, and the internal vertices
of $H'$ are mapped to $T_1$; or (b) one endpoint of $H'$
is mapped by $\pi$ to some $w_a$, $0 \leq a \leq p$, and all other vertices
of $H'$ are mapped to $T_1$. Since $T_1$ was not marked, we infer that in both cases there exist at least $h$ marked trees $T_i$ that also contain such a subpath $H'$.
Since the union of $Y$ and the image of $\pi$ has cardinality at most $h$, we infer that there exists a tree $T_i$ that was marked for the same
choice of $a,b$ and $l$ in case (a), or $a$ and $l$ in case (b),
and, furthermore, no vertex of $T_i$ is contained in $Y$ or in the image of $\pi$.
We modify $\pi$ by remapping all vertices of $V(H') \cap \pi^{-1}(T_1)$
to the corresponding vertices of $T_i$.
In this manner we obtain a $\mathbf{p}$-subgraph of $({\widehat{G}} \setminus Y,\lambda)$
with strictly fewer vertices of $T_1$ in its image, a contradiction
to the choice of $\pi$.
Hence, $\pi$ does not use any vertex of $T_1$, and is a
$\mathbf{p}$-subgraph of $({\widehat{G}} \setminus (Y \cup T_1), \lambda)$.
This concludes the proof.
\end{proof}
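For concreteness, the marking procedure used in the proof above can be sketched as follows (a brute-force illustration only: the graph, the vertex sets of the trees $T_1,\ldots,T_r$ and the path $w_0,\ldots,w_p$ are assumed to be given as plain Python adjacency structures, and the exact-length path test is implemented by naive enumeration).
\begin{verbatim}
def exact_path_exists(adj, allowed, u, v, l):
    # Is there a simple path from u to v on exactly l vertices,
    # all of them taken from `allowed`?  (Naive DFS enumeration.)
    def dfs(cur, visited):
        if cur == v:
            return len(visited) == l
        if len(visited) >= l:
            return False
        return any(dfs(x, visited | {x})
                   for x in adj[cur] if x in allowed and x not in visited)
    return dfs(u, {u})

def mark_trees(adj, trees, w, h):
    # adj: adjacency dict of G; trees: {i: vertex set of T_i}; w: [w_0..w_p]
    p, marked = len(w) - 1, set()
    for a in range(p + 1):
        for b in range(a + 1, p + 1):
            for l in range(3, h):        # case (a): w_a -- T_i -- w_b paths
                good = [i for i in trees if exact_path_exists(
                    adj, {w[a], w[b]} | trees[i], w[a], w[b], l)]
                marked.update(good[:h])  # mark any h such trees
        for l in range(2, h):            # case (b): paths starting at w_a
            good = [i for i in trees if any(exact_path_exists(
                adj, {w[a]} | trees[i], w[a], v, l) for v in trees[i])]
            marked.update(good[:h])
    return marked                        # fewer than h^4 trees get marked
\end{verbatim}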
\begin{corollary}
For every positive integer $h$, the \shitarg{P_h} problem can be solved in time
$2^{\mathcal{O}(t \log t)} |V(G)|$ on a graph $G$ of treewidth $t$.
\end{corollary}
\subsection{Hitting pumpkins}
Now let us consider $H = K_{2,h}$ for some $h \geq 2$ (such a graph is sometimes called a \emph{pumpkin} in the literature).
Observe that $\mu^\star(K_{2,h}) = \mu(K_{2,h}) = h$.
On the other hand, we note the following.
\begin{lemma}\label{lem:witness-K2h}
Assume $H = K_{2,h}$ for some $h \geq 2$.
If the witness graph given by Lemma~\ref{lem:witness} does not admit an $H$-subgraph, then it
has $\mathcal{O}(t^2)$ vertices and edges.
\end{lemma}
\begin{proof}
Let $V(H) = \{a_1,a_2,b_1,b_2,\ldots,b_h\}$ where $A := \{a_1,a_2\}$
and $B:= \{b_1,b_2,\ldots,b_h\}$ are bipartition classes of $H$.
Note that there are only two types of proper chunks in $H$:
$N_H[a_i]$, $i=1,2$ and $N_H[b_j]$, $1 \leq j \leq h$.
Hence, one can easily verify that in the construction of the witness graph ${\widehat{G}}$ of Lemma~\ref{lem:witness}
every vertex $v \in {\widehat{G}} \setminus \ensuremath{\partial} {\widehat{G}}$ has at least two neighbours in $\ensuremath{\partial} {\widehat{G}}$,
and ${\widehat{G}} \setminus \ensuremath{\partial} {\widehat{G}}$ is edgeless.
Then we have $|N_{\widehat{G}}(v) \cap \ensuremath{\partial} {\widehat{G}}|\leq 2\binom{|N_{\widehat{G}}(v) \cap \ensuremath{\partial} {\widehat{G}}|}{2}$ for each $v \in V({\widehat{G}})\setminus \ensuremath{\partial} {\widehat{G}}$.
However, since the constructed witness graph ${\widehat{G}}$ does not admit an $H$-subgraph,
every two vertices $v_1,v_2 \in \ensuremath{\partial} {\widehat{G}}$ have fewer
than $h$ common neighbours in ${\widehat{G}}$, as otherwise there is an $H$-subgraph
in ${\widehat{G}}$ on vertices $v_1$, $v_2$ and $h$ vertices of $N_{\widehat{G}}(v_1) \cap N_{\widehat{G}}(v_2)$.
Hence
\begin{equation}\label{eq:K2h}
\sum_{v \in V({\widehat{G}})\setminus \ensuremath{\partial} {\widehat{G}}} \binom{|N_{\widehat{G}}(v) \cap \ensuremath{\partial} {\widehat{G}}|}{2} \leq (h-1)\binom{|\ensuremath{\partial} {\widehat{G}}|}{2} \leq (h-1)\binom{t}{2}.
\end{equation}
Consequently, there are at most $2(h-1)\binom{t}{2}$ edges of ${\widehat{G}}$
with exactly one endpoint in $\ensuremath{\partial} {\widehat{G}}$, whereas there are at most $\binom{t}{2}$ edges in ${\widehat{G}}[\ensuremath{\partial} {\widehat{G}}]$. The lemma follows.
\end{proof}
Lemma~\ref{lem:witness-K2h}, together with the dynamic programming of Section~\ref{ss:std-algo}, implies that \shitarg{K_{2,h}} can be solved in time $2^{\mathcal{O}(t^2\log t)}|V(G)|$, in spite of the fact that $\mu^\star(K_{2,h}) = \mu(K_{2,h}) = h$.
\begin{corollary}
For every positive integer $h$, the \shitarg{K_{2,h}} problem can be solved in time
$2^{\mathcal{O}(t^2 \log t)} |V(G)|$ on a graph $G$ of treewidth $t$.
\end{corollary}
We now show that a slight modification of $K_{2,h}$ enables us to prove a much higher lower bound. For this, let us consider a graph $H_h$ for $h \geq 2$
defined as $K_{2,h}$ with triangles attached to both degree-$h$ vertices (see Figure~\ref{fig:Hh}).
Note that $\mu(H_h) = \mu^\star(H_h) = h$.
One may view $H_h$ as $K_{2,h}$ with some symmetries broken, so that
the proof of Lemma~\ref{lem:witness-K2h} does not extend to $H_h$.
We observe that the lower bound proof of Theorem~\ref{thm:intro:lb:col}
works, with small modifications, also for the case of \shitarg{H_h}.
As the proof for this special case is slightly simpler than the one of Theorem~\ref{thm:intro:lb:col},
we present it first here to give intuition to the reader.
\begin{theorem}[Theorem~\ref{thm:lb:Hh} restated]\label{thm:lb:Hh2}
Unless ETH fails, for every $h \geq 2$ there does
not exist an algorithm that, given a \shitarg{H_h} instance $G$
and a tree decomposition of $G$ of width $t$, resolves $G$
in time $2^{o(t^h)} |V(G)|^{\mathcal{O}(1)}$.
\end{theorem}
\begin{proof}
We denote the vertices of $H_h$ as in Figure~\ref{fig:Hh}.
\begin{figure}[tb]
\centering
\includegraphics{fig-Hh}
\caption{The graph $H_h$, equal to $K_{2,h}$ with two triangles added to the degree-$h$ vertices.}
\label{fig:Hh}
\end{figure}
We first define the following basic operation for the construction.
By \emph{attaching a copy of $H_h$ at vertices $u$ and $v$} we mean the following:
we introduce a new copy of $H_h$ into the constructed graph, and identify $u$ with the copy
of the vertex $a$, and $v$ with the copy of the vertex $b$.
The intuitive idea behind attaching a copy of $H_h$ at $u$ and $v$ is that it forces
the solution to take $u$ or $v$.
Assume we are given as input a clean $3$-CNF formula $\Phi$
with $n$ variables and $m$ clauses.
We are to construct a \shitarg{H_h} instance $G$ with a budget bound $k$
and a tree decomposition of $G$ of width $\mathcal{O}(n^{1/h})$,
such that $G$ admits a solution of size at most $k$ if and only if $\Phi$
is satisfiable. This construction, together with Lemma~\ref{lem:preprocess} and Theorem~\ref{thm:spars}, proves the statement of the theorem.
Let $s$ be the smallest positive integer such that $s^h \geq 3n$. Observe that
$s = \mathcal{O}(n^{1/h})$. We start our construction by introducing
a set $M$ of $sh$ vertices $w_{j,i}$, $1 \leq j \leq s$, $1 \leq i \leq h$.
The set $M$ is the central part of the constructed graph $G$.
In particular,
in our reduction each connected component of $G \setminus M$ will be of constant size,
yielding immediately the promised tree decomposition of $G$ of width $\mathcal{O}(n^{1/h})$.
To each clause $C$ of $\Phi$, and to each literal $l$ in $C$, assign a function
$f_{C,l}: \{1,2,\ldots,h\}
\to \{1,2,\ldots,s\}$ such that $f_{C,l} \neq f_{C',l'}$ for $(C,l) \neq (C',l')$.
Observe that this is possible due to the assumption $s^h \geq 3n$ and the fact
that $\Phi$ is clean.
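A simple way to realize such an assignment (a sketch only; the encoding of functions as $h$-tuples and the enumeration order are arbitrary choices of this illustration) is to enumerate functions $\{1,\ldots,h\}\to\{1,\ldots,s\}$ as base-$s$ expansions of consecutive integers and hand out one to each (clause, literal) pair.
\begin{verbatim}
def assign_literal_functions(pairs, h, s):
    # pairs: list of (clause, literal) occurrences; at most 3n of them for a
    # clean formula, while there are s**h available functions {1..h}->{1..s}.
    assert s ** h >= len(pairs)
    assignment = {}
    for idx, pair in enumerate(pairs):
        digits, x = [], idx
        for _ in range(h):                # base-s digits of idx give f(1),...,f(h)
            digits.append(1 + x % s)
            x //= s
        assignment[pair] = tuple(digits)  # f_{C,l}(i) = digits[i-1]
    return assignment
\end{verbatim}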
For each variable $x$ of $\Phi$, proceed as follows.
For each clause $C$ that contains $x$ in a literal $l \in \{x,\neg x\}$,
we introduce a new vertex $a_{x,C,l}$ and make it adjacent to all
vertices $w_{f_{C,l}(i),i}$ for $1 \leq i \leq h$.
Let $a_{x,C_1,l}$, $a_{x,C_2,\neg l}$ and $a_{x,C_3,l}$ be the three
vertices introduced;
recall that $x$ appears exactly three times in $\Phi$, twice positively and once
negatively or twice negatively and once positively.
Moreover, we introduce a fourth dummy vertex $a_x$.
Finally, we attach a copy of $H_h$ to the following four pairs of vertices:
$(a_{x,C_1,l}, a_{x,C_2,\neg l})$,
$(a_{x,C_3,l}, a_{x,C_2,\neg l})$,
$(a_{x,C_1,l}, a_x)$, and
$(a_{x,C_3,l}, a_x)$.
Let $D_x$ be the set of vertices constructed for variable $x$.
Observe that, for every variable $x$,
we have constructed four $H_h$-subgraphs, and there are exactly two ways
to hit them with only two vertices: either we take
$\{a_{x,C_1,l}, a_{x,C_3,l}\}$ into the solution or
$\{a_{x,C_2,\neg l}, a_x\}$ into the solution.
Moreover, any solution to \shitarg{H_h} on the constructed graph needs
to take at least two vertices of $D_x$.
For each clause $C$ of $\Phi$, proceed as follows.
For each literal $l$ in $C$, introduce a new vertex $b_{C,l}$
and make it adjacent to
all vertices $w_{f_{C,l}(i),i}$ for $1 \leq i \leq h$.
For each two different literals $l_1,l_2$ in $C$, attach
a copy of $H_h$ at vertices $b_{C,l_1}$ and $b_{C,l_2}$.
Let $D_C$ be the set of vertices constructed for the clause $C$.
Observe that, for every clause $C$ that contains $r_C$ literals
(recall $2 \leq r_C \leq 3$), we have constructed $\binom{r_C}{2}$
$H_h$-subgraphs, that at least $r_C-1$ vertices of the solution are needed
to hit them, and that, without loss of generality, we may assume that
any solution that contains only $r_C-1$ vertices of $D_C$
actually contains all but one of the vertices $b_{C,l}$.
We set the budget
$$k = 2n + \sum_{C \in \Phi} (r_C-1) = 2n + 3n - m = 5n-m,$$
where we use that $\sum_{C \in \Phi} r_C = 3n$, since every variable of the clean formula $\Phi$ occurs in exactly three literals.
Observe that this budget is tight: any solution $X$ to \shitarg{H_h}
on $G$ of size at most $k$ needs to contain exactly two vertices in each $D_x$,
exactly $r_C-1$ vertices in each $D_C$ and, consequently,
is of size exactly $k$ and does not contain any more vertices.
It remains to argue about the correctness of this construction.
The crucial observation is that there are only few $H_h$-subgraphs in $G$:
the vertices $a_{x,C,l}$, $a_x$ and $b_{C,l}$ are the only vertices of $G$
that have degree at least $h$ and, at the same time, are contained in some triangle
in $G$. With this observation, a direct check shows that, apart from
the copies of $H_h$ introduced explicitly in the construction,
the only other copies are ones with vertices $a$ and $b$ mapped
to $a_{x,C,l}$ and $b_{C,l}$ for every clause $C$, every literal $l$ in $C$,
and $x$ being the variable of $l$.
In one direction, let $\phi$ be a satisfying assignment of $\Phi$.
Construct a solution $X$ as follows.
For each variable $x$, include into $X$ all vertices $a_{x,C,l}$ for which
$l$ is satisfied in $\phi$. Moreover, include also the vertex $a_x$
if only one vertex $a_{x,C,l}$ has been included in the previous step.
For each clause $C$, pick a literal $l$ that satisfies it,
and include into $X$ all vertices $b_{C,l'}$ for $l' \neq l$.
This concludes the description of the set $X$.
Clearly, $|X| = k$. Moreover, observe that, for each clause $C$
and literal $l$ in $C$, $l \in \{x,\neg x\}$, we have that either
$a_{x,C,l}$ or $b_{C,l}$ belongs to $X$. With the previous insight
into the family of $H_h$-subgraphs of $G$, this implies that $X$
hits all $H_h$-subgraphs of $G$.
In the second direction, let $X$ be such a set of at most $k$ vertices
that hits all $H_h$-subgraphs in $G$.
By the discussion on tightness of the budget, $X$ contains exactly $2$
vertices in each set $D_x$ and exactly $r_C-1$ vertices in each set $D_C$, and no more vertices.
Define an assignment $\phi$ as follows:
for each variable $x$, we set $\phi(x)$ so that $a_{x,C,l} \in X$ if and only
if $l$ is evaluated to true by $\phi$. Note that this is a valid definition
by the construction of $G[D_x]$.
To show that $\phi$ satisfies $\Phi$, consider a clause $C$.
By the budget bounds, there exists a vertex $b_{C,l} \notin X$.
Recall that there exists an $H_h$-subgraph of $G$ with $a$ mapped to $a_{x,C,l}$
and $b$ mapped to $b_{C,l}$, where $x$ is the variable of $l$.
As $X$ hits all $H_h$-subgraphs of $G$, $a_{x,C,l} \in X$.
Hence, $\phi$ sets $l$ to true and hence satisfies $C$.
This concludes the proof.
\end{proof}
\section{Introduction}
Photoionization cross sections are necessary for the computation of
photoionization and recombination rates
for ionization balance in astrophysical plasmas (e.g. Kallman and
Krolik 1991, Shull \& Van Steenberg 1982, Sutherland \& Dopita 1993).
Accurate cross sections have been calculated in the close-coupling
approximation using the R-matrix method,
for most astrophysically important atoms and ions under the Opacity
Project (OP; Seaton et al.\thinspace 1994) and the Iron Project (IP; Hummer et al.\thinspace 1993).
The cross sections incorporate, in an {\it ab initio} manner,
the complex autoionizing resonance structures
that can make important contributions to total photoionization
rates. These data are currently available from the electronic database
of the Opacity Project TOPbase
(Cunto et al.\thinspace 1993; {\it The Opacity Project Team} 1995).
Resonant phenomena have been shown to be of crucial, often dominant,
importance in electron-ion scattering, photoionization, and
recombination processes (see, for example, the references for the OP and
the IP). Most of the related calculations for such
atomic processes have been carried out in the close-coupling (hereafter
CC)
approximation that quantum mechanically couples the open and closed
channels responsible for the continuum, and
the quasi-bound resonant states, respectively.
Photoionization calculations in the CC
approximation consist of an expansion over the states of the
(e + ion) system, with a number of excited states of the residual (photoionized)
ion (also called the ``target"). The CC approximation thereby
includes, in an {\it ab initio} manner, several
infinite Rydberg series of resonances converging on to the
states of the target ion. The resonances are
particularly prominent in the near-threshold region
due to strong electron correlation effects,
and the accuracy of the calculated cross section depends
on the representation of the resonances.
Furthermore, singular resonant features may sometimes dominate the cross
section over an extended energy region. A prime example of such
resonances, in addition to the Rydberg series of resonances and
the broad near-threshold resonances, are the so called
``photoexcitation-of-core" (PEC) resonances that occur at higher energies
corresponding to photoexcitation of dipole transitions in the residual ion
by the incident photon (Yu \& Seaton 1987).
The PECs are very large features that attenuate the background cross
section by up to orders of magnitude, and are present in many
of the cross sections in all ions with
target states coupled by dipole transitions, i.e.,
even and odd parity LS terms. Much of the effort in the
decade-long Opacity Project was devoted to a careful consideration and
delineation of resonances using the R-matrix method which has the
advantage that once the (e + ion) Hamiltonian has been
diagonalized in the R-matrix basis, cross sections may be computed at an
arbitrary number of energies to study resonant phenomena.
The methods and a number of calculations are described in the volume
{\it The Opacity Project} by the Opacity Project Team (1995).
Owing to the complexity in the structure of the cross sections,
thousands of points are normally calculated
to represent the detailed cross sections for each bound state of an ion
or atom. While this is a great advance in terms of
atomic physics and accuracy, the huge amount and the inherent details of the
data do present a serious practical problem for numerical modeling.
An additional difficulty in the use of these cross sections for
photoionization modeling is uncertainty in the precise positions of these resonances.
The OP cross sections were calculated primarily
for the computation of Rosseland and Planck mean opacities in
local thermodynamic
equilibrium (LTE) for stellar envelope models (Seaton et al.\thinspace 1994);
the cross sections at all photon frequencies are integrated over the
Planckian black-body radiation field.
Whereas
the LTE mean opacities are insensitive to small uncertainties in the
precise locations of resonances in photoionization cross sections, the
situation is different for the calculation of photoionization
rates of individual atomic species in non-LTE astrophysical
sources photoionized by radiation fields that include spectral
lines, such as H~II regions or active galactic nuclei (AGN). There may
be spurious coincidences between the strong lines and the narrow resonance
features in the original data. On the other
hand, the physical presence of extensive structures of
resonances in cross sections has a pronounced effect on the total
photoionization rate
that should not be neglected, as exemplified by
recent work on Fe ions (Nahar, Bautista, \& Pradhan 1997a,
Bautista \& Pradhan 1998). Thus, a numerical procedure is needed that
accurately and efficiently reproduces the new CC cross sections
for astrophysical modeling, in particular the extensive OP photoionization
cross section data in TOPbase.
In some previous works analytic fits have been presented for
partial photoionization cross sections of sub-shells (Verner et al.\thinspace 1993,
Verner and Yakovlev 1995), and for the OP
cross sections ``smoothed" over resonances (Verner et al.\thinspace 1996).
However, analytic fits cannot reproduce the well-localized effects of
resonances and groups of resonances.
In their approach, Verner et al.\thinspace
(1996) smoothed over resonances at variable energy
intervals whose widths were adjusted until the resonance
structures disappeared. In some cases, however, very large resonances
could not be smoothed, so they were neglected in the fits. The analytic
fits of Verner et al.\thinspace seem computationally efficient for use in
modeling codes.
However, the smoothing procedure is unphysical and
artificially deletes most of the extensive resonance structures in the
OP cross sections.
In addition, as we show later, neglecting
very large resonances results in errors in the fitted cross sections
and in the resulting photoionization rates.
Although such errors in the photoionization rates are difficult to quantify
in general, due to the frequency dependence of the
irregular radiation field that varies
from object to object and from one point to another within the same object,
we present several quantitative estimates for specific cases.
In this paper we compute resonance-averaged photoionization (RAP)
cross sections from a convolution
with a running Gaussian distribution over energy intervals that subsume
the uncertainties in resonance positions, estimated to be about
1\% from a comparison of the calculated bound state energy levels
with spectroscopic measurements ({\em The Opacity Project Team}, 1995).
This procedure should minimize errors in ionization rates
due to inaccuracies in resonance positions,
while taking into account their contributions and preserving
the overall physical complexities in the structure of the cross sections,
especially in the
important near threshold region. Further, RAP cross sections effectively
simulate some broadening processes, notably thermal (Doppler)
broadening, that result in a natural smearing of the sharp resonance
features. Thus, RAP cross sections assume a qualitatively physical form
even though the quantitative aspects may not be generalized for all
sources.
The differences between photoionization rates calculated
with the present RAP cross sections, and the detailed cross sections, are studied for
a variety of radiation fields.
The relatively low-energy cross sections from the OP and IP are
merged with cross sections from Relativistic Distorted Wave
calculations by Zhang (1997) for high photon energies including
inner-shell ionization from closed sub-shells (not considered in the OP
data), and from the
Hartree-Slater central-field calculations by Reilman \& Manson (1979).
Further, we employ a numerical technique for
representing the photoionization cross sections by a small number of
points, from the photoionization threshold to very high energies.
The tabulated cross sections can be readily coded in computer
modeling programs to enable accurate computation of photoionization rates
for an arbitrary ionizing radiation flux. A Fortran subroutine RESPHOT
is made available to users to facilitate the interface of RAP data
with models.
\section{Resonance-Averaged Photoionization (RAP) Cross Sections}
The uncertainty in the position of any given feature
in the photoionization cross section may be represented by a probabilistic
Gaussian distribution of width $\delta E$ around the position predicted
by the theoretical calculation. There is, in principle, no reason to
expect that the accuracy
should vary with the central energy of a given feature or from one
cross section to another. Thus we can
assume $\delta E/E \equiv \Delta$ to be constant.
Then the averaged photoionization cross section in terms of the detailed
theoretical cross section convolved over
the probabilistic distribution is
\begin{equation}
\sigma_A(E)= C \int_{E_0}^{\infty}\sigma(x)
\,\exp{[-(x-E)^2/2(\delta E)^2 ]} \,\,dx,
\end{equation}
where $\sigma$ and $\sigma_A$ are the detailed and averaged photoionization
cross sections respectively, E$_o$ is the ionization threshold energy,
and $C$ is a normalizing constant.
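In practice the convolution above is evaluated on the tabulated energy grid; the following Python sketch is an illustration only, with the normalizing constant $C$ fixed by normalizing the truncated Gaussian over the tabulated range and with trapezoidal quadrature standing in for the integral (the arrays are assumed to be NumPy arrays).
\begin{verbatim}
import numpy as np

def rap_cross_section(E_grid, sigma, E_eval, delta=0.03):
    # Resonance-averaged cross section: convolve sigma(E) with a Gaussian
    # of width dE = delta*E about each evaluation energy E.  The constant C
    # is fixed here by normalizing the truncated Gaussian on the tabulated
    # range E >= E_0.
    sigma_A = np.empty_like(E_eval, dtype=float)
    for j, E in enumerate(E_eval):
        dE = delta * E
        w = np.exp(-(E_grid - E) ** 2 / (2.0 * dE ** 2))
        sigma_A[j] = np.trapz(w * sigma, E_grid) / np.trapz(w, E_grid)
    return sigma_A
\end{verbatim}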
Fig. 1 compares the detailed and the RAP cross sections
for Fe~II (Nahar \& Pradhan 1994)
and Fe~I (Bautista 1997) for choices of the $\Delta$
= 0.01, 0.03, 0.05, and 0.10. The convolved cross
sections with the Gaussian distribution are smooth even across regions of
intricate resonance structures. In regions free of resonances
the RAP cross sections asymptotically approach the original cross sections
without loss of accuracy. It is observed that the choice of the width
of the Gaussian distribution has an appreciable effect on the resulting
RAPs. As the width of the distribution decreases the
RAPs show more structure resembling the detailed resonant structure.
On the other hand, if the chosen width of the distribution
is too large the overall structure of the cross sections in the resonant
region is smeared over and the background may be unduly altered.
This is seen in the case of Fe~II (Fig.
1(a))
and Fe~I (Fig.~1(b)) in the 0.8 to 0.9 Ry region that lies between
large resonance structures.
Based on such numerical tests, we adopt a standard width for all the cross
sections of $\Delta=0.03$ (solid lines in Fig. 1(a) and (b)).
This choice is sufficiently
conservative with respect to the uncertainties in the theoretical cross
sections and is able to provide RAP cross sections that resemble reasonably well
the overall structure of the cross sections.
This choice of $\Delta$ also yields RAPs sufficiently smooth to be
represented by a small number of points that lead to accurate
photoionization rates (see Sections 4 and 5).
\section{The Data}
The OP ground state photoionization cross sections for
atoms and ions of He through Si (Z = 2 -- 14), and S, Ar, Ca,
and Fe are obtained
from TOPbase (Cunto et al.\thinspace 1993). For the lowest ionization stages of
Iron, Fe~I -- V,
radiative data of much higher accuracy than those from the OP have
recently been computed under the IP (Table 1 of Bautista \& Pradhan
1998) and have been included in the
present work (detailed references are given in the Appendix).
The R-matrix calculations performed under the OP and IP were carried out
for photon energies up to just above the highest target state in
the CC expansion for the residual core
ion. The first version of TOPbase included OP cross sections with
power law tails extrapolated to energies higher than in the
R-matrix calculations.
High energy cross sections, however, have now been calculated for
all the ground and excited states of
atoms and ions with Z$\le 12$ using a fully relativistic
distorted-wave method (Zhang 1997) that includes the inner-shell `edges'
not considered in the original OP data. The low energy R-matrix
cross sections smoothly match the high energy distorted-wave
tails, which yields a consistent set of merged (OP + RDW) results that
should be accurate for all energies of practical interest. For the lowest
ionization stages of S, Ar, Ca, and Fe that are not included in
Zhang's calculations, we have adopted central-field
high energy cross sections by Reilman \& Manson (1979).
\section{Numerical Representation of the RAP Cross Sections}
\label{RAP}
Having calculated the RAP cross sections, we represent these with
a minimum number of points selected so
that the cross section can be recovered by linear interpolation
to an accuracy better than 3\%.
We obtain a representation for the cross sections,
from the ionization threshold to very high photon energies including
all of the inner shell ionization edges, with approximately 30 points
per cross section. Examples of
the RAP cross sections, and differences with the analytic fits (Verner
et al.\thinspace 1996), are
illustrated in Fig. 2 for S~I and Fe~I. It is clear that for these two
important elements these differences are substantial and would correspondingly
affect the photoionization rates. In particular it may be noted that
the effect of resonances varies significantly with energy, representative
of the complex atomic effects such as the Rydberg series limits that can
not be reproduced by any analytic procedure. For example, the resonance-
averaged structure in the RAP cross section for
S~I in the near-threshold region is a rise and a dip, corresponding to
actual resonances. The resonances make an even greater contribution for
Fe~I and the analytic fit is likely to yield a serious underestimation
of the photoionization of neutral iron.
Fig. 3 presents RAP cross sections for several other elements, with
some singularly large features over wide energy ranges (e.g. Fe~IV and
Al~I). Keeping in mind
the 1\% or so uncertainty in the resonance positions, the RAPs represent
these physical features (an extensive discussion of the resonant feature
in Fe~IV is given by Bautista \& Pradhan 1997).
Fig. 3 also shows the discrete sets of points that can be interpolated
for a detailed and accurate representation of the averaged cross
sections. Such a representation is not possible with analytic fits.
In addition to a more accurate representation of the atomic physics
the present RAP cross sections should also be computationally preferable to
analytic fits
since a single set of points can reproduce the effective cross section,
while analytic fits require several formulae and parameters
for all of the inner-shell contributions.
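As an illustration of the point-reduction step described at the beginning of this section, a greedy selection of grid points (a sketch only; the actual selection used to produce the tabulated data may differ in detail) can proceed by repeatedly adding the point where the relative error of linear interpolation is largest, until the prescribed tolerance is met.
\begin{verbatim}
import numpy as np

def reduce_points(E, sigma, tol=0.03):
    # E, sigma: NumPy arrays with E increasing.  Keep a small subset of
    # points such that linear interpolation through them recovers sigma
    # to within the relative tolerance `tol` (here 3%).
    keep = [0, len(E) - 1]
    while True:
        keep.sort()
        approx = np.interp(E, E[keep], sigma[keep])
        rel_err = np.abs(approx - sigma) / np.maximum(np.abs(sigma), 1e-30)
        worst = int(np.argmax(rel_err))
        if rel_err[worst] <= tol:
            return E[keep], sigma[keep]
        keep.append(worst)
\end{verbatim}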
\section{Accuracy of RAP Cross Sections}
A careful study of the accuracy of the RAP cross sections and their reduced representation
is important if they are to be used for practical applications.
Any reasonable transformation or smoothing procedure of the photoionization
cross sections should conserve the total area under the cross section function integrated over a
certain energy interval. This, however, gives no indication about the
actual accuracy of the transformed function.
A good indication of the uncertainty in the cross sections may be obtained
from the photoionization rates that result from the product
of the cross section and
a radiation field integrated over the photon energy from the
ionization threshold to infinity.
The radiation fields in practice are complicated functions of frequency
and may even vary from one point to another
within the same object, as well as from object to object. Then, different
radiation field functions would sample preferentially
distinct energy intervals in the
cross sections and may be used as an accuracy indicator.
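Schematically, each such test amounts to the quadrature below (a sketch only; the precise definition of the ionizing flux $\phi(E)$, its units, and any $1/h\nu$ factor follow the conventions of the adopted modeling code and are left to the user here).
\begin{verbatim}
import numpy as np

def photoionization_rate(E, sigma, E_rad, phi):
    # Rate ~ integral over photon energy of sigma(E) * phi(E) above the
    # ionization threshold; phi is the ionizing photon flux per unit
    # energy, interpolated onto the cross-section grid.
    phi_on_grid = np.interp(E, E_rad, phi, left=0.0, right=0.0)
    return np.trapz(sigma * phi_on_grid, E)
\end{verbatim}
Rates computed in this way from the detailed, the RAP, and the reduced-point cross sections can then be compared directly for each radiation field.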
For the present work we have selected nine different ionizing radiation fields
that are expected to represent some general conditions
for a number of cases
of astrophysical interest. These radiation fields correspond to
ionizing sources
typical of an O star with $T_{eff}$ = 40,000 K, a high luminosity star with
$T_{eff}$ = 100,000 K, and an extremely hot $T=10^8$ K black body source.
Each of these sources is assumed to be surrounded by a gaseous envelope
with nearly
cosmic chemical composition under pressure equilibrium conditions. The
densities of the ionized gas at the near side to the source
are taken to be $10^4,\ 3\times 10^3,$ and $10^{10}$ cm$^{-3}$,
respectively. Under photoionization equilibrium the physical
parameters were calculated
for each of these nebulae using the computer code CLOUDY (Ferland 1993),
and three different ionizing radiation fields were obtained for each case
corresponding to the conditions at the near side of the cloud,
at half depth of the ionized cloud, and near the ionization front.
As an example, the radiation fields selected for the
$T_{eff}$ = 100,000 K source are shown in Fig. 4.
All detailed OP cross sections, their RAP cross sections, and their
reduced representations (linearly interpolated values between the
RAP cross sections), were integrated over the different radiation
fields to obtain the photoionization rates and the results compared.
Photoionization rates obtained
with detailed cross sections and RAP cross sections agree within
5\%.
When comparing the photoionization rates with those obtained from
the analytic fits of Verner et al.\thinspace (1996),
significant differences are found.
The most prominent differences are for the lower ionization stages of iron
Fe~I~--~V, and Na~VII, for which the fits of Verner et al.\thinspace
give photoionization rates
differing by up to about 70\%. Differences between 20-30\% are found
for Be~I, S~IV, Ar~II, Fe~VII, and Fe~XI, and between 10-20\%
for B~I, S~VII, Mg~I, Mg~II, Al~I, Ar~I, Ar~III, Ar~V, and Fe~VIII.
For all other
ions the fits of Verner et al.\thinspace yield photoionization rates that
agree to within 10\% with the present results. This agreement for the
last set of data is primarily for the multiply ionized systems where the
resonances are usually narrow and the
resonance contributions, relative to the background, are small.
It is emphasized that the errors in the photoionization rates when
using analytic fits for the cross sections vary with the shape of the
radiation field and are unpredictable in general.
\section{Recombination Rates and Ionization Fractions}
A further check on the new RAP cross sections may be made by computing ionization
fractions in photoionization equilibrium. These calculations also
require (electron-ion) recombination rate coefficients. In recent years
a unified method has been developed that incorporates radiative and
dielectronic recombination (RR and DR) in an {\em ab initio} manner, and
enables the calculation of total (e + ion) recombination rates in the CC
approximation using the R-matrix method (Nahar \& Pradhan
1992, 1995). In addition, the new recombination rates are fully
self-consistent with the photoionization cross sections as both the
photoionization and the recombination data are calculated in the CC
approximation using the same eigenfunction expansion over the
states of the residual ion. Unified, total recombination rates have been
computed so far for approximately 33 atoms and ions, including all C, N, O
ions (Nahar and Pradhan 1997, Nahar 1998), the C-sequence ions (Nahar
1995,1996), and Fe ions Fe~I -- V (references are given in Bautista \& Pradhan 1998).
In a recent work on iron emission and ionization structure in gaseous
nebulae (Bautista \& Pradhan 1998) the new
photoionization/recombination data for Fe~I -- V, including detailed and
RAP cross sections,
was employed to obtain
ionic fractions of Fe in a photoionized H~II region (the Orion nebula),
and considerable differences were found with previous works.
In this work we compute a few C, N, O ionization fractions
using the RAP cross sections computed in
the present work and the new unified (e + ion) recombination rates, to
study the effect of the new photoionization/recombination data.
Although the differences for lighter elements are relatively smaller
than for the Fe ions,
they can be significant in temperature ranges in transition regions
between adjacent ionization stages, as illustrated in Fig. 5. Whereas
there is no significant difference between the RAP cross sections computed in
this work for C, N, and O and the earlier ground state photoionization data incorporated
in CLOUDY, some significant differences are found when the new unified
recombination data for the C, N, and O ions is employed.
\section{FORTRAN Subroutines and Data}
The RAP cross sections for the ground state of all atoms and ions in
TOPbase, and new data for Fe~I~--~V and some other ions, are available in
a FORTRAN subroutine RESPHOT that can be readily interfaced with
modeling codes. The routine is available
electronically from the authors. Given an ion stage (N,Z) and an energy in
Rydbergs, RESPHOT returns the linearly interpolated value
of the photoionization cross section from the table of points
described in Section 4. It is also important to point out that the
RAP cross sections are given as a function of the energy of the
ejected electron (i.e., energies with respect to the ionization
threshold) instead of the photon energies in TOPbase that are relative
to the first ionization potential calculated to about 1\% accuracy.
Users can easily scale the
cross sections to the more accurate experimental ionization
potentials.
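A hypothetical Python analogue of such a lookup (not the actual Fortran interface of RESPHOT; the table layout below is an assumption made only for illustration) would be:
\begin{verbatim}
import numpy as np

def resphot_like(tables, N, Z, energy_ry):
    # tables[(N, Z)] holds the reduced-point representation as a pair
    # (ejected-electron energies in Ry, cross sections); the lookup is a
    # plain linear interpolation between the tabulated points.
    E_pts, sig_pts = tables[(N, Z)]
    return np.interp(energy_ry, E_pts, sig_pts)
\end{verbatim}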
Also available is a routine RCRATE which gives total
unified e-ion recombination rates that
may be used instead of the combination of previously available (and
often inaccurate) data on RR and DR rates.
Given an ion stage (N,Z) and an electron temperature, RCRATE returns the total
recombination rate coefficient interpolated from the tables
of Nahar (1995,1996), Nahar and Pradhan (1997), and the references
in Table 1 of Bautista and Pradhan (1998).
RESPHOT and RCRATE can be easily interfaced with photoionization codes
such as CLOUDY, as demonstrated by their use to produce the results in Figs.
4 and 5.
\section{Conclusion}
Resonance-averaged photoionization (RAP) cross sections
have been calculated for most atoms and ions of astrophysical
importance using the Opacity Project data from TOPbase and new data on
Fe and C,N,O ions.
These incorporate the effect of autoionizing resonances in
an averaged manner that is not too sensitive to the precise positions of
resonances, but accounts for the often significant attenuation of the
effective cross sections that is neglected in earlier works. The
RAP cross sections have been represented by a small number of points that can be
readily interpolated in modeling codes to reproduce photoionization
cross sections at all energies of practical interest, including
inner-shell ionization thresholds. Illustrative examples show
considerable differences with analytic fits that neglect resonance
structures. It is also pointed out that new recombination rates,
unifying the radiative and dielectronic recombination processes, are
being computed for astrophysically abundant elements to provide a
self-consistent set of photoionization/recombination data for
modeling astrophysical sources in radiative equilibrium.
\acknowledgements
This work was partially supported by a NSF grant for the
Iron Project PHY-9482198 and the NASA Astrophysical Data Program. PR acknowledges the financial support from the Graduate School at OSU through a University Fellowship.
\section*{Appendix}
\begin{deluxetable}{ll}
\small
\singlespace
\footnotesize
\tablewidth{30pc}
\tablecaption{References to the Photoionization Cross Sections}
\tablehead{
\colhead{\bf Ion} &
\colhead{\bf Reference} }
\startdata
He-like & Fernley et al.\thinspace 1987 \nl
Li-like & Peach et al.\thinspace 1988 \nl
Be-like & Tully et al.\thinspace 1990\nl
B I & Berrington \& Hibbert, in preparation \nl
B-like & Fernley et al., in preparation \nl
C-like & Luo \& Pradhan 1989 \nl
N-like & Burke \& Lennon, in preparation \nl
O-like, F-like & Butler \& Zeippen, in preparation \nl
Ne-like & Scott, in preparation \nl
Na-like & Taylor, in preparation \nl
Mg-like & Butler et al.\thinspace 1993 \nl
Al-like & Mendoza et al.\thinspace 1995 \nl
Si-like & Nahar \& Pradhan 1993 \nl
P-like & Butler et al., in preparation \nl
S-like & Berrington et al., in preparation\nl
Cl-like & Storey \& Taylor, in preparation\nl
Fe~IX & Butler et al., in preparation \nl
Ar-like & Saraph \& Storey, in preparation
\tablebreak
Fe~VIII& Butler et al.\thinspace, in preparation \nl
Fe~VII & Sawey \& Berrington 1992 \nl
Ca-like & Saraph \& Storey, in preparation \nl
Fe~VI & Butler et al., in preparation \nl
Fe~V & Bautista 1996 \nl
Fe~IV & Bautista \& Pradhan 1997 \nl
Fe~III & Nahar 1996 \nl
Fe~II & Nahar \& Pradhan 1994 \nl
Fe~I & Bautista 1997 \nl
\enddata
\end{deluxetable}
\def \it Astronomy and Astrophys. Supplement Series {\it Astronomy and Astrophys. Supplement Series}
\def Rev. Mexicana de Astronom\'{\i}a y Astrofis. {Rev. Mexicana de Astronom\'{\i}a y Astrofis.}
\section{Introduction}
\label{s:intro}
The phenomenon of Ekman circulation (EC) occurs in most if not all
rotating flows with stressed boundaries that are not parallel to the
axis of rotation. The manifestation of EC ranges from wind-driven
ocean currents \cite{Ba67}, to the accumulation of the tea leaves at
the bottom of a stirred cup \citeaffixed{AH60}{see, e.g.,}. One of
consequences of EC and of the associated Ekman flows is greatly to
to enhance mixing and transport and in particular, the transport of
angular momentum, above the values due to viscosity alone.
Traditionally, Ekman flows are explained in terms of action of
Coriolis forces in the Ekman layers along the rotating stressed
boundaries \cite{Gr68}.
There are circumstances when the presence of EC has undesirable
effects. For example, this is the case in laboratory experiments to
study the development of magneto-rotational instability (MRI) in
liquid metals \citeaffixed{RRB04}{see a monograph edited by}. The
MRI instability is important in astrophysics where it is believed
to lead to turbulence in magnetized accretion disks \cite{Ba03}.
Many of the features of the MRI and its associated enhancement of
angular momentum transport (AMT) can be studied experimentally in
magnetized flows between rotating coaxial cylinders. In these
experiments, the rotation rates of the cylinders are chosen in such
a way that the fluid's angular momentum increases outwards so that
the resulting rotational profile is stable to axisymmetric
perturbations (so-called centrifugally stable regime). The presence
of a weak magnetic field can destabilize the basic flow, provided
the angular velocity increases inward, and lead to an enhancement of
outward AMT.
In an ideal situation, the basic state consists of circular Couette
flow (CCF), and the outward transport of angular momentum in the
absence of magnetic fields is solely due to viscous effects. The
presence of a magnetic field would destabilize the basic flow
through the effects of MRI and lead to a measurable increase of AMT.
In practice, this ideal case can never be realized in laboratory
experiments because of horizontal boundaries. The presence of these
boundaries drives an EC that enhances AMT even in the absence of
magnetic effects. In order to study the enhancement of AMT due to
MRI it is crucial to be able to distinguish the effects that are
magnetic in origin from those that are due to the EC. One
possibility is to make the cylinders very tall so the horizontal
boundaries are far removed from the central region. This approach,
however, is not practical owing to the high price of liquid metals.
The alternative approach is to device boundaries in such a way that
the resulting EC can be controlled and possibly reduced. For
example, attaching the horizontal boundaries to the inner or outer
cylinder results in dramatically different flow patterns. Another
possibility could be to have the horizontal boundaries rotating
independently of the inner and outer cylinders. Goodman, Ji and
coworkers \cite{KJGCS04,BJSC06,JBSG06} have proposed to split the
horizontal boundaries into two independently rotating rings whose
rotational speeds are chosen so as to minimize the disruption to the
basic CCF by secondary Ekman circulations. Indeed this approach has
been implemented in the Princeton's MRI liquid gallium experiment
\cite{Sc08}. In any case, no matter how the horizontal boundaries
are implemented it is important to understand what kind of EC
patterns arise before the magnetic effects are introduced.
In the present paper we address this issue by studying the effects
of horizontal boundary conditions on CCF numerically. We study both
axisymmetric and fully three-dimensional geometries and investigate
the effects of changing rotation rates (Reynolds number) through the
onset of unsteadiness and three-dimensionality. The next section
(section~\ref{s:formul}) describes the formulation of the problem
and gives an account of numerical aspects of its solution technique
including a brief description of the spectral element code Nek5000
\cite{FOC08}. Section~\ref{s:result} starts with an explanation
of the flow behaviour due to the horizontal boundary conditions, i.e.~CCF,
Ekman circulation and disrupted Ekman circulation for periodic horizontal
boundaries, `lids' and `rings', respectively
(section~\ref{ss:BC}). Then the paper proceeds with a description of
the comparison of our results with the experimental data
(section~\ref{ss:exp}), followed by an examination of torque and AMT
(section~\ref{ss:amf}). Finally, we draw conclusions and describe
future work in section~\ref{s:concl}.
\section{Problem Formulation and Numerical Method}
\label{s:formul}
\subsection{Formulation}
\label{ss:formul}
We study the flow of an incompressible fluid with finite (constant)
kinematic viscosity $\nu$ in a cylindrical annulus bounded by
coaxial cylinders. The cylinders have the radii $R_1^*$ and $R_2^*$
($R_1^*<R_2^*$) and rotate with angular velocities $\Omega_1^*$ and
$\Omega_2^*$, respectively. The annulus is confined in the vertical
direction by horizontal boundaries at distance $H^*$ apart. The
formulation of the problem in cylindrical coordinates $(r,\theta,z)$
with the scales for characteristic length $L$ and velocity $U$,
\begin{equation} \label{e:nondim:LUB}
L = R_2^* - R_1^* \qquad\qquad U = \Omega_1^* R_1^* - \Omega_2^*
R_2^*
\end{equation}
and therefore, with the relationship between dimensional variables
(with asterisk) and non-dimensional radius, height, velocity vector
$\boldsymbol{V}$, time and pressure given by
\begin{equation} \label{e:nondim}
\left[ r^*, z^*, \boldsymbol{V^*}, t^*, p^* \right] = \left[ L \, r,
L \, z, U \boldsymbol{V}, \frac{L}{U} \, t, \rho U^2 p \right]
\end{equation}
correspondingly, results in the following non-dimensional
incompressible Navier-Stokes equations:
\begin{eqnarray}
\dod{V_r}{t} + \left( \boldsymbol{V} \bcdot \boldsymbol{\nabla} \right)V_r
- \frac{V_\theta^2}{r}
& = &
\frac{1}{Re} \left[ \triangle {V_r} - \frac{2}{r^2}\dod{V_\theta}{\theta}
- \frac{V_r}{r^2} \right] - \dod{p}{r} \label{e:MHD:V_r} \\
\dod{V_\theta}{t} + \left( \boldsymbol{V} \bcdot \boldsymbol{\nabla} \right)V_\theta
+ \frac{V_r V_\theta}{r}
& = &
\frac{1}{Re} \left[ \triangle {V_\theta} + \frac{2}{r^2}\dod{V_r}{\theta}
- \frac{V_\theta}{r^2} \right] - \frac{1}{r}\dod{p}{\theta} \label{e:MHD:V_t} \\
\dod{V_z}{t} + \left( \boldsymbol{V} \bcdot \boldsymbol{\nabla} \right)V_z
& = &
\frac{1}{Re} \triangle {V_z}
- \dod{p}{z} \label{e:MHD:V_z} \\
\dod{V_r}{r} + \frac{1}{r} \dod{V_\theta}{\theta} + \dod{V_z}{z} + \frac{V_r}{r}
& = & 0 \label{e:MHD:div_V:axi}
\end{eqnarray}
where $\rho$ is a constant fluid density and Reynolds number $Re$ is
defined as
\begin{equation} \label{e:Rem}
Re = \frac{U L}{\nu} = \frac{(\Omega_1^* R_1^* - \Omega_2^* R_2^*)(
R_2^* - R_1^*)}{\nu}
\end{equation}
while the scalar advection operator due to a vector field
$\boldsymbol{V}$ and the Laplacian of a scalar function $S(r,\theta,z)$ are
given by
\begin{equation} \label{e:lapl:adv:axi}
\fl\hspace{7ex} \left( \boldsymbol{V} \bcdot \boldsymbol{\nabla}
\right)S = V_r \dod{S}{r} + \frac{V_\theta}{r}\dod{S}{\theta} + V_z
\dod{S}{z} \quad \triangle S = \dsods{S}{r} + \frac{1}{r}\dod{S}{r}
+ \frac{1}{r^2}\dsods{S}{\theta} + \dsods{S}{z}
\end{equation}
The initial conditions for the flow in the annulus and boundary
conditions at the cylinder surfaces $r = R_1$ and $r = R_2$ are
\begin{equation} \label{e:BC}
V_r = V_z = 0 \qquad V_\theta = r \, \Omega(r)
\end{equation}
where non-dimensional angular velocity $\Omega(r)$ is given by
circular Couette flow (CCF) profile
\begin{equation} \label{e:BC:OC}
\Omega_C(r) = A + \frac{B}{r^2} \qquad A = \frac{\Omega_2
R_2^2-\Omega_1 R_1^2}{R_2^2-R_1^2} \quad B = \frac{R_1^2
R_2^2(\Omega_1-\Omega_2)}{R_2^2-R_1^2}
\end{equation}
At the horizontal boundaries $z = 0$ and $z = H$, two types of the
boundary conditions have been considered, namely, {\it lids} and
{\it rings}, given by (\ref{e:BC}) where angular velocity
$\Omega(r)$ is equal to
\begin{equation} \label{e:BC:O}
\Omega(r) = \left\{
\begin{array}{c@{\quad : \quad}c} \Omega_1 & r = R_1
\\ \Omega_3 & R_1 < r < R_{12}
\\ \Omega_4 & R_{12} < r < R_2
\\ \Omega_2 & r = R_2
\end{array}
\right.
\end{equation}
Here $R_{12}$ is the radial location of the boundary between the
inner and outer rings, and $\Omega_3$ and $\Omega_4$ are angular
velocities of inner and outer rings, correspondingly. Inspired by
Princeton MRI liquid gallium experiment \cite{Sc08}, the
non-dimensional angular velocities and cylinder height as well as
cylinder and ring boundary radii used in this study are given in
table~\ref{t:param} in addition to the dimensional parameters
involved in comparison with the experiment
(subsection~\ref{ss:exp}). In the cases with lids, angular
velocities $\Omega_3$ and $\Omega_4$ are equal to the angular
velocity of the outer cylinder $\Omega_2$ while in the cases with
rings they turn out to be close to the values of CCF profile
(\ref{e:BC:OC}) taken at the middle of radii of the corresponding
rings.
\begin{table
\centering
\begin{minipage}[c]{0.4\textwidth}
\centering
{\renewcommand{\arraystretch}{1.3}
\begin{tabular}{c|cc||cc|c}
& Lids & Rings & \multicolumn{3}{c}{Experiment: Lids} \\ \hline\hline
$R_1$ & \multicolumn{2}{c||}{0.538} & $R_1^*$ &(cm) & 7.1 \\
$R_2$ & \multicolumn{2}{c||}{1.538} & $R_2^*$ &(cm) & 20.3 \\
$R_{12}$ & \multicolumn{2}{c||}{1.038} & $R_{12}^*$ &(cm) & 13.7 \\
$H$ & \multicolumn{2}{c||}{2.114} & $H^*$ &(cm) & 27.9 \\
$\Omega_1$ & \multicolumn{2}{c||}{3.003} & $\Omega_1^*$ & (rpm) & 200 \\
$\Omega_2$ & 0.488 & 0.400 & $\Omega_2^*$ & (rpm) & 26 \\
$\Omega_3$ & 0.488 & 1.367 & $\Omega_3^*$ & (rpm) & 26 \\
$\Omega_4$ & \multicolumn{2}{c||}{0.488} & $\Omega_4^*$ & (rpm) & 26 \\
\end{tabular}
}
\end{minipage}
\qquad\qquad
\begin{minipage}[c]{0.4\textwidth}
\begin{center}
{\setlength{\unitlength}{1.0in}
\begin{picture}(2.0,3.)( -0.0,-.0)
\put(0.00,0.00){\psfig{figure=obabko_fig_mesh.ps,angle=-90,width=2in}}
\put(.50,-0.15){$r=R_1$}
\put(1.25,-0.15){$R_{12}$}
\put(1.9,-0.15){$R_{2}$}
\put(.25,2.65){$z=H$}
\put(.45,1.3){$\Omega_1$}
\put(2.1,1.3){$\Omega_2$}
\put(0.95,2.8){$\Omega_3$}
\put(1.6,2.8){$\Omega_4$}
\end{picture}}
\end{center}
\vspace{-0.2in}\mbox{}
\end{minipage}
\caption{The geometry and rotation parameters
for the computational cases with lids and rings at $Re=6190$ and
experimental setup with lids at $Re=9270$
along with the drawing of the cut of 3D computational mesh at $\theta=0$
for the case with rings.
{\it Note the clustering of the gridlines at the boundaries
of the spectral elements whose location and dimensions are chosen
to resolve efficiently boundary layers
and `step' changes in angular velocity between cylinders and rings. }
} \label{t:param}
\end{table}
\subsection{Numerical Technique}
\label{ss:numer}
The axisymmetric version of equations
(\ref{e:MHD:V_r}--\ref{e:MHD:div_V:axi}) and the fully three-dimensional
version in Cartesian coordinates have been solved numerically with
the spectral-element code Nek5000 developed and supported by Paul
Fischer and collaborators \citeaffixed[and references
within]{FOC08,FLL07}{see}.
The temporal discretization in Nek5000 is based on a semi-implicit
formulation in which the nonlinear terms are treated explicitly in
time and all remaining linear terms are treated implicitly. In
particular, we used either a combination of $k$th-order backward
difference formula (BDF$k$) for the diffusive/solenoidal terms with
extrapolation (EXT$k-1$) for the nonlinear terms or the
operator-integration factor scheme (OIFS) method where BDF$k$ is
applied to the material derivative with the explicit fourth-order
Runge-Kutta scheme being used for the resulting pure advection
initial value problem.
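To illustrate the BDF$k$/EXT$k-1$ splitting on a model problem, the following is a minimal sketch only, using a Fourier discretization of the periodic one-dimensional advection--diffusion equation rather than the spectral-element discretization of Nek5000, and a simple explicit Euler start-up step.
\begin{verbatim}
import numpy as np

def bdf2_ext2_advdiff(u0, c, nu, dx, dt, nsteps):
    # u_t + c u_x = nu u_xx on a periodic grid: diffusion treated
    # implicitly (BDF2), advection explicitly (EXT2 extrapolation).
    n = u0.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)           # angular wavenumbers
    ddx = lambda u: np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    d2dx2 = lambda u: np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
    adv = lambda u: -c * ddx(u)
    u_nm1 = u0.copy()
    u_n = u0 + dt * (adv(u0) + nu * d2dx2(u0))           # explicit start-up
    for _ in range(nsteps - 1):
        rhs = (4.0 * u_n - u_nm1) / (2.0 * dt) + 2.0 * adv(u_n) - adv(u_nm1)
        # (3/(2 dt)) u^{n+1} - nu u_xx^{n+1} = rhs, diagonal in Fourier space
        u_np1 = np.real(np.fft.ifft(np.fft.fft(rhs) / (1.5 / dt + nu * k ** 2)))
        u_nm1, u_n = u_n, u_np1
    return u_n
\end{verbatim}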
With either the BDF$k$/EXT$k-1$ or OIFS approach, the remaining
linear portion of time advancement amounts to solving an unsteady
Stokes problem. This problem is first discretized spatially using
spectral-element method (SEM) and then split into independent
subproblems for the velocity and pressure in weak variational form.
The computational domain is decomposed into $K$ non-overlapping
subdomains or elements, and within each element, unknown velocity
and pressure are represented as the tensor-product cardinal Lagrange
polynomials of the order $N$ and $N-2$, correspondingly, based at
the Gauss-Lobatto-Legendre (GLL) and Gauss-Legendre (GL) points.
This velocity-pressure splitting and GLL-GL grid discretization
requires boundary condition only for velocity field and avoids an
ambiguity with the pressure boundary conditions in accordance with
continuous problem statement.
The discretized Stokes problem for the velocity update gives a
linear system which is a discrete Helmholtz operator. It comprises
the diagonal spectral-element mass matrix and the spectral-element
Laplacian, and is strongly diagonally dominant for small timesteps;
therefore, a Jacobi (diagonally) preconditioned conjugate gradient
iteration is readily employed. Then the projection of the resulting
trial viscous update on divergence-free solution space enforces the
incompressibility constraint as the discrete pressure Poisson
equation is solved by conjugate gradient iteration preconditioned by
either the two-level additive Schwarz method or hybrid
Schwarz/multigrid methods. Note that we used
dealising/overintegration where the oversampling of polynomial order
by a factor of $3/2$ was made for the exact evaluation of quadrature
of inner products for non-linear (advective) terms.
The typical axisymmetric case with rings at high Reynolds number of
$Re=6200$ (see figure~\ref{f:lid:ring}b) required the spacial
resolution with polynomial order $N=10$ and number of spectral
elements $K=320$ (cf.~drawing for table~\ref{t:param}) and was
computed with timestep $\Delta{t}=10^{-3}$ for the duration of
$t\sim300$, while the axisymmetric run with lids at the same $Re$
(figure~\ref{f:lid:ring}a) had $N=8$, $K=476$,
$\Delta{t}=5\times10^{-3}$ and $t\sim500$. The corresponding
three-dimensional cases with rings and lids had $N=11$, $K=9600$,
$\Delta{t}=6.25\times10^{-4}$, $t\sim280$ and $N=9$, $K=14280$,
$\Delta{t}=6.25\times10^{-4}$, $t\sim180$, respectively. Note that
in order to facilitate time advancement and minimize CPU
requirements, the final output from other cases, e.g.~with lower
Reynolds number $Re$, was used as the initial condition for some of the
computations with higher $Re$, and the corresponding axisymmetric
cases with a small random non-axisymmetric perturbation were the starting
point for most of our fully 3D computations. Apart from CPU
savings, the usage of the perturbed axisymmetric solution obtained
in {\it cylindrical formulation}
(\ref{e:MHD:V_r}--\ref{e:MHD:div_V:axi}) as initial condition for 3D
computations at low Reynolds numbers ($Re=620$) served as an
additional validation of the code setup due to the convergence of
the fully 3D results computed in {\it cartesian formulation} back to
the unperturbed axisymmetric steady state initial condition (see
also subsection~\ref{ss:amf}).
Finally, the step change of angular velocity that mimics its
transition in the gaps or grooves between the cylinders and the
horizontal boundaries, as well as between the inner and outer rings in
the Princeton MRI liquid gallium experiment \cite{Sc08}, was modelled
within one spectral element of radial size $L_g=0.020$ by a
ramping power-law function of radius with an exponent that was
varied in the range from 4 to $N-1$ without noticeable effect on the
flow.
\section{Results}
\label{s:result}
Let us first examine the effects of the horizontal
boundary conditions on the flow pattern in general and on the Ekman
circulation in particular, before moving to a comparison with the
experiment and an examination of angular momentum transport in the
cylindrical annulus.
\subsection{Horizontal Boundary Effects}
\label{ss:BC}
Here we contrast two types of horizontal boundary conditions with the
ideal baseline case of circular Couette flow (CCF). We argue that
the imbalance between `centrifugal' rotation and the centripetal
pressure gradient, which vanishes in the ideal case, determines the fate of
the radial flow along the horizontal boundaries in the cylindrical
annulus.
In the ideal case of CCF, the sheared circular motion is balanced by
centripetal pressure gradient. To be precise, the ideal CCF is the
following exact solution of equations
(\ref{e:MHD:V_r}--\ref{e:MHD:div_V:axi}) for periodic (or
stress-free) horizontal boundary conditions:
\begin{eqnarray} \label{e:CCF}
\qquad V_r = V_z = 0 \qquad V_\theta = r \Omega_C(r) = A \: r +
\frac{B}{r}
\nonumber\\
p_C(r) = \int^r \frac{V_\theta^2}{r} d r = \frac{A^2 r^2}{2} -
\frac{B^2}{2 r^2} + 2 A B \log r + \mbox{Const}
\end{eqnarray}
Here the constant $A$ given by equation (\ref{e:BC:OC}) is
proportional to the increase in axial angular momentum,
\begin{equation} \label{e:L}
{\cal L}=r V_\theta = \Omega \: r^2
\end{equation}
outward between the cylinders, while the constant $B$ is set by the
shear-generating angular velocity drop between them. The
figure~\ref{f:CCF} shows CCF azimuthal velocity $V_\theta$ (dashed),
angular velocity $\Omega_C$ (solid), axial angular momentum $\cal L$
(dash-dotted) and negative of pressure, $-p_C$ (dotted,) for the
non-dimensional parameters given in table~\ref{t:param}.
\begin{figure}
\centerline{\includegraphics[width=4.5in]{obabko_fig1_CCF.eps}}
\caption{The CCF azimuthal velocity $V_\theta$ (\broken),
angular velocity $\Omega_C$ (\full), axial angular momentum ${\cal L}=r V_\theta$ (\chain) and
minus pressure $P_C$ (\dotted) versus radius.
{\it Note the monotonically increasing angular momentum and decreasing
angular velocity with radius for centrifugally stable circular Couette flow
where the `centrifugal' rotation balances the centripetal pressure
gradient leading to zero radial and axial velocities.} }
\label{f:CCF}
\end{figure}
Since we are primarily interested in further MRI studies, the
baseline flow has to be centrifugally stable, i.e. with angular
momentum $\cal L$ increasing outward (for $\Omega>0$), and
therefore, satisfying the Rayleigh criterion
\begin{equation}
\dod{{\cal L}^2}{r}>0
\label{e:Rayleigh}
\end{equation}
which is the case in this ideal CCF (dash-dotted line in
figure~\ref{f:CCF}). In order to maintain rotation with shear in
this virtual experiment with periodic horizontal boundaries, the
positive axial torque ${\cal T}_C$
\begin{equation} \label{e:TC}
\fl {{\cal T}}_C = \int_A \left( \vec{r} \times ( d \vec{A} \bcdot
\boldsymbol{\tau} ) \right)_z = \int_0^H \int_0^{2\pi} d z d \theta
\frac{r^3}{Re} \left. \dod{}{r} \frac{V_\theta}{r}
\right|_{V_\theta=r\Omega_C} = \frac{4 \pi H R_1^2
R_2^2}{R_2^2-R_1^2} \frac{\Omega_1-\Omega_2}{Re}
\end{equation}
has to be applied to the inner cylinder while the outer cylinder is
kept from shear-free solid body rotation ($\Omega(r)=\Omega_1=A$,
$B={\cal T}=0$) by negative torque $-{\cal T}_C$. Note that in the
above equation (\ref{e:TC}), $\boldsymbol{\tau}$ is the
non-dimensional shear stress tensor (see also \ref{s:app:amt}).
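The CCF relations above are easily checked numerically; the short sketch below (an illustration only, using the non-dimensional ring parameters of table~\ref{t:param} and, for definiteness, $Re=6200$) evaluates the constants $A$ and $B$, verifies the Rayleigh criterion (\ref{e:Rayleigh}), and evaluates the torque ${\cal T}_C$ of (\ref{e:TC}).
\begin{verbatim}
import numpy as np

# Non-dimensional parameters of the rings case (table above).
R1, R2, H = 0.538, 1.538, 2.114
Om1, Om2, Re = 3.003, 0.400, 6200.0
A = (Om2 * R2**2 - Om1 * R1**2) / (R2**2 - R1**2)
B = R1**2 * R2**2 * (Om1 - Om2) / (R2**2 - R1**2)
r = np.linspace(R1, R2, 400)
Omega_C = A + B / r**2                       # CCF angular velocity
L_mom = Omega_C * r**2                       # axial angular momentum r*V_theta
print(np.all(np.diff(L_mom**2) > 0))         # Rayleigh criterion holds: True
T_C = 4*np.pi*H * R1**2 * R2**2 / (R2**2 - R1**2) * (Om1 - Om2) / Re
print(T_C)                                   # torque applied to the inner cylinder
\end{verbatim}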
\subsubsection{Ekman Circulation with `Lids'}
\label{sss:Ekman}
In practice, the ideal CCF can never be realized in laboratory
experiments because of horizontal boundaries. The simplest
realizable configuration is the one we refer to as `lids' when
horizontal boundaries are coupled to the outer cylinder
($\Omega_3=\Omega_4=\Omega_2$). To see how flow changes in the
presence of lids that rotate with outer cylinder, let us imagine
that these lids were inserted impulsively into fluid with ideal CCF
profile given by equation (\ref{e:CCF}) and plotted as a solid line
in figure~\ref{f:CCF} for $\Omega_1$ and $\Omega_2$ from
table~\ref{t:param}. Keeping the most important terms in the
axisymmetric form of the equation (\ref{e:MHD:V_r}) gives
\begin{equation}
\dod{V_r}{t} = \Omega^2 r - \dod{p}{r} + \frac{1}{Re}\dsods{V_r}{z} + \cdots
\label{e:Lid:V_r}
\end{equation}
where we used $V_\theta=r \Omega$. For the initial condition of CCF
(\ref{e:CCF}), the left-hand side of equation (\ref{e:Lid:V_r}) is
equal to zero everywhere outside the lids which is also consistent
with zero radial flow $V_r=0$. This zero radial flow also results in
zero diffusion term $\frac{1}{Re}\dsods{V_r}{z}$ in equation
(\ref{e:Lid:V_r}) and zero net radial force $\Omega^2 r -
\dod{p}{r}$. The latter results from the exact CCF balance between
(positive) `centrifugal' rotation term $\Omega_C^2 \: r$ and
(negative) centripetal pressure gradient term $-\dod{p}{r}$ in
equation (\ref{e:Lid:V_r}).
Instead of rotating with the initial ideal CCF angular velocity $\Omega_C$
(\ref{e:CCF}), the flow next to the lids now rotates with the smaller
angular velocity of the outer cylinder
($\Omega_2=\Omega_3=\Omega_4<\Omega_C$). However, the centripetal
pressure gradient is still set by the bulk rotation of the rest of
the fluid and therefore, becomes suddenly larger than the
`centrifugal' rotation of fluid next to the lids,
i.e.~$\dod{p}{r}=\Omega_C^2 r>\Omega^2 r$. As a result of this
angular momentum deficit of the near-wall fluid, the centripetal
pressure gradient prevails over the rotation term in (\ref{e:Lid:V_r}).
Therefore, the net radial force becomes non-zero and negative,
$\Omega^2 r - \dod{p}{r}<0$, resulting in a negative sign of
$\dod{V_r}{t}$ (\ref{e:Lid:V_r}) and therefore, in the formation of
the Ekman layer with an inward radial flow ($V_r<0$) in the vicinity
of the lids.
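The sign argument above can also be checked directly: with the centripetal
pressure gradient set by the bulk CCF rotation, $\dod{p}{r}=\Omega_C^2 r$,
and the near-lid fluid corotating with the outer cylinder, the net radial
force $\Omega_2^2 r-\Omega_C^2 r$ is negative throughout the gap. A minimal
Python sketch (with illustrative rather than actual parameter values) is:
\begin{verbatim}
import numpy as np

# Illustrative parameters; A and B are the usual CCF constants of
# equation (CCF), Omega_C(r) = A + B/r**2.
R1, R2 = 1.0, 2.0
Omega1, Omega2 = 1.0, 0.13
A = (Omega2 * R2**2 - Omega1 * R1**2) / (R2**2 - R1**2)
B = (Omega1 - Omega2) * R1**2 * R2**2 / (R2**2 - R1**2)

r = np.linspace(R1, R2, 200)
Omega_C = A + B / r**2               # bulk (CCF) rotation
dpdr = Omega_C**2 * r                # centripetal pressure gradient
F_lid = Omega2**2 * r - dpdr         # net radial force next to the lid

print("max net radial force:", F_lid.max())  # <= 0, vanishing only at r = R2
\end{verbatim}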
\begin{figure}
{\centering
\subfloat[Lids]{\includegraphics[width=3in]{obabko_fig2a_dvrdt_lid.eps}
}
\hspace{0.005in}
\subfloat[Rings]{\includegraphics[width=3in]{obabko_fig2b_dvrdt_ring.eps}
}
\caption{Steady state scaled radial wall shear (\full) and
near-wall net radial force (\broken)
for $Re=620$ in the case of lids (a) and rings (b).
{\it The definite negative net radial force in the lids case (a) results
in the inward radial Ekman flow with negative radial wall shear
being disrupted in
the case of rings (b) by the alternating sign of the net radial force
that correlates well with the sign of the wall shear and thus
with the alternating directions of Ekman flows.}} }
\label{f:dvrdt:lid:ring}
\end{figure}
Figure~\ref{f:dvrdt:lid:ring}(a)
confirms that the net radial
force near, e.g.~the lower horizontal surface $z=0$, \ $\Omega_2^2 r
- \left. \dod{p}{r} \right|_{z=0}$ (dashed) is negative, as well as
the scaled poloidal wall shear
$\frac{2}{\sqrt{Re}}\left.\dod{V_r}{z}\right|_{z=0}$. The latter
means that the $z$-derivative of $V_r$ is negative at the lower lid,
which in turn results in a decrease of the radial velocity with
increasing height $z$ from the noslip zero value at the lid, $V_r\Big
|_{z=0}=0$ (\ref{e:BC}), to negative values associated with the
inward Ekman flow. Thus the deficit of angular momentum in the
near-wall fluid of the Ekman layer results in unbalanced centripetal
pressure gradient set by the bulk rotation of the rest of the flow
outside the layer and drives the Ekman flow radially inward.
\begin{figure}
\centerline{\includegraphics[width=4.5in,angle=90]{obabko_fig3_vta_hydro_2D.eps}}
\caption{Azimuthal velocity $V_\theta$ versus radius for the
circular Couette flow (\full) and instantaneous axisymmetric
profiles at $z=\frac{H}{4}$ in the case of lids for the
series of Reynolds numbers $Re=620$ (\dashed), 1900
(dash-triple-dot) and 6200 (\dashddot), and in the case of rings
for the same Reynolds numbers: (\dotted), (\chain) and (\broken),
respectively.
{\it The Ekman-circulation induced momentum deficiency in azimuthal velocity
profiles in the cases with lids is greatly diminished
by the particular choice of angular velocities of
independently rotating rings.}}
\label{f:vta}
\end{figure}
To summarize, the presence of slower-rotating lids disrupts the
initial ideal CCF equilibrium between centrifugal rotation and
centripetal pressure gradient set by the rotation of, respectively, the lids
and the bulk of the flow. This leads to a negative net radial force,
$\Omega^2 r - \dod{p}{r}<0$, and an inward Ekman flow, $V_r<0$, owing to
$\dod{V_r}{t}<0$. As time grows, so does the magnitude of negative
radial velocity in the Ekman layer and, eventually, the diffusion
term $\frac{1}{Re}\dsods{V_r}{z}$ (\ref{e:Lid:V_r}) in the Ekman
boundary layer of width $\Delta{z}\sim O(Re^{-1/2})$ becomes of
the same order (i.e.~${\sim}O(1)$) as the net radial force that
results from two other terms in (\ref{e:Lid:V_r}). Thus the
diffusion effects finally balance the rotation momentum deficit of
the fluid in Ekman boundary layers near the lids in the saturation
steady state (see also \ref{s:app:EC}).
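For completeness, the quoted layer width follows from balancing the diffusion
term against the $O(1)$ net radial force in (\ref{e:Lid:V_r}),
$$
\frac{1}{Re}\,\dsods{V_r}{z}\sim\frac{V_r}{Re\,\Delta z^2}\sim O(1)
\qquad\Longrightarrow\qquad
\Delta z \sim O\!\left(Re^{-1/2}\right),
$$
i.e.~$\Delta z\approx 0.04$ and $0.013$ in the non-dimensional units used
here for $Re=620$ and $6200$, respectively.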
To check consistency of this argument, the saturation magnitude of
$\dod{p}{z}$ across the layer is verified to be more than an order
of magnitude smaller than the corresponding $\dod{p}{r}$ which
confirms that saturation centripetal pressure gradient $-\dod{p}{r}$
is indeed set by the bulk rotation of the fluid outside the Ekman
boundary layers. The saturation bulk rotation can be illustrated by
the instantaneous saturation profiles of azimuthal velocity shown in
figure~\ref{f:vta} for the steady cases with lids for $Re=620$
(dashed) and 1800 (dash-triple-dot), and the unsteady case with lids at
$Re=6200$ (dash-double-dot) at $z=\frac{H}{4}$. It is interesting that, for
the range of Reynolds numbers considered, the effect of increasing the
Reynolds number is minor in comparison with the significant azimuthal
momentum deficiency resulting from the change of horizontal boundary
conditions from the initial ideal CCF (solid) to the cases of Ekman
flows over lids.
\begin{figure}
\centering
\subfloat[Lids]{\includegraphics[width=2.5in]{obabko_fig4a_oa_vp_6e3_lid.eps}
}
\hspace{0.5in}
\subfloat[Rings]{\includegraphics[width=2.5in]{obabko_fig4b_oa_vp_6e3_ring.eps}
}
\caption{Instantaneous contours of azimuthal vorticity and vector field of poloidal velocity
for $Re=6200$ in the case of lids (a) and rings (b).
{\it The Ekman circulation and outward radial jet near the midplane
in the case of lids are severely disrupted in the setup with rings
due to alternating inward-outward Ekman flows.}}
\label{f:lid:ring}
\end{figure}
Owing to momentum deficiency of the near wall fluid, the higher
centripetal pressure gradient drives the inward Ekman flows along
the lids that result in EC in the cylindrical annulus. In order to
further illustrate the phenomenon of EC due to horizontal
boundaries, we have plotted the contours of azimuthal vorticity
$\omega_\theta$ and vector plot of poloidal velocity $(V_r,V_z)$ in
figure~\ref{f:lid:ring}(a) for the case of $Re=6200$ with the former
given by
\begin{equation}
\omega_\theta=\dod{V_r}{z}-\dod{V_z}{r} \qquad\qquad
\omega_\theta {\Big|_{z=0}}=\left.\dod{V_r}{z}\right|_{z=0} \quad
\omega_\theta {\Big|_{r=R_1}}=-\left.\dod{V_z}{r}\right|_{r=R_1}
\label{e:omega:theta}
\end{equation}
where noslip conditions (\ref{e:BC}) along the walls have been used.
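A minimal Python sketch of this vorticity diagnostic on a uniform $(r,z)$
grid is given below; the grid and the velocity arrays are placeholders for
the actual spectral-element data:
\begin{verbatim}
import numpy as np

r = np.linspace(1.0, 2.0, 201)          # placeholder radial grid
z = np.linspace(0.0, 2.0, 401)          # placeholder axial grid
V_r = np.zeros((r.size, z.size))        # replace with data, indexed [r, z]
V_z = np.zeros((r.size, z.size))

# Azimuthal vorticity, equation (omega:theta).
dVr_dz = np.gradient(V_r, z, axis=1)
dVz_dr = np.gradient(V_z, r, axis=0)
omega_theta = dVr_dz - dVz_dr

# At the walls the noslip conditions reduce omega_theta to a single
# derivative, e.g. omega_theta[:, 0] ~ dV_r/dz at z = 0.
\end{verbatim}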
Here the vorticity contours are coloured from blue
($\omega_\theta<0$) to red ($\omega_\theta>0$). Note that this
change of colours from blue to red (through green colour whenever it
is visible) shows the locus of zero vorticity that gives approximate
location of a jet or jet-like features in the flows along the lids
at $z=0$ or $z=H$ with extremum in $V_r$ and in the flows along the
cylinders at $r=R_1$ or $r=R_2$ with minimum or maximum in $V_z$
(\ref{e:omega:theta}). In figure~\ref{f:lid:ring}(a), we observe the
Ekman boundary layers along the lids with vorticity contours
changing their colours from blue ($\omega_\theta<0$) to red at the
lower lid and from red ($\omega_\theta>0$) to blue at the upper lid.
In both instances, this change of colour shows the locus of minimum
in $V_r<0$ or location where inward Ekman flow is the strongest.
Similarly, the change of colours near the inner cylinder surface
shows the opposing vertical jet-like flows along the inner cylinder
that merge near the midplane $z=H/2$ and owing to continuity
(\ref{e:MHD:div_V:axi}), form a strong outward radial jet. At these
high Reynolds numbers, beyond $Re\sim1800$, the radial jet becomes
unsteady and starts to oscillate, breaking into pairs of vortices or,
to be precise, into pairs of vortex rings that move toward the lids
drawn by the mass loss in the Ekman layers and thus closing the EC
cycle.
Summing up the flow pattern in the case of lids, we conclude that
because of the deficit of rotation momentum in Ekman layers, the
fluid is pushed centripetally in these layers along the lids and
further along the inner cylinder with the subsequent formation of
the strong outward radial jet that eventually transports fluid back
to the lids and closes the cycle of the EC (see also~\ref{s:app:EC}
and section~\ref{ss:amf}).
\subsubsection{Ekman Circulation Disruption Due to `Rings'}
\label{sss:rings}
When each horizontal boundary is split into a pair of rings that
rotate independently with the angular velocities $\Omega_3$ and
$\Omega_4$ (table~\ref{t:param}), the bulk rotation and resulting
centripetal pressure gradient are restored back to those of the CCF.
The restored profiles of azimuthal velocity in the cases with rings
are shown in figure~\ref{f:vta} for the same Reynolds numbers as for
the cases with lids, namely for the steady cases of $Re=620$
(dotted) and 1800 (dash-dot) and unsteady case of $Re=6200$ (long
dash). Along with the restoration of the bulk rotation back to that
of the CCF, we observe other major differences between the cases
with lids (a) and rings (b) in the flow field structure illustrated
in figure~\ref{f:lid:ring}.
Instead of a single outward radial jet and inward Ekman flows along
the lids, figure~\ref{f:lid:ring}(b) shows alternating
inward-outward Ekman flows along the rings that, as we describe
below, produce strong vertical jets near $r=R_{12}$ and a weaker
outward radial jet near midplane $z=H/2$. The alternating
inward-outward Ekman flows along, e.g., the lower inner and outer
rings ($z=0$), are also evident in
figure~\ref{f:dvrdt:lid:ring}(b)
where the scaled poloidal shear
$\frac{2}{\sqrt{Re}}\left.\dod{V_r}{z}\right|_{z=0}$ (solid) is
plotted as a function of radius for $Re=620$. The radial locations
of zero shear on the inner and outer ring, ${R_s}_3$ and ${R_s}_4$,
respectively, in this case are found to be
\begin{eqnarray}
\fl \qquad {R_s}_3=0.801 \quad {R_s}_4=1.395 \qquad \mbox{such that}
\ \left.\dod{V_r}{z}\right|_{(r,z)=({R_s}_i,0)}=0 \quad \mbox{where}
\ i=3,4
\label{e:zeros:shear}
\end{eqnarray}
We observe that the scaled poloidal shear
$\frac{2}{\sqrt{Re}}\left.\dod{V_r}{z}\right|_{z=0}$ is negative
between $r=R_1$ and $r={R_s}_3$ and between $r\approx{R_{12}}$ and
$r={R_s}_4$. Similar to the case with lids
(figure~\ref{f:dvrdt:lid:ring}a),
this negative $z$ derivative of
$V_r$ means that $V_r<0$ and the Ekman flow along these portions of
rings is directed radially inward. Likewise, the positive radial
velocity or outward Ekman flow between $r={R_s}_3$ and
$r\approx{R_{12}}$ and between $r={R_s}_4$ and $r=R_2$ corresponds
to positive poloidal
shear in figure~\ref{f:dvrdt:lid:ring}(b).
Furthermore, as in the case of lids
(figure~\ref{f:dvrdt:lid:ring}a),
the signs and zeros of the scaled
poloidal shear and radial velocity correlate well with that of the
net radial force $\Omega^2 r - \dod{p}{r}$
(dashed line in figure~\ref{f:dvrdt:lid:ring}b).
In addition, these radial locations of the reversals of the net radial
force and of the Ekman flows near $r={R_s}_3$ and $r={R_s}_4$
(\ref{e:zeros:shear}) coincide to within 2\% with the radial
locations $R_3$ and $R_4$ where the ideal CCF angular
velocity~(\ref{e:CCF}) matches the angular velocity of the inner and
outer ring $\Omega_3$ and $\Omega_4$ (table~\ref{t:param}), namely
\begin{eqnarray}
\fl \qquad R_3=0.793 \quad R_4=1.369 \qquad \mbox{such that} \quad
\Omega_i=\Omega_C(R_i) \quad \mbox{where} \quad i=3,4
\label{e:R3:R4}
\end{eqnarray}
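Since $\Omega_C(r)=A+B/r^2$, these matching radii follow in closed form as
$R_i=\sqrt{B/(\Omega_i-A)}$; a minimal Python sketch, with placeholder
values standing in for the constants implied by table~\ref{t:param}, is:
\begin{verbatim}
import math

# Placeholder CCF constants and ring angular velocities.
A, B = -0.2, 1.2
Omega3, Omega4 = 1.6, 0.46

for name, Om in (("R_3", Omega3), ("R_4", Omega4)):
    Ri = math.sqrt(B / (Om - A))     # radius where Omega_C(Ri) = Om
    print(f"{name} = {Ri:.3f}")
\end{verbatim}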
This strong correlation of reversals of the net radial force with
reversals of Ekman flow at $r={R_s}_3$ and $r={R_s}_4$
(\ref{e:zeros:shear}), coincidental with local CCF rotation at
$r=R_3\approx{R_s}_3$ and $r=R_4\approx{R_s}_4$, is completely
consistent with our argument that the balance and imbalance between
`centrifugal' rotation and centripetal pressure gradient determines
the fate of the radial flow along horizontal boundaries. Namely,
the zero radial velocity at $r={R_s}_3$ and $r={R_s}_4$ results from
the CCF-like balance of centripetal pressure gradient $-\dod{p}{r}$
and `centrifugal' rotation $\Omega^2 r\approx\Omega_C^2 r$
(\ref{e:CCF}) since $R_3\approx{R_s}_3$ and $R_4\approx{R_s}_4$
(\ref{e:zeros:shear}--\ref{e:R3:R4}). Moreover, a monotonic decrease
of $\Omega_C$ (\ref{e:CCF}) with increase of $r$ (solid line
in~figure~\ref{f:CCF}) means that the near-wall fluid rotation at
angular velocities of the rings $\Omega_3$ and $\Omega_4$ is locally
slower (faster) than that of CCF for the radius $r$ that is smaller
(bigger) than $r\approx{R_s}_3$ and $r\approx{R_s}_4$,
correspondingly. Thus near-wall fluid rotation momentum deficit
(excess) results, respectively, in the negative (positive) sign of
the net radial force $\Omega^2 r - \dod{p}{r}$ and therefore,
negative (positive) sign of radial velocity $V_r$ in
figure~\ref{f:lid:ring}(b) and poloidal shear
$\frac{2}{\sqrt{Re}}\left.\dod{V_r}{z}\right|_{z=0}$ in
figure~\ref{f:dvrdt:lid:ring}(b)
for the radial location $r$ that is
smaller (larger) than $r\approx{R_s}_3$ and $r\approx{R_s}_4$.
In summary, the angular velocities of inner and outer rings
($\Omega_3$ and $\Omega_4$) set the CCF-like equilibrium radii
($r\approx{R_s}_3$ and $r\approx{R_s}_4$) by matching locally to
the monotonically decreasing CCF-like profile of the bulk flow rotation.
The near-wall fluid over the portions of the rings that have a
smaller radius $r$ than these CCF equilibrium radii experiences a
rotation momentum deficit that results in the inward Ekman flows due
to locally higher centripetal pressure gradient set by faster bulk
rotation as in the cases with lids. Conversely, when $r>{R_s}_3$ and
$r>{R_s}_4$, the bulk rotation is slower than the near-wall velocity
due to monotonic decrease of velocity profile with increase of
radius outside the Ekman layers, and the fluid has enough near-wall
rotation momentum to overcome centripetal pressure gradient and to
drive the outward Ekman flows as observed in
figure~\ref{f:lid:ring}(b).
The rest of the prominent features of the flow field in
figure~\ref{f:lid:ring}(b), such as the strong vertical jets near
$r=R_{12}$ and a weak outward radial jet near the midplane $z=H/2$,
are the direct consequences of these alternating inward-outward
Ekman flows along the rings. Namely, driven by rotation momentum
excess and deficit of fluid near inner and outer ring, respectively,
pairs of opposing Ekman flows along both horizontal boundaries merge
near the boundary between inner and outer ring $r=R_{12}$. Owing to
continuity (\ref{e:MHD:div_V:axi}), these pairs of colliding Ekman
flows with, presumably, equal linear radial momentum, launch the
opposing vertical jets near the ring boundary $r=R_{12}$ that become
unsteady with the increase of Reynolds number and break into vortex
pairs or vortex rings. Similarly, the Ekman flows along lower and
upper inner rings due to the rotation momentum deficit are pushed
into the corners with the inner cylinder and further along the
inner cylinder until they merge near the midplane $z=H/2$ to form an
outward radial jet as in the case with lids. But contrary to the
cases with lids, the outward radial jet is now significantly weaker
owing to the fact that the effective Reynolds number for these
flows is smaller than in the cases with lids due to the smaller
characteristic length scale (${R_s}_3-R_1<L$) and velocity scale
($\Omega_3 {R_s}_3 - \Omega_1 R_1<U$), which leads to larger Ekman
numbers
$E=\frac{\nu}{\Delta{\Omega}\:L^2}=\frac{U/(\Delta{\Omega}\:L)}{Re}$.
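With the Reynolds number based on the same scales, $Re=UL/\nu$, the last
equality is immediate:
$$
E=\frac{\nu}{\Delta\Omega\,L^2}=\frac{UL/Re}{\Delta\Omega\,L^2}
 =\frac{U/(\Delta\Omega\,L)}{Re}.
$$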
Finally, we would like to make the following two comments. First,
three-dimensional effects appear to be negligible at these Reynolds
numbers, with the only noteworthy difference being the considerably shorter
vertical jets near the ring boundary $r=R_{12}$ as compared to the
axisymmetric cases. Second, the angular velocities of the rings control
the angle and direction of the jet near this ring boundary
$r=R_{12}$. In particular, when the rings are coupled together and
rotate with the outer cylinder (`lids'), the jets become the inward
Ekman flows along the lower and upper horizontal boundaries so that the
angle with the radius vector is $\pm\pi$, correspondingly. When the rings
are decoupled and rotate with the angular velocities considered
above (table~\ref{t:param}), the Ekman flows collide near the ring
boundary $r=R_{12}$ and launch the opposing vertical jets, i.e. the
angle is $\pm\pi/2$. When rings are coupled to the inner cylinder,
we have checked that the resulting Ekman flows have radially outward
direction due to the excess of the near-wall angular momentum
leading to the zero angle between the jets and radius vector in
accordance with the mechanism described above. Moreover, this angle
is expected to be sensitive to the details of the flow in the
vicinity of the ring boundary, such as the presence of gaps between
the rings, three-dimensionality, etc., but it can likely be adjusted
with an appropriate choice of the angular velocities of the rings, shifting
the equilibrium points of the local CCF balance and thus regulating the
radial extent and radial linear momentum of the Ekman flows
\citeaffixed{Sc08}{cf.}. In other words, the angular velocities of
rings control EC through the net radial momentum after the collision
of Ekman flows that sets the angle at which the jets are launched
near the ring boundary $r=R_{12}$.
\subsubsection{Summary on Horizontal Boundary Effects}
\label{sss:BC}
The CCF-like equilibrium between `centrifugal' rotation and
centripetal pressure gradient in cylindrical annulus is impossible
to achieve experimentally due to the presence of the noslip
horizontal boundaries. The rotation of these boundaries with either
the faster inner cylinder or the slower outer cylinder creates the Ekman
boundary layers with either angular momentum excess or deficit,
correspondingly, and results in either outward or inward Ekman
flows, respectively, that drive EC in the annulus. The splitting of
the horizontal boundaries into independently rotating rings sets the
CCF-like equilibrium points by matching locally to the CCF angular
velocity, and the resulting angular momentum deficit or excess leads,
correspondingly, to inward or outward Ekman flows along the
portions of the rings with radius, respectively, smaller or larger
than the radius of these equilibrium points. The opposing Ekman
flows along the rings collide near ring boundaries and launch the
vertical jets at an angle presumably determined by the mismatch of
their linear radial momentum. This angle is expected to be sensitive
to the details of the flow structure inside and immediately near the
gaps between rings, the vertical alignment of the horizontal
surfaces of the rings, etc. and can be adjusted
by changing the angular velocities of the rings \citeaffixed{Sc08}{cf.}.
\subsection{Comparison with Experiment}
\label{ss:exp}
\begin{figure}
\centerline{\includegraphics[width=5in]{obabko_fig5_vta_exp.eps}}
\caption{Dimensional azimuthal velocity profile versus radius at $z=\frac{H}{4}$ for
experimental data (\fullsquare) at $Re\approx{9300}$ \cite{Sc08},
circular Couette flow (\dotted), and our numerical simulations at
$Re=6200$: three-dimensional (\full), axisymmetric (\dashed)
and noisy axisymmetric (\chain) cases.
{\it Three-dimensional effects are negligible compared to axisymmetric case
both being slightly lower than experimental profile, and the best fit
is achieved in the axisymmetric case with random noise perturbations
applied to the surface of inner cylinder.} }
\label{f:vta:exp}
\end{figure}
We have collaborated with the Princeton MRI liquid gallium experiment
group and conducted a comparison of our computations with their
experimental results. Figure~\ref{f:vta:exp} shows the comparison
of our numerical results for time-averaged azimuthal velocity in the
case with lids at $Re=6200$ with the ideal CCF profile (dotted) and
experimental measurements (squares) conducted by Schartman (2008) at
$Re\approx{9300}$. The solid line corresponds to the axisymmetric
computation while the fully three-dimensional results are shown with
the dashed line. We observe that at this Reynolds number
($Re\approx6200$) three-dimensional time-averaged azimuthal velocity
is very close to the axisymmetric one, both being up to $15\%$ lower
than the experimental data. The difference in Reynolds number is
expected to play only a minor role in this discrepancy.
The best fit of our (axisymmetric) computations (dash-dot line) with
experimental data was realized when the boundary conditions
(\ref{e:BC}) were perturbed with uniform random noise. The amplitude
of the noise was 5\% relative to the corresponding maximums of
axisymmetric solution without noise. The noise perturbation was
applied for the part of the computational domain boundary of one
spectral element long such that $R_1\le{r}<R_1+L_g$ where
$L_g=0.0200$ which is less than twice of the non-dimensional width
of the gap between the inner cylinder and inner ring in the
Princeton experiment equal to 0.0114 or 1.5 mm (Schartman 2008).
The rationale behind the noise perturbation of the boundary
conditions was an attempt to model the effect of the centrifugally
unstable flow in the gap between the inner cylinder and the inner ring. By
accident, the boundary conditions on the inner cylinder surface
($r=R_1$) were also perturbed in this computation, which turned out
to give the best fit to the experimental data. The effect of this
perturbation of the inner cylinder surface boundary condition may be
similar to a random blowing. This leads us to believe that a
combination of the effects due to a run-out of inner cylinder and
due to the centrifugally unstable flow in the gaps between the inner
cylinder and inner ring may explain the discrepancy between the
simulation and experiment (see also Schartman 2008).
Further work on the comparison of the simulations with the experiment is
ongoing, and additional effort is needed to sort out the effects of
run-out, centrifugally unstable flow in the gaps between cylinders
and rings, vertical misalignment of horizontal surfaces of the
rings, etc.
\subsection{Torque and Angular Momentum Transport} \label{ss:amf}
We have also carefully studied the torque behaviour and the associated
angular momentum transport in the hydrodynamical setup of the Princeton
MRI liquid gallium experiment (Schartman 2008) as baseline cases
for our study of magneto-rotational instability (MRI) and MRI-driven
turbulence. In order for the MRI experimental results to have a
clear interpretation, the negative effects of the EC have to be
minimized, so that the torque amplification over the CCF torque
${\cal{T}}_C$ (\ref{e:TC}) with the increase of magnetic field can
be linked directly to the MRI enhancement of angular momentum transport
(AMT). Therefore, the importance of understanding the torque behaviour and
AMT in the baseline cases of hydrodynamical flow cannot be overstated.
An application of torque to the inner cylinder results in a flow
that transports the angular momentum outward and attempts to reach a
shear-free solid body rotation with a constant angular velocity. If
the angular velocities of the other boundaries are different from that
of the inner cylinder, the resulting shear has to be maintained by
application of torques to the boundaries in order to keep the
rotation rates steady. In the context of MRI study, the primary
interest is in the transport of the angular momentum from the inner
to outer cylinder in centrifugally stable regime and, as shown
below, in the minimization of EC and hence, in the reduction of the
net contribution of torques exerted on the horizontal boundaries.
Since the sum of all torques applied to the boundaries reflects the
time increase of interior angular momentum, this contribution from
the horizontal boundaries is equal to the sum of torques applied to
the cylinders in a steady state and has to be minimized for the
successful MRI experiment. Note that in a case of unsteady flow, the
time averaged torques are used when the statistically steady state
is reached.
\begin{figure}
\centerline{\includegraphics[width=5.5in]{obabko_fig6_torque.eps}}
\caption{The magnitudes of normalized torques applied to inner cylinder
($T_i= {\cal T}_1/{\cal T}_C$, open symbols) and outer cylinder
($T_o=-{\cal T}_2/{\cal T}_C$, filled symbols)
versus Reynolds number for the cases with rings
(\ \ \opendiamond \hspace{-4ex} \full \ or \fulldiamond) and with
lids (\ \ \opentriangle \hspace{-4ex} \dotted \ or \fulltriangle)
in axisymmetric cases while the three-dimensional data
are plotted with large symbols.
{\it Being equal in magnitude to the difference of the torques applied to the
inner and outer cylinder, the net torque exerted on the
horizontal boundaries in the case of rings is significantly less
than that in the case of lids, making the former a closer
approximation to the CCF for which the net torque on horizontal
boundaries is zero.} }
\label{f:torque}
\end{figure}
Figure~\ref{f:torque} shows the Reynolds number dependence of the
magnitudes of the steady/time-averaged torques relative to the ideal CCF
torque ${\cal T}_C$ (\ref{e:TC}) that has to be applied on inner and
outer cylinders, $T_i=\frac{{\cal T}_1}{{\cal T}_C}$ (open symbols)
and $T_o=\frac{-{\cal T}_2}{{\cal T}_C}$ (filled symbols),
respectively, in order to keep constant boundary angular velocities
(table~\ref{t:param}). Being equal to unity for the case with
periodic boundary conditions (CCF), the torque magnitudes are shown
for the cases with rings (open diamonds with solid lines and filled
diamonds) and lids (open triangles with dotted lines and filled
triangles). All results are obtained in axisymmetric computations
except for the data plotted with large open symbols that show the
results of fully 3D computations. Note that being zero in the ideal
CCF, the difference between the torques applied to the inner and
outer cylinder shown by open and closed symbols, respectively,
corresponds to the sum of torques exerted on the fluid next to the
horizontal boundaries
$$
T_i-T_o=({\cal T}_1+{\cal T}_2)/{\cal T}_C
$$
due to zero net torque in steady/statistically steady state.
Evidently, the setup with rings has an advantage of a smaller
contribution to the net torque from the horizontal boundaries and of
a smaller difference between inner and outer cylinder torque
magnitudes over the setup with lids where EC is undisturbed. We
also observe that in the range of Reynolds numbers considered the
flow makes a transition from steady axisymmetric solution at
$Re=620$ to the unsteady one at $Re=6200$ with small
three-dimensional effects. Being more significant in the case with
lids, three-dimensionality is expected to play an increasing role
with the further increase of Reynolds number.
\begin{figure}
\centering
\subfloat[Lids]{\includegraphics[width=2.00in]{obabko_fig7a_amf_lid.eps}
}
\hspace{0.5in}
\subfloat[Rings]{\includegraphics[width=2.00in]{obabko_fig7b_amf.eps}
}
\caption{Steady state contour lines of effective angular momentum
flux function $\tilde{\Psi}$
for the case of $Re=620$ with lids in the range from -3.30 to 3.30 in the increment of
0.31 (a) and with rings in the range from -1.35 to 1.35 in the increment
of 0.21 (b).
{\it In the case of lids, most of the flux lines
that originate from inner cylinder terminate at horizontal boundaries as
opposed to the case of rings where they end up mostly at the outer cylinder
which is similar to the CCF angular momentum transport between the
cylinders.}}
\label{f:amf:lid:ring}
\end{figure}
In order to illustrate the spatial variations of AMT, we have computed
an effective angular momentum flux function defined in
\ref{s:app:amt} by analogy with a streamfunction. The contours of
the effective flux function show the (flux) lines along which the
angular momentum is transported, and the difference between the
values of the flux function at two points gives the total flux
across the segment of conical or cylindrical surfaces on which these
points lie. Figure~\ref{f:amf:lid:ring} shows steady state contour
lines of constant increment for effective angular momentum flux
function $\tilde{\Psi}$ (\ref{e:Psi:tilde}) for the case of $Re=620$
with lids (a) and rings (b). For comparison, we note that the flux
lines of (purely viscous) AMT for the ideal CCF (\ref{e:CCF}) are
the straight lines from inner to outer cylinder along $z=$const.
Despite the fact that in both cases only a single line of
$\tilde{\Psi}=0$ (i.e.~the line of symmetry) is the same as in the CCF case,
the case with rings exhibits a similar transport of angular
momentum along flux lines that mostly originate at the inner
cylinder and terminate at the outer cylinder, in contrast to the
termination of the flux lines at the lids. The latter indicates
that in the cases with lids, the angular momentum transport is
mostly between the inner cylinder and the horizontal boundaries
contrary to more desirable CCF-like transport between the cylinders
observed in the cases with rings. Also note that the similarity
between the shape of the flux lines away from the boundaries in
figure~\ref{f:amf:lid:ring} and the shape of vorticity contour lines
and poloidal vector lines in figure~\ref{f:lid:ring} can be
explained through the creation of strong $V_r$ and $V_z$ components of
velocity due to the Ekman flows, which affect the AMT flux through the advective
contributions $F^a_{rz}$ and $F^a_{zz}$, respectively, given by
relations (\ref{e:amf:a:r}) and (\ref{e:amf:a:z}).
In summary, if the ultimate objective is to achieve the flow with
AMT as close to the ideal CCF as possible, the design with rings
seems to have an advantage over the setup with lids.
\section{Conclusion and Future Work}
\label{s:concl}
In this paper we have presented axisymmetric and fully
three-dimensional Navier-Stokes calculations of circular Couette
flow (CCF) in a cylindrical annulus as the first step in our study
of magneto-rotational instability (MRI) and MRI-driven turbulence.
Inspired by Princeton MRI liquid gallium experiment, we have
computed the flow field in their experimental setup for realistic
horizontal boundary conditions of `lids' and `rings' with the
increase of Reynolds number through the onset of unsteadiness and
three-dimensionality. The presented analysis of the flow field and
angular momentum transport (AMT) allowed us to propose an
explanation of the mechanism that determines the fate of the
boundary flows and Ekman circulation (EC) as a result of a
competition between the effects of `centrifugal' rotation and
pressure gradient set by rotation of, respectively, horizontal
surfaces and bulk of the flow. In particular, with the appropriate
choice of rotation rates of the horizontal rings that control the
angle at which the vertical jets are launched near the ring
boundaries, EC can be greatly diminished and a CCF-like flow can be
restored, which is more appropriate for further experimental studies
of MRI saturation and enhanced AMT. In addition, our numerical
results compare favourably with the experimental data with the
maximum deviation below 15\% being considerably smaller in the cases
with `noisy' boundary conditions. Future work, therefore,
should involve higher Reynolds number computations with even more
detailed modelling of experimental geometry that includes among
others the effects of run-out of the inner cylinder, finite gaps
between the cylinders and rings, and vertical misalignment of
horizontal surfaces of the rings.
\ack
We acknowledge the support of National Science Foundation sponsored
Physics Frontier Centre for Magnetic Self-Organization in Laboratory
and Astrophysical Plasma (CMSO), and the use of computational
resources of Argonne Leadership Computing Facility (ALCF) operated
by Argonne National Laboratory and of the National Energy Research
Scientific Computing Center (NERSC) at Lawrence Berkeley National
Laboratory supported by the Office of Science of U.S. Department Of
Energy (DOE) under Contract No.~DE-AC02-05CH11231. The work was
also partially supported by NASA, grant number NNG04GD90G, and by
the Office of Science of the U.S. DOE under Contract
No.~W-31-109-Eng-38. We are also grateful to Ethan Schartman,
Michael Burin, Jeremy Goodman and Hantao Ji and to Leonid Malyshkin.
Many thanks to Aspen Center for Physics and to International Centre
for Theoretical Physics, Trieste, Italy and especially to Snezhana
Abarzhi for the invitation to participate in the First International
Conference ``Turbulent Mixing and Beyond,'' encouragement and
discussions concerning this work.
\section*{References}
|
1,116,691,499,883 | arxiv | \section{Introduction}
HH~111 (discovered by Reipurth 1989) is one of the two most remarkable
and best studied Herbig-Haro (HH) jets, the other one being HH~34
(shown to be a jet by Reipurth et al. 1986). The past observational
studies of HH 111 include:
\begin{itemize}
\item ground based (Reipurth et al. 1992; Podio et al. 2006) and
HST (Raga et al. 2002a) high and low resolution long-slit spectra,
\item optical (ground based: Reipurth et al. 1992; HST: Hartigan
et al. 2001) and IR (Coppin et al. 1998) proper motions,
\item evidence of multiplicity of the outflow source (Gredel \&
Reipurth 1993; Reipurth et al. 1999; Reipurth et al. 2000;
Noriega-Crespo et al. 2011),
\item discovery of an associated giant jet (Reipurth et al. 1997a),
\item detection of an associated, very well collimated molecular
outflow (Cernicharo \& Reipurth 1996; Nagar et al. 1997; Hatchell
et al. 1999; Lefloch et al. 2007)
\end{itemize}
Numerical simulations of variable jets calculated specifically for
modelling the observational properties of HH~111 were presented by
Masciadri et al. (2002) and Raga et al. (2002b).
Optically, the HH~111 system has a one-sided jet that appears at
$\sim 15''$ from an obscured source (detected at radio wavelengths
by Reipurth et al. 1999), extending to a distance of $\sim 40''$ W
from the source. However, a much more symmetric jet/counterjet
structure extending down to the position of the source is observed
in ground based (Gredel \& Reipurth 1994; Davis et al. 1994), HST
(Reipurth et al. 1999) and Spitzer (Noriega-Crespo et al. 2011) IR
images. At angular scales $>2'$ from the source, both of the outflow
lobes are detected optically, with total extent of $\sim 2^\circ$
for the whole system (see Reipurth et al. 1997a).
The kinematics of the HH~34 jet have been studied spectroscopically
(for a limited set of optical emission lines) with full spatial
coverage by Beck et al. (2007) and Rodr\'\i guez-Gonz\'alez et al.
(2012). Our present paper describes similar observations (i.e.,
high resolution spectroscopy with full spatial coverage), but for
the HH~111 jet. In our Integral Field Unit (IFU) Gemini North
spectra the [S II] 6716, 6731; H$\alpha$; [N II] 6548, 6583 and [O
I] 6300, 6360 lines are detected, and have a high enough signal-to-noise
ratio so that velocity channel maps can be generated.
The paper is organized as follows. In section 2 we describe the
observations and the data reduction. The results are described in
section 3. Finally, the work is summarized in section 4.
\section{Observations and data reduction}
\subsection{The observations}
HH~111 was observed at the Gemini North observatory in Oct 2007 and Nov
2010, using the GMOS instrument in IFU mode (Allington-Smith et al. 2002)
under the program GN-2007B-Q-9. We used the IFU in the
single slit mode with the R831\_G5302 grating, giving a resolution of
$R=4396$ at 7570\AA. In the single slit mode, the IFU field of
view is $5\arcsec \times 3\arcsec5$, with each lens covering $0\arcsec2$
on the sky.
\begin{table}\label{Table 1}
\begin{center}
\caption{Target coordinates and offsets}
\label{tab1}
\begin{tabular}{ccccc}
\tableline
\tableline
Target & RA & DEC & $\Delta p$ & $\Delta q$\tablenotemark{a}\\
& (h:m:s) & ($^{\circ}$:$^{\prime}$:$^{{\prime}{\prime}}$) & ($^{{\prime}{\prime}}$) & ($^{{\prime}{\prime}}$) \\
\tableline
IRAS 05491+0247\tablenotemark{b} &
05:51:46.25 & 02:48:29.5 & - & - \\
field 1 & 05:51:43.7 & 02:48:34.1 & - & - \\
field 2 & & & 0 & 4.4 \\
field 3 & & & 0 & 8.8 \\
field 4 & & & 2.9 & 5.6 \\
field 5 & & & -2.9 & 5.6 \\
field 6 & & & 0 & -4.4 \\
field 7 & & & 0.214 & -9.07 \\
field 8 & & & 0.214 & -13.43 \\
\tableline
\end{tabular}
\tablenotetext{a}{The offsets $\Delta p$ and $\Delta q$ are given
in arcsec, and refer to displacements perpendicular ($p$) and
parallel ($q$) to the HH 111 PA, here defined as $\sim$ 83$^{\circ}$.
Field 1 is the reference for the displacement shifts.}
\tablenotetext{b}{Coordinates of the HH 111 IRS (Reipurth et al. 2000).}
\end{center}
\end{table}
The observations were made under exceptional seeing conditions.
Using the R-band images, the seeing estimates range from $0\arcsec44$
to $0\arcsec51$ for the first epoch observations (2007) and from
$0\arcsec55$ to $0\arcsec60$ for the second one (2010).
\begin{figure}
\centerline{
\includegraphics[width=7.5cm]{f1.pdf}
}
\caption{Sketch of the eight fields observed with the GMOS-IFU
superimposed on the H$\alpha$ pre-image of HH 111 taken with the
Gemini North Telescope. The observed fields are labeled by numbers
from 1 (centered on the bright knot J) to 8 (finishing the mosaic
in the double knot E). The knots inside the HH 111 jet are labeled
by capital letters from E to L, following the nomenclature provided
by Reipurth (1989) and Raga et al. (2002a). The offsets between each field
are indicated in Table \ref{tab1}. The N-E axes are indicated in the
Figure, as well as a distance scale.}
\label{fig1}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[scale=0.32]{f2a.pdf}
\includegraphics[scale=0.32]{f2b.pdf}
\includegraphics[scale=0.32]{f2c.pdf}
}
\caption{{\it Left:} H$\alpha$ sub-cube of field 8 without any
filtering process ($3\arcsec5 \times 5\arcsec0$). {\it Middle:} The
sum of the first two eigenvectors, after PCA analysis of the data
cube in the spectral region defined by H$\alpha$, but excluding the
emission. {\it Right:} First image (left panel)
minus the low-frequency noise (middle panel). The results show that
the instrumental fingerprint has been removed from the original
data, without any introduction of artificial structures.}
\label{fig2}
\end{figure}
H$\alpha$ narrow-band filter pre-images (on and off) were also taken
(on August 22nd, 2007) to position the IFU field with higher accuracy.
In order to cover the target, we observed the 8 fields shown in
Figure \ref{fig1}. Each field was observed with a total exposure
of 1200s (three 400s exposures, which were then median averaged to
reduce the cosmic ray contamination). Fields 1 to 6 were observed
on October 19th 2007 and fields 7 and 8 on November 16th 2007.
Fields 1 to 6 were re-observed on November 2nd 2010, due to bad CCD
settings during the original observations. Overlaps between the
different fields have been set to $0\arcsec4$ (corresponding to two
rows of IFU lenses).
Table \ref{tab1} gives the coordinates of the HH 111 IRS outflow
source (IRAS 05491+0247; see, Rodr\'{i}guez \& Reipurth 1994;
Rodr\'{i}guez et al. 1998; Reipurth et al. 2000) and the exact
offsets of the GMOS-IFU observed fields. The offsets are in arcsec
and are taken with respect to the center of field 1 (center of knot
J, see Figure \ref{fig1}). The offsets are perpendicular ($\Delta
p$) and parallel ($\Delta q$) to the HH 111 outflow axis, which is
at a position angle PA=$-83.94^{\circ}$ (Raga et al. 2002a).
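A minimal Python sketch of this projection of sky offsets onto the jet
frame is given below; the input offsets and the sign convention adopted for
$\Delta p$ are illustrative assumptions and may differ from those used for
Table~\ref{tab1}:
\begin{verbatim}
import numpy as np

# Project (East, North) offsets (arcsec, with the cos(Dec) factor
# already included in the East offset) onto axes parallel (q) and
# perpendicular (p) to the jet, for a position angle PA measured
# from North through East.
PA = np.deg2rad(-83.94)
d_east, d_north = 2.9, 5.6               # illustrative offsets

dq = d_east * np.sin(PA) + d_north * np.cos(PA)   # along the jet axis
dp = d_east * np.cos(PA) - d_north * np.sin(PA)   # across the jet axis
print(f"dp = {dp:.2f}  dq = {dq:.2f}  (arcsec)")
\end{verbatim}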
\subsection{Data reduction}
The data have been reduced using the special reduction package provided
by the Gemini staff\footnote{http://www.gemini.edu/node/10795} using
the IRAF reduction package\footnote{The Image Reduction and Analysis
Facility is software developed by the National Optical Astronomy
Observatory - iraf.noao.edu}. All raw images have been bias
corrected, trimmed and flat fielded. Flatfield images have also
been used to locate the positions of the 1500 lenses on the frame.
Twilight images were used to estimate the grating response and, after
the extraction of the spectra for each lens, each one has been corrected
using this normalized response. Arcs from the CuAr lamp have also
been taken for the wavelength calibration. Using the bright [O I] night
sky line at 5577.338 \AA, we have estimated the wavelength calibration
accuracy to be 0.1 \AA. The last step of the reduction was the sky
subtraction using the field located at 1\arcmin~ from the science
field. Finally the data cube has been created with the GFCUBE task
using a spatial resampling of 0.1\arcsec~ per pixel. The total
spectral coverage goes from 4835.896 \AA~ to 6957.800 \AA. The
spectral sampling is 0.339 \AA~ per spectral pixel.
We then extracted sub-cubes near the H$\alpha$, [O I]$\lambda\lambda$
6300,6364, [N II]$\lambda\lambda$ 6548,6584 and [S II]$\lambda\lambda$
6716,6731 emission lines, for each of the eight observed fields.
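A minimal Python sketch of such an extraction, assuming the linear
wavelength solution quoted above, a cube stored with the spectral axis
first and a hypothetical file name, is:
\begin{verbatim}
import numpy as np
from astropy.io import fits

LAM0, DLAM = 4835.896, 0.339          # A, A per spectral pixel

def subcube(cube, lam_c, half_width=15.0):
    """Slice a (n_lambda, ny, nx) cube around lam_c +/- half_width (A)."""
    i0 = int(round((lam_c - half_width - LAM0) / DLAM))
    i1 = int(round((lam_c + half_width - LAM0) / DLAM)) + 1
    return cube[i0:i1]

cube = fits.getdata("field1_cube.fits")          # hypothetical file name
halpha = subcube(cube, 6562.8)
sii_6716 = subcube(cube, 6716.4)
sii_6731 = subcube(cube, 6730.8)
\end{verbatim}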
\begin{figure*}
\centerline{
\includegraphics[scale=0.5]{f3a.pdf}
\includegraphics[scale=0.5]{f3b.pdf}
\includegraphics[scale=0.5]{f3c.pdf}
\includegraphics[scale=0.5]{f3d.pdf}
}
\caption{{\it From left to right:} Mosaics of [S
II]$\lambda$6716+$\lambda$6731 for the integrated line profile
(left), and for three different radial velocity intervals: high
velocity (HVC), medium velocity (MVC) and low velocity (LVC) components.
The intervals are defined by: $v_{rad} < -100$ km s$^{-1}$ for HVC,
$-100 < v_{rad} < -70$ (in km s$^{-1}$) for the MVC and $v_{rad}
> -70$ km s$^{-1}$ for the LVC. The coordinate axes are in arcsecs
(the origin coincides with the position of the driving source) and
the colorbars are in arbitrary units.}
\label{fig3}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[scale=0.5]{f4a.pdf}
\includegraphics[scale=0.5]{f4b.pdf}
\includegraphics[scale=0.5]{f4c.pdf}
\includegraphics[scale=0.5]{f4d.pdf}
}
\caption{The same as in Figure \ref{fig3} for H$\alpha$.
\label{fig4}}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[scale=0.5]{f5a.pdf}
\includegraphics[scale=0.5]{f5b.pdf}
\includegraphics[scale=0.5]{f5c.pdf}
\includegraphics[scale=0.5]{f5d.pdf}
}
\caption{The same as in Figure \ref{fig3} but for [N II]$\lambda$6548 +
$\lambda$6583.
\label{fig5}}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[scale=0.5]{f6a.pdf}
\includegraphics[scale=0.5]{f6b.pdf}
\includegraphics[scale=0.5]{f6c.pdf}
\includegraphics[scale=0.5]{f6d.pdf}
}
\caption{The same as in Figure \ref{fig3} but for
[O I]$\lambda$6300 + $\lambda$6360.
\label{fig6}}
\end{figure*}
Further image treatment has been conducted using the pipeline
developed by Menezes et al. (2014; see also Ricci, Steiner \& Menezes
2011). It consists of i) a spatial filtering process to remove
high-spatial frequency noise, ii) a Principal Component Analysis
(PCA) to remove instrumental spatial fingerprints and iii) a
Richardson-Lucy deconvolution process. In order to remove the high
spatial frequency noise, we first calculate discrete Fourier
transforms of the images resulting from each data cube, and apply
a low-pass band filter in the frequency space (Gonzalez \& Woods
2002). To this effect we follow the procedure described in Menezes,
Steiner \& Ricci (2014). In particular, we use a Butterworth $H(u,v)$
filter of order $n=6$ (see equation 7 in Menezes et al. 2014). An
inverse Fourier transform is then applied to the filtered data. We
should mention that a similar noise filtering process has already been
applied in the context of long slit spectroscopy of HH jets (Raga
\& Mateo 1998). The high-frequency cleaned data still show
low-frequency noise which we remove using the Principal Component
Analysis (PCA; see Ricci, Steiner \& Menezes 2011 and Steiner et
al. 2009 for a detailed discussion and method presentation). The
scope of this paper is not to present this technique. However, it
is worth mentioning briefly how it works. The PCA technique allows
us to describe the data cube as a linear combination of an orthogonal
basis formed by the eigenvectors of a 2D covariance matrix. Such
a covariance matrix is obtained after reordering all the intensities
in the 3D data cube with the index transformation:
\begin{equation}
\beta = \mu(i-1)+j
\end{equation}
\begin{figure}
\centerline{
\includegraphics[scale=0.38]{f7a.pdf}
\includegraphics[scale=0.38]{f7b.pdf}
}
\caption{Low-velocity and high-velocity components mosaic subtraction
in H$\alpha$ emission. {\it Left:} LVC - 2$\times$HVC. {\it Right:} LVC -
4$\times$HVC.
\label{fig7}}
\end{figure}
\noindent in a data cube with the spatial dimensions $n=\mu \times
\nu$: $1 \le i \le \mu$, $1 \le j \le \nu$. For each spaxel intensity,
$I_{ij\lambda}$, one defines a 2D intensity matrix {\bf I}$_{\beta
\lambda}$, which will be the subject of the PCA analysis. Its
covariance matrix is calculated, and the eigenvectors with their
associated variances are obtained (e.g., Steiner et al. 2009). The
higher its variance, the more representative of the data is a given
eigenvector. As the orthogonality is ensured, we can compose
a tomogram, which is an image that represents the projection of the
original data onto a selected eigenvector (or eigenvectors).
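A minimal Python sketch of this PCA step, with illustrative cube dimensions
standing in for the actual GMOS-IFU sub-cubes, is:
\begin{verbatim}
import numpy as np

mu, nu, nlam = 35, 50, 200                 # illustrative dimensions
cube = np.random.rand(mu, nu, nlam)        # replace with a real sub-cube

I = cube.reshape(mu * nu, nlam)            # flatten (i, j) into beta
I0 = I - I.mean(axis=0)                    # subtract the mean spectrum
C = I0.T @ I0 / (I0.shape[0] - 1)          # (n_lambda x n_lambda) covariance
evals, evecs = np.linalg.eigh(C)           # ascending eigenvalues
order = np.argsort(evals)[::-1]            # sort by decreasing variance
evecs = evecs[:, order]

# Tomogram of eigenvector k: image of the projection coefficients.
k = 0
tomogram = (I0 @ evecs[:, k]).reshape(mu, nu)
\end{verbatim}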
In Figure \ref{fig2} we show a specific observed field
(field 8 in Figure \ref{fig1}), for a given emission line (H$\alpha$),
before any filtering process (a), after the filtering process but
extracting the emission line from data (b), and the (a)-(b) subtraction
in (c). We should mention that the tomogram in (b) was obtained by
adding the first two eigenvectors (without the emission line), which
we assume to correspond to the low-frequency noise, or the instrumental
fingerprint, as described in Ricci et al.(2011).
Finally, we have applied a Richardson-Lucy deconvolution procedure
with a Gaussian PSF, with a constant FWHM of $0\arcsec44$ for fields
7 and 8, and a constant FWHM of $0\arcsec55$ for the fields 1 to
6, as suggested by the seeing in our observations. Six iterations
have been used in order to deconvolve the original data with the PSF
(see equation 10 in Menezes et al. 2014). The whole procedure
(spatial re-sampling, low-noise filtering process, instrumental
fingerprints removal technique and Richardson-Lucy deconvolution)
is explained in detail in Menezes et al. (2014), who developed
the technique\footnote{The routines are available electronically
at the following address:
\url{http://www.astro.iag.usp.br/$\sim$PCAtomography/}.}.
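A minimal Python sketch of such a Richardson--Lucy scheme is given below;
it is a schematic re-implementation with the quoted PSF width and number of
iterations, not the actual pipeline of Menezes et al. (2014):
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(fwhm_pix, size=21):
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=6):
    est = np.full(image.shape, float(image.mean()))
    psf_m = psf[::-1, ::-1]                       # mirrored PSF
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        est *= fftconvolve(image / (conv + 1e-12), psf_m, mode="same")
    return est

psf = gaussian_psf(0.44 / 0.1)     # FWHM = 0.44" with 0.1" spaxels
# deconvolved = richardson_lucy(channel_image, psf)
\end{verbatim}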
\section{Results}
\subsection{Images and velocity channel maps}
We present the spatially resolved line profiles as ``velocity channel
maps'' in three velocity intervals:
\begin{itemize}
\item HVC: a high (negative) velocity channel, with
radial velocities\footnote{The velocities has not been
corrected for the LSR velocity of $+ 23$ km s$^{-1}$
Reipurth 1989.} $v_{rad} < -100$ km s$^{-1}$,
\item MVC: a medium velocity channel, with $-100$ km s$^{-1}$ $<
v_{rad} < -70$ km s$^{-1}$,
\item LVC: a low velocity channel, with $v_{rad} > -70$ km s$^{-1}$.
\end{itemize}
We also present an image (for each spectral line) which consists
of an addition of these three velocity channel maps. The channel
maps and the images have been computed for mosaics composed of the
8 observed IFU fields (see Figure 1), and cover the emission of the
HH~111 jet at distances $\approx 24''\to 50''$ from the outflow
source. The mosaics have been built taking into account the offsets
(given in Table \ref{tab1}).
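A minimal Python sketch of the construction of these channel maps from a
data cube with the spectral axis first, assuming the linear wavelength
solution quoted in the previous section, is:
\begin{verbatim}
import numpy as np

C_KMS = 299792.458
LAM0, DLAM = 4835.896, 0.339          # wavelength solution of the cubes

def channel_map(cube, lam_rest, v_min, v_max):
    """Sum the planes whose radial velocity (km/s, relative to lam_rest,
    no LSR correction) falls in [v_min, v_max)."""
    lam = LAM0 + DLAM * np.arange(cube.shape[0])
    v = C_KMS * (lam - lam_rest) / lam_rest
    sel = (v >= v_min) & (v < v_max)
    return cube[sel].sum(axis=0)

# e.g. the three H-alpha channels:
# hvc = channel_map(cube, 6562.8, -np.inf, -100.0)
# mvc = channel_map(cube, 6562.8, -100.0, -70.0)
# lvc = channel_map(cube, 6562.8, -70.0, np.inf)
\end{verbatim}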
In Figures \ref{fig3}, \ref{fig4}, \ref{fig5} and \ref{fig6} we
show the images and velocity channel maps for [S
II]$\lambda$6716+$\lambda$6731, H$\alpha$, [N
II]$\lambda$6548+$\lambda$6584 and [O I]$\lambda$6300+$\lambda$6360,
respectively. In the [S II] image (left frame of figure \ref{fig3})
we see knots E through L, which can be clearly recognized comparing
the image with the identifications of Reipurth et al. (1992) and
Reipurth et al. (1997b).
In the channel maps of all of the observed lines, it is clear that
there is a general trend of increasing jet width as a function of
decreasing velocity (i.e., in all of the observed emission lines
the jet becomes broader as we go from the HVC to the MVC and to
the LVC, see Figures \ref{fig3}-\ref{fig6}). This result is in
qualitative agreement with the observations of Riera et al. (2001),
who found velocities decreasing away from the outflow axis in two
long-slit spectra cutting the HH~111 jet at the approximate positions
of knots D (not detected in the present observations) and F.
\subsection{Width vs. radial velocity dependence}
In order to illustrate the broadening of the jet at lower (i.e.,
less negative) radial velocities, in Figure 7 we show the LVC$-$HVC
channel map subtraction for the H$\alpha$ line. This subtraction
map shows that at all positions along the observed region of the
HH~111, the high radial velocity emission is restricted to a narrow,
central region of the jet cross section.
One can quantify the width vs. radial velocity dependence by measuring
the FWHM of the emission on the HVC, MVC and LVC channel maps. For
all of the emission lines, we measure the widths on the three channel
maps at the positions of knots E, F, G, H, J and L, integrating the
emission in bins of $1''$ along the jet (centered at the positions
of the knots). The results of this procedure are shown in Figure
8, which shows the following features\footnote{The determination
of the FWHM for the knots G1, J, K1 and L, using the [N II] and [O
I] lines shows high uncertainties due to their low S/N
(see Figures \ref{fig5} and \ref{fig6}). In a few cases (knots
G1, H and K1; see Figure \ref{fig8}) we were not able
to determine the [N II] FWHM for some (if not all) velocity
components.}
\begin{itemize}
\item there is a systematic increase of width as a function of
decreasing radial velocity for all the knots, and in all emission
lines,
\item with the exception of the very broad knot E, there is a general
trend of increasing width vs. distance from the source,
\item in the lower excitation lines ([O I] and [S II]), the widths
are somewhat lower than in the higher excitation lines (H$\alpha$
and [N II]).
\end{itemize}
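A minimal Python sketch of such a width measurement, fitting a Gaussian to
a (here synthetic) transverse intensity profile, is:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, x0, sigma, base):
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + base

def fwhm_arcsec(x, profile):
    p0 = [profile.max() - profile.min(), x[np.argmax(profile)],
          0.3, profile.min()]
    popt, _ = curve_fit(gauss, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])

x = np.arange(-1.7, 1.8, 0.1)             # cross-jet coordinate, 0.1" spaxels
profile = gauss(x, 1.0, 0.0, 0.25, 0.05)  # synthetic test profile
print(f"FWHM = {fwhm_arcsec(x, profile):.2f} arcsec")
\end{verbatim}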
\begin{figure*}
\centerline{
\includegraphics[scale=0.6]{f8.pdf}
}
\caption{FWHM (in arcsecs) of the emission on the HVC, MVC and LVC
channel maps for all of the emission lines (indicated on the top
of the figure), at the positions of knots E, F, G, H, J and L
(indicated on the right of the figure). The FWHM has been obtained
integrating the emission in bins of $1\arcsec0$ along the jet
(centered at the positions of the knots) and fitting a Gaussian to
the transverse profile of each knot. For the [N II] line we found it difficult
to fit a profile due to the low S/N in some knots and for some
velocity components (see also Figure \ref{fig5}). For these cases,
the FWHM appears in the histogram as a zero.
The H$\alpha$ and [N II] histograms for
knot L have the FWHM axis divided by a factor of 2.}
\label{fig8}
\end{figure*}
This last effect has been noted by Reipurth et al. (1997), who note
that the H$\alpha$ emission is broader than the [S II] emission
(see their Figure 6, in which they present an H$\alpha$-[S II]
subtraction map). In our observations, we see that in general the [N II]
emission has widths comparable to the H$\alpha$ widths, and that
the [O I] emission has narrower widths, comparable to the ones of
the [S II] emission.
\subsection{The [S II] 6716/6731 line ratios}
In Figure \ref{fig9} we present the spatial distribution of the [S
II]$\lambda$6716/[S II]$\lambda$6731 line ratio for the integrated
line profiles and for the HVC, MVC and LVC maps. This ratio gives
the electron density ($n_e$) of the medium
(Osterbrock 1989). For calculating $n_e$ we assume a temperature of
10$^4$ K, consistent with the temperature of $\sim 1.3\times10^4$
K found by Podio et al. (2006) for the HH~111 jet.
Superimposed on the spatial distribution of the electron density
are the (logarithmic spaced) iso-contours of the [S II]$\lambda$6716
+ $\lambda$6731 integrated intensity. The
integrated map (left) shows that the electron density ranges from
$\sim 1.5\times10^3$ cm$^{-3}$ at knot E (at $\sim$ 26 arcsec from
the outflow source) up to $5\times10^3$ cm $^{-3}$ in knot F (at
$\sim 2\arcsec5$ from knot E; see Figure \ref{fig9}). It then drops
to $\sim 2\times10^3$ cm$^{-3}$ between knots F and H. In knot H,
the electron density has a strong peak of $n_e\sim 3.5\times10^3$
cm$^{-3}$.
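A minimal Python sketch of this conversion, using the publicly available
PyNeb package (the call follows its documented interface but should be
verified against the installed version) and an illustrative line ratio, is:
\begin{verbatim}
import pyneb as pn

S2 = pn.Atom('S', 2)                  # the S+ ion
ratio = 0.80                          # illustrative 6716/6731 ratio
ne = S2.getTemDen(ratio, tem=1.0e4, wave1=6716, wave2=6731)
print(f"n_e ~ {ne:.0f} cm^-3")
\end{verbatim}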
Interestingly, the electron density maps of the three velocity
channels show systematically increasing densities for decreasing
radial velocities. This can be clearly seen at the positions of
knots E, F and J. Also, an increase in $n_e$ can be seen in the
region between knots F and G when going from the MVC to the LVC
electron density maps (see the two frames on the right of Figure
9).
\subsection{Knot E}
We now focus on the interesting structure of knot E (at $\approx
26''$ from the source, see Figures 3-7). The structures observed
in the images and channel maps of this knot are shown in Figure 10.
\begin{figure*}
\centerline{
\includegraphics[scale=0.6]{f9a.pdf}
\includegraphics[scale=0.6]{f9b.pdf}
\includegraphics[scale=0.6]{f9c.pdf}
\includegraphics[scale=0.6]{f9d.pdf}
}
\caption{{\it From left to right:} Mosaics of [S II]$\lambda$6716/[S
II]$\lambda$6731 line ratio for the integrated line profile, and
for three different radial velocity intervals: high velocity (HVC),
medium velocity (MVC) and low velocity (LVC) components. The intervals
are defined by: $v_{rad} < -100$ km s$^{-1}$ for HVC, $-100 < v_{rad}
< -70$ (in km s$^{-1}$) for the MVC and $v_{rad} > -70$ km s$^{-1}$
for the LVC. The coordinate axes are in arcsecs
and the colorbar indicates the electron density in cm$^{-3}$.}
\label{fig9}
\end{figure*}
We see that in the H$\alpha$ and [N II] images (2nd and 3rd
rows of the far left column of
Figure 10) knot E shows two side-by-side peaks. These peaks have
been observed in previous ground based (Reipurth et al. 1992) and
HST (Reipurth et al. 1997) H$\alpha$ images of HH~111. Most
interestingly, the two side-by-side peaks are well separated in the
LVC H$\alpha$ and [N II] maps (right column of Figure 10), approach
each other in the MVC map, and merge into a single, central peak
in the HVC map. Therefore, we find that the two side-by-side peaks
(previously observed in H$\alpha$ images, see Reipurth et al. 1997)
actually enclose a fainter, higher radial velocity emission region.
In the lower excitation [S II] and [O I] lines, knot E shows two
low intensity side-by-side peaks only in the LVC maps (1st and
4th rows of the far right column
of Figure 10), and these peaks are not present in the MVC and HVC
maps. The images of knot E in these lines (1st and 4th rows
of the far left column of Figure
10) show a conical structure which resembles the MVC emission.
\section{Summary}
We have obtained IFU spectra of the HH~111 jet at distances of
$24\to 49''$ from the outflow source. From the spectra, we obtain
position-velocity cubes for the [S II] 6716, 6731; H$\alpha$; [N
II] 6548, 6583 and [O I] 6300, 6360 emission lines.
We find a number of interesting results:
\begin{itemize}
\item we find that for all emission lines we have an increase in
the observed width of the jet for decreasing radial velocities.
This effect can be seen directly in the velocity channel maps
(Figures 3-6), in subtractions of pairs of velocity channel maps
(Figure 7) or more quantitatively in the widths measured for the jet
knots (Figure 8). A broadening in the HH~111 jet for decreasing
velocities has been previously observed by Riera et al. (2001), and
a qualitatively similar effect is seen in the CO emission (but at
widths in excess of $\sim 10''$, see Lefloch et al. 2007),
\item we find that the HH 111 jet is narrower in the lower ([S~II]
and [O~I]) than in the higher (H$\alpha$ and [N~II]) excitation
lines in the vicinity of some knots (namely, knots E, F, K1 and
L). This effect has been previously seen in the H$\alpha$ and
[S~II] HST images of Reipurth et al. (1997). These previous
observations suggested that we might be seeing a sheath of non-radiative
shocks (seen only in H$\alpha$) surrounding the HH~111 jet beam,
but the detection of broad [N II] emission indicates that the shocks
in the sheath are indeed radiative,
\item from the [S~II] 6716/6731 line ratio we systematically find
larger electron densities at lower radial velocities (see Figure
9),
\item the ``twin peak'' structure of knot E is clearly seen in the
H$\alpha$ and [N II] images, but these two peaks merge into a central
peak in the [O I] and [S II] images. In all of the lines, the twin
peaks are seen at low radial velocities (see the LVC maps of Figure
10), but they approach and merge into a single peak at higher radial
velocities.
\end{itemize}
These characteristics can be qualitatively explained in terms of
an ``internal working surface'' model, in which the knots are
produced as a result of an outflow velocity time-variability. In
such a model, the variability produces pairs of shocks that travel
down the jet beam, producing a high pressure region that ejects
material into the cocoon of the jet. The observations of larger
widths for the higher excitation emission would then be interpreted
as higher velocity shocks produced by the material in this sideways
ejection against a slower moving cocoon. The low excitation emission
would be interpreted as coming from lower velocity shocks within
the jet beam (produced by the outflow velocity variability).
\begin{figure*}
\centerline{
\includegraphics[scale=0.28]{f10_11.pdf}
\includegraphics[scale=0.255]{f10_12.pdf}
\includegraphics[scale=0.255]{f10_13.pdf}
\includegraphics[scale=0.255]{f10_14.pdf}
}
\centerline{
\includegraphics[scale=0.28]{f10_21.pdf}
\includegraphics[scale=0.255]{f10_22.pdf}
\includegraphics[scale=0.255]{f10_23.pdf}
\includegraphics[scale=0.255]{f10_24.pdf}
}
\centerline{
\includegraphics[scale=0.28]{f10_31.pdf}
\includegraphics[scale=0.255]{f10_32.pdf}
\includegraphics[scale=0.255]{f10_33.pdf}
\includegraphics[scale=0.255]{f10_34.pdf}
}
\centerline{
\includegraphics[scale=0.28]{f10_41.pdf}
\includegraphics[scale=0.255]{f10_42.pdf}
\includegraphics[scale=0.255]{f10_43.pdf}
\includegraphics[scale=0.255]{f10_44.pdf}
}
\caption{{\it From left to right:} Integrated emission, HVC, MVC
and LVC for (from top to bottom) [S II] , H$\alpha$, [N II] and
[O I] for field 8 which contains the knot E. The axes
are in arcsec and indicate the distance from
the driving source. The maps are in arbitrary units (as
shown by the colorbars).}\label{fig10}
\end{figure*}
The shock structure of internal working surfaces was described by
Raga et al. (1990), and numerical simulations have been done for
reproducing H$\alpha$ images (Raga et al. 2002b) and long-slit
spectra (Masciadri et al. 2002) of the HH~111 jet. The data presented
here clearly provide a substantial increase in the observational
constraints, which should be a challenging test for future models
of jet knots produced by an outflow variability or by other mechanisms
(see e.g., Micono et al. 1998). In particular, our data will give
interesting constraints on the nature of the cross section of the
outflow (Raga et al. 2011) or of the possible presence of two
distinct components (Te\c sileanu et al. 2014).
We can speculate that MHD effects might be present, and might
help to shape the morphology of the HH 111 emission knots as well
as their line profiles. It is worth noting that recent and
sophisticated MHD models predict that beam recollimation (by the
hoop stress) plays an important role in enhancing the
brightness near the jet axis, as well as in accelerating the inner
jet beam, making it faster than the surrounding jet material. This
general result was obtained by Te\c sileanu et al. (2014), Hansen,
Frank \& Hartigan (2014) and Staff et al. (2015), in spite of the
differences in the initial setup in these works. In Hansen, Frank
\& Hartigan (2014), a pure toroidal magnetic field is used to study
the jet propagation, while the disk wind solution is incorporated
in the models from Staff et al. (2014) and Te\c sileanu et al. (2014),
who actually introduced a two component (stellar jet plus disk wind)
flow.
Another feature shown by some HH jets (including some of the
knots studied here) is an increase of the FWHM of the jet beam with
distance from the driving source. This feature has been predicted
in the models presented by Te\c sileanu et al. (2014) and compared
with available data from RW Aur, HH 30 and HL Tau, while the solutions
presented in Staff et al. (2014) were compared with data from DG
Tau, HN Tau, RW Aur and UZ Tau. In both cases, the distances covered
by the simulations were suitable for the study of the microjets
($\sim 100$ AU). Whether or not these effects are still important
farther away from the driving source (the knot E is at $\sim 10\,000$
AU from the VLA 1 source) is an issue that should be addressed
properly in future work.
\acknowledgments
We would like to thank the referee for his/her suggestions. We are
grateful to R. Carrasco and B. Miller (from the Gemini South telescope),
and R. Schiavon (Liverpool John Moores University) for helping us with
data reduction and scripts. We are thankful to T. Ricci and J.
Steiner for enlightening discussions about the PCA technique and
its applications to data cubes, and for providing the pipeline for
data reduction. AHC, MJV and HP thank CNPq/CAPES for financial
support through the PROCAD project (552236/2011-0) and the CAPES/CNPq
Science without Borders program, under grants 2168/13-8 (AHC) and
2565/13-7 (MJV). J. Feitosa was fully supported by an INCT-A
scholarship. AR acknowledges support from CONACyT grants 101356,
101975 and 167611 and the DGAPA-UNAM grants IN105312 and IG100214.
This paper was submitted during the sabbatical leave of AHC and MJV
in Grenoble (IPAG/UJF), and we are very thankful
to J. Bouvier and J. Ferreira for their warm hospitality.
{\it Facilities:} \facility{Gemini (GMOS)}.
\section{Introduction}
Our object of study is the diagonal quartic surface $X \subset
\P^3_\mathbb{Q}$ defined by the equation
\begin{equation} \label{eq:surface}
a_0 X_0^4 + a_1 X_1^4 + a_2 X_2^4 + a_3 X_3^4 = 0
\end{equation}
where $a_0,a_1,a_2,a_3 \in \mathbb{Q}$ are non-zero rational coefficients.
Multiplying the equation~\eqref{eq:surface} through by a constant,
permuting the coefficients, or changing any of the coefficients by a
fourth power gives rise to another equation defining a surface which
is clearly isomorphic (over $\mathbb{Q}$) to the original one. Two diagonal
quartic equations related by such operations will be called
\emph{equivalent}. In particular, after replacing $X$ with an
equivalent surface, we may assume that the coefficients $a_i$ are
integers with no common factor, and that none of them is divisible by
a fourth power.
When we talk about the \emph{reduction} of $X$ modulo some prime $p$,
we mean simply the variety in $\P^3_{\mathbb{F}_p}$ defined by reducing the
equation~\eqref{eq:surface} modulo $p$. Suppose that $p$ is odd.
Then, according to the number of coefficients divisible by $p$, the
reduction at $p$ will be either: a smooth diagonal quartic surface; a
cone over a smooth diagonal quartic curve; (geometrically) a union of
four planes; or a quadruple plane.
\begin{theorem} \label{thm:main}
Let $X$ be the diagonal quartic surface over $\mathbb{Q}$ given
by~\eqref{eq:surface}, and let $H$ be the subgroup of $\mathbb{Q}^\times /
(\mathbb{Q}^\times)^4$ generated by $-1$, $4$ and the quotients $a_i / a_j$.
Suppose that the following conditions are satisfied:
\begin{enumerate}
\item \label{it:els} $X(\mathbb{Q}_v) \neq \emptyset$ for all places $v$ of $\mathbb{Q}$;
\item \label{it:235} $H \cap \{ 2,3,5 \} = \emptyset$;
\item \label{it:maximal} $|H| = 256$;
\item \label{it:reduction} there is some odd prime $p$ which divides
precisely one of the coefficients $a_i$, and does so to an odd
power; moreover, if $p \in \{ 7, 11, 17, 41 \}$, then the reduction
of $X$ modulo $p$ is not equivalent to the cone over the curve $x^4 + y^4 + z^4 = 0$.
\end{enumerate}
Then $\Br X / \Br \mathbb{Q}$ has order $2$, and there is no Brauer--Manin
obstruction to the existence of rational points on $X$.
\end{theorem}
\begin{remark}
It is easy to check that the group $H$ may also be generated by $-1$,
$4$ and $a_i/a_0$ $(i=1,2,3)$. It follows that $H$ has order dividing
$256$.
\end{remark}
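Conditions~\ref{it:235} and~\ref{it:maximal} amount to finite computations in
$\mathbb{Q}^\times / (\mathbb{Q}^\times)^4$ and are easy to mechanise. The following sketch (in
Python, using sympy for factorisation; the function names are ours, and the
script is not part of the results of this article) encodes a class by its
sign together with its prime exponent vector modulo~$4$ and closes the
generating set under multiplication:
\begin{verbatim}
# Sketch: order of the subgroup H of Q^* / (Q^*)^4 generated by -1, 4 and
# the ratios a_i / a_0, plus the check that 2, 3, 5 do not lie in H.
from fractions import Fraction
from sympy import factorint

def cls(q):
    q = Fraction(q)
    sign = 1 if q > 0 else -1
    exps = {}
    for n, s in ((q.numerator, 1), (q.denominator, -1)):
        for pr, e in factorint(abs(n)).items():
            exps[pr] = (exps.get(pr, 0) + s * e) % 4
    return (sign, tuple(sorted((pr, e) for pr, e in exps.items() if e)))

def mult(x, y):
    exps = dict(x[1])
    for pr, e in y[1]:
        exps[pr] = (exps.get(pr, 0) + e) % 4
    return (x[0] * y[0],
            tuple(sorted((pr, e) for pr, e in exps.items() if e)))

def subgroup_H(a):
    gens = [cls(-1), cls(4)] + [cls(Fraction(a[i], a[0])) for i in (1, 2, 3)]
    elems, frontier = {cls(1)}, [cls(1)]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = mult(x, g)
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return elems

# Example: the surface of Section 3 below.
H = subgroup_H([1, 47, -103, -17 * 47 * 103])
print(len(H))                                 # condition 3 requires 256
print([n for n in (2, 3, 5) if cls(n) in H])  # condition 2 requires []
\end{verbatim}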
This theorem combines several ingredients, many of which are already
known. The deepest part is the result, due to Ieronymou, Skorobogatov
and Zarhin~\cite{ISZ}, that condition~\ref{it:235} above implies the
vanishing of the transcendental part of the Brauer group of $X$,
meaning that $\Br X = \Br_1 X$. The calculation that, under
condition~\ref{it:maximal} above, $\Br_1 X / \Br \mathbb{Q}$ has order $2$ can
be found in the tables contained in the author's
thesis~\cite{Bright:thesis}. The new ingredients contained in this
article are a more geometric description of the non-trivial class of
Azumaya algebras on $X$ and a proof that these algebras, under
condition~\ref{it:reduction} above, give no obstruction to the
existence of rational points on $X$.
\subsection{Background}
Let us recall the definition of the Brauer--Manin obstruction; see
Skorobogatov's book~\cite{Skorobogatov:TRP} for more details. Fix a
number field $k$ and a smooth, projective, geometrically irreducible
variety $X$ over $k$. We define the \emph{Brauer group} of $X$ to be
$\Br X = \H^2_\mathrm{\acute{e}t}(X, \mathbb{G}_\textrm{m})$. If $K$ is any field containing $k$, and
$P \in X(K)$ a $K$-point of $X$, then there is an evaluation
homomorphism $\Br X \to \Br K$, $\mathcal{A} \mapsto \mathcal{A}(P)$, which is the
natural map coming from the morphism $P\colon \Spec K \to X$. In
particular, this applies if $K=k_v$ is a completion of $k$.
As $X$ is projective, the set of adelic points of $X$ is $X(\mathbb{A}_k) =
\prod_v X(k_v)$, the product being over all places of $k$. The set
$X(\mathbb{A}_k)$ is non-empty precisely when each $X(k_v)$ is non-empty,
that is, $X$ has points over every completion of $k$. Let $\inv_v
\colon \Br k_v \to \mathbb{Q}/\mathbb{Z}$ be the invariant map. Define the following
subset of the adelic points:
\[
X(\mathbb{A}_k)^{\Br} = \big\{ (P_v) \in X(\mathbb{A}_k) \ \big| \ \sum_v \inv_v \mathcal{A}(P_v) = 0
\text{ for all } \mathcal{A} \in \Br X \big\} \text{.}
\]
Suppose that $X(\mathbb{A}_k)$ is non-empty. By class field theory, the
diagonal image of $X(k)$ is contained in $X(\mathbb{A}_k)^{\Br}$; if in fact
$X(\mathbb{A}_k)^{\Br}$ is empty, then we say there is a \emph{Brauer--Manin
obstruction} to the existence of $k$-rational points on $X$.
Let $\bar{X}$ denote the base change of $X$ to an algebraic closure
$\bar{k}$ of $k$. There is a natural filtration on $\Br X$, given by
$\Br_0 X \subseteq \Br_1 X \subseteq \Br X$, where
\begin{itemize}
\item $\Br_0 X = \im( \Br k \to \Br X)$ consists of the constant
classes in $\Br X$;
\item $\Br_1 X = \ker( \Br X \to \Br \bar{X})$ is the \emph{algebraic}
part of the Brauer group, consisting of those classes which are
split by base change to $\bar{k}$.
\end{itemize}
If $X(\mathbb{A}_k) \neq \emptyset$, then the natural homomorphism $\Br k \to
\Br X$ is injective, and we will think of $\Br k$ as being contained
in $\Br X$. The elements of $\Br k$ do not contribute to the
Brauer--Manin obstruction, so in describing $X(\mathbb{A}_k)^{\Br}$ it is
enough to consider the quotient $\Br X / \Br k$.
Elements of $\Br X \setminus \Br_1 X$ are called
\emph{transcendental}. For certain types of varieties $X$, we know
that $\Br \bar{X} = 0$ and therefore that $\Br X$ is entirely algebraic:
this is true in particular if $X$ is a curve or a rational surface.
It is, however, certainly not true if $X$ is a K3 surface, such as our
diagonal quartic surface. The transcendental part of the Brauer group
of a diagonal quartic surface has been studied by
Ieronymou~\cite{Ieronymou} and by Ieronymou, Skorobogatov and
Zarhin~\cite{ISZ}. The algebraic part of the Brauer group of a
diagonal quartic surface has been studied by the present
author~\cite{Bright:thesis, Bright:JSC-2005}.
\subsection{Outline of the proof}
We will now outline the proof of Theorem~\ref{thm:main}, with the
details postponed to Section~\ref{sec:alg}. As mentioned above, the
first ingredient is the following result of Ieronymou, Skorobogatov
and Zarhin.
\begin{theorem}[{\cite[Corollary~3.3]{ISZ}}]
Let $X$ and $H$ be as in the introduction, and suppose that $H \cap
\{2,3,5\} = \emptyset$. Then $\Br X = \Br_1 X$.
\end{theorem}
So any Brauer--Manin obstruction on $X$ comes entirely from the
algebraic Brauer group. The structure of $\Br_1 X / \Br \mathbb{Q}$ as an
abstract group can be computed using the isomorphism
\[
\Br_1 X / \Br \mathbb{Q} \cong \H^1(\mathbb{Q}, \Pic \bar{X}).
\]
In the case of diagonal quartic surfaces, $\Pic\bar{X}$ is generated by
the classes of the obvious 48 straight lines on $\bar{X}$, and
condition~\ref{it:maximal} of Theorem~\ref{thm:main} ensures that the
Galois action on these lines is the most general possible.
Lemma~\ref{lem:h1pic} below shows that $\Br_1 X / \Br \mathbb{Q}$ is of order
$2$.
It remains to compute the Brauer--Manin obstruction coming from the
non-trivial class in $\Br_1 X / \Br \mathbb{Q}$. In Lemma~\ref{lem:alg}, we
describe explicitly an Azumaya algebra $\mathcal{A}$ which may be defined on
any diagonal quartic surface~\eqref{eq:surface} for which $a_0 a_1 a_2
a_3$ is not a square. Condition~\ref{it:maximal} implies in
particular that $a_0 a_1 a_2 a_3$ is non-square, so the algebra $\mathcal{A}$
is defined on our particular surface.
The proof is completed by Lemma~\ref{lem:obs}. This states that,
given a prime $p$ satisfying condition~\ref{it:reduction} of
Theorem~\ref{thm:main}, the Azumaya algebra $\mathcal{A}$, evaluated at
different points of $X(\mathbb{Q}_p)$, gives invariants of both $0$ and
$\frac{1}{2}$. In particular, $\mathcal{A}$ is not equivalent to a constant
algebra, and provides no obstruction to the existence of rational
points on $X$.
\section{The algebraic Brauer--Manin obstruction}
\label{sec:alg}
In this section we describe an explicit Azumaya algebra on our
diagonal quartic surface. For this purpose we may replace $\mathbb{Q}$ by an
arbitrary number field $k$. Let $X \subset \P^3_k$ be the diagonal
quartic surface~\eqref{eq:surface}, and let $Y \subset \P^3_k$ be the
smooth quadric surface defined by
\[
a_0 Y_0^2 + a_1 Y_1^2 + a_2 Y_2^2 + a_3 Y_3^2 = 0 \text{.}
\]
There is a morphism $\phi\colon X \to Y$ given by $Y_i = X_i^2$. If
$X$ is everywhere locally soluble, then so is $Y$; and, since $Y$ is a
quadric, it follows that $Y$ has a $k$-rational point.
\begin{lemma} \label{lem:alg}
Suppose that $X$ is everywhere locally soluble. Pick a point
$P=[y_0,y_1,y_2,y_3] \in Y(k)$, and let $g \in k[Y_0,Y_1,Y_2,Y_3]$ be
the linear form
\[
g = a_0 y_0 Y_0 + a_1 y_1 Y_1 + a_2 y_2 Y_2 + a_3 y_3 Y_3
\]
defining the tangent plane to $Y$ at $P$. Let
\begin{equation} \label{eq:fdef}
f = \phi^* g = a_0 y_0 X_0^2 + a_1 y_1 X_1^2 + a_2 y_2 X_2^2 + a_3 y_3 X_3^2
\end{equation}
be the quadratic form obtained by pulling $g$ back to $X$. Write
$\theta = a_0 a_1 a_2 a_3$. Then the quaternion algebra $\mathcal{A} =
(\theta, f/X_0^2) \in \Br k(X)$ is an Azumaya algebra on $X$. The
class of $\mathcal{A}$ in $\Br X / \Br k$ is independent of the choice of
$P$.
\end{lemma}
\begin{remark}
Since $f$ is a quadratic form on $X$ and not a rational function, we
divide it by $X_0^2$ to obtain an element of $k(X)^\times$. As
always, when defining a quaternion algebra over a field, $(a,b)$ and
$(a,bc^2)$ give isomorphic algebras. So the choice of $X_0$ here is
completely arbitrary; we could replace it with any other $X_i$ or
indeed any linear form.
\end{remark}
\begin{remark}
The coordinates of the point $P$, and therefore the linear form $g$,
are only defined up to multiplication by a scalar. So the point $P$
only determines the algebra $\mathcal{A}$ up to an element of $\Br k$.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lem:alg}]
If $\theta$ is a square in $k$, then $\mathcal{A}$ is isomorphic to the
algebra of $2\times 2$ matrices over $k(X)$, and the conclusions are
trivially true. So suppose that $\theta$ is not a square in $k$.
As described for example in~\cite[Lemma~11]{SD:PLMS-2000}, to show
that $\mathcal{A}$ is an Azumaya algebra we need to show that the principal
divisor $(f/X_0^2)$ is the norm of a divisor on $X$ defined over
$k(\sqrt{\theta})$. Recall that, over $\bar{\mathbb{Q}}$, $\bar{Y}$ admits two
pencils of straight lines, the classes of which generate $\Pic \bar{Y}
\cong \mathbb{Z}^2$. The tangent plane to $Y$ at $P$ intersects $Y$ in two
lines, $L$ and $L'$, which are each defined over $k(\sqrt{\theta})$
and conjugate over $k$; so the divisor of vanishing of $g$ on $Y$ is
$L+L'$. Let $D = \phi^* L$ be the divisor obtained by pulling $L$
back to $X$, and similarly $D' = \phi^* L'$. Then $(f/X_0^2) = D + D'
- 2 D_0 = N_{k(\sqrt{\theta})/k}(D - D_0)$, where $D_0$ is the divisor
on $X$ defined by $X_0=0$.
Independence of $P$ is a routine calculation, but we reproduce it for
the sake of completeness. Let $P_1 \in Y(k)$ be another point, and
let $g_1$ be the corresponding linear form defining the tangent plane
to $Y$ at $P_1$. Then the divisor of vanishing of $g_1$ on $Y$ is
$L_1 + L_1'$, where $L_1$ is a line, defined over $k(\sqrt{\theta})$
and linearly equivalent to $L$, and $L_1'$ its conjugate over $k$. So
there exists a rational function $h$ on $Y$, defined over
$k(\sqrt{\theta})$, such that $(h) = L - L_1$. Then
\[
(g_1N_{k(\sqrt{\theta})/k}h) = (L_1 + L_1') + (L - L_1) + (L' - L_1')
= L + L'
\]
and so $g_1 N_{k(\sqrt{\theta})/k} h$ is a constant multiple of $g$.
Let $f_1 = \phi^* g_1$; then $f/f_1$ is a constant multiplied by the
norm of a rational function defined over $k(\sqrt{\theta})$, so
$(\theta, f_1/X_0^2)$ differs from $(\theta, f/X_0^2)$ only by a
constant algebra.
\end{proof}
\begin{remark}
Even when $\theta$ is not a square in $k$, it is still possible for
the class of $\mathcal{A}$ in $\Br X / \Br k$ to be trivial. For example,
taking $k=\mathbb{Q}$, the tables of~\cite{Bright:thesis} show that, for any
integers $c_1, c_2$, the diagonal quartic surface
\[
X_0^4 + c_1 X_1^4 + c_2 X_2^4 - c_1^2 c_2^2 X_3^4 = 0
\]
has $\Br_1 X = \Br \mathbb{Q}$. In particular, the algebra $\mathcal{A}$ on this
surface is equivalent to a constant algebra. Note that
Lemma~\ref{lem:obs} below does not apply in this case, since no prime
divides exactly one of the coefficients.
\end{remark}
\begin{lemma}\label{lem:h1pic}
Let $X$ be a diagonal quartic surface over $\mathbb{Q}$. In the notation of
Theorem~\ref{thm:main}, suppose that $|H|=256$. Then $\Br_1 X / \Br
\mathbb{Q}$ is of order $2$.
\end{lemma}
\begin{proof}
This calculation can be found in~\cite{Bright:thesis}, and depends on
the well-known isomorphism $\Br_1 X / \Br \mathbb{Q} \cong \H^1(\mathbb{Q}, \Pic\bar{X})$.
What follows is a brief summary of the calculation. The variety $X$
contains (at least) 48 straight lines: for example, setting
\[
a_0 X_0^4 + a_1 X_1^4 = 0, \qquad a_2 X_2^4 + a_3 X_3^4 = 0
\]
and factorising each side over $\bar{\mathbb{Q}}$ gives equations for 16 lines;
the other 32 are obtained by permuting the indices. The lines are all
defined over the extension $K = \mathbb{Q}(i, \sqrt{2}, \sqrt[4]{a_1/a_0},
\sqrt[4]{a_2/a_0}, \sqrt[4]{a_3/a_0})$. The classes of the lines
generate the Picard group of $X$ over $\bar{\mathbb{Q}}$, which is free of
rank $20$. By the inflation-restriction exact sequence, we have
$\H^1(\mathbb{Q}, \Pic\bar{X}) = \H^1(K/\mathbb{Q}, \Pic X_K)$ and computing this
cohomology group comes down to knowing the Galois group $\Gal(K/\mathbb{Q})$
and its action on the 48 lines. Appendix~A of~\cite{Bright:thesis}
lists the result of this computation for all possible Galois groups
$\Gal(K/\mathbb{Q})$. In particular, case A222 there is the case where $K/\mathbb{Q}$ has
the maximal degree 256, so that the coefficients $a_i$ are ``as general as
possible''. In that case, $\H^1(K/\mathbb{Q}, \Pic X_K)$ is computed to be of
order $2$.
We claim that $[K:\mathbb{Q}] = |H|$, so that condition~\ref{it:maximal} of
Theorem~\ref{thm:main} implies that $X$ falls into case A222
of~\cite{Bright:thesis}. Kummer theory shows that $[K:\mathbb{Q}(i)] = |H'|$,
where $H'$ is the subgroup of $\mathbb{Q}(i)^\times / (\mathbb{Q}(i)^\times)^4$
generated by $4$ and the $a_j/a_0$. The kernel of the natural map $r \colon
\mathbb{Q}^\times / (\mathbb{Q}^\times)^4 \to \mathbb{Q}(i)^\times / (\mathbb{Q}(i)^\times)^4$ is of
order $2$, generated by the class of $-4$; so $H' = r(H)$, and
\[
[K:\mathbb{Q}] = [K:\mathbb{Q}(i)][\mathbb{Q}(i):\mathbb{Q}] = 2|H'| = |H| \text{.}
\]
\end{proof}
\begin{remark}
Further cohomology calculations could show that, under the hypothesis
that $|H|=256$, the algebra $\mathcal{A}$ of Lemma~\ref{lem:alg} represents
the non-trivial class in $\Br X / \Br \mathbb{Q}$. However, there is no need
for this, since in our situation non-triviality is also implied by the
following lemma.
\end{remark}
\begin{lemma} \label{lem:obs}
Let $X$ be a diagonal quartic surface over $\mathbb{Q}$ given by
equation~\eqref{eq:surface}. Let $\mathcal{A}$ be the Azumaya algebra
described in Lemma~\ref{lem:alg}. Suppose that $p$ is an odd prime such
that:
\begin{enumerate}
\item \label{it:cone} $p$ divides precisely one of the coefficients
$a_0, a_1, a_2, a_3$, and does so to an odd power;
\item \label{it:locsol} $X(\mathbb{Q}_p)$ is not empty;
\item \label{it:special} if $p \in \{ 7, 11, 17, 41 \}$, then the
reduction of $X$ modulo $p$ is not equivalent to the cone over the
quartic curve $x^4 + y^4 + z^4 = 0$.
\end{enumerate}
Then $\inv_p \mathcal{A}(Q)$ takes both values $0$ and $\frac{1}{2}$ for $Q
\in X(\mathbb{Q}_p)$. In particular, the class of $\mathcal{A}$ in $\Br X / \Br \mathbb{Q}$
is non-trivial, and $\mathcal{A}$ gives no Brauer--Manin obstruction to the
existence of rational points on $X$.
\end{lemma}
\begin{remark}
Condition~\ref{it:cone} implies, in particular, that $\theta = a_0 a_1
a_2 a_3$ is not a square.
\end{remark}
\begin{remark}
Condition~(\ref{it:locsol}), that $X(\mathbb{Q}_p)$ be non-empty, is automatic
for $p \ge 37$: for the reduction of $X$ modulo $p$ is a cone over a
smooth quartic curve, which has a rational point by the Hasse--Weil
bound. For $p < 37$, one can easily check by a computer search that
the only smooth diagonal quartic curves over $\mathbb{F}_p$ lacking a rational
point are the following (up to equivalence):
\begin{itemize}
\item $x^4 + y^4 + z^4 = 0$ for $p=5$ or $29$;
\item $x^4 + y^4 + 2z^4 = 0$ for $p=5$ or $13$.
\end{itemize}
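The computer search behind this list is elementary; the following brute-force
sketch (in Python, ours, purely for reproducibility) prints, for each odd
prime $p < 37$, the coefficient triples $(1,b,c)$ of diagonal quartic curves
$x^4 + b y^4 + c z^4 = 0$ without an $\mathbb{F}_p$-rational point; up to equivalence
the output should consist precisely of the two families above.
\begin{verbatim}
# Sketch: brute-force search for pointless smooth diagonal quartic curves
# over F_p for odd primes p < 37.  Coefficients are normalised only by
# a = 1, so each pointless curve may be reported several times, once per
# representative of its equivalence class.
def has_point(b, c, p):
    for x in range(p):
        for y in range(p):
            for z in range(p):
                if (x, y, z) == (0, 0, 0):
                    continue
                if (x**4 + b * y**4 + c * z**4) % p == 0:
                    return True
    return False

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    for b in range(1, p):
        for c in range(1, p):
            if not has_point(b, c, p):
                print(p, (1, b, c))
\end{verbatim}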
\end{remark}
\begin{proof}
Suppose, without loss of generality, that $p \mid a_0$.
In constructing $\mathcal{A}$, as described in Lemma~\ref{lem:alg}, we may
choose any point $P \in Y(\mathbb{Q})$ to start from. In particular, we may
choose $P$ such that $y_1, y_2, y_3$ are not all divisible by $p$, for
the following reason. Recall that we have assumed the coefficients
$a_i$ to be fourth-power-free, so that in particular $v_p(a_0) \le 3$.
The original surface $X$ is locally soluble at $p$, so let $[x_0, x_1,
x_2, x_3] \in X(\mathbb{Q}_p)$ with the $x_i$ $p$-adic integers, not all
divisible by $p$. If $x_1, x_2, x_3$ were all divisible by $p$, then
we would have $v_p( a_1 x_1^4 + a_2 x_2^4 + a_3 x_3^4 ) \ge 4$,
whereas $v_p(a_0 x_0^4) \le 3$, and so the defining
equation~\eqref{eq:surface} could not be satisfied. Now $[ x_0^2,
x_1^2, x_2^2, x_3^2]$ is a point of $Y(\mathbb{Q}_p)$, with $x_1, x_2, x_3$
not all divisible by $p$, and so by weak approximation $Y(\mathbb{Q})$
contains a point with the desired property.
Looking at the equation of $Y$ shows that, in fact, at most one of
$y_1, y_2, y_3$ can be divisible by $p$. It would clarify the rest of
the argument if none of $y_1, y_2, y_3$ were divisible by $p$, and the
reader is encouraged to imagine this to be the case; but unfortunately
if $p=3$ it is not always possible.
Starting from such a $P$, we obtain $f$ as in~\eqref{eq:fdef} where
the coefficient of $X_0^2$ is divisible by $p$, but at least one of
the other coefficients is not divisible by $p$. The reduction
$\tilde{f}$ of $f$ modulo $p$ is a non-zero diagonal quadratic form on
$\P^3_{\mathbb{F}_p}$, with no term in $X_0^2$.
We now reduce to a problem over $\mathbb{F}_p$. Let $\tilde{X}$ denote the
reduction of $X$ modulo $p$. Let $\tilde{Q} \in \tilde{X}(\mathbb{F}_p)$ be a
smooth point; then, by Hensel's Lemma, $\tilde{Q}$ lifts to a point $Q
\in X(\mathbb{Q}_p)$. Suppose that $\tilde{f}(\tilde{Q}) \neq 0$, and that
$X_0(Q) \neq 0$. Since $p$ divides $\theta$ to an odd power, the
description of the Hilbert symbol at~\cite[III, Theorem~1]{Serre:CA}
gives
\begin{equation} \label{eq:legendre}
\inv_p \mathcal{A}(Q) = (\theta, f(Q)/X_0^2)_p = (\theta, f(Q))_p
= \leg{\tilde{f}(\tilde{Q})}{p} \text{.}
\end{equation}
Here the leftmost equality is abusing notation slightly, since
$\inv_p$ traditionally takes values in $\{0, \frac{1}{2}\}$ whereas
the Hilbert symbol $(\cdot, \cdot)_p$ takes values in $\{\pm 1\}$.
Since $f$ is of degree $2$, the value $f(Q)$ is defined only up to
squares, and likewise $\tilde{f}(\tilde{Q})$, but the expressions
in~\eqref{eq:legendre} are well defined. The requirement that $X_0(Q)
\neq 0$ is superfluous, since we can always replace $\mathcal{A}$ by the
isomorphic algebra $(\theta, f/X_i^2)$ for some $i \neq 0$ to show
that the conclusion of~\eqref{eq:legendre} still holds.
Now let $C$ be the smooth quartic curve in $\P^2_{\mathbb{F}_p}$ defined by
\[
C : \tilde{a}_1 X_1^4 + \tilde{a}_2 X_2^4 + \tilde{a}_3 X_3^4 = 0
\text{.}
\]
This is, of course, the same as the defining equation of $\tilde{X}$,
but now considered as an equation in only three variables. Any point
of $X(\mathbb{Q}_p)$ reduces to give us a point of $\tilde{X}(\mathbb{F}_p)$ and
hence, forgetting the $X_0$-coordinate, of $C(\mathbb{F}_p)$. Since the
diagonal quadratic form $\tilde{f}$ has no term in $X_0^2$, we can
consider it as a form on $C$. Note that $\tilde{f}$ depends only on
$C$, not on our original variety $X$, since we may also construct
$\tilde{f}$ as follows: the point $\tilde{P} = (\tilde{y}_1,
\tilde{y}_2, \tilde{y}_3)$ lies on the smooth plane conic $Z:
\tilde{a}_1 Y_1^2 + \tilde{a}_2 Y_2^2 + \tilde{a}_3 Y_3^2 = 0$, and
the linear form $\tilde{g}$ defines the tangent line to $Z$ at
$\tilde{P}$. Write $\tilde\phi$ for the map from $C$ to $Z$ given by
$Y_i = X_i^2$; pulling $\tilde{g}$ back under $\tilde\phi$ gives the
form $\tilde{f}$. In particular, this shows that the divisor of
$\tilde{f}$ is a multiple of $2$: for we have $(\tilde{g}) = 2
\tilde{P}$ and therefore $(\tilde{f}) = 2( \tilde\phi^* \tilde{P} )$.
The geometric picture (which is only accurate as long as none of $y_1,
y_2, y_3$ are divisible by $p$) is that $\tilde{f}$ defines a plane
conic which is tangent to $C$ at four distinct points, which are the
four points mapping to $\tilde{P}$ under $\tilde\phi$.
Note also that the divisor $(\tilde{f})/2 = \tilde\phi^* \tilde{P}$ is
not a plane section: as long as none of $y_1, y_2, y_3$ are divisible
by $p$, this divisor consists of four distinct points of the form
$[\pm \alpha, \pm \beta, \pm \gamma]$, with $\alpha, \beta, \gamma$
all non-zero; in characteristic $\neq 2$, such points can never be
collinear. If one of $y_1, y_2, y_3$ is divisible by $p$, then we
move to an extension of $\mathbb{F}_p$, replace $\tilde{P}$ by some
$\tilde{P}'$ for which the above proof does work, and observe that
$\tilde{P}'$ is linearly equivalent to $\tilde{P}$, so $\tilde\phi^*
\tilde{P}'$ is linearly equivalent to $\tilde\phi^* \tilde{P}$, but
$\tilde\phi^* \tilde{P}'$ is not a plane section; therefore neither
can $\tilde\phi^* \tilde{P}$ be a plane section.
By~\eqref{eq:legendre}, it remains to show that the quadratic form
$\tilde{f}$ takes both square and non-square non-zero values on
$C(\mathbb{F}_p)$. Equivalently, we need to show that, for any $c \in
\mathbb{F}_p^\times / (\mathbb{F}_p^\times)^2$, the equations
\begin{equation} \label{eq:cover}
T^2 = c \tilde{f}(X_1, X_2, X_3), \qquad
\tilde{a}_1 X_1^4 + \tilde{a}_2 X_2^4 + \tilde{a}_3 X_3^4 = 0
\end{equation}
have simultaneous solutions with $T$ non-zero. These equations define
a double cover $E_c$ of $C$. As given, $E_c$ is singular at the
points with $T=0$ (which are the points lying over the zeros of
$\tilde{f}$), so we consider its normalisation $E'_c \to E_c$. This
is a smooth double cover of $C$ with the following properties:
\begin{itemize}
\item the morphism $E'_c \to E_c$ is an isomorphism outside $8$
(geometric) points lying over the points of $E_c$ with $T=0$;
\item since the divisor $(\tilde{f})$ is a multiple of $2$, the
quadratic extension of function fields $\mathbb{F}_p(E_c)/\mathbb{F}_p(C)$ is
unramified and hence so is $E'_c \to C$;
\item since the divisor $(\tilde{f})/2$ is not a plane section, this
extension contains no non-trivial extension of $\mathbb{F}_p$ and so $E'_c$
is geometrically irreducible.
\end{itemize}
By the Riemann--Hurwitz formula, $E'_c$ has genus $5$. If $p > 114$,
then the Hasse--Weil bounds show that $E'_c$ has strictly more than
$8$ points over $\mathbb{F}_p$, and so $E_c$ has at least one point with $T
\neq 0$, completing the argument in this case.
It remains to check the cases with $p < 114$. For each prime $p$, we
can take $\tilde{a}_1 = 1$ and let $\tilde{a}_2, \tilde{a}_3$ run
through $\mathbb{F}_p^\times / (\mathbb{F}_p^\times)^4$. A straightforward computer
search shows that the only cases when some $E_c$ fails to have points
are those listed in the statement of the lemma.
\end{proof}
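For completeness we sketch the search from the last paragraph of this proof
(in Python; the helper functions are ours and the script is not part of the
proof). For each curve we fix a single tangent point $\tilde{P}$; changing
$\tilde{P}$ multiplies $\tilde{f}$ by a constant times the square of a
rational function, so whether both residue classes are attained should not
depend on this choice.
\begin{verbatim}
# Sketch: for odd primes p < 114 and coefficients a1 = 1 and a2, a3 running
# over representatives of F_p^* / (F_p^*)^4, report the curves C for which
# the form f~ does not attain both quadratic residue classes on C(F_p),
# i.e. for which some E_c has no point with T != 0.
def fourth_power_reps(p):
    fourth = {pow(t, 4, p) for t in range(1, p)}
    reps, seen = [], set()
    for a in range(1, p):
        orbit = frozenset((a * f) % p for f in fourth)
        if orbit not in seen:
            seen.add(orbit)
            reps.append(a)
    return reps

def conic_point(a1, a2, a3, p):
    # a nonzero solution of a1*y1^2 + a2*y2^2 + a3*y3^2 = 0 over F_p,
    # preferring one with all coordinates nonzero
    best = None
    for y1 in range(p):
        for y2 in range(p):
            for y3 in range(p):
                if (y1, y2, y3) == (0, 0, 0):
                    continue
                if (a1*y1*y1 + a2*y2*y2 + a3*y3*y3) % p == 0:
                    if y1 and y2 and y3:
                        return y1, y2, y3
                    best = best or (y1, y2, y3)
    return best

def classes_hit(a1, a2, a3, p):
    y1, y2, y3 = conic_point(a1, a2, a3, p)
    hit = set()
    for x1 in range(p):
        for x2 in range(p):
            for x3 in range(p):
                if (x1, x2, x3) == (0, 0, 0):
                    continue
                if (a1*x1**4 + a2*x2**4 + a3*x3**4) % p == 0:
                    f = (a1*y1*x1*x1 + a2*y2*x2*x2 + a3*y3*x3*x3) % p
                    if f:
                        hit.add(pow(f, (p - 1) // 2, p))  # 1 or p-1
    return hit

primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
          67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113]
for p in primes:
    for a2 in fourth_power_reps(p):
        for a3 in fourth_power_reps(p):
            if len(classes_hit(1, a2, a3, p)) < 2:
                print(p, a2, a3)
\end{verbatim}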
\begin{remark}
With a slightly longer argument, we could avoid having to throw away
the points on $E_c$ with $T=0$. By taking two different $\tilde{P}$s
to start with, we obtain two different $\tilde{f}$s with no common
zeros on $C$. The ratio of the $\tilde{f}$s is a square, and the
corresponding equations~\eqref{eq:cover} patch together to give a
description of $E'_c$ with no singularities. The sophisticated reader
will recognise $E'_c$ as a torsor under $\boldsymbol{\mu}_2$
corresponding to the class $(\tilde{f}/X_1^2) \in \Pic C[2]$.
\end{remark}
\section{A counterexample}
In this section we present a counterexample showing that
Theorem~\ref{thm:main} can fail when condition~\ref{it:reduction} is
not met. We begin by giving an infinite family of diagonal quartics
satisfying conditions~\ref{it:els}--\ref{it:maximal} of
Theorem~\ref{thm:main}, but not condition~\ref{it:reduction}.
\begin{lemma} \label{lem:eg1}
Let $p,q$ be odd primes satisfying the following properties:
\begin{itemize}
\item $p \equiv q \equiv 3 \pmod 4$;
\item $p$ and $q$ are both fourth powers modulo $17$;
\item $\leg{p}{q}=1$.
\end{itemize}
Then the diagonal quartic surface
\[
X_0^4 + q X_1^4 = p X_2^4 + 17 pq X_3^4
\]
satisfies conditions~\ref{it:els}--\ref{it:maximal} of
Theorem~\ref{thm:main}.
\end{lemma}
\begin{proof}
Conditions~\ref{it:235} and~\ref{it:maximal} are clear, since there
are no non-obvious relations between the generators for $H = \langle
-1, 4, p, q, 17 \rangle$. It remains to prove local solubility. For
$\mathbb{R}$ this is clear. For primes $\ell \ge 23$ of good
reduction, the Weil conjectures guarantee a point modulo $\ell$ and
hence a point over $\mathbb{Q}_\ell$ by Hensel's Lemma. At
$\ell=3,7,11,13,17,19$, a computer search shows that every smooth
diagonal quartic surface has a rational point modulo $\ell$. At
$\ell=5$, the only smooth diagonal quartic surface lacking a point
over $\mathbb{F}_5$ is the Fermat quartic $X_0^4 + X_1^4 + X_2^4 + X_3^4 = 0$,
so for local solubility to fail we would need $q \equiv -p \equiv
-17pq \equiv 1 \pmod 5$, which is impossible.
Since $p$ and $q$ are both congruent to $3 \pmod 4$, the fourth powers
modulo $p$ or $q$ are the same as the squares. At $q$, the condition
$\leg{p}{q}=1$ guarantees local solubility; at $p$, we have
$\leg{-q}{p} = -\leg{q}{p} = \leg{p}{q}=1$ and so again the surface is
locally soluble. Finally, at $17$, the fact that $p$, hence $-p$, and
$q$ are fourth powers means that the reduction at $17$ is isomorphic
to the cone over the Fermat quartic curve $x^4+y^4+z^4=0$, which has
smooth points over $\mathbb{F}_{17}$.
\end{proof}
However, choosing $p$ and $q$ to be fourth powers modulo $17$ means
that condition~\ref{it:reduction} of Theorem~\ref{thm:main} is not
satisfied.
We will show that the Azumaya algebra $\mathcal{A}$ described in
Section~\ref{sec:alg} can give an obstruction to the existence of
rational points on $X$, at least for some values of $p$ and $q$.
Recall that multiplying the form $f$ by a constant changes $\mathcal{A}$ by a
constant algebra. To avoid contributions at unnecessary primes, we
choose our representation $\mathcal{A} = (\theta,f)$ such that the
coefficients of $f$ are integers with no common factor. (This is
equivalent to writing our point $P=[y_0, y_1, y_2, y_3]$ with the
$y_i$ integers having no common factor.)
\begin{lemma} \label{lem:eg2}
Let $X$ be the surface of Lemma~\ref{lem:eg1}, and let $\mathcal{A} =
(\theta,f)$ be normalised as above. Then, for all places $v \neq 17$,
$\inv_v \mathcal{A}(Q)=0$ for $Q \in X(\mathbb{Q}_v)$. The invariant is constant on
$X(\mathbb{Q}_{17})$.
\end{lemma}
\begin{proof}
Our normalisation of $f$ ensures that, at all places of good reduction
for $X$, the algebra $\mathcal{A}$ also has good reduction and so the
invariant is zero at these places.
See~\cite[Corollary~4]{Bright:MPCPS-2007} for one explanation of why
this is true.
The primes of bad reduction for $X$ are $2,17,p,q$. Observe that
$\theta = 17p^2q^2$ is a square in $\mathbb{R}$, $\mathbb{Q}_2$, $\mathbb{Q}_p$ and $\mathbb{Q}_q$
(the last two follow by quadratic reciprocity from the fact that $-p$
and $q$ are fourth powers, hence squares, modulo $17$). So the
conclusion is true at each of these places.
At $17$, the argument used in the proof of Lemma~\ref{lem:obs} shows
that $\inv_{17} \mathcal{A}(Q)$ is constant for $Q \in X(\mathbb{Q}_{17})$. We give the
details. The reduction of $X$ modulo $17$ is isomorphic to the cone
$X_0^4 + X_1^4 + X_2^4 = 0$. The corresponding quadric is $Y_0^2 +
Y_1^2 + Y_2^2=0$. To show simply that the invariant is constant, we
can change $\mathcal{A}$ by a constant algebra and so may as well replace $P$
by any point which is convenient. So pick $\tilde{P}=[5,5,1,0]$ and
hence $\tilde{f} = 5 X_0^2 + 5 X_1^2 + X_2^2$. (This choice of
$\tilde{P}$ has the advantage that it does not lift to a point of the
quartic, so $\tilde{f}$ is never zero on $\mathbb{F}_{17}$-rational points of
the quartic.) Now the solutions to $X_0^4 + X_1^4 + X_2^4 = 0$ over
$\mathbb{F}_{17}$ are all of the form $[\epsilon,1,0]$, $[\epsilon,0,1]$ or
$[0,\epsilon,1]$ where $\epsilon^4=-1$, and it turns out that
evaluating $\tilde{f}$ at any of these points gives a square in
$\mathbb{F}_{17}$.
\end{proof}
We do not yet know whether the invariant at $17$ will be $0$ or
$\frac{1}{2}$. If it is $0$, then there is no Brauer--Manin
obstruction on $X$ (not even to weak approximation). If it is
$\frac{1}{2}$, then there is a Brauer--Manin obstruction to the
existence of rational points. To determine which, we only need to
evaluate the invariant at one point. A simple calculation reveals
that the first example satisfying the conditions of
Lemma~\ref{lem:eg1} does indeed give a counterexample to the Hasse
principle:
\begin{proposition}
Let $X$ be the diagonal quartic surface given by
\[
X_0^4 + 47 X_1^4 = 103 X_2^4 + (17\times 47\times 103) X_3^4 .
\]
Then $X$ has points in each completion of $\mathbb{Q}$, but the algebra $\mathcal{A}$
gives a Brauer--Manin obstruction to the existence of a rational point
on $X$.
\end{proposition}
\begin{proof}
Since $X$ satisfies the conditions of Lemma~\ref{lem:eg1}, it only
remains to evaluate the obstruction at $17$. On the quadric
\[
Y : Y_0^2 + 47 Y_1^2 = 103 Y_2^2 + (17\times 47\times 103) Y_3^2,
\]
we can take $P=[20:13:-9:0] \in Y(\mathbb{Q})$, and so obtain the Azumaya
algebra
\[
\mathcal{A} = (17, (20 X_0^2 + (47\times 13) X_1^2 + (103\times 9)
X_2^2)/X_0^2).
\]
Evaluating the quadratic form $20 X_0^2 + (47\times 13) X_1^2 +
(103\times 9) X_2^2$ at any point of $X(\mathbb{F}_{17})$ gives a non-square
value modulo $17$, and therefore $\inv_{17} \mathcal{A}(Q) = \frac{1}{2}$ for
all $Q \in X(\mathbb{Q}_{17})$. Combined with the fact that the invariant is
$0$ at each other place, we deduce that $\sum_v \inv_v \mathcal{A}(Q_v) =
\frac{1}{2}$ for all $(Q_v) \in X(\mathbb{A}_\mathbb{Q})$, and therefore that $\mathcal{A}$
gives a Brauer--Manin obstruction to the existence of a rational point
on $X$.
\end{proof}
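The evaluation at $17$ in this proof is easily reproduced by machine; the
following sketch (in Python, ours, purely for verification) enumerates the
points of the reduction of $X$ modulo $17$ and computes the Legendre symbol
of the quadratic form at each of them.
\begin{verbatim}
# Sketch: the reduction of X modulo 17 is x0^4 + 13*x1^4 + 16*x2^4 = 0
# (the X_3 term disappears), and the quadratic form reduces to
# 20*X0^2 + 611*X1^2 + 927*X2^2 = 3*X0^2 + 16*X1^2 + 9*X2^2 (mod 17).
p = 17
values = set()
for x0 in range(p):
    for x1 in range(p):
        for x2 in range(p):
            if (x0, x1, x2) == (0, 0, 0):
                continue
            if (x0**4 + 13 * x1**4 + 16 * x2**4) % p == 0:
                f = (3 * x0**2 + 16 * x1**2 + 9 * x2**2) % p
                values.add(pow(f, (p - 1) // 2, p))
print(values)   # {16}: the form is a non-square at every point, as claimed
\end{verbatim}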
\begin{remark}
It was not \emph{a priori} clear that starting from different points
$P$ of the quadric $Y$ should always give the same invariant at $17$.
Different points $P$ might give algebras $\mathcal{A}$ differing by a
constant algebra. After all, in performing the verification of
Lemma~\ref{lem:eg2}, we could have replaced the point
$\tilde{P}=[5,5,1,0]$ by a scalar multiple, say $[1,1,7,0]$, and
$\tilde{f}$ would have been non-square at all points instead of
square. However, our insistence that $P$ should be given by
coordinates which are coprime integers fixes the invariants at all
places other than $17$, and therefore (by the product rule) fixes the
invariant at $17$ as well. A somewhat surprising conclusion is that,
given any point $\tilde{P} \in \tilde{Y}(\mathbb{F}_{17})$, at most half
of the scalar multiples of $\tilde{P}$ lift to rational points of $Y$
with coprime integer coordinates.
\end{remark}
\subsection*{Acknowledgements}
I thank Sir~Peter Swinnerton-Dyer for many useful conversations on
this subject, and Damiano Testa and Samir Siksek for comments on this
article.
\bibliographystyle{abbrv}
\section{Introduction}
Linear quadratic stochastic optimal control problems (stochastic LQ
problems in short) represent an important class of stochastic control
problems and are very well studied in the literature, cf., e.g., the
book by \citet{YongZhou:99}, Chapter 6, for an overview. A prototype
of a stochastic LQ problem with linear quadratic cost functional is
given by the so-called \emph{optimal follower} or \emph{optimal
tracking} problem where one seeks to minimize a cost criterion of
the following form: For a deterministic time horizon $T>0$, for a
predictable target process $(\xi_t)_{0 \leq t \leq T}$ as well as
progressively measurable, nonnegative processes
$(\nu_t)_{0 \leq t \leq T}$ and $(\kappa_t)_{0 \leq t \leq T}$, for
random variables $\eta$ and $\Xi_T$ known at time $T$ and $x \in \mathbb{R}$,
find a control $u$ with state process
\begin{equation*}\label{eq:stateprocess}
X^u_t = x + \int_0^t u_s ds \quad (0 \leq t \leq T)
\end{equation*}
which minimizes the objective
\begin{equation} \label{eq:objectiveintro}
J^\eta(u) \triangleq \mathbb{E} \left[ \int_0^T (X^u_t - \xi_t)^2 \nu_t dt
+ \int_0^T \kappa_t u^2_t dt + \eta
(X^u_T - \Xi_T)^2 \right].
\end{equation}
The interpretation of such an LQ problem is the following: The first
term in~\eqref{eq:objectiveintro} measures the overall quadratic
deviation of the controlled state process $X^u$ from the target
process $\xi$ weighted with a stochastic weight process $\nu$. The
second term in \eqref{eq:objectiveintro} measures the incurred
tracking effort in terms of running quadratic costs which are imposed
on the control~$u$ with stochastic cost process $\kappa$. The third
term in \eqref{eq:objectiveintro} implements a penalization on the
quadratic deviation of the controlled state $X^u_T$ from the final
target position~$\Xi_T$ at terminal time $T$ with nonnegative random
penalization parameter~$\eta$.
It is well known in the literature that
the optimal control to such a stochastic LQ problem as well as its
optimal value is typically characterized by two coupled backward
stochastic differential equations (BSDEs): A backward stochastic
\emph{Riccati} differential equation (BSRDE) of the form
\begin{equation} \label{eq:BSRDEintro}
dc_t = \left(
\frac{c_t^2}{\kappa_t} - \nu_t \right) dt - dN_t \quad \text{on }
[0,T] \text{ with } c_T = \eta
\end{equation}
and a linear BSDE of the form
\begin{equation} \label{eq:linBSDEintro}
db_t = \left(
\frac{c_t}{\kappa_t} b_t - \nu_t \xi_t \right) dt + dM_t \quad \text{on }
[0,T] \text{ with } b_T = \eta \Xi_T,
\end{equation}
where $N$ and $M$ denote suitable c\`adl\`ag martingales (cf., e.g.,
\citet{KohlmannTang:02}, Section 5.1).
A number of interesting challenges arise when one allows the terminal
penalization parameter $\eta$ to take the value infinity with positive
(not necessarily full) probability. It is then intuitively sensible to
interpret the ``blow up'' of $\eta$ as a \emph{stochastic
terminal state} constraint of the form
\begin{equation} \label{eq:tcintro}
X^u_T = \Xi_T \quad \text{a.e. on the set } \{ \eta = + \infty \}
\end{equation}
on all controlled processes $X^u$ that produce a finite value in
\eqref{eq:objectiveintro}. Mathematically, it is less obvious how to
tackle this delicate ``partial'' constraint and how to compute the
optimal control as well as the optimal value. Indeed, note that the
involved BS(R)DEs in \eqref{eq:BSRDEintro} and \eqref{eq:linBSDEintro}
will both now exhibit with positive probability a \emph{singularity at final
time} in this case. The possibly singular BSRDE in
\eqref{eq:BSRDEintro} does not pose a serious problem; see
\citet{KrusePopier:16_1} and \citet{Popier:16}. In contrast, the
singularity in the terminal condition of the linear BSDE in
\eqref{eq:linBSDEintro} is rather unpleasant because it also involves
the desired target position $\Xi_T$, leaving the terminal condition
$b_T = \eta \Xi_T$ depend solely on the sign of $\Xi_T$ on the very set
$\{\eta=+\infty\}$ where this random variable has to be matched by the
state processes' terminal value $X^u_T$.
As a consequence, the classical solution approach cannot be followed
directly. Instead we introduce a family of auxiliary target
functionals
$$
J^c(u) \triangleq \limsup_{\tau \uparrow T} \mathbb{E} \left[ \int_0^\tau (X^u_t - \xi_t)^2 \nu_t dt +
\int_0^\tau \kappa_t u^2_t dt + c_\tau (X^u_\tau -
\hat{\xi}^c_\tau)^2 \right]
$$
parametrized by supersolutions $c$ of the BSRDE~\eqref{eq:BSRDEintro}
and where $\hat{\xi}^c_\tau$ is an \emph{optimal signal process} constructed
as a judiciously chosen average of future target positions
$(\xi_t)_{t \geq \tau}$ and $\Xi_T$. The target functional $J^c$ avoids
the singularity at time $T$ by a ``truncation in time'' focussing
on shorter time horizons $\tau<T$ at which we impose a ``classical''
finite terminal penalization. This penalization is chosen in such a
way that the corresponding optimizers can be extended consistently to
the full interval $[0,T)$ as $\tau \uparrow T$. In fact, the
corresponding auxiliary minimization problems turn out to be solvable
in a very satisfactory way: As already observed in a much simpler
setting in~\citet{BankSonerVoss:16}, we can give necessary and
sufficient conditions for the domain $\{J^c<\infty\}$ to be nonempty
and we can also describe explicitly the optimal control in feedback
form
$$
\hat{u}^c_t = \frac{c_t}{\kappa_t}(\hat{\xi}^c_t-X^{\hat{u}^c}_t)
\quad (0 \leq t < T),
$$
revealing that one should always push the controlled process towards
the optimal signal $\hat{\xi}^c$ with time-varying urgency given by
the ratio $c_t/\kappa_t$. We can even show how the regularity and
predictability of the targets $\xi$ and $\Xi_T$ as reflected in the
signal process $\hat{\xi}^c$ and its quadratic variation determine the
problem's value.
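To get a first feeling for this feedback rule, consider the special case
$\nu \equiv 0$, $\kappa \equiv 1$, $\eta \equiv +\infty$ with a deterministic
terminal target $\Xi_T$: the deterministic function $c_t = 1/(T-t)$ then
solves \eqref{eq:BSRDEintro} with singular terminal value, one expects the
signal to collapse to $\hat{\xi}^c \equiv \Xi_T$, and the feedback
$\hat{u}^c_t = (\Xi_T - X_t)/(T-t)$ steers the state linearly into the
target. The following sketch (in Python, ours, purely illustrative and not
part of the results below) simulates an Euler discretisation of this
feedback:
\begin{verbatim}
# Sketch: Euler discretisation of the feedback u_t = c_t/kappa_t *
# (signal_t - X_t) in the special case nu = 0, kappa = 1, eta = +infinity
# with deterministic target Xi_T, where c_t = 1/(T - t) and the signal is
# the constant Xi_T.
T, n, x0, Xi_T = 1.0, 1000, 5.0, 2.0
dt = T / n
X = x0
for k in range(n):
    t = k * dt
    c = 1.0 / (T - t)
    u = c * (Xi_T - X)     # feedback rule
    X += u * dt
print(X)   # the discrete path is linear in t and ends (up to rounding) at 2.0
\end{verbatim}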
We show that for the considered supersolutions $c$ of the
BSRDE~\eqref{eq:BSRDEintro} we have $J^c(u) \geq J^\eta(u)$. This
leads us to the conjecture that for the \emph{minimal} supersolution
$c=c^{\min}$ (whose existence is guaranteed under mild conditions; see
\citet{KrusePopier:16_1}) the minimizers of these functionals are the same. While we
have to leave the proof of this conjecture for future research that
allows one to better control singular BSRDE supersolutions, we do verify
the validity of our conjecture in the examples we found in the literature.
Stochastic control problems, referred to as optimal liquidation
problems in the literature, with almost sure (i.e., $\eta \equiv + \infty$
almost surely) and deterministic terminal state constraint (targeting
the terminal position $\Xi_T = 0$), where the cost functional is
allowed to be quadratic in $X^u$ and $u$ (that is, $\xi \equiv 0$ in
\eqref{eq:objectiveintro}) have already been studied in, e.g.,
\citet{Schied:13}, \citet{AnkirchnerJeanblancKruse:14} and, in a more
general BSPDE framework, in \citet{GraeweHorstQiu:15}; allowing the
penalization parameter~$\eta$ to take the value infinity with positive
probability has been investigated in
\citet{KrusePopier:16_1}. \citet{AnkirchnerKruse:15}, still within
this context of optimal liquidation, allow the objective functional to
be additionally linear in the control $u$. They also incorporate a
specific nonzero stochastic terminal state constraint where the random
target position $\Xi_T$ is gradually revealed up to terminal time
$T$. A general class of stochastic control problems including LQ
problems with terminal states being constrained to a convex set were
studied by \citet{JiZhou:06}. However, to the best of our knowledge,
stochastic linear quadratic control problems with $\xi \neq 0$ and
possible stochastic terminal state constraint $\Xi_T \neq 0$ as
considered in the present paper have not yet been investigated.
The analysis of the stochastic LQ problem in \eqref{eq:objectiveintro}
above is especially motivated by optimal trading and hedging problems
in Mathematical Finance. In this framework the state process $X^u$
denotes an agent's position in some risky asset that she trades at a
turnover rate $u$. She wants her position to be as close as possible
to a given target strategy $\xi$ but simultaneously seeks to minimize
the induced quadratic transaction costs which are levied on her
transactions due to, e.g., stochastic price impact as measured by
$\kappa$. The weight process $\nu$ captures stochastic volatility,
that is, the risk of her open trading position due to random market
fluctuations. Finally, in case of a possible but not necessarily
almost sure occurrence of specific market conditions, encoded by the
event set $\{ \eta = + \infty \}$, she may be required to drive her
position~$X^u$ imperatively towards a predetermined random value
$\Xi_T$ at maturity~$T$ (e.g., to respect specific requirements of
contractual or regulatory nature concerning her risky asset
position). Otherwise, a penalization depending on the deviation of
$X^u_T$ from the target position $\Xi_T$ is implemented. We refer to,
e.g., \citet{RogerSin:10}, \citet{NaujWes:11}, \citet{AlmgrenLi:16},
\citet{FreiWes:13}, \citet{CarJai:15}, \citet{CaiRosenbaumTankov:15},
\citet{BankSonerVoss:16} and, for asymptotic considerations, to
\citet{ChanSircar:16}. Note, however, that the above cited papers may
neither allow for an arbitrary predictable target strategy $\xi$ nor
for stochastic price impact $\kappa$ and stochastic
volatility~$\nu$. In particular, none of them consider a stochastic
terminal state constraint like~\eqref{eq:tcintro} above with general
random target position~$\Xi_T$.
The rest of the paper is organized as follows. In Section
\ref{sec:stochLQproblem} we formulate the general stochastic LQ
problem with stochastic terminal state constraint. Our auxiliary
control problem and its solution are presented in
Section~\ref{sec:auxproblem}. Its relation to the original LQ problem
is discussed and exemplarily illustrated in
Section~\ref{sec:illustration}. The proofs are deferred to
Section~\ref{sec:proofs} and an appendix collects a
few BSDE-results which may be of independent interest.
\section{A stochastic LQ problem with stochastic terminal state
constraint}
\label{sec:stochLQproblem}
We fix a finite deterministic time horizon $T > 0$ and a filtered
probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{0 \leq t \leq T},\mathbb{P})$
satisfying the usual conditions of right continuity and completeness.
We let $(\kappa_t)_{0 \leq t \leq T}$ and $(\nu_t)_{0 \leq t \leq T}$
denote two progressively measurable, nonnegative processes such
that
\begin{equation} \label{eq:condkappanu}
\int_0^T \left(\nu_t + \frac{1}{\kappa_t} \right) dt <
\infty \quad \mathbb{P}\text{-a.s.}
\end{equation}
Moreover, we are given a predictable target process
$(\xi_t)_{0 \leq t \leq T}$ satisfying
\begin{equation} \label{eq:condxi}
\mathbb{E} \left[ \int_0^T \vert \xi_t \vert \nu_t dt \right]
< \infty \quad \text{and} \quad \int_0^T \xi^2_t \nu_t dt < \infty
\quad \mathbb{P}\text{-a.s.},
\end{equation}
a random terminal target position $\Xi_T \in L^0(\mathbb{P},\mathcal{F}_{T-})$ as
well as an~$\mathcal{F}_{T-}$-measurable penalization parameter
$\eta$ taking values in $[0,+\infty]$. We further assume
that
\begin{equation} \label{ass:etakappa}
\mathbb{P}\left[ \eta = 0\, , \int_t^T \nu_u du = 0 \, \bigg\vert\,
\mathcal{F}_t\right] < 1 \quad \mathbb{P}\text{-a.s. for all } t \in [0,T).
\end{equation}
For such $\nu, \kappa, \xi, \Xi_T$ and $\eta$,
one can formulate the following stochastic linear quadratic
optimal control problem: Find a control $u$ from the class of
processes
\begin{equation} \label{eq:setU}
\mathcal{U} \triangleq \left\{ u \text{ progressively
measurable s.t.} \int_0^T \vert u_t \vert \, dt < \infty \text{ $\mathbb{P}$-a.s.} \right\}
\end{equation}
such that, for given $x \in \mathbb{R}$, the controlled state process
\begin{equation} \label{eq:defX}
X^u_t \triangleq x + \int_0^t u_s ds \quad (0 \leq t \leq T)
\end{equation}
minimizes the objective functional
\begin{equation} \label{eq:originalobjective}
J^\eta(u) \triangleq \mathbb{E} \left[ \int_0^T (X^u_t - \xi_t)^2 \nu_t dt
+ \int_0^T \kappa_t u^2_t dt + \eta 1_{\{ 0 \leq \eta < \infty \}}
(X^u_T - \Xi_T)^2 \right]
\end{equation}
over the set of all constrained policies
\begin{align} \label{def:originalpolicies}
\begin{aligned}
\mathcal{U}^{\Xi} \triangleq \Big\{ u \in \mathcal{U} \text{ satisfying } X^u_T = \Xi_T
\text{ $\mathbb{P}$-a.s. on } \{ \eta = + \infty\} \Big\}.
\end{aligned}
\end{align}
In short, we are interested in the stochastic LQ problem
\begin{equation} \label{eq:originalLQP}
J^\eta(u) \rightarrow \min_{u \in \mathcal{U}^{\Xi}},
\end{equation}
where the controller seeks to keep the controlled process $X^u$ close
to a given target process $\xi$ in such a way that deviations from
the final target position~$\Xi_T$ are also minimized. On
$\{ \eta = + \infty \}$, the final target position has to be reached a.s. as
incorporated in the set of admissible strategies $\mathcal{U}^{\Xi}$
in~\eqref{def:originalpolicies}.
\begin{Remark} \label{rem:intro}
$\phantom{}$
\vspace{-.5em}
\begin{enumerate}
\item In the case where the random penalization parameter $\eta$ is finite
almost surely, the optimization problem in~\eqref{eq:originalLQP}
does not include a terminal state constraint and boils down to a
classical stochastic optimal control problem which is well studied
in the literature; cf., e.g.,~\citet{KohlmannTang:02}.
\item The dynamic condition~\eqref{ass:etakappa} is
very natural for our optimal tracking problem
in~\eqref{eq:originalLQP}. It means that at any time $t < T$ some
penalization for deviating from the targets $\xi$ and $\Xi_T$ remains
conceivable, even conditionally on $\mathcal{F}_t$, so that the controller
has to stay alert all the way until the end.
\item The mild integrability conditions in \eqref{eq:condkappanu},
\eqref{eq:condxi} and \eqref{eq:setU} ensure that the stochastic LQ
problem in~\eqref{eq:originalLQP} is well defined along with some
processes to be introduced shortly.
\end{enumerate}
\end{Remark}
Mathematically, the stochastic terminal state constraint
\begin{equation} \label{eq:terminalconstraint}
X^u_T = \Xi_T \quad \text{a.e. on } \{ \eta = + \infty \}
\end{equation}
in the set of allowed controls $\mathcal{U}^\Xi$
in~\eqref{def:originalpolicies} entails technical difficulties. For
instance, it is far from obvious under what conditions we have
$\mathcal{U}^{\Xi} \neq \varnothing$ or whether $J^\eta (u) < \infty$ for some
$u \in \mathcal{U}^\Xi$. Also, as explained in the introduction, the usual
solution approach via BSDEs does not accommodate this partial
constraint.
As a possible remedy, instead of tackling the constrained stochastic
LQ problem posed in~\eqref{eq:originalLQP}, we propose to formulate a
suitable variant of this problem. Specifically, we will introduce a
family of stochastic control problems
\begin{equation} \label{eq:auxLQP}
J^c(u) \rightarrow \min_{u \in \mathcal{U}^c}
\end{equation}
with set of admissible controls
\begin{equation} \label{eq:policies}
\mathcal{U}^c \triangleq \{ u \in \mathcal{U} \text{ satisfying } J^c(u) < \infty \}
\end{equation}
and target functional $J^c$ which are parametrized by
supersolutions $c\triangleq(c_t)_{0 \leq t < T}$ to a certain singular
backward stochastic differential equation (BSDE) to be described below
in Section~\ref{subsec:BSRDE}. These auxiliary problems will dominate
the stochastic LQ problem stated in~\eqref{eq:originalLQP} in the
sense that, for all parametrizations~$c$, we have
\begin{equation} \label{eq:upperboundJ}
J^c(u) \geq J^\eta(u) \quad \text{for all } u \in \mathcal{U}^c
\end{equation}
and
\begin{equation} \label{eq:subsetJ}
\mathcal{U}^c \subseteq \mathcal{U}^{\Xi}
\end{equation}
(cf. Lemma~\ref{lem:dominate} below). We will show in
Section~\ref{subsec:mainresult} that our auxiliary problems
in~\eqref{eq:auxLQP} can be solved in a very satisfactory way: In
terms of $\xi$, $\Xi_T$ and the parameter process $c$, we provide
necessary and sufficient conditions which ensure that
$\mathcal{U}^c \neq \varnothing$ and describe explicitly the optimal
policy $\hat{u}^c$ for~\eqref{eq:auxLQP} as well as the associated
minimal costs $J^c(\hat{u}^c)$. In view of~\eqref{eq:upperboundJ}
and~\eqref{eq:subsetJ}, we thus obtain both explicit candidate
strategies for the general constrained stochastic LQ problem
formulated in~\eqref{eq:originalLQP} as well as conditions which
guarantee existence of controls entailing finite costs in the
latter.
To link these problems to the original problem~\eqref{eq:originalLQP}
it is natural to consider ``small'' solutions to the BSDE. In fact, we
conjecture that for the \emph{minimal supersolution} $c^{\min}$ of the
BSDE we have
\begin{equation} \label{eq:conjecture}
\argmin_{\mathcal{U}^\Xi} J^\eta = \argmin_{\mathcal{U}^{c^{\min}}} J^{c^{\min}}.
\end{equation}
While we cannot prove this conjecture in full generality, we provide
in Section~\ref{sec:illustration} evidence for its validity in certain
settings. These include the case of bounded coefficients, but also
some singular cases where, possibly, $\mathbb{P}[\eta = +\infty] > 0$.
\section{An auxiliary control problem}
\label{sec:auxproblem}
In this section, we will formulate and solve our auxiliary stochastic
LQ problem~\eqref{eq:auxLQP} for fixed $c$. The process $c$ will be a
supersolution to a BSRDE which we discuss in
Section~\ref{subsec:BSRDE}. In
Section~\ref{subsec:problemformulation}, we will introduce our target
functional $J^c$ whose minimizer $\hat{u}^c$ is derived in
Section~\ref{subsec:mainresult} along with the optimal costs
$J^c(\hat{u}^c)$.
\subsection{Connection between stochastic LQ problems and BSRDEs}
\label{subsec:BSRDE}
It is well known in the literature that the solution to stochastic
LQ problems like~\eqref{eq:originalLQP} is intimately related to backward
stochastic Riccati differential equations (BSRDEs):
For~\eqref{eq:originalLQP}, the Riccati dynamics take the form
\begin{equation} \label{eq:BSRDE}
dc_t = \left( \frac{c_t^2}{\kappa_t}
- \nu_t \right) dt - dN_t \quad \text{on } [0,T)
\end{equation}
for some c\`adl\`ag martingale $(N_t)_{0 \leq t < T}$; cf.,
e.g.,~\citet{Bismut:76,Bismut:78}. Moreover, the recent papers by,
e.g.,~\citet{AnkirchnerJeanblancKruse:14},~\citet{KrusePopier:16_1},~\citet{GraeweHorstQiu:15}
or~\citet{GraeweHorst:17} have shown that a terminal state constraint
as~\eqref{eq:terminalconstraint} in the LQ problem typically leads to
a singular terminal condition for the corresponding BSRDE of the form
\begin{equation} \label{eq:BSRDEtc}
\liminf_{t \uparrow T} c_t \geq
\eta \quad \mathbb{P}\text{-a.s.}
\end{equation}
This motivates us to let $c = (c_t)_{0 \leq t < T}$ denote from now on
an $(\mathcal{F}_t)_{0 \leq t < T}$-adapted, c\`adl\`ag semimartingale with
BSRDE dynamics~\eqref{eq:BSRDE} and terminal
condition~\eqref{eq:BSRDEtc}. In addition, we will assume that
\begin{equation} \label{eq:BSRDEintcond2}
\int_{[0,T)}
\frac{d[c]_t}{c_{t-}^2} < \infty \quad \text{on the set } \{ \eta =
+ \infty\},
\end{equation}
where $[c]$ denotes the quadratic variation process of~$c$ (cf., e.g.,
\citet{Prot:04}, Chapter II.6, for the quadratic variation process of
c\`adl\`ag semimartingales).
\begin{Remark} \label{rem:BSRDE}
\begin{enumerate}
\item As usual the dynamics in \eqref{eq:BSRDE} have to be
understood in the sense that the pair $(c,N)$ satisfies
\begin{equation} \label{eq:BSRDEintegral}
c_s = c_t - \int_s^t \left(
\frac{c_u^2}{\kappa_u} - \nu_u \right) du + \int_s^t dN_u
\quad (0 \leq s \leq t <T).
\end{equation}
In particular, the dynamics in \eqref{eq:BSRDE} are only required
to hold on $[0,T-\epsilon]$ for every $\epsilon > 0$, that is,
strictly before~$T$. So, more precisely, we will say that $(c,N)$
is a supersolution of the BSRDE~\eqref{eq:BSRDE} with terminal
condition $\eta$ if~\eqref{eq:BSRDEintegral}
and~\eqref{eq:BSRDEtc} hold true.
\item For bounded coefficients $\nu$, $\kappa$, $1/\kappa$,
$\eta$,~\citet{KohlmannTang:02} prove within a Brownian framework
existence and uniqueness of $c$ with dynamics in~\eqref{eq:BSRDE}
such that $\lim_{t \uparrow T} c_t = \eta$ exists $\mathbb{P}$-a.s. For
the fully singular case $\eta \equiv + \infty$ $\mathbb{P}$-a.s. and
again within a Brownian framework, existence of a minimal solution
(under suitable integrability conditions on the processes
$(\nu_t)_{0 \leq t \leq T}$ and $(\kappa_t)_{0 \leq t \leq T}$) to
the above BSRDE in~\eqref{eq:BSRDE} with singular terminal
condition $\liminf_{t \uparrow T} c_t = + \infty$ $\mathbb{P}$-a.s. is
provided in \citet{AnkirchnerJeanblancKruse:14}; cf. also
\citet{GraeweHorstQiu:15}. For the present partially singular setup,
\citet{KrusePopier:16_1} provide sufficient conditions (including
suitable integrability conditions on
$(\kappa_t)_{0 \leq t \leq T}$ and $(\nu_t)_{0 \leq t \leq T}$)
for the existence of a minimal supersolution
$(c^{\min}_t)_{0 \leq t \leq T}$ to the above BSRDE
in~\eqref{eq:BSRDE} with terminal condition~\eqref{eq:BSRDEtc} in
the sense that $c_t^{\min} \leq c_t$ for all $t \in [0,T)$ and all
processes $c$ satisfying likewise~\eqref{eq:BSRDE}
and~\eqref{eq:BSRDEtc}. Existence of actual solutions $c$ with
$\lim_{t \uparrow T} c_t=\eta$ is only known under additional
assumptions on $\eta$; see~\citet{Popier:16}.
\item The additional integrability
condition~\eqref{eq:BSRDEintcond2} on the ``blow up'' set
$\{ \eta = + \infty \}$ is implicitly shown to hold true
in~\citet{Popier:06} in a Brownian framework for constant
coefficients $\nu \equiv 0$ and $\kappa \equiv 1$; see Theorem 2
and Proposition 3 in \cite{Popier:06}. We require this
integrability condition~\eqref{eq:BSRDEintcond2} in our proof of
Lemma~\ref{lem:L} below whose result crucially feeds into our
solution presented in Section~\ref{subsec:mainresult}. In the
appendix, we will therefore briefly discuss, by way of example,
sufficient conditions on $(\kappa_t)_{0 \leq t \leq T}$,
$(\nu_t)_{0 \leq t \leq T}$ and $\eta$ under which property
\eqref{eq:BSRDEintcond2} does hold true in the more general
setting of~\citet{KrusePopier:16_1}.
\end{enumerate}
\end{Remark}
As a consequence of~\eqref{eq:BSRDE}, \eqref{eq:BSRDEtc}
and~\eqref{eq:BSRDEintcond2}, let us first ascertain that $c$ is
strictly positive on $[0,T)$, a result which is crucial for our
approach below and which follows immediately from Lemma~\ref{app:lem:lbound}
in the appendix.
\begin{Lemma} \label{lem:cpositivity} For all $t \in [0,T)$ we have
$c_t > 0$ if \eqref{ass:etakappa} holds true. \qed
\end{Lemma}
Next, the BSRDE supersolution $c$ gives rise to the following auxiliary
process
\begin{equation} \label{eq:defL}
L_t \triangleq L^c_t \triangleq c_t \exp \left(-\int_0^t
\frac{c_u}{\kappa_u} du \right) \quad (0 \leq t < T).
\end{equation}
\begin{Lemma} \label{lem:L} Provided that~\eqref{ass:etakappa} holds true,
the process $(L_t)_{0 \leq t < T}$ is a strictly positive c\`adl\`ag
supermartingale. In particular,
\begin{equation} \label{eq:defLlimit}
L_T \triangleq\lim_{t
\uparrow T} L_t \geq 0 \quad \text{exists } \mathbb{P}\text{-a.s.}
\end{equation}
and the extended process $(L_t)_{0 \leq t \leq T}$ is a supermartingale on
$[0,T]$. Moreover, we have $\{ \eta > 0\} \subset \{ L_T > 0\}$ up
to a $\mathbb{P}$-null set.
\end{Lemma}
\begin{proof}
Since $c_t > 0$ $\mathbb{P}$-a.s. for all $0 \leq t < T$ by
Lemma~\ref{lem:cpositivity}, it is immediate from \eqref{eq:defL}
that also $L_t > 0$ $\mathbb{P}$-a.s. for all $0 \leq t <T$. Integration by
parts and using the Riccati dynamics of $c$ in \eqref{eq:BSRDE}
yields that~$L$ satisfies the stochastic differential equation
\begin{equation}
L_0 = c_0, \quad dL_t =
L_{t-} \left( - \frac{\nu_t}{c_{t-}} dt - \frac{1}{c_{t-}}
dN_{t} \right) \quad \text{on } [0,T). \label{eq:SDEL}
\end{equation}
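In more detail, since $t \mapsto e^{-\int_0^t \frac{c_u}{\kappa_u} du}$
is continuous and of finite variation, integration by parts gives
\begin{equation*}
  dL_t = e^{-\int_0^t \frac{c_u}{\kappa_u} du}\, dc_t
  - c_{t-} \frac{c_t}{\kappa_t}\, e^{-\int_0^t \frac{c_u}{\kappa_u} du}\, dt,
\end{equation*}
and inserting the Riccati dynamics~\eqref{eq:BSRDE} for $dc_t$, using
that $c_{t-} = c_t$ for Lebesgue-almost every $t$ as well as
$e^{-\int_0^t c_u/\kappa_u du} = L_{t-}/c_{t-}$, the finite variation
terms collapse to $-\nu_t (L_{t-}/c_{t-})\, dt$, which is exactly
\eqref{eq:SDEL}.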
Since $N$ is a c\`adl\`ag local martingale on $[0,T)$, we obtain
from~\eqref{eq:SDEL} that the process $L$ is a strictly positive
c\`adl\`ag supermartingale on $[0,T)$. Hence, it follows by the
(super-)martingale convergence theorem (see, e.g.,
\citet{KaratShr:91}, Chapter 1.3, Problem 3.16) that the limit
$L_T \triangleq \lim_{t \uparrow T} L_t$ exists $\mathbb{P}$-a.s. and extends the
process $L$ to a c\`adl\`ag supermartingale on all of
$[0,T]$. Moreover, appealing to the definition of $L$ in
\eqref{eq:defL} and the terminal condition
$\liminf_{t \uparrow T} c_t \geq \eta$ of the process $c$
in~\eqref{eq:BSRDEtc}, we have
$\{0 < \eta < \infty \} \subset \{ L_T > 0\}$. Concerning the ``blow
up'' set $\{ \eta = + \infty \}$, observe that we may write
\begin{equation} \label{eq:solL}
L_t = c_0 e^{X_t - \frac{1}{2}[X]^{\mathbf{c}}_t} \prod_{s \leq t} (1 + \Delta
X_s) e^{-\Delta X_s} \quad (0 \leq t < T),
\end{equation}
where
$X_t \triangleq - \int_0^t \frac{\nu_s}{c_{s-}} ds - \int_0^t
\frac{1}{c_{s-}} dN_{s}$ and where $[X]^{\mathbf{c}}$ denotes the
continuous part of its quadratic variation (cf., e.g.,
\citet{Prot:04}, Theorem II.37). Note that $L_s > 0$ $\mathbb{P}$-a.s. for
all $0 \leq s < T$ implies $\Delta X_s > -1$ for all $0 \leq s <
T$. Moreover, applying Taylor's formula, it holds for all
$0 \leq t < T$ that
\begin{equation*}
\sum_{s \leq t} \left\vert \log \left( (1 + \Delta
X_s) e^{-\Delta X_s} \right) \right\vert \leq \frac{1}{2} \int_{[0,T)}
\frac{1}{c^2_{s-}}d[c]_s < +\infty
\end{equation*}
a.e. on the set $\{ \eta = + \infty \}$ by virtue of condition
\eqref{eq:BSRDEintcond2}. This implies that the product of the jumps
in \eqref{eq:solL} will converge to a strictly positive limit as
$t \uparrow T$ on $\{ \eta = + \infty \}$. Concerning the limiting
behaviour of the exponential $\exp(X_t - \frac{1}{2}[X]^{\mathbf{c}}_t)$ in
\eqref{eq:solL} for $t \uparrow T$, observe that once more condition
\eqref{eq:BSRDEintcond2} prevents the limiting value from becoming 0
on $\{ \eta = + \infty \}$. Indeed, the local martingale
$\int_0^{t} dN_{s}/c_{s-}$ cannot explode as
$t \uparrow T$ for those paths along which its quadratic variation
$\int_0^t d[c]_s/c^2_{s-}$ remains bounded on $[0,T)$ (cf., e.g.,
\citet{Prot:04}, Chapter V.2, for more details).
\end{proof}
\subsection{Auxiliary target functional}
\label{subsec:problemformulation}
Let us assume that the terminal target position $\Xi_T$ is bounded,
or, more generally, that it satisfies
\begin{equation} \label{ass:Xi}
\Xi_T L_T \in L^1(\mathcal{F}_{T-},\mathbb{P}),
\end{equation}
where
$L_T = L_T^c=\lim_{t \uparrow T} c_t e^{-\int_0^t c_u/\kappa_u du}$ as
in~\eqref{eq:defLlimit}. Recalling the integrability
requirement~\eqref{eq:condxi} for the running target $\xi$, let us now
introduce the key object for our approach, the optimal signal process
$\hat{\xi}$ which is given by the c\`adl\`ag semimartingale
\begin{equation} \label{eq:defoptsignal}
\hat{\xi}_t \triangleq \hat{\xi}^c_t \triangleq
\frac{1}{L_t} \mathbb{E}\left[ \Xi_T L_T + \int_t^T \xi_r e^{-\int_0^r
\frac{c_u}{\kappa_u} du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t \right]
\quad (0 \leq t < T).
\end{equation}
The optimal signal process $\hat{\xi}$ can be viewed as a weighted
average of expected future targets $\xi$ and $\Xi_T$; see our
discussion in Remark~\ref{rem:osp} and the representation of
$\hat{\xi}$ in \eqref{eq:repxi} below. Our motivation for introducing
$\hat{\xi}$ becomes very apparent when reviewing known results in the
literature on the stochastic LQ problem in~\eqref{eq:originalLQP} with
bounded coefficients; see Section~\ref{subsec:KT} below. Observe that $\hat{\xi}$
remains unspecified for $t=T$. In fact, we can readily deduce from
Lemma~\ref{lem:L} and the integrability conditions~\eqref{eq:condxi}
and~\eqref{ass:Xi} that
\begin{equation} \label{eq:limitos}
\exists \, \lim_{t \uparrow T}
\hat{\xi}_t = \Xi_T \quad \text{on the set } \{ L_T > 0 \} \supset
\{ \eta > 0 \}.
\end{equation}
On the set $\{ L_T = 0 \} \subset \{\eta = 0\}$, though, this
convergence may fail (without harm as it turns out).
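In brief, the convergence in~\eqref{eq:limitos} can be seen as
follows: by~\eqref{eq:condxi} and~\eqref{ass:Xi}, the random variable
$\Xi_T L_T + \int_0^T \xi_r e^{-\int_0^r c_u/\kappa_u du} \nu_r dr$ is
integrable and $\mathcal{F}_{T-}$-measurable, and we may write
\begin{equation*}
  L_t \hat{\xi}_t = \mathbb{E}\left[ \Xi_T L_T + \int_0^T \xi_r e^{-\int_0^r
      \frac{c_u}{\kappa_u} du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t \right]
  - \int_0^t \xi_r e^{-\int_0^r \frac{c_u}{\kappa_u} du} \nu_r dr
  \quad (0 \leq t < T).
\end{equation*}
Both terms on the right-hand side converge $\mathbb{P}$-a.s. as $t \uparrow T$,
the first one by the martingale convergence theorem, so that
$L_t \hat{\xi}_t \rightarrow \Xi_T L_T$ $\mathbb{P}$-a.s.; dividing by
$L_t \rightarrow L_T$, which is strictly positive on $\{ L_T > 0 \}$ by
Lemma~\ref{lem:L}, yields~\eqref{eq:limitos}.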
Given the optimal signal process $(\hat{\xi}_t)_{0 \leq t < T}$, we
are now in a position to introduce the auxiliary LQ target functional
\begin{equation} \label{eq:defobjective}
J^c(u) \triangleq \limsup_{\tau \,
\uparrow \, T} \mathbb{E} \left[ \int_0^\tau (X^u_t - \xi_t)^2 \nu_t dt +
\int_0^\tau \kappa_t u^2_t dt + c_\tau (X^u_\tau -
\hat{\xi}^c_\tau)^2 \right],
\end{equation}
where the limit superior is taken over all sequences of stopping times
$(\tau^n)_{n=1,2,\ldots}$ converging to terminal time $T$ strictly
from below. Introducing the set of admissible controls
\begin{equation} \label{eq:admissibility}
\mathcal{U}^c = \left\{ u \in \mathcal{U} \text{ satisfying } J^c(u) < + \infty \right\}
\end{equation}
as in~\eqref{eq:policies}, we will solve completely the
auxiliary optimization problem
\begin{equation} \label{eq:problem}
J^c(u) \rightarrow \min_{u \in \mathcal{U}^c}
\end{equation}
in the next section.
\subsection{Explicit solution to the auxiliary problem}
\label{subsec:mainresult}
As it turns out, the optimal control to our auxiliary stochastic LQ
problem in~\eqref{eq:problem} and its corresponding optimal value are
explicitly computable and fully characterized by the processes $c$ and
$\hat{\xi}^c$. In terms of these, we can also characterize when the
set of admissible controls $\mathcal{U}^c$ defined in \eqref{eq:admissibility}
is nonempty. In fact, it follows from our analysis below that
$\mathcal{U}^c \neq \varnothing$ if and only if
\begin{equation} \label{ass:integrability}
\mathbb{E}\left[ \int_0^T (\xi_t -
\hat{\xi}^c_t)^2 \nu_t dt \right] < + \infty \quad \text{and} \quad
\mathbb{E}\left[ \int_{[0,T)} c_t d[\hat{\xi}^c]_t \right] < + \infty,
\end{equation}
where $[\hat{\xi}^c]$ denotes the quadratic variation process of the
semimartingale $\hat{\xi}^c$ of~\eqref{eq:defoptsignal}. In
particular, \eqref{ass:integrability} is necessary and sufficient for
well-posedness of the LQ problem in \eqref{eq:problem}:
\begin{Theorem} \label{thm:main}
Let \eqref{eq:condkappanu}, \eqref{eq:condxi}, and \eqref{ass:etakappa}
hold true. In addition, suppose that $c$ follows the Riccati
dynamics~\eqref{eq:BSRDE} with terminal condition~\eqref{eq:BSRDEtc}
and satisfies the integrability conditions~\eqref{eq:BSRDEintcond2}
and~\eqref{ass:Xi}.
Then we have $\mathcal{U}^c \neq \varnothing$ if and only if
\eqref{ass:integrability} is satisfied. In this case, the optimal
control $\hat{u}^c \in \mathcal{U}^c$ for the auxiliary problem
\eqref{eq:problem} with controlled process
$\hat{X}^c_{\cdot} \triangleq X^{\hat{u}^c}_{\cdot}$ is given by the
feedback law
\begin{equation} \label{eq:optimalcontrol}
\hat{u}^c_t =
\frac{c_t}{\kappa_t} \left(\hat{\xi}^c_t - \hat{X}^c_t \right) \quad
(0 \leq t < T),
\end{equation}
and the minimal costs are
\begin{align}
J^c(\hat{u}^c) =
c_0 ( x - \hat{\xi}^c_0)^2
+ \mathbb{E} \left[ \int_0^T (\xi_t - \hat{\xi}^c_t)^2 \nu_t dt \right]
+ \mathbb{E} \left[ \int_{[0,T)} c_t d[\hat{\xi}^c
]_t \right]. \label{eq:minimalcosts}
\end{align}
\end{Theorem}
The proof of Theorem \ref{thm:main} is deferred to Section
\ref{sec:proofs} below. Observe that the feedback law of the optimal
control in \eqref{eq:optimalcontrol} prescribes a reversion towards
the optimal signal process $\hat{\xi}^c_t$ rather than towards the
current target position~$\xi_t$. The reversion speed is controlled by
the ratio $c_t/\kappa_t$. In particular, on the ``blow-up'' set
$\{ \eta = + \infty \}$ the optimizer reverts with stronger and
stronger urgency towards the optimal signal $\hat{\xi}^c$ and hence to
the ultimate target position $\Xi_T$ due to \eqref{eq:limitos}. This
result generalizes the insights from the constant coefficient case
with almost sure terminal state constraint which are presented in
\citet{BankSonerVoss:16}.
Under the integrability conditions \eqref{ass:integrability}, the
optimal costs $J^c(\hat{u}^c)$ in \eqref{eq:minimalcosts} of the
optimizer $\hat{u}^c$ in \eqref{eq:optimalcontrol} are obviously
finite. Actually, they nicely separate into three intuitively
appealing terms making transparent how the regularity and
predictability of the targets $\xi$ and $\Xi_T$ determine the
auxiliary problem's optimal value. The first term represents the costs
due to a possibly suboptimal initial position $x$. The second term
shows how the regularity of the target process~$\xi$ feeds into the
overall costs: Targets which are poorly approximated by the optimal
signal process $\hat{\xi}^c$ in the $L^2(\mathbb{P} \otimes \nu_t dt)$-sense
produce higher costs. Finally, the third term reveals the importance
of the optimal signal's quadratic variation process
$[\hat{\xi}^c]$. Referring to the definition of $\hat{\xi}^c$ in
\eqref{eq:defoptsignal} (cf. also the representation in
\eqref{eq:repxi} below), the quadratic variation~$[\hat{\xi}^c]$ can
be viewed as a measure for the strength of the fluctuations in the
assessment of the average future target positions of $\xi$, the
terminal position $\Xi_T$ and the random variable $L_T$ which involves
the outcome of the penalization parameter $\eta$ at time $T$. In this
sense, the second integrability condition in~\eqref{ass:integrability}
can be interpreted as encoding a condition on the predictability of
the final stochastic target position $\Xi_T$ as well as the random
penalization parameter $\eta$. Loosely speaking, it ensures that the
outcome of the final position $\Xi_T$ as well as the ``blow-up'' event
$\{ \eta = + \infty \}$ on which $\Xi_T$ has to be matched by controls
in $\mathcal{U}^c$ are not allowed to come as ``too big a surprise'' at final
time $T$; see also our discussion in Section~\ref{subsec:BSV} below.
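To illustrate the role of this third term in a simple special case:
if the supersolution $c$ is chosen deterministic (so that, in
particular, $N \equiv 0$) and if $\nu$, $\kappa$ as well as the
targets $\xi$ and $\Xi_T$ are deterministic, too, then $L$ and
$\hat{\xi}^c$ are deterministic, continuous and of finite variation,
whence $[\hat{\xi}^c] \equiv 0$ and the minimal costs
in~\eqref{eq:minimalcosts} reduce to
\begin{equation*}
  J^c(\hat{u}^c) = c_0 (x - \hat{\xi}^c_0)^2
  + \int_0^T (\xi_t - \hat{\xi}^c_t)^2 \nu_t dt.
\end{equation*}
In other words, the third term charges precisely the randomness in
$\xi$, $\Xi_T$ and $\eta$ which is revealed only gradually over time.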
\begin{Remark}[Interpretation of the optimal signal] \label{rem:osp}
Let us present a way to interpret our optimal signal process
$\hat{\xi}$ defined in~\eqref{eq:defoptsignal}. For ease of
presentation and to avoid unnecessary technicalities, let us assume
here that the convergence in~\eqref{eq:defLlimit} also holds in
$L^1(\mathbb{P})$, that $\mathbb{E}[L_T] > 0$ and that
$0 < \nu \in L^1(\mathbb{P} \otimes dt)$ (these assumptions merely simplify
the justification of the representation in~\eqref{eq:kernel} below;
cf. Lemma~\ref{lem:remark} in Section~\ref{sec:proofs}). Then, by
defining the \emph{weight process} $(w_t)_{0 \leq t < T}$ via
\begin{equation} \label{eq:weight}
w_t \triangleq \frac{\mathbb{E}[L_T \vert
\mathcal{F}_t]}{L_t} \qquad (0 \leq t < T)
\end{equation}
as well as the measure $\mathbb{Q} \ll \mathbb{P}$ on $(\Omega,\mathcal{F}_T)$ via
\begin{equation*} \label{eq:density} \frac{d\mathbb{Q}}{d\mathbb{P}} \triangleq
\frac{L_T}{\mathbb{E}[L_T]},
\end{equation*}
we may write
\begin{align}
\hat{\xi}_t & = \frac{1}{L_t}
\mathbb{E}\left[ \Xi_T L_T + \int_t^T \xi_r e^{-\int_0^r
\frac{c_u}{\kappa_u} du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t
\right] \nonumber \\
& = w_t \,
\mathbb{E}_\mathbb{Q}[\Xi_T \vert \mathcal{F}_t] + (1-w_t) \, \mathbb{E} \left[ \int_t^T \xi_r
\frac{e^{-\int_t^r \frac{c_u}{\kappa_u} du}}{(1-w_t)c_t}
\nu_r dr \bigg\vert \mathcal{F}_t \right] \label{eq:repxi}
\end{align}
for all $0 \leq t < T$. Recall that the process
$(L_t)_{0 \leq t < T}$ is a strictly positive supermartingale by
virtue of Lemma \ref{lem:L}. Consequently, the weight process
satisfies
\begin{equation*}
0 \leq w_t < 1 \quad \mathbb{P}\text{-a.s. for all } 0 \leq t < T,
\end{equation*}
where the strict inequality follows from Lemma~\ref{lem:remark}
below because we assumed $\nu > 0$ here for simplicity. Moreover,
the same lemma gives the identity
\begin{equation}
\mathbb{E} \left[ \int_t^T \frac{e^{-\int_t^r \frac{c_u}{\kappa_u}du}}{(1-w_t)c_t}
\nu_r dr \bigg\vert \mathcal{F}_t \right] = 1 \quad d\mathbb{P}
\otimes dt\text{-a.e. on } \Omega \times [0,T). \label{eq:kernel}
\end{equation}
That is, loosely speaking, the optimal signal process $\hat{\xi}$
in \eqref{eq:repxi} is a convex combination of a weighted average
of expected future target positions of $\xi$ and the expected
terminal position~$\Xi_T$, computed under the auxiliary measure
$\mathbb{Q}$. The weight shifts gradually towards the ultimate target
position $\Xi_T$ as $t \uparrow T$, provided that $L_T >
0$. Indeed, by definition of the weight process
in~\eqref{eq:weight}, martingale convergence theorem and the
convergence of the process $L$ in Lemma \ref{lem:L}, we have
\begin{equation*}
\exists \, \lim_{t \uparrow T} w_t = 1 \quad \text{on the set }
\{ L_T > 0 \}.
\end{equation*}
\end{Remark}
\section{Discussion and illustration}
\label{sec:illustration}
Let us return to the initial stochastic LQ
problem~\eqref{eq:originalLQP} with target
functional~\eqref{eq:originalobjective} and stochastic terminal state
constraint~\eqref{eq:terminalconstraint} and discuss how it relates to our
auxiliary LQ problem~\eqref{eq:problem}. Observe that, for the latter,
we tackle and resolve the delicate partial terminal state constraint
$X^u_T = \Xi_T$ on $\{ \eta = + \infty \}$ incorporated in the set of
admissible policies $\mathcal{U}^\Xi$ in~\eqref{def:originalpolicies} by
performing a \emph{truncation in time} in the auxiliary objective
functional $J^c$ in~\eqref{eq:defobjective}. Specifically, we replace
the original target functional $J^\eta$ of
problem~\eqref{eq:originalLQP} by a properly chosen limit of
stochastic LQ target functionals with strictly shorter time horizon
$\tau < T$ at which we impose a finite terminal penalization term
$c_{\tau} (X^u_\tau - \hat{\xi}^c_\tau)^2$. In fact, the optimal
signal process $\hat{\xi}^c$ turns out to be the proper key ingredient
for choosing these penalizations in a \emph{time consistent} manner;
see Remark~\ref{rem:timeconsistent} below. Moreover, in light of
$\liminf_{t \uparrow T} c_t \geq \eta$ in~\eqref{eq:BSRDEtc} and
$\lim_{t \uparrow T} \hat{\xi}^c_t = \Xi_T$ on
$\{0 < \eta \leq +\infty\}$ in \eqref{eq:limitos}, for any $\tau < T$
the penalty $c_{\tau} (X^u_\tau - \hat{\xi}^c_\tau)^2$ can be viewed
as a proxy of the terminal penalty
$\eta 1_{\{0 \leq \eta < +\infty \}} (X^u_T - \Xi_T)^2$ in
$J^\eta$. Indeed, appealing to Fatou's Lemma, monotone convergence as
well as \eqref{eq:BSRDEtc} and \eqref{eq:limitos}, we readily obtain
the following:
\begin{Lemma} \label{lem:dominate} It holds that $J^c(u) \geq J^\eta(u)$
for all $u \in \mathcal{U}^c$ and all processes $c$
satisfying~\eqref{eq:BSRDE}, \eqref{eq:BSRDEtc} and~\eqref{ass:Xi}.
In particular, we have
\begin{equation*}
X^u_T = \Xi_T \quad \text{ on the set } \{ \eta = + \infty \}
\text{ for all } u \in \mathcal{U}^c,
\end{equation*}
that is, $\mathcal{U}^c \subseteq \mathcal{U}^{\Xi}$. \qed
\end{Lemma}
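For the reader's convenience, let us sketch the argument behind
Lemma~\ref{lem:dominate}: fix $u \in \mathcal{U}^c$ and a sequence of stopping
times $\tau^n \uparrow T$. Monotone convergence takes care of the two
running cost integrals, while Fatou's Lemma combined
with~\eqref{eq:BSRDEtc}, \eqref{eq:limitos} and the continuity of
$X^u$ up to time $T$ yields
\begin{equation*}
  \liminf_{n \rightarrow \infty} \mathbb{E}\left[ c_{\tau^n}
    \bigl(X^u_{\tau^n} - \hat{\xi}^c_{\tau^n}\bigr)^2 \right]
  \geq \mathbb{E}\left[ \eta 1_{\{0 \leq \eta < +\infty\}}
    \bigl(X^u_T - \Xi_T\bigr)^2 \right],
\end{equation*}
whence $J^c(u) \geq J^\eta(u)$. Moreover, on
$\{ \eta = +\infty \} \cap \{ X^u_T \neq \Xi_T \}$ the random
variables $c_{\tau^n} (X^u_{\tau^n} - \hat{\xi}^c_{\tau^n})^2$ tend to
$+\infty$, so that $J^c(u) < +\infty$ forces this event to be a
$\mathbb{P}$-null set, i.e., $X^u_T = \Xi_T$ on $\{ \eta = +\infty \}$ for
all $u \in \mathcal{U}^c$.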
In light of this lemma, it appears very natural for our auxiliary LQ
problem in~\eqref{eq:problem} to consider parameter processes $c$
which are \emph{minimal} supersolutions. In fact, this motivates our
conjecture~\eqref{eq:conjecture} that
\begin{equation*}
\argmin_{\mathcal{U}^{\Xi}} J^\eta = \argmin_{\mathcal{U}^{c^{\min}}} J^{c^{\min}}
\end{equation*}
holds true for the minimal supersolution $c^{\min}$ to the BSRDE
in~\eqref{eq:BSRDE} with terminal condition~\eqref{eq:BSRDEtc}. In the
following paragraphs of this section, we provide evidence for the
validity of this conjecture. Specifically, we will show how our
approach via the auxiliary LQ problem in~\eqref{eq:problem} with
$c^{\min}$ allows us to recover existing results in the literature to
specific variants of the stochastic LQ problem with stochastic
terminal state constraint posed in~\eqref{eq:originalLQP}. We will
also discuss possible approaches to prove the
conjecture~\eqref{eq:conjecture} based on the insights from
Section~\ref{subsec:mainresult}; see the end of
Section~\ref{subsec:KP} and also Remark~\ref{rem:timeconsistent} in
Section~\ref{sec:proofs}.
\subsection{Bounded coefficients}
\label{subsec:KT}
In the case where $\eta$ is bounded along with the processes
$(\nu_t)_{0 \leq t \leq T}$, $(\kappa_t)_{0 \leq t \leq T}$ and
$(\xi_t)_{0 \leq t \leq T}$, our conjecture in~\eqref{eq:conjecture}
holds true. Indeed, under these conditions and within a Brownian
framework,~\citet{KohlmannTang:02} provide existence and uniqueness of
a (minimal supersolution) $c^{\min}$ to the stochastic Riccati
equation~\eqref{eq:BSRDE} such that
$\lim_{t \uparrow T} c^{\min}_t = \eta$ holds true $\mathbb{P}$-a.s. They
show that the optimal control $\hat{u}^{c^{\min}}$
in~\eqref{eq:optimalcontrol} from our Theorem~\ref{thm:main} solves
the LQ problem in~\eqref{eq:originalLQP} with objective functional
$J^\eta$ (over the set of unconstrained policies~$\mathcal{U}^{\Xi}$, recall
Remark~\ref{rem:intro}, 1.)); see~\citet{KohlmannTang:02}, Theorem
5.2. Obviously, our necessary and sufficient integrability conditions
stated in~\eqref{ass:integrability} are satisfied in this case. Note,
though, that in~\cite{KohlmannTang:02}, Section~5.1, the optimal
control $\hat{u}^{c^{\min}}$ is characterized in terms of both the
process $c^{\min}$ and the solution process $b$ to the linear BSDE
\begin{equation} \label{eq:linBSDEKT}
db_t = \left(
\frac{c^{\min}_t}{\kappa_t} b_t - \nu_t \xi_t \right) dt + dM_t \quad \text{on }
[0,T] \text{ with } b_T = \eta \Xi_T,
\end{equation}
with some c\`adl\`ag (local) martingale $(M_t)_{0 \leq t \leq
T}$. More precisely, the optimal control is
described by the feedback law
\begin{equation*}
\hat{u}^{c^{\min}}_t = -\frac{1}{\kappa_t} \left( c^{\min}_t
\hat{X}^{c^{\min}}_t - b_t \right) = \frac{c^{\min}_t}{\kappa_t}
\left( \frac{b_t}{c^{\min}_t}-
\hat{X}^{c^{\min}}_t \right) \quad (0 \leq t \leq T).
\end{equation*}
That is, in that setting without terminal constraints, our signal
process $\hat{\xi}^{c^{\min}}$ of~\eqref{eq:defoptsignal} coincides
with the ratio $b/c^{\min}$ and so the solution $b$ to the linear BSDE
is an equivalent substitute for this signal process. In contrast, in
case where $\{\eta = +\infty, \Xi_T \not=0\}$ has positive
probability, the terminal condition $b_T=\eta \Xi_T$ becomes
problematic in the sense that the linear BSDE loses all information on
$\Xi_T$ except its sign. Our signal process $\hat{\xi}^{c^{\min}}$,
however, still makes sense in this rather natural case. Note that
$c^{\min}\hat{\xi}^{c^{\min}}$ still satisfies the linear BSDE
dynamics in~\eqref{eq:linBSDEKT} on $[0,T)$ (see
equation~\eqref{eq:dymcxi3} below) but this product may not have a
sensible terminal value on $\{\eta=0\}
\cup\{\eta=+\infty\}$. Fortunately, as our analysis shows, the optimal
signal process always makes sense when needed. In particular, its
possible lack of a terminal value on $\{\eta=0\}$ is without harm for
our approach to the optimization problem. It thus can be viewed as a
convenient substitute for the no longer operative linear BSDE above.
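For concreteness, the identification $\hat{\xi}^{c^{\min}} =
b/c^{\min}$ can be read off from the dynamics derived in
Section~\ref{sec:proofs}: by~\eqref{eq:dymcxi3} below (with
$\tilde{M}$ denoting the martingale from~\eqref{eq:MtY}), we have on
$[0,T)$
\begin{equation*}
  d\bigl( c^{\min}_t \hat{\xi}^{c^{\min}}_t \bigr)
  = \left( \frac{(c^{\min}_t)^2}{\kappa_t}\, \hat{\xi}^{c^{\min}}_{t-}
    - \nu_t \xi_t \right) dt
  + \frac{c^{\min}_{t-}}{L_{t-}}\, d\tilde{M}_t,
\end{equation*}
which is precisely the linear BSDE~\eqref{eq:linBSDEKT} for $b =
c^{\min} \hat{\xi}^{c^{\min}}$ with martingale part
$M_{\cdot} = \int_0^{\cdot} (c^{\min}_{s-}/L_{s-})\, d\tilde{M}_s$
(recall that $\hat{\xi}^{c^{\min}}_{s-} = \hat{\xi}^{c^{\min}}_s$ for
Lebesgue-almost every $s$).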
\subsection{Constant coefficients}
\label{subsec:BSV}
In the case of constant coefficients $\nu_t \equiv \nu \in \mathbb{R}_+$,
$\kappa_t \equiv \kappa \in \mathbb{R}_+$ and $\eta \in [0,+\infty]$ the
stochastic Riccati differential equation in~\eqref{eq:BSRDE} boils
down to a deterministic \emph{ordinary Riccati differential equation}
on $[0,T]$ of the form
\begin{equation*}
c'_t = \frac{c^2_t}{\kappa} - \nu\quad
\text{ subject to }
c_T = \eta
\end{equation*}
with explicitly available deterministic (minimal super-)solutions
\begin{equation} \label{eq:solRODE}
c^{\min}_t =
\begin{cases}
\sqrt{\nu \kappa} \, \frac{\sqrt{\nu \kappa} \sinh\left(
\sqrt{\nu/\kappa}\,(T-t) \right) + \eta \cosh\left(
\sqrt{\nu/\kappa}\,(T-t) \right)}{\eta \sinh\left(
\sqrt{\nu/\kappa}\,(T-t) \right) + \sqrt{\nu \kappa}
\cosh\left( \sqrt{\nu/\kappa}\,(T-t) \right)} & 0 \leq \eta
< + \infty \\
\sqrt{\nu \kappa} \coth(\sqrt{\nu} (T-t)/\sqrt{\kappa}), & \eta =
+ \infty
\end{cases},
\end{equation}
for all $0 \leq t < T$. As a consequence, the process $L$ given
in~\eqref{eq:defL} is also deterministic and the optimal signal
process $\hat{\xi}^{c^{\min}}$ in~\eqref{eq:defoptsignal} can be
computed explicitly (up to the conditional expectation). Again, our
conjecture in~\eqref{eq:conjecture} holds true. Indeed, our optimal
control $\hat{u}^{c^{\min}}$ from~\eqref{eq:optimalcontrol} provided
in Theorem~\ref{thm:main} coincides with the optimal solution of the
stochastic LQ problem in~\eqref{eq:originalLQP} with objective
functional $J^0$ and $J^{\infty}$, respectively, derived
in~\citet{BankSonerVoss:16}, Theorems~3.1 and~3.2. Therein, our first
integrability condition in~\eqref{ass:integrability} is satisfied as
soon as the target process $\xi$ belongs to $L^2(\mathbb{P} \otimes dt)$ and
$\Xi_T \in L^2(\mathbb{P},\mathcal{F}_{T-})$. The second integrability condition
in~\eqref{ass:integrability} simplifies to a condition on the terminal
position $\Xi_T$ which is equivalent to
\begin{equation*}
\int_0^T \frac{\mathbb{E}[(\Xi_T - \mathbb{E}[\Xi_T \vert \mathcal{F}_s])^2]}{(T-s)^2} ds
< \infty;
\end{equation*}
see Remark 2.1 and Lemma 5.4 in \cite{BankSonerVoss:16}. This
reveals that the ultimate target position $\Xi_T$ has to become known
``fast enough'' for the optimally controlled process
$\hat{X}^{c^{\min}}$ to reach it at terminal time $T$ with
finite expected costs; cf. also~\citet{AnkirchnerKruse:15} who
confine themselves to stochastic terminal state constraints of the
form $\Xi_T = \int_0^T \lambda_t dt$ for some progressively measurable
and suitably integrable process $(\lambda_t)_{0 \leq t \leq T}$ which
are gradually revealed as $t \uparrow T$. Related results of this
nature are also provided in~\citet{LueYongZhang:12}.
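As a quick sanity check of~\eqref{eq:solRODE} in the fully singular
case $\eta \equiv +\infty$ (with $\nu > 0$), set $\alpha \triangleq
\sqrt{\nu/\kappa}$. Since $\frac{d}{du} \coth u = 1 - \coth^2 u$, the
deterministic function $c^{\min}_t = \sqrt{\nu\kappa}\,
\coth(\alpha(T-t))$ indeed satisfies
\begin{equation*}
  \frac{d}{dt}\, c^{\min}_t
  = \sqrt{\nu\kappa}\, \alpha \bigl( \coth^2(\alpha(T-t)) - 1 \bigr)
  = \frac{(c^{\min}_t)^2}{\kappa} - \nu
  \quad \text{and} \quad
  \lim_{t \uparrow T} c^{\min}_t = +\infty.
\end{equation*}
Moreover, $\int_0^t c^{\min}_u/\kappa\, du = \log\bigl(
\sinh(\alpha T)/\sinh(\alpha(T-t)) \bigr)$, so that~\eqref{eq:defL}
reduces to $L_t = \sqrt{\nu\kappa}\, \cosh(\alpha(T-t))/\sinh(\alpha T)$
with strictly positive limit $L_T = \sqrt{\nu\kappa}/\sinh(\alpha T)$,
in line with Lemma~\ref{lem:L}, and the weight process
of~\eqref{eq:weight} becomes the deterministic function
$w_t = 1/\cosh(\alpha(T-t))$, which increases to $1$ as $t \uparrow T$.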
For the general case with stochastic coefficients
$\nu = (\nu_t)_{0 \leq t \leq T}$,
$\kappa = (\kappa_t)_{0 \leq t \leq T}$ and random
$\eta \in [0,\infty]$, similar effects are to be expected concerning
the final target position $\Xi_T$ \emph{and} the ``blow up'' event
$\{ \eta = + \infty\}$. As, in general, all these coefficients can be
rather intricately intertwined with one another, it seems difficult to
give conditions on these that ensure $\mathcal{U}^\Xi \neq \varnothing$ and
are more succinct than our conditions in~\eqref{ass:integrability}.
\subsection{Special case: Vanishing targets}
\label{subsec:KP}
In the special case $\xi \equiv \Xi_T \equiv 0$ $\mathbb{P}$-a.s., where
obviously $\hat{\xi} \equiv 0$ and the integrability conditions
in~\eqref{ass:integrability} hold trivially, our conjecture
in~\eqref{eq:conjecture} holds true as
well. Indeed,~\citet{KrusePopier:16_1} derive under sufficient
integrability conditions on $(\kappa_t)_{0 \leq t \leq T}$ and
$(\nu_t)_{0 \leq t \leq T}$ existence of a minimal supersolution
$c^{\min}$ to the Riccati BSDE in~\eqref{eq:BSRDE} with terminal
condition $\liminf_{t \uparrow T} c_t \geq \eta \in [0,+\infty]$
(recall~\eqref{eq:BSRDEtc}). The minimal supersolution $c^{\min}$ is
constructed via the monotone limit
$c_t^{\min} \triangleq \lim_{n \uparrow \infty} c_t^{(n)}$ for all
$t \in [0,T)$, where $c^{(n)}$ denotes the unique (minimal
super-)solution with Riccati dynamics~\eqref{eq:BSRDE} satisfying the
terminal condition $\lim_{t \uparrow T} c^{(n)}_t = \eta \wedge n$ for
each $n \geq 1$. They show that the optimal control
$\hat{u}^{c^{\min}}$ with state process~\eqref{eq:defX} to the
stochastic LQ problem in~\eqref{eq:originalLQP} with
$\xi \equiv \Xi_T \equiv 0$ is given as in~\eqref{eq:optimalcontrol} of
our Theorem~\ref{thm:main}; see \cite{KrusePopier:16_1},
Theorem~3. That is, since $\hat{\xi}^{c^{\min}} \equiv 0$, the optimal
control with controlled process
$\hat{X}^{c^{\min}} \triangleq X^{\hat{u}^{c^{\min}}}$ is simply given by
\begin{equation} \label{eq:optcontKP}
\hat{u}^{c^{\min}}_t =
-\frac{c^{\min}_t}{\kappa_t} X^{\hat{u}^{c^{\min}}}_t = -\frac{x
L_t}{\kappa_t} \quad (0 \leq t \leq T)
\end{equation}
and the corresponding optimal costs
in~\eqref{eq:minimalcosts} simplify dramatically to
\begin{equation} \label{eq:optvalKP}
J^{c^{\min}}(\hat{u}^{c^{\min}}) = c^{\min}_0 x^2.
\end{equation}
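The second equality in~\eqref{eq:optcontKP} and the
value~\eqref{eq:optvalKP} can be read off directly from
Theorem~\ref{thm:main}: with $\hat{\xi}^{c^{\min}} \equiv 0$, the
feedback law~\eqref{eq:optimalcontrol} turns the state dynamics into
the linear equation $d\hat{X}^{c^{\min}}_t = -(c^{\min}_t/\kappa_t)
\hat{X}^{c^{\min}}_t dt$, whence
\begin{equation*}
  \hat{X}^{c^{\min}}_t = x\, e^{-\int_0^t \frac{c^{\min}_u}{\kappa_u} du}
  = \frac{x L_t}{c^{\min}_t}
  \quad \text{and thus} \quad
  \hat{u}^{c^{\min}}_t = -\frac{c^{\min}_t}{\kappa_t}\,
  \hat{X}^{c^{\min}}_t = -\frac{x L_t}{\kappa_t}
\end{equation*}
for $0 \leq t < T$; in~\eqref{eq:minimalcosts}, only the first term
survives, which gives~\eqref{eq:optvalKP}.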
In fact, in order to tackle the partial state constraint $X^u_T = 0$ on the
set $\{ \eta = + \infty \}$,~\citet{KrusePopier:16_1} proceed via a
\emph{truncation in space}. Specifically, they introduce a family of
unconstrained variants of problem~\eqref{eq:originalLQP} (with
$\xi \equiv \Xi_T \equiv 0$) with objective functionals
\begin{equation} \label{eq:objectivetruncspace}
J^{(n)}(u) \triangleq \mathbb{E} \left[
\int_0^T (X^u_t)^2 \nu_t dt + \int_0^T \kappa_t u^2_t dt +
(\eta \wedge n) (X^u_T)^2 \right],
\end{equation}
where the random penalization parameter $\eta$ is replaced by
truncated versions $\eta \wedge n$. Then the corresponding optimal
controls $\hat{u}^{(n)}_t=-c^{(n)}_t X^{\hat{u}^{(n)}}_t/\kappa_t$ and
the corresponding optimal costs
$J^{(n)}(\hat{u}^{(n)}) = c_0^{(n)}x^2$ clearly satisfy
\begin{equation} \label{eq:KPequality}
\begin{aligned}
J^{\eta}(\hat{u}^\eta) \leq & \; J^{c^{\min}}(\hat{u}^{c^{\min}}) =
c^{\min}_0 x^2 = \lim_{n
\uparrow \infty} c_0^{(n)} x^2 \\
= & \; \lim_{n
\uparrow \infty} J^{(n)} (\hat{u}^{(n)}) = \lim_{n \uparrow \infty}
J^{c^{(n)}}(\hat{u}^{c^{(n)}}) \leq J^{\eta}(\hat{u}^\eta),
\end{aligned}
\end{equation}
where $\hat{u}^\eta$ denotes the optimizer of
problem~\eqref{eq:originalLQP} (with $\xi \equiv \Xi_T \equiv 0$). It
follows that equality holds everywhere and, by uniqueness of
optimizers, $\hat{u}^\eta = \hat{u}^{c^{\min}}$ as conjectured
in~\eqref{eq:conjecture}.
For the general case $\xi \neq 0$ and $\Xi_T \neq 0$, one could
likewise introduce as above in~\eqref{eq:objectivetruncspace} a family
of unconstrained variants of problem~\eqref{eq:originalLQP} with
objective functionals
\begin{equation*}
J^{(n)}(u) = \mathbb{E} \left[
\int_0^T (X^u_t - \xi_t)^2 \nu_t dt + \int_0^T \kappa_t u^2_t dt +
(\eta \wedge n) (X^u_T - \Xi_T)^2 \right].
\end{equation*}
Recall from the discussion in Section~\ref{subsec:KT} that this
stochastic LQ problem is fully characterized by the solution processes
$c^{(n)}$ and $b^{(n)}$ satisfying the Riccati BSDE in
\eqref{eq:BSRDE} and the linear BSDE in~\eqref{eq:linBSDEKT} with
terminal conditions $c^{(n)}_T = \eta \wedge n$ and
$b^{(n)}_T = (\eta \wedge n) \Xi_T$, respectively. In addition, under
sufficient conditions (e.g. boundedness as discussed in
Section~\ref{subsec:KT}) which guarantee~\eqref{ass:integrability} as
well as the convergence
\begin{equation*}
\limsup_{\tau
\uparrow T} \mathbb{E}\left[ c^{(n)}_{\tau} (X_\tau^{\hat{u}^{c^{(n)}}} -
\hat{\xi}_{\tau}^{c^{(n)}} )^2 \right] = \mathbb{E} \left[ (\eta \wedge n) (X_T^{\hat{u}^{c^{(n)}}} -
\Xi_{T} )^2 \right],
\end{equation*}
our Theorem~\ref{thm:main} applies in this
context and it holds that
\begin{equation} \label{eq:limitJ}
\begin{aligned}
J^{(n)}(\hat{u}^{(n)}) = & \;
c^{(n)}_0 ( x - \hat{\xi}^{c^{(n)}}_0)^2 \\
& + \mathbb{E} \left[ \int_0^T (\xi_t - \hat{\xi}^{c^{(n)}}_t)^2 \nu_t dt \right]
+ \mathbb{E} \left[ \int_{[0,T)} c_t^{(n)} d[\hat{\xi}^{c^{(n)}}
]_t \right]
\end{aligned}
\end{equation}
for the optimal control $\hat{u}^{(n)}$. As
in~\citet{KrusePopier:16_1}, one could then try to pass to the limit
$n \uparrow \infty$. However, passing to the limit is not as
straightforward in \eqref{eq:limitJ} as it is in \eqref{eq:KPequality}
where we relied heavily on $\xi \equiv 0$, $\Xi_T \equiv 0$. Indeed,
for convergence of~\eqref{eq:limitJ}, a suitable convergence of our
signal processes $\hat{\xi}^{c^{(n)}}$ would be required which seems
to be out of reach with the current knowledge of singular BSRDEs and
so a full proof of our conjecture~\eqref{eq:conjecture} by this
approach has to be left for future research.
\section{Proofs}
\label{sec:proofs}
Throughout this section we work under the assumptions of our main
result, Theorem \ref{thm:main}; for brevity, we often drop the
superscript $c$ and simply write $J$, $\hat{\xi}$, $\hat{u}$ and
$\hat{X}$ for $J^c$, $\hat{\xi}^c$, $\hat{u}^c$ and
$\hat{X}^c$. Its verification relies on a
completion of squares argument similar to \citet{KohlmannTang:02}
(cf. also \citet{YongZhou:99} for this method in solving LQ
problems). The following lemma summarizes the key identity for our
verification and illustrates again the usefulness of our signal
process $\hat{\xi}$.
\begin{Lemma} \label{lem:mastereq} Suppose the assumptions of
Theorem~\ref{thm:main} hold true. Then for all progressively
measurable, $\mathbb{P}$-a.s. locally $L^2([0,T),\kappa_t dt)$-integrable
processes $u$, the cost process
\begin{equation*}
C_t(u) \triangleq \int_0^t (X^u_s - \xi_s)^2 \nu_s ds
+ \int_0^t \kappa_s u^2_s ds + c_t
(X^u_t - \hat{\xi}_t)^2 \quad (0 \leq t < T)
\end{equation*}
is a nonnegative, c\`adl\`ag local submartingale. It allows for the
decomposition
\begin{equation} \label{eq:master}
C_t(u) = c_0(x-\hat{\xi}_0)^2 + A_t(u) + M_t(u)
\quad (0 \leq t < T),
\end{equation}
where
\begin{align}
A_t(u) \triangleq
& ~\int_0^t (\xi_s - \hat{\xi}_s)^2 \nu_s ds +
\int_0^t c_s d[\hat{\xi}]_s \nonumber \\
& + \int_0^t \kappa_s \left( u_s -
\frac{c_s}{\kappa_s} \left( \hat{\xi}_s - X^u_s \right)
\right)^2 ds \quad (0 \leq t < T)\label{eq:A} \\
\intertext{is a right continuous, nondecreasing, adapted process
and where}
M_t(u) \triangleq
& ~ \int_0^t (\hat{\xi}^2_{s-} - (X^u_{s-})^2) dN_s +
2 \int_0^t \frac{c_{s-}}{L_{s-}} (\hat{\xi}_{s-} -
X^u_{s-}) d\tilde{M}_s \label{eq:M} \\
\intertext{with}
\tilde{M}_t \triangleq &~ \mathbb{E} \left[\Xi_T L_T + \int_0^T \xi_s e^{-\int_0^s
\frac{c_u}{\kappa_u} du} \nu_s ds \, \bigg\vert\, \mathcal{F}_t \right] \quad (0 \leq t < T)
\label{eq:MtY}
\end{align}
is a local martingale on $[0,T)$.
\end{Lemma}
\begin{proof}
First, note that by~\eqref{eq:condkappanu} and
$u \in L^2([0,T),\kappa_t dt)$ locally $\mathbb{P}$-a.s., the cost process
$(C_t(u))_{0 \leq t < T}$ in~\eqref{eq:master} is well defined along
with $X^u$. Let us expand
\begin{equation*}
c_t ( X^u_t - \hat{\xi}_t)^2 = c_t (X^u_t)^2 - 2 X^u_t c_t
\hat{\xi}_t + c_t \hat{\xi}^2_t \quad (0 \leq t < T)
\end{equation*}
and then apply It\^o's formula to each of the resulting three
terms. To prepare for this, we compute the dynamics of the
processes $\hat{\xi}$, $c\hat{\xi}$ and $c\hat{\xi}^2$ in
Steps 1, 2 and 3 below. In Step 4 we put
everything together and derive our main identity \eqref{eq:master}.
\emph{Step 1:} We start by computing the dynamics of our optimal
signal process $(\hat{\xi}_t)_{0 \leq t < T}$ defined in
\eqref{eq:defoptsignal}. For ease of notation, let us define the process
\begin{equation*}
Y_t \triangleq \int_0^t \xi_r e^{-\int_0^r \frac{c_u}{\kappa_u} du} \nu_r
dr \quad (0 \leq t \leq T).
\end{equation*}
Observe that $Y_T \in L^1(\mathbb{P})$ due to \eqref{eq:condxi}. Moreover,
recall that $\Xi_T L_T \in L^1(\mathcal{F}_{T-},\mathbb{P})$ by \eqref{ass:Xi} so
that \eqref{eq:MtY} defines a c\`adl\`ag martingale on
$[0,T]$. By the definition of $\hat{\xi}$ in
\eqref{eq:defoptsignal}, we can now express $\hat{\xi}$ in terms of
$Y$ and $\tilde{M}$ via
\begin{equation}
\hat{\xi}_t = \frac{1}{L_t}
\mathbb{E}\left[ \Xi_T L_T + \int_t^T \xi_r
e^{-\int_0^r \frac{c_u}{\kappa_u} du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t
\right]
= \frac{1}{L_t} \left( \tilde{M}_t - Y_t \right) \label{eq:optsignal}
\end{equation}
for all $0 \leq t < T$. Next, recall the dynamics of $L$ on $[0,T)$
in \eqref{eq:SDEL} and note that
\begin{equation}
\Delta L_t = - \frac{L_{t-}}{c_{t-}} \Delta N_t \quad \text{and} \quad
[L]^c_t = \int_0^t \frac{L^2_{s-}}{c^2_{s-}} d[N]^c_s, \label{eq:jumpL}
\end{equation}
where $[L]^c$ and $[N]^c$ denote the path-by-path continuous parts
of the quadratic variations $[L]$ and $[N]$, respectively (cf.,
e.g., \citet{Prot:04}, Chapter II.6, for more details). Hence,
applying It\^o's formula as in, e.g., \cite{Prot:04}, Theorem II.32,
we obtain
\begin{align}
\frac{1}{L}_t = & ~\frac{1}{L_0} - \int_0^t \frac{1}{L^2_{s-}} dL_s
+ \int_0^t \frac{1}{L^3_{s-}} d[L]_s^c \nonumber \\
& + \sum_{s \leq t} \left( \frac{1}{L_s} - \frac{1}{L_{s-}} +
\frac{1}{L^2_{s-}} \Delta L_s \right). \label{eq:dyminvL1}
\end{align}
Using \eqref{eq:jumpL}, the summands in
the sum in \eqref{eq:dyminvL1} above can be written as
\begin{equation*}
\frac{L_{s-} - L_s}{L_s L_{s-}} - \frac{\Delta
N_s}{L_{s-}c_{s-}} = \frac{\Delta N_s}{c_{s-}}
\frac{L_{s-} - L_s}{L_s L_{s-}} = \frac{(\Delta
N_s)^2}{L_s c_{s-}^2} = \frac{(\Delta
N_s)^2}{L_{s-} c_{s-} c_s},
\end{equation*}
where we also used $\Delta c_s = - \Delta N_s$ and thus the identity
$1/L_s = c_{s-}/(L_{s-} c_s)$ in the last equality. Hence,
together with the dynamics of $L$ in \eqref{eq:SDEL} and $[L]^c$ in
\eqref{eq:jumpL} we can rewrite \eqref{eq:dyminvL1} as
\begin{align}
\frac{1}{L}_t = & ~ \frac{1}{L_0} + \int_0^t
\frac{\nu_s}{L_{s-} c_{s-}} ds
+ \int_0^t \frac{1}{L_{s-} c_{s-}} dN_s \nonumber \\
& +
\int_0^t \frac{1}{L_{s-} c^2_{s-}} d[N]^c_s + \sum_{s \leq t} \frac{(\Delta
N_s)^2}{L_{s-} c_{s-} c_s}. \label{eq:dyminvL2}
\end{align}
Now, integrating by parts in \eqref{eq:optsignal} and then using the
dynamics of $1/L$ in \eqref{eq:dyminvL2} gives us
\begin{align}
\hat{\xi}_t
= & ~ \hat{\xi}_0 + \int_0^t \frac{1}{L_{s-}}
(d\tilde{M}_s - dY_s) + \int_0^t \hat{\xi}_{s-} L_{s-}
d\left( \frac{1}{L_s} \right) + \left[ \frac{1}{L},
\tilde{M} \right]_t \nonumber \\
= & ~ \hat{\xi}_0 - \int_0^t
(\xi_s -\hat{\xi}_{s-}) \frac{\nu_s}{c_{s-}} ds + \int_0^t
\frac{1}{L_{s-}} d\tilde{M}_s + \int_0^t
\frac{\hat{\xi}_{s-}}{c_{s-}} dN_s \nonumber \\
& + \int_0^t
\frac{\hat{\xi}_{s-}}{c^2_{s-}} d[N]^c_s +
\sum_{s \leq t} \frac{\hat{\xi}_{s-}}{c_{s-} c_s } (\Delta
N_s)^2
+ \left[ \frac{1}{L},
\tilde{M} \right]_t, \label{eq:dymxi1}
\end{align}
where the quadratic covariation can be computed as
\begin{align}
\left[ \frac{1}{L},
\tilde{M} \right]_t =
&
~\int_0^t \frac{1}{L_{s-}
c_{s-}} d
[\tilde{M},N]^c_s \nonumber \\
& ~+
\sum_{s \leq t} \left( \frac{\Delta \tilde{M}_s
\Delta N_s}{L_{s-} c_{s-}}
+ \frac{(\Delta N_s)^2 \Delta \tilde{M}_s}{L_{s-} c_{s-} c_s }
\right). \label{eq:dymxi2}
\end{align}
Collecting all the sums in \eqref{eq:dymxi1} together with those in
\eqref{eq:dymxi2} yields
\begin{align}
& \sum_{s \leq t} \frac{\Delta N_s}{L_{s-} c_{s-} c_s} \left( c_s
\Delta \tilde{M}_s + \Delta N_s \Delta \tilde{M}_s +
\hat{\xi}_{s-} L_{s-} \Delta N_s \right) \nonumber \\
& = \sum_{s \leq t} \frac{\Delta N_s}{L_{s-} c_{s-} c_s} \left(
\hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s \right), \label{eq:dymxi3}
\end{align}
where we used the fact that $\Delta N_s = -\Delta c_s$ as well as
\begin{equation}
\Delta \tilde{M}_s = \tilde{M}_s - \tilde{M}_{s-} = \hat{\xi}_s L_s -
\hat{\xi}_{s-} L_{s-} \label{eq:jumpMtilde}
\end{equation}
due to the representation of $\hat{\xi}$ in \eqref{eq:optsignal} and
the continuity of $Y$. Plugging back \eqref{eq:dymxi3} into
\eqref{eq:dymxi1} finally gives us
\begin{align}
\hat{\xi}_t =
& ~ \hat{\xi}_0 - \int_0^t
(\xi_s -\hat{\xi}_{s-}) \frac{\nu_s}{c_{s-}} ds + \int_0^t
\frac{1}{L_{s-}} d\tilde{M}_s + \int_0^t
\frac{\hat{\xi}_{s-}}{c_{s-}} dN_s \nonumber \\
& + \int_0^t
\frac{\hat{\xi}_{s-}}{c^2_{s-}} d[N]^c_s
+ \int_0^t \frac{1}{L_{s-}
c_{s-}} d[\tilde{M},N]^c_s \nonumber \\
& + \sum_{s \leq t} \frac{\Delta N_s}{L_{s-} c_{s-} c_s} \left(
\hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s \right). \label{eq:dymxi4}
\end{align}
\emph{Step 2:} Let us now compute the dynamics of
$c\hat{\xi}$. Again, integration by parts, together with the
dynamics of $\hat{\xi}$ in \eqref{eq:dymxi4}, yields
\begin{align}
c_t \hat{\xi}_t =
& ~ c_0 \hat{\xi}_0 + \int_0^t c_{s-} d\hat{\xi}_s
+ \int_0^t \hat{\xi}_{s-} dc_s + \left[ c,
\hat{\xi} \right]_t \nonumber \\
= & ~ c_0 \hat{\xi}_0 - \int_0^t \xi_s \nu_s ds + \int_0^t
\hat{\xi}_{s-} \frac{c_s^2}{\kappa_s} ds + \int_0^t
\frac{c_{s-}}{L_{s-}} d\tilde{M}_s \nonumber \\
& + \int_0^t \frac{\hat{\xi}_{s-}}{c_{s-}} d[N]^c_s +
\int_0^t \frac{1}{L_{s-}} d[\tilde{M}, N]^c_s
\nonumber \\
& + \sum_{s \leq t} \frac{\Delta N_s}{L_{s-} c_s} \left(
\hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s
\right) + \left[ c,
\hat{\xi} \right]_t. \label{eq:dymcxi1}
\end{align}
The quadratic covariation in \eqref{eq:dymcxi1} can be computed as
\begin{align}
\left[ c, \hat{\xi} \right]_t =
& - \int_0^t \frac{1}{L_{s-}} d[\tilde{M}, N]^c_s
- \int_0^t \frac{\hat{\xi}_{s-}}{c_{s-}} d[N]^c_s
\nonumber \\
& - \sum_{s \leq t} \frac{\Delta N_s \Delta \tilde{M}_s}{L_{s-}}
- \sum_{s \leq t} \frac{\hat{\xi}_{s-} (\Delta N_s)^2}{c_{s-}} \nonumber
\\
& - \sum_{s \leq t} \frac{(\Delta N_s)^2}{L_{s-} c_{s-} c_s} \left(
\hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s \right). \label{eq:dymcxi2}
\end{align}
The sums of the jumps in the quadratic covariation in
\eqref{eq:dymcxi2} can be rewritten (using again the identity in
\eqref{eq:jumpMtilde} as well as the fact that $\Delta c_s = -
\Delta N_s$) as
\begin{align*}
& - \sum_{s \leq t} \frac{\Delta N_s}{L_{s-} c_{s-} c_s} \left(
\Delta \tilde{M}_s c_s c_{s-} + \hat{\xi}_{s-} \Delta N_s L_{s-}
c_s + \Delta N_s
\hat{\xi}_s L_s c_{s-} - \Delta N_s \hat{\xi}_{s-} L_{s-} c_s \right) \\
& = - \sum_{s \leq t} \frac{\Delta N_s}{L_{s-} c_s} \left(
\hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s \right).
\end{align*}
With this observation, plugging back the quadratic covariation in
\eqref{eq:dymcxi2} into \eqref{eq:dymcxi1}, we simply get
\begin{equation}
c_t \hat{\xi}_t = c_0 \hat{\xi}_0 - \int_0^t \xi_s \nu_s ds + \int_0^t
\hat{\xi}_{s-} \frac{c_s^2}{\kappa_s} ds + \int_0^t
\frac{c_{s-}}{L_{s-}} d\tilde{M}_s. \label{eq:dymcxi3}
\end{equation}
\emph{Step 3:} Next, we compute the dynamics of
$c\hat{\xi}^2$. Application of integration by parts together
with the dynamics of $\hat{\xi}$ in \eqref{eq:dymxi4} yields
\begin{align*}
\hat{\xi}^2_t = & ~\hat{\xi}^2_0 + 2 \int_0^t \hat{\xi}_{s-}
d\hat{\xi}_s + [\hat{\xi}]_t \\
= & ~\hat{\xi}^2_0 - 2 \int_0^t \hat{\xi}_{s-} (\xi_s - \hat{\xi}_{s-})
\frac{\nu_s}{c_{s-}} ds + 2 \int_0^t
\frac{\hat{\xi}_{s-}}{L_{s-}} d\tilde{M}_s + 2 \int_0^t
\frac{\hat{\xi}^2_{s-}}{c_{s-}} dN_s \\
& + 2 \int_0^t
\frac{\hat{\xi}^2_{s-}}{c^2_{s-}} d[N]^c_s + 2
\int_0^t \frac{\hat{\xi}_{s-}}{L_{s-} c_{s-}} d[
\tilde{M},N]^c_s \\
& + 2 \sum_{s \leq t} \frac{\hat{\xi}_{s-} \Delta N_s}{L_{s-} c_{s-}
c_s} \left( \hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s
\right) + [\hat{\xi}]_t.
\end{align*}
Consequently, using once more integration by parts, we
obtain
\begin{align}
c_t \hat{\xi}^2_t =
& ~c_0 \hat{\xi}^2_0 + \int_0^t c_{s-} d\hat{\xi}^2_s
+ \int_0^t \hat{\xi}^2_{s-} dc_s +
[c,\hat{\xi}^2]_t \nonumber \\
= & ~c_0 \hat{\xi}^2_0 - 2 \int_0^t \hat{\xi}_{s-} (\xi_s - \hat{\xi}_{s-})
\nu_s ds + 2 \int_0^t
\frac{c_{s-} \hat{\xi}_{s-}}{L_{s-}} d\tilde{M}_s + 2 \int_0^t
\hat{\xi}^2_{s-} dN_s \nonumber \\
& + 2 \int_0^t
\frac{\hat{\xi}^2_{s-}}{c_{s-}} d[N]^c_s + 2
\int_0^t \frac{\hat{\xi}_{s-}}{L_{s-}} d[\tilde{M},N]^c_s \nonumber \\
& + 2 \sum_{s \leq t} \frac{\hat{\xi}_{s-} \Delta N_s}{L_{s-}
c_s} \left( \hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s
\right) + \int_0^t c_{s-} d[\hat{\xi}]_s \nonumber \\
& + \int_0^t \hat{\xi}^2_{s-} \frac{c_s^2}{\kappa_s} ds - \int_0^t
\hat{\xi}^2_{s-} \nu_s ds - \int_0^t
\hat{\xi}^2_{s-} dN_s + [c,\hat{\xi}^2]_t.
\label{eq:dymcxisq1}
\end{align}
The final quadratic covariation in \eqref{eq:dymcxisq1} can be
computed as
\begin{align}
[c,\hat{\xi}^2]_t =
& ~ -2 \int_0^t \frac{\hat{\xi}_{s-}}{L_{s-}} d[\tilde{M},N]^c_s
- 2 \sum_{s \leq
t} \frac{\hat{\xi}_{s-}}{L_{s-}} \Delta
\tilde{M}_s \Delta N_s
\nonumber \\
& - 2 \int_0^t \frac{\hat{\xi}^2_{s-}}{c_{s-}} d[N]^c_s - 2 \sum_{s \leq
t} \frac{\hat{\xi}^2_{s-}}{c_{s-}} (\Delta
N_s)^2 \nonumber \\
& - 2 \sum_{s \leq t}
\frac{\hat{\xi}_{s-} (\Delta N_s)^2}{L_{s-} c_{s-}
c_s} \left( \hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s
\right) + \int_0^t \Delta c_s d[\hat{\xi}]_s. \label{eq:dymcxisq2}
\end{align}
Observe that the sum of jumps in \eqref{eq:dymcxisq2} can
be rewritten as
\begin{align*}
& -2 \sum_{s \leq t} \frac{\hat{\xi}_{s-} \Delta N_s}{L_{s-} c_{s-} c_s}
\left( \Delta \tilde{M}_s c_s c_{s-} + \hat{\xi}_{s-} \Delta N_s
c_{s} L_{s-} \right. \\
& \hspace{96pt} \left. + \Delta N_s \hat{\xi}_s L_{s} c_{s-} -
\hat{\xi}_{s-} L_{s-} c_{s} \Delta N_s \right) \\
& = -2 \sum_{s \leq t} \frac{\hat{\xi}_{s-} \Delta N_s}{L_{s-}
c_s} \left( \hat{\xi}_s L_s c_{s-} - \hat{\xi}_{s-} L_{s-} c_s
\right),
\end{align*}
where we used once more the identity in \eqref{eq:jumpMtilde} and
$\Delta c_s = -\Delta N_s$. With this observation, plugging back
\eqref{eq:dymcxisq2} into \eqref{eq:dymcxisq1}, we finally obtain
\begin{align}
c_t \hat{\xi}^2_t
= & ~c_0 \hat{\xi}^2_0 - 2 \int_0^t \hat{\xi}_{s-} \xi_s
\nu_s ds + \int_0^t \hat{\xi}^2_{s-}
\nu_s ds + 2 \int_0^t
\frac{c_{s-} \hat{\xi}_{s-}}{L_{s-}} d\tilde{M}_s \nonumber \\
& + \int_0^t
\hat{\xi}^2_{s-} dN_s + \int_0^t c_s d[\hat{\xi}]_s + \int_0^t
\hat{\xi}^2_{s-} \frac{c_s^2}{\kappa_s} ds.
\label{eq:dymcxisq3}
\end{align}
\emph{Step 4:} Let us now put together all the computations from the
preceding steps. Specifically, let $u$ be a progressively
measurable, $\mathbb{P}$-a.s. locally $L^2([0,T),\kappa_t dt)$-integrable
process with corresponding controlled process $X^u$. Due to our
computations in \eqref{eq:dymcxi3} and \eqref{eq:dymcxisq3} as well
as the fact that $X^u$ is continuous and of finite variation, we get
for all $0 \leq t < T$ that
\begin{align}
& c_t ( X^u_t - \hat{\xi}_t)^2 = c_t (X^u_t)^2 - 2 X^u_t c_t
\hat{\xi}_t + c_t \hat{\xi}^2_t \nonumber \\
& = c_0(x-\hat{\xi}_0)^2 + \int_0^t c_s d[\hat{\xi}]_s -\int_0^t (X^u_{s})^2 \nu_s ds + 2 \int_0^t X^u_s
\nu_s \xi_s ds \nonumber \\
& \hspace{12pt} - 2 \int_0^t
c_{s} u_s (\hat{\xi}_{s} - X^u_s) ds + \int_0^t
\frac{c_s^2}{\kappa_s} (X^u_{s} - \hat{\xi}_{s})^2 ds - 2 \int_0^t \hat{\xi}_s \xi_s \nu_s ds +
\int_0^t \hat{\xi}^2_s \nu_s ds \nonumber \\
& \hspace{12pt} + \int_0^t (\hat{\xi}^2_{s-} - (X^u_{s-})^2) dN_s + 2
\int_0^t \frac{c_{s-}
}{L_{s-}} \left( \hat{\xi}_{s-} - X^u_{s-} \right) d\tilde{M}_s. \label{eq:C1}
\end{align}
Observe that the last two stochastic integrals sum up to $M_t(u)$
defined in~\eqref{eq:M}. Furthermore, two completions of squares in
the third line of \eqref{eq:C1} yield
\begin{align}
& c_t ( X^u_t - \hat{\xi}_t)^2 \nonumber \\
& = c_0(x-\hat{\xi}_0)^2 + \int_0^t
c_s d[\hat{\xi}]_s -\int_0^t (X^u_{s})^2 \nu_s ds + 2 \int_0^t X^u_s
\nu_s \xi_s ds \nonumber \\
& \hspace{12pt} + \int_0^t \kappa_s \left( u_s -
\frac{c_s}{\kappa_s} \left( \hat{\xi}_s - X^u_s \right)
\right)^2 ds + \int_0^t
(\xi_s - \hat{\xi}_s)^2
\nu_s ds \nonumber \\
& \hspace{12pt} - \int_0^t
\kappa_s u^2_s ds - \int_0^t \xi^2_s \nu_s ds + M_t(u) \nonumber \\
&= ~ c_0(x-\hat{\xi}_0)^2 + \int_0^t
c_s d[\hat{\xi}]_s - \int_0^t (X^u_s - \xi_s)^2 \nu_s ds \nonumber \\
& \hspace{12pt} + \int_0^t \kappa_s \left( u_s -
\frac{c_s}{\kappa_s} \left( \hat{\xi}_s - X^u_s \right)
\right)^2 ds + \int_0^t
(\xi_s - \hat{\xi}_s)^2
\nu_s ds \nonumber \\
& \hspace{12pt} - \int_0^t
\kappa_s u^2_s ds +
M_t(u) \nonumber
\end{align}
Consequently, we can write
\begin{align}
0 \leq C_t(u) = & ~\int_0^t (X^u_s - \xi_s)^2 \nu_s ds
+ \int_0^t \kappa_s u^2_s ds + c_t
(X^u_t - \hat{\xi}_t)^2 \nonumber \\
= & ~ c_0(x-\hat{\xi}_0)^2 + \int_0^t
c_s d[\hat{\xi}]_s + \int_0^t
(\xi_s - \hat{\xi}_s)^2
\nu_s ds \nonumber \\
& ~ + \int_0^t \kappa_s \left( u_s -
\frac{c_s}{\kappa_s} \left( \hat{\xi}_s - X^u_s \right)
\right)^2 ds + M_t(u) \nonumber \\
= & ~ c_0(x-\hat{\xi}_0)^2 + A_t(u) + M_t(u) \quad (0 \leq t < T)
\label{eqC3}
\end{align}
with $(A_t(u))_{0 \leq t < T}$ as defined in \eqref{eq:A}. Finally,
observe that the process $(A_t(u))_{0 \leq t < T}$ is a right
continuous, nondecreasing, adapted process and that
$(M_t(u))_{0 \leq t < T}$ is a c\`adl\`ag local martingale because
$\tilde{M}$ and $N$ are local martingales on $[0,T)$ and all
integrands in \eqref{eq:M} are left continuous (cf., e.g.,
\citet{Prot:04}, Theorem III.33). Consequently, we have that
$(C_t(u))_{0 \leq t < T}$ is a nonnegative, c\`adl\`ag local
submartingale.
\end{proof}
We are now ready to give the proof of our main Theorem \ref{thm:main}:
\medskip
\noindent\textbf{Proof of Theorem~\ref{thm:main}:} First, let
us assume that $\mathcal{U}^c \neq \varnothing$. For any $u \in \mathcal{U}^c$ we can
consider the corresponding cost process
$C_t(u) = c_0(x-\hat{\xi}_0)^2 + A_t(u) + M_t(u)$, $0 \leq t < T$, as
in~\eqref{eq:master} of Lemma~\ref{lem:mastereq} above. Let
$(\tau^n)_{n = 1,2,\ldots}$ be a localizing sequence of stopping times
for the local martingale $(M_t(u))_{0 \leq t < T}$ such that
$\tau^n \uparrow T$ $\mathbb{P}$-a.s. strictly from below as
$n \rightarrow \infty$ and $(M_{t \wedge \tau^n}(u))_{0 \leq t < T}$
is a uniformly integrable martingale for each $n$ (cf., e.g.,
\citet{Prot:04}, Chapter I.6, for more details). Then it holds by
definition of our performance functional~$J$ in
\eqref{eq:defobjective} that
\begin{align}
\infty > J(u)
& \triangleq \limsup_{\tau \uparrow T} \mathbb{E}[C_{\tau}(u)] \nonumber \\
& \geq ~c_0(x-\hat{\xi}_0)^2 + \limsup_{n \rightarrow
\infty} \left\{ \mathbb{E}[A_{\tau^n}(u)] + \mathbb{E}[M_{\tau^n}
(u)] \right\} \nonumber \\
& = ~c_0(x-\hat{\xi}_0)^2 \nonumber\\
& \hspace{18pt} + \mathbb{E} \left[ \int_0^T (\xi_s - \hat{\xi}_s)^2 \nu_s ds +
\int_0^T c_s d[\hat{\xi}]_s \right. \nonumber\\
& \hspace{95pt} \left. + \int_0^T \kappa_s \left( u_s -
\frac{c_s}{\kappa_s} \left( \hat{\xi}_s - X^u_s \right)
\right)^2 ds\right] \nonumber \\
& \geq ~c_0(x-\hat{\xi}_0)^2 + \mathbb{E}\left[ \int_0^T
(\xi_s - \hat{\xi}_s)^2 \nu_s ds \right] +
\mathbb{E} \left[
\int_{[0,T)} c_s d[\hat{\xi}]_s \right], \label{eq:lowerbound1}
\end{align}
where we used monotone convergence and applied Doob's Optional
Sampling Theorem as, e.g., in \citet{Prot:04}, Theorem I.16, in order
to get $\mathbb{E}[M_{\tau^n}(u)]=0$ for all $n \geq 1$. In particular,
the computations in \eqref{eq:lowerbound1} show that
\eqref{ass:integrability} necessarily holds true if $\mathcal{U}^c \neq
\varnothing$ (as assumed for now). In other words,
setting
\begin{equation}
v \triangleq c_0(x-\hat{\xi}_0)^2 + \mathbb{E}\left[ \int_0^T
(\xi_s - \hat{\xi}_s)^2 \nu_s ds \right] +
\mathbb{E} \left[
\int_{[0,T)} c_s d[\hat{\xi}]_s \right] < \infty, \label{eq:vfinite}
\end{equation}
we have for all $u \in \mathcal{U}^c$ the lower bound
\begin{equation}
J(u) \geq v. \label{eq:lowerbound2}
\end{equation}
Now, let us define the control $\hat{u}$ with corresponding controlled
process $\hat{X} \triangleq X^{\hat{u}}$ via the feedback law
\begin{equation*}
\hat{u}_t = \frac{c_t}{\kappa_t} (\hat{\xi}_t - \hat{X}_t) \quad (0
\leq t < T).
\end{equation*}
Observe that $\hat{u}$ is a progressively measurable process which is
locally $dt$-integrable and locally $\kappa_t dt$-square-integrable on
$[0,T)$ due to~\eqref{eq:condkappanu}. In particular,
$\hat{X}_T = x + \int_0^T \hat{u}_t dt$ exists $\mathbb{P}$-a.s. and we can
invoke Lemma~\ref{lem:mastereq}. We denote by
$C_t(\hat{u}) = c_0 (x-\hat{\xi}_0)^2 + M_t(\hat{u}) + A_t(\hat{u})$,
$0 \leq t < T$, the corresponding cost process from this lemma. We
will now show that $\hat{u} \in \mathcal{U}^c$ and that $\hat{u}$ attains the
lower bound in \eqref{eq:lowerbound2}, i.e.,
\begin{equation*}
J(\hat{u}) = v
\end{equation*}
finishing our verification argument. Indeed, first note that, by choice
of $\hat{u}$, we have
\begin{equation*}
A_t(\hat{u})=\int_0^t (\xi_s - \hat{\xi}_s)^2 \nu_s ds + \int_0^t
c_s d[\hat{\xi}]_s \quad (0 \leq t < T),
\end{equation*}
whence, in particular,
\begin{equation*}
v = c_0(x-\hat{\xi}_0)^2 + \mathbb{E}[A_{T-}(\hat{u})] < \infty.
\end{equation*}
Next, since $M(\hat{u})$ is a local martingale on $[0,T)$ by virtue of
Lemma \ref{lem:mastereq} above, we can fix a localizing sequence of
stopping times $(\hat{\tau}^n)_{n=1,2,\ldots}$ such that
$\hat{\tau}^n \uparrow T$ $\mathbb{P}$-a.s. strictly from below for
$n \rightarrow \infty$ and such that
$(M_{t \wedge \hat{\tau}^n}(\hat{u}))_{0 \leq t < T}$ is a uniformly
integrable martingale for each $n$. Then, for any stopping time
$\tau < T$, applying Fatou's Lemma and once more
Doob's Optional Sampling Theorem yields
\begin{align*}
\mathbb{E}[C_{\tau}(\hat{u})] =
& ~\mathbb{E} [ \liminf_{n \rightarrow \infty}
C_{\tau \wedge \hat{\tau}^n}(\hat{u})] \leq \liminf_{n \rightarrow \infty}
\mathbb{E}[C_{\tau \wedge \hat{\tau}^n}(\hat{u})] \\
= & ~c_0(x-\hat{\xi}_0)^2 + \liminf_{n \rightarrow \infty} \left\{
\mathbb{E}[A_{\tau \wedge \hat{\tau}^n}(\hat{u})] +
\mathbb{E}[M_{\tau \wedge \hat{\tau}^n}(\hat{u})] \right\} \\
= & ~ c_0 (x-\hat{\xi}_0)^2 + \liminf_{n
\rightarrow \infty} \mathbb{E}[A_{\tau \wedge \hat{\tau}^n}(\hat{u})]
\\
= & ~c_0
(x-\hat{\xi}_0)^2 + \mathbb{E}[A_{\tau}(\hat{u})] \leq c_0
(x-\hat{\xi}_0)^2 + \mathbb{E}[A_{T-}(\hat{u})]= v,
\end{align*}
where we also used monotone convergence as well as the fact that
$(A_t(\hat{u}))_{0 \leq t < T}$ is an increasing process. Hence, it
holds that
\begin{equation}
J(\hat{u}) = \limsup_{\tau \uparrow T} \mathbb{E}[C_{\tau}(\hat{u})] \leq
v < \infty \label{eq:upperbound}
\end{equation}
and thus $\hat{u} \in \mathcal{U}^c$. In particular, due to
\eqref{eq:lowerbound2}, we actually have $J(\hat{u}) = v$ as
desired.
Finally, let us assume that \eqref{ass:integrability} is
satisfied. Then, it follows from \eqref{eq:vfinite} and
\eqref{eq:upperbound} that $\hat{u} \in \mathcal{U}^c$, i.e.,
$\mathcal{U}^c \neq \varnothing$. In other words, condition
\eqref{ass:integrability} is not only necessary but also sufficient
for $\mathcal{U}^c \neq \varnothing$. \qed \medskip
\begin{Remark} \label{rem:timeconsistent}
Let us briefly comment on a few insights offered by the preceding
proof. The argument rests on the key identity~\eqref{eq:master} of
Lemma~\ref{lem:mastereq}. For bounded coefficients
$\kappa, 1/\kappa,\nu,\eta$ and bounded targets $\xi,\Xi_T$, the
theory of BSRDEs readily allows one to deduce that $M(u)$ of
\eqref{eq:M} is a true martingale for any control $u$ with finite
expected costs; see, e.g., \citet{KohlmannTang:02}. From the key
identity~\eqref{eq:master} it then transpires that the control
$\hat{u}^c$ of~\eqref{eq:optimalcontrol} minimizes
\begin{displaymath}
\mathbb{E} \left[ \int_0^\tau (X^u_t - \xi_t)^2 \nu_t dt + \int_0^\tau
\kappa_t u^2_t dt + c_\tau (X^u_\tau - \hat{\xi}^c_\tau)^2
\right]
\end{displaymath}
\emph{simultaneously} for all stopping times $\tau \leq T$. In that
sense, the above terminal penalizations
$c_\tau (X^u_\tau - \hat{\xi}^c_\tau)^2$ are consistent replacements
for $\eta(X^u_T-\Xi_T)^2$ for these problems with shorter time
horizons.
When coefficients are unbounded, particularly when
$\mathbb{P}[\eta=+\infty]>0$, it is quite possible that $M(u)$ is a strict
local martingale and so the preceding argument breaks down. Still,
being bounded from below by an integrable random variable under
condition~\eqref{ass:integrability}, $M(u)$ is a supermartingale;
its martingale property thus turns out to hinge on the control in
$L^1(\mathbb{P})$ of $c_{\tau^n} (X^u_{\tau^n} - \hat{\xi}^c_{\tau^n})^2$
along a suitable sequence of stopping times $\tau^n \uparrow
T$. This control does not seem to be available in the BSRDE
literature at present, leaving our conjecture~\eqref{eq:conjecture}
still open at this point.
Observe, though, that, taking the $\limsup_{\tau \uparrow T}$ of the
above expectations, the formulation in our auxiliary target
functional $J^c(u)$ of \eqref{eq:defobjective} avoids these issues
and thus allows us to solve at least these closely related auxiliary
problems.
\end{Remark}
The final lemma justifies the interpretation in Remark \ref{rem:osp}:
\begin{Lemma} \label{lem:remark}
Let us assume that $\lim_{t \uparrow T} L_t = L_T$ in $L^1(\mathbb{P})$ and
that the local c\`adl\`ag martingale $(N_t)_{0 \leq t < T}$
in~\eqref{eq:BSRDE} satisfies $\mathbb{E}[[N]^{1/2}_t] < \infty$ for all
$0 \leq t < T$. Then we have the representation
\begin{equation} \label{eq:repc1}
c_t = \mathbb{E} \left[ L_T e^{\int_0^t\frac{c_u}{\kappa_u} du} + \int_t^T e^{-\int_t^r
\frac{c_u}{\kappa_u}du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t
\right] \quad (0 \leq t < T).
\end{equation}
Moreover, on $\{ \mathbb{P}[\int_t^T \nu_r dr = 0 \; \vert \; \mathcal{F}_t] < 1\}$ we have the identity
\begin{equation} \label{eq:repw1}
\mathbb{E} \left[ \int_t^T \frac{e^{-\int_t^r \frac{c_u}{\kappa_u}du}}{(1-w_t)c_t}
\nu_r dr \bigg\vert \mathcal{F}_t \right] = 1,
\end{equation}
and the weight process $w_t = \mathbb{E}[L_T \vert \mathcal{F}_t] / L_t$
of~\eqref{eq:weight} satisfies
$0 \leq w_t < 1$.
\end{Lemma}
\begin{proof}
Recall the dynamics of the process $(L_t)_{0 \leq t < T}$ in
\eqref{eq:SDEL}, i.e.,
\begin{equation*}
L_t = c_0 - \int_0^t e^{-\int_0^r \frac{c_u}{\kappa_u}du} \nu_r dr
- \int_0^t e^{-\int_0^r \frac{c_u}{\kappa_u}du}
dN_r \quad (0 \leq t < T).
\end{equation*}
Hence, for all $0 \leq t \leq s < T$ we may write
\begin{equation}
L_s - L_t = -\int_t^s e^{-\int_0^r \frac{c_u}{\kappa_u}du} \nu_r dr
- \int_t^s e^{-\int_0^r \frac{c_u}{\kappa_u}du} dN_r. \label{eq:repc2}
\end{equation}
Observe that the stochastic integral in \eqref{eq:repc2} is a
martingale on $[0,T)$ by our integrability assumption on
$[N]^{1/2}_t$. Thus, taking conditional expectations
in~\eqref{eq:repc2} yields
\begin{equation}
\mathbb{E}[ L_s \vert \mathcal{F}_t] - L_t = - \mathbb{E} \left[\int_t^s e^{-\int_0^r
\frac{c_u}{\kappa_u}du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t
\right] \quad (0 \leq t \leq s < T). \label{eq:repc3}
\end{equation}
Passing to the limit $s \uparrow T$ in \eqref{eq:repc3} we obtain,
due to monotone convergence and due to the assumption that $L_s$
converges in $L^1(\mathbb{P})$ to $L_T$, the representation
\begin{equation}
L_t = \mathbb{E} \left[ L_T + \int_t^T e^{-\int_0^r
\frac{c_u}{\kappa_u}du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t
\right] \quad (0 \leq t < T). \label{eq:repc4}
\end{equation}
In other words, using that
$L_t=c_t e^{-\int_0^t \frac{c_u}{\kappa_u} du}$, we can write
\begin{equation*}
c_t = \mathbb{E} \left[ L_T e^{\int_0^t\frac{c_u}{\kappa_u} du} + \int_t^T e^{-\int_t^r
\frac{c_u}{\kappa_u}du} \nu_r dr \, \bigg\vert \, \mathcal{F}_t
\right] \quad (0 \leq t < T)
\end{equation*}
as desired for \eqref{eq:repc1}. Finally, by definition of the
weight process $(w_t)_{0 \leq t < T}$ in~\eqref{eq:weight} together
with the identity in~\eqref{eq:repc1}, we can write
\begin{align}
w_t
& = \frac{\mathbb{E}[L_T \vert \mathcal{F}_t]}{L_t}
= \frac{ e^{\int_0^t \frac{c_u}{\kappa_u}
du}}{c_t} \mathbb{E} [ L_T \,\vert \, \mathcal{F}_t] =
\frac{1}{c_t} \mathbb{E} \left[ e^{\int_0^t \frac{c_u}{\kappa_u}
du} L_T \,\Big\vert \, \mathcal{F}_t \right] \nonumber \\
& = \frac{1}{c_t}
\left( c_t - \mathbb{E} \left[ \int_t^T e^{-\int_t^r \frac{c_u}{\kappa_u} du}
\nu_r dr \, \Big\vert \, \mathcal{F}_t \right] \right) \nonumber
\\
& = 1 - \frac{1}{c_t} \mathbb{E} \left[ \int_t^T e^{-\int_t^r \frac{c_u}{\kappa_u} du}
\nu_r dr \, \Big\vert \, \mathcal{F}_t \right] \quad
\text{for all } 0 \leq t < T, \label{eq:repw2}
\end{align}
Rearranging \eqref{eq:repw2} gives
$(1-w_t) c_t = \mathbb{E} [ \int_t^T e^{-\int_t^r \frac{c_u}{\kappa_u} du} \nu_r dr \, \vert \, \mathcal{F}_t ]$;
since this quantity is $\mathcal{F}_t$-measurable and strictly positive on
$\{ \mathbb{P}[\int_t^T \nu_r dr = 0 \; \vert \; \mathcal{F}_t] < 1\}$, we may divide by it
inside the conditional expectation, which yields our claim~\eqref{eq:repw1}. In particular,
representation~\eqref{eq:repw2} also reveals that $0 \leq w_t < 1$
$\mathbb{P}$-a.s. on
$\{ \mathbb{P}[\int_t^T \nu_r dr = 0 \; \vert \; \mathcal{F}_t] < 1\}$ for all
$0 \leq t < T$.
\end{proof}
\section*{Appendix}
In this appendix, we collect some results on the BSRDE
in~\eqref{eq:BSRDE} with terminal condition~\eqref{eq:BSRDEtc} which
may be of independent interest for the theory of BSDEs. First, let us
provide lower estimates for a minimal supersolution $c^{\min}$ to the
Riccati BSDE in~\eqref{eq:BSRDE} with terminal
condition~\eqref{eq:BSRDEtc}.
\begin{Lemma} \label{app:lem:lbound}
Let $(\nu_t)_{0 \leq t \leq T}$, $(\kappa_t)_{0 \leq t \leq T}$
satisfy~\eqref{eq:condkappanu} and let $c^{\min}$ denote a minimal
supersolution to~\eqref{eq:BSRDE} with terminal
condition~\eqref{eq:BSRDEtc}. Then for all $t \in [0, T)$ we have
\begin{equation} \label{app:lbound}
c^{\min}_t \geq \mathbb{E} \left[
\frac{1}{\int_t^T \frac{1}{\kappa_s} ds + \frac{1}{\eta}} \,
\bigg\vert \, \mathcal{F}_t \right] \geq 0 \quad \mathbb{P}\text{-a.s.}
\end{equation}
with strict inequality holding true in the first estimate on
$\{ \mathbb{P}[\int_t^T \nu_s ds = 0\vert \mathcal{F}_t] < 1 \}$ and strict
inequality in the second estimate on
$\{ \mathbb{P}[\eta = 0 \vert \mathcal{F}_t] < 1\}$. In particular, any
supersolution $c$ of~\eqref{eq:BSRDE} and~\eqref{eq:BSRDEtc} will be
strictly positive throughout $[0,T)$ if~\eqref{ass:etakappa} holds true.
\end{Lemma}
\begin{proof}
We will adopt the same idea as in the proof of Lemma 11
in~\citet{Popier:06} in the case $\kappa \equiv 1$ (and
$\nu \equiv 0$). For all $n \geq 1$ we define the
processes
\begin{equation*}
\Gamma^n_t \triangleq \mathbb{E} \left[ \frac{1}{\int_t^T \frac{1}{\kappa_s} ds +
\frac{1}{\eta \wedge n}}
\, \bigg\vert \, \mathcal{F}_t \right] \quad (0 \leq t \leq T).
\end{equation*}
Note that $\Gamma^n$ is well defined because the term in the
conditional expectation is bounded by $n$. Moreover, we have
pathwise the identity
\begin{equation*}
\frac{1}{\int_t^T \frac{1}{\kappa_s} ds +
\frac{1}{\eta \wedge n}} = \eta \wedge n - \int_t^T
\frac{1}{\kappa_s} \left( \frac{1}{\int_s^T \frac{1}{\kappa_u} du +
\frac{1}{\eta \wedge n}} \right)^2 ds \quad (0 \leq t \leq T).
\end{equation*}
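Indeed, writing for brevity $F_n(t) \triangleq \int_t^T \frac{1}{\kappa_s} ds + \frac{1}{\eta \wedge n}$, we have
\begin{equation*}
\frac{d}{dt} \frac{1}{F_n(t)} = -\frac{F_n^\prime(t)}{F_n(t)^2} = \frac{1}{\kappa_t} \frac{1}{F_n(t)^2},
\end{equation*}
and integrating this from $t$ to $T$ together with $F_n(T) = \frac{1}{\eta \wedge n}$ gives the above identity.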
Thus, the process $\Gamma^n$ verifies
\begin{align*}
\Gamma^n_t
& = \mathbb{E} \left[ \eta \wedge n - \int_t^T
\frac{1}{\kappa_s} \left( \frac{1}{\int_s^T \frac{1}{\kappa_u} du +
\frac{1}{\eta \wedge n}} \right)^2 ds \, \Bigg\vert \, \mathcal{F}_t
\right] \\
& = \mathbb{E} \left[ \eta \wedge n - \int_t^T \frac{1}{\kappa_s}
\left( (\Gamma_s^n)^2 + U^n_s \right) ds \, \Big\vert \, \mathcal{F}_t
\right] \quad (0 \leq t \leq T)
\end{align*}
with adapted process $U^n$ given by
\begin{equation*}
U^n_s \triangleq \mathbb{E} \left[ \left( \frac{1}{ \int_s^T \frac{1}{\kappa_u} du +
\frac{1}{\eta \wedge n} } \right)^2
\, \Bigg\vert \, \mathcal{F}_s \right] - (\Gamma^n_s)^2 \quad (0 \leq s
\leq T).
\end{equation*}
Observe that
\begin{equation*}
d\Gamma^n_t = \left( \frac{(\Gamma_t^n)^2}{\kappa_t} +
\frac{U^n_t}{\kappa_t} \right) dt + dM^n_t, \quad \Gamma^n_T = \eta
\wedge n,
\end{equation*}
for some c\`adl\`ag local martingale $(M^n_t)_{0 \leq t \leq
T}$. Moreover, since $U^n_t \geq 0$ for all $0 \leq t \leq T$ due
to Jensen's inequality, we have
\begin{equation*}
-\frac{y^2}{\kappa_t} - \frac{U^n_t}{\kappa_t} \leq
-\frac{y^2}{\kappa_t} \leq -\frac{y^2}{\kappa_t} + \nu_t \quad (y
\in \mathbb{R}).
\end{equation*}
Thus, classical comparison results as in \citet{KrusePopier:16_2}, Proposition
4, together with the construction of the minimal supersolution
$(c^{\min}_t)_{0 \leq t < T}$ via a truncation procedure in
\cite{KrusePopier:16_1}, finally yields that for all $t \in [0, T)$
we have
\begin{equation*}
c^{\min}_t \geq \mathbb{E} \left[ \frac{1}{\int_t^T \frac{1}{\kappa_s} ds +
\frac{1}{\eta \wedge n}}
\, \bigg\vert \, \mathcal{F}_t \right] \geq 0\quad \mathbb{P}\text{-a.s.}
\end{equation*}
In fact on $\{ \mathbb{P}[\int_t^T \nu_s ds = 0\vert \mathcal{F}_t] < 1 \}$
comparison is strict in the first of these estimates. Moreover,
letting $n \rightarrow \infty$ we conclude~\eqref{app:lbound} where
``$>0$'' holds on $\{ \mathbb{P}[\eta = 0 \vert \mathcal{F}_t] < 1\}$.
\end{proof}
Finally, let us briefly discuss the integrability condition
in~\eqref{eq:BSRDEintcond2} for the minimal supersolution
$(c^{\min}_t)_{0 \leq t < T}$ with Riccati dynamics~\eqref{eq:BSRDE}
satisfying the terminal condition~\eqref{eq:BSRDEtc}. This condition
is not regularly discussed in the BSRDE literature and thus calls for
a verification in some sufficiently generic setting. So let us place
ourselves in the context of \citet{KrusePopier:16_1} and therein
restrict ourselves to a Brownian framework. It follows from
Proposition 3 and Remark 4 as well as Corollary~1 in
\cite{KrusePopier:16_1} with $p=2$ that for any $t \in [0,T)$ we have
the upper estimates
\begin{equation} \label{app:ubound}
c^{\min}_t \leq \frac{1}{(T-t)^2} \mathbb{E} \left[ \int_t^T (\kappa_s + (T-s)^2
\nu_s) ds \, \bigg\vert \, \mathcal{F}_t \right] \quad \mathbb{P}\text{-a.s.}
\end{equation}
In addition to that, observe that also the lower estimates derived in
Lemma~\ref{app:lem:lbound} hold true.
For simplicity, let us further confine ourselves to the following
additional assumptions on $(\nu_t)_{0 \leq t \leq T}$,
$(\kappa_t)_{0 \leq t \leq T}$ and $\eta$: We assume that the process
$(\kappa_t)_{0 \leq t \leq T}$ is bounded from below and above, i.e.,
it holds that
\begin{equation} \label{app:condkappa}
0 < k \leq \kappa_t \leq K <
\infty \quad (0 \leq t \leq T)
\end{equation}
for some constants $k,K \in \mathbb{R}$. Moreover, we assume that $\nu \in
L^1(\mathbb{P} \otimes dt)$ with
\begin{equation} \label{app:condnu2}
\frac{1}{T-t} \mathbb{E} \left[ \int_t^T (T-s)^2 \nu_s ds \, \bigg\vert \,
\mathcal{F}_t \right] \leq C \qquad (0 \leq t < T)
\end{equation}
for some constant $C < \infty$. Finally, we assume that there exists a
constant $\varepsilon > 0$ such that
\begin{equation} \label{app:condeta}
\mathbb{P} \left[ \varepsilon \leq \eta \leq + \infty \right] = 1.
\end{equation}
Observe that condition~\eqref{app:condeta} implies in particular that
$c_t>0$ $\mathbb{P}$-a.s. for all $t \in [0,T]$ by virtue of
Lemma~\ref{app:lem:lbound}.
\begin{Lemma}
Under the conditions \eqref{app:condkappa}, \eqref{app:condnu2}, and
\eqref{app:condeta} the minimal supersolution
$c \triangleq (c^{\min}_t)_{0 \leq t < T}$ to the BSRDE
in~\eqref{eq:BSRDE} on $[0,T)$ with terminal
condition~\eqref{eq:BSRDEtc} satisfies
\begin{equation*}
\int_0^T \frac{d\langle c\rangle_t}{c_t^2} < \infty \quad \text{on
the set } \{ \eta = + \infty \},
\end{equation*}
i.e., condition \eqref{eq:BSRDEintcond2} holds true.
\end{Lemma}
\begin{proof}
We extend the proof of Proposition 10 in \citet{Popier:06} done for
the specific case $\kappa \equiv 1$ and $\nu \equiv 0$ to our more
general setting by using the upper and lower bounds of the process
$(c_t)_{0 \leq t < T}$ in \eqref{app:ubound} and
\eqref{app:lbound}. First, note that conditions
\eqref{app:condkappa} and \eqref{app:condeta} imply for the lower
bound in \eqref{app:lbound} that
\begin{equation} \label{app:proof:lp}
c_t \geq \frac{k
\varepsilon}{(T-t) \varepsilon + k} \quad (0 \leq t < T).
\end{equation}
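This follows since
\begin{equation*}
\int_t^T \frac{1}{\kappa_s} ds + \frac{1}{\eta} \leq \frac{T-t}{k} + \frac{1}{\varepsilon}
= \frac{(T-t)\varepsilon + k}{k \varepsilon} \quad (0 \leq t < T),
\end{equation*}
so that the conditional expectation in \eqref{app:lbound} is bounded from below by the deterministic right-hand side of \eqref{app:proof:lp}.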
Concerning the upper bound in \eqref{app:ubound}, we obtain due to
\eqref{app:condkappa} and \eqref{app:condnu2}
\begin{equation} \label{app:proof:up}
c_t \leq \frac{K + \text{const}}{T-t} \quad (0 \leq t < T).
\end{equation}
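More explicitly, combining \eqref{app:ubound} with \eqref{app:condkappa} and \eqref{app:condnu2} yields
\begin{equation*}
c_t \leq \frac{1}{(T-t)^2} \left( (T-t) K + \mathbb{E} \left[ \int_t^T (T-s)^2 \nu_s ds \, \bigg\vert \, \mathcal{F}_t \right] \right) \leq \frac{K + C}{T-t} \quad (0 \leq t < T),
\end{equation*}
so the constant in \eqref{app:proof:up} can be taken to be the constant $C$ from \eqref{app:condnu2}.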
Since the process $c$ is bounded away from zero on $[0,T]$ by \eqref{app:proof:lp} and \eqref{app:condeta}, we can apply
It\^o's formula on $[0,T-\delta]$ for some $0 < \delta < T$ to the
process $\sqrt{(T-t) c_t}$. Using the BSRDE dynamics of $c$ in
\eqref{eq:BSRDE}, we obtain
\begin{align*}
0 & \leq \sqrt{(T-t) c_t} \\
& = \sqrt{T c_0} + \int_0^t \left( \frac{\sqrt{T-s}}{2
\sqrt{c_s}} \left(
\frac{c_s^2}{\kappa_s} - \nu_s \right)
- \frac{\sqrt{c_s}}{2\sqrt{T-s}} \right) ds \nonumber \\
& \hspace{12pt}
-\frac{1}{8} \int_0^t \frac{\sqrt{T-s}}{c_s^{3/2}} d\langle c
\rangle_s - \frac{1}{2} \int_0^t \frac{\sqrt{T-s}}{\sqrt{c_s}} dN_s \\
& = \sqrt{T c_0} + \frac{1}{2} \int_0^t \sqrt{T-s}
\frac{\sqrt{c_s}}{\kappa_s} \left( c_s - \frac{\nu_s \kappa_s}{c_s}
- \frac{\kappa_s}{T-s} \right) ds \nonumber \\
& \hspace{12pt}
-\frac{1}{8} \int_0^t \frac{\sqrt{T-s}}{c_s^{3/2}} d\langle c
\rangle_s - \frac{1}{2} \int_0^t \frac{\sqrt{T-s}}{\sqrt{c_s}} dN_s
\qquad (0 \leq t \leq T-\delta)
\end{align*}
and hence
\begin{align}
& \frac{1}{8} \int_0^{T-\delta} \frac{\sqrt{T-s}}{c_s^{3/2}} d\langle c
\rangle_s + \frac{1}{2} \int_0^{T-\delta} \frac{\sqrt{T-s}}{\sqrt{c_s}} dN_s
\nonumber \\
& \leq \sqrt{T c_0} + \frac{1}{2} \int_0^{T-\delta} \sqrt{T-s}
\frac{\sqrt{c_s}}{\kappa_s} \left( c_s - \frac{\nu_s \kappa_s}{c_s}
- \frac{\kappa_s}{T-s} \right) ds \label{app:proof:ito}
\end{align}
for all $0 < \delta < T$. Observe that due to the bounds on $c$ in
\eqref{app:proof:lp} and \eqref{app:proof:up} and $\kappa$ in
\eqref{app:condkappa} as well as the integrability assumption on
$\nu$, i.e., $\nu \in L^1(\mathbb{P} \otimes dt)$, it holds for all $0 < \delta < T$ that
\begin{align*}
& \mathbb{E} \left[ \int_0^{T-\delta} \sqrt{T-s}
\frac{\sqrt{c_s}}{\kappa_s} \left\vert c_s - \frac{\nu_s \kappa_s}{c_s}
- \frac{\kappa_s}{T-s} \right\vert ds \right] \\
& \leq \text{const} \, \mathbb{E} \left[ \int_0^{T-\delta} \left\vert c_s
- \frac{\nu_s \kappa_s}{c_s} - \frac{\kappa_s}{T-s} \right\vert ds
\right] \\
& \leq \text{const} \left( \, \mathbb{E} \left[ \int_0^{T-\delta} c_s ds
\right] + \mathbb{E} \left[ \int_0^{T-\delta} \frac{\nu_s
\kappa_s}{c_s} ds \right] +
\mathbb{E} \left[ \int_0^{T-\delta} \frac{\kappa_s}{T-s} ds
\right] \right) < \infty.
\end{align*}
Hence, by using the upper bound on $c$ in \eqref{app:ubound} and
Fubini's Theorem, we
can compute
\begin{align}
& \mathbb{E} \left[ \int_0^{T-\delta} \left( c_s - \frac{\nu_s \kappa_s}{c_s}
- \frac{\kappa_s}{T-s} \right) ds \right]
\leq \mathbb{E} \left[ \int_0^{T-\delta} \left( c_s
- \frac{\kappa_s}{T-s} \right) ds \right] \nonumber \\
& \leq \mathbb{E} \left[ \int_0^{T-\delta} \left( \frac{1}{(T-s)^2}
\mathbb{E} \left[ \int_s^T (\kappa_u + (T-u)^2
\nu_u) du \, \bigg\vert \, \mathcal{F}_s \right]
- \frac{\kappa_s}{T-s} \right) ds \right] \nonumber \\
& \leq \mathbb{E} \left[ \int_0^{T-\delta} \frac{1}{(T-s)^2} \left( \int_s^T
\kappa_u du \right) ds - \int_0^{T-\delta} \frac{\kappa_s}{T-s}
ds \right] \nonumber \\
& \hspace{15pt} +
\mathbb{E} \left[ \int_0^{T-\delta} \frac{1}{(T-s)^2} \left( \int_s^T
(T-u)^2 \nu_u du \right) ds \right]. \label{app:proof:exp}
\end{align}
Using once more Fubini's Theorem and the fact that $\kappa_t \leq K$
for all $0 \leq t \leq T$, we get for the first expectation in
\eqref{app:proof:exp} the estimate
\begin{align}
& \mathbb{E} \left[ \int_0^{T-\delta} \frac{1}{(T-s)^2} \left( \int_s^T
\kappa_u du \right) ds - \int_0^{T-\delta} \frac{\kappa_s}{T-s} ds
\right] \nonumber \\
& = \mathbb{E} \left[ \int_0^{T-\delta} \frac{\kappa_u}{T-u} du +
\int_{T-\delta}^T \frac{\kappa_u}{\delta} du - \frac{1}{T} \int_0^T
\kappa_u du - \int_0^{T-\delta} \frac{\kappa_s}{T-s} ds \right]
\nonumber \\
& \leq K. \label{app:proof:exp1}
\end{align}
Concerning the second expectation in \eqref{app:proof:exp},
application of Fubini's Theorem yields
\begin{align}
& \mathbb{E} \left[ \int_0^{T-\delta} \frac{1}{(T-s)^2} \left( \int_s^T
(T-u)^2 \nu_u du \right) ds \right] \nonumber \\
& \leq \mathbb{E} \left[ \int_0^{T-\delta} (T-u) \nu_u du + \delta \int_{T -
\delta}^T \nu_u du \right]. \label{app:proof:exp2}
\end{align}
Consequently, taking expectation in \eqref{app:proof:ito} and using
that the stochastic integral with respect to $N$ in
\eqref{app:proof:ito} is a true martingale on $[0,T-\delta]$ due to
\eqref{app:proof:lp} and \eqref{app:condnu2}, we obtain
together with the estimates in \eqref{app:proof:exp1} and
\eqref{app:proof:exp2} the upper bound
\begin{align*}
& \frac{1}{8} \mathbb{E} \left[ \int_0^{T-\delta}
\frac{\sqrt{T-s}}{c_s^{3/2}} d\langle c
\rangle_s \right] \\
& \leq \sqrt{T c_0} + \text{const} \,\left( K +
\mathbb{E} \left[ \int_0^{T-\delta} (T-u) \nu_u du + \delta \int_{T -
\delta}^T \nu_u du \right] \right).
\end{align*}
Passing to the limit $\delta \downarrow 0$ we get with monotone
convergence
\begin{align}
& \mathbb{E} \left[ \int_0^T \frac{\sqrt{T-s}}{c_s^{3/2}} d\langle c
\rangle_s \right] \nonumber \\
& \leq 8 \left( \sqrt{T c_0} + \text{const} \,\left( K +
\mathbb{E} \left[ \int_0^T (T-u) \nu_u du \right] \right) \right) <
\infty,
\label{app:proof:intbound}
\end{align}
due to $\nu \in L^1(\mathbb{P} \otimes dt)$. Now, using \eqref{app:lbound}, observe
that we can further estimate the process $(c_t)_{0 \leq t < T}$
from below by
\begin{align*}
c_s & \geq \mathbb{E} \left[ \frac{1}{\int_s^T \frac{1}{\kappa_u} du
+ \frac{1}{\eta}}
\, \bigg\vert \, \mathcal{F}_s \right] \geq
\mathbb{E} \left[ \frac{1}{\int_s^T \frac{1}{\kappa_u} du +
\frac{1}{\eta}} 1_{\{ \eta \, = \, + \infty \}}
\, \bigg\vert \, \mathcal{F}_s \right] \\
& = \mathbb{E} \left[ \frac{1}{\int_s^T \frac{1}{\kappa_u} du } 1_{\{ \eta
\, = \, + \infty \}}
\, \bigg\vert \, \mathcal{F}_s \right] \geq \frac{k}{T-s} \mathbb{E} \left[ 1_{\{
\eta \, = \, + \infty \}}
\, \vert \, \mathcal{F}_s \right].
\end{align*}
Plugging back this lower bound into the left hand side of
\eqref{app:proof:intbound} and using optional projection, we get
\begin{align*}
\infty & > \mathbb{E} \left[ \int_0^T \frac{\sqrt{T-s}}{c_s^{3/2}} d\langle c
\rangle_s \right] = \mathbb{E} \left[ \int_0^T
\frac{\sqrt{T-s}}{c_s^2} \sqrt{c_s} \, d\langle c
\rangle_s \right] \\
& \geq \sqrt{k} \, \mathbb{E} \left[ \int_0^T
\frac{1}{c_s^2} \mathbb{E} \left[ 1_{\{
\eta \, = \, + \infty \}}
\, \vert \, \mathcal{F}_s \right] \, d\langle c
\rangle_s \right] = \sqrt{k} \, \mathbb{E} \left[ \int_0^T
\frac{1}{c_s^2} 1_{\{\eta \, = \, + \infty \}} \, d\langle c
\rangle_s \right] \\
& = \sqrt{k} \, \mathbb{E} \left[ 1_{\{\eta \, = \, +
\infty \}} \left(\int_0^T
\frac{1}{c_s^2} \, d\langle c
\rangle_s \right) \right],
\end{align*}
which yields the desired result.
\end{proof}
\bibliographystyle{plainnat}
\section{Introduction}
\label{sec:Intoroduction}
The gradient flow has achieved great success in lattice field theory\cite{Luscher:2010iy, Luscher:2011bx},
and there are many applications such
as non-perturbative renormalization group\cite{Yamamura:2015kva, Makino:2018rys, Abe:2018zdc, Carosso:2018bmz, Sonoda:2019ibh, Sonoda:2020vut, Miyakawa:2021hcx, Miyakawa:2021wus, Miyakawa:2022qbz, Sonoda:2022fmk, Hasenfratz:2022wll, Abe:2022smm},
a holographic description of field theory\cite{Aoki:2015dla, Aoki:2016ohw, Aoki:2017uce, Aoki:2017bru, Aoki:2018dmc, Aoki:2019bfb, Aoki:2022lye},
O($N$) nonlinear sigma model and large $N$ expansion\cite{Makino:2014cxa, Aoki:2014dxa, Makino:2014sta, Aoki:2016env}, supersymmetric theory\cite{Nakazawa:2003zf, Nakazawa:2003tz, Kikuchi:2014rla, Aoki:2017iwi, Kadoh:2018qwg, Kadoh:2019glu, Kadoh:2019flv, Bergner:2019dim, Hieda:2017sqq, Kasai:2018koz},
and phenomenological physics to obtain the bounce solution or
sphaleron fields configuration\cite{Chigusa:2019wxb, Sato:2019axv, Hamada:2020rnp, Ho:2019ads}.
Further studies of the gradient flows
could provide a deep understanding of field theories\cite{Fujikawa:2016qis,Morikawa:2018fek}.
In the Yang-Mills flow, any correlator of the flowed field is
ultraviolet (UV) finite
at positive flow time
if the four-dimensional Yang-Mills
theory is properly renormalized.
In the case of QCD, a similar property holds once an extra field strength renormalization
is introduced for the flowed quarks\cite{Luscher:2013cpa}.
This property is a key ingredient of the flow approach,
and physical quantities that are difficult to define exactly on the lattice
can be studied by lattice simulations with the flows\cite{Suzuki:2013gza, Makino:2014taa, Asakawa:2013laa, Taniguchi:2016ofw, Kitazawa:2017qab, Yanagihara:2018qqg, Harlander:2018zpi, Iritani:2018idk}.
Such UV finiteness of the gradient flow, however, does not hold
for scalar field theory in general\cite{Capponi:2015ucc}.
The interacting flow
has non-removable divergences, and
an extra field strength renormalization remains necessary
even for the massless free flow.\footnote{
The flow equation is given only by the gradient of the massless free part of the action,
while the scalar field theory at $t=0$ still has interaction terms.
The initial condition is given by a bare scalar field.}
This seems to suggest that
the gauge symmetry or other symmetries are necessary in realizing
the UV finiteness of the interacting gradient flow.
Supersymmetric gradient flow is another possibility of realizing the UV finiteness.
The supersymmetric flows are constructed
for the super Yang-Mills (SYM) in Refs.\cite{Kikuchi:2014rla, Kadoh:2018qwg} and for the super QCD in Ref.\cite{Kadoh:2019flv}.
In Ref.\cite{Kadoh:2019glu}, we also constructed
a SUSY flow in the
Wess-Zumino model, which is referred to as Wess-Zumino flow in this paper.
The Wess-Zumino flow is the simplest supersymmetric extension of the gradient flow,
and gives a good testing ground
in investigating the influence of supersymmetry on the flow approach.
In this paper,
we show that any correlation function of chiral superfields obtained from the Wess-Zumino flow is
UV finite at positive flow time to all orders of perturbation theory.
Since the model does not have gauge symmetry, the mechanism realizing the
UV finiteness is quite different from that of the Yang-Mills flow.
As we will see later, it is a direct consequence of supersymmetry, in particular of the non-renormalization
theorem of the Wess-Zumino model.
To show this, we first introduce a method of defining a Wess-Zumino flow
with renormalization-invariant couplings.
We also give a renormalization-invariant initial condition.
These renormalization invariances are immediately shown
from the non-renormalization theorem.
The perturbative calculation of the Wess-Zumino flow is carried out using
an iterative expansion of the flow equation and the ordinary perturbation theory
for the boundary Wess-Zumino model.
Since the initial condition depends on the coupling constant,
the order of the perturbative expansion is given by a fractional power $g^{2/3}$.
The super-Feynman rules for one-particle irreducible (1PI) supergraphs are then derived.
Using the power counting theorem based on the super-Feynman rule,
the UV finiteness of the Wess-Zumino flow is established.
The rest of this paper is arranged as follows:
In Sec.\ref{phi-four}, we consider the gradient flow of the $\phi^4$ scalar field theory.
In Sec.\ref{sec:WZ_review}, we review a perturbation theory in the Wess-Zumino model as a supersymmetric extension of $\phi^4$ scalar field theory.
In Sec.\ref{sec:WZ_flow}, we construct the Wess-Zumino flow with renormalization-invariant couplings according to Ref.\cite{Kadoh:2019glu} with some modifications.
With the super-Feynman rules for correlation functions derived from the iterative expansion of the flow equation,
we show that the Wess-Zumino flow is UV finite using the power counting theorem. Sec.\ref{sec:summary} is devoted to summarizing the results.
\section{The case of $\phi^4$ theory}
\label{phi-four}
Let $t \ge 0$ be a flow time and $\varphi(t,x)$ be a $t$-dependent field.
We consider a gradient flow equation of Euclidean $\phi^4 $ theory as
\begin{align}
\frac{\partial\varphi(t,x)}{\partial t} = (\Box -m^2)\varphi (t,x) - \lambda \varphi^3(t,x)
\label{scalar_flow}
\end{align}
with an initial condition,
\begin{align}
\varphi(t=0,x)=\phi(x)\label{initial2},
\end{align}
where $\Box = \partial_\mu \partial_\mu$.
As the name suggests,
the rhs of Eq.~(\ref{scalar_flow})
is $-\delta S/\delta \phi|_{\phi \rightarrow \varphi}$
where
\begin{align}
S=\int d^4 x \, \left\{ \frac{1}{2}(\partial_\mu \phi)^2
+ \frac{m^2}{2}\phi^2+\frac{\lambda}{4}\phi^4
\right\}(x)
\label{scalar_action}
\end{align}
with a bare mass $m$ and a bare coupling constant $\lambda$.
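Explicitly, the functional derivative of the action \eqref{scalar_action} reads
\begin{align*}
\frac{\delta S}{\delta \phi(x)} = -\Box \phi(x) + m^2 \phi(x) + \lambda \phi^3(x),
\end{align*}
so that $-\delta S/\delta \phi|_{\phi \rightarrow \varphi}$ indeed reproduces the right-hand side of Eq.~(\ref{scalar_flow}).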
In this setup, the scalar theory Eq.~(\ref{scalar_action}) is put on the boundary ($t=0$).
In the Yang-Mills flow, it is shown that correlation functions at positive flow time are
UV finite under the initial condition $B_\mu(t=0,x)=A_\mu(x)$ where
$A_\mu(x)$ is a bare field irrelevant to a renormalization scheme.
This property plays a crucial role in matching two different schemes that are used
for calculating nontrivial renormalizations for operators\cite{Luscher:2010iy, Luscher:2013cpa, Suzuki:2013gza, Makino:2014taa}.
In this paper, we also employ an initial condition given by bare fields for the Wess-Zumino flow
in later sections, such as Eq. \eqref{initial2} for scalar theory.
The formal solution of Eq.~(\ref{scalar_flow}) can be obtained from an
iterative approximation of the flow equation.
This is regarded as a perturbative expansion in terms of $\lambda$.
The flowed field $\varphi(t,x)$ is thus given by a tree-like graph with the boundary field $\phi$ at the endpoints.
The correlation function of the flowed field is then evaluated by the usual perturbation
theory at the boundary\cite{Luscher:2010iy, Luscher:2011bx}.
In the massless free flow where $\partial\varphi/\partial t = \Box\varphi$
and Eq.\eqref{scalar_action} gives the boundary theory,\footnote{
In this case, the action that defines the gradient flow differs
from the one that defines the boundary theory.}
any correlation function of $\varphi(t,x)$
is UV finite up to an extra wave function renormalization
once the boundary theory is properly renormalized.
However, for massive or interacting flows ($m \neq 0$ or $\lambda\neq 0$),
such a property is not obtained\cite{Capponi:2015ucc}.
This conclusion is easily understood from the 4+1 dimensional theory that
produces the same perturbative series discussed above.
As in the case of the Yang-Mills flow\cite{Luscher:2011bx},
the bulk action of the 4+1 dimensional theory is given
by
\begin{align}
& S_{\mathrm{bulk}}=\int_0^{\infty}dt \int d^4 x
L(t,x) \bigg\{ \partial_t\varphi (t,x) - (\Box-m^2)\varphi (t,x) + \lambda \varphi^3 (t,x) \bigg\}
\label{bulk_action}
\end{align}
with a Lagrange multiplier field $L(t,x)$.
In the renormalized perturbation theory, the bulk action \eqref{bulk_action} is separated
into the renormalized action and the bulk counterterm $S_{bulk \, c.t.} = \int_0^{\infty}dt \int d^4 x
L(t,x) ( \delta_{m^2}\varphi (t,x) + \delta_\lambda \varphi^3 (t,x) )$.
The bare $m$ and $\lambda$ receive UV divergent corrections which are determined
in the boundary theory. In Feynman graphs given by the 4+1 dimensional theory,
since the renormalized action
provides the damping factor ${\rm e}^{-(p^2+m_R^2)t}$ with the renormalized mass $m_R$,
any bulk loop converges for $t \neq 0$ and divergent contributions appear only at the boundary.
On the other hand,
the bulk counterterm provides divergent contributions from $\delta_{m^2}$ and $\delta_\lambda$ for $t \neq 0$.
Therefore this 4+1 dimensional theory is non-renormalizable.
\footnote{In the massless free flow,
there are no bulk counterterms, and
any UV divergence of flowed field correlators appears only
in loop integrals at the boundary.
If we took $\varphi(t=0,x)=\phi_{R}(x)$ instead of Eq.~\eref{initial2},
any correlation function would be UV finite.}
Achieving UV finiteness
in the massive or interacting flow requires the absence of the bulk counterterms.
In other words, the flow equation should be given by renormalization-invariant couplings.
We consider a supersymmetric $\phi^4$ theory in the next section
because further constraints on the renormalization are needed to define such a renormalization-invariant flow equation.
\section{Review of the Wess-Zumino model}
\label{sec:WZ_review}
We work in Euclidean space with the notation of Refs.\cite{Kadoh:2018qwg, Kadoh:2019glu},
which is derived from Ref.\cite{Wess:1992cp} by a Wick rotation.
See \ref{notation} for details of the notation.
\subsection{The Wess-Zumino model}
The Wess-Zumino model is a supersymmetric extension of $\phi^4$ theory which is
given by a scalar field $A(x)$, a Weyl spinor $\psi(x)$ and an auxiliary field $F(x)$.
In the superfield formalism,
a chiral superfield
$\Phi(x,\theta,\bar\theta)$ contains the field contents as
\begin{align}
\Phi(y,\theta) \equiv A(y)+\sqrt{2}\theta\psi(y) + i\theta\theta F(y)
\end{align}
where $y_\mu=x_\mu+ i\theta \sigma_\mu \bar\theta$.
In Minkowski space, an anti-chiral superfield $\Phi^\dag$ is defined by the Hermitian conjugate of $\Phi$.
However, in Euclidean space, such a definition is incompatible with the Wick rotation.
In fact, $\bar \psi$ is not a Hermitian conjugate of $\psi$ but a different Weyl spinor.
We define an anti-chiral superfield $\bar \Phi$ that is a Euclidean counterpart of $\Phi^\dag$ as
\begin{align}
\bar \Phi(\bar y,\bar \theta) \equiv A^\ast(\bar y)+\sqrt{2}\bar \theta\bar\psi(\bar y) + i\bar\theta\bar\theta F^\ast (\bar y)
\end{align}
where $\bar y_\mu=x_\mu- i\theta \sigma_\mu \bar\theta$.
In Euclidean space, the chiral and anti-chiral superfields $\Phi$ and $\bar\Phi$ also satisfy
$\bar D_{\dot\alpha} \Phi=D_{\alpha} \bar\Phi=0$. The supersymmetry transformation of a superfield
${\cal F}(x,\theta,\bar\theta)$ is defined as
\begin{align}
\delta_\xi {\cal F}(x,\theta,\bar\theta) = (\xi Q + \bar\xi \bar Q) {\cal F}(x,\theta,\bar\theta),
\label{super_transf}
\end{align}
where the supercovariant derivatives $D, \bar D$ and supercharges $Q, \bar Q$ are defined in
\ref{notation}.
Supersymmetry transformations of component fields are derived from \eqref{super_transf}.
The Wess-Zumino model is then defined by
\begin{align}
S =-\int d^8 z \, \bar\Phi(z) \Phi(z)
-\int d^4x d^2 \theta \, W(\Phi(z)) - \int d^4x d^2 \bar\theta \, W(\bar\Phi(z)),
\label{wz_action}
\end{align}
where
\begin{align}
W(\Phi) \equiv \frac{m}{2}\Phi^2+\frac{g}{3} \Phi^3
\end{align}
for bare coupling constants $m\ge 0$ and $g >0$.
To simplify the notation, we used $z=(x_\mu,\theta_\alpha, \bar\theta_{\dot\alpha})$ and
$d^8 z \equiv d^4 x d^2\theta d^2\bar\theta$.
The action is invariant under the supersymmetry transformation (\ref{super_transf}).
Renormalized superfield $\Phi_R$ and renormalized coupling constants $m_R, g_R$ satisfy
\begin{align}
\Phi_R = Z^{-\frac{1}{2}} \Phi, \quad \ \ \bar\Phi_R = Z^{-\frac{1}{2}} \bar\Phi,
\label{ren_super_field}
\end{align}
and
\begin{align}
\delta_m = mZ- m_R, \quad \delta_g=g Z^{\frac{3}{2}} -g_R.
\end{align}
The non-renormalization theorem of the Wess-Zumino model
tells us that the F-terms are not renormalized, that is, $\delta_m=\delta_g=0$
\cite{Wess:1973kz, Iliopoulos:1974zv, Grisaru:1979wc, Seiberg:1993vc}.
Therefore, we have
\begin{align}
m_R = mZ, \qquad g_R=g Z^{\frac{3}{2}}.
\label{non-renormalization}
\end{align}
It turns out that a normalized mass given by
\begin{align}
M \equiv mg^{-\frac{2}{3}}
\label{inv_mass}
\end{align}
is invariant under the renormalization.
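Indeed, Eq.\eqref{non-renormalization} implies
\begin{align*}
m_R \, g_R^{-\frac{2}{3}} = (mZ)\left(g Z^{\frac{3}{2}}\right)^{-\frac{2}{3}} = m g^{-\frac{2}{3}} = M ,
\end{align*}
so that $M$ takes the same value whether it is built from the bare or the renormalized couplings.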
\subsection{Perturbation theory}\label{PT}
The perturbation theory can be given
in the superfield formalism\cite{Grisaru:1979wc}.
We derive a super-Feynman rule for 1PI supergraphs of
the Wess-Zumino model in Euclidean space.
Eq. \eqref{non-renormalization} is formally confirmed by the power counting theorem derived from the super-Feynman rule.
We first introduce external chiral and anti-chiral superfields
$J$ and $\bar{J}$ satisfying $\bar D_{\dot\alpha} J=D_\alpha\bar{J}=0$ and consider
\begin{align}
Z[J,\bar{J}]=\int D\Phi D\bar{\Phi} \, e^{-S_0 -S_{int}-S_{src} }
\label{ZJ}
\end{align}
where
\begin{align}
S_{src} = -\int d^4 x d^2\theta \, J(z) \Phi(z) - \int d^4 x d^2\bar\theta \, \bar J(z) \bar \Phi(z).
\end{align}
Superfield Green's function $G(z_1,z_2,\cdots,z_m; z^\prime_{1},z^\prime_{2},\cdots, z^\prime_{n})$ is obtained by
\begin{align}
& G(z_1,\cdots,z_m; z^\prime_{1},\cdots, z^\prime_n) \nonumber \\
& \hspace{1cm} = \frac{1}{Z|_{J=\bar J=0}}
\left. \frac{\delta^m}{\delta J (z_1)\cdots \delta J (z_m)}
\frac{\delta^n}{\delta \bar J (z^\prime_{1}) \cdots \delta \bar J (z^\prime_{n})} \,
Z[J,\bar{J}] \right|_{J=\bar J=0},
\label{green_function}
\end{align}
where
\begin{align}
\frac{\delta J(z_1)}{\delta J(z_2)}=&-\frac{\bar D_1^2}{4}\delta^8(z_1-z_2), \\
\frac{\delta \bar{J}(z_1)}{\delta \bar{J}(z_2)}=&-\frac{D_1^2}{4}\delta^8 (z_1-z_2),
\end{align}
and the other functional derivatives are zero,
where $D_1$ and $\bar D_1$ are defined for $z_1$.
Let $S_0$ and $S_{int}$ be the free and interaction parts of the action, respectively.
The free field action $S_0$ can be written in the full superspace as
\begin{align}
S_0=-&\int d^8z \left\{ \bar\Phi \Phi + \frac{m}{2}\Phi \left(-\frac{D^2}{4\Box}\right) \Phi
+\frac{m}{2}\bar\Phi \left(-\frac{\bar{D}^2}{4\Box}\right)\bar\Phi \right\}(z).
\end{align}
Similarly, we have
\begin{align}
S_{int}=-\frac{g}{3} \int d^8z \left\{ \Phi^2 \left(-\frac{D^2}{4\Box} \right) \Phi + \bar \Phi^2 \left(-\frac{\bar{D}^2}{4\Box}\right)
\bar\Phi \right\}(z),
\label{int_wz}
\end{align}
and
\begin{align}
S_{src} = -\int d^8z \left\{ J\left(-\frac{D^2}{4\Box}\right) \Phi
+\bar J \left(-\frac{\bar{D}^2}{4\Box}\right)\bar\Phi \right\}(z).
\end{align}
These are easily derived using Eqs.~\eref{identity_Box1} and \eref{identity_Box2}.
A short calculation tells us that $Z_0[J,\bar J] \equiv Z[J,\bar J]|_{g=0}$ is written as
\begin{align}
Z_0[J,\bar{J}]
=&\exp\left\{\frac{1}{2}\int d^8z d^8z' \left(
J(z), \bar{J}(z)
\right) \Delta_{GRS}(z,z')\left(
\begin{array}{cc}
J(z^\prime)\\
\bar{J}(z^\prime)
\end{array}
\right) \right\},
\label{GRS_Z}
\end{align}
where
\begin{align}
\Delta_{GRS}(z,z')=&\frac{1}{-\Box+m^2}\left(
\begin{array}{cc}
\frac{m D^2}{4\Box}&1\\
1&\frac{m \bar{D}^2}{4\Box}
\end{array}
\right)\delta^8(z-z').
\end{align}
The propagator $\Delta_{GRS}$ is called
the Grisaru-Rocek-Siegel (GRS) propagator introduced in \cite{Grisaru:1979wc}.
Two point functions are thus obtained as
\begin{align}
&\langle\Phi(z_1)\bar\Phi(z_2) \rangle_0 =\frac{1}{16} \frac{\bar D^2_1 D_1^2}{-\Box_1+m^2} \delta^8(z_1-z_2),\nonumber \\
&\langle\Phi(z_1)\Phi(z_2) \rangle_0 = \frac{m}{4} \frac{\bar D^2_1}{-\Box_1+m^2} \delta^8(z_1-z_2),
\label{Phi_correlation_function}
\\
&\langle\bar\Phi(z_1)\bar\Phi(z_2) \rangle_0 =\frac{m}{4} \frac{D^2_1}{-\Box_1+m^2} \delta^8(z_1-z_2), \nonumber
\end{align}
where $\langle \cdots \rangle_0$ is the expectation value in the free theory.
The Green's function \eqref{green_function} is obtained from
\begin{align}
Z[J,\bar{J}]= \exp\left\{-S_{int}\left[\frac{\delta}{\delta J}, \frac{\delta}{\delta \bar{J}} \right] \right\}
Z_0[J,\bar{J}],
\end{align}
by evaluating the functional derivatives $\delta/\delta J$ and $\delta/\delta \bar J$.
In perturbation theory, we need to evaluate extra derivatives that arise from the Taylor expansion of
$ \exp\left\{-S_{int}\left[\frac{\delta}{\delta J}, \frac{\delta}{\delta \bar{J}} \right] \right\}$.
The perturbative calculation of Green's functions contains a term like
\begin{align}
&-S_{int}\left[\frac{\delta}{\delta J},0 \right] J(z_1)J(z_2)J(z_3)\nonumber\\
& = \frac{g}{3} \int d^8 z_4 \left\{-\frac{D_4^2}{4\Box_4} \left(\frac {\delta}{\delta J(z_4)}\right)\right\}\left(\frac {\delta}{\delta J(z_4)}\right)^2J(z_1)J(z_2)J(z_3)\nonumber\\
&= 2g \int d^8 z_4 \delta^8(z_1-z_4) \left(-\frac{\bar D_2^2}{4} \right)\delta^8(z_2-z_4)
\left(-\frac{\bar D_3^2}{4} \right)\delta^8 (z_3-z_4),
\label{sample}
\end{align}
where $J(z_i)$ attaches to anti-chiral superfields via Eq.~\eref{GRS_Z}.
We used \eref{identity_Box2} to show the second equality.
The effective action is made of 1PI supergraphs
that are calculated from 1PI Green's functions amputating propagators of external lines.
Each vertex of 1PI diagrams has two or three internal lines.
For a vertex with no external lines, two of the three internal lines have $\frac{\bar D^2}{4}$ as suggested
from the last line of Eq. \eqref{sample}.
While, for a vertex with two internal lines and one external line,
one of the two internal lines have $\frac{\bar D^2}{4}$
because the external lines are associated
with $\delta/\delta J$ without $\frac{D^2}{4\Box}$ in the second line of \eqref{sample}.
The super-Feynman rules for 1PI supergraphs are given in the momentum space as follows:
\begin{enumerate}[ \ \ (a)]
\item Use the propagators $\tilde\Delta_{GRS}$ for $\Phi\Phi, \Phi\bar\Phi, \bar\Phi\bar\Phi$, which are given by
\begin{align}
\tilde\Delta_{GRS}(p;\theta_1,\bar\theta_1,\theta_2,\bar\theta_2) \nonumber \\
& \hspace{-3cm} =
\frac{1}{p^2+m^2}\left(
\begin{array}{cc}
-\frac{m D_1^2}{4p^2}&1\\
1&-\frac{m \bar{D}_1^2}{4p^2}
\end{array}
\right)\delta^2(\theta_1-\theta_2)\delta^2(\bar\theta_1-\bar\theta_2).
\label{GRS_mom}
\end{align}
\item Write a factor $2g$ and $\int d^2 \theta d^2\bar\theta$ at each vertex.
For a vertex with $n$ internal lines ($n=2,3$),
put a factor of $-\bar{D}^2/4$ $(-D^2/4)$ at $n-1$ lines of the $n$ chiral (anti-chiral) lines.
\item Impose the momentum conservation at each vertex and integrate over undetermined loop momenta.
\item Compute the usual combinatoric factors.
\end{enumerate}
These rules are given in Euclidean space. See also Ref.\cite{Wess:1992cp} for the rule in Minkowski space.
We can calculate the superficial degrees of divergence for 1PI supergraphs using the super-Feynman rule.
Consider a 1PI supergraph with $L$ loops, $V$ vertices, $E$ external lines
and $P$ propagators of which $C$ are $\Phi\Phi$ or $\bar\Phi\bar\Phi$ massive propagators.
We count $D^2,\bar D^2$ as $p$ because $\bar D^2 D^2 \sim p^2$ for chiral superfields.
Each loop integral has $d^4p$.
The GRS propagator provides $1/p^2$ with an additional factor $1/p$ for $\Phi\Phi$ or $\bar\Phi\bar\Phi$ propagators.
The internal lines have $2V-E$ factors of $D^2$ or $\bar D^2$.
In each loop integral, we can use an identity $\delta_{12} D^2 \bar D^2 \delta_{12}=16 \delta_{12}$
to remove a $D^2\bar D^2 \sim p^2$.
The superficial degrees of divergence for the graph is given by
\begin{align}
d = 4L -2P - C + 2V - E - 2L.
\end{align}
Using $V-P+L=1$, we find
\begin{align}
d = 2-E-C.
\label{sdiv}
\end{align}
For $E=2$, $d$ can be zero (the logarithmic divergence).
If two external lines have the same chirality, $d<0$
because at least one $\Phi\Phi$ or $\bar\Phi\bar\Phi$ propagator is needed.
We have $d<0$ for $E \ge 3$.
Thus we find that the wave function renormalization exists
but the effective action does not have any divergent correction to $m\Phi^2$ and $g \Phi^3$.
\section{The Wess-Zumino flow}\label{sec:WZ_flow}
We consider a supersymmetric gradient flow in the Wess-Zumino model.
It can be shown that any correlation function of the flowed fields is UV finite
thanks to the non-renormalization theorem under an appropriate initial condition.
\subsection{The Wess-Zumino flow with renormalization-invariant couplings}
In Ref.\cite{Kadoh:2019glu}, we defined a supersymmetric flow equation
using the gradient of the action Eq.~\eref{wz_action}.
However, the bulk counterterms exist in this case,
because the bare coupling constants $m, g$ included in the flow
receive the renormalizations determined at $t=0$.
See Ref.\cite{Capponi:2015ucc} for relevant arguments.
Therefore, the flow theory with bare $m$ and $g$ is ill-defined at the quantum level.
In order to solve this issue, we introduce renormalization-invariant couplings into the flow equation.
We consider the following rescaling of coordinates and field variables,
\begin{align}
\begin{split}
x^\prime_\mu \equiv g^{\frac{2}{3}} x_\mu, \quad
\theta^\prime \equiv g^{\frac{1}{3}}\theta, \quad
\bar\theta^\prime \equiv g^{\frac{1}{3}}\bar\theta
\end{split}
\end{align}
and
\begin{align}
&A^\prime (x^\prime)\equiv g^{\frac{1}{3}}A(x), \nonumber \\
&\psi^\prime(x^\prime) \equiv \psi(x), \label{rescaling} \\
& F^\prime (x^\prime) \equiv g^{-\frac{1}{3}} F(x). \nonumber
\end{align}
Replacing every variable of the superfields by the corresponding rescaled variable,
we have
\begin{align}
\Xi (x^\prime, \theta^\prime, \bar\theta^\prime) \equiv g^{\frac{1}{3}} \Phi (x,\theta,\bar\theta)
\end{align}
where
\begin{align}
\Xi
(y^\prime,\theta^\prime) =
A'(y')+\sqrt{2}\theta'\psi'(y')+ i\theta'\theta' F'(y'), \label{prime}
\end{align}
and $y^\prime_\mu \equiv x'_\mu+i\theta'\sigma_\mu\bar\theta' = g^{\frac{2}{3}}y$.
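This follows directly from the component rescalings above:
\begin{align*}
A^\prime(y^\prime)+\sqrt{2}\theta^\prime\psi^\prime(y^\prime) + i\theta^\prime\theta^\prime F^\prime(y^\prime)
= g^{\frac{1}{3}} A(y) + \sqrt{2}\, g^{\frac{1}{3}} \theta\psi(y) + i\, g^{\frac{2}{3}} \theta\theta \, g^{-\frac{1}{3}} F(y)
= g^{\frac{1}{3}} \Phi(y,\theta).
\end{align*}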
The differential operators satisfy $Q^\prime_\alpha = g^{-\frac{1}{3}}Q_\alpha$ and $D^\prime_\alpha = g^{-\frac{1}{3}}D_\alpha$. The superfield formalism is then kept unchanged because
$\Xi$ is a chiral superfield satisfying $\bar D'_{\dot \alpha} \Xi =0$ and
the supersymmetry transformation laws of $A^\prime, \psi^\prime, F^\prime$ are the same as those of
$A, \psi, F$.
Hereafter we omit the prime symbols unless they are confusing.
From a short calculation, one can show that
the Wess-Zumino action is rewritten in $\Xi(x,\theta,\bar\theta)$ and $\bar\Xi(x,\theta,\bar\theta)$ as
\begin{align}
S =&-\frac{1}{g^2} \int d^4x d^2\theta d^2\bar\theta \bar\Xi \Xi
- \frac{1}{g^2} \int d^4x d^2\theta\left(\frac{1}{2} M \Xi^2+\frac{1}{3}\Xi^3 \right)\nonumber\\
& - \frac{1}{g^2} \int d^4x d^2\bar\theta\left(\frac{1}{2} M \bar\Xi^2+\frac{1}{3}\bar\Xi^3 \right).
\label{wz_action_prime}
\end{align}
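To see this, restore the primes momentarily: since $d^4x^\prime = g^{\frac{8}{3}} d^4x$ and $d^2\theta^\prime = g^{-\frac{2}{3}} d^2\theta$ (similarly for $\bar\theta^\prime$), the superspace measures obey
$d^4x^\prime d^2\theta^\prime d^2\bar\theta^\prime = g^{\frac{4}{3}}\, d^4x d^2\theta d^2\bar\theta$ and
$d^4x^\prime d^2\theta^\prime = g^{2}\, d^4x d^2\theta$. For instance, the cubic term becomes
\begin{align*}
-\frac{1}{g^2} \int d^4x^\prime d^2\theta^\prime \, \frac{1}{3}\Xi^3
= -\frac{1}{g^2} \, g^{2} \int d^4x d^2\theta \, \frac{1}{3} \left(g^{\frac{1}{3}}\Phi\right)^3
= -\frac{g}{3} \int d^4x d^2\theta \, \Phi^3 ,
\end{align*}
and the kinetic and mass terms of Eq.~\eqref{wz_action} are recovered in the same way.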
We should note that $M$ is defined as in
Eq.\eqref{inv_mass} and is therefore invariant
under the standard renormalization of the action (\ref{wz_action}).
In terms of rescaled variables, we can
consider a supersymmetric gradient flow according to Ref.\cite{Kadoh:2019glu} as
\begin{align}
\partial_t \Psi (t,z) =
g^2\frac{\bar D^2}{4} \frac{\delta S}{\delta \Xi (z)} \bigg|_{\Xi(z) \rightarrow \Psi(t,z)}
\label{superflow_formal_0}
\end{align}
where $z=(x,\theta,\bar\theta)$.
The $\bar D^2$ factor is needed to keep the superchiral condition for
$\Psi (t,z)$ because ${\delta S}/{\delta \Xi}$ is not chiral. The flow equation for $\bar\Psi$ is given by a replacement
$(\Psi,\Xi, \bar D) \leftrightarrow (\bar\Psi, \bar\Xi, D)$ from Eq.~(\ref{superflow_formal_0}).
We thus have
\begin{align}
& \partial_t \Psi = \Box\Psi - M \frac{\bar D^2}{4} \bar \Psi - \frac{ \bar D^2}{4} \bar{\Psi}^2, \label{superflow1} \\
& \partial_t \bar \Psi = \Box \bar \Psi - M \frac{D^2}{4} \Psi - \frac{ D^2}{4} {\Psi}^2 \label{superflow2}.
\end{align}
The flow equation is given with couplings that are renormalization-invariant for the
original Wess-Zumino action \eqref{wz_action} given by $(A, \psi, F)$.
The initial condition for $\Psi(t,z)$ and $\bar \Psi(t,z)$ is given in the next section.
If a supersymmetry transformation of the flowed fields
is defined
by extending (\ref{super_transf}) to 4+1 dimensions as
$\delta_\xi \Psi (t,z) = (\xi Q + \bar\xi \bar Q) \Psi(t,z)$,
then the flow equations and the supersymmetry transformation are consistent because
they satisfy $[\delta_\xi,\partial_t]=0$.
The superchiral condition $\bar D_{\dot\alpha} \Psi = D_{\alpha} \bar\Psi =0$ allows us to expand
$\Psi$ and $\bar \Psi$ as
\begin{align}
& \Psi(t,y,\theta) = \phi(t,y) + \sqrt{2}\theta {\cal\chi}(t,y) + i \theta\theta G(t,y), \\
& \bar \Psi(t,\bar y,\theta) = \bar \phi(t,\bar y) + \sqrt{2}\bar\theta\bar\chi(t,\bar y) + i \bar\theta\bar\theta \bar G(t,\bar y).
\end{align}
For the component fields, we have
\begin{align}
\partial_t \phi & = \Box \phi +i M \bar G +\left( 2 i \bar \phi \bar G -\bar\chi \bar\chi \right),
\label{flow_phi}
\\
\partial_t \bar \phi &= \Box \bar \phi +i M G + \left( 2 i \phi G -\chi \chi \right),
\label{flow_barphi} \\
\partial_t \chi & = \Box\chi +i\sigma_\mu\partial_\mu \left(M \bar\chi+2 \bar \phi \bar\chi \right),
\label{flow_chi}
\\
\partial_t \bar\chi & = \Box\bar\chi +i\bar\sigma_\mu\partial_\mu \left(M \chi + 2 \phi \chi \right),
\label{flow_barchi}
\\
\partial_t G &= \Box G-i \Box\left( M \bar \phi + \bar \phi^{2} \right),
\label{flow_G} \\
\partial_t \bar{G} &= \Box \bar G -i \Box \left(M \phi + \phi^{2} \right).
\label{flow_barG}
\end{align}
Since the reality condition is broken by the Wick rotation,
the Hermitian conjugate relation is not kept for the flow equation.
So $\bar \phi$ and $\bar G$ are independent complex fields which are not complex conjugates of $\phi$ and $G$.
From the initial condition given in the next section,
the complex conjugate relation is kept only at the boundary such as $\bar \phi(t=0,x)=(\phi(t=0,x))^*$.
Note that the flow equations for $\bar \phi,\bar\chi,\bar G$ are obtained from those of $\phi,\chi, G$
by a simple replacement as $\phi \leftrightarrow \bar \phi, \chi \leftrightarrow \bar\chi, G \leftrightarrow \bar G$ and $\sigma_\mu \leftrightarrow \bar\sigma_\mu$.
\subsection{The vector notation and an initial condition}
We introduce a vector notation of chiral superfields as
\begin{align}
{\bf \Psi}(t,z) =
\left(
\begin{array}{c}
{\bf \Psi}_1(t,z) \\
{\bf \Psi}_2(t,z)
\end{array}
\right)
\equiv
\left(
\begin{array}{c}
\Psi(t,z) \\
\bar \Psi(t,z)
\end{array}
\right).
\label{vector_Psi}
\end{align}
The Wess-Zumino flow equations \eqref{superflow1} and \eqref{superflow2}
can be expressed as
\begin{align}
\partial_t {\bf \Psi} = (\Box +M {\bf \Gamma \Delta}) {\bf \Psi} + \bar{\bf \Delta} {\bf N},
\label{WZflow_vector}
\end{align}
where
\begin{align}
&{\bf \Delta} \equiv
\left(
\begin{array}{cc}
-\frac{1}{4} D^2 & 0 \\
0 & -\frac{1}{4}\bar D^2
\end{array}
\right),
\label{Delta}
\\
&\bar {\bf \Delta} \equiv
\left(
\begin{array}{cc}
-\frac{1}{4} \bar D^2 & 0\\
0 & -\frac{1}{4} D^2
\end{array}
\right),
\label{DeltaBar}
\\
&{\bf \Gamma} \equiv
\left(
\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}
\right),
\label{Gamma}
\end{align}
and
the nonlinear part is characterized by
\begin{align}
{\bf N}_i (t, z) =\frac{1}{2}g_{ijk} {\bf \Psi}_j(t,z) {\bf \Psi}_k(t,z),
\end{align}
with a coefficient $g_{ijk}$ defined as $g_{ijk}=2{\bf\Gamma}_{ij} {\bf\Gamma}_{ik}$.
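For instance, taking the first component of Eq.\eqref{WZflow_vector} we find
\begin{align*}
\left(M {\bf \Gamma \Delta} {\bf \Psi}\right)_1 = -\frac{M \bar D^2}{4} \bar\Psi, \qquad
{\bf N}_1 = \frac{1}{2} g_{122} \bar\Psi^2 = \bar\Psi^2, \qquad
\left(\bar{\bf \Delta} {\bf N}\right)_1 = -\frac{\bar D^2}{4} \bar\Psi^2,
\end{align*}
so that Eq.\eqref{superflow1} is reproduced; the second component reproduces Eq.\eqref{superflow2} in the same way.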
We consider the following initial condition,
\footnote{
For the component fields, we have
$\phi|_{t=0} =\alpha A, \chi|_{t=0} = \alpha \psi,G|_{t=0} = \alpha F$
where $\alpha=g^\frac{1}{3}$.
}
\begin{align}
{\bf \Psi}|_{t=0} = {\bf \Phi}_0
\label{inital_condition_Psi}
\end{align}
where
\begin{align}
{\bf \Phi}_0(z) \equiv
g^{\frac{1}{3}}
\left(
\begin{array}{c}
\Phi(z) \\
\bar \Phi(z)
\end{array}
\right)=
g_R^{\frac{1}{3}}
\left(
\begin{array}{c}
\Phi_R(z) \\
\bar \Phi_R(z)
\end{array}
\right).
\label{inital_condition_Phi}
\end{align}
The second equality of Eq.(\ref{inital_condition_Phi})
is a direct consequence of the non-renormalization theorem.
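Indeed, combining \eqref{ren_super_field} with \eqref{non-renormalization} gives
\begin{align*}
g^{\frac{1}{3}} \Phi = g^{\frac{1}{3}} Z^{\frac{1}{2}} \Phi_R = \left(g Z^{\frac{3}{2}}\right)^{\frac{1}{3}} \Phi_R = g_R^{\frac{1}{3}} \Phi_R,
\end{align*}
and analogously for $\bar\Phi$.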
We may consider ${\bf \Psi}|_{t=0}= f(M) {\bf \Phi}_0$ instead of Eq.(\ref{inital_condition_Psi})
because the conclusion of this section does not change for any nonzero function $f(M)$.
Hereafter we take $f(M)=1$ for simplicity.
The operators introduced above satisfy
\begin{align}
&\bar {\bf \Delta} {\bf \Delta} \bar {\bf \Delta } = \Box \bar {\bf \Delta }, \\
& {\bf \Gamma} \bar {\bf \Delta} {\bf \Gamma} = {\bf \Delta}, \\
& {\bf \Gamma}^2={\bf 1},
\end{align}
and
\begin{align}
\bar {\bf \Delta}{\bf \Delta } {\bf \Psi} = \Box {\bf \Psi} .
\end{align}
\begin{figure}[]
\begin{center}
\includegraphics [width=100mm]
{fig1new3.pdf}
\end{center}
\caption{Tree-like graphs of the iterative solution ${\bf \Psi}(t,p,\theta,\bar\theta)$. }
\label{fig1}\end{figure}
\subsection{Iterative solution of the Wess-Zumino flow}
\label{iso}
The flowed field $\Psi(t,z)$ satisfying the Wess-Zumino flow equation can be expressed as an iterative expansion.
To show this, we first introduce a heat kernel in the superspace $z=(x_\mu,\theta_{\alpha},\bar\theta_{\dot\alpha})$ as
\begin{align}
{\bf K}_t (z)
=\left(
\begin{array}{cc}
C_t(x) &
-\frac{\bar{D}^2}{4\sqrt{-\Box}} S_t(x)\\
-\frac{D^2}{4\sqrt{-\Box}} S_t(x) &
C_t(x)
\end{array}
\right) \times \delta^2(\theta)\delta^2(\bar\theta)
\end{align}
where
\begin{align}
C_t(x)&\equiv \int \frac{d^4 p}{(2\pi)^4} e^{ipx-tp^2}\cos(tM \sqrt{p^2}),\\
S_t(x)&\equiv \int \frac{d^4 p}{(2\pi)^4} e^{ipx-tp^2}\sin(tM \sqrt{p^2}).
\end{align}
The heat kernel satisfies
\begin{align}
\left(\partial_t -\Box - M{\bf \Gamma \Delta}\right) {\bf K}_t (z) =0
\end{align}
and
\begin{align}
{\bf K}_0 (z) = \delta^8(z),
\end{align}
since $C_0(x) =\delta^4(x)$ and $S_0(x)=0$.
The flow equation \eqref{WZflow_vector}
can be solved formally as
\begin{align}
{\bf \Psi}(t,z) = \int d^8 z^\prime {\bf K}_t (z-z^\prime) {\bf \Phi}_0(z^\prime)
+ \int_0^t ds \int d^8 z^\prime \bar{\bf \Delta }{\bf K}_{t-s} (z-z^\prime) {\bf N}(s,z^\prime),
\label{formal_solution}
\end{align}
where $\bar{\bf \Delta}$ acts on $z$.
Inserting the formal solution into ${\bf \Psi}$ of ${\bf N}$ in the rhs repeatedly
yields an iterative approximation of the flow equation.
The iterative approximation can be expressed as a tree-like graph with ${\bf \Phi}_0$
at endpoints.
In Fig.\ref{fig1},
the iterative solution of the Wess-Zumino flow equation \eqref{WZflow_vector} is represented graphically.
The circle with a cross attached to the endpoints at flow time zero is the one-point vertex defined by
\begin{align}
\begin{minipage}[c]{5cm}
\centering
\includegraphics[width=18mm]{fig21new4.pdf}
\end{minipage}
\begin{minipage}[l]{5cm}
\vspace{2mm}
\centering
$ \hspace{-30mm}= \hspace{8mm}{\bf \Phi}_{0,i}(p,\theta,\bar\theta)$.
\end{minipage}\label{one-point-vertex}
\end{align}
The flow vertex shown by an open circle is defined as
\begin{align}
\begin{minipage}[c]{5cm}
\centering
\includegraphics[width=25mm]{fig31new3.pdf}
\end{minipage}
\begin{minipage}[l]{5cm}
\vspace{2mm}
\centering
$ \hspace{-20mm}= \hspace{8mm}g_{ijk}\bar{\bf \Delta}_{ii}(p,\theta,\bar\theta)$,
\end{minipage}\label{flow_vertex}
\end{align}
where an operator $ \bar{\bf\Delta}_{ii}(p,\theta,\bar\theta)$ acts upon the outgoing line with the index $i$.
For each vertex (one-point and flow vertex), the Grassmann integral $\int d^2\theta d^2\bar\theta$ is performed. In addition, for the flow vertex, the flow time $t$ is integrated out from $0$ to $\infty$.
The flow line connecting the vertices is defined by
\begin{align}
\begin{minipage}[c]{4cm}
\centering
\includegraphics[width=35mm]{fig3new3.pdf}
\end{minipage}
\begin{minipage}[l]{8cm}
\vspace{0mm}
\centering
$ \hspace{10mm}= \hspace{8mm}\Theta(t-t^\prime) {\bf \tilde K}_{t-t^\prime, ij}(p, \theta-\theta^\prime, \bar \theta-\bar\theta^\prime)$.
\end{minipage}\label{flow_line}
\end{align}
where ${\bf \tilde K}_{t}(p, \theta, \bar \theta)=\int d^4x e^{-ipx}{\bf K}_{t}(x, \theta, \bar \theta)$
and $\Theta(t)$ is the Heaviside step function. The arrow indicates the direction of increasing flow time.
As for the momenta, momentum conservation is imposed at each flow vertex,
and any undetermined momentum of the ingoing flow lines is integrated over.
In Fig.\ref{fig1}, the tree-like graph begins at a single square at flow time $t$
and terminates at the one-point vertices at flow time $0$. The flow time runs from $0$ to $t$ keeping the time order with step functions. The initial condition Eq.(\ref{inital_condition_Phi}) tells us that this iterative approximation
may be understood as a perturbative expansion in the one-third power of the coupling constant, $g^{\frac{1}{3}}$.
\subsection{Super-Feynman rules}\label{Feynman_rule}
We move on to perturbative calculations of correlation functions of ${\bf \Psi}_i$ combining the above iterative approximation of the Wess-Zumino flow and the super-Feynman rules in the Wess-Zumino model at $t=0$ discussed in Sec.\ref{PT}.
For example, the leading order contribution to the two-point function is diagrammatically represented as
\begin{align}
\hspace{-15mm}\begin{minipage}[c]{10cm}
\centering
\includegraphics [width=80mm]{fig01new.pdf}
\end{minipage}\hspace{-5mm}.
\label{contraction_2Phi}
\end{align}
The staple symbol in the lhs denotes the contraction between two boundary fields ${\bf \Phi}_0$, which is given at the leading order as
\begin{align}
\begin{split}
&\langle{\bf \Phi}_{0,i}(p,\theta,\bar\theta){\bf\Phi}_{0,j}(p^\prime,\theta^\prime,\bar\theta^\prime) \rangle \\
&\qquad =
g^{\frac{2}{3}}
{\bf D}_{ij}(p,\theta,\bar\theta)
\ (2\pi)^4 \delta^4(p+p^\prime)
\delta^2(\theta-\theta^\prime)\delta^2(\bar \theta -\bar \theta^\prime)
\end{split}
\label{Phi0_correlation_function}
\end{align}
where
\begin{align}
{\bf D}(p,\theta,\bar\theta) =
\frac{1}{\sqrt{p^2+m^2}}
\left(
\begin{array}{cc}
{\rm sin}\left(\beta_0(p) \right) \frac{\bar D^2}{4} &
{\rm cos}\left(\beta_0(p) \right) \frac{\bar D^2 D^2}{16 \sqrt{p^2}} \\
{\rm cos}\left(\beta_0(p) \right) \frac{D^2 \bar D^2}{16 \sqrt{p^2}} &
{\rm sin}\left(\beta_0(p) \right) \frac{D^2}{4}
\end{array}
\right)
\label{PhiPhi_correlation_function}
\end{align}
for ${\rm tan}(\beta_0(p))=m/\sqrt{p^2}$.
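Note that ${\rm tan}(\beta_0(p))=m/\sqrt{p^2}$ implies
\begin{align*}
{\rm sin}\left(\beta_0(p)\right) = \frac{m}{\sqrt{p^2+m^2}}, \qquad
{\rm cos}\left(\beta_0(p)\right) = \frac{\sqrt{p^2}}{\sqrt{p^2+m^2}},
\end{align*}
so that, for instance, the $(1,1)$ and $(1,2)$ entries of $g^{\frac{2}{3}}{\bf D}(p,\theta,\bar\theta)$ reduce to
$g^{\frac{2}{3}}\frac{m}{p^2+m^2}\frac{\bar D^2}{4}$ and $g^{\frac{2}{3}}\frac{1}{p^2+m^2}\frac{\bar D^2 D^2}{16}$,
i.e., the momentum-space counterparts of the correlators in \eqref{Phi_correlation_function} multiplied by $g^{\frac{2}{3}}$, as expected from Eq.\eqref{inital_condition_Phi}.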
As shown in Eq. \eqref{contraction_2Phi},
we obtain the two-point function of $\bf \Psi$ at the leading order
by taking a contraction between the two $\bf \Phi_0$ of the two tree-level solutions of $\bf \Psi$ as
\begin{align}
\begin{split}
&\langle{\bf \Psi}_i(t,p,\theta,\bar\theta){\bf\Psi}_j(s,q,\theta^\prime,\bar\theta^\prime) \rangle\\
&\qquad = g^{\frac{2}{3}}
{\bf D}_{t+s, ij}(p,\theta,\bar\theta)
\ (2\pi)^4 \delta^4(p+q)
\delta^2(\theta-\theta^\prime)\delta^2(\bar \theta -\bar \theta^\prime)
\label{Psi_correlation_function}
\end{split}
\end{align}
where
\begin{align}
{\bf D}_{t}(p,\theta,\bar\theta) =
\frac{e^{-tp^2}}{\sqrt{p^2+m^2}}
\left(
\begin{array}{cc}
{\rm sin}\left(\beta_t(p) \right) \frac{\bar D^2}{4} &
{\rm cos}\left(\beta_t(p) \right) \frac{\bar D^2 D^2}{16 \sqrt{p^2}} \\
{\rm cos}\left(\beta_t(p) \right) \frac{D^2 \bar D^2}{16 \sqrt{p^2}} &
{\rm sin}\left(\beta_t(p) \right) \frac{D^2}{4}
\end{array}
\right)
\label{D_t}
\end{align}
for $\beta_t(p) = \beta_0(p) + t M \sqrt{p^2}$.
Thus, a field propagator associated with Eq.\eqref{Psi_correlation_function} is defined by
\begin{align}
\begin{minipage}[c]{3cm}
\centering
\includegraphics[width=35mm]{fig5new3.pdf}
\end{minipage}
\begin{minipage}[l]{9cm}
\vspace{0mm}
\centering
$ \hspace{5mm}= \hspace{5mm}g^{\frac{2}{3}} {\bf D}_{t+t^\prime,ij}(p, \theta, \bar \theta)
\delta^2(\theta-\theta^\prime)\delta^2(\bar \theta -\bar \theta^\prime)$.
\end{minipage}\label{field_propagator}
\end{align}
The flow-time dependence appears only through the sum of the flow times at the two endpoints,
and the field propagator is drawn as a line without an arrow.
Since Eq.\eqref{D_t} reproduces Eq.\eqref{PhiPhi_correlation_function} for $t=0$,
Eq.\eqref{field_propagator} contains all of the field propagators
such as $\langle{\bf \Phi}_0{\bf\Phi}_0\rangle$ and the mixed one $\langle{\bf \Phi}_0{\bf\Psi}\rangle$ as well as
$\langle{\bf \Psi\Psi\rangle}$.
Note that each field propagator is counted as $g^{\frac{2}{3}}$ in the perturbation theory.
We reformulate the perturbation theory at $t=0$ in terms of ${\bf \Phi}_0$ because the ${\bf\Phi}_0$ propagator can then be treated
uniformly with the flowed-field propagators. Unlike the perturbation theory given in Sec.\ref{PT}, the GRS propagator $\Delta_{GRS}$ is not used.
The super-Feynman rules at $t=0$ should be modified to fit with the rules for the iterative approximation of the Wess-Zumino flow equation.
First, we rewrite the interaction part of the action (\ref{int_wz}) as
\begin{align}
S_{int}=- \int d^8z \left\{ \frac{1}{3!} h_{ijk} \left(\frac{{\bf\Delta}}{\Box}{\bf\Phi}_{0}\right)_i{\bf\Phi}_{0,j}{\bf\Phi}_{0,k}\right\}(z),
\label{int_wz_mod}
\end{align}
where $h_{ijk}=2\delta_{ij}\delta_{ik}$.
The three-point vertex of the flow time zero may be defined by
\begin{align}
\begin{minipage}[l]{5cm}
\centering
\includegraphics[width=25mm]{fig4new3.pdf}
\end{minipage}
\hspace{-0mm}= \hspace{8mm}-h_{ijk} \frac{{\bf\Delta}_{ii}(p,\theta,\bar\theta)}{p^2},
\label{boundary_vertex}
\end{align}
where ${\bf\Delta}_{ii}(p,\theta,\bar\theta)$ acts on an internal line $p,i$.
This is because
${\bf\Delta}_{ii}(p,\theta,\bar\theta)/p^2$ can be changed to
${\bf\Delta}_{jj}(q,\theta,\bar\theta)/q^2$ or ${\bf\Delta}_{kk}(r,\theta,\bar\theta)/r^2$
by using the identity $\frac{\bar{\bf \Delta}{\bf \Delta }}{\Box} {\bf \Phi}_0 = {\bf \Phi}_0$ for Eq.\eqref{int_wz_mod}.
For each boundary vertex, the Grassmann integral $\int d^2\theta d^2\bar\theta$ is performed.
Now, we consider the following one-loop correction to the two-point function
including one flow vertex (open circle) and one ordinary vertex (filled circle):
\footnote{The boundary vertex attached to three ${\bf\Phi}_0$ is given by
a product of Eq.\eqref{boundary_vertex} and Eq.\eqref{one-point-vertex}.}
\begin{align}
\hspace{-5mm}\begin{minipage}[c]{10cm}
\centering
\includegraphics [width=120mm]{fig02new.pdf}
\end{minipage}\hspace{20mm}.
\label{contraction_2Phi_one_loop}
\end{align}
As in the tree-level case, performing the contraction between two ${\bf \Phi}_0$ yields a field propagator.
In this case, the three lines without arrows in the rhs indicate the mixed propagators associated with $\langle{\bf\Phi}_0{\bf\Psi}\rangle$.
Here, we mention that the coupling expansion does not naively correspond to the loop expansion.
This is because the $g$ dependence arises only from the field propagators of the order $g^{2/3}$
and the vertices and flow propagator do not depend on $g$.
Each one-loop diagram in Fig.\ref{oneloop} has different orders $g^{2n/3}$ where $n$ is
the number of field propagators.
\begin{figure}[]\begin{center}
\includegraphics [width=120mm]
{fig412.pdf}\end{center}
\caption{One loop diagrams}
\label{oneloop}\end{figure}
The super-Feynman rules for the correlation functions of ${\bf\Psi}$ in the momentum space are summarized as follows:
\begin{itemize}
\item[(a)] Use Eq.\eqref{flow_line} for a flow line that is an outgoing line emanated from each flow vertex.
\item[(b)] Use Eq.\eqref{field_propagator} for a field propagator by which two points (flow vertices, boundary vertices,
and starting points denoted by $\square$) are connected.
\item[(c)] Use Eq.\eqref{flow_vertex} for each flow vertex, and
use Eq.\eqref{boundary_vertex} for each boundary vertex.
For each flow vertex at $t$, perform the flow time integral $\int_{0}^{\infty}dt$.
For all the flow and boundary vertices at $(\theta,\bar\theta)$, perform the Grassmann integral $\int d^2\theta d^2\bar\theta$.
\item[(d)] Impose the momentum conservation at each vertex and integrate over undetermined loop momenta.
\item[(e)] Compute the usual combinatoric factors.
\end{itemize}
In addition, we mention some rules and properties that are common with the Yang-Mills flow\cite{Luscher:2011bx}.
Diagrams with closed flow-line loops are absent because any loop contains at least one field propagator.
The flow lines depend on the difference between the flow times of their two endpoints.
The flow-time dependence of the field propagators is determined by the sum of the flow times at the endpoints.
\subsection{The massive free flow}
\label{massivefreeflow}
We consider the massive free flow dropping the interaction terms from the flow equations
(but the boundary Wess-Zumino model has the interactions).
The exact solution is
\begin{align}
{\bf \Psi}(t,z) = \int d^8 z^\prime {\bf K}_t (z-z^\prime) {\bf \Phi}_0(z^\prime).
\label{free_flow_solution}
\end{align}
Then, recalling the definition of ${\bf \Phi}_0$ (\ref{inital_condition_Phi}), a correlation function of the flowed fields $\langle {\bf \Psi}(t_1,z_1) {\bf \Psi}(t_2,z_2) \ldots {\bf \Psi}(t_n,z_n) \rangle$ can be given by a linear combination of correlation functions of the renormalized fields $\Phi_R(z_i)$ and $\bar \Phi_R(z_i)$
weighted by the factor $(g_R)^{\frac{n}{3}}$.
In the renormalized perturbation theory, when evaluating the correlators of $\Phi_R(z_i)$ and $\bar \Phi_R(z_i)$,
UV divergences are removed by the standard counterterms.
So, in the case of the massive free flow,
any correlation function of the flowed fields is UV finite for any nonzero flow time
if the Wess-Zumino model is properly
renormalized.
\subsection{Power counting theorem}\label{sec:power}
We can calculate the superficial degrees of divergence in the perturbation theory of the Wess-Zumino flow
using the super-Feynman rule given in Sec.~\ref{Feynman_rule}.
Since the field propagators given in Eq.\eqref{field_propagator} have $t$ dependent functions,
we need to evaluate
the following integrals for each flow vertex:
\begin{align}
I(p^2) \equiv \int_0^\infty dt e^{-tp^2}f(t,p^2),
\end{align}
where $p$ is a loop momentum and external momenta are set to zero for simplicity.
After a short calculation, we find that for large $p^2$,
\begin{align}
I(p^2) = p^{-2} f(0,p^2) + \left(p^{-2}\right)^2 f^{(1)} (0,p^2) + \cdots,
\label{t_int_identity}
\end{align}
where $f^{(n)} (t,p^2)= d^n f(t,p^2)/dt^n$.
Since flow propagators with the same chirality and field propagators have
$f(t,p^2)\sim {\rm cos}(tM\sqrt{p^2})$, ${\rm cos}(\beta_t(p^2))$, ${\rm sin}(\beta_t(p^2))$,
the extra suppression factor appears as $p^{-2}$ from the first term of \eqref{t_int_identity}.
While, for massive flow propagators with the opposite chirality,
$f(t,p^2)\sim {\rm sin}(tM\sqrt{p^2})$ leads to $f(0,p^2)=0$
and the extra factor becomes $p^{-3}$ from the second term of \eqref{t_int_identity}.
At each flow vertex with an external flow line, we can apply the identity $\frac{\bar {\bf \Delta} {\bf \Delta}}{\Box} {\bf \Psi}={\bf \Psi}$
to an internal line and move a factor of $\bar {\bf \Delta}$ to the external line by integrals of parts.
This transformation leads to an extra suppression factor $p^{-1}$ because a factor
$\frac{{\bf \Delta}}{\Box} $ remains at the internal line.
This type of transformation cannot be applied to the boundary vertex because,
since it is made of fields with the same chirality, the partial integration of $\bar {\bf \Delta}$ does not work.
Consider a 1PI supergraph with $L$ loops, $V$ boundary vertices, $V_f$ flow vertices,
$E$ external field lines, $E_f$ external flow lines and $P$ field propagators
of which $C$ are massive field propagators with the same chirality, $\Psi\Psi$ and $\bar\Psi\bar\Psi$,
and $P_f$ flow propagators of which $C_f$ are massive flow propagators with the opposite chirality.
Each loop has a $d^4 p$ integral,
and the identity $\delta_{12} D^2 \bar D^2 \delta_{12}= 16\delta_{12}$ still applies in this case
to remove a $D^2\bar D^2 \sim p^2$ at each loop.
At $t=0$,
$\Psi\bar\Psi$ propagators behave as $p^0$ while massive chiral propagators behave as $p^{-1}$ for large $p^2$.
We have extra suppression factors $p^{-2}$ from the boundary of $t$ integrations at each flow vertex
discussed above.
For massive flow propagators with the opposite chirality, we have $p^{-3}$ instead of $p^{-2}$.
Each boundary vertex has a factor of ${\bf \Delta}_{ii}/p^2 \sim p^{-1}$ on one of the internal lines.
Each {\it internal} outgoing flow line emanated from the flow vertex has a factor of $\bar{\bf \Delta}_{ii} \sim p$.
Each external flow line has a suppression factor $p^{-1}$ from the discussion using
the identity $\frac{\bar {\bf \Delta} {\bf \Delta}}{\Box} {\bf \Psi}={\bf \Psi}$.
Thus we find that the superficial degrees of divergence $d$ is given by
\begin{align}
d=2L-C-2V_f-C_f-V+V_f-E_f -E_f \label{sdod}.
\end{align}
Using a topology relation $L-P-P_f+V+V_f=1$ and a few relations
such as $3V+3V_f = E+E_f+2P+2P_f$ (each vertex has three lines) and
$V_f=E_f+P_f$ (the flow vertex has an outgoing flow line) where
$E_f \ge 1$ for nonzero $V_f$,
we finally obtain
\begin{align}
d=2-C-C_f-E-3E_f.
\end{align}
This shows that any super-Feynman graph with flow vertices is UV finite at all orders of perturbation theory.
The remaining divergences for $V_f=E_f=0$ arise from boundary vertices and cancel as
in the massive free flow case because $n$-point functions of ${\bf \Psi}(t,z)$
are those of ${\bf K}_t {\bf \Phi}_0(z)$ for $V_f=0$ and
${\bf \Phi}_0$ is given by $g_R$ and renormalized fields $\Phi_R$ from \eqref{inital_condition_Phi}.
We can conclude that any correlation function of flowed fields
is UV finite in the Wess-Zumino flow at all orders of perturbation theory.
\section{Summary}
\label{sec:summary}
We introduced a supersymmetric gradient flow with renormalization-invariant couplings in the Wess-Zumino model
and showed the UV finiteness of the correlation functions of the superfield generated by the gradient flow.
The finiteness is a consequence of the perturbative power counting theorem for the 1PI supergraphs based on the super-Feynman rules.
In particular, we found that the interaction term in the flow equation does not contribute to divergent graphs, only the boundary theory does. After the parameter renormalization in the boundary theory,
its remaining divergence of the wave function can be remove by taking a renormalization-invariant initial condition for the flowed superfield. Thus, any correlation function of the flowed superfield is UV finite
at all orders of the perturbation theory.
In non-supersymmetric theory, including the mass term (and terms like $\phi^4$ interactions) in the matter flow yields non-removable divergence as bulk counterterms. Even in the massless free flow, a wave function renormalization remains. On the other hand, since the coupling constants of the Wess-Zumino flow are invariant under the renormalization, the bulk counterterm does not appear. The extra wave function renormalization does not exist for interacting Wess-Zumino flow under appropriate initial condition.
The mechanism of giving this UV property is quite different from the Yang-Mills flow in which the gauge symmetry plays a crucial role.
The existence of the non-renormalization theorem is significant and allows us the couplings and the initial condition that are invariant under renormalization to gradient flow equation.
Gradient flows have been successfully applied to various research such as non-perturbative renormalization group, holographic descriptions of field theory
and lattice simulations. In addition, supersymmetry has been actively studied in particle physics in a variety of ways.
Therefore, supersymmetric gradient flows can be expected to have various applications.
Some of the techniques developed in this article will be very useful for subsequent studies using supersymmetric gradient flows.
\section*{Acknowledgement}
This work was supported by JSPS KAKENHI Grant Number
18K13546, 19K03853, 20K03924, 21K03537, 22H01222.
|
1,116,691,499,887 | arxiv | \section{Introduction}\label{sec:intro}
Let $S$ be a connected surface, perhaps with boundary, and either compact, or with a finite number of points removed from the interior of the surface.
The \emph{$n\up{th}$ configuration space} of $S$ is defined by:
\begin{equation*}
F_n(S)=\setr{(x_{1},\ldots,x_{n})\in S^{n}}{\text{$x_{i}\neq x_{j}$ if $i\neq j$}}.
\end{equation*}
It is well known that $\pi_1(F_n(S))\cong P_n(S)$, the \emph{pure braid group} of $S$ on $n$ strings, and that $\pi_1(F_n(S)/\sn)\cong B_n(S)$, the \emph{braid group} of $S$ on $n$ strings, where $F_n(S)/\sn$ is the quotient space of $F_{n}(S)$ by the free action of the symmetric group $\sn$ given by permuting coordinates~\cite{FaN,FoN}. If $S$ is the $2$-disc $\ensuremath{\mathbb D}^{2}$ then $B_{n}(\ensuremath{\mathbb D}^{2})$ (resp.\ $P_{n}(\ensuremath{\mathbb D}^{2})$) is the Artin braid group $B_{n}$ (resp.\ the Artin pure braid group $P_{n}$). The canonical projection $F_n(S)\to F_n(S)/\sn$ is a regular $n!$-fold covering map, and thus gives rise to the following short exact sequence:
\begin{equation}\label{eq:sessym}
1\to P_n(S) \to B_n(S) \to \sn\to 1.
\end{equation}
If $\ensuremath{\mathbb D}^{2}$ is a topological disc lying in the interior of $S$ and that contains the basepoints of the braids then the inclusion $\map{j}{\ensuremath{\mathbb D}^{2}}[S]$ induces a group homomorphism $\map{j_{\#}}{B_{n}}[B_{n}(S)]$. This homomorphism is injective if $S$ is different from the $2$-sphere $\St$ and the real projective plane $\ensuremath{\mathbb{R}P^2}$~\cite{Bi,Gol}. Let $\map{j_{\#}\left\lvert_{P_{n}}\right.}{P_{n}}[P_{n}(S)]$ denote the restriction of $j_{\#}$ to the corresponding pure braid groups. If $\beta\in B_{n}$ then we shall denote its image $j_{\#}(\beta)$ in $B_{n}(S)$ simply by $\beta$. It is well known that the centre of $B_{n}$ and of $P_{n}$ is infinite cyclic, generated by the full twist braid that we denote by $\ft$, and that $\ft$, considered as an element of $B_{n}(\St)$ or of $B_{n}(\ensuremath{\mathbb{R}P^2})$, is of order $2$ and generates the centre. If $G$ is a group then we denote its commutator subgroup by $\Gamma_{2}(G)$, its Abelianisation by $G\up{Ab}$, and if $H$ is a subgroup of $G$ then we denote its normal closure in $G$ by $\ang{\!\ang{H}\!}_{G}$.
Let $\prod_1^n\, S=S\times \cdots \times S$ denote the $n$-fold Cartesian product of $S$ with itself, let $\map{\iota_{n}}{F_n(S)}[\prod_1^n\, S]$ be the inclusion map, and let $\map{\iota_{n\#}}{\pi_1(F_n(S))}[\pi_1\left( \prod_1^n\, S\right)]$ denote the induced homomorphism on the level of fundamental groups. To simplify the notation, we shall often just write $\iota$ and $\iota_{\#}$ if $n$ is given. The study of $\iota_{\#}$ was initiated by Birman in 1969~\cite{Bi}. She had conjectured that $\ang{\!\!\ang{\,\im{j_{\#}\left\lvert_{P_{n}}\right.\!}}\!\!}_{P_{n}(S)}= \ker{\iota_{\#}}$ if $S$ is a compact orientable surface, but states without proof that her conjecture is false if $S$ is of genus greater than or equal to $1$~\cite[page~45]{Bi}. However, Goldberg proved the conjecture several years later in both the orientable and non-orientable cases for compact surfaces without boundary different from $\St$ and $\ensuremath{\mathbb{R}P^2}$~\cite[Theorem~1]{Gol}. In connection with the study of Vassiliev invariants of surface braid groups, González-Meneses and Paris showed that $\ker{\iota_{\#}}$ is also normal in $B_n(S)$, and that the resulting quotient is isomorphic to the semi-direct product $\pi_1\left(\prod_1^n\, S\right)\rtimes \sn$, where the action is given by permuting coordinates (their work was within the framework of compact, orientable surfaces without boundary, but their construction is valid for any surface $S$)~\cite{GMP}. In the case of $\ensuremath{\mathbb{R}P^2}$, this result was reproved using geometric methods~\cite{T}.
If $S=\St$, $\ker{\iota_{\#}}$ is clearly equal to $P_n(\St)$, and so by~\cite[Theorem~4]{GG2}, it may be decomposed as:
\begin{equation}\label{eq:pns2sum}
\ker{\iota_{\#}}=P_n(\St)\cong P_{n-3}(\St\setminus \brak{x_1,x_2,x_3})\times \ensuremath{\mathbb Z}_2,
\end{equation}
where the first factor of the direct product is torsion free, and the $\ensuremath{\mathbb Z}_{2}$-factor is generated by $\ft$.
The aim of this paper is to resolve Birman's conjecture for surfaces without boundary in the remaining cases, namely $S=\St$ or $\ensuremath{\mathbb{R}P^2}$, to determine the cohomological dimension of $B_{n}(S)$ and $P_{n}(S)$, where $S$ is one of these two surfaces, and to elucidate the structure of $\ker{\iota_{\#}}$ in the case of $\ensuremath{\mathbb{R}P^2}$.
In \resec{kerrp2}, we start by considering the case $S=\ensuremath{\mathbb{R}P^2}$, we study $\ker{\iota_{\#}}$, which we denote by $K_{n}$, and we show that it admits a decomposition similar to that of \req{pns2sum}.
\begin{prop}\label{prop:prop3}
Let $n\in \ensuremath{\mathbb N}$.
\begin{enumerate}[(a)]
\item\label{it:prop3a}
\begin{enumerate}[(i)]
\item\label{it:prop3ai} Up to isomorphism, the homomorphism $\map{\iota_{\#}}{\pi_1( F_n(\ensuremath{\mathbb{R}P^2}))} [\pi_1(\Pi_{1}^{n}( \ensuremath{\mathbb{R}P^2}))]$ coincides with Abelianisation. In particular, $K_{n}=\Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$.
\item\label{it:prop3aii} If $n\geq 2$ then there exists a torsion-free subgroup $L_{n}$ of $K_{n}$ such that $K_{n}$ is isomorphic to the direct sum of $L_{n}$ and the subgroup $\ang{\ft}$ generated by the full twist that is isomorphic to $\ensuremath{\mathbb Z}_{2}$.
\end{enumerate}
\item\label{it:prop3b} If $n\geq 2$ then any subgroup of $P_n(\ensuremath{\mathbb{R}P^2})$ that is normal in $B_n(\ensuremath{\mathbb{R}P^2})$ and that properly contains $K_{n}$ possesses an element of order $4$.
\end{enumerate}
\end{prop}
Note that if $n=1$ then $B_{1}(\ensuremath{\mathbb{R}P^2})=P_{1}(\ensuremath{\mathbb{R}P^2})\cong \ensuremath{\mathbb Z}_{2}$ and $\ft[1]$ is the trivial element, so parts~(\ref{it:prop3a})(\ref{it:prop3aii}) and~(\ref{it:prop3b}) do not hold. Part~(\ref{it:prop3a})(\ref{it:prop3ai}) will be proved in \repr{abeliota}. We shall see later on in \rerem{Lnuniqueness} that there are precisely $2^{n(n-2)}$ subgroups that satisfy the conclusions of part~(\ref{it:prop3a})(\ref{it:prop3aii}), and to prove the statement, we shall exhibit an explicit torsion-free subgroup $L_{n}$. We then prove Birman's conjecture for $\St$ and $\ensuremath{\mathbb{R}P^2}$, using \repr{prop3}(\ref{it:prop3a})(\ref{it:prop3ai}) in the case of $\ensuremath{\mathbb{R}P^2}$.
\begin{thm}\label{th:birman}
Let $S$ be one of $\St$ or $\ensuremath{\mathbb{R}P^2}$, and let $n\geq 1$. Then $\ang{\!\ang{\,\im{j_{\#}\left\lvert_{P_{n}}\right.\!}}\!}_{P_{n}(S)}= \ker{\iota_{\#}}$.
\end{thm}
In \resec{propsLn}, we analyse $L_{n}$ in more detail, and we show that it may be decomposed as an iterated semi-direct product of free groups.
\begin{thm}\label{th:th4}
Let $n\geq 3$. Consider the Fadell-Neuwirth short exact sequence:
\begin{equation}\label{eq:fnpnp2}
1\to \rpminus \to P_{n}(\ensuremath{\mathbb{R}P^2}) \stackrel{q_{2\#}}{\to} P_{2}(\ensuremath{\mathbb{R}P^2}) \to 1,
\end{equation}
where $q_{2\#}$ is given geometrically by forgetting the last $n-2$ strings. Then $L_{n}$ may be identified with the kernel of the composition
\begin{equation*}
\rpminus \to P_n(\ensuremath{\mathbb{R}P^2}) \stackrel{\iota_{\#}}{\to} \underbrace{\ensuremath{\mathbb Z}_2 \times\cdots\times \ensuremath{\mathbb Z}_2}_{\text{$n$ copies}},
\end{equation*}
where the first homomorphism is that appearing in \req{fnpnp2}. The image of this composition is the product of the last $n-2$ copies of $\ensuremath{\mathbb Z}_2$. In particular, $L_{n}$ is of index $2^{n-2}$ in $\rpminus$. Further, $L_{n}$ is isomorphic to an iterated semi-direct product of free groups of the form $\F[2n-3]\rtimes (\F[2n-5]\rtimes (\cdots \rtimes(\F[5]\rtimes \F[3])\cdots))$, where for all $m\in \ensuremath{\mathbb N}$, $\F[m]$ denotes the free group of rank $m$.
\end{thm}
In the semi-direct product decomposition of $L_{n}$, note that every factor acts on each of the preceding factors. This is also the case for $\rpminus$ (see \req{semifn}), and as we shall see in \rerems{artin}(\ref{it:artin1}), this implies an Artin combing-type result for this group. Analysing these semi-direct products in more detail, we obtain the following results.
\begin{prop}\label{prop:abelianL}
If $n\geq 3$ then:
\begin{enumerate}[(a)]
\item\label{it:abparta} $\bigl(\rpminus\bigr)\up{Ab}\cong \ensuremath{\mathbb Z}^{2(n-2)}$.
\item\label{it:abpartb} $(L_{n})\up{Ab}\cong \ensuremath{\mathbb Z}^{n(n-2)}$.
\end{enumerate}
\end{prop}
In two papers in preparation, we shall analyse the homotopy fibre of $\iota$, as well as the induced homomorphism $\iota_{\#}$ when $S=\St$ or $\ensuremath{\mathbb{R}P^2}$~\cite{GG10}, and when $S$ is a space form manifold of dimension different from two~\cite{GGG}. In the first of these papers, we shall also see that $L_{n}$ is closely related to the fundamental group of an orbit configuration space of the open cylinder.
In \resec{vcd}, we study the virtual cohomological dimension of the braid groups of $\St$ and $\ensuremath{\mathbb{R}P^2}$. Recall from~\cite[page~226]{Br} that if a group $\Gamma$ is virtually torsion-free then all finite index torsion-free subgroups of $\Gamma$ have the same cohomological dimension by Serre's theorem, and this dimension is defined to be the \emph{virtual cohomological dimension} of $\Gamma$. Using equations~\reqref{pns2sum} and~\reqref{fnpnp2}, we prove the following result, namely that if $S=\St$ or $\ensuremath{\mathbb{R}P^2}$, the groups $B_n(S)$ and $P_n(S)$ have finite virtual cohomological dimension, and we compute these dimensions.
\begin{thm}\label{th:prop12}\mbox{}
\begin{enumerate}[(a)]
\item\label{it:harer1} Let $n\geq 4$. Then the virtual cohomological dimension of both $B_n(\St)$ and $P_n(\St)$ is equal to the cohomological dimension of the group $P_{n-3}(\St\setminus \brak{x_1,x_2,x_3})$. Furthermore, for all $m\geq 1$, the cohomological dimension of the group $P_{m}(\St\setminus\brak{x_1,x_2,x_3})$ is equal to $m$.
\item Let $n\geq 3$. Then the virtual cohomological dimension of both $B_n(\ensuremath{\mathbb{R}P^2})$ and $P_n(\ensuremath{\mathbb{R}P^2})$ is equal to the cohomological dimension of the group $P_{n-2}(\ensuremath{\mathbb{R}P^2}\setminus\brak{x_1, x_2})$. Furthermore, for all $m\geq 1$, the cohomological dimension of the group $P_{m}(\ensuremath{\mathbb{R}P^2}\setminus \brak{x_1, x_2})$ is equal to $m$.
\end{enumerate}
\end{thm}
The methods of the proof of \reth{prop12} have recently been applied to compute the cohomological dimension of the braid groups of all other compact surfaces (orientable and non orientable) without boundary~\cite{GGM}. \reth{prop12} also allows us to deduce the virtual cohomological dimension of the punctured mapping class groups of $\St$ and $\ensuremath{\mathbb{R}P^2}$. If $n\geq 0$, let $\mathcal{MCG}(S,n)$ denote the mapping class group of a connected, compact surface $S$ relative to an $n$-point set. If $S$ is orientable then Harer determined the virtual cohomological dimension of $\mathcal{MCG}(S,n)$~\cite[Theorem 4.1]{H}. In the case of $\St$ and $\ensuremath{\mathbb D}^{2}$, he obtained the following results:
\begin{enumerate}[(a)]
\item if $n\geq 3$, the virtual cohomological dimension of $\mathcal{MCG}(\St,n)$ is equal to $n-3$.
\item if $n\geq 2$, the cohomological dimension of $\mathcal{MCG}(\ensuremath{\mathbb D}^{2},n)$ is equal to $n-1$ (recall that $\mathcal{MCG}(\ensuremath{\mathbb D}^{2},n)$ is isomorphic to $B_{n}$~\cite{Bi2}).
\end{enumerate}
As a consequence of \reth{prop12}, we are able to compute the virtual cohomological dimension of $\mathcal{MCG}(S,n)$ for $S=\St$ and $\ensuremath{\mathbb{R}P^2}$.
\begin{cor}\label{cor:vcdmcg}
Let $n\geq 4$ (resp.\ $n\geq 3$). Then the virtual cohomological dimension of $\mathcal{MCG}(\St,n)$ (resp.\ $\mathcal{MCG}(\ensuremath{\mathbb{R}P^2},n)$) is finite, and is equal to $n-3$ (resp.\ $n-2$).
\end{cor}
If $S=\St$ or $\ensuremath{\mathbb{R}P^2}$ then for the values of $n$ given by \reth{prop12} and \reco{vcdmcg}, the virtual cohomological dimension of $\mathcal{MCG}(S,n)$ is equal to that of $B_{n}(S)$. If $S=\St$, we thus recover the corresponding result of Harer.
\subsection*{Acknowledgements}
This work took place during the visits of the first author to the Laboratoire de Math\'e\-matiques Nicolas Oresme during the periods 2\up{nd}--23\up{rd}~December 2012, 29\up{th}~November--22\up{nd}~December 2013 and 4\up{th}~October--1\up{st}~November~2014, and of the visits of the second author to the Departamento de Matem\'atica do IME~--~Universidade de S\~ao Paulo during the periods 10\up{th}~November--1\up{st}~December 2012,~1\up{st}--21\up{st} July 2013 and 10\up{th}~July--2\up{nd}~August 2014, and was supported by the international Cooperation Capes-Cofecub project n\up{o}~Ma~733-12 (France) and n\up{o}~1716/2012 (Brazil), and the CNRS/Fapesp programme n\up{o}~226555 (France) and n\up{o}~2014/50131-7 (Brazil).
\section{The structure of $K_{n}$, and Birman's conjecture for $\St$ and $\ensuremath{\mathbb{R}P^2}$}\label{sec:kerrp2}
Let $n\in \ensuremath{\mathbb N}$. As we mentioned in the introduction, if $S$ is a surface different from $\St$ and $\ensuremath{\mathbb{R}P^2}$, the kernel of the homomorphism $\map{\iota_{\#}}{P_n(S)} [{\pi_1\left(\prod_1^n\, S\right)}]$ was studied in~\cite{Bi,Gol}, and that if $S=\St$ then $\ker{\iota_{\#}}=P_n(\St)$. In the first part of this section, we recall a presentation of $P_{n}(\ensuremath{\mathbb{R}P^2})$, and we prove \repr{prop3}(\ref{it:prop3a})(\ref{it:prop3ai}). The second part of this section is devoted to proving the rest of \repr{prop3} and \reth{birman}, the latter being Birman's conjecture for $\St$ and $\ensuremath{\mathbb{R}P^2}$.
Consider the model of $\ensuremath{\mathbb{R}P^2}$ given by identifying antipodal boundary points of $\ensuremath{\mathbb D}^{2}$. We equip $F_{n}(\ensuremath{\mathbb{R}P^2})$ with a basepoint $(x_{1},\ldots,x_{n})$. For $1\leq i<j\leq n$ (resp.\ $1\leq k\leq n$), we define the element $A_{i,j}$ (resp.\ $\tau_{k}$, $\rho_{k}$) of $P_{n}(\ensuremath{\mathbb{R}P^2})$ by the geometric braids depicted in Figure~\ref{fig:gens}.
\begin{figure}[h
\hfill
\begin{tikzpicture}[scale=0.75]
\path[draw][thick] (0,0) circle(4.5);
\foreach \k in {-3.5,-2.5,...,3.5}
{\path[draw][fill] (\k,0) circle [radius=0.08];};
\tikz@path@overlay{node} at (2.9, -0.35) {$x_{k}$};
\tikz@path@overlay{node} at (-0.4, -0.5) {$x_{j}$};
\tikz@path@overlay{node} at (-2.45, 0.35) {$x_{i}$};
\tikz@path@overlay{node} at (-2, 1.6) {$A_{i,j}$};
\tikz@path@overlay{node} at (-1.5, 3.15) {$\tau_{k}$};
\tikz@path@overlay{node} at (0.9, 3.4) {$\rho_{k}$};
\path[draw][very thick, decoration={markings,mark=at position 0.5 with {\arrow{stealth'}}},postaction={decorate}] (2.5,-0.2) .. controls (2.5,-3) .. (0,-4.5);
\path[draw][very thick, decoration={markings,mark=at position 0.5 with {\arrow{stealth'}}},postaction={decorate}] (0,4.5) .. controls (2.5,3) .. (2.5,0.2);
\path[draw][very thick, color=gray!50,decoration={markings,mark=at position 0.5 with {\arrow{stealth'}}},postaction={decorate}] (2.35,0.1) .. controls (-1,2.8) .. (-3.21,3.15);
\path[draw][very thick, color=gray!50,decoration={markings,mark=at position 0.5 with {\arrow{stealth'}}},postaction={decorate}] (3.21,-3.15) .. controls (5,1) and (3.5,1) .. (2.65,0.1);
\path[draw][very thick, color=gray, decoration={markings,mark=at position 0.9 with {\arrow{stealth'}}},postaction={decorate}] (-0.5,0.15) .. controls (-0.6,1.5) and (-3,1.5) .. (-3,0);
\path[draw][very thick, color=gray] (-3,0) .. controls (-3,-0.8) and (-2,-0.8) .. (-2,0);
\path[draw][very thick, color=gray] (-2,0) .. controls (-2,1) and (-0.6,1) .. (-0.6,0.12);
\end{tikzpicture}
\hspace*{\path[fill]}
\caption{The elements $A_{i,j}$, $\tau_{k}$ and $\rho_{k}$ of $P_{n}(\ensuremath{\mathbb{R}P^2})$.}
\label{fig:gens}
\end{figure}
Note that the arcs represent the projections of the strings onto $\ensuremath{\mathbb{R}P^2}$, so that all of the strings of the given braid are vertical, with the exception of the $j\up{th}$ (resp.\ $k\up{th}$) string that is based at the point $x_{j}$ (resp.\ $x_{k}$).
\begin{thm}[{\cite[Theorem~4]{GG4}}]\label{th:basicpres}
Let $n\in\ensuremath{\mathbb N}$. The following constitutes a presentation of the pure braid group $P_n(\ensuremath{\mathbb{R}P^2})$:
\begin{enumerate}
\item[\underline{\textbf{generators:}}] $A_{i,j}$, $1\leq i<j\leq n$, and $\tau_k$, $1\leq k\leq n$.
\item[\underline{\textbf{relations:}}]\mbox{}
\begin{enumerate}[(a)]
\item\label{it:rel1} the Artin relations between the $A_{i,j}$ emanating from those of $P_n$:
\begin{equation}\label{eq:artinaij}
A_{r,s}A_{i,j}A_{r,s}^{-1}\!=\!
\begin{cases}
\! A_{i,j} & \text{if $i<r<s<j$ or $r<s<i<j$}\\
\! A_{i,j}^{-1} A_{r,j}^{-1} A_{i,j} A_{r,j} A_{i,j} & \text{if $r<i=s<j$}\\
\! A_{s,j}^{-1} A_{i,j} A_{s,j} & \text{if $i=r<s<j$}\\
\! A_{s,j}^{-1}A_{r,j}^{-1} A_{s,j} A_{r,j} A_{i,j} A_{r,j}^{-1} A_{s,j}^{-1} A_{r,j} A_{s,j} & \text{if $r<i<s<j$.}
\end{cases}
\end{equation}
\item\label{it:rel2} for all $1\leq i<j\leq n$, $\tau_i\tau_j\tau_i^{-1} = \tau_j^{-1} A_{i,j}^{-1} \tau_j^2$.
\item\label{it:rel3} for all $1\leq i\leq n$, $\tau_i^2= A_{1,i}\cdots A_{i-1,i} A_{i,i+1} \cdots A_{i,n}$.
\item\label{it:rel4} for all $1\leq i<j\leq n$ and $1\leq k\leq n$, $k\neq j$,
\begin{equation*}\label{eq:relspn}
\tau_k A_{i,j}\tau_k^{-1}=
\begin{cases}
A_{i,j} & \text{if $j<k$ or $k<i$}\\
\tau_j^{-1} A_{i,j}^{-1} \tau_j & \text{if $k=i$}\\
\tau_j^{-1} A_{k,j}^{-1} \tau_j A_{k,j}^{-1} A_{i,j} A_{k,j} \tau_j^{-1} A_{k,j} \tau_j & \text{if $i<k<j$}.
\end{cases}
\end{equation*}
\end{enumerate}
\end{enumerate}
\end{thm}
This enables us to prove that $\iota_{\#}$ is in fact Abelianisation, which is part~(\ref{it:prop3a})(\ref{it:prop3ai}) of \repr{prop3}.
\begin{prop}\label{prop:abeliota}
Let $n\in \ensuremath{\mathbb N}$. The homomorphism $\map{\iota_{\#}}{P_n(\ensuremath{\mathbb{R}P^2})} [{\pi_1(\prod_1^n\, \ensuremath{\mathbb{R}P^2})}]$ is defined on the generators of \reth{basicpres} by $\iota_{\#}(A_{i,j})=(\overline{0},\ldots,\overline{0})$ for all $1\leq i<j\leq n$, and $\iota_{\#}(\tau_k)=(\overline{0},\ldots, \overline{0}, \underbrace{\overline{1}}_{\mathclap{\text{$k\up{th}$ position}}}, \overline{0}, \ldots,\overline{0})$ for all $1\leq k\leq n$. Further, $\iota_{\#}$ is Abelianisation, and $\ker{\iota_{\#}}=K_{n}= \Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$.
\end{prop}
\begin{proof}
For $1\leq k\leq n$, let $\map{p_{k}}{F_{n}(\ensuremath{\mathbb{R}P^2})}[\ensuremath{\mathbb{R}P^2}]$ denote projection onto the $k\up{th}$ coordinate. Observe that $\iota_{\#}=p_{1\#}\times \cdots \times p_{n\#}$, where $\map{p_{k\#}}{P_{n}(\ensuremath{\mathbb{R}P^2})}[\pi_{1}(\ensuremath{\mathbb{R}P^2})]$ is the induced homomorphism on the level of fundamental groups. Identifying $\pi_{1}(\ensuremath{\mathbb{R}P^2})$ with $\ensuremath{\mathbb Z}_{2}$ and using the geometric realisation of Figure~\ref{fig:gens} of the generators of the presentation of $P_{n}(\ensuremath{\mathbb{R}P^2})$ given by \reth{basicpres}, it is straightforward to check that for all $1\leq k,l\leq n$ and $1\leq i<j\leq n$, $p_{k\#}(A_{i,j})=\overline{0}$, $p_{k\#}(\tau_{l})=\overline{0}$ if $l\neq k$ and $p_{k\#}(\tau_{k})=\overline{1}$, and this yields the first part of the proposition. The second part follows easily from the presentation of the Abelianisation $(P_{n}(\ensuremath{\mathbb{R}P^2}))\up{Ab}$ of $P_{n}(\ensuremath{\mathbb{R}P^2})$ obtained from \reth{basicpres}. More precisely, if we denote the Abelianisation of an element $x\in P_{n}(\ensuremath{\mathbb{R}P^2})$ by $\overline{x}$, relations~(\ref{it:rel2}) and~(\ref{it:rel3}) imply respectively that for all $1\leq i<j\leq n$ and $1\leq k\leq n$, $\overline{A_{i,j}}$ and $\overline{\tau_{k}}^{2}$ represent the trivial element of $(P_{n}(\ensuremath{\mathbb{R}P^2}))\up{Ab}$. Since the remaining relations give no other information under Abelianisation, it follows that $(P_{n}(\ensuremath{\mathbb{R}P^2}))\up{Ab}\cong \ensuremath{\mathbb Z}_{2}\oplus \cdots \oplus \ensuremath{\mathbb Z}_{2}$, where $\overline{\tau_{k}}= (\overline{0},\ldots, \overline{0}, \underbrace{\overline{1}}_{\mathclap{\text{$k\up{th}$ position}}}, \overline{0}, \ldots,\overline{0})$ and $\overline{A_{i,j}}=(\overline{0},\ldots,\overline{0})$ via this isomorphism, and the Abelianisation homomorphism indeed coincides with $\iota_{\#}$ on $P_{n}(\ensuremath{\mathbb{R}P^2})$.
\end{proof}
\begin{rems}\mbox{}\label{rem:propsK}
\begin{enumerate}[(a)]
\item\label{it:propsKa} Since $K_{n}=\Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$, it follows immediately that $K_{n}$ is normal in $B_n(\ensuremath{\mathbb{R}P^2})$, since $\Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$ is characteristic in $P_n(\ensuremath{\mathbb{R}P^2})$, and $P_n(\ensuremath{\mathbb{R}P^2})$ is normal in $B_n(\ensuremath{\mathbb{R}P^2})$.
\item A presentation of $K_{n}$ may be obtained by a long but routine computation using the Reidemeister-Schreier method, although it is not clear how to simplify the presentation. In \reth{th4}, we will provide an alternative description of $K_{n}$ using algebraic methods.
\item\label{it:propsKc} In what follows, we shall use Van Buskirk's presentation of $B_{n}(\ensuremath{\mathbb{R}P^2})$~\cite[page~83]{vB} whose generating set consists of the standard braid generators $\sigma_{1},\ldots,\sigma_{n-1}$ emanating from the $2$-disc, as well as the surface generators $\rho_{1},\ldots,\rho_{n}$ depicted in Figure~\ref{fig:gens}. We have the following relation between the elements $\tau_{k}$ and $\rho_{k}$:
\begin{equation*}
\tau_{k}=\rho_{k}^{-1} A_{k,k+1}\cdots A_{k,n} \quad \text{for all $1\leq k\leq n$,}
\end{equation*}
where for $1\leq i<j\leq n$, $A_{i,j}=\sigma_{j-1}\cdots \sigma_{i+1}\sigma_{i}^2\sigma_{i+1}^{-1}\cdots \sigma_{j-1}^{-1}$.
In particular, it follows from \repr{abeliota} that:
\begin{equation}\label{eq:rhotauk}
\text{$\iota_{\#}(\rho_{k})= \iota_{\#}(\tau_{k})=(\overline{0},\ldots, \overline{0}, \underbrace{\overline{1}}_{\mathclap{\text{$k\up{th}$ position}}}, \overline{0}, \ldots,\overline{0})$ for all $1\leq k\leq n$.}
\end{equation}
\end{enumerate}
\end{rems}
If $n\geq 2$, the full twist braid $\ft$, which may be defined by $\ft=(\sigma_{1}\cdots \sigma_{n-1})^{n}$, is of order~$2$~\cite[page~95]{vB}, it generates the centre of $B_{n}(\ensuremath{\mathbb{R}P^2})$~\cite[Proposition~6.1]{Mu}, and is the unique element of $B_{n}(\ensuremath{\mathbb{R}P^2})$ of order~$2$~\cite[Proposition~23]{GG3}. Since $\ft\in P_{n}(\ensuremath{\mathbb{R}P^2})$, it thus belongs to the centre of $P_{n}(\ensuremath{\mathbb{R}P^2})$, and just as for the Artin braid groups and the braid groups of $\St$, it generates the centre of $P_{n}(\ensuremath{\mathbb{R}P^2})$:
\begin{prop}\label{prop:centrerp2}
Let $n\geq 2$. Then the centre $Z(P_{n}(\ensuremath{\mathbb{R}P^2}))$ of $P_{n}(\ensuremath{\mathbb{R}P^2})$ is generated by $\ft$.
\end{prop}
\begin{proof}
We prove the result by induction on $n$. If $n=2$ then $P_2(\ensuremath{\mathbb{R}P^2})\cong \quat$~\cite[page~87]{vB}, the quaternion group of order $8$, and the result follows since $\ft[2]$ is the element of $P_2(\ensuremath{\mathbb{R}P^2})$ of order $2$. So suppose that $n\geq 3$. From the preceding remarks, $\ang{\ft}\subset Z(P_{n}(\ensuremath{\mathbb{R}P^2}))$. Conversely, let $x\in Z(P_{n}(\ensuremath{\mathbb{R}P^2}))$, and consider the following Fadell-Neuwirth short exact sequence:
\begin{equation*
1\to \pi_{1}(\ensuremath{\mathbb{R}P^2}\setminus \brak{x_{1},\ldots,x_{n-1}})\to P_{n}(\ensuremath{\mathbb{R}P^2})\xrightarrow{q_{(n-1)\#}} P_{n-1}(\ensuremath{\mathbb{R}P^2})\to 1,
\end{equation*}
where $q_{(n-1)\#}$ is the surjective homomorphism induced on the level of fundamental groups by the projection $\map{q_{n-1}}{F_{n}(\ensuremath{\mathbb{R}P^2})}[F_{n-1}(\ensuremath{\mathbb{R}P^2})]$ onto the first $n-1$ coordinates. Now $q_{(n-1)\#}(x)\in Z(P_{n-1}(\ensuremath{\mathbb{R}P^2}))$ by surjectivity, and thus $q_{(n-1)\#}(x)=\Delta_{n-1}^{2\epsilon}$ for some $\epsilon\in \brak{0,1}$ by the induction hypothesis. Further, $q_{(n-1)\#}(\ft)=\ft[n-1]$, hence
\begin{equation*}
\Delta_{n}^{-2\epsilon} x\in \operatorname{Ker}(q_{(n-1)\#}) \cap Z(P_{n}(\ensuremath{\mathbb{R}P^2})),
\end{equation*}
and thus $\Delta_{n}^{-2\epsilon} x\in Z(\operatorname{Ker}(q_{(n-1)\#}))$. But $Z(\operatorname{Ker}(q_{(n-1)\#}))$ is trivial because $\operatorname{Ker}(q_{(n-1)\#})$ is a free group of rank $n-1$. This implies that $x\in \ang{\ft}$ as required.
\end{proof}
We now prove \repr{prop3}.
\pagebreak
\begin{proof}[Proof of \repr{prop3}]
Let $n\geq 3$.
\begin{enumerate}[(a)]
\item Recall that part~(\ref{it:prop3a})(\ref{it:prop3ai}) of \repr{prop3} was proved in \repr{abeliota}, so let us prove part~(\ref{it:prop3aii}). The projection $\map{q_2}{F_n(\ensuremath{\mathbb{R}P^2})}[F_{2}(\ensuremath{\mathbb{R}P^2})]$ onto the first two coordinates gives rise to the Fadell-Neuwirth short exact sequence~\reqref{fnpnp2}. Since $K_{n}=\Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$ by \repr{abeliota}, the image of the restriction $q_{2\#}\!\left\vert_{K_{n}}\right.$ of $q_{2\#}$ to $K_{n}$ is the subgroup $\Gamma_{2}(P_{2}(\ensuremath{\mathbb{R}P^2}))=\ang{\ft[2]}$, and so we obtain the following commutative diagram:
\begin{equation}\label{eq:fnpnp2ext}
\begin{tikzcd}
1\arrow{r} & K_{n}\cap \rpminus \arrow{r}\arrow{d} & K_{n} \arrow{r}{q_{2\#}\left\vert_{K_{n}}\right.}\arrow{d} & \ang{\ft[2]} \arrow{r}\arrow{d} & 1\\
1\arrow{r} & \rpminus \arrow{r} & P_{n}(\ensuremath{\mathbb{R}P^2}) \arrow{r}{q_{2\#}} & P_{2}(\ensuremath{\mathbb{R}P^2}) \arrow{r} & 1,
\end{tikzcd}
\end{equation}
where the vertical arrows are inclusions. Now $\ang{\ft[2]}\cong \ensuremath{\mathbb Z}_{2}$, so $K_{n}$ is an extension of the group $\ker{q_{2\#}\!\left\vert_{K_{n}}\right.\!}=K_{n}\cap \rpminus$ by $\ensuremath{\mathbb Z}_2$. The fact that $q_{2\#}(\ft)=\ft[2]$ implies that the upper short exact sequence splits, a section being defined by the correspondence $\ft[2] \mapsto \ft$, and since $\ft\in Z(P_n(\ensuremath{\mathbb{R}P^2}))$, the action by conjugation on $\ker{q_{2\#}\!\left\vert_{K_{n}}\right.\!}$ is trivial. Part~(\ref{it:prop3a}) of the proposition follows by taking $L_{n}=\ker{q_{2\#}\!\left\vert_{K_{n}}\right.\!}$ and by noting that $\rpminus$ is torsion free.
\item Recall first that any torsion element in $P_n(\ensuremath{\mathbb{R}P^2})\setminus \ang{\ft}$ is of order $4$~\cite[Corollary~19 and Proposition~23]{GG3}, and is conjugate in $B_n(\ensuremath{\mathbb{R}P^2})$ to one of $a^n$ or $b^{n-1}$, where $a=\rho_{n}\sigma_{n-1}\cdots \sigma_{1}$ and $b= \rho_{n-1}\sigma_{n-2}\cdots \sigma_{1}$ satisfy:
\begin{equation}\label{eq:anbn}
\text{$a^{n}=\rho_{n}\cdots \rho_{1}$ and $b^{n-1}=\rho_{n-1}\cdots \rho_{1}$}
\end{equation}
by~\cite[Proposition~10]{GG9}. Let $N$ be a normal subgroup of $B_{n}(\ensuremath{\mathbb{R}P^2})$ that satisfies $K_{n} \subsetneqq N \subset P_{n}(\ensuremath{\mathbb{R}P^2})$. We claim that for all $u\in {\pi_1(\prod_1^n\, \ensuremath{\mathbb{R}P^2})}$ (which we identify henceforth with $\ensuremath{\mathbb Z}_{2}\oplus \cdots \oplus \ensuremath{\mathbb Z}_{2}$), exactly one of the following two conditions holds:
\begin{enumerate}[(i)]
\item $N \cap \iota_{\#}^{-1}(\brak{u})$ is empty.
\item\label{it:nint} $\iota_{\#}^{-1}(\brak{u})$ is contained in $N$.
\end{enumerate}
To prove the claim, suppose that $x\in N \cap \iota_{\#}^{-1}(\brak{u})\neq \ensuremath{\varnothing}$, and let $y\in \iota_{\#}^{-1}(\brak{u})$. Now $\iota_{\#}(x)=\iota_{\#}(y)=u$, so there exists $k\in K_{n}$ such that $x^{-1}y=k$. Since $K_{n}\subset N$, it follows that $y=xk\in N$, which proves the claim. Further, $\iota_{\#}(a^{n})=(\overline{1},\ldots,\overline{1})$ and $\iota_{\#}(b^{n-1})=(\overline{1},\ldots,\overline{1},\overline{0})$ by \repr{abeliota} and equations~\reqref{rhotauk} and \reqref{anbn}, so by the claim it suffices to prove that there exists $z\in N$ such that $\iota_{\#}(z)\in \brak{(\overline{1},\ldots,\overline{1}), (\overline{1},\ldots,\overline{1}, \overline{0})}$, for then we are in case~(\ref{it:nint}) above, and it follows that one of $a^{n}$ and $b^{n-1}$ belongs to $N$.
It thus remains to prove the existence of such a $z$. Let $x\in N\setminus K_{n}$. Then $\iota_{\#}(x)$ contains an entry equal to $\overline{1}$ because $K_{n}=\ker{\iota_{\#}}$. If $\iota_{\#}(x)=(\overline{1},\ldots,\overline{1})$ then we are done. So assume that $\iota_{\#}(x)$ also contains an entry that is equal to $\overline{0}$.
By \req{rhotauk}, there exist $1\leq r<n$ and $1\leq i_{1}<\cdots < i_{r}\leq n$ such that $\iota_{\#}(\rho_{i_{1}}\cdots \rho_{i_{r}})=\iota_{\#}(x)$. It follows from the claim and the fact that $x\in N$ that $\rho_{i_{1}}\cdots \rho_{i_{r}}\in N$ also, and so without loss of generality, we may suppose that $x=\rho_{i_{1}}\cdots \rho_{i_{r}}$. Further, since $\iota_{\#}(x)$ contains both a $\overline{0}$ and a $\overline{1}$, there exists $1\leq j\leq r$ such that $p_{i_{j}\#}(x)=\overline{1}$ and $p_{(i_{j}+1)\#}(x)=\overline{0}$, the homomorphisms $p_{k\#}$ being those defined in the proof of \repr{abeliota}. Note that we consider the indices modulo~$n$, so if $i_{j}=n$ (so $j=r$) then we set $i_{j}+1=1$. By~\cite[page 777]{GG3}, conjugation by $a^{-1}$ permutes cyclically the elements $\rho_{1},\ldots, \rho_{n}, \rho_{1}^{-1},\ldots, \rho_{n}^{-1}$ of $P_{n}(\ensuremath{\mathbb{R}P^2})$, so the $(n-1)\up{th}$ (resp.\ $n\up{th}$) entry of $x'=a^{-(n-1-i_{j})} x a^{(n-1-i_{j})}$ is equal to $\overline{1}$ (resp.\ $\overline{0}$), and $x'\in N$ because $N$ is normal in $B_{n}(\ensuremath{\mathbb{R}P^2})$. Using the relation $b=\sigma_{n-1}a$, we determine the conjugates of the $\rho_{i}$ by $b^{-1}$:
\begin{align*}
b^{-1}\rho_i b &=a^{-1}\sigma_{n-1}^{-1}\rho_{i}\sigma_{n-1}a=a^{-1}\rho_{i}a= \rho_{i+1} \quad\text{for all $1\leq i\leq n-2$}\\
b^{-1}\rho_{n-1}b &=a^{-1}\sigma_{n-1}^{-1}\rho_{n-1}\sigma_{n-1}a=
a^{-1}\sigma_{n-1}^{-1}\rho_{n-1}\sigma_{n-1}^{-1}\ldotp \sigma_{n-1}^{2}a\\
&= a^{-1}\rho_{n} a\ldotp a^{-1}\sigma_{n-1}^{2}a=\rho_{1}^{-1} \ldotp a^{-1} \sigma_{n-1}^{2}a,
\end{align*}
where we have used the relations $\rho_{i}\sigma_{n-1}=\sigma_{n-1}\rho_{i}$ if $1\leq i\leq n-2$ and $\sigma_{n-1}^{-1}\rho_{n-1}\sigma_{n-1}^{-1}=\rho_{n}$ of Van Buskirk's presentation of $B_{n}(\ensuremath{\mathbb{R}P^2})$, as well as the effect of conjugation by $a^{-1}$ on the $\rho_{j}$. Now $\sigma_{n-1}^{2}=A_{n-1,n}\in K_{n}$ by \repr{abeliota}, so $a^{-1} \sigma_{n-1}^{2}a \in K_{n}$ by \rerems{propsK}(\ref{it:propsKa}), and hence $\iota_{\#}(b^{-1}\rho_{n-1}b)=(\overline{1}, \overline{0},\ldots,\overline{0})$. It then follows that $\iota_{\#}(a^{-1}x'a)$ and $\iota_{\#}(b^{-1}x'b)$ have the same entries except in the first and last positions, so if $x''=a^{-1}x'a\ldotp b^{-1}x'b$, we have $\iota_{\#}(x'')= (\overline{1}, \overline{0},\ldots,\overline{0},\overline{1})$. Further, $x''\in N$ since $N$ is normal in $B_{n}(\ensuremath{\mathbb{R}P^2})$. Let $n=2m+\epsilon$, where $m\in \ensuremath{\mathbb N}$ and $\epsilon\in \brak{0,1}$. Then setting
\begin{equation*}
z= a^{-\epsilon}x''a^{\epsilon} \cdotp a^{-(2+\epsilon)}x'' a^{2+\epsilon}\cdots a^{-(2(m-1)+\epsilon)} x'' a^{2(m-1)+\epsilon},
\end{equation*}
we see once more that $z\in N$, and $\iota_{\#}(z)= (\overline{1},\ldots,\overline{1})$ if $n$ is even and $\iota_{\#}(z)=(\overline{1},\ldots,\overline{1}, \overline{0})$ if $n$ is odd, which completes the proof of the existence of $z$, and thus that of \repr{prop3}(\ref{it:prop3b}).\qedhere
\end{enumerate}
\end{proof}
We end this section by proving \reth{birman}.
\begin{proof}[Proof of \reth{birman}]
Let $S=\St$ or $\ensuremath{\mathbb{R}P^2}$. If $n=1$ then $\iota_{\#}$ is an isomorphism and $\im{j_{\#}\left\lvert_{P_{n}}\right.\!}$ is trivial so the result holds. If $n=2$ and $S=\St$ then $P_{n}(\St)$ is trivial, and there is nothing to prove. Now suppose that $S=\St$ and $n\geq 3$. As we mentioned in the introduction, $\ker{\iota_{\#}}=P_{n}(\St)$. Let $(A_{i,j})_{1\leq i<j\leq n}$ be the generating set of $P_{n}$, where $A_{i,j}$ has a geometric representative similar to that given in Figure~\ref{fig:gens}. It is well known that the image of this set by $j_{\#}$ yields a generating set for $P_{n}(\St)$~(\emph{cf.}~\cite[page~616]{Sc}), so $j_{\#}\left\lvert_{P_{n}}\right.$ is surjective, and the statement of the theorem follows. Finally, assume that $S=\ensuremath{\mathbb{R}P^2}$ and $n\geq 2$. Once more, $\im{j_{\#}\left\lvert_{P_{n}}\right.}=\setangl{A_{i,j}}{1\leq i<j \leq n}$, and since $A_{i,j}\in \ker{\iota_{\#}}$ by \repr{abeliota}, we conclude that $\ang{\!\!\ang{\,\im{j_{\#}\left\lvert_{P_{n}}\right.\!}}\!\!}_{P_{n}(S)} \subset \ker{\iota_{\#}}$. To prove the converse, first recall from \repr{abeliota} that $\ker{\iota_{\#}}=\Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$. Using the standard commutator identities $[x,yz]=[x,y][y,[x,z]] [x,z]$ and $[xy,z]=[x,[y,z]][y,z][x,z]$, $\Gamma_{2}(P_{n}(\ensuremath{\mathbb{R}P^2}))$ is equal to the normal closure in $P_{n}(\ensuremath{\mathbb{R}P^2})$ of $\setr{[x,y]}{x,y\in \setl{A_{i,j},\, \rho_{k}}{\text{$1\leq i<j\leq n$ and $1\leq k\leq n$}}}$. It then follows using the relations of \reth{basicpres} that the commutators $[x,y]$ belonging to this set also belong to $\ang{\!\setangl{A_{i,j}}{1\leq i<j \leq n}\!}_{P_{n}(\ensuremath{\mathbb{R}P^2})}$, which is nothing other than $\ang{\!\ang{\,\im{j_{\#}\left\lvert_{P_{n}}\right.\!}}\!}_{P_{n}(S)}$. We conclude by normality that $\ker{\iota_{\#}}\subset \ang{\!\!\ang{\,\im{j_{\#}\left\lvert_{P_{n}}\right.\!}}\!\!}_{P_{n}(S)}$, and this completes the proof of the theorem.
\end{proof}
\section{Some properties of the subgroup $L_{n}$}\label{sec:propsLn}
Let $S=\St$ or $S=\ensuremath{\mathbb{R}P^2}$, and for all $m,n\geq 1$, let $\Gamma_{m,n}(S)=P_{m}(S\setminus \brak{x_{1},\ldots,x_{n}})$ denote the $m$-string pure braid group of $S$ with $n$ points removed. In this section, we study $\rpminus$, which is $\Gamma_{n-2,2}(\ensuremath{\mathbb{R}P^2})$, in more detail, and we prove \reth{th4} and \repr{abelianL} that enable us to understand better the structure of the subgroup $L_{n}$ defined in the proof of \repr{prop3}(\ref{it:prop3a})(\ref{it:prop3aii}).
We start by exhibiting a presentation of the group $\Gamma_{m,n}(\ensuremath{\mathbb{R}P^2})$ in terms of the generators of $P_{m+n}(\ensuremath{\mathbb{R}P^2})$ given by \reth{basicpres}. A presentation for $\Gamma_{m,n}(\St)$ is given in~\cite[Proposition~7]{GG11} and will be recalled later in \repr{prespinm}, when we come to proving \reth{prop12}. For $1\leq i<j\leq m+n$, let
\begin{equation}\label{eq:cln}
C_{i,j}= A_{j-1,j}^{-1}\cdots A_{i+1,j}^{-1} A_{i,j} A_{i+1,j} \cdots A_{j-1,j}.
\end{equation}
Geometrically, in terms of Figure~\ref{fig:gens}, $C_{i,j}$ is the image of $A_{i,j}^{-1}$ under the reflection about the straight line segment that passes through the points $x_{1},\ldots, x_{m+n}$. The proof of the following proposition, which we leave to the reader, is similar in nature to that for $\St$, but is a little more involved due to the presence of extra generators that emanate from the fundamental group of $\ensuremath{\mathbb{R}P^2}$.
\begin{prop}\label{prop:prej}
Let $n,m\geq 1$. The following constitutes a presentation of the group $\Gamma_{m,n}(\ensuremath{\mathbb{R}P^2})$:\vspace*{-5mm}
\begin{enumerate}
\item[\underline{\textbf{generators:}}] $A_{i,j}$, $\rho_{j}$, where $1\leq i<j$ and $n+1\leq j \leq m+n$.
\item[\underline{\textbf{relations:}}]
\mbox{}
\begin{enumerate}[(I)]
\item\label{it:prej1} the Artin relations described by \req{artinaij} among the generators $A_{i,j}$ of $\Gamma_{m,n}(\ensuremath{\mathbb{R}P^2})$.
\item\label{it:prej2} for all $1\leq i<j$ and $n+1\leq j <k\leq m+n$, $A_{i,j}\rho_{k}A_{i,j}^{-1}=\rho_{k}$.
\item\label{it:prej3} for all $1\leq i<j$ and $n+1\leq k<j\leq m+n$,
\begin{equation*
\rho_{k}A_{i,j}\rho_k^{-1}=
\begin{cases}
A_{i,j}&\text{if $k<i$}\\
\rho_{j}^{-1} C_{i,j}^{-1} \rho_{j} &\text{if $k=i$}\\
\rho_{j}^{-1}C_{k,j}^{-1} \rho_{j}A_{i,j} \rho_{j}^{-1}C_{k,j}\rho_{j} &\text{if $k>i$.}
\end{cases}
\end{equation*}
\item\label{it:prej4} for all $n+1 \leq k<j\leq m+n$, $\rho_{k}\rho_{j}\rho_{k}^{-1}=C_{k,j}\rho_{j}$.
\item\label{it:prej5} for all $n+1\leq j\leq m+n$,
\begin{equation*}
\text{$\rho_{j}\left(\prod_{i=1}^{j-1}\; A_{i,j}\right) \rho_{j}=\left( \prod_{l=j+1}^{m+n}\; A_{j,l}\right)$.}
\end{equation*}
\end{enumerate}
\end{enumerate}
The elements $C_{i,j}$ and $C_{k,j}$ appearing in relations~(\ref{it:prej3}) and~(\ref{it:prej4}) should be rewritten using \req{cln}.
\end{prop}
In the rest of this section, we shall assume that $n=2$, and we shall focus our attention on the groups $\Gamma_{m,2}(\ensuremath{\mathbb{R}P^2})$, where $m\geq 1$, that we interpret as subgroups of $P_{m+2}(\ensuremath{\mathbb{R}P^2})$ via the short exact sequence~\reqref{fnpnp2}. Before proving \reth{th4} and \repr{abelianL}, we introduce some notation that will be used to study the subgroups $K_{n}$ and $L_{n}$. Let $m\geq 2$, and consider the following Fadell-Neuwirth short exact sequence:
\begin{equation}\label{eq:fns}
1\!\to \Omega_{m+1}\to \rpminus[m] \stackrel{r_{m+1}}{\to} \rpminus[m-1]\to\! 1,
\end{equation}
where $r_{m+1}$ is given geometrically by forgetting the last string, and where $\Omega_{m+1}=\pi_{1}(\ensuremath{\mathbb{R}P^2}\setminus \brak{x_{1},\ldots,x_{m+1}}, x_{m+2})$. From the Fadell-Neuwirth short exact sequences of the form of~\req{fnpnp2}, $r_{m+1}$ is the restriction of $\map{q_{(m+1)\#}}{P_{m+2}(\ensuremath{\mathbb{R}P^2})}[P_{m+1}(\ensuremath{\mathbb{R}P^2})]$ to $\ker{q_{2\#}}$. The kernel $\Omega_{m+1}$ of $r_{m+1}$ is a free group of rank $m+1$ with a basis $\mathcal{B}_{m+1}$ being given by:
\begin{equation}\label{eq:gensomega}
\mathcal{B}_{m+1}=\setl{A_{k,m+2}, \rho_{m+2}}{1\leq k\leq m}.
\end{equation}
The group $\Omega_{m+1}$ may also be described as the subgroup of $\rpminus[m]$ generated by $\brak{A_{1,m+2},\ldots,A_{m+1,m+2}, \rho_{m+2}}$ subject to the relation:
\begin{equation}\label{eq:surfaceaij}
A_{m+1,m+2}=A_{m,m+2}^{-1}\cdots A_{1,m+2}^{-1}\rho_{m+2}^{-2},
\end{equation}
obtained from relation~(\ref{it:prej5}) of \repr{prej}. Equations~\reqref{cln} and~\reqref{surfaceaij} imply notably that $A_{l,m+2}$ and $C_{l,m+2}$ belong to $\Omega_{m+1}$ for all $1\leq l\leq m+1$.
Using geometric methods, for $m\geq 2$, we proved the existence of a section
\begin{equation*}
\map{s_{m+1}}{\rpminus[m-1]}[{\rpminus[m]}]
\end{equation*}
for $r_{m+1}$ in~\cite[Theorem~2(a)]{GG7}. Applying induction to \req{fns}, it follows that for all $m\geq 1$:
\begin{equation}\label{eq:semifn}
\rpminus[m]\cong \Omega_{m+1}\rtimes (\Omega_{m}\rtimes (\cdots \rtimes(\Omega_{3}\rtimes \Omega_{2})\cdots)).
\end{equation}
So $\rpminus[m]\cong \F[m+1]\rtimes (\F[m]\rtimes (\cdots \rtimes(\F[3]\rtimes \F[2])\cdots))$, which may be interpreted as the Artin combing operation for $\rpminus[m]$. It follows from this and \req{gensomega} that $\rpminus[m]$ admits $\mathcal{X}_{m+2}$ as a generating set, where:
\begin{equation}\label{eq:genspnminus2}
\mathcal{X}_{m+2}=\setl{A_{i,j},\, \rho_{j}}{3\leq j\leq m+2,\, 1\leq i\leq j-2}.
\end{equation}
\begin{rem}
For what follows, we will need to know an explicit section $s_{m+1}$ for $r_{m+1}$. Such a section may be obtained as follows: for $m\geq 2$, consider the homomorphism $\rpminus[m]\to \rpminus[m-1]$ given by forgetting the string based at $x_{3}$. By~\cite[Theorem~2(a)]{GG7}), a geometric section is obtained by doubling the second (vertical) string, so that there is a new third string, and renumbering the following strings, which gives rise to an algebraic section for the given homomorphism of the form:
\begin{align*}
A_{i,j} & \mapsto \begin{cases}
A_{1,j+1} & \text{if $i=1$}\\
A_{2,j+1} A_{3,j+1} & \text{if $i=2$}\\
A_{i+1,j+1} & \text{if $3\leq i<j$}
\end{cases}\\
\rho_{j} &\mapsto \rho_{j+1},
\end{align*}
for all $3\leq j\leq m+1$. However, in view of the nature of $r_{m+1}$, we would like this new string to be in the $(m+2)\up{th}$ position. We achieve this by composing the above algebraic section with conjugation by $\sigma_{m+1}\cdots \sigma_{3}$, which gives rise to a section
\begin{equation*}
\map{s_{m+1}}{\rpminus[m-1]}[{\rpminus[m]}]
\end{equation*}
for $r_{m+1}$ that is defined by:
\begin{equation}\label{eq:secexplicit}
\left\{\begin{aligned}
s_{m+1}(A_{i,j}) &= \begin{cases}
A_{j,m+2} A_{1,j} A_{j,m+2}^{-1} & \text{if $i=1$}\\
A_{j,m+2} A_{2,j} & \text{if $i=2$}\\
A_{i,j} & \text{if $3\leq i <j$}
\end{cases}\\
s_{m+1}(\rho_{j}) &= \rho_{j} A_{j,m+2}^{-1}.
\end{aligned}\right.
\end{equation}
for all $1\leq i< j$ and $3\leq j\leq m+1$. A long but straightforward calculation using the presentation of $\rpminus[m]$ given by \repr{prej} shows that $s_{m+1}$ does indeed define a section for $r_{m+1}$.
\end{rem}
We now prove \reth{th4}, which allows us to give a more explicit description of $L_{n}$.
\begin{proof}[Proof of \reth{th4}]
Let $n\geq 3$. By the commutative diagram~\reqref{fnpnp2ext} of short exact sequences, the restriction of the homomorphism $\map{q_{2\#}}{P_{n}(\ensuremath{\mathbb{R}P^2})}[P_{2}(\ensuremath{\mathbb{R}P^2})]$ to $K_{n}$ factors through the inclusion $\ang{\ft[2]}\to P_{2}(\ensuremath{\mathbb{R}P^2})$, and the kernel $L_{n}$ of $q_{2\#}\!\left\vert_{K_{n}}\right.$ is contained in the group $\rpminus$. We may then add a third row to this diagram:
\begin{equation}\label{eq:fnpnp3}
\begin{tikzcd}[ampersand replacement=\&]
\& 1 \arrow{d} \& 1\arrow{d} \& 1\arrow{d} \& \\
1\arrow{r} \& L_{n} \arrow{r}\arrow{d} \& K_{n} \arrow{r}{q_{2\#}\left\vert_{K_{n}}\right.} \arrow{d} \& \ang{\ft[2]} \arrow{r}\arrow{d} \& 1\\
1\arrow{r} \& \rpminus \arrow{r} \ar[dashed]{d}{\widehat{\iota}_{n-2}} \& P_{n}(\ensuremath{\mathbb{R}P^2}) \arrow{r}{q_{2\#}} \arrow{d}{\iota_{n\#}} \& P_{2}(\ensuremath{\mathbb{R}P^2}) \arrow{r} \arrow{d}{\iota_{2\#}} \& 1\\
1 \arrow{r} \&\ensuremath{\mathbb Z}_{2}^{n-2} \arrow{r}{j} \ar[dashed]{d} \&\ensuremath{\mathbb Z}_{2}^{n}\arrow{r}{\widehat{q}_{2}}\arrow{d} \&\ensuremath{\mathbb Z}_{2}^{2} \arrow{r} \arrow{d} \& 1,\\
\& 1 \& 1 \& 1 \&
\end{tikzcd}
\end{equation}
where $\map{\widehat{q}_{2}}{\ensuremath{\mathbb Z}_{2}^{n}}[\ensuremath{\mathbb Z}_{2}^{2}]$ is projection onto the first two factors, and $\map{j}{\ensuremath{\mathbb Z}_{2}^{n-2}}[\ensuremath{\mathbb Z}_{2}^{n}]$ is the monomorphism defined by $j(\overline{\epsilon_{1}}, \ldots, \overline{\epsilon_{n-2}})= (\overline{0}, \overline{0},\overline{\epsilon_{1}}, \ldots, \overline{\epsilon_{n-2}})$. The commutativity of diagram~\reqref{fnpnp3} thus induces a homomorphism $\map{\widehat{\iota}_{n-2}}{\rpminus}[\ensuremath{\mathbb Z}_{2}^{n-2}]$ that is the restriction of $\iota_{n\#}$ to $\rpminus$ that makes the bottom left-hand square commute. To see that $\widehat{\iota}_{n-2}$ is surjective, notice that if $x\in \ensuremath{\mathbb Z}_{2}^{n-2}$ then the first two entries of $j(x)$ are equal to $\overline{0}$, and using \req{rhotauk}, it follows that there exist $3\leq i_{1}<\cdots < i_{r}\leq n$ such that $\iota_{n\#}(\rho_{i_{1}}\cdots \rho_{i_{r}})=j(x)$. Furthermore, $\rho_{i_{1}}\cdots \rho_{i_{r}}\in \ker{q_{2\#}}$, and by commutativity of the diagram, we also have $\iota_{n\#}(\rho_{i_{1}}\cdots \rho_{i_{r}})= j\circ \widehat{\iota}_{n-2}\,(\rho_{i_{1}}\cdots \rho_{i_{r}})$, whence $x=\widehat{\iota}_{n-2}\,(\rho_{i_{1}}\cdots \rho_{i_{r}})$ by injectivity of $j$. It remains to prove exactness of the first column. The fact that $L_{n}\subset \ker{\widehat{\iota}_{n-2}}$ follows easily. Conversely, if $x\in \ker{\widehat{\iota}_{n-2}}$ then $x\in \rpminus$, and $x\in K_{n}$ by commutativity of the diagram, so $x\in L_{n}$. This proves the first two assertions of the theorem.
To prove the last part of the statement of the theorem, let $m\geq 1$, and consider \req{fns}.
Since $\widehat{\iota}_{m}$ is the restriction of $\iota_{(m+2)\#}$ to $\rpminus[m]$, we have $\widehat{\iota}_{m}\,(\rho_{j})=(\overline{0},\ldots, \overline{0}, \underbrace{\overline{1}}_{\mathclap{\text{$(j-2)\up{nd}$ position}}}, \overline{0}, \ldots,\overline{0})$ and $\widehat{\iota}_{m}\,(A_{i,j})=(\overline{0}, \ldots,\overline{0})$ for all $1\leq i<j$ and $3\leq j\leq m+2$. So for each $2\leq l\leq m+1$, $\widehat{\iota}_{m}$ restricts to a surjective homomorphism $\map{\widehat{\iota}_{m}\!\left\lvert_{\Omega_{l}}\right.}{\Omega_{l}}[\ensuremath{\mathbb Z}_{2}]$ of each of the factors of \req{semifn}, $\ensuremath{\mathbb Z}_{2}$ being the $(l-1)\up{st}$ factor of $\ensuremath{\mathbb Z}_{2}^{m}$, and using \req{gensomega}, we see that $\keromega[l]{m}$ is a free group of rank $2l-1$ with basis $\widehat{\mathcal{B}}_{l}$ given by:
\begin{equation}\label{eq:basiskeromega}
\widehat{\mathcal{B}}_{l}=\setl{A_{k,l+1}, \rho_{l+1} A_{k,l+1}\rho_{l+1}^{-1}, \rho_{l+1}^{2}}{1\leq k\leq l-1}.
\end{equation}
As we shall now explain, for all $m\geq 2$, the short exact sequence~\reqref{fns} may be extended to a commutative diagram of short exact sequences as follows:
\begin{equation}\label{eq:fnpnp2exta}
\begin{tikzcd}[cramped, sep=scriptsize]
{}& 1\arrow{d} & 1 \arrow{d} & 1 \arrow{d} &\\
1 \arrow{r} & \keromega[m+1]{m} \arrow{d} \arrow{r} & L_{m+2} \arrow{d} \arrow{r}{r_{m+1}\left\lvert_{L_{m+2}}\right.} & L_{m+1} \arrow{d} \arrow{r} & 1\\
1\arrow{r}& \Omega_{m+1}\arrow{r} \arrow{d}{\widehat{\iota}_{m}\left\lvert_{\Omega_{m+1}}\right.} &\rpminus[m] \arrow[yshift=0.5ex]{r}{r_{m+1}} \arrow{d}{\widehat{\iota}_{m}} &\rpminus[m-1] \arrow{r} \arrow{d}{\widehat{\iota}_{m-1}}\arrow[yshift=-0.5ex,dashrightarrow]{l}{s_{m+1}} & 1\\
1 \arrow{r} & \ensuremath{\mathbb Z}_{2} \arrow{d} \arrow{r} & \ensuremath{\mathbb Z}_{2}^{m} \arrow{d} \arrow{r} & \ensuremath{\mathbb Z}_{2}^{m-1} \arrow{d} \arrow{r} & 1.\\
& 1 & 1 & 1 &
\end{tikzcd}
\end{equation}
To obtain this diagram, we start with the commutative diagram that consists of the second and third rows and the three columns (so \emph{a priori}, the arrows of the first row are missing). The commutativity implies that $r_{m+1}$ restricts to the homomorphism $\map{r_{m+1}\left\lvert_{L_{m+2}}\right.}{L_{m+2}}[L_{m+1}]$, which is surjective, since if $w\in L_{m+1}$ is written in terms of the elements of $\mathcal{X}_{m+1}$ then the same word $w$, considered as an element of the group $\rpminus[m]$, belongs to $L_{m+2}$, and satisfies $r_{m+1}(w)=w$. Then the kernel of $r_{m+1}\left\lvert_{L_{m+2}}\right.$, which is also the kernel of $\widehat{\iota}_{m}\left\lvert_{\Omega_{m+1}}\right.$, is equal to $L_{m+2} \cap \Omega_{m+1}$. This establishes the existence of the complete commutative diagram~\reqref{fnpnp2exta} of short exact sequences. By induction, it follows from~\reqref{basiskeromega} and~\reqref{fnpnp2exta} that for all $m\geq 1$, $L_{m+2}$ is generated by
\begin{equation}\label{eq:genskeromega}
\widehat{\mathcal{X}}_{m+2}=\bigcup_{j=3}^{m+2} \widehat{\mathcal{B}}_{j-1} =\setl{A_{i,j},\, \rho_{j}A_{i,j}\rho_{j}^{-1},\, \rho_{j}^{2}}{3\leq j\leq m+2,\, 1\leq i\leq j-2}.
\end{equation}
Using the section $s_{m+1}$ defined by \req{secexplicit}, we see that $s_{m+1}(x)\in L_{m+2}$ for all $x\in \widehat{\mathcal{X}}_{m+1}$, and thus $s_{m+1}$ restricts to a section $\map{s_{m+1}\left\lvert_{L_{m+1}}\right.}{L_{m+1}}[L_{m+2}]$ for $r_{m+1}\left\lvert_{L_{m+2}}\right.$. We conclude by induction on the first row of~\reqref{fnpnp2exta} that:
\begin{align}
L_{m+2}&\cong \keromega[m+1]{m} \rtimes L_{m+1}\label{eq:semiL}\\
&\cong \keromega[m+1]{m} \rtimes \bigl(\keromega[m]{m} \rtimes \bigl(\cdots \rtimes\bigl(\keromega[3]{m} \rtimes \keromega[2]{m}\bigr)\cdots\bigr)\bigr),\label{eq:semiLlong}
\end{align}
the actions being induced by those of \req{semifn}, so by \req{basiskeromega}, $L_{m+2}$ is isomorphic to a repeated semi-direct product of the form $\F[2m+1]\rtimes (\F[2m-1]\rtimes (\cdots \rtimes(\F[5]\rtimes \F[3])\cdots))$. The last part of the statement of \reth{th4} follows by taking $m=n-2$.
\end{proof}
A finer analysis of the actions that appear in equations~\reqref{semifn} and~\reqref{semiLlong} now allows us to determine the Abelianisations of $\rpminus$ and $L_{n}$.
\begin{proof}[Proof of \repr{abelianL}]
If $n=3$ then the two assertions are clear. So assume by induction that they hold for some $n\geq 3$. From the split short exact sequence~\reqref{fns} and \req{semiL} with $m=n-1$, we have:
\begin{equation}\label{eq:semiprod}
\left\{
\begin{aligned}
\rpminus[n-1] &\cong\Omega_{n} \rtimes_{\psi} \rpminus \quad\text{and}\\
L_{n+1} &\cong \keromega{n-1} \rtimes_{\psi} L_{n},
\end{aligned}\right.
\end{equation}
where $\psi$ denotes the action given by the section $s_{n}$, and the action induced by the restriction of the section $s_{n}$ to $L_{n}$ respectively.
Before going any further, we recall some general considerations\label{genconsid} from the paper~\cite[pages~3387--88]{GG6} concerning the Abelianisation of semi-direct products. If $H$ and $K$ are groups, and if $\map{\phi}{H}[\aut{K}]$ is an action of $H$ on $K$ then one may deduce easily from~\cite[Proposition~3.3]{GG6} that:
\begin{equation}\label{eq:absemi}
(K\rtimes_{\phi} H)\up{Ab}\cong \Delta(K)\oplus H\up{Ab},
\end{equation}
where:
\begin{equation*}
\Delta(K) =K\bigl/\bigl\langle\Gamma_{2}(K)\cup \widehat{K}\bigr\rangle\bigr. \quad\text{and}\quad
\widehat{K}=\setangl{\phi(h)(k) \cdot k^{-1}}{\text{$h\in H$ and $k\in K$}}.
\end{equation*}
Recall that $\widehat{K}$ is normal in $K$ (\emph{cf.}~\cite[lines~1--4, page~3388]{GG6}), so $\langle\Gamma_{2}(K)\cup \widehat{K}\bigr\rangle $ is normal in $ K$. If $k\in K$, let $\overline{k}$ denote its image under the canonical projection $K\to \Delta(K)$. For all $k,k'\in K$ and $h,h'\in H$, we have:
\begin{align}
\phi(hh')(k)\cdot k^{-1}&= \phi(h)(\phi(h')(k)) \cdot \phi(h')(k^{-1}) \cdot \phi(h')(k)\cdot k^{-1}\notag\\
&=\phi(h)(k'') \cdot k''^{-1} \cdot \phi(h')(k)\cdot k^{-1}\label{eq:hhprime}\\
\phi(h)(kk')\cdot (kk')^{-1}&= \bigl(\phi(h)(k)\cdot k^{-1}\bigr)\cdot k\bigl(\phi(h)(k')\cdot k'^{-1}\bigr)k^{-1}.\label{eq:kkprime}
\end{align}
where $k''=\phi(h')(k)$ belongs to $K$. Let $\mathcal{H}$ and $\mathcal{K}$ be generating sets for $H$ and $K$ respectively. By induction on word length relative to the elements of $\mathcal{H}$, \req{hhprime} implies that $\widehat{K}$ is generated by elements of the form $\phi(h)(k)\cdot k^{-1}$, where $h\in \mathcal{H}$ and $k\in K$. A second induction on word length relative to the elements of $\mathcal{K}$ and \req{kkprime} implies that $\widehat{K}$ is normally generated by the elements of the form $\phi(h)(k)\cdot k^{-1}$, where $h\in \mathcal{H}$ and $k\in \mathcal{K}$. By standard arguments involving group presentations, since $\Gamma_{2}(K) \subset \bigl\langle\Gamma_{2}(K)\cup \widehat{K}\bigr\rangle\bigr.$, $\Delta(K)$ is Abelian, and a presentation of $\Delta(K)$ may be obtained by Abelianising a given presentation of $K$, and by adjoining the relators of the form $\overline{\phi(h)(k) \cdot k^{-1}}$,\label{genconsid2} where $h\in \mathcal{H}$ and $k\in \mathcal{K}$.
We now take $K=\Omega_{n}$ (resp.\ $K=\keromega{n-1}$), $H=\rpminus$ (resp.\ $H= L_{n}$) and $\phi=\psi$. Applying the induction hypothesis and \req{absemi} to \req{semiprod}, to prove parts~(\ref{it:abparta}) and~(\ref{it:abpartb}), it thus suffices to show that:
\begin{align}
\Delta(\Omega_{n}) &\cong \ensuremath{\mathbb Z}^{2}, \quad\text{and that}\label{eq:isof1}\\
\Delta\left(\keromega{n-1}\right) & \cong \ensuremath{\mathbb Z}^{2n-1} \label{eq:isof2}
\end{align}
respectively. We first establish the isomorphism~\reqref{isof1}. As we saw previously, $\Delta(\Omega_{n})$ is Abelian, and to obtain a presentation of $\Delta(\Omega_{n})$, we add the relators of the form $\overline{\psi(\tau)(\omega)\cdot \omega^{-1}}$ to a presentation of $(\Omega_{n})\up{Ab}$, where $\tau\in \mathcal{X}_{n}$ and $\omega\in \mathcal{B}_{n}$. In $\Delta(\Omega_{n})$, such relators may be written as:
\begin{equation}\label{eq:newrels}
\overline{s_{n}(\tau) \omega (s_{n}(\tau))^{-1} \omega^{-1}}= \overline{s_{n}(\tau) \omega (s_{n}(\tau))^{-1}} \;\overline{\omega^{-1}}.
\end{equation}
We claim that it is not necessary to know explicitly the section $s_{n}$ in order to determine these relators. Indeed, for all $\tau\in \mathcal{X}_{n}$, we have $p_{n+1}(\tau)=\tau$; note that we abuse notation here by letting $\tau$ also denote the corresponding element of $\mathcal{X}_{n+1}$ in $\rpminus[n-1]$. Thus $s_{n}(\tau) \tau^{-1}\in \ker{p_{n+1}}$, and hence there exists $\omega_{\tau}\in \Omega_{n}$ such that $s_{n}(\tau)=\omega_{\tau} \tau$. Since $\Delta(\Omega_{n})$ is Abelian, it follows that:
\begin{equation*}
\overline{s_{n}(\tau) \omega (s_{n}(\tau))^{-1}}= \overline{\omega_{\tau} \tau \omega \tau^{-1}\omega_{\tau}^{-1}}=\overline{\vphantom{\omega_{\tau}^{-1}}\omega_{\tau}}\; \overline{\vphantom{\omega_{\tau}^{-1}}\tau \omega \tau^{-1}} \; \overline{\omega_{\tau}^{-1}}= \overline{\tau \omega \tau^{-1}},
\end{equation*}
and thus the relators of \req{newrels} become:
\begin{equation}\label{eq:relatortau}
\overline{s_{n}(\tau) \omega (s_{n}(\tau))^{-1} \omega^{-1}}=\overline{\tau \omega \tau^{-1}} \; \overline{\omega^{-1}}.
\end{equation}
This proves the claim. In what follows, the relations~(\ref{it:prej1})--(\ref{it:prej5}) refer to those of the presentation of $\rpminus[n-1]$ given by \repr{prej}. Using this presentation
and the fact that $\Delta(\Omega_{n})$ is Abelian, we see immediately that $\overline{\tau \omega \tau^{-1}}=\overline{\omega}$ for all $\tau\in \mathcal{X}_{n}$ and $\omega\in \mathcal{B}_{n}$, with the following exceptions:
\begin{enumerate}[(i)]
\item\label{it:relators1} $\tau=\rho_{j}$ and $\omega=A_{j,n+1}$ for all $3\leq j\leq n-1$. Then $\overline{\rho_{j}A_{j,n+1}\rho_{j}^{-1}}=\overline{C_{j,n+1}^{-1}}=\overline{A_{j,n+1}^{-1}}$, using relation~(\ref{it:prej3}) and \req{cln}, which yields the relator $\bigl(\,\overline{A_{j,n+1}}\,\bigr)^{2}$ in $\Delta(\Omega_{n})$.
\item\label{it:relators2} $\tau=\rho_{j}$ and $\omega=\rho_{n+1}$ for all $3\leq j\leq n$. Then $\overline{\rho_{j}\rho_{n+1}\rho_j^{-1}}=\overline{C_{j,n+1} \rho_{n+1}}=\overline{A_{j,n+1}}\;\overline{\vphantom{A_{j,n+1}}\rho_{n+1}}$ by relation~(\ref{it:prej4}) and \req{cln}, which yields the relator $\overline{A_{j,n+1}}$ in $\Delta(\Omega_{n})$.
\end{enumerate}
The relators of~(\ref{it:relators2}) above clearly give rise to those of~(\ref{it:relators1}). To obtain a presentation of $\Delta(\Omega_{n})$, which by \req{gensomega} is an Abelian group with generating set
\begin{equation*}
\setl{\overline{A_{l,n+1}},\overline{\rho_{n+1}}}{1\leq l\leq n-1},
\end{equation*}
we must add the relators $\overline{A_{j,n+1}}$ for all $3\leq j\leq n$. Thus for $j=3, \dots, n-1$, the elements $\overline{A_{j,n+1}}$ of this generating set are trivial. Further, $\overline{A_{n,n+1}}$ is also trivial, but by relation~\reqref{surfaceaij}, one of the remaining generators $\overline{A_{j,n+1}}$ may be deleted, $\overline{A_{2,n+1}}$ say, from which we see that $\Delta(\Omega_{n})$ is a free Abelian group of rank two with $\brak{\overline{A_{1,n+1}},\overline{\rho_{n+1}}}$ as a basis. This establishes the isomorphism~\reqref{isof1}, and so proves part~(\ref{it:abparta}).
We now prove part~(\ref{it:abpartb}). As we mentioned previously, it suffices to establish the isomorphism~\reqref{isof2}. Since $\keromega{n-1}$ is a free group of rank $2n-1$, we must thus show that $\Delta(\keromega{n-1})=(\keromega{n-1})\up{Ab}$. We take $K=\keromega{n-1}$ (resp.\ $H=L_{n-2}$) to be equipped with the basis $\widehat{\mathcal{B}}_{n}$ (resp.\ the generating set $\widehat{\mathcal{X}}_{n}$) of \req{basiskeromega} (resp.\ of \req{genskeromega}). The fact that $\keromega{n-1} $ is normal in $\Omega_{n}$ implies that $A_{l,n+1}$, $\rho_{n+1} A_{l,n+1}\rho_{n+1}^{-1}$, $C_{l,n+1}$ and $\rho_{n+1} C_{l,n+1}\rho_{n+1}^{-1}$ belong to $\keromega{n-1}$ for all $1\leq l\leq n$ by equations~\reqref{cln} and~\reqref{surfaceaij}. Repeating the argument given between equations~\reqref{newrels} and~\reqref{relatortau}, we see that \req{relatortau} holds for all $\tau\in \widehat{\mathcal{X}}_{n}$ and $\omega\in \widehat{\mathcal{B}}_{n}$, where $\overline{\omega}$ now denotes the image of $\omega$ under the canonical projection $\keromega{n-1}\to \Delta(\keromega{n-1})$. For $\alpha\in \rpminus$, let $c_{\alpha}$ denote conjugation in $\keromega{n-1}$ by $\alpha$ (which we consider to be an element of $\rpminus[n-1]$). The automorphism $c_{\alpha}$ is well defined because $\keromega{n-1}=\Omega_{n} \cap L_{n-1}$, so that $\keromega{n-1}$ is normal in $ \rpminus[n-1]$. We claim that $\bigl\langle\Gamma_{2}(K)\cup \widehat{K}\bigr\rangle\bigr.$ is invariant under $c_{\alpha}$. To see this, note first that $\Gamma_{2}(K)$ is clearly invariant since it is a characteristic subgroup of $K$. On the other hand, suppose that $\omega\in \keromega{n-1}$, $\tau\in L_{n-2}$ and $\alpha\in \rpminus$. Since $s_{n}(\tau)\in L_{n-1}$, $L_{n-1}$ is normal in $ \rpminus[n-1]$ and $L_{n-2} $ is normal in $ \rpminus$, we have $\alpha s_{n}(\tau) \alpha^{-1}\in L_{n-1}$, $\tau'=p_{n+1}(\alpha s_{n}(\tau) \alpha^{-1})=\alpha \tau \alpha^{-1}\in L_{n-2}$, and thus $s_{n}(\tau'^{-1})(\alpha s_{n}(\tau) \alpha^{-1})\in \keromega{n-1}$. Hence there exists $\omega_{\tau'}\in \keromega{n-1}$ such that $\alpha s_{n}(\tau) \alpha^{-1}=s_{n}(\tau') \omega_{\tau'}$. Now $\keromega{n-1}$ is normal in $ \rpminus[n-1]$, so $\omega'=\alpha\omega \alpha^{-1}\in \keromega{n-1}$, and therefore:
\begin{align*}
c_{\alpha}\bigl(s_{n}(\tau)\omega s_{n}(\tau^{-1})\omega^{-1}\bigr)&=\alpha\bigl(s_{n}(\tau)\omega s_{n}(\tau^{-1})\omega^{-1}\bigr)\alpha^{-1} = s_{n}(\tau') \omega_{\tau'} \omega' \omega_{\tau'}^{-1} s_{n}(\tau'^{-1}) \omega'^{-1}\\
&= s_{n}(\tau') (\omega_{\tau'} \omega' \omega_{\tau'}^{-1}) s_{n}(\tau'^{-1}) (\omega_{\tau'} \omega'^{-1} \omega_{\tau'}^{-1}) \cdot \omega_{\tau'} \omega' \omega_{\tau'}^{-1} \omega'^{-1},
\end{align*}
which belongs to $\bigl\langle\Gamma_{2}(K)\cup \widehat{K}\bigr\rangle\bigr.$ because $s_{n}(\tau') (\omega_{\tau'} \omega' \omega_{\tau'}^{-1}) s_{n}(\tau'^{-1}) (\omega_{\tau'} \omega'^{-1} \omega_{\tau'}^{-1})\in \widehat{K}$ and $\omega_{\tau'} \omega' \omega_{\tau'}^{-1} \omega'^{-1} \in \Gamma_{2}(K)$. This proves the claim, and implies that $c_{\alpha}$ induces an endomorphism $\widehat{c}_{\alpha}$ (an automorphism in fact, whose inverse is $\widehat{c}_{\alpha^{-1}}$) of $\Delta(K)$; in particular, if $\alpha,\alpha'\in \rpminus$ and $\omega\in \keromega{n-1}$ then $\overline{\alpha\alpha' \omega \alpha'^{-1} \alpha^{-1}}= \overline{c_{\alpha\alpha'}(\omega)}= \widehat{c}_{\alpha}(\widehat{c}_{\alpha'}(\overline{\omega}))$.
We next compute the elements $\overline{\tau \omega \tau^{-1}}$ of $\Delta(\keromega{n-1})$ in the case where $\tau=A_{i,j}$, $3\leq j\leq n$ and $1\leq i\leq j-2$, and $\omega\in \widehat{\mathcal{B}}_{n}$:
\begin{enumerate}[(i)]
\item\label{it:actionomega1} Let $\omega=A_{l,n+1}$, for $1\leq l\leq n-1$. Then
\begin{align*}
\tau \omega \tau^{-1}&=\begin{cases}
A_{l,n+1} & \text{if $j<l$ or if $l<i$}\\
A_{l,n+1}^{-1} A_{i,n+1}^{-1} A_{l,n+1}A_{i,n+1} A_{l,n+1}& \text{if $j=l$}\\
A_{j,n+1}^{-1}A_{l,n+1}A_{j,n+1}& \text{if $i=l$}\\
A_{j,n+1}^{-1} A_{i,n+1}^{-1} A_{j,n+1}A_{i,n+1} A_{l,n+1}A_{i,n+1}^{-1} A_{j,n+1}^{-1} A_{i,n+1}A_{j,n+1} & \text{if $i<l<j$}
\end{cases}
\end{align*}
by the Artin relations. We thus conclude that $\overline{\tau \omega \tau^{-1}}=\overline{\omega}$ in this case.
\item If $\omega=\rho_{n+1} A_{l,n+1}\rho_{n+1}^{-1}$, where $1\leq l\leq n-1$ then $\tau \omega \tau^{-1}= \rho_{n+1} (A_{i,j}A_{l,n+1}A_{i,j}^{-1})\rho_{n+1}^{-1}$, and from case~(\ref{it:actionomega1}), we deduce also that $\overline{\tau \omega \tau^{-1}}=\overline{\omega}$.
\item Let $\omega=\rho_{n+1}^{2}$. Then $\tau \omega \tau^{-1}=\omega$, hence $\overline{\tau \omega \tau^{-1}}=\overline{\omega}$.
\end{enumerate}
So if $\tau=A_{i,j}$ then the relators given by \req{relatortau} are trivial for all $\omega\in \widehat{\mathcal{B}}_{n}$, and $\widehat{c}_{A_{i,j}}=\ensuremath{\operatorname{\text{Id}}}_{\Delta\left(\keromega{n-1}\right)}$.
Now suppose that $\tau= \rho_{j}A_{i,j} \rho_{j}^{-1}$, where $3\leq j\leq n$ and $1\leq i\leq j-2$. Then for all $\omega\in \widehat{\mathcal{B}}_{n}$, we have:
\begin{equation*}
\overline{\tau \omega\tau^{-1}}=\overline{c_{\tau}(\omega)}= \widehat{c}_{\rho_{j}}\circ \widehat{c}_{A_{i,j}} \circ \widehat{c}_{\rho_{j}^{-1}}(\overline{\omega})=\overline{\omega},
\end{equation*}
since $\widehat{c}_{A_{i,j}}=\ensuremath{\operatorname{\text{Id}}}_{\Delta\left(\keromega{n-1}\right)}$, so $\widehat{c}_{\rho_{j}A_{i,j} \rho_{j}^{-1}}=\ensuremath{\operatorname{\text{Id}}}_{\Delta\left(\keromega{n-1}\right)}$.
By \req{genskeromega}, it remains to study the elements of the form $\overline{\tau \omega\tau^{-1}}$, where $\tau=\rho_{j}^2$, $3\leq j\leq n$, and $\omega\in \widehat{\mathcal{B}}_{n}$. Since $\overline{\rho_{j}^2 \omega\rho_{j}^{-2}}= \widehat{c}_{\rho_{j}}^{\mskip 4mu 2}(\overline{\omega})$, we first analyse $\widehat{c}_{\rho_{j}}$.
\begin{enumerate}
\item\label{it:actionker4} If $\omega=A_{l,n+1}$, for $1\leq l\leq n-1$ then by relation~(\ref{it:prej3}) and equations~\reqref{cln} and~\reqref{surfaceaij}, we have:
\begin{align*}
\widehat{c}_{\rho_{j}}(\overline{\omega})&= \widehat{c}_{\rho_{j}}(\overline{A_{l,n+1}})=\overline{\rho_{j} A_{l,n+1} \rho_{j}^{-1}}\\
&=\begin{cases}
\overline{A_{l,n+1}}&\text{if $j<l$}\\
\overline{\rho_{n+1}^{-2} \cdot \rho_{n+1} C_{l,n+1}^{-1}\rho_{n+1}^{-1} \cdot \rho_{n+1}^2}
&\text{if $j=l$}\\
\overline{\rho_{n+1}^{-2} \cdot \rho_{n+1} C_{j,n+1}^{-1} \rho_{n+1}^{-1} \cdot \rho_{n+1}^2 \cdot A_{l,n+1} \cdot \rho_{n+1}^{-2} \cdot \rho_{n+1} C_{j,n+1} \rho_{n+1}^{-1} \cdot \rho_{n+1}^2}
&\text{if $j>l$}
\end{cases}\\
&= \begin{cases}
\overline{A_{l,n+1}} & \text{if $j\neq l$}\\
\overline{\rho_{n+1} C_{j,n+1}^{-1}\rho_{n+1}^{-1}}=\left(\,\overline{\rho_{n+1} A_{j,n+1}\rho_{n+1}^{-1}}\,\right)^{-1} & \text{if $j=l$.}
\end{cases}
\end{align*}
\item\label{it:actionker5} Let $\omega=\rho_{n+1} A_{l,n+1}\rho_{n+1}^{-1}$, where $1\leq l\leq n-1$. Relation~(\ref{it:prej4}) implies that $\rho_{j} \rho_{n+1} \rho_{j}^{-1}=C_{j,n+1} \rho_{n+1}$, and so by case~(\ref{it:actionker4}) above, we have:
\begin{equation*}
\widehat{c}_{\rho_{j}}(\overline{\omega})= \widehat{c}_{\rho_{j}}\left(\,\overline{\rho_{n+1} A_{l,n+1}\rho_{n+1}^{-1}}\,\right)=\begin{cases}
\overline{\rho_{n+1} A_{l,n+1}\rho_{n+1}^{-1}} & \text{if $j\neq l$}\\[0.5ex]
\overline{C_{j,n+1}^{-1}}=\left(\,\overline{A_{j,n+1}}\,\right)^{-1} & \text{if $j=l$.}
\end{cases}
\end{equation*}
\item\label{it:actionker6} Let $\omega=\rho_{n+1}^{2}$. By relation~(\ref{it:prej4}) and equations~\reqref{cln} and~\reqref{surfaceaij}, we have:
\begin{align*}
\widehat{c}_{\rho_{j}}(\overline{\omega})&=\widehat{c}_{\rho_{j}}(\overline{\rho_{n+1}^{2}})=\overline{(\rho_{j} \rho_{n+1}\rho_{j}^{-1})^2}=\overline{\rho_{n+1} C_{j,n+1}\rho_{n+1}^{-1} \cdot \rho_{n+1}^{2} \cdot C_{j,n+1}}\\
&= \overline{\rho_{n+1} A_{j,n+1}\rho_{n+1}^{-1} \cdot \rho_{n+1}^{2} \cdot A_{j,n+1}}
\end{align*}
\end{enumerate}
from which we obtain:
\begin{align*}
\widehat{c}_{\rho_{j}^{2}}\Bigl(\,\overline{\rho_{n+1}^{2}}\,\Bigr) &= \widehat{c}_{\rho_{j}}\Bigl(\,\overline{\rho_{n+1} A_{j,n+1}\rho_{n+1}^{-1} \cdot \rho_{n+1}^{2} \cdot A_{j,n+1}}\,\Bigr)\\
&= \overline{A_{j,n+1}^{-1}\cdot\rho_{n+1} A_{j,n+1}\rho_{n+1}^{-1} \cdot \rho_{n+1}^{2} \cdot A_{j,n+1}\cdot \bigl(\rho_{n+1} A_{j,n+1}\rho_{n+1}^{-1}\bigr)^{-1}} = \overline{\rho_{n+1}^{2}}.
\end{align*}
So by \req{basiskeromega}, we also have $\widehat{c}_{\rho_{j}^{2}}=\ensuremath{\operatorname{\text{Id}}}_{\Delta\left(\keromega{n-1}\right)}$. Hence for all $\tau\in L_{n-2}$ and $\omega\in \keromega{n-1}$, it follows that $\widehat{c}_{\tau}(\overline{\omega})=\overline{\omega}$, and thus the relators $\overline{\psi(\tau)(\omega)\cdot \omega^{-1}}$ are all trivial. Since a presentation for $\Delta\left(\keromega{n-1}\right)$ is obtained by Abelianising a given presentation of $\keromega{n-1}$ and adjoining these relators, we conclude that $\Delta\left(\keromega{n-1}\right)=\left(\keromega{n-1}\right)\up{Ab}$. In particular, the fact that $\keromega{n-1}$ is a free group of rank $2n-1$ gives rise to the isomorphism~\reqref{isof2}. This completes the proof of the proposition.
\end{proof}
\begin{rems}\mbox{}\label{rem:artin}
\begin{enumerate}[(a)]
\item\label{it:artin1} An alternative description of $\rpminus$, similar to that of \req{semifn}, but with the parentheses in the opposite order, may be obtained as follows. Let $m\geq 2$ and $q\geq 1$, and consider the following Fadell-Neuwirth short exact sequence:
\begin{multline}\label{eq:fnsalt}
1\to P_{m-1}(\ensuremath{\mathbb{R}P^2}\setminus \brak{x_{1},\ldots, x_{q+1}})\to P_{m}(\ensuremath{\mathbb{R}P^2}\setminus\brak{x_{1},\ldots, x_{q}})\to\\
P_{1}(\ensuremath{\mathbb{R}P^2}\setminus\brak{x_{1},\ldots, x_{q}})\to 1,
\end{multline}
given geometrically by forgetting the last $m-1$ strings. Since the quotient is a free group $\F[q]$ of rank $q$, the above short exact sequence splits, and so
\begin{equation*}
P_{m}(\ensuremath{\mathbb{R}P^2}\setminus\brak{x_{1},\ldots, x_{q}}) \cong P_{m-1}(\ensuremath{\mathbb{R}P^2}\setminus \brak{x_{1},\ldots, x_{q+1}}) \rtimes \F[q],
\end{equation*}
and thus
\begin{equation}\label{eq:semifnbrak}
P_{n-2}(\ensuremath{\mathbb{R}P^2} \setminus \brak{x_1, x_2})\cong (\cdots ((\F[n-1]\rtimes \F[n-2])\rtimes \F[n-3])\rtimes \cdots \rtimes \F[3])\rtimes \F[2]
\end{equation}
by induction. The ordering of the parentheses thus occurs from the left, in contrast with that of \req{semifn}. The decomposition given by \req{semifn} is in some sense stronger than that of \reqref{semifnbrak}, since in the first case, every factor acts on each of the preceding factors, which is not necessarily the case in \req{semifnbrak}, so \req{semifn} engenders a decomposition of the form~\reqref{semifnbrak}. This is a manifestation of the fact that the splitting of the corresponding Fadell-Neuwirth sequence~\reqref{fns} is non trivial, while that of~\reqref{fnsalt} is obvious.
\item Note that $L_{4}$, which is the kernel of the homomorphism $\map{\widehat{\iota}_{2}}{\rpminus[2]}[\ensuremath{\mathbb Z}_{2}^{2}]$, is also the subgroup of index $4$ of the group $(B_4(\ensuremath{\mathbb{R}P^2}))^{(3)}$ that appears in~\cite[Theorem~3(d)]{GG8}. Indeed,
by~\cite[equation~(127)]{GG8}, this subgroup of index $4$ is isomorphic to the semi-direct product:
\begin{equation*}
\F[5](A_{1,4},A_{2,4},\rho_{4}^{2}, \rho_{4} A_{1,4}\rho_{4}^{-1},\rho_{4} A_{2,4}\rho_{4}^{-1})\rtimes \F[3](A_{2,3},\rho_{3}^{2}, \rho_{3}A_{2,3}\rho_{3}^{-1}),
\end{equation*}
the action being given by~\cite[equations~(129)--(131)]{GG8} (the element $B_{i,j}$ of~\cite{GG8} is the element $A_{i,j}$ of this paper).
\end{enumerate}
\end{rems}
\begin{rem}\label{rem:Lnuniqueness}
Using the ideas of the last paragraph of the proof of \repr{prop3}(\ref{it:prop3b}), one may show that $L_{n}$ is not normal in $B_n(\ensuremath{\mathbb{R}P^2})$. Although the subgroup $L_{n}$ is not unique with respect to the properties of the statement of \repr{prop3}(\ref{it:prop3a})(\ref{it:prop3aii}), there are only a finite number of subgroups, $2^{n(n-2)}$ to be precise, that satisfy these properties. To prove this, we claim that the set of torsion-free subgroups $L_{n}'$ of $K_{n}$ such that $K_{n}=L_{n}'\oplus \ang{\ft}$ is in bijection with the set $\setr{\ker{f}}{f\in \operatorname{Hom}(L_{n},\ensuremath{\mathbb Z}_{2})}$. To prove the claim, let $K=K_{n}$, $L=L_{n}$, let $\map{q}{K}[K/L]$ be the canonical surjection, and set
\begin{equation*}
\Delta=\setr{L'}{\text{$L'<K$, $L'$ is torsion free, and $K= L'\oplus \bigl\langle \ft \bigr\rangle$}}.
\end{equation*}
Clearly $L\in \Delta$, so $\Delta\neq \ensuremath{\varnothing}$. Consider the map $\map{\phi}{\Delta}[\setr{\ker{f}}{f\in \operatorname{Hom}(L,\ensuremath{\mathbb Z}_{2})}]$ defined by $\phi(L')=L\cap L'$. This map is well defined, since if $L'=L$ then $\phi(L')=L$ is the kernel of the trivial homomorphism of $\operatorname{Hom}(L,\ensuremath{\mathbb Z}_{2})$, and if $L'\neq L$ then $L' \ensuremath{\not\subset} L$ since $[K:L']=[K:L]=2$, and so $q\left\lvert_{L'}\right.$ is surjective as $K/L\cong \ensuremath{\mathbb Z}_{2}$. Thus $\ker{q\left\lvert_{L'}\right.}=\phi(L')$ is of index $2$ in $L$, in particular, $\phi(L')$ is the kernel of some non-trivial element of $\operatorname{Hom}(L,\ensuremath{\mathbb Z}_{2})$.
We now prove that $\phi$ is surjective. Let $f\in \operatorname{Hom}(L,\ensuremath{\mathbb Z}_{2})$, and set $L''=\ker{f}$. If $f=0$ then $L''=L$, and $\phi(L)=L''$. So suppose that $f\neq 0$. Then $f$ is surjective, and $L''=\ker{f}$ is of index $2$ in $L$. Let $x\in L\setminus L''$. Then
\begin{equation}\label{eq:lprime}
L=L'' \amalg xL'',
\end{equation}
where $\amalg$ denotes the disjoint union. Since $K=L \amalg \ft L$, it follows that
\begin{equation}\label{eq:klprime}
K=L'' \amalg x L'' \amalg\ft L''\amalg x\ft L'',
\end{equation}
where $\amalg$ denotes the disjoint union. Set $L'=L'' \amalg x\ft L''$. By \req{lprime}, $x^{2}\ft L''=\ft x^{2} L''=\ft L''$ because $\ft$ is central and of order $2$, and hence $K=L' \amalg xL'$. Using once more \req{lprime}, we see that $L'$ is a group, and so the equality $K=L' \amalg xL'$ implies that $[K:L']=2$. Further, since the only non-trivial torsion element of $K$ is $\ft$, $L'$ is torsion free by \req{klprime}, and so the short exact sequence $1\to L'\to K\to \ensuremath{\mathbb Z}_{2}\to 1$ splits. Thus $L'\in \Delta$, and $\phi(L')=L''$ using equations~\reqref{lprime} and~\reqref{klprime}.
It remains to prove that $\phi$ is injective. Let $L_{1}',L_{2}'\in \Delta$ be such that $L_{1}'\cap L=\phi(L_{1}')=\phi(L_{2}')= L_{2}'\cap L$. If one of the $L_{i}'$, $L_{1}'$ say, is equal to $L$ then we must also have $L_{2}'=L$ because $L\subset L_{2}'$ and $L$ and $L_{2}'$ have the same index in $K$. So suppose that $L_{i}'\neq L$ for all $i\in\brak{1,2}$. If $i\in \brak{1,2}$ then $L''=\phi(L_{i}')=L\cap L_{i}'=\ker{f_{i}}$ for some non-trivial $f_{i}\in \operatorname{Hom}(L,\ensuremath{\mathbb Z}_{2})$, and thus $[L:L'']=2$. Let us show that $L_{1}'\subset L_{2}'$. Let $x\in L_{1}'$. If $x\in L$ then $x\in L''$, so $x\in L_{2}'$, and we are done. So assume that $x\notin L$, and suppose that $x\notin L_{2}'$. Then $q(x)$ is equal to the non-trivial element of $K/L$, and since $K/L\cong \ensuremath{\mathbb Z}_{2}$ and $\ft\notin L$, we see that $x\ft\in L$. Further, $K= L_{2}' \amalg xL_{2}'$ since $[K:L_{2}']=2$, and so $x\ft \in L_{2}'$ (for otherwise $x\ft\in xL_{2}'$, which implies that $\ft\in L_{2}'$, which is impossible because $L_{2}'$ is torsion free). But then $x\ft \in L\cap L_{2}'=L''$, and hence $x\ft \in L_{1}'$. But this would imply that $\ft \in L_{1}'$, which contradicts the fact that $L_{1}'$ is torsion free. We conclude that $L_{1}'\subset L_{2}'$, and exchanging the rôles of $L_{1}'$ and $L_{2}'$, we see that $L_{1}'=L_{2}'$, which proves that $\phi$ is injective, so is bijective, which proves the claim. Therefore the cardinality of $\Delta$ is equal to the order of the group $H^1(L, \ensuremath{\mathbb Z}_2)$, which is equal in turn to that of $H_1(L, \ensuremath{\mathbb Z}_2)$. By \repr{abelianL}(\ref{it:abpartb}), we have $L\up{Ab}=H_1(L, \ensuremath{\mathbb Z})\cong \ensuremath{\mathbb Z}^{n(n-2)}$, so $H_1(L, \ensuremath{\mathbb Z}_2)\cong H_1(L, \ensuremath{\mathbb Z}) \otimes \ensuremath{\mathbb Z}_{2}\cong \ensuremath{\mathbb Z}_{2}^{n(n-2)}$, and the number of subgroups of $K$ that satisfy the properties of \repr{prop3}(\ref{it:prop3a}) is equal to $2^{n(n-2)}$ as asserted.
\end{rem}
\section{The virtual cohomological dimension of $B_n(S)$ and $P_n(S)$ for $S=\St,\ensuremath{\mathbb{R}P^2}$}\label{sec:vcd}
Let $S=\St$ (resp.\ $S=\ensuremath{\mathbb{R}P^2}$), and for all $m,n\geq 1$, let $\Gamma_{n,m}(S)=P_{n}(S\setminus \brak{x_{1},\ldots,x_{m}})$ denote the $n$-string pure braid group of $S$ with $m$ points removed. In order to study various cohomological properties of the braid groups of $S$ and prove \reth{prop12}, we shall study $\Gamma_{n,m}(S)$. To prove \reth{prop12} in the case $S=\St$, by \req{pns2sum}, it will suffice to compute the cohomological dimension of $P_{n-3}(\St\setminus \brak{x_{1},x_{2},x_{3}})$. We recall the following presentation of $\Gamma_{n,m}(\St)$ from~\cite{GG11}. The result was stated for $m\geq 3$, but it also holds for $m\leq 2$.
\begin{prop}[{\cite[Proposition~7]{GG11}}]\label{prop:prespinm}
Let $n,m\geq 1$. The following constitutes a presentation of the group $\Gamma_{n,m}(\St)$:
\begin{enumerate}
\item[\underline{\textbf{generators:}}] $A_{i,j}$, where $1\leq i<j$ and $m+1\leq j\leq m+n$.
\item[\underline{\textbf{relations:}}]\mbox{}
\begin{enumerate}[(i)]
\item\label{it:pinmrelsa} the Artin relations described by \req{artinaij} among the generators $A_{i,j}$ of $\Gamma_{n,m}(\St)$.
\item for all $m+1\leq j\leq m+n$,
$\left( \prod_{i=1}^{j-1}\; A_{i,j}\right) \left( \prod_{k=j+1}^{m+n}\; A_{j,k}\right)=1$.
\end{enumerate}
\end{enumerate}
\end{prop}
Let $N$ denote the kernel of the homomorphism $\Gamma_{n,m}(S) \to \Gamma_{n-1,m}(S)$ obtained geometrically by forgetting the last string. If $S=\St$ (resp.\ $S=\ensuremath{\mathbb{R}P^2}$) then $N$ is a free group of rank $m+n-2$ (resp.\ $m+n-1$) and is equal to $\ang{A_{1,m+n},A_{2,m+n},\ldots, A_{m+n-1,m+n}}$ (resp.\ $\ang{A_{1,m+n},A_{2,m+n},\ldots, A_{m+n-1,m+n},\rho_{m+n}}$). Clearly $N$ is normal in $\Gamma_{n,m}(S)$. Further, it follows from relations~(\ref{it:pinmrelsa}) of \repr{prespinm} (resp.\ relations~(\ref{it:prej3}) and~(\ref{it:prej4}) of \repr{prej}) that the action by conjugation of $\Gamma_{n,m}(S)$ on $N$ induces (resp.\ does not induce) the trivial action on the Abelianisation of $N$.
In order to determine the virtual cohomological dimension of the braid groups of $S$ and prove \reth{prop12}, we shall compute the cohomological dimension of a torsion-free finite-index subgroup. In the case of $\St$ (resp.\ $\ensuremath{\mathbb{R}P^2}$), we choose the subgroup $\Gamma_{n-3,3}(\St)$ that appears in the decomposition given in \req{pns2sum} (resp.\ the subgroup $\Gamma_{n-2,2}(\ensuremath{\mathbb{R}P^2})$ that appears in \req{fnpnp2}).
\begin{proof}[Proof of \reth{prop12}]
Let $S=\St$ (resp.\ $S=\ensuremath{\mathbb{R}P^2}$), let $n>3$ and $k=3$ (resp.\ $n>2$ and $k=2$), and let $k\leq m <n$. Then by \req{pns2sum} (resp.\ \req{fnpnp2}) and \req{sessym}, $\Gamma_{n-m,m}(S)$ is a subgroup of finite index of both $P_{n}(S)$ and $B_{n}(S)$. Further, since $F_{n-m}(S\setminus \brak{x_{1},\ldots,x_{m}})$ is a finite-dimensional CW-complex and an Eilenberg-Mac~Lane space of type $K(\pi,1)$~\cite{FaN}, the cohomological dimension of $\Gamma_{n-m,m}(S)$ is finite, and the first part follows by taking $m=k$.
We now prove the second part, namely that the cohomological dimension of $\Gamma_{n-k,k}(S)$ is equal to $n-k$ for all $n>k$. We first claim that $\cd{\Gamma_{m,l}(S)}\leq m$ for all $m\geq 1$ and $l\geq k-1$. The result holds if $m=1$ since $F_1(S\setminus \brak{x_1,\ldots,x_l})$ has the homotopy type of a bouquet of circles, therefore $H^i(F_1(S\setminus \brak{x_1,\ldots,x_l}), A)$ is trivial for all $i\geq 2$ and for any local coefficients $A$, and $H^1(F_1(S\setminus \brak{x_1,\ldots,x_l}), \ensuremath{\mathbb Z})\neq 0$. Suppose by induction that the result holds for some $m\geq 1$, and consider the Fadell-Neuwirth short exact sequence:
\begin{equation*}
1\to \Gamma_{1,l+m}(S) \to \Gamma_{m+1,l}(S) \to \Gamma_{m,l}(S) \to 1
\end{equation*}
that emanates from the fibration:
\begin{equation}\label{eq:fngamma}
F_1(S\setminus \brak{x_1,\ldots,x_l,z_{1},\ldots,z_{m}})\to F_{m+1}(S\setminus \brak{x_1,\ldots,x_l}) \to F_{m}(S\setminus \brak{x_1,\ldots,x_l})
\end{equation}
obtained by forgetting the last coordinate. By~\cite[Chapter~VIII]{Br}, it follows that:
\begin{equation*}
\cd{\Gamma_{m+1,l}(S)}\leq \cd{\Gamma_{m,l}(S)}+\cd{\Gamma_{1,l+m}(S)}\leq m+1,
\end{equation*}
which proves the claim. In particular, taking $l=k$, we have $\cd{\Gamma_{m,k}(S)}\leq m$.
To conclude the proof of the theorem, it remains to show that for each $m\geq 1$ there are local coefficients $A$ such that $H^{m}(\Gamma_{m,l}(S), A)\neq 0$ for all $l\geq k$. We will show that this is the case for $A=\ensuremath{\mathbb Z}$. Again by induction suppose that $H^{m}(\Gamma_{m,l}(S), \ensuremath{\mathbb Z})\neq 0$ for all $l\geq k-1$ and for some $m\geq 1$ (we saw above that this is true for $m=1$). Consider the Serre spectral sequence with integral coefficients associated to the fibration~\reqref{fngamma}. Then we have that
\begin{equation*}
E^{p,q}_2= H^{p}\bigl(\Gamma_{m,l}(S), H^q(F_1(S\setminus \brak{x_1,\ldots,x_l,z_1,\ldots,z_m}),\ensuremath{\mathbb Z})\bigr).
\end{equation*}
Since $\cd{\Gamma_{m,l}(S)}\leq m$ and $\cd{F_1(S\setminus \brak{x_1,\ldots,x_l,z_1,\ldots,z_m})}\leq 1$ from above, it follows that this spectral sequence has two horizontal lines whose possible non-vanishing terms occur for $0\leq p \leq m$ and $0\leq q \leq 1$. We claim that the group $E^{m,1}_2$ is non trivial. To see this, first note that $H^1(F_1(S\setminus \brak{x_1,\ldots,x_l,z_1,\ldots,z_m}), \ensuremath{\mathbb Z})$ is isomorphic to the free Abelian group of rank $r=m+l-k+2$, so $r\geq m+2$, and hence $E^{m,1}_2=H^{m}\bigl(\Gamma_{m,l}(S), \ensuremath{\mathbb Z}^r\bigr)$, where we identify $\ensuremath{\mathbb Z}^r$ with (the dual of) $N\up{Ab}$. The action of $\Gamma_{m,l}(S)$ on $N$ by conjugation induces an action of $\Gamma_{m,l}(S)$ on $N\up{Ab}$. Let $H$ be the subgroup of $N\up{Ab}$ generated by the elements of the form $\alpha(x)-x$, where $\alpha\in \Gamma_{m,l}(S)$, $x\in N\up{Ab}$, and $\alpha(x)$ represents the action of $\alpha$ on $x$. Then we obtain a short exact sequence $0\to H\to N\up{Ab} \to N\up{Ab}/H\to 0$ of Abelian groups, and the long exact sequence in cohomology applied to $\Gamma_{m,l}(S)$ yields:
\begin{equation}\label{eq:lescohom}
\begin{tikzcd}[cramped, column sep=small]
\cdots \arrow{r} & H^m(\Gamma_{m,l}(S), N\up{Ab}) \arrow{r} & H^m(\Gamma_{m,l}(S), N\up{Ab}/H) \arrow{r} & H^{m+1}(\Gamma_{m,l}(S), H)\arrow{r} & \cdots.
\end{tikzcd}
\end{equation}
The last term is zero since $\cd{\Gamma_{m,l}(S)}\leq m$, and so the map between the two remaining terms is surjective. Let us determine $N\up{Ab}/H$. If $S=\St$ then from the comments following \repr{prespinm}, the action of $\Gamma_{m,l}(S)$ on $N\up{Ab}$ is trivial, so $H$ is trivial, and $N\up{Ab}/H\cong \ensuremath{\mathbb Z}^r$. So suppose that $S=\ensuremath{\mathbb{R}P^2}$. Choosing the basis
\begin{equation*}
\brak{A_{1,m+l+1},A_{2,m+l+1},\ldots, A_{m+l-1,m+l+1},\rho_{m+l+1}}
\end{equation*}
of $N$ and using \repr{prej}, one sees that the action by conjugation of the generators of $\Gamma_{m,l}(S)$ on the corresponding basis elements of $N\up{Ab}$ is trivial, with the exception of that of $\rho_{i}$ on $A_{i,m+l+1}$ for $l+1\leq i\leq m+l-1$, which yields elements $A_{i,m+l+1}^2\in H$ (by abuse of notation, we denote the elements of $N\up{Ab}$ in the same way as those of $N$), and that of $\rho_{i}$ on $\rho_{m+l+1}$, where $l+1\leq i\leq m+l$, which yields elements $A_{i,m+l+1}\in H$. In the quotient $N\up{Ab}/H$ the basis elements $A_{l+1,m+l+1},\ldots, A_{m+l-1,m+l+1}$ thus become zero, and additionally, we have also that $A_{m+l,m+l+1}$ (which is not in the given basis) becomes zero. Hence the relation $\prod_{i=1}^{m+l}\, A_{i,m+l+1}=\rho_{m+l+1}^{-2}$ is sent to the relation $\prod_{i=1}^{l}\, A_{i,m+l+1}=\rho_{m+l+1}^{-2}$, and so $N\up{Ab}/H$ is generated by (the images of) the elements $A_{1,m+l+1}, \ldots, A_{l,m+l+1}, \rho_{m+l+1}$, subject to this relation (as well as the fact that the elements commute pairwise). It thus follows that $N\up{Ab}/H\cong \ensuremath{\mathbb Z}^l$. Since the induced action of $\Gamma_{m,l}(S)$ on $N\up{Ab}/H$ is trivial, we conclude that
\begin{equation*}
H^m(\Gamma_{m,l}(S), N\up{Ab}/H)=\bigl( H^m(\Gamma_{m,l}(S), \ensuremath{\mathbb Z})\bigr)^{s},
\end{equation*}
where $s=m+l$ if $S=\St$ and $s=l$ if $S=\ensuremath{\mathbb{R}P^2}$. It then follows from \req{lescohom} that $E^{m,1}_2=H^m(\Gamma_{m,l}(S), N\up{Ab})\neq 0$. Since $E^{p,q}_2=0$ for all $p>m$ and $q>1$, we have $E^{m,1}_2=E^{m,1}_{\infty}$, thus $E^{m,1}_{\infty}$ is non trivial, and hence $H^{m+1}(\Gamma_{m+1,l}(S), \ensuremath{\mathbb Z})\neq 0$. This concludes the proof of the theorem.
\end{proof}
We end this paper with a proof of \reco{vcdmcg}.
\begin{proof}[Proof of \reco{vcdmcg}]
Let $S=\St$ (resp.\ $S=\ensuremath{\mathbb{R}P^2}$). If $n\geq 3$ (resp.\ $n\geq 2$) then $B_{n}(S)$ and $\mathcal{MCG}(S,n)$ are closely related by the following short exact sequence~\cite{Sc}:
\begin{equation*}
1 \to \bigl\langle \ft\bigr\rangle\to B_{n}(S)\stackrel{\beta}{\to} \mathcal{MCG}(S,n) \to 1,
\end{equation*}
where the kernel is isomorphic to $\ensuremath{\mathbb Z}_{2}$. Now assume that $n\geq 4$ (resp.\ $n\geq 3$), so that $B_{n}(S)$ is infinite. If $\Gamma$ is a torsion-free subgroup of $B_{n}(S)$ of finite index then $\beta(\Gamma)$, which is isomorphic to $\Gamma$, is a torsion-free subgroup of $\mathcal{MCG}(S,n)$ of finite index, and hence the virtual cohomological dimension of $\mathcal{MCG}(S,n)$ is equal to that of $B_{n}(S)$. The result then follows by \reth{prop12}.
\end{proof}
\section{Introduction}\label{sec:intro}
Deep Neural Networks (DNNs) have achieved remarkable results in various fields, including image recognition, natural language processing, and medical diagnostics.
``Machine Learning as a Service'' (MLaaS) is a recent and popular DNN-based business use-case~\citep{philipp2020machine, li2017scaling, ribeiro2015mlaas},
where clients pay for predictions from a service provider.
However, this approach requires trust between the client and the service provider. In cases where the data is sensitive, such as military, financial, or health information, clients may be hesitant (or are simply not allowed) to share their data with the service provider for privacy reasons. On the service provider's side, training DNNs requires large amounts of data, technical expertise, and computer resources, which can be expensive and time-consuming. As a result, service providers may hesitate to give the model directly to the client, as it could then be easily reverse-engineered (or, at the very least, the attacker's task would be made much easier), hindering the growth of MLaaS activity. Allowing the clients to perform the inference locally is also not very practical, as any model update would have to be pushed to all clients, not to mention the complex support of the various client hardware/software configurations, etc.
HE/FHE~\cite{gentry2009fully} is an ideal technology to address Privacy-Preserving Machine Learning (PPML) as it allows computations to be performed directly on encrypted data. By encrypting its data before sharing it with the service provider, the client ensures that it remains private, while the service provider can still provide accurate predictions. This solves the trust issue and also gives a competitive advantage in regions where data regulations are stricter, such as under Europe's General Data Protection Regulation (GDPR)~\citep{regulation2016regulation}. The most popular HE schemes are BGV/BFV~\cite{brakerski2014leveled,Brakerski12,FanV12}, CKKS~\cite{cheon2017homomorphic} and Torus-FHE (TFHE)~\cite{chillotti2016faster, chillotti2020tfhe}, and this paper presents our proposed solution that utilizes TFHE to provide a privacy-preserving MLaaS framework for DNNs. TFHE enables very fast gate bootstrapping as well as circuit bootstrapping and operations over Boolean gates. Moreover, extended versions of TFHE, such as Concrete~\cite{chillotti2020concrete}, allow programmable bootstrapping, enabling the evaluation of certain functions during the bootstrapping step itself.
The security and flexibility provided by HE come at a cost, as the computation, communication, and memory overheads are significant, especially for complex functions such as DNNs:
\vspace*{-0.3cm}
\setlength{\leftmargin}{0cm}
\begin{itemize}
\setlength\itemsep{-0.1em}
\item Time overhead:
on server side, an FHE logic gate computation within a TFHE scheme takes milliseconds~\cite{chillotti2016faster}, compared to nanoseconds for a standard logic gate. On client side, the overhead for encryption/decryption of the data is usually not an issue, as milliseconds are enough for those operations.
\item Communication overhead:
an MNIST image sent in clear represents a few kBs, while encrypted within any HE scheme it is of the order of a few MBs (excluding the public key which typically requires hundred(s) of MBs)~\cite{clet2021bfv}. Moreover, PPML schemes not based on TFHE, such as those based on CKKS, require a significant communication overhead due to the need for multiple exchanges between the client and the cloud ~\cite{clet2021bfv}.
\item Memory overhead: TFHE-based schemes typically require a few MBs of RAM on server side, while those based on CKKS need several GBs per image~\cite{gilad2016cryptonets, brutzkus2019low}, even up to hundreds of GBs for the most time-efficient solutions~\cite{lee2022low} (384GB of RAM usage to infer a single CIFAR-10 image using {\tt ResNet}\xspace).
\item Technical difficulties and associated portability issues: mastering an HE/FHE framework requires highly technical and rare expertise.
\end{itemize}
These challenges can be approached from different directions, but only a few works have considered designing DNN models that are compatible/efficient with state-of-the-art HE frameworks in a flexible and portable manner. This paper focuses on the integration of the recent FHE scheme, Torus-FHE (TFHE), with DNNs.
\myparagraph{Our contributions.} To address the above issue, we propose a DNN design framework called {\tt {TT-TFHE}}\xspace that effectively scales TFHE usage to tabular and large datasets using a new family of Convolutional Neural Networks (CNNs) called Truth-Table Neural Networks ({\tt TTnet}\xspace). Our proposed framework provides an easy-to-implement, automated {\tt TTnet}\xspace-based design toolbox that utilizes the Pytorch and Concrete (python-based) open-source libraries for state-of-the-art deployment of DNN models on CPU, leading to fast and accurate inference over encrypted data. The {\tt TTnet}\xspace architecture, being lightweight and differentiable, allows for the implementation of CNNs with direct expressions as truth tables, making it easy to use in conjunction with the TFHE open-source library (specifically the Concrete implementation) for automated operation on lookup tables. Therefore, in this paper, we try to tackle the TFHE efficiency overhead as well as the technical/portability issue.
\myparagraph{Our experimental results.} Evaluation on three tabular datasets (Cancer, Diabetes, Adult) shows that our proposed {\tt {TT-TFHE}}\xspace framework outperforms in terms of accuracy (by up to $+3\%$) and time (by a factor $7$x to $1200$x) any state-of-the-art DNN \& HE set-up. For all these datasets, our inference time runs in a few seconds, with very small memory and communication requirements, enabling for the first time a fully practical deployment in industrial/real-world scenarios, where tabular datasets are prevalent~\cite{cartella2021adversarial,buczak2015survey,clements2020sequential,ulmer2020trust,evans2009online}.
For MNIST and CIFAR-10 image benchmarks, we further explore an approach for private inference proposed by LoLa~\cite{brutzkus2019low}, in which the user/client side is able to compute a first layer and send the encrypted results to the cloud. In this real-world scenario, the user/client performs the computation of a standard public layer (such as the first block of the open-source {\tt VGG16}\xspace model) and sends the encrypted results to the cloud for further computation using HE. Through experimental evaluation, we demonstrate that our proposed framework greatly outperforms all previous TFHE set-ups~\cite{sanyal2018tapas, fu2021gatenet, chillotti2021programmable} in terms of inference time. Specifically, we show that {\tt {TT-TFHE}}\xspace can infer one MNIST image in 7 seconds with an accuracy of 98.1\% or one CIFAR-10 image in 570 seconds with an accuracy of 74\%, which is from one to several orders of magnitude faster than previous TFHE schemes, and even comparable to the fastest state-of-the-art HE set-ups (that do not benefit from the TFHE advantages) while maintaining the same 128-bit security level.
Our solutions represent a significant step towards practical privacy-preserving inference, as they offer fast inference with limited requirements in terms of memory on server side (only a few MBs, in contrast to other non-TFHE-based schemes), and thus can easily be scaled to multiple users.
In addition, they benefit from lower communication overhead. In other words, this is the first work presenting a fully practical solution of private inference (\textit{i.e.} a few seconds for inference time and a few MBs of memory/communication) on both tabular datasets and MNIST.
\myparagraph{Outline.} In Section~\ref{sec:related}, we present related works in the field of HE and HE-friendly neural networks. In Section~\ref{sec:TT-THE_framewroks}, we introduce our proposed {\tt {TT-TFHE}}\xspace framework, while in Section~\ref{sec:Results} we provide an evaluation of the performance of our framework on various datasets and various privacy settings. Finally, in Section~\ref{sec:Conclusion}, we discuss the limitations of the proposed framework and present our conclusions.
\section{Related Works}\label{sec:related}
A lot of interest has been observed in PPML in the recent past, especially for implementing DNNs. Most of these efforts assume that the (unencrypted) model is deployed in the cloud, and the encrypted inputs are sent from the client side for processing. Inference timing of HE-enabled DNN models is the key parameter, but other factors, such as ease of automation and simplicity of such transformation have also attracted some attention~\cite{nGraph,chet,armadillo}.
Broadly, the problem can be approached from four complementary directions: 1) Optimizing the implementation of some DNN building blocks, such as the activation layers, using HE operations~\cite{jovanovic2022private,lee2022low}; 2) Parallelizing the computation and batching of images~\cite{gilad2016cryptonets,chou2018faster,brutzkus2019low} (this is aided by the ring encoding of HE in certain cases). Such efforts also include implementing a hybrid client-server protocol for computation~\cite{juvekar2018gazelle, mishra2020delphi}; 3) Optimizing the underlying HE operations~\cite{chillotti2016faster, chillotti2021programmable,ducas2015fhew}; 4) Designing a HE-friendly DNN~\cite{sanyal2018tapas,lou2019she,fu2021gatenet}. This fourth category has been relatively less explored and is the main focus of this work.
To the best of our knowledge, the FHE-DiNN paper~\cite{bourse2018fast} was the first to propose a quantified DNN to facilitate FHE operations. Then, the TAPAS framework~\cite{sanyal2018tapas} pushed this strategy further by identifying Binary Neural Networks (BNNs) as effective DNN modeling techniques for HE-enabled inference. This direction has been enhanced later by GateNet~\cite{fu2021gatenet}, which optimizes the BNN models by grouping the channels to reduce the number of gates.
Yet, none of these works actually explored the automation perspectives for optimizing a model itself for HE inference, as they still heavily leverage some manual optimizations concerning the underlying FHE library. {\tt {TT-TFHE}}\xspace, however, is fully automated.
Moreover, compared to previously proposed automated approaches, the translation from non-HE model to HE-enabled model is much simpler as all optimizations are handled during the design phase of the model, making {\tt {TT-TFHE}}\xspace much more amenable for typical machine learning experts with little knowledge of FHE.
\section{{\tt {TT-TFHE}}\xspace Framework}\label{sec:TT-THE_framewroks}
\subsection{Threat model}\label{subsec:TT-THE_Threat}
PPML methods are designed to protect against a variety of adversaries, including malicious insiders and external attackers who may have access to the neural network's inputs, outputs, or internal parameters. The level of secrecy required depends on the specific application and the potential impact of a successful attack. Common secrecy goals include protecting the input to the inference, ensuring that only authorized parties know the result of the inference, and keeping the weights and biases of the neural network secret from unauthorized parties. Some PPML approaches also aim to keep the architecture of the neural network
confidential from unauthorized parties. However, most PPML methods do not address this last point, and some interactive approaches assume that the architecture of the neural network is known to all parties. Moreover, attacks on MLaaS settings exist and are very tricky to defend against~\cite{tramer2016stealing, juuti2019prada}. Thus, in this paper, we assume that an attacker may have access to the neural network, and that only the privacy of the client's data matters.
\subsection{FHE general set-up}\label{subsec:TT-THE_set-up}
In this paper, the client $\mathcal{C}$ will encrypt its data locally and send it, along with its public key, to the server $\mathcal{S}$ (the public key does not need to be sent again once it has been pre-shared). The server will compute its algorithm on the encrypted data and send the encrypted result to $\mathcal{C}$, who will decrypt it locally. The server will have no access to the data in clear.
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.50]{fig/FigureICML_v7.pdf}
\caption{The $N_1/{\tt TT}\xspace$ setting. The client computes locally a layer $N_1$, encrypts the obtained output, and sends it to the server/cloud with the public key (which can be pre-shared). The server will compute through FHE the {\tt TTnet}\xspace layer with the linear regression and send the result to the client. The client can decrypt this output and make a few last computations to obtain the result of the inference. When $N_1$ is identity ($\emptyset/{\tt TT}\xspace$ or more generally $\emptyset/N_2$ for some neural network $N_2$), we denote the setting as Fully Private (Full-Pr).
}
\label{fig:global_schema}
\end{figure*}
The classical configuration is that the entire model is computed privately on server side and we call this configuration Fully Private (Full-Pr). However, as it is anyway quite hard to defend against model weight-stealing attacks (our threat model does not include model privacy) and as already proposed by the LoLa team ~\cite{brutzkus2019low}, we also consider situations where the client can perform some local pre-processing (\textit{i.e.} a first layer or block of the neural network) to speed up the server computation without compromising its data security. This setting helps the user to perform its inference faster than in a Full-Pr situation with little memory cost on their side. Such pre-computation would usually be from a public architecture such as {\tt VGG}\xspace, {\tt AlexNet}\xspace or a {\tt ResNet}\xspace~\cite{simonyan2014very,krizhevsky2017imagenet, he2016deep} which are fully available online. In our case, this layer will then typically be followed by a {\tt TTnet}\xspace model and a linear regression that will be computed through FHE on server side. This is a common approach in deep learning, where most models are fine-tuned from one of these three, or with fixed first layers followed by a shallow network.
We will denote $N_1/N_2$ a configuration where the client performs the computation of the neural network $N_1$ locally and the remaining part $N_2$ is performed privately on server side. The final linear regression after $N_2$ is always performed privately on server side, but some of the last few computations can be done by the client (for example a part of the sum and the final $ArgMax$ ~\cite{podschwadt2022survey}). The general setup $N_1/{\tt TT}\xspace$ (where the part performed on server is a {\tt TTnet}\xspace) is depicted in Figure~\ref{fig:global_schema}. $\emptyset/{\tt TT}\xspace$ will represent one extreme case where the entire {\tt TTnet}\xspace neural network is performed privately (Full-Pr) and $N/\emptyset$ will represent the other extreme case where the entire neural network $N$ is performed on client side, except the final linear regression. We will denote ${\tt VGG}\xspace_{1L}$ the first layer of ${\tt VGG16}\xspace$ (the first convolution), and ${\tt VGG}\xspace_{1B}$ its first block (the first two convolutions and the pooling layer afterwards).
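To fix ideas, the $N_1/{\tt TT}\xspace$ data flow can be simulated in the clear as follows (a minimal sketch only: the stand-ins for $N_1$, the truth tables and the weight planes are toy placeholders, and in the real setting the server-side function is evaluated homomorphically through Concrete rather than in plain numpy):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
tables = rng.integers(0, 2, size=(4, 16))    # four 4-input truth tables (toy)
weights4 = rng.integers(0, 2, size=(4, 4))   # four binary weight planes (toy)

def client_first_block(x):
    # placeholder for the public first block N1 followed by binarization
    return (x.reshape(-1)[:16] > 0).astype(np.uint8)

def server_tt_layer(bits, tables, weights4):
    # each truth table consumes n=4 input bits and outputs one bit
    feats = np.array([tables[i][int("".join(map(str, bits[4*i:4*i+4])), 2)]
                      for i in range(len(tables))], dtype=np.uint8)
    # the 4-bit weights are split into binary planes; the server only
    # returns the corresponding partial sums
    return [int((plane * feats).sum()) for plane in weights4]

x = rng.standard_normal((1, 28, 28))            # clear input on client side
bits = client_first_block(x)                    # client: public layer + binarization
partials = server_tt_layer(bits, tables, weights4)   # server (FHE in practice)
score = sum((2 ** i) * p for i, p in enumerate(partials))  # client: recombination
\end{verbatim}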
\subsection{Truth-table DCNN ({\tt TTnet}\xspace)}\label{subsec:TTmodel}
\begin{figure*}[htb!]
\centering
\begin{subfigure}[t]{0.55\textwidth}
\scalebox{0.9}{
\begin{tikzpicture}
\foreach \x in {0,...,9}
\draw[gray, thick] (0 + \x*0.5,0.5) rectangle (0.5 + \x*0.5,1);
\foreach \y in {0,...,3}
\foreach \x in {0,...,3}
\draw[gray, thick] (5.5 + \x*0.5,1 - \y) rectangle (6+ \x*0.5,1.5- \y);
\foreach \x in {0,1}
\draw[gray,thick] (8.5+\x*0.5,0.9) rectangle (9+\x*0.5,1.4);
\draw[darkseagreen2, thick] (5.35 + 0*0.5,1-3.15) rectangle (6.65,1.65);
\draw[green, thick] (8.35+0*0.5,1.5) rectangle (9.15+0*0.5,0.8);
\draw[darkseagreen2, densely dashed] (5.5 + 1*0.5,1.65+0.0) -- (8.5+0*0.5,0.6+0.9);
\draw[darkseagreen2, densely dashed] (5.5 + 1*0.5,1-3.15) -- (8.5+0*0.5,-0.1+0.9);
\draw[darkseagreen2, densely dashed] (-0.1+ 3*0.5,0.6+0.5) -- (5.5,1.65);
\draw[darkseagreen2, densely dashed] (0.1 + 3*0.5,-0.1+0.5) -- (5.35+0*0.5,1-3.15);
\draw[darkseagreen2, thick] (-0.1,0.65-0.2) rectangle (0.1 + 6*0.5,1.1);
\draw [decorate,decoration={brace,amplitude=5pt,raise=1ex}]
(-0.1,0.9) -- (0.1 + 6*0.5,0.9) node[midway,yshift= 2em]{Patch Size};
\draw[orange, thick] (0 + 0*0.5,0.55) rectangle (0.5 + 3*0.5,1.05);
\draw[orange, thick] (5.45 + 0*0.5,1 - 0.05) rectangle (6.05+ 0*0.5,1.5+ 0.05);
\draw[orange, densely dashed] (0,0.25+0.5) -- (5.5,1.25);
\draw[orange, densely dashed] (0+4*0.5,0.25+0.5) -- (5.5+0.5,1.25);
\draw[blue, thick] (1 + 0*0.5,0.5-0.1) rectangle (1.5 + 3*0.5,1.0);
\draw[blue, thick] (6 + 0*0.5,1 - 0) rectangle (6.5+ 0*0.5,1.5+0.07);
\draw[blue, densely dotted] (1,0.25+0.5) -- (6,1.25);
\draw[blue, densely dotted] (1+4*0.5,0.25+0.5) -- (6+0.5,1.25);
\draw[black, fill=black] (0, -0.5) rectangle (0 + 6*0.5,0) node[pos=0.5,text=white] {Truth Table, x};
\foreach \x in {0,...,5}
\draw[gray, thick] (0 + \x*0.5,-0.5-1*0.5) rectangle (0.5 + \x*0.5,-0.-1*0.5) node[pos = 0.5, black] {0};
\foreach \x in {1,...,5}
\draw[gray, thick] (0 + \x*0.5,-0.5-2*0.5) rectangle (0.5 + \x*0.5,-0.-2*0.5) node[pos = 0.5, black] {0};
\draw[gray, thick] (0 + 0*0.5,-0.5-2*0.5) rectangle (0.5 + 0*0.5,-0.-2*0.5) node[pos = 0.5, black] {1};
\draw[gray, thick] (0 + 0*0.5,-0.5-3*0.5) rectangle (0.5 + 0*0.5,-0.-3*0.5) node[pos = 0.5, black] {0};
\foreach \x in {2,...,5}
\draw[gray, thick] (0 + \x*0.5,-0.5-3*0.5) rectangle (0.5 + \x*0.5,-0.-3*0.5) node[pos = 0.5, black] {0};
\draw[gray, thick] (0 + 1*0.5,-0.5-3*0.5) rectangle (0.5 + 1*0.5,-0.-3*0.5) node[pos = 0.5, black] {1};
\foreach \y in {4}
\foreach \x in {0,...,5}
\draw[gray, thick] (0 + \x*0.5,-0.5-\y*0.5) rectangle (0.5 + \x*0.5,-0.-\y*0.5) node[pos = 0.5, black] {...};
\foreach \y in {5,6}
\foreach \x in {0,...,4}
\draw[gray, thick] (0 + \x*0.5,-0.5-\y*0.5) rectangle (0.5 + \x*0.5,-0.-\y*0.5) node[pos = 0.5, black] {1};
\draw[gray, thick] (0 + 5*0.5,-0.5-5*0.5) rectangle (0.5 + 5*0.5,-0.-5*0.5) node[pos = 0.5, black] {0};
\draw[gray, thick] (0 + 5*0.5,-0.5-6*0.5) rectangle (0.5 + 5*0.5,-0.-6*0.5) node[pos = 0.5, black] {1};
\draw[black, fill=black] (8.35, -0.5) rectangle (9.15,0.0) node[pos=0.5,text=white] {$\Phi_F$};
\draw[gray, thick] (8.35,-0.5-1*0.5) rectangle (9.15,-0-1*0.5) node[pos = 0.5, black] {1};
\draw[gray, thick] (8.35,-0.5-2*0.5) rectangle (9.15,-0.-2*0.5) node[pos = 0.5, black] {1};
\draw[gray, thick] (8.35,-0.5-3*0.5) rectangle (9.15,-0.-3*0.5) node[pos = 0.5, black] {0};
\draw[gray, thick] (8.35,-0.5-4*0.5) rectangle (9.15,-0.-4*0.5) node[pos = 0.5, black] {...};
\draw[gray, thick] (8.35,-0.5-5*0.5) rectangle (9.15,-0.-5*0.5) node[pos = 0.5, black] {0};
\draw[gray, thick] (8.35,-0.5-6*0.5) rectangle (9.15,-0.-6*0.5) node[pos = 0.5, black] {1};
\end{tikzpicture}
}
\caption{Converting a function $\Phi_F$ into a truth table. The above example has two layers: the first one has parameters (input channel, output channel, kernel size, stride) = $(1,4,4,2)$, while the second $(4,1,2,2)$. The patch size (the patch being the region of the input that produces the feature, which is commonly referred to as the receptive field~\cite{araujo2019computing}) of $\Phi_F$ is $6$ (i.e., \textcolor{darkseagreen2}{green} box) since the output feature (i.e., \textcolor{green}{light green} box) requires $6$ input entries (i.e., \textcolor{orange}{orange} and \textcolor{blue}{blue} box).}
\label{fig: Amplification Layer}
\end{subfigure}
\hspace{0.8cm}
\begin{subfigure}[t]{0.35\textwidth}
\begin{tikzpicture}
\foreach \x in {0,...,5} {
\draw[rounded corners] (0+ \x, 0) rectangle (0.5+\x, 4);
\draw[->] (0.5+ \x -1, 2) -- (1+\x-1, 2); }
\path (0, 0) -- node[rotate=90,anchor=center] {Conv1D}(0.5, 4);
\path (0+1, 0) -- node[rotate=90,anchor=center] {Batch Normalization 1D}(0.5 + 1, 4);
\path (0+2, 0) -- node[rotate=90,anchor=center] {SeLU}(0.5+2, 4);
\path (0+3, 0) -- node[rotate=90,anchor=center] {Conv1D (kernel size = 1)}(0.5+3, 4);
\path (0+4, 0) -- node[rotate=90,anchor=center] {Batch Normalization 1D}(0.5+4, 4);
\path (0+5, 0) -- node[rotate=90,anchor=center] {$bin_{act}$}(0.5+5, 4);
\end{tikzpicture}
\caption{LTT overview of a Expanding AutoEncoder LTT in 1 dimension: the Conv1D with kernel size = 1 is the amplification layer. The intermediate values are real and the input/output values are binary.}
\label{fig:LTTAutoEncoder}
\end{subfigure}
\caption{A Learning Truth Table (LTT) block. The intermediate values and weights are floating points, input/output values are binary.
}
\label{fig: inner and outer Learning Truth Table Block}
\end{figure*}
Truth Table Deep Convolutional Neural Networks ({\tt TTnet}\xspace) were proposed in~\cite{https://doi.org/10.48550/arxiv.2208.08609} as DCNNs convertible into truth tables by design, with security applications (a companion video is provided by the authors \href{https://youtu.be/loGlpVcy0AI}{https://youtu.be/loGlpVcy0AI}). While recent developments in DNN architecture have focused on improving performance, the resulting models have become increasingly complex and difficult to verify, interpret and implement. Thus, the authors focused on CNNs, which are widely used in the field, and tried to transform them into tractable Boolean functions. A depiction of the general architecture of {\tt TTnet}\xspace can be found in Figure~\ref{fig:General_archi} of Appendix~\ref{appendix:architecture}.
\myparagraph{CNN filter as a tractable Boolean function.} Their proposed method converts CNN filters into binary truth tables by 1) decreasing the input size (denoted $n$ in the rest of the paper), which reduces the complexity of the CNN filter function, and 2) using the Heaviside step function, denoted $bin_{act}(x)=(1+sgn(x))/2$, to transform the inputs and outputs into binary values. This results in a tractable Boolean function (for $n$ not too large) that can be stored as a truth table, as seen in Figure~\ref{fig: Amplification Layer}. To achieve high accuracy, the CNN filter must also be non-linear before the step function. Then, the CNN filter becomes a tractable non-linear truth table, which is referred to as a Learning Truth Table (LTT) block.
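As an illustration, the following sketch enumerates the truth table of a small filter (the filter itself is an arbitrary toy function, not one of the trained LTT blocks of the paper):
\begin{verbatim}
import itertools
import numpy as np

n = 6                                    # patch size (number of binary inputs)

def toy_filter(bits):
    # arbitrary non-linear function standing in for a trained CNN filter
    x = 2.0 * np.asarray(bits, dtype=float) - 1.0   # map {0,1} -> {-1,+1}
    pre_act = np.tanh(x[:3] @ np.array([0.7, -1.2, 0.4])) + x[3] * x[4] - 0.3 * x[5]
    return int(pre_act >= 0)             # Heaviside step bin_act

truth_table = [toy_filter(b) for b in itertools.product([0, 1], repeat=n)]
# 2**n = 64 entries fully describe the filter: at inference time, the float
# computation is replaced by a single table lookup.
\end{verbatim}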
\myparagraph{Description of an LTT block.} Among all the possible families of LTT blocks, we represent in Figure~\ref{fig:LTTAutoEncoder} an Expanding Auto-Encoder LTT block (E-AE LTT). An E-AE LTT block is composed of two layers of grouped CNNs with an expanding factor. Figure~\ref{fig: Amplification Layer} shows the computation of an E-AE 1D LTT block. We can observe that the input size is small ($n=6$), the input/output values are binary, and SeLU is used as the activation function. Note that the inputs are binary but the weights and the intermediate values are real. We integrated LTT blocks into {\tt TTnet}\xspace as CNN filters are integrated into DCNNs: each LTT layer is composed of multiple LTT blocks, and there are multiple LTT layers in total.
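A possible PyTorch sketch of such a block is given below (the hyper-parameters, the placement of the expansion factor and the straight-through gradient used for $bin_{act}$ are illustrative choices, not the exact configuration of the original {\tt TTnet}\xspace code):
\begin{verbatim}
import torch
import torch.nn as nn

class BinAct(torch.autograd.Function):
    # Heaviside step with a straight-through gradient estimator (illustrative)
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

class ExpandingAutoEncoderLTT1D(nn.Module):
    # Sketch of an E-AE LTT block in 1D: binary in/out, float weights inside.
    def __init__(self, in_ch=1, mid_ch=4, kernel=4, stride=2, expand=4):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, mid_ch, kernel, stride=stride)
        self.bn1 = nn.BatchNorm1d(mid_ch)
        self.act = nn.SELU()
        self.conv2 = nn.Conv1d(mid_ch, mid_ch * expand, kernel_size=1)  # amplification
        self.bn2 = nn.BatchNorm1d(mid_ch * expand)
    def forward(self, x_bits):                # x_bits in {0, 1}
        y = self.act(self.bn1(self.conv1(x_bits)))
        return BinAct.apply(self.bn2(self.conv2(y)))

# block = ExpandingAutoEncoderLTT1D()
# out = block(torch.randint(0, 2, (8, 1, 32)).float())   # binary output features
\end{verbatim}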
\subsection{Challenges and optimizations for integrating {\tt TTnet}\xspace with TFHE-Concrete}\label{subsec:Challenges}
The Concrete library~\cite{chillotti2020concrete} is a software implementation of TFHE. It is designed to provide a highly efficient and secure platform for performing mathematical operations on Boolean encrypted data. The library utilizes automated operations on lookup tables (concrete-numpy) to achieve high performance while maintaining a high level of security. The library is also designed to be very user-friendly, with simple and intuitive interfaces for performing encryption and decryption operations. The Concrete library has been shown to provide significant improvements in terms of memory and communication overheads compared to other HE schemes. Furthermore, it is open source, which allows researchers and practitioners to easily integrate it into their projects and benefit from its advanced features.
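As an illustration, a single $4$-bit table lookup can be expressed and compiled along the following lines (a sketch written against the \texttt{concrete-numpy} style of API; exact module and function names may differ between Concrete versions):
\begin{verbatim}
import concrete.numpy as cnp

# A 4-bit-input truth table (16 entries), e.g. extracted offline from a
# trained LTT block.
table = cnp.LookupTable([0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])

@cnp.compiler({"x": "encrypted"})
def lookup(x):
    return table[x]

circuit = lookup.compile(range(16))      # the inputset fixes the 4-bit input range
result = circuit.encrypt_run_decrypt(5)  # homomorphic evaluation, here == 0
\end{verbatim}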
The use of {\tt TTnet}\xspace architecture in combination with TFHE (and more specifically with Concrete) naturally provides a number of advantages. Firstly, the lightweight and differentiable nature of {\tt TTnet}\xspace allows for the implementation of CNNs with direct expressions as truth tables, which is well-suited for Concrete. Additionally, the reduced complexity of the {\tt TTnet}\xspace architecture leads to reduced computations and good scalability. Yet, the integration of {\tt TTnet}\xspace into the TFHE framework presents several challenges that need to be addressed in order to achieve high performance. We detail below the constraints imposed by FHE libraries and the optimizations implemented to overcome these limitations and achieve state-of-the-art performance on various datasets.
\myparagraph{Constraints imposed by Concrete.} The Concrete implementation of the TFHE library, which utilizes automated operations on lookup tables, imposes a maximum limit of $16$ on the input bit size $n$ of the truth table. However, $n$ has a strong impact on efficiency (cf.\ Appendix~\ref{appendix:concrete}), and our tests show that $n=4$ or $n=6$ seem to offer the best trade-offs. Indeed, since we learn kernels of convolutional layers of size $(3,2)$ or $(2,3)$, it is convenient to take $n$ to be a multiple of $2$ and/or $3$. Moreover, input sizes larger than $n=8$ lead to a prohibitive average time per call.
Concrete proposes a built-in linear regression feature, but our tests showed that it does not perform as well as an ad-hoc linear regression implemented with multiplications and additions.
Finally, a very important limitation is that Concrete does not support multi-precision table lookups: the call time of each table lookup (even the small ones) will be equal to the call time of the largest-bitwidth operation in the whole circuit. For example, if one of the computations operates on 16 bits, all lookup tables, even small ones, will require the call time of a 16-bit table lookup. This can be problematic when handling the final linear regression, as we have to sum many Boolean values and the bitwidth of the sum might be larger than our planned table lookup size, thereby slowing down our entire implementation. If the sum result of our linear regression requires 16 bits, the call time of all our $4$-bit lookup tables will be on par with that of $16$-bit table lookups, going from $75$ms to almost $5$s per call. For now, the computation time is too slow for results that require a sum on more than 4 bits. To overcome this limitation, we implemented several optimizations to improve the accuracy of the model, particularly for image datasets (see below).
\myparagraph{Limitations imposed by {\tt TTnet}\xspace.} The original {\tt TTnet}\xspace paper proposed a training method that is not suitable for high accuracy, as the networks were trained to resist PGD attacks~\citep{madry2017towards}, which reduces accuracy. Additionally, the pre-processing and final sparse layers in {\tt TTnet}\xspace are binary, which also leads to a significant decrease in accuracy. To address these limitations, we replace the final sparse binary linear regression with a linear layer with floating-point weights (later quantized to 4 bits so as not to degrade performance on Concrete too much) and propose a new training method that emphasizes accuracy (see below). We also propose a setting $N_1/{\tt TT}\xspace$, where a first layer $N_1$ (a layer or block of a general open-source model) is applied to overcome the loss of information due to the first binarization. This is, for example, a standard method used in BNNs.
\myparagraph{Training optimizations.} To improve the accuracy of our model, we took several steps to optimize the training process. First, we removed the use of PGD attacks during training, as they have been shown to reduce accuracy. Next, we employed the DoReFa-Net method~\cite{zhou2016dorefa} for CIFAR-10, a technique for training convolutional neural networks with low-bitwidth activations and gradients. Finally, to overcome the limitations of the {\tt TTnet}\xspace grouping method, we extended the training to 500 epochs, resulting in a more accurate model.
\myparagraph{Architectural optimizations.} For tabular datasets, the straightforward enhancements proposed in the previous sections are sufficient to achieve high accuracy. However, this is not the case for image datasets such as MNIST, CIFAR-10, and ImageNet. Therefore, we modify the general architecture of our model. Specifically, we use an architecture similar to the one presented for ImageNet in the original {\tt TTnet}\xspace paper. The limitation of our model is that it can see far in the spatial dimension but has limited representation in the channel dimension. To balance this property, we use three techniques: multi-headers, residual connections~\cite{he2016deep}, and channel shuffling~\cite{zhang2018shufflenet}. We have a single layer composed of four different functions in parallel: one LTT block with a kernel size of (3,2) and 1 group (to see far in space and low in channels), one LTT block with a kernel size of (2,3) and 1 group, one LTT block with a kernel size of 1 and 6 groups (to see low in space but high in channel representation), and an identity function (as a residual connection for stability), as sketched below. We tried adding a second layer to increase accuracy, but with the current version of Concrete (without support for multi-precision table lookups), this led to sub-optimal performance/accuracy trade-offs with a drastic increase in FHE inference time.
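To make the layout concrete, the following PyTorch-style sketch shows the structure of this multi-header layer. It is illustrative only: the LTT blocks are stood in for by a hypothetical placeholder (a grouped convolution followed by a clipping non-linearity), the channel count, the ``same'' padding and the shuffle grouping are assumptions made so that the four branches can be concatenated, and none of the names below come from the released code.
\begin{verbatim}
import torch
import torch.nn as nn

CH = 24  # illustrative channel count (divisible by the 6 groups below)

def ltt_block(kernel_size, groups):
    # Hypothetical stand-in for a real LTT block: a grouped convolution
    # followed by a clipping non-linearity; 'same' padding keeps the
    # spatial size so that all branches can be concatenated.
    return nn.Sequential(
        nn.Conv2d(CH, CH, kernel_size, groups=groups,
                  padding="same", bias=False),
        nn.Hardtanh(),
    )

class ChannelShuffle(nn.Module):
    def __init__(self, groups):
        super().__init__()
        self.groups = groups
    def forward(self, x):
        b, c, h, w = x.shape
        return (x.view(b, self.groups, c // self.groups, h, w)
                 .transpose(1, 2).reshape(b, c, h, w))

class MultiHeadLayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.head_32 = ltt_block((3, 2), groups=1)  # far in space
        self.head_23 = ltt_block((2, 3), groups=1)
        self.head_11 = ltt_block(1, groups=6)       # high in channels
        self.shuffle = ChannelShuffle(groups=4)
    def forward(self, x):
        heads = [self.head_32(x), self.head_23(x),
                 self.head_11(x), x]                # identity branch
        return self.shuffle(torch.cat(heads, dim=1))

y = MultiHeadLayer()(torch.randn(1, CH, 8, 8))      # e.g. one 8x8 patch
\end{verbatim}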
\myparagraph{Client-server usage as optimization.} We describe here a solution to the issue of Concrete not supporting linear regression results of more than 4 bits: we utilize the client's computing resources not only to prepare the input to the server, but also to post-process the server's output. Namely, the client computes a small part of the final linear regression. Indeed, we quantize the weights of the final linear regression of {\tt TTnet}\xspace to 4 bits, which we divide into 4 binary matrices. Then, the server performs partial sums on each of these 4 matrices. Since the outputs of {\tt TTnet}\xspace and the weights are binary, and since the maximum input bitwidth of our lookup tables is 4 bits, we perform sub-sums of size 16 so that the result of each sub-sum is always less than $2^4$, as proposed by Zama's team\footnote{\href{https://community.zama.ai/t/load-model-complex-circuit/369/4}{https://community.zama.ai/t/load-model-complex-circuit/369/4}}. It is important to note that by doing this, we maintain the privacy of the weights of the linear regression, as they remain unknown to the client. Additionally, the computation performed by the client is very light: for example, in the case of the MNIST dataset in the ${\tt VGG}\xspace_{1B}$/${\tt TT}\xspace$ setting, it represents at most $\frac{N_\text{features}}{N_{bits}} = \frac{576}{16} = 12$ sums of 4-bit integers for each of the $4$ weight matrices. The client eventually adds these four outputs to obtain its final result, \textit{i.e.} it computes $\sum_{i=0}^{3}{2^i*w_i * \text{output}_i}$, as illustrated below. Also, note that this client computation is fixed and remains the same even if the model needs to be updated. Finally, this limitation is due to Concrete and is likely temporary, as Concrete identified multi-precision table lookups as an ``important feature to come''\footnote{\href{https://community.zama.ai/t/load-model-complex-circuit/369/4}{https://community.zama.ai/t/load-model-complex-circuit/369/4}}.
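As an illustration, the following sketch shows what the client-side post-processing could look like. It is a simplified view under assumptions (variable names, shapes and the absence of sign/bias handling are ours, not taken from the released code): the client decrypts, for each of the 4 binary weight matrices, the corresponding sub-sums returned by the server and finishes the linear regression in the clear.
\begin{verbatim}
import numpy as np

def client_finish(subsums_per_plane):
    # subsums_per_plane: list of 4 arrays of shape (n_classes, n_subsums),
    # e.g. n_subsums = 576 / 16 = 12 for MNIST in the VGG_1B/TT setting.
    # Each array holds the server's partial sums for one binary weight
    # matrix w_i; the client recombines them with the powers of two of
    # the 4-bit quantization.
    score = np.zeros(subsums_per_plane[0].shape[0])
    for i, subsums in enumerate(subsums_per_plane):
        score += (2 ** i) * subsums.sum(axis=-1)
    return score  # the predicted class is argmax(score)
\end{verbatim}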
\myparagraph{Patching optimizations.} Due to its highly decoupled nature, the performance of our model depends on the number of patches. The classification of an image can be viewed as $N$ independent computations, where $N$ is the number of patches of the original image. This is a unique property and is highly useful for addressing the issue of Concrete requiring a lot of RAM when compiling complex circuits, especially when scaling to larger datasets. For example, on MNIST we work on patches of size $(8\times 8\times 1)$ instead of the entire image of size $(28\times 28\times 1)$, as sketched below.
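The decomposition itself is straightforward; the following sketch only illustrates the idea (the stride is an assumption made for illustration and is not taken from the original architecture): each extracted patch is processed by its own small circuit, and the per-patch outputs are then combined by the final linear regression.
\begin{verbatim}
import numpy as np

def extract_patches(img, size=8, stride=4):
    # Split an image into N independent patches; each patch is later
    # handled by an independent (and therefore small) compiled circuit.
    h, w = img.shape
    patches = [img[i:i + size, j:j + size]
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.stack(patches)          # shape: (N, size, size)

patches = extract_patches(np.zeros((28, 28)))   # a 28x28 MNIST image
\end{verbatim}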
\section{Results}
\label{sec:Results}
The project was implemented in Python, with the PyTorch library~\cite{paszke2019pytorch} for training, NumPy for testing in the clear, and the Concrete-numpy library for FHE inference.
Our workstation consists of 4 Nvidia GeForce 3090 GPUs (used only for training) with 24576 MiB of memory each, and an eight-core Intel(R) Core(TM) i7-8650U CPU clocked at 1.90 GHz with 16 GB of RAM. For all experiments, CPU Turbo Boost was deactivated and the processes were limited to four cores.
The details of all our models' architecture can be found in Appendix~\ref{appendix:models_architecture}.
\subsection{Tabular datasets}
\label{subsec:Tabular}
\begin{table}[htb!]
\medskip
\caption{\label{table:Results_tabular} Tabular dataset results for {\tt {TT-TFHE}}\xspace and competitors. All results are in the Full-Pr setting. All our models use a table lookup bitwidth of $n=5$, except for Diabetes where we use $n=6$. \strut}
\resizebox{\columnwidth}{!}{
\begin{tabular}{|l|c|c|cc|cc|cc|}
\cline{2-9}
\multicolumn{1}{c|}{} & \textbf{FHE} & \#CPU & \multicolumn{2}{c|}{\textbf{Adult}} & \multicolumn{2}{c|}{\textbf{Cancer}} & \multicolumn{2}{c|}{\textbf{Diabetes}} \\ \cline{4-9}
\multicolumn{1}{c|}{} & \textbf{family} & \textbf{cores} & Acc. & Time & Acc. & Time & Acc. & Time \\ \midrule
ETHZ/CCS22 & CKKS & 64 & 81.6\% & 420s & - & - & - & - \\ \hline
TAPAS & \multirow{2}{*}{TFHE} & 16 & - & - & 97.1\% & 3.5s & 54.9\% & 250s \\
\textbf{Ours} & & 4 & 85.3\% & 5.6s & 97.1\% & 1.9s & 57.0\% & 1.2s \\ \bottomrule
\end{tabular}}
\end{table}
Our experimental results, shown in Table~\ref{table:Results_tabular} (see Table~\ref{table:Results_tabular_normalized} of Appendix~\ref{appendix:singleCPU} for results normalized to a single CPU), demonstrate the superior performance of the proposed {\tt {TT-TFHE}}\xspace framework on three tabular datasets (Cancer, Diabetes, Adult) in terms of both accuracy and computational efficiency. The framework achieves an increase in accuracy of up to $+3\%$ compared to state-of-the-art DNN \& HE set-ups based on TFHE, such as TAPAS~\cite{sanyal2018tapas}, or on CKKS, such as the recent work from the ETHZ team~\cite{jovanovic2022private}. More impressively, the inference time per CPU is reduced by factors of $1205$x, $7$x, and $825$x on the Adult, Cancer, and Diabetes datasets, respectively.
This enables the practical deployment of our framework in industrial and real-world scenarios where tabular datasets are prevalent~\cite{cartella2021adversarial,buczak2015survey,clements2020sequential,ulmer2020trust,evans2009online}, with low memory and communication overhead (see Table~\ref{table:Results_Memory} in Appendix~\ref{appendix:RAM_usage}). Note that these results are in the Full-Pr setting, as the binarization process in {\tt {TT-TFHE}}\xspace does not lead to any particular issue for tabular datasets.
\subsection{Image datasets}
\label{subsec:Image}
Our experimental results and comparisons for image datasets are given in Table~\ref{table:Results_image} (see Table~\ref{table:Results_image_normalized} of Appendix~\ref{appendix:singleCPU} for all results normalized to a single CPU).
\begin{table}[htb!]
\vspace*{-0.3cm}
\caption{\label{table:Results_image} Image dataset results for {\tt {TT-TFHE}}\xspace and competitors. Results denoted with * are estimated by the original authors, not measured. All our models use a table lookup bitwidth of $n=4$, except the underlined ones that use $n=6$. \strut}
\medskip
\resizebox{\columnwidth}{!}{
\begin{tabular}{ll|cccc|c|cc|c|}
\cline{3-10}
\multicolumn{2}{c|}{} & \multicolumn{4}{c|}{Full-Pr ($\emptyset$/$N$)} & ${\tt VGG}\xspace_{1B}$/$\emptyset$ & \multicolumn{2}{c|}{${\tt VGG}\xspace_{1L}$/$N$} & ${\tt VGG}\xspace_{1B}$/$N$ \\ \cline{3-10}
\multicolumn{2}{c|}{\textbf{TFHE-based schemes}} & TAPAS & GateNet & Zama & \textbf{Ours} & \textbf{Ours} & Zama & \textbf{Ours} & \textbf{Ours} \\
\multicolumn{2}{c|}{\#CPU cores} & 16 & 2 & 6 & 4 & 4 & 128 & 4 & 4 \\\hline
\multicolumn{1}{|l}{\multirow{2}{*}{\textbf{MNIST}}} & Acc. (\%) & 98.6 & 98.8* & 97.1 & \underline{97.2} & 97.5 & - & 98.2 & 98.1 \\
\multicolumn{1}{|l}{} & Time & 37h & 44h* & 115s & \underline{83.6s} & 0.04s & - & 8.7s & 7s \\ \hline
\multicolumn{1}{|l}{\multirow{2}{*}{\textbf{CIFAR-10}}} & Acc. (\%) & - & 80.5* & - & - & 70.4 & 62.3 & 69.4/\underline{72.1} & 74.1/\underline{75.3} \\
\multicolumn{1}{|l}{} & Time & - & 3920h* & - & - & 0.4s & 29m & 9.5m/\underline{6.2h} & 9.5m/\underline{6.2h} \\ \hline
\end{tabular}}
\medskip
\resizebox{\columnwidth}{!}{
\begin{tabular}{ll|ccccc|}
\cline{3-7}
\multicolumn{2}{c|}{} & \multicolumn{5}{c|}{Full-Pr ($\emptyset$/$N$)} \\ \cline{3-7}
\multicolumn{2}{c|}{\textbf{non-TFHE-based schemes}} & CryptoNets & Fast CryptoNets & \:\:\:\: SHE \:\:\:\: & \:\:\:\: Lola \:\:\:\: & Lee \textit{et al.} \\
\multicolumn{2}{c|}{\#CPU cores} & 4 & 6 & 10 & 8 & 1 \\\hline
\multicolumn{1}{|l}{\multirow{2}{*}{\textbf{MNIST}}} & Acc. (\%) & 99 & 98.7 & 99.5 & 99.0 & - \\
\multicolumn{1}{|l}{} & Time & 4.2m & 39s & 9.3s & 2.2s & - \\ \hline
\multicolumn{1}{|l}{\multirow{2}{*}{\textbf{CIFAR-10}}} & Acc. (\%) & - & 76.7 & 92.5 & 74.1 & 91.3 \\
\multicolumn{1}{|l}{} & Time & - & 11h & 37.6m & 12.2m & 37.8m \\ \hline
\end{tabular}}
\end{table}
\subsubsection{Fully private (Full-Pr) setting}
\label{subsubsec:Fullyprivate}
In the Full-Pr configuration, we focus on the performance of our method on the MNIST dataset, as the binarization process in {\tt {TT-TFHE}}\xspace results in a significant loss of accuracy (or an increase in inference time) for CIFAR-10. {\tt {TT-TFHE}}\xspace offers a competitive trade-off compared to other TFHE-based methods, such as TAPAS~\cite{sanyal2018tapas}, GateNet~\cite{fu2021gatenet} or Zama~\cite{chillotti2021programmable}. It is three orders of magnitude faster than TAPAS or GateNet, with an accuracy only 1.4\% lower. In comparison to Zama, our method is $2$x faster per CPU for the same level of accuracy. Additionally, we highlight that our single-layer LTT block of size $n=6$ requires 1600 calls to $6$-bit lookup tables, which leads to an inference time of only 83 seconds on four CPU cores.
{\tt {TT-TFHE}}\xspace is even competitive in terms of inference time with some non-TFHE-based schemes such as (Fast) CryptoNets~\cite{gilad2016cryptonets}, although it can be one order of magnitude slower with slightly lower accuracy. One can therefore observe that the Full-Pr setting of TFHE implemented in our framework generally underperforms compared to the very latest Full-Pr CKKS or BFV setups. This is explained by the first binarization process in {\tt {TT-TFHE}}\xspace, which discards too much of the information embedded in the input image. Yet, we again emphasize the many advantages of TFHE-based solutions compared to non-TFHE-based ones: little memory required, allowing easy/efficient multi-client inference; low communication overhead; no known security warning on TFHE, whereas the CKKS secret key can be recovered in polynomial time~\cite{li2021security} (a fix was proposed afterward~\cite{li2022securing} but is not yet implemented in SEAL, for example); etc. We will see in the next sub-section that the performance situation is very different in the setting where the client can perform some pre-computation layer.
\subsubsection{Other settings}
\label{subsubsec:PublicLayer}
We first observe that the performance of {\tt {TT-TFHE}}\xspace increases drastically when the client is allowed to apply a simple pre-processing layer. We have implemented and benchmarked both the ${\tt VGG}\xspace_{1L}/{\tt TT}\xspace$ and ${\tt VGG}\xspace_{1B}/{\tt TT}\xspace$ settings, with {\tt TTnet}\xspace models using $n=6$ and $n=4$, and obtained a $10$x performance improvement together with an increase in accuracy. For reference, we have also tested the ${\tt VGG}\xspace_{1B}/\emptyset$ setting, where only a linear regression is computed privately on the server: adding a {\tt TTnet}\xspace to the server computation indeed improves accuracy by about 4\%.
One could argue that more {\tt VGG}\xspace blocks could be computed on the client side to further increase accuracy, but this would reduce the generality of the first layers and lead to PPML solutions that would not adapt well to multiple use cases. We have tried blocks of CNNs more recent than {\tt VGG}\xspace, such as {\tt DenseNet}\xspace~\cite{huang2017densely}, but the results remain very similar.
We can compare the {\tt {TT-TFHE}}\xspace results to some TFHE-based competitors, as Zama proposed a similar setting\footnote{\url{https://github.com/zama-ai/concrete-ml/tree/release/0.6.x/use_case_examples/cifar_10_with_model_splitting}} with ${\tt VGG}\xspace_{1L}$ pre-computed by the client for CIFAR-10: our inference is $100$x faster per CPU with a $10\%$ increase in accuracy.
Our {\tt {TT-TFHE}}\xspace results are now even competitive against non-TFHE-based solutions (even though the latter miss many of the advantages of TFHE), being faster than (Fast) CryptoNets~\cite{gilad2016cryptonets} and SHE~\cite{lou2019she}, and on par with Lola~\cite{brutzkus2019low} and Lee \textit{et al.}~\cite{lee2022low} per CPU. We note that SHE and Lee \textit{et al.} have better accuracy than our model.
\subsection{Memory/communication cost of {\tt {TT-TFHE}}\xspace}
\label{subsubsec:Memory}
In Table~\ref{table:Results_Memory} from Appendix~\ref{appendix:RAM_usage}, we compare the memory and communication needs of all our settings. We can observe that the deeper the representation, the smaller the communication needs: the inputs to the server become smaller as we go deeper into the neural network (there are also fewer computations to perform on the server). Some settings do not need public keys, as only a linear regression is performed and thus no programmable bootstrapping is involved~\cite{chillotti2021programmable}. Moreover, the larger the lookup tables (in terms of number of features and size), the larger the public keys, as more bootstrapping is required. Also, the optimization proposed in Section~\ref{subsec:Challenges} to ease the linear regression comes with a cost: it increases the size of the outputs. Indeed, only the ${\tt VGG}\xspace_{1B}$/$\emptyset$ setting does not use this optimization, as no lookup table (and thus no bootstrapping) is involved. Finally, the encrypted input size increases with the number of features.
\myparagraph{Comparison with other methods.} As stated in~\cite{clet2021bfv}, CKKS is more memory-hungry than TFHE in terms of communication. In Table~\ref{table:RAM_methods} of Appendix~\ref{appendix:RAM_usage}, we compare the amount of RAM needed on the server side for the CIFAR-10 dataset across methods. The best accuracy is achieved by the {\tt ResNet}\xspace proposed in~\cite{lee2022low}, but it also leads to the highest RAM consumption. With the LoLa setting, the accuracy is indeed lower, but it requires $32$x less RAM. Our method further reduces the server memory by almost a factor of $15$x for the same accuracy. RAM size on the server matters for cloud computing, as pricing increases with memory needs\footnote{\label{azurepricing}\href{https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/}{https://azure.microsoft.com/en-us/pricing/}}, thus low-memory solutions help the scalability of MLaaS.
\section{Limitations and Conclusion}
\label{sec:Conclusion}
\myparagraph{Limitations.} Our proposed framework is still not as performant as the latest non-TFHE-based solutions with regard to running time and accuracy. Moreover, CIFAR-10 and larger datasets remain out of reach for industrial use. Finally, {\tt {TT-TFHE}}\xspace is for the moment limited by the constraints of the Concrete library.
\myparagraph{Conclusion.} In this paper, we presented a new framework, {\tt {TT-TFHE}}\xspace, which greatly outperforms all TFHE-based PPML solutions in terms of inference time while maintaining acceptable accuracy. Our proposed framework is a practical solution for real-world applications, particularly for tabular data and small image datasets like MNIST, as it requires minimal memory/communication cost, provides strong security for the client's data, and is easy to deploy. We believe that this research will spark further investigations into the utilization of truth tables for privacy-preserving data usage, as emphasized by the recent NIST Artificial Intelligence Risk Management Framework~\cite{ai2023artificial}.
\newpage
\label{sec:introduction}
The five-year Wilkinson Microwave Anisotropy Probe (WMAP5)
results~\cite{2008arXiv0803.0715G,2008arXiv0803.0570H,2008arXiv0803.0732H,
2008arXiv0803.0593N,2008arXiv0803.0586D,2008arXiv0803.0547K} give a strong
indication in favour of cosmic inflation over other mechanisms for the
production of primordial fluctuations~\cite{inf25}. Since inflation
generally takes place at high energy, there has recently been a flurry
of activity in constructing models inspired by or derived from string
theory (for recent reviews see \textsl{e.g.}~
Refs.~\cite{McAllister:2007bg,Burgess:2007pz,Kallosh:2007ig,HenryTye:2006uv}).
In a large category of these models, particularly brane-antibrane
inflation~\cite{Silverstein:2003hf,Alishahiha:2004eh,Chen:2004gc,Chen:2005ad,Lorenz:2007ze,Lorenz:2008je,Lorenz:2008et,Langlois:2008qf,Langlois:2008wt}
and D3/D7 inflation~\cite{Dasgupta:2002ew,Dasgupta:2004dw}, the end
result of the inflationary phase is the creation of D-strings (as well
as potentially F-strings~\cite{Copeland:2003bj}), interpreted from the
four-dimensional point of view as cosmic strings. Since cosmic strings
are strongly ruled out as the main originator of primordial
fluctuations, the D-(and indeed F-) string tension is severely
constrained and (under certain assumptions) such that $G_{_{\rm
N}}\mu\lesssim 10^{-6}$ in order to preserve the features of the WMAP5
results~\cite{Pogosian:2003mz}. Nevertheless, the existence and possible
detection of the effects of D-strings in the aftermath of an era of
brane inflation could be a testable prediction of string theory.
\par
D-strings themselves have been conjectured to be in correspondence with
the $D$-term strings of supergravity~\cite{Dvali:2003zh}. One of their
remarkable properties is that they satisfy a
Bogomolnyi-Prasad-Sommerfield (BPS) condition, \textsl{i.e.}~ they have no binding
energy and preserve 1/2 of the original supersymmetries. They also carry
fermionic zero modes and are therefore vorton candidates, leading to
possible interesting phenomenological consequences~\cite{Brax:2006zu}.
\par
The identification between $D$-term strings and D-strings has been made
in the low energy limit, when field gradients are small. Inspired by the
case of open string modes which can be effectively described by a
non-linear action of the Dirac-Born-Infeld (DBI) type, in this paper we
construct models of cosmic strings which depart from the low energy
approximation and generalise the Abelian-Higgs model to a non-linear
one. We will call the resulting topological objects `DBI-cosmic
strings', and they are exact solutions of the generalised non-linear DBI
action. The action we consider is very different from others which have
been discussed in the literature,
Refs.~\cite{Moreno:1998vy,Yang:2000uj,Sarangi:2007mj,Brihaye:2001ag,
Babichev:2006cy,Babichev:2007tn}, and in particular does not lead to
pathological configurations. In the limit of small field gradients our
DBI strings reduce to Abelian-Higgs strings. We construct the DBI
string solutions numerically in a broad range of parameter space, using
two numerical methods: a relaxation method and a shooting algorithm. In
this way, we show, in particular, that DBI strings with a potential term
corresponding to the BPS limit of the Abelian-Higgs model are no longer
BPS. More specifically, $\mu_{2n} \geq 2\mu_{n}$, where $\mu _n $ is
the action per unit time and length for a string with a winding number
$n$: the equality only holds in the low-energy limit. Borrowing language
from the standard cosmic string
literature~\cite{Hindmarsh:1994re,Vilenkin:1994,Rubakov}, the strings
are therefore in the type II regime (though the deviations from BPS are
small, in a sense we will quantify). The network of strings produced
will therefore not contain junctions, and all the strings will have the
same tension $\mu_{n=1}$. In the cosmological context we therefore
expect the DBI-string network to evolve in the standard way
(see \textsl{e.g.}~\cite{Bevis:2007gh}), containing infinite strings and loops,
radiating energy through gravitational waves, and eventually reaching a
scaling solution.
\par
The paper is organised as follows. In
section~\ref{subsec:abelianmodel} we recall the
properties of Abelian-Higgs cosmic strings, while their realisation in the
D3/D7 system is discussed in subsection~\ref{subsec:d3d7system}.
In section~\ref{sec:dbics}, we first briefly review the different
non-linear actions which have been put forward so far in the literature.
Then in subsection~\ref{subsec:dbiaction}, we motivate and present our
proposed non-linear action for cosmic strings, which we expect to be
applicable when field gradients are large.
In section~\ref{sec:dbisol}, we study the DBI-cosmic strings
solutions analytically and numerically.
In subsection~\ref{subsec:analytical}, we present simple analytical
estimates which allow us to roughly guess the form of the DBI string
profiles and, in subsection~\ref{subsec:numerics}, we compute them
numerically by means of two different methods (shooting and
over-relaxation). In section~\ref{sec:conclusions} we briefly summarise
our main findings and discuss our conclusions. Finally, the appendix
gives the full non-linear structure of the DBI cosmic string action.
\section{Abelian-Higgs Cosmic Strings}
\label{sec:abelianhiggscosmicstrings}
\subsection{The Abelian-Higgs model}
\label{subsec:abelianmodel}
We begin by recalling briefly the properties of standard Abelian-Higgs
cosmic strings, and at the same time introduce our notation following
Ref.~\cite{Vilenkin:1994} though we use the signature $(-+++)$.
\par
The Abelian-Higgs model is governed by the action
\begin{equation}
\label{eq:standardaction}
S=-\int {\rm d}^4 x \sqrt{-g}\left[(D_\mu \Sigma)
(D^\mu \Sigma)^{\dagger}+\frac{1}{4}
F_{\mu\nu}F^{\mu\nu}+ V(| \Sigma |) \right] \, ,
\end{equation}
where the potential is given by
\begin{equation}
\label{eq:defpot}
V\left(\left\vert\Sigma \right\vert\right)
= \frac{\lambda}{4}\left(\left\vert \Sigma\right \vert^2
-\eta^2\right)^2\, .
\end{equation}
In Eq.~(\ref{eq:standardaction}), $D_{\mu}$ denotes the covariant
derivative defined by $D_{\mu}\equiv \partial_\mu -iq A_\mu$ with
$A_{\mu }$ the vector potential, $F_{\mu \nu}\equiv \partial _{\mu
}A_{\nu}-\partial _{\nu}A_{\mu}$, and $q$ is the gauge coupling. The
potential is characterised by two free parameters: the coupling
$\lambda>0$ and an energy scale $\eta $. It is useful to introduce the
dimensionless coupling
\begin{equation}
\beta \equiv \frac{\lambda}{2q^2} = \frac{m^2_{\rm s}}{m^2_{\rm
g}}
\label{beta-def} \, ,
\end{equation}
where the Higgs mass is $m_{\rm s}=\sqrt{\lambda} \eta/\sqrt 2$ and the
vector mass $m_{\rm g}=q \eta$.
\par
Due to the non-trivial topology of the vacuum manifold, after gauge
symmetry breaking the model possesses vortex (or cosmic string)
solutions for which the scalar field can be expressed as
\begin{equation}
\label{eq:profileX}
\Sigma \left(r,\theta\right) = \eta
X\left(\rho \right){\rm e}^{in \theta} \, ,
\end{equation}
where we have used the cylindrical coordinates, and the cosmic string is
aligned along the $z$-axis. Here $n$ is the winding number proportional
to the quantised magnetic flux on the string, and we have defined a
rescaled radial coordinate
\begin{equation}
\label{eq:defrho}
\rho \equiv \lambda^{1/2} \eta r \, ,
\end{equation}
with $X\left(\rho \right) \to 1$ at infinity, while $X(0)=0$. In the
radial gauge, the only non-vanishing component of the vector potential
$A_{\mu }$ is $A_{\theta}(\rho)$ with $A_{\theta}(0)=0$. We define
\begin{equation}
Q \equiv n -qA_\theta \, ,
\end{equation}
so that the tension, defined to be the action per unit time and length
$\mu=-S/{\rm d}t {\rm d}z$, can be expressed as
\begin{eqnarray}
\label{eq:actiontx}
\mu_n(\beta)&=&{2\pi {\eta}^2} \int _0^{+\infty}{\rm d}\rho \rho
\left[\left(\frac{{\rm d}X}{{\rm d}\rho}\right)^2
+\frac{Q^2 X^2}{\rho^2}+
\frac{\beta}{\rho^2}\left(\frac{{\rm d}Q}{{\rm d}\rho}\right)^2
+\frac{1}{4}\left(X^2-1\right)^2\right]
\nonumber
\\
&\equiv & 2 \pi \eta^2 g_n\left(\beta^{-1}\right) \, .
\end{eqnarray}
The function $g _n\left(\beta^{-1}\right)$ is plotted in
Fig.~\ref{fig:action_vs_charge} (left panel).
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{action_vs_charge.eps}
\includegraphics[width=7.7cm]{diffaction_vs_charge.eps} \caption{Left
panel: the tension of $n=1$ and $n=2$ Abelian-Higgs strings as a
function of $\beta^{-1}=2q^2/\lambda $. Right panel: the binding energy
$\mu_{2n}-2\mu_n$ for $n=1$ Abelian-Higgs strings as a function of
$\beta^{-1}$. When the BPS condition $\beta=1$ is satisfied, $\mu
_n=2\pi n\eta ^2$ so that $\mu _{2n}-2\mu _n=0$ as can be verified in
the figure.} \label{fig:action_vs_charge}
\end{center}
\end{figure}
In the BPS case, $\beta=1$,
the tension given in Eq.~(\ref{eq:actiontx}) can be re-expressed as
\begin{eqnarray}
\label{eq:reformulaction}
\mu_n &=& {2\pi {\eta}^2} \int _0^{+\infty}{\rm d}\rho \rho \Biggl\{
\left(\frac{{\rm d}X}{{\rm d}\rho}-\frac{QX}{\rho}\right)^2
+\left[\frac{1}{\rho}\frac{{\rm d}Q}{{\rm d}\rho}
-\frac{1}{2}(X^2-1) \right]^2
\nonumber
\\
& & \qquad \qquad \qquad
+\frac{1}{\rho} \frac{\rm d}{\rm d\rho}\left[Q
(X^2-1)\right]\Biggr\} \, ,
\end{eqnarray}
and is minimised for cosmic strings that are
solutions of the BPS equations
\begin{equation}
\frac{{\rm d}X}{{\rm d}\rho}= \frac{XQ}{\rho}\, ,
\quad
\frac{{\rm d}Q}{{\rm d}\rho}= \frac{\rho}{2} (X^2-1)\, .
\end{equation}
On inserting back into Eq.~(\ref{eq:reformulaction}) one finds
\begin{equation}
\mu_n= 2\pi \eta^2 n \,
\end{equation}
so that $g_n\left(1\right)=n$.
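Indeed, on the BPS equations the two squared terms in
Eq.~(\ref{eq:reformulaction}) vanish and only the total derivative
contributes, giving
\begin{equation}
\mu _n = 2\pi \eta ^2\Bigl[Q\left(X^2-1\right)\Bigr]_0^{+\infty}
=2\pi \eta ^2\left[0-n\times \left(-1\right)\right]=2\pi \eta ^2 n\, ,
\end{equation}
where we have used the boundary conditions $X(0)=0$, $Q(0)=n$, $X\to 1$
and $Q\to 0$ at infinity.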
The functions $g_{n=1}\left(\beta ^{-1}\right)$ and $g_{n=2}\left(\beta
^{-1}\right)$ are plotted in the left hand panel of
Fig.~\ref{fig:action_vs_charge} whereas the binding energy
$\mu_2-2\mu_1$ is shown in the right hand panel. In the BPS limit,
$g_n\left(1\right)=n$, and the force between vortices
vanishes~\cite{Jacobs:1978ch}. For $\beta<1$, the strings are type I
with a negative binding energy, while for $\beta>1$ they are type II
with positive binding energy. Type I string therefore attract and can
form bound states or `zippers'~\cite{Bettencourt:1994kc} linked by
junctions. Zippers may form (in a certain regime of parameter space)
when two strings in a network collide,
Refs.~\cite{Copeland:2006eh,Salmi:2007ah}.
A network of type II strings, on the other hand,
contains no junctions and the strings all have the same tension
$\mu_{n=1}$.
We now outline how BPS Abelian-Higgs cosmic strings form in certain
string theory models of inflation. This will motivate our discussion of
DBI strings in section \ref{sec:dbics}. There we will see that DBI
strings with $\beta=1$ have a positive binding energy and hence repel
each other.
\subsection{BPS Abelian-Higgs strings in the D3-D7 system}
\label{subsec:d3d7system}
In string theory models, cosmic strings can form after the end of
inflation~\cite{McAllister:2007bg,Burgess:2007pz,Kallosh:2007ig,
HenryTye:2006uv}. In the D3/D7
system~\cite{Dasgupta:2002ew,Dasgupta:2004dw} in particular, the two
branes attract during the inflationary period
and then eventually coalesce forming D-strings. The whole picture
(inflation and string formation) can be described in terms of the field
theoretical $D$-term hybrid inflation~\cite{Dvali:2003zh}. In this
language the D-strings have been conjectured to be analogous to $D$-term
strings, and furthermore --- as we now outline ---
the strings are BPS Abelian-Higgs strings.
\par
In the D3/D7 system there are three complex
fields~\cite{Dasgupta:2002ew,Dasgupta:2004dw}: the inflaton $\phi$ and
the waterfall fields $\phi^\pm$. In string theory, $\phi$ is the
interbrane distance and $\phi^\pm$ are in correspondence with the open
strings between the branes.
In the supersymmetric
language, the K\"ahler potential is
\begin{equation}
K=-\frac{1}{2}\left(\phi-\phi^{\dagger}\right)^2+\vert \phi^+\vert^2
+\vert\phi^-\vert^2\, ,
\end{equation}
leading to canonically normalised fields. Notice that the
inflaton direction is invariant under the real shift symmetry
$\phi\to \phi+c$, thus guaranteeing the flatness of the inflaton
direction at the classical level~\cite{Hsu:2003cy}. The superpotential is
\begin{equation}
W=\lambda \phi\phi^+\phi^- \, .
\end{equation}
During inflation, the $U(1)$ symmetry under which the waterfall fields
have charges $\pm 1$ is not broken, \textsl{i.e.}~ $\phi^\pm=0$. The scalar
potential is flat and picks up a slope at the one loop level. This is
enough to drive inflation. As $\phi$ decreases, it goes through a
threshold after which the waterfall field $\phi^+$ condenses and the
inflaton vanishes. This corresponds to the coalescence of the D3- and
D7-branes. The effective potential describing the condensation is given
by the $D$-term potential (the F-terms all vanish when $\phi=0,\
\phi^-=0$)
\begin{equation}
V_{D} = \frac{g^2}{2}\left(\vert\phi^+\vert^2 -\xi\right)^2
\label{eq:pot}.
\end{equation}
The term $\xi$ is called a Fayet-Iliopoulos (FI)
term~\cite{Binetruy:2004hh}. As $\phi^+$ condenses and $\langle
\phi^+\rangle=\sqrt \xi$, cosmic strings form interpolating between a
vanishing field in the core and $\sqrt \xi$ at infinity. These cosmic
strings are BPS objects preserving one half of the original
supersymmetries. Their tension is known to be $\mu_n= 2\pi n
\xi$~\cite{Dvali:2003zh}. As a consequence,
there is no binding energy as
$\mu_{2n}=2\mu_n$.
\par
In fact~\cite{Dvali:2003zh}, the $D$-term string model of D-string
formation is nothing but an Abelian-Higgs model with particular
couplings
\begin{equation}
{\cal L}= D_\mu \phi^+\left( D^\mu \phi^+\right)^{\dagger}
+\frac{1}{4g^2}F_{\mu\nu}F^{\mu\nu}+V_{D}\, ,
\end{equation}
where $D_\mu \phi^+= (\partial_\mu -i A_\mu )\phi^+$. Upon rescaling
$A_\mu \to A_\mu/g$ and comparing with (\ref{eq:standardaction}), this
leads to the identification
\begin{equation}
q=g\, , \quad \lambda = 2g^2\, ,\quad \xi=\eta^2\, ,
\end{equation}
corresponding to $\beta=1$ and, hence, a BPS system. This explains why
one recovers $\mu_{2n}=2\mu_n$.
\par
Moreover, the energy scale $\sqrt{\xi}$ can be given a stringy
interpretation. Indeed, one can show that the Fayet-Iliopoulos term is
related to internal fluxes on D7-branes~\cite{Burgess:2003ic}. For this
purpose, let us consider a ten-dimensional metric in the form
\begin{equation}
{\rm d}s^2 =
g_{\mu\nu}{\rm d}X^\mu {\rm d}X^\nu + g_{pq} {\rm d}X^p {\rm d}X^q
+ g_{ij} {\rm d}X^i {\rm d}X^j \, ,
\end{equation}
with $\mu=0,\cdots ,3$, $p=4,\cdots ,7$ and $i = 8,9$. The quantity
$g_{\mu\nu}$ is the four-dimensional metric and $g_{pq}$ and $g_{ij}$
the compactification six-dimensional metric. This corresponds to the
metric on $K^3\times T^2$ compactifications for instance. The internal
dimensions of the D7-brane are the coordinates $a \equiv (\mu,p)$ while
the D3-brane lies along the $\mu$ coordinates. We denote by $T_7$ the
brane tension and $g_\us$ the string coupling. The four-dimensional gauge
coupling is given by
\begin{equation}
\frac{1}{g^2}= \frac{T_7 V_4 \ell_\us ^4}{g_\us}\, ,
\end{equation}
where the string length is $\ell_\us= 1/\sqrt {2\pi \alpha'}$ and $V_4=
\int {\rm d}^4 x \sqrt { \det g_{pq}}$ is the volume of the four compact
internal dimensions of the D7-brane. Consider now a dimensionless
magnetic flux $F_{pq}$ along the internal dimensions $(p,q)=4, \cdots,7$,
of a D7-brane. Then
\begin{equation}
\xi^2=\frac{T_7}{2g_\us g^2}\int {\rm d}^4 x \sqrt{\det g_{pq}}\,
F_{pq}F^{pq}\, .
\end{equation}
Notice that the absolute value of the FI term is not fixed, it can be
decomposed as $\xi^2= \zeta/ \ell_\us^2$ where the prefactor depends on
$F_{pq}$.
\par
In the following, we will consider cosmic string models for which the
canonical kinetic terms have been replaced by a non-linear term of the
DBI type. In effective actions describing string theory phenomena, and
particularly brane dynamics, such a replacement is mandatory as soon as
the gradient terms in the effective action become large. Indeed, the DBI
action usually describes the dynamics of the open strings in
correspondence with the brane motion (such as the 3-3 and 7-7 open
strings in the D3/D7 system). As we have recalled, the formation of
cosmic strings in the D3/D7 system is governed by the 3-7 strings of no
obvious geometric significance. In such a situation, and assuming that
there could be higher order terms correcting the lowest order
Lagrangian, the effect of the higher order corrections to the kinetic
terms (terms in $\vert D\phi^+\vert^{2p},\ p>1$) would be to induce
modifications of the cosmic string profile and of the tension.
\par
In the following, we do not restrict our attention to a particular
setting such as the D3/D7 system. We discuss possible non-linear
extensions of the Abelian-Higgs system and then motivate a specific,
well defined form. We then analyse the departure from the BPS case
induced by the higher order terms.
\section{DBI Cosmic Strings}
\label{sec:dbics}
\subsection{Non-standard actions for cosmic strings}
\label{subsec:dbiactioncs}
As discussed in the previous section, we are interested in situations in
which gauged cosmic strings form when higher order corrections to the
kinetic terms in the action cannot be neglected. In the absence of an
explicit derivation from, say, string theory, we take a phenomenological
approach (which, however, is strongly inspired by string theory). This
is presented in subsection~\ref{subsec:dbiaction}. We will construct an
action [given in Eq.~(\ref{eq:DBIaction}) and reproduced below] which
satisfies the following two criteria: Firstly, the Abelian-Higgs limit
should be recovered when gradients are small. In particular, the action
should be a continuous deformation of the Abelian-Higgs model. Secondly,
the resulting cosmic string solutions should have no pathological and/or
singular behaviour as the model becomes more and more non-linear (in
field gradients).
In the remainder of this subsection, we compare our action,
Eq.~(\ref{eq:DBIactiondim}):
\begin{eqnarray}
S &\propto & \int {\rm d}^4x \Biggl\{\sqrt{-\det \left[ g_{\mu
\nu} +({D}_{(\mu}{\phi}) ({D}_{\nu)}
{\phi})^{\dagger}+{F}_{\mu\nu} \right]}-\sqrt{-g} \nonumber \\
& & +\sqrt{-g} V\left(|{\phi}|\right)\Biggr\} \, .
\end{eqnarray}
to other non-linear actions which have been considered in the literature
and which do not satisfy the above criteria.
\par
In Refs.~\cite{Moreno:1998vy,Yang:2000uj,Brihaye:2001ag} an attempt to
construct a non-linear model for (electrically) charged vortices in
(2+1) dimensions uses a hybrid approach with a (truncated) Born-Infeld
action for the gauge field, a standard linear action for the Higgs
field, and a Chern-Simons term\footnote{No charged vortex solutions
exist when the Chern-Simons term is absent.}:
\begin{eqnarray}
S &=& \int {\rm d}^4x\sqrt{-g}\Biggl[
\sigma^2\Biggl(\sqrt{1+ \frac{1}{2\sigma^2}
F_{\mu\nu}F^{\mu\nu} }-1\Biggr) +\frac{\kappa}{4\pi}
\epsilon^{\mu\nu\rho} A_\mu F_{\nu\rho}
\nonumber \\
& & +\frac{1}{2}\vert D\phi\vert^2 +V(\phi)\Biggr]\, ,
\end{eqnarray}
where $\sigma $ is a parameter of dimension two and $\vert D\phi\vert^2=
(D_\mu \phi) (D^\mu \phi)^{\dagger}$. At a threshold $\sigma=\sigma_{\rm
c}$ corresponding to the very non-linear regime, the gauge field becomes
singular at the origin of the vortex whilst, below the threshold, no
solution exists. Incorporating the Higgs kinetic terms into the square
root, while dropping the Chern-Simons term, leads to the following
expression
\begin{eqnarray}
S &=& \int {\rm d}^4x \sqrt{-g}\Biggl\{
\sigma ^2 \Biggl[\Biggl(1+\frac{1}{2\sigma ^2}F_{\mu \nu}F^{\mu \nu}
+\frac{1}{\sigma^2}\left\vert D\phi\right\vert ^2
+\frac{1}{\sigma ^4}\tilde{F}_{\mu}\tilde{F}_{\nu}D^{\mu}\phi^{\dagger}
D^{\nu}\phi
\nonumber \\ & &
-\frac{1}{2\sigma ^4}\left\vert D \phi\right\vert^2
F_{\mu \nu}F^{\mu
\nu}\Biggr)^{1/2}-1\Biggr]-V\left(\phi\right)\Biggr\}\, ,
\label{tr1}
\end{eqnarray}
where $\tilde{F}_\mu\equiv \epsilon_{\mu\nu\rho} F^{\nu\rho}/2$. With
such a very particular form (which differs from the one we will propose
shortly), one does not find finite energy solutions.
\par
A model for (global) cosmic strings was proposed in
Ref.~\cite{Sarangi:2007mj} with an action given by
\begin{equation}
S=-\int {\rm d}^4x\sqrt{-g}\left[\sqrt{1+ \vert \partial
\phi\vert^2}-1 +V(\phi)\right]\,
\end{equation}
(this is identical to Eq.~(\ref{tr1}) in the global limit). For this
model the solutions become multi-valued and undefined at the origin as
soon as the magnitude of $V\left(\phi\right)$ becomes sufficiently
large~\cite{Sarangi:2007mj}.
\par
In view of these negative results, where singularities and pathologies
abound, one could be tempted to think that non-linear cosmic string
actions all lead to these problems. Fortunately, a well-behaved action
has been suggested by Sen~\cite{sen} and studied in
Ref.~\cite{Kim:2005tw} in the case of D-strings obtained at the end of
D-${\bar{\rm D}}$ inflation. In such a system, hybrid inflation occurs
and the r\^ole of the waterfall field is played by the open string
tachyon $T$ with a charge $\pm 1$ under the $U(1)$ gauge groups of the
D3- (respectively $\bar{\rm{D}}3$-) brane. When the two branes coincide
the effective action reads
\begin{equation}
S=-T_3\int {\rm d}^4x \, V\left(T,T^{\dagger}\right)\left[
\sqrt {\det{\left( - g^+\right)}}
+\sqrt{\det{\left(- g^-\right)}}\right]\, ,
\end{equation}
where $ g^{\pm}_{\mu\nu}= g_{\mu\nu} \pm \ell_\us^2 F_{\mu\nu}+\left(D_\mu
TD_\nu T^{\dagger} + D_\mu T^{\dagger} D_\nu T\right)/2$ and $V$ is the
tachyon potential. Since $\det (-
g^+)=\det (-g^-)$ (see the Appendix), the action reduces to
\begin{equation}
S=-2T_3\int {\rm d}^4x V\left(T,T^{\dagger}\right)
\sqrt{\det{( - g^+)}}\, ,
\end{equation}
and it admits BPS vortex solutions.
\par
Topological defects in models with general non-linear kinetic terms were
studied in Refs.~\cite{Babichev:2006cy,Babichev:2007tn}. The action
proposed in~\cite{Babichev:2006cy} for global topological defects
differs from the tachyon action above (in the global limit) as the
potential is added to the generalised kinetic terms:
\begin{equation}
\label{kdefect}
S=\int {\rm d}^4 x\left[M^4 K\left(\frac{X}{M^4}\right)
- V(\phi)\right]\, .
\end{equation}
Here
\begin{equation}
X\equiv \frac 12(\partial_\mu\phi_a\partial^\mu\phi_a)
\end{equation}
is the standard kinetic term, $K(X)$ is some non-linear function,
$\phi_a$ is a set of scalar fields, $M$ has dimensions of mass, and the
potential term provides the symmetry breaking. One of the main
restrictions imposed in~\cite{Babichev:2006cy} on the form of the
non-linear function $K(X)$ is that $K(X)$ should have a canonical
asymptotic form, $K(X)\sim X$ as $X\to 0$. However, for large gradients
$K(X)$ could deviate considerably from the canonical kinetic terms. The
former requirement implies a non-pathological behaviour of solutions
far from the defect core, while the different possibilities for $K(X)$
at infinity leads to deviations of the defect from the standard case
inside the core. The action (\ref{kdefect}) leads to non-pathological
solutions for so-called $k$-defects --- domain walls, cosmic strings and
monopoles --- whose properties can differ considerably from those of
standard defects.
\par
A gauge version of action (\ref{kdefect}) was considered
in Ref.~\cite{Babichev:2007tn}: for a complex scalar field
\begin{equation}
\label{kvortex}
S=\int {\rm d}^4 x\left[M^4 K\left(\frac{X}{M^4}\right)
- V(\phi) - \frac{1}{4}F^{\mu\nu}F_{\mu\nu}\right]
\end{equation}
with
\begin{equation}
X\equiv \frac{1}{2} (D_\mu \phi)(D^\mu \phi)^\dagger.
\end{equation}
It was shown that non-pathological cosmic string solutions exist at
least for some choices of the non-linear function
$K(X)$~\cite{Babichev:2007tn}.
\par
In the following we will motivate and study a non-linear extension of
the Abelian-Higgs model which retains some of the properties of the tachyon
and $k$-defect models. The potential will be additive as in the
$k$-defect case while the kinetic terms have a DBI form as in the
tachyon case. However, the kinetic terms are {\it not} a function solely
of $X$ anymore: they differentiate between the radial and angular
gradients of the defects. This springs from the origin of the kinetic
terms as induced from the normal motion of a D3 brane embedded in a
larger space-time.
\subsection{A DBI Action for Cosmic Strings}
\label{subsec:dbiaction}
We now turn to the action that we propose in this article. Consider a
brane model in which cosmic strings appear as deformations of a
brane. (In a sense the brane becomes curved with a puncture at the
location of the string, as we discuss.) To do so, consider a
ten-dimensional setting as is natural for brane models derived from or
inspired by string theory. We choose a non-warped compactification and
write the ten-dimensional metric in cylindrical form
\begin{equation}
{\rm d}s^2_{10}\equiv g_{AB}^{10} {\rm d}X^A {\rm d}X^B
= {\rm d}s^2_4 + 2g_{\alpha\bar \beta}
{\rm d}Z^\alpha {\rm d}\bar Z^{\bar \beta}\, ,
\end{equation}
where
\begin{equation}
{\rm d}s^2_4\equiv g_{\mu \nu}{\rm d}X^\mu {\rm d}X^\nu
= -({\rm d}X^0)^2+ {\rm d}R^2 + R^2 {\rm d}\Theta^2
+{\rm d}Z^2 \, .
\end{equation}
The metric along the internal dimensions $g_{\alpha \bar \beta}$
($\alpha=5,6,7$) is kept arbitrary, \textsl{i.e.}~ Hermitian and positive
definite, and we have assumed that the six-dimensional manifold is
complex (it could be a Calabi-Yau manifold) therefore having complex
coordinates. The complex coordinates are crucial to analyse cosmic
strings.
\par
Consider the DBI action for a three-brane embedded along the
first four coordinates
\begin{equation}
S=-T\int {\rm d}^4x \sqrt{-\det{\left(\tilde g_{\mu \nu}+\ell_\us^2
{\cal F}_{\mu \nu}\right)}} -\int {\rm d}^4 x\sqrt{-g}
V\left(\sqrt T Z^\alpha\right)\, ,
\end{equation}
where $T$ is the brane tension, ${\cal F}_{\mu \nu}$ is the field
strength on the brane (and has dimension two), distances have dimension
minus one and ${\cal A}_\mu$ has dimension one. We have included a
potential for the deformations $Z^\alpha$ of the normal directions to
the three-branes.
As suitable when the normal directions are charged under the
world-volume gauge group [in this case the local $U(1)$ on the brane],
we include a covariant derivative in the definition of the induced
metric
\begin{equation}
\tilde g_{\mu \nu}= g_{\mu \nu} + g_{\alpha\bar\beta}\left({\cal D}_\mu
Z^\alpha {\cal D}_\nu \bar Z^{\bar \beta}+{\cal D}_\mu \bar Z^{\bar\beta}
{\cal D}_\nu Z^{\alpha}\right)\,
\label{hoinduced}
\end{equation}
with
\begin{equation}
{\cal D}_\mu=\partial_\mu -i\hat{q}{\cal A}_\mu \, .
\end{equation}
Clearly, when the gauge fields vanish, $\tilde{g}_{\mu \nu}$ is simply
the induced metric on the brane.
A similar extension of the induced metric to charged fields has already
been introduced in the context of N-coinciding D-branes~\cite{Myers}
with the corresponding non-Abelian $SU(N)$ gauge theory. There the
brane coordinates are in the adjoint representation and have kinetic
terms involving the $SU(N)$ covariant derivative~\cite{Myers}. We extend
this procedure to the DBI cosmic string situation with a $U(1)$ gauge
group\footnote{In the D-brane context, the brane fields do not carry any
$U(1)$ charge as they belong to the adjoint representation.}
\par
When the six-dimensional metric is nearly flat
$g_{\alpha\bar\beta}=\delta_{\alpha \bar \beta}$ locally, the action
becomes
\begin{eqnarray}
S &=& -T\int {\rm d}^4x \Biggl\{\sqrt{-\det{\left[g_{\mu \nu}
+\left({\cal D}_\mu Z^\alpha {\cal
D}_\nu \bar Z_{\bar \alpha}+{\cal D}_\mu \bar Z^{\bar\alpha} {\cal D}_\nu
Z_{\alpha}\right)+\ell_\us^2 {\cal F}_{\mu \nu}\right]}}
\nonumber \\ & & \qquad \qquad \qquad
-\sqrt{-g}\Biggr\}
-\int {\rm d}^4 x\sqrt{-g}\, V\left(\sqrt T Z^\alpha\right)\, ,
\end{eqnarray}
where, as usual, we have subtracted the action of the ``flat'' brane so
that the Abelian-Higgs model is recovered when gradients are small. In
the following we suppose that only one normal direction is excited and
define $\Phi \equiv Z^1$. The resulting action is given by
\begin{eqnarray}
S &=&-T\int{\rm d}^4x \Biggl\{\sqrt{-\det
\left[ g_{\mu \nu} + ({\cal D}_{ ( \mu } \Phi) ({\cal D}_{\nu)}
\Phi)^{\dagger}
+ \ell_s^2 {\cal F}_{\mu\nu} \right] } -\sqrt{-g}
\nonumber \\ & &
\qquad \qquad \qquad
+\sqrt{-g}\frac{V(\sqrt{T}|\Phi|)}{T} \Biggr\} \, .
\label{eq:DBIaction}
\end{eqnarray}
Notice that, in the above equation, the potential [\textsl{i.e.}~ $V(x)$ as a
function of $x$] is still given by the expression~(\ref{eq:defpot}) and,
therefore, contains the parameter $\lambda $. When the complex scalar
field $\Phi$ vanishes, Eq.~(\ref{eq:DBIaction}) describes Born-Infeld
electrodynamics~\cite{Born:1934gh}.
\par
We now discuss action~(\ref{eq:DBIaction}) in detail. In particular we
compare its properties to those of the Abelian-Higgs
action~(\ref{eq:standardaction}) discussed in
section~\ref{sec:abelianhiggscosmicstrings}, and then we construct the
static cosmic string solutions of the action.
\par
A first important property of Eq.~(\ref{eq:DBIaction}) is that, to
leading order in derivatives, it reduces to the standard
action~(\ref{eq:standardaction}) on identifying
\begin{equation}
\Sigma = \sqrt{T} \Phi \, ,
\label{sigmaphilink}
\end{equation}
and redefining the charge according to the following expression
\begin{equation}
q= \frac{ \hat{q}}{\sqrt{T} \ell_\us^2}
\end{equation}
together with the gauge field
\begin{equation}
{\cal A}_\mu= \frac{A_\mu}{\sqrt{T} \ell_\us^2}\, .
\end{equation}
Hence, if the spatial derivatives characterising DBI-strings are small
(we shall discuss whether or not this is the case below), their
properties should be very similar to Abelian Higgs strings. More
generally, however, and as discussed in detail in the Appendix where we
calculate the determinant explicitly, Eq.~(\ref{eq:DBIaction}) contains
terms of higher order in covariant derivatives as well as numerous
different mixing terms between ${\cal F}$ and ${\cal D}$ (suitably
contracted). These extra terms could significantly change the string
solution and the resulting strings' properties relative to the Abelian
Higgs case. It follows from this that our action is very different from
that considered by Sarangi in Ref.~\cite{Sarangi:2007mj}, even in the
global case. As a consequence we will find non-pathological cosmic
strings solutions with a continuous limit to Abelian-Higgs strings.
\par
We now focus on the cosmic string solutions of
Eq.~(\ref{eq:DBIaction}). For this purpose, first it is useful to pass
to dimensionless variables, denoted with a hat. Explicitly we set
\begin{eqnarray}
\hat{\Phi}\equiv \frac{\Phi}{\ell_\us}\, ,
\quad
\hat{\cal F}_{\mu \nu}\equiv \ell_\us^2 {\cal F}_{\mu \nu}\, ,
\quad
\hat{\cal D}_\mu\equiv \ell_\us {\cal D}_{\mu}\, ,
\quad
\hat{\eta}\equiv \frac{\eta}{\sqrt{T}\ell_\us}\, ,
\quad
\hat{r}\equiv \frac{r}{\ell_\us}\, ,
\label{dimensionle}
\end{eqnarray}
as well as
\begin{equation}
\hat{V}(|\hat{\Phi}|)\equiv
\frac{\hat{\lambda}}{4} \left(\hat{\Phi}^2 -
\hat{\eta}^2\right)^2\, ,
\end{equation}
where $\hat{\lambda}\equiv\lambda T\ell_\us^4$, so that the action becomes
\begin{eqnarray}
S &=& -T\ell_\us^4\int {\rm d}^4x \Biggl\{\sqrt{-\det \left[ g_{\mu
\nu} +(\hat{\cal D}_{(\mu} \hat{\Phi}) (\hat{\cal D}_{\nu)}
\hat{\Phi})^{\dagger}+\hat{\cal F}_{\mu\nu} \right]}-\sqrt{-g}
\nonumber \\ & & +\sqrt{-g} \hat V\left(|\hat{\Phi}|\right)\Biggr\}
\, . \label{eq:DBIactiondim}
\end{eqnarray}
We now follow the procedure outlined in section
\ref{subsec:abelianmodel} for Abelian-Higgs cosmic strings, but now for
action (\ref{eq:DBIactiondim}). In dimensionless cylindrical
coordinates, ${\rm d}s^2=-{\rm d}\hat t^2+{\rm d} \hat{r}^2
+\hat{r}^2{\rm d}\theta ^2+{\rm d}\hat z^2$ and in the radial gauge
($\hat{A}_{\hat{r}}=0$), the cosmic string profile is
\begin{equation}
\hat{\Phi}=\hat{\eta} X(\rho) {\rm e}^{in\theta}\, ,\quad
Q(\rho)=n-\hat{q}\hat{\cal A}_\theta\left(\hat{r}\right)\, ,
\label{newprof}
\end{equation}
where we have defined a new radial coordinate $\rho $ by the following
expression
\begin{equation}
\rho \equiv{\hat{\lambda}}^{1/2} \hat{\eta }\hat{r}\,
\label{newdef}
\end{equation}
which should be compared to Eq.~(\ref{eq:defrho}).
The boundary conditions on the fields are
\begin{eqnarray}
\lim _{\rho \rightarrow 0} X=0\, , \quad
\lim _{\rho \rightarrow 0} Q=n\, , \quad
\lim _{\rho \rightarrow \infty} X=1\, , \quad
\lim _{\rho \rightarrow \infty} Q=0\, .
\end{eqnarray}
Substituting Eqs.~(\ref{newprof}) and (\ref{newdef}) into
(\ref{eq:DBIactiondim}) as well as using (\ref{dimensionle}), we
find that\footnote{Note that $\gamma^{-2}={\cal D}$
where ${\cal D}$ is discussed in the Appendix.}
\begin{eqnarray}
\left( - \det{g_{\mu \nu}} \right) \gamma^{-2} &= &
-\det \left[ g_{\mu \nu} +(\hat{\cal D}_{(\mu} \hat{\Phi}) (\hat{\cal D}_{\nu)}
\hat{\Phi})^{\dagger}+\hat{\cal F}_{\mu \nu} \right]\, ,
\end{eqnarray}
where we have defined the $\gamma $ factor by
\begin{eqnarray}
\label{eq:defgamma}
\gamma ^{-2}
&\equiv &
\left[1
+\alpha\left(\frac{{\rm d}X}{{\rm d}\rho}\right)^2\right]
\left(1+\frac{\alpha Q^2 X^2}{\rho^2}\right)
+\frac{\alpha}{\rho^2}
\left(\frac{{\rm d}Q}{{\rm d}\rho}\right)^2 \, .
\end{eqnarray}
Hence the string tension defined as $-S/ \ell_\us^2 {\rm d} \hat z{\rm d}
\hat t$ is given by
\begin{eqnarray}
\mu_n &=& \frac{4\pi {{\eta}^2}}{\alpha}
\int _0^{+\infty}{\rm d}\rho \rho
\left\{\sqrt{
\left[1+ \alpha \left(\frac{{\rm d}X}{{\rm d}\rho}\right)^2\right]
\left(1+\alpha\frac{Q^2 X^2 }{\rho^2}\right)
+\frac{\alpha \beta}{\rho^2}
\left(\frac{{\rm d}Q}{{\rm d}\rho}\right)^2}
\right.
\nonumber \\
& & \left.
\vphantom{\sqrt{
\left[1+ \alpha \left(\frac{{\rm d}X}{{\rm d}\rho}\right)^2\right]
\left[1+\alpha\frac{Q^2 X^2 }{\rho^2}\right]
+\frac{\alpha \beta}{\rho^2}
\left(\frac{{\rm d}Q}{{\rm d}\rho}\right)^2}}
-1+\frac{\alpha}{8}(X^2-1)^2 \right\} \, ,
\label{eq:genten}
\end{eqnarray}
where
\begin{equation}
\label{eq:defalpha}
\alpha \equiv 2\hat{\lambda}\hat{\eta }^4\, ,
\end{equation}
and, as in the Abelian-Higgs case,
\begin{equation}
\beta=\frac{\hat{\lambda}}{2\hat{q}^2}= \frac{\lambda}{2q^2}\, .
\end{equation}
Eq.~(\ref{eq:genten}) is the main result of this section and represents
the non-linear DBI generalisation of the linear Abelian-Higgs model: it
should be compared to Eq.~(\ref{eq:actiontx}). Notice that it involves
the single additional parameter $\alpha$ which measures the deformation
from the Abelian-Higgs model, since Eq.~(\ref{eq:genten}) reduces to the
tension of Abelian Higgs strings in the linear limit $\alpha \to 0$. As
discussed in section \ref{subsec:abelianmodel}, Abelian-Higgs strings
are BPS when $\beta=1$, and hence for $\beta=1$, DBI-cosmic strings with
tension given by Eq.~(\ref{eq:genten}) are a continuous deformation of
the BPS Abelian-Higgs strings. This property is, of course, very
important and constitutes an additional motivation for the action given
in Eq.~(\ref{eq:DBIactiondim}).
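Explicitly, the square root in Eq.~(\ref{eq:genten}) expands as
\begin{equation}
1+\frac{\alpha }{2}\left[\left(\frac{{\rm d}X}{{\rm d}\rho }\right)^2
+\frac{Q^2X^2}{\rho ^2}
+\frac{\beta }{\rho ^2}\left(\frac{{\rm d}Q}{{\rm d}\rho }\right)^2\right]
+{\cal O}\left(\alpha ^2\right)
\end{equation}
at first order in $\alpha $, so that the overall factor $1/\alpha $
cancels and Eq.~(\ref{eq:actiontx}) is indeed recovered, up to
corrections of order $\alpha $.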
\par
Finally, we note that the argument of the cosmic string profile $\rho$
is also identical to its counterpart in the Abelian-Higgs case, whatever
the value of $\alpha$. Hence we will be able to find continuous
deformations of the cosmic string profiles parameterised by $\alpha$ and
depending on the universal variable $\rho$.
\section{DBI String Solutions}
\label{sec:dbisol}
\subsection{Analytical Estimates}
\label{subsec:analytical}
Having established the model and its action, we now turn to the
solutions of the equations of motion. The DBI cosmic string equations
follow from Eq.~(\ref{eq:genten}) and read
\begin{eqnarray}
& & \frac{{\rm d}}{{\rm d}\rho}\left[\rho \gamma
\left(1+\frac{\alpha Q^2 X^2}{\rho^2}
\right)\frac{{\rm d}X}{{\rm d}\rho}\right] =
\frac{\rho}{2}(X^2-1)X
+\gamma \frac{Q^2X}{\rho}\left[1+\alpha
\left(\frac{{\rm d}X}{{\rm d}\rho}\right)^2
\right]\, ,\nonumber
\label{eq:scalar}
\\ \\
& &\frac{{\rm d}}{{\rm d}\rho}\left(\frac{\gamma}{\rho}
\frac{{\rm d}Q}{{\rm d}\rho}\right)
=\frac{\gamma Q}{\beta \rho }
\left[1+\alpha \left(\frac{{\rm d}X}{{\rm d}\rho}\right)^2
\right]X^2 \, ,
\label{eq:gauge}
\end{eqnarray}
for the scalar field and gauge fields, respectively, where $\gamma $ is
defined in Eq.~(\ref{eq:defgamma}). In the Abelian Higgs limit,
$\alpha=0$, Eqs.~(\ref{eq:scalar}) and (\ref{eq:gauge}) reduce to the
standard cosmic string field equations for which $\gamma=1$. Deviations
from Abelian Higgs strings will occur if $\gamma <1$. Notice that
here the fields are purely space-dependent. For time-dependent fields,
and particularly in DBI inflationary cosmology with inflaton $\phi(t)$
whose dynamics is described by action (\ref{eq:DBIactiondim}) in the
global limit, $\gamma$ is a generalisation of the cosmological
Lorentz factor. Indeed, as can be seen from Eq.~(\ref{eq:defgamma}) in
the case when $g_{\mu \nu}$ describes a homogeneous and isotropic
manifold, $\gamma^{-2} =1-{\dot{\phi}^2}/{T(\phi)}$
where $T(\phi)$ is related to the metric of the extra-dimensions. The
difference in sign between spatial and temporal derivatives is
responsible for the fact that deviations from standard cosmology
($\gamma =1$) occur here when $\gamma \rightarrow +\infty$ rather than
$\gamma \ll 1$.
\par
Unfortunately, as is clear from Eqs.~(\ref{eq:scalar})
and~(\ref{eq:gauge}), the DBI cosmic string equations cannot be solved
exactly. We have therefore carried out a full numerical integration of
the equations of motion. For convenience, we will focus on the deformed
BPS case where $\beta=1$ and $\alpha\ne 0$ (though $\beta\neq 1$ and
$\alpha\ne 0$ can also be treated with the numerical methods used here).
\par
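As an illustration of the shooting strategy, the following minimal
sketch, which is not the code used to produce the results of this paper,
integrates the $\alpha \to 0$ limit of Eqs.~(\ref{eq:scalar})
and~(\ref{eq:gauge}) with $\beta =1$ and $n=1$. It adjusts the two free
coefficients of the small-$\rho $ behaviour derived below, $X\simeq
A\rho ^n$ and $Q\simeq n-B\rho ^2$, so that $X\to 1$ and $Q\to 0$ at a
large matching radius; the DBI case proceeds in the same way once
Eqs.~(\ref{eq:scalar}) and~(\ref{eq:gauge}) are recast in explicit
first-order form, and the cutoff radius and initial guesses may require
tuning.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import root

n = 1                          # winding number
rho0, rhomax = 1e-4, 10.0      # inner cutoff and matching radius

def rhs(rho, y):
    # y = (X, X', Q, Q'), alpha -> 0 and beta = 1 limit
    X, dX, Q, dQ = y
    d2X = -dX/rho + 0.5*(X**2 - 1.0)*X + Q**2*X/rho**2
    d2Q = dQ/rho + Q*X**2
    return [dX, d2X, dQ, d2Q]

def residuals(params):
    A, B = params
    # series behaviour near the origin: X ~ A rho^n, Q ~ n - B rho^2
    y0 = [A*rho0**n, n*A*rho0**(n-1), n - B*rho0**2, -2.0*B*rho0]
    sol = solve_ivp(rhs, (rho0, rhomax), y0, rtol=1e-10, atol=1e-12)
    X, _, Q, _ = sol.y[:, -1]
    return [X - 1.0, Q]        # enforce X -> 1 and Q -> 0 at rhomax

fit = root(residuals, x0=[0.6, 0.25])  # B = 1/4 is exact in the BPS limit
print(fit.x, residuals(fit.x))
\end{verbatim}
\par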
Before discussing the numerical results, we analyse the asymptotic
behaviour of the fields in order to obtain a rough understanding of the
solution. We will consider the two limits $\rho\to 0$ and
$\rho\to\infty$ and will address two issues. The first one is the
non-existence of singularities in the core of the cosmic string. The
second one will be the determination of the shape of the cosmic string
profile both at the origin and at infinity. In particular we will find
that the functional form of the string profile is similar to the
Abelian-Higgs case inside the core, the only difference springing from
$\alpha$-dependent factors.
\par
Consider first the $\rho \rightarrow 0$ limit and let us examine the
possibility of singular DBI strings deep in the string core. As already
discussed, the DBI features of the solutions depend on $\gamma$. In
particular, extreme deviations from the Abelian-Higgs case would appear
if $\gamma \to 0$ at the origin. This can only happen if the derivative
of $X$ and/or $Q$ become extremely large, \textsl{i.e.}~ the string becomes
singular. Let us first assume that the gradient of $Q$ becomes large and
dominates the $\gamma$ factor, \textsl{i.e.}~ $\gamma\sim \rho {\rm d}\rho/
\left(\sqrt{\alpha}{\rm d}Q\right)$. The gauge equation becomes
inconsistent, as the left-hand side of (\ref{eq:gauge}) vanishes while the
right-hand side does not. Hence there is no regime where the gradient
of $Q$ is arbitrarily large and leads to $\gamma\to 0$ at the origin. We now
examine the possibility that $X$ becomes singular at the origin with a
large gradient. In this limit, we find
\begin{equation}
\gamma \sim \frac{\rho}{\alpha X\left({\rm d}X/{\rm d}\rho\right)}
\left[1-
\frac{1}{2}\frac{1}{\alpha \left({\rm d}X/{\rm d}\rho\right)^2}
-\frac{1}{2n^2}\frac{\rho ^2}{\alpha X^2}\right]\, ,
\end{equation}
where $Q\sim n$ close to the origin and we have expanded $\gamma$ in
$1/\left[\alpha \left({\rm d}X/{\rm d}\rho\right)^2\right] \ll 1$ and
$\rho^2/\left(\alpha X^2\right) \ll 1$, this last condition being the
only one compatible with the condition on the derivative of $X$. Working
to first order in these two parameters, the profile is determined by
\begin{equation}
\frac{{\rm d}}{{\rm d}\rho}\left\{X
\left[\frac{1}{n^2}\frac{\rho ^2}{\alpha X^2}
-\frac{1}{\alpha \left({\rm d}X/{\rm d}\rho\right)^2}\right]\right\}
=\frac{{\rm d}X}{{\rm d}\rho}\left[
\frac{1}{\alpha \left({\rm d}X/{\rm d}\rho\right)^2}
-\frac{1}{n^2}\frac{\rho ^2}{\alpha X^2}\right]\, .
\end{equation}
Notice that to zeroth order in the two small parameters, the equation is
tautological. In the limit $\rho \to 0$ with an ansatz $X\sim
\rho^\delta$ the equation of motion is satisfied for $\delta^2=n^2$. The
only solution satisfying $X(0)=0$ is obtained for $\delta=n$ which has
finite derivative at the origin. This contradicts our premises and, as a
result, we conclude that singular DBI strings do not exist.
\par
Having shown that the strings are not singular, we will now show that
the functional form of the solutions is similar to the ones in the
Abelian-Higgs case. Let us assume that, in the limit $\rho\to 0$, the
DBI solutions are of the form
\begin{equation}
X(\rho)=A_{_{\rm DBI}}\rho^p \, , \quad Q(\rho)=n-B_{_{\rm
DBI}}\rho^q\, , \label{asympt0}
\end{equation}
where $p$ and $q$ are two constants which we will determine below, while
$A_{_{\rm DBI}}$ and $B_{_{\rm DBI}}$ are two constants to be obtained
by numerical integration; and we have taken into account the boundary
conditions at $\rho=0$: $X(0)=0$ and $Q(0)=n$. By direct substitution of
the asymptotic form (\ref{asympt0}) into the equations of motion
Eqs.~(\ref{eq:scalar}), (\ref{eq:gauge}) and taking the limit $\rho\to
0$, one can check that the correct asymptotic form for $X$ and $Q$
reads,
\begin{equation}
X(\rho)=A_{_{\rm DBI}}\rho^n \, , \quad Q(\rho)=n-B_{_{\rm
DBI}}\rho ^2\, . \label{XQ0}
\end{equation}
Thus $p=n$ and $q=2$ and, as guessed above, the only difference between
DBI cosmic strings and Abelian-Higgs cosmic strings close to the origin
is in the numerical values of the prefactors $A_{_{\rm DBI}}$ and
$B_{_{\rm DBI}}$, which are $\alpha$-dependent. In particular, these
coefficients become large for large $\alpha$, implying that away from the
origin, but still at finite values of $\rho$, the $\gamma$ factor becomes
noticeably different from one, \textsl{i.e.}~ the cosmic strings are in a mild DBI
regime.
\par
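A quick way to check the exponents quoted in Eq.~(\ref{XQ0}) is the short
power-counting sketch below (in Python with sympy; it is our illustration,
not part of the original analysis). It keeps only the leading terms of
Eqs.~(\ref{eq:scalar}) and~(\ref{eq:gauge}) as $\rho\to 0$, freezes $\gamma$
at a finite value $\gamma_0$, and reads off the indicial exponents; the
truncations made are spelled out in the comments.
\begin{verbatim}
import sympy as sp

rho, p, q, n, gamma0 = sp.symbols('rho p q n gamma0', positive=True)

# Scalar equation near rho = 0: set Q ~ n and X ~ rho**p, freeze gamma at its
# finite value gamma0, drop the (subleading) potential term, and divide out
# the alpha-dependent factor, which is the same on both sides at leading
# order, since alpha*Q**2*X**2/rho**2 = alpha*(dX/drho)**2 for this ansatz.
X = rho**p
scalar_lead = sp.diff(rho*gamma0*sp.diff(X, rho), rho) - gamma0*n**2*X/rho
print(sp.expand(scalar_lead/rho**(p - 1)))
# vanishes only for p**2 = n**2; regularity (X(0) = 0) selects p = n

# Gauge equation near rho = 0: the source on the right-hand side is of
# higher order in rho, so only the homogeneous operator matters.
Q = n - rho**q
gauge_lead = sp.diff((gamma0/rho)*sp.diff(Q, rho), rho)
print(sp.expand(gauge_lead/rho**(q - 3)))
# vanishes only for q = 0 or q = 2; Q(0) = n with q > 0 selects q = 2
\end{verbatim}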
From Eqs.~(\ref{eq:defgamma}) and~(\ref{XQ0}) it immediately
follows that $\gamma$ is always finite at $\rho=0$. This is a salient
point as it confirms that the cosmic strings constructed with a
DBI action are non-singular at the origin. This is of course yet
another argument supporting the fact that Eq.~(\ref{eq:genten})
represents the natural DBI generalisation for cosmic strings.
\par
More precisely we find that as $\rho\to 0$ there are two possible
regimes: the standard regime where $|1-\gamma|\ll 1$ and a mild
DBI regime where $\gamma < 1$ but finite. Let us first analyse the
global string, for which $Q=0$. It is clear from
Eqs.~(\ref{eq:defgamma}) and~(\ref{XQ0}) that for $n\geq 2$ the
mild DBI regime cannot be realised around $\rho\to 0$. Note,
however, that for $\alpha\gg 1$ we find numerically that $A_{_{\rm
DBI}}\gg 1$, which implies that away from the origin the gradient
${\rm d}X/{\rm d}\rho$ becomes large, so that the solution is in
the mild DBI regime. For $n=1$ the mild DBI regime is valid
starting from $\rho=0$, if $\alpha\gg 1$. For $\alpha\ll 1$ the
regime is always of non-DBI type, independently of $n$.
\par
In the case of gauge strings, the situation is similar in the
limit $\rho\to 0$. Again for $\alpha\ll 1$ the non-DBI regime is
realised. For $\alpha\gg 1$ we find numerically that $A_{_{\rm
DBI}}$ and $B_{_{\rm DBI}}$ in Eq.~(\ref{XQ0}) are large. Thus the
gradient ${\rm d}Q/{\rm d}\rho$ is large too, while the terms
proportional to ${\rm d}X/{\rm d}\rho$ and to $Q^2X^2$ are large
only for $n=1$, and they are small in a small region around
$\rho=0$ for $n\geq 2$. However, these terms become large away
from the origin, since a large constant $A_{_{\rm DBI}}$ implies
that ${\rm d}X/{\rm d}\rho$ becomes large at some point. In
conclusion, we find that for $\alpha$ large enough, the cosmic
strings are in a mild DBI regime for finite values of $\rho$. This
is confirmed numerically.
\par
Finally let us notice that at infinity, independently of $\alpha$, both
the gradients ${\rm d}X/{\rm d}\rho$ and ${\rm d}Q/{\rm d}\rho$ are
small, and the cosmic string matches the standard behaviour. This is in
agreement with the general findings for topological defects with a
non-canonical kinetic term.
\par
In summary, the difference between the Abelian-Higgs and DBI
strings will be small very far from the core of the string, while
the DBI string can differ from the Abelian-Higgs one inside the
core of the string: the larger $\alpha$ the larger the
difference inside the core.
\subsection{Numerical Solutions}
\label{subsec:numerics}
As mentioned above, the equations of motion~(\ref{eq:scalar})
and~(\ref{eq:gauge}) cannot be solved analytically. For this reason, we
now turn to a full numerical integration.
\par
As is well-known, the numerical integration is non-trivial because the
boundary conditions are not fixed at the same point. The solutions
discussed in this article have been obtained by means of two independent
methods: a relaxation method~\cite{AP84,Peter:1992dw,Ringeval:2002qi}
and a shooting method. More precisely, the former is an over-relaxation
method, which differs from the plain relaxation method (also known as the
Newton iteration method) in that the Newton iteration step is multiplied by
a factor $\omega$. In the standard case, convergence of the over-relaxation
method is guaranteed provided the over-relaxation parameter satisfies
$\omega <2$; a good choice is then, for instance, $\omega \sim 1.99$. Here,
the highly non-linear nature of the equations of motion may render the
over-relaxation method unstable. To deal with this problem, we have
used a ``step-dependent'' over-relaxation parameter $\omega$,
interpolating from $\omega \sim 1$ close to the origin to $\omega \simeq
1.3$ at ``infinity''. As already mentioned, the choice $\omega =1.3\ll 1.99$
is due to the highly non-linear behaviour of the equations; we have
observed very severe instabilities for larger values
of $\omega$. On the other hand, the shooting method can be directly
implemented in its standard formulation in the case of global strings,
since there is only one integration constant to be obtained, $A_{_{\rm
DBI}}$. In the gauge case the presence of two ``shooting''
constants, $A_{_{\rm DBI}}$ and $B_{_{\rm DBI}}$, makes the direct
implementation of the standard scheme impossible, and we have therefore modified
the shooting method appropriately. All in all, the two different
numerical procedures, a relaxation and a shooting method, give the same
numerical solutions, up to small numerical errors.
\par
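To make the boundary-value problem explicit, the sketch below (Python with
scipy; not the code used to produce the results of this section) solves the
Abelian-Higgs limit $\alpha=0$, $\beta=1$ of Eqs.~(\ref{eq:scalar})
and~(\ref{eq:gauge}), for which $\gamma=1$, with a standard collocation
solver playing the r\^ole of the relaxation method; the DBI case is treated
in the same way once the explicit $\gamma$ factor is inserted. The mesh,
tolerances and initial guess are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

n = 1                          # winding number
rho0, rho_max = 1e-2, 12.0     # small cut-off at the origin, matching radius

def rhs(rho, y):
    # y = (X, dX/drho, Q, dQ/drho); alpha = 0, beta = 1, gamma = 1
    X, U, Q, W = y
    dU = 0.5*X*(X**2 - 1.0) + Q**2*X/rho**2 - U/rho
    dW = Q*X**2 + W/rho
    return np.vstack([U, dU, W, dW])

def bc(ya, yb):
    # X(0) = 0 and Q(0) = n at the origin;  X -> 1 and Q -> 0 at large rho
    return np.array([ya[0], ya[2] - n, yb[0] - 1.0, yb[2]])

rho = np.linspace(rho0, rho_max, 200)
guess = np.vstack([np.tanh(rho), 1.0/np.cosh(rho)**2,
                   n*np.exp(-rho**2), -2.0*n*rho*np.exp(-rho**2)])
sol = solve_bvp(rhs, bc, rho, guess, tol=1e-6, max_nodes=20000)

# rough estimates of the constants of Eq. (asympt0) from the behaviour near
# rho0; if the solver struggles, refine the initial guess or reduce rho_max
A = sol.sol(rho0)[0]/rho0**n
B = (n - sol.sol(rho0)[2])/rho0**2
print(sol.success, A, B)
\end{verbatim}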
The results of the numerical integration of the equations of
motion~(\ref{eq:scalar}) and~(\ref{eq:gauge}) are presented and discussed below.
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{profile_global_n=1_various_alpha.eps}
\includegraphics[width=7.7cm]{profile_global_n=2_various_alpha.eps}
\caption{Left panel: the profiles of global DBI strings for $n=1$ and
for various values of the parameter $\alpha$ defined in
Eq.~(\ref{eq:defalpha}). Right panel: same as left panel but with $n=2$.}
\label{fig:globalprofile}
\end{center}
\end{figure}
Firstly, in Figs.~\ref{fig:globalprofile}, we consider global DBI
strings (that is to say without the gauge field) for, respectively,
winding numbers $n=1$ (left panel) and $n=2$ (right panel) and different
$\alpha $'s. This figure confirms the qualitative statements made in the
previous subsection. We notice that, even for ``non-perturbative''
values of $\alpha $, \textsl{i.e.}~ $\alpha >1$, the difference between the
standard and the DBI profiles remains quite small. Moreover, as
announced, the maximum difference lies at a (dimensionless) radius
$\rho$ of order one, namely half way between the origin and the region
where $X\to 1$. Another remark is that the DBI profiles are always above
the standard profiles. This is of course expected since the DBI regime
means larger derivatives which, in the present context, implies the
above mentioned property. Finally, one can check that the asymptotic
behaviours discussed in the previous subsection are clearly observed in
Figs.~\ref{fig:globalprofile}. Indeed, for $n=1$, we notice that
$X(\rho)\sim A_{_{\rm DBI}}\rho$ where $A_{_{\rm DBI}}$ is clearly a
function of $\alpha$ (see in particular the zoom in the left panel). The
same remark applies for $n=2$, where $X(\rho)\sim \rho ^2$.
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{profile_n=1_al=1.eps}
\includegraphics[width=7.7cm]{profile_n=2_al=1.eps} \caption{Left panel:
the solid lines represent the profiles of the scalar and gauge fields of
a DBI string with $n=1$ and $\alpha=1$, while the dashed lines are the
profiles of the scalar and gauge fields of a standard Abelian-Higgs
string (\textsl{i.e.}~ $\alpha=0$). Right panel: same as left panel but with
$n=2$. Notice that, on the $y$-axis, we have used the notation $a\equiv
\hat{q}\hat{A}_{\theta}/n$.} \label{fig:dbiprofileal=1}
\end{center}
\end{figure}
Secondly, in Figs.~\ref{fig:dbiprofileal=1}, we display the profiles for
DBI local strings, \textsl{i.e.}~ for the scalar field and the gauge field, in the
case where $\alpha =1$, $n=1$ (left panel) and $n=2$ (right panel). The
same remarks as before apply. In particular, the scalar field profile
always lies above its Abelian-Higgs counterpart and, on the contrary,
the DBI gauge field profile always lies inside the standard profile. As
already discussed, this is because, in the DBI regime, the gradients
are, by definition, larger than in the standard case. This means that a
DBI string has a core smaller than an Abelian-Higgs string. As before,
one can also check that the asymptotic behaviours are those discussed in
the previous subsection. This is true in particular for the gauge field
for which we always see that $Q\sim n-\rho ^2$ at the origin.
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{comp_local_global_n=1_al=1.eps}
\includegraphics[width=7.7cm]{slope.eps} \caption{Left panel: comparison
of the cosmic string profiles for global and local DBI strings with
$n=1$ and $\alpha=1$. The solid line represents the scalar field profile
for a local Abelian-Higgs string while the dotted line is the
corresponding DBI profile. On the other hand, the dashed line represents
the scalar field profile for an Abelian-Higgs global string whereas the
dotted-dashed line is the corresponding DBI profile still in the global
case. Right panel: ratio $A_{_{\rm DBI}}/A_{_{\rm standard}}$ (see the
text) as a function of the parameter $\alpha$ for different values of
the winding number $n$.} \label{fig:moreprofiles}
\end{center}
\end{figure}
Thirdly, additional information on the profiles can be gained from
Figs.~\ref{fig:moreprofiles}. In the left panel, we have compared the
local and global profiles. One can notice that the global profile is
less concentrated than the local one. Another remark is that the
difference between the Abelian-Higgs and DBI profiles is more important
in the local case than in the global one. In the right panel, we have
compared the slopes at the origin. In the standard case, one has
$X(\rho)\sim A_{_{\rm standard}}\rho ^n$ and it has been argued before
that in the DBI case, one also has $X(\rho)\sim A_{_{\rm DBI}}\rho
^n$. We have represented the ratio $A_{_{\rm DBI}}/A_{_{\rm standard}}$
for various values of $\alpha $ and $n$. One notices that the larger
$\alpha $, the steeper the DBI slope, which seems natural since the
value of the parameter $\alpha $ controls how important the DBI effects
are. We also remark that the same trend is observed when one increases
$n$ rather than $\alpha$. In conclusion, from these two figures, one
confirms that the deeper one penetrates into the DBI regime, the
narrower the core of a cosmic string is. The effect is larger in the
local case than in the global one, and larger for large winding numbers
than for small ones.
\par
Fourthly, given the numerical solutions presented above it is
straightforward to calculate their tension which, from the action given
by Eq.~(\ref{eq:genten}), takes the form
\begin{equation}
\mu _n\left(X,Q\right)= 2\pi \eta^2 f_n(\alpha)\, ,
\end{equation}
where $f_n(0)=n$ in the Abelian-Higgs case. In Fig.~\ref{fig:actiondbi}
(left panel), we plot the universal functions $f_n(\alpha)$ for the DBI
local strings. We notice that the DBI action is, for any $n$ and/or
$\alpha $, smaller than the corresponding standard action. Moreover, at
a fixed value of $\alpha$, the (absolute) difference between the DBI and
Abelian-Higgs actions increases with the winding number. The fact that
the DBI action is smaller than the standard one is likely to have
important physical consequences, in particular with regards to the
formation of DBI cosmic strings. Indeed, if their energy is smaller than
in the standard case, one can legitimately expect their formation to be
favoured as compared to the Abelian-Higgs case.
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{action_vs_alpha.eps}
\includegraphics[width=7.7cm]{diffaction.eps} \caption{Left panel: the
solid lines represent the DBI string tension as a function of $\alpha$
for different values of the winding number $n$ (from $n=1$ to $n=4$
going from the bottom to the top of the plot). The dashed lines
correspond to the Abelian-Higgs tension, namely $\mu _n=2\pi \eta ^2
n$, and are thus horizontal lines located at the $y$-coordinate
$n$. Right panel: the DBI string binding energy $\mu_{2n}-2\mu_n$ for
various $n$ as a function of the parameter $\alpha$.}
\label{fig:actiondbi}
\end{center}
\end{figure}
In the right panel in Fig.~\ref{fig:actiondbi}, we have represented the
DBI string binding energy $\mu_{2n}-2\mu _n$ as a function of the
parameter $\alpha$ for different values of the winding number $n$. We
observe that this quantity is always positive but small in comparison to
one. Moreover, as expected since one has $\left(\mu_{2n}-2\mu
_n\right)\left(\alpha =0\right)=0$, it increases with $\alpha$. We
conclude that when $\alpha \neq 0$, the DBI cosmic string is no longer a
BPS object. The fact that $\mu_{2n}>2\mu _n$ means that, when they meet,
two DBI strings will not constitute a new single string with winding
number $2n$ since this appears to be disfavoured from the energy point
of view. This has important consequences for cosmology since the above
discussion implies that the behaviour of a network of DBI cosmic strings
will be similar to the behaviour of a network of Abelian-Higgs
strings. This means that the cosmological constraints derived, for
instance in Refs.~\cite{Pogosian:2003mz}, also apply to the present
case.
\par
Finally, in Fig.~\ref{fig:energydensity}, we represent the energy
density as a function of the dimensionless radial coordinate $\rho$ for
$n=1$ (left panel) and $n=2$ (right panel) for different values of the
parameter $\alpha $. We notice that the DBI energy densities are usually
more peaked than the Abelian-Higgs ones. Moreover, the larger $\alpha $,
the more peaked the distributions. The case $n=2$ is particularly
interesting. One observes that, as $\alpha $ increases, the peaks of the
distribution are displaced towards the left, \textsl{i.e.}~ towards smaller values
of $\rho$. This is probably due to the fact that, as discussed at the
beginning of this subsection, the difference between the DBI and
Abelian-Higgs profiles is maximum for intermediate values of $\rho$.
\begin{figure}
\begin{center}
\includegraphics[width=7.7cm]{energy_n=1.eps}
\includegraphics[width=7.7cm]{energy_n=2.eps} \caption{Left panel: the
energy density for DBI strings as a function of $\rho$ for $n=1$ for
different values of the parameter $\alpha$. Right panel: same as left
panel but for $n=2$.}
\label{fig:energydensity}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
We have considered a natural DBI generalisation of the Abelian-Higgs
model whereby the kinetic terms of the Higgs fields do not lead to a
linear differential operator in the equations of motion. The particular
form of the action is motivated by a specific extra-dimensional model
where the Higgs field becomes a complex direction normal to a
D3-brane. Although this model leads to nice cosmic string properties, it
is not directly related to a string theory model. As such, the closest
model of string theory which could have led to such a DBI action is the
D3/D7 system where BPS cosmic strings are formed at the end of a
hybrid-like inflation phase. Unfortunately, the charged fields
associated to the open string joining the D3- and D7-branes have no
obvious geometric meaning and therefore do not lead to our DBI
action. It would certainly be very interesting to see if our
construction can be embedded within string theory.
\par
As a four-dimensional model of non-canonical type, the DBI model of
cosmic strings does not suffer from any pathology such as divergences or
non-single-valuedness of the field profiles (typical of other non-linear
actions which have been proposed in the literature). Indeed we find that
DBI strings can be continuously deformed to their Abelian-Higgs
analogue. In fact, the main difference from the Abelian-Higgs case
appears in the BPS case where the DBI strings show a small departure
from the BPS property. In particular, we find that the string tension is
reduced, a property which may have some phenomenological significance in
order to relax the bound on the string tension coming from Cosmic
Microwave Background (CMB) data. Moreover we find that the DBI strings
have a positive binding energy implying that the string coalescence is
energetically disfavoured, leading to the likely formation of networks
with singly-wound strings and statistical properties akin to the usual
Abelian-Higgs ones.
\par
In the present article, we have not tackled some important aspects of
DBI string dynamics such as string scattering (for which we expect that
the higher order terms discussed in the Appendix may play an important
r\^ole), gravitational back reaction, and the coupling to fermions and
their zero modes. This is currently under investigation.
\par
In summary, we have introduced DBI cosmic strings as non-singular
solutions derived from a non-linear Lagrangian. We have studied the
solutions numerically and found that they differ significantly from
their Abelian-Higgs analogues. However, the network properties of these
strings are almost certainly similar to those of type II Abelian-Higgs
cosmic strings.
\ack
We would like to thank Patrick Peter and Christophe Ringeval for useful
discussions, particularly on the numerical aspect of this work. We also
thank David Langlois and S\'ebastien Renaux-Petel, as well as Renata
Kallosh for enlightening discussions on the stringy aspect of this work.
\section{Appendix}
\label{sec:appendix}
In this appendix, we study in more detail the general form of the
action~(\ref{eq:DBIaction}) considered in this
article. Eq.~(\ref{eq:DBIaction}) reads
\begin{eqnarray}
S &=& -T \int {\rm d}^4x \left\{\sqrt{-\det \left[ g_{\mu \nu} +
({\cal D}_{ ( \mu } \Phi) ({\cal D}_{\nu)} \Phi)^{\dagger} + \ell_s^2 {\cal F}_{\mu\nu}
\right]}\nonumber \right. \\
& & \left. \vphantom{\sqrt{-\det \left[ g_{\mu \nu} +
({\cal D}_{ ( \mu } \Phi) ({\cal D}_{\nu)} \Phi)^{\dagger} +
(D_{\nu} \Phi) (D_{\mu} \Phi)^\dagger + \ell_s^2 {\cal F}_{\mu\nu}
\right]}}
-\sqrt{-g}+\sqrt{-g}\frac{V\left(\sqrt{T}\left|\Phi\right|\right)}{T}
\right\}\, ,
\nonumber \\
&\equiv &-T \int {\rm d}^4 x \sqrt{-g}
\left[ \left(\sqrt{{\cal D}} -1 \right) +
\frac{V(\sqrt{T}|\Phi|)}{T} \right] \, ,
\label{bliss}
\end{eqnarray}
where ${\cal D}$ is defined by
\begin{equation}
{\cal D} \equiv \det \left[
\delta^{\mu}_{\;\; \nu} + ({\cal D}^{\mu} \Phi) ({\cal D}_{\nu}
\Phi)^{\dagger} + ({\cal D}^{\mu} \Phi)^\dagger ({\cal D}_{\nu}
\Phi) + \ell_s^2 {\cal F}^{\mu}{}_{\nu} \right]\, ,
\label{Ddef}
\end{equation}
$T$ has dimensions of (energy)$^4$ and $\ell_s$ is a length scale. As
before, ${\cal D}_\mu= \partial_\mu -i\hat{q}{\cal A}_\mu$.
\par
Our goal is to compute and simplify Eq.~(\ref{Ddef}) for ${\cal D}$. As
it is clear from its definition, this will allow us to derive a more
compact formula for our action in the general case. In
Eq.~(\ref{eq:genten}) we have evaluated the action~(\ref{bliss}) for a
cylindrically symmetric static string profile. In this case it takes a
simple form. However, when there is time dependence and less symmetry
--- as occurs for example in string scattering --- it is important to
know the general form of the action.
\par
First define the following quantities
\begin{equation}
N^{\mu} \equiv {\cal D}^{\mu} \Phi \, ,
\label{Ndef}
\quad
S^{\mu}{}_{\nu}\equiv N^{\mu}\bar{N}_\nu +\bar{N}^\mu
N_{\nu}\, ,
\label{Sdef}
\quad
R^{\mu}{}_{\nu}\equiv S^{\mu}{}_{\nu}+ {\cal F}^{\mu}{}_{\nu}\, ,
\label{Rdef}
\end{equation}
where a bar denotes complex conjugation and we set $\ell_s=1$ in this
appendix. Note that by definition $S^{\mu\nu}$ is a symmetric matrix and
${\cal F}^{\mu\nu}$ is antisymmetric, while $S^{\mu}{}_{\nu}$ and ${\cal
F}^{\mu}{}_{\nu}$ are in general neither symmetric nor antisymmetric.
Denote by $S$ the matrix with components $S^{\mu}{}_{\nu}$, while ${\cal
F}$ is the matrix with components ${\cal F}^{\mu}{}_{\nu}$. For integer
$n$ and $p$
\begin{equation}
{\rm{tr}} \left(S^p{\cal F}^{2n+1}\right)=0\, .
\label{superU}
\end{equation}
On the other hand, we also have
\begin{eqnarray}
{\cal D} &=&
\det\left(\delta^{\mu}{}_{\nu}+R^{\mu}{}_{\nu}\right)\,
\\&=&
-\frac{1}{4!}\varepsilon_{\alpha_1\alpha_2\alpha_3\alpha_4}
\varepsilon^{\beta_1\beta_2\beta_3\beta_4}
\left(\delta^{\alpha_1}{}_{\beta_1}+R^{\alpha_1}{}_{\beta_1}\right)
\left(\delta^{\alpha_2}{}_{\beta_2}+R^{\alpha_2}{}_{\beta_2}\right)
\nonumber \\ & & \times
\left(\delta^{\alpha_3}{}_{\beta_3}+R^{\alpha_3}{}_{\beta_3}\right)
\left(\delta^{\alpha_4}{}_{\beta_4}+R^{\alpha_4}{}_{\beta_4}\right)\,
\label{expand}
\end{eqnarray}
which, on using the identity
\begin{equation}
\varepsilon_{\alpha_1\alpha_2\alpha_3\alpha_4}\varepsilon^{\alpha_1\cdots
\alpha_j\beta_{j+1}\cdots\beta_4}=-\left(4-j\right)!j!
\delta^{[\beta_{j+1}}_{\alpha_{j+1}}{}^{\cdots}_{\cdots}
\delta^{\beta_4]}_{\alpha_4}\, ,
\end{equation}
gives
\begin{equation}
{\cal D}= 1
+R^\alpha{}_{\alpha}+R^{[\alpha}{}_{\alpha}R^{\beta]}{}_{\beta}
+R^{[\alpha}{}_{\alpha}R^{\beta}{}_{\beta}R^{\gamma]}{}_{\gamma}
+R^{[\alpha}{}_{\alpha}R^{\beta}{}_{\beta}R^{\gamma}{}_{\gamma}
R^{\delta]}{}_{\delta}\, .
\end{equation}
We now evaluate each term in the above equation. For the
first (linear in $R$), it follows from Eqs.~(\ref{Sdef})
and~(\ref{superU}) that
\begin{equation}
R^\alpha{}_{\alpha}= S^\alpha{}_{\alpha}=2\bar{N}_\alpha N^\alpha =
2\left({\cal D}^{\mu}\Phi \right)\left({\cal D}_{\mu}
\Phi\right)^{\dagger}\, .
\end{equation}
The quadratic term is given by
\begin{eqnarray}
R^{[\alpha}{}_{\alpha}R^{\beta]}{}_{\beta}
&=& S^{[\alpha}{}_{\alpha}S^{\beta]}{}_{\beta}+
2S^{[\alpha}{}_{\alpha}{\cal F}^{\beta]}{}_{\beta}
+{\cal F}^{[\alpha}{}_{\alpha}{\cal F}^{\beta]}{}_{\beta}\, ,
\\&=&
\frac{1}{2}\left[{\rm{tr}} ^2 \left(S\right)- {\rm tr}\left(S^2\right)
\right]-\frac{1}{2}{\rm{tr}}\left({\cal F}^2\right)\, ,
\label{version1}
\\
&=&\left(\bar{N}_\alpha N^\alpha \right)^2
- \left({N}_{\alpha}N^{\alpha}\right)
\left(\bar{N}_\beta \bar{N}^\beta\right)
-\frac{1}{2}{\rm{tr}}\left({\cal F}^2\right)\, ,
\label{version2}
\end{eqnarray}
where to get from Eq.~(\ref{version1}) to Eq.~(\ref{version2}) we have
used Eq.~(\ref{Sdef}). Notice that these terms are compatible with the
$U(1)$ invariance of the action. The next step is to calculate the cubic
term. It is given by
\begin{eqnarray}
R^{[\alpha}{}_{\alpha}
R^{\beta}{}_{\beta}R^{\gamma]}{}_{\gamma}
&=&
S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}S^{\gamma]}{}_{\gamma}
+3 S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}{\cal
F}^{\gamma]}{}_{\gamma}
+3S^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta}{\cal
F}^{\gamma]}{}_{\gamma}
\nonumber \\ & &
+ {\cal F}^{[\alpha}{}_{\alpha}{\cal
F}^{\beta}{}_{\beta}{\cal F}^{\gamma]}{}_{\gamma}\, .
\end{eqnarray}
The term in $S^3$ vanishes for the single complex scalar field studied
here since, on using Eq.~(\ref{Sdef}), it contains the contraction of an
antisymmetric tensor with a symmetric one. Similarly
$S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}{\cal F}^{\gamma]}{}_{\gamma}
=0={\cal F}^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta} {\cal
F}^{\gamma]}{}_{\gamma}$ on using Eq.~(\ref{superU}). Therefore, the
cubic term takes the form
\begin{equation}
R^{[\alpha}{}_{\alpha}R^{\beta}{}_{\beta}
R^{\gamma]}{}_{\gamma}
=3S^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta}
{\cal F}^{\gamma]}{}_{\gamma}=\frac{1}{2}\left[ -{\rm{tr}} \left(S\right)
{\rm{tr}}\left({\cal F}^2\right)+2{\rm{tr}}\left(S{\cal F}^2\right)\right]\, .
\end{equation}
Finally, the quartic term can be expressed as
\begin{eqnarray}
R^{[\alpha}{}_{\alpha}R^{\beta}{}_{\beta}R^{\gamma}{}_{\gamma}
R^{\delta]}{}_{\delta}
&=&
S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}S^{\gamma}{}_{\gamma}
S^{\delta]}{}_{\delta}
+4S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}S^{\gamma}{}_{\gamma}
{\cal F}^{\delta]}{}_{\delta}
+4S^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma}{\cal F}^{\delta]}{}_{\delta}
\label{line1}
\nonumber \\ & &
+6S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma} {\cal F}^{\delta]}{}_{\delta}
+{\cal F}^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma}{\cal F}^{\delta]}{}_{\delta}
\\
&=&
6S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma}{\cal F}^{\delta]}{}_{\delta}
+{\cal F}^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma}{\cal F}^{\delta]}{}_{\delta}\, ,
\label{line2}
\end{eqnarray}
since the terms on the first line in the above equations vanish, on
using the same arguments as above. Also
\begin{eqnarray}
S^{[\alpha}{}_{\alpha}S^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma}{\cal F}^{\delta]}{}_{\delta}
&=&
\frac{1}{4!}\Biggl\{4{\rm{tr}}\left(S\right)
{\rm{tr}} \left(S{\cal F}^2\right)-4{\rm{tr}}\left({\cal F}^2 S^2\right)
-2{\rm{tr}}\left({\cal F}S{\cal F}S\right)
\nonumber
\\
& &+\left[{\rm{tr}} \left(S^2\right)
-{\rm{tr}}^2\left(S\right)\right]
{\rm{tr}} \left({\cal F}^2\right) \Biggr\}\, .
\\
{\cal F}^{[\alpha}{}_{\alpha}{\cal F}^{\beta}{}_{\beta}
{\cal F}^{\gamma}{}_{\gamma}{\cal F}^{\delta]}{}_{\delta}
&=& \frac{1}{4!}\left[-6{\rm{tr}}\left({\cal F}^4\right)
+3{\rm{tr}}^2 \left({\cal F}^2\right)\right]
\end{eqnarray}
Therefore, in the end, one obtains the following expression for ${\cal
D}$
\begin{eqnarray}
{\cal D} &=& 1+{\rm{tr}}\left(S\right)-\frac{1}{2}{\rm{tr}}\left({\cal
F}^2\right)
+\frac{1}{8}\left[{\rm{tr}} ^2\left({\cal F}^2\right)
-2{\rm{tr}}\left({\cal F}^4\right)\right]
\nonumber
\label{puke1}
\\
&& +\frac{1}{2} \left[{\rm{tr}} ^2\left(S\right)
-{\rm{tr}}\left(S^2\right)\right]
+ \frac{1}{2}\biggl[2{\rm{tr}}\left(S{\cal F}^2\right)
-{\rm{tr}}\left(S\right){\rm{tr}}\left({\cal F}^2\right) \biggr]
\nonumber
\label{puke4}
\\
&&+ \frac{1}{4}\left[{\rm{tr}}\left(S^2\right)- {\rm{tr}} ^2\left(
S\right)\right] {\rm{tr}}\left({\cal F}^2\right)+
{\rm{tr}}\left(S\right){\rm{tr}}\left(S{\cal F}^2\right) -{\rm{tr}}\left({\cal F}^2
S^2\right)
-\frac{1}{2}{\rm{tr}}\left({\cal F}S{\cal F}S\right)\, .
\label{ffinal}
\end{eqnarray}
The three terms of the first line in Eq.~(\ref{puke1}), when substituted
in Eq.~(\ref{bliss}) and on expanding the square-root, give the standard
Abelian-Higgs model. The last two terms of Eq.~(\ref{puke1}) are the
standard terms of Born-Infeld electrodynamics. Finally, as discussed in
the main text, the factor ${\cal D}$ and, hence, our action defined by
Eq.~(\ref{eq:DBIaction}), contain terms of higher order in the covariant
derivatives as well as mixing terms between ${\cal F}^2$ and the
covariant derivatives.
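\par
As an algebraic cross-check of the expression above (including the
${\rm{tr}}\left({\cal F}S{\cal F}S\right)$ piece generated by the quartic
term), the short script below compares it with a direct evaluation of
$\det\left(\delta^{\mu}{}_{\nu}+R^{\mu}{}_{\nu}\right)$ for a random complex
vector $N^\mu$ and a random antisymmetric ${\cal F}^{\mu\nu}$. The choice of
the flat metric $\eta={\rm diag}(-1,1,1,1)$ to lower indices is an
assumption about conventions; the script is only a numerical sanity check of
the trace identity, not part of the physics analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

N = rng.normal(size=4) + 1j*rng.normal(size=4)      # stands in for D^mu Phi
Araw = rng.normal(size=(4, 4))
F_up = Araw - Araw.T                                # F^{mu nu}, antisymmetric

# S^{mu nu} = N^mu conj(N)^nu + conj(N)^mu N^nu is real by construction
S_up = (np.outer(N, N.conj()) + np.outer(N.conj(), N)).real
S = S_up @ eta                                      # mixed indices S^mu_nu
F = F_up @ eta                                      # mixed indices F^mu_nu
R = S + F

tr = np.trace
lhs = np.linalg.det(np.eye(4) + R)
rhs = (1 + tr(S) - 0.5*tr(F @ F)
       + 0.125*(tr(F @ F)**2 - 2*tr(F @ F @ F @ F))
       + 0.5*(tr(S)**2 - tr(S @ S))
       + 0.5*(2*tr(S @ F @ F) - tr(S)*tr(F @ F))
       + 0.25*(tr(S @ S) - tr(S)**2)*tr(F @ F)
       + tr(S)*tr(S @ F @ F) - tr(F @ F @ S @ S)
       - 0.5*tr(F @ S @ F @ S))
print(lhs, rhs)   # the two values agree up to floating-point rounding
\end{verbatim}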
\section*{References}
\subsection{Sample}
We use a sample grown by molecular beam epitaxy containing a single layer of self-assembled InGaAs quantum dots in a GaAs matrix, embedded in a Schottky diode for charge state control. The Schottky diode structure comprises an n$^{+}$-doped layer 35 nm below the quantum dots and a $\sim$6-nm thick partially transparent titanium layer evaporated on top of the sample surface. This device structure allows for deterministic charging of the quantum dots and shifting of the exciton energy levels via the DC Stark effect. 20 pairs of GaAs/AlGaAs layers form a distributed Bragg reflector below the quantum dot layer for increased collection efficiency in the spectral region between 960 nm and 980 nm. Spatial resolution and collection efficiency are enhanced by a zirconia solid immersion lens in Weierstrass geometry positioned on the top surface of the device. The device is cooled in a liquid-helium bath cryostat to 4.2 K and surrounded by a superconducting magnet.
\subsection{Spin inversion to prevent nuclear spin polarisation}
The inversion necessary to cancel phase terms in the average readout signal and to suppress nuclear polarisation can be provided either by a coherent $\pi$-rotation or by an incoherent re-pumping step. For Hahn echo a $\pi$-rotation suffices; however, for measurements of the time-averaged dephasing, the enhanced sensitivity at longer delays between the $\pi$/2-rotations requires complete spin inversion, so that the rotation has to be supplemented with a pumping step.
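\par
As a toy illustration of this phase cancellation (not a simulation of the actual pulse sequence), the sketch below averages the phase acquired from a quasi-static, Gaussian-distributed Overhauser detuning over many repetitions, with and without an inversion between the two free-evolution arms. The r.m.s. detuning and the delay are arbitrary assumed numbers.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma = 2*np.pi*5e6                      # assumed r.m.s. detuning (rad/s)
tau = 50e-9                              # assumed free-evolution time per arm (s)
delta = rng.normal(0.0, sigma, 100000)   # one quasi-static detuning per repetition

# Ramsey (pi/2 - tau - pi/2): the random phase delta*tau survives the average
ramsey = np.abs(np.mean(np.exp(1j*delta*tau)))

# Echo (pi/2 - tau - pi - tau - pi/2): the inversion flips the sign of the phase
# acquired in the second arm, so for a static detuning it cancels shot by shot
phase_arm1 = delta*tau
phase_arm2 = delta*tau                   # same detuning in the second arm
echo = np.abs(np.mean(np.exp(1j*(phase_arm1 - phase_arm2))))

print(ramsey)   # ~exp(-(sigma*tau)**2/2), clearly below 1
print(echo)     # = 1: the averaged signal retains full contrast
\end{verbatim}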
\subsection{Pulse sequence and detection}
Optical pulse sequences are constructed from a Ti:Sapphire pulsed laser in picosecond mode, detuned from the optical resonance by $\sim$ 3 nm, and a resonant continuous-wave diode laser. Both are modulated with fibre-coupled waveguide electro-optic modulators. The modulators are locked to the 76-MHz repetition rate of the pulsed laser via a pulse delay generator with 8-ps jitter. Additional suppression of the readout pulse is provided by an acousto-optical modulator, realising 6000:1 suppression of the readout laser and $>$2000:1 rotation-pulse suppression. The readout laser is used at a power below optical saturation to avoid spin pumping when not reading the spin state. The readout fluorescence is filtered from the resonant laser by polarisation mode rejection. Additional filtering of the detuned rotation pulses is provided by a holographic grating with a 30-GHz full width at half maximum and a first-order diffraction efficiency above 90\%. The photon detection events are recorded with a time-correlation unit and a single photon detector with a timing resolution of 350 ps.
\end{document}
\subsection{Data samples}
\label{data_samples}
In this analysis we are using data samples identical to the samples used in~\cite{VierjetJADE,naroska,jadenewas,OPALPR299,movilla02b,pedrophd},
collected by the JADE~experiment between 1979 and
1986; they correspond to a total integrated luminosity of ca. 195 \invpb.
Table~\ref{lumi} contains the breakdown of the data samples--data taking period, energy range,
mean centre-of-mass energy,
integrated luminosity and the number of selected hadronic events.
\begin{table}[h]
\caption{Year of data taking, energy range, integrated luminosity,
average centre-of-mass energy and the numbers of selected data events
for each data sample}
\label{lumi}
\begin{tabular}{ l l l l r }
\hline\noalign{\smallskip}
year & range of & \rs\ mean & luminosity & selected \\
& \rs\ in GeV & in GeV & (pb$^{-1}$) & events \\
\noalign{\smallskip}\hline\noalign{\smallskip}
1981 & 13.0--15.0 & 14.0 & 1.46 & 1783
\\
1981 & 21.0--23.0 & 22.0 & 2.41 & 1403
\\
1981--1982 & 33.8--36.0 & 34.6 & 61.7 & 14313
\\
1986 & 34.0--36.0 & 35.0 & 92.3 & 20876
\\
1985 & 37.3--39.3 & 38.3 & 8.28 & 1585
\\
1984--1985 & 43.4--46.4 & 43.8 & 28.8 & 4376 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsection{The \mbox{\rm\bf JADE}\ detector}
\label{sec_detector}
The JADE detector is described in detail
in ref.~\cite{naroska}.
Energy measurement by the electromagnetic calorimeter and
the reconstruction of charged particle tracks in the central track detector
are the main ingredients for
this analysis.
The central jet chamber was
positioned in an axial magnetic field of 0.48 T
provided by a solenoidal magnet.\footnote{{In the JADE
right-handed coordinate system the $+x$ axis pointed towards the
centre of the PETRA ring, the $y$ axis pointed upwards and the $z$
axis pointed in the direction of the positron beam. The polar angle
$\theta$ and the azimuthal angle $\phi$ were defined with respect to
$z$ and $x$, respectively, while $r$ was the distance from the
$z$-axis.}} The magnet coil was surrounded by the lead glass calorimeter,
which measured electromagnetic energy and consisted of a
barrel and two endcap sections.
\subsection{Selection of events}
\label{selection}
The selection--identical to the one used in~\cite{VierjetJADE}--aims at selecting hadronic
events in the JADE data while excluding events with a large amount of energy
lost to initial state radiation (ISR).
The rejected background consists to a large degree of two-photon events.
The selection uses cuts on the event multiplicity,
on the visible energy and on the longitudinal momentum
balance.
The cuts are documented in~\cite{StdEvSel1,StdEvSel2,StdEvSel3,jadenewas}.
So called good tracks and calorimeter clusters are identified by appropriate
criteria \cite{VierjetJADE}.
Double counting of energy from charged tracks and calorimeter
clusters is avoi\-ded by subtracting the estimated contribution
of a charged track from the associated cluster energy.
The number of selected events for each energy point is given in
table~\ref{lumi}.
\subsection{Monte Carlo samples}
\label{MCsamples}
To correct the data for experimental effects and backgrounds
we use samples of MC simulated events.
Using \py~5.7~\cite{jetset3} we simulate
the process $\ensuremath{\mathrm{e^+e^-}}\to\mathrm{hadrons}$.
For systematic checks we use
corresponding samples obtained by simulation with \hw~5.9~\cite{herwig51}.
We process the MC samples generated at each energy point
through a full si\-mu\-la\-tion of the JADE detector~\cite{jadesim1,jadesim2,jadesim3}, summarized in~\cite{pedrophd},
and we reconstruct them in essentially the same way as the data.
Using the parton shower models \py~6.158, \hw~6.2~\cite{herwig6} and \ar~4.11~\cite{ariadne3}
we employ in addition large samples of
MC events without detector
simulation, in order to compare with the
corrected data. For the purpose of
comparison with the data, the MC
events include the effects of
hadronisation, i.e. the transition of
partons into hadrons.
All major versions of the models used here were adjusted to LEP~1 data by the OPAL
collaboration~\cite{OPALPR141,OPALPR379}, so we expect comparable results
from them.
\section{Introduction}
\label{intro}
Electron-positron annihilation into hadrons constitutes a precise
testing ground of Quantum Chromodynamics\linebreak (QCD). Commonly jet
production rates or distributions of event shape variables have been
studied. Predictions of perturbative QCD combined with hadronisation
corrections derived from models have been found to describe the data
at low and high energies well, see
e.g.~\cite{jadenewas,OPALPR299,movilla02b,OHab,STKrev}.
In this analysis we use data from the \mbox{\rm JADE}\ experiment, recorded in the
years 1979 to 1986 at the PETRA \ensuremath{\mathrm{e^+e^-}}\ collider at DESY at six
centre-of-mass (c.m.) energies \rs\ covering the range 14--44~GeV.
We
measure the first five moments of event shape variables for the
first time in this low \rs\ region of \ensuremath{\mathrm{e^+e^-}}\ annihilation and compare the
data to predictions by Monte Carlo (MC) models and by perturbative QCD.
Moments sample all phase space, but
are more sensitive to specific parts of phase space, dependent on their order.
From the comparison of the data with theory we extract the
strong coupling \as.
The measurement of the moments, as well as the \as~determination, follow closely
the analysis by the \mbox{\rm OPAL}\ experiment in the complete
LEP energy range of 91--209~GeV \cite{OPALPR404}.
This work supplements our previous analyses on jet production rates, determinations of \as\
and four jet production, using \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ data \cite{OPALPR299,VierjetJADE,VierjetJO}.
The outline of the paper is as follows. In Sect.~\ref{theory}, we present the observables
used in the analysis and describe the perturbative QCD predictions.
In Sect.~\ref{analysis_procedure} the analysis procedure is
explained in detail. Sect.~\ref{systematic} contains the discussion of the
systematic checks which are performed and the resulting systematic
errors. We collect the results and describe the determination of \as\ in
Sect.~\ref{results},
and we summarize in Sect.~\ref{summary}.
\section{Observables}
\label{theory}
Event shape variables are a convenient way to characterise properties of hadronic events
by the distribution of particle momenta. For the definition of the variables we refer to \cite{OPALPR404}.
The event shapes considered here are
Thrust {T},
C-parameter {C},
Heavy Jet Mass {\mh},
jet broadening variables {\bt} and {\bw},
and the transition value between 2 and 3 jets in the Durham jet scheme, {\ytwothree}.
The \as\ determination
in \cite{OPALPR404} is based on distributions and moments of these variables.
Their theo\-re\-ti\-cal description is currently the most advanced
\cite{resummation,NNLOESs,Weinzierl}. Further, we
measure moments of Thrust major {\tma}, Thrust minor {\tmi}, Oblateness {O}, Spheri\-city {S},
Light Jet Mass {\ml}, and Narrow Jet Broadening {\bn}. Moments of these variables and
variances of all measured event shapes will be made available in the HEPDATA database.\footnote{\tt http://durpdg.dur.ac.uk/HEPDATA/}
Generic event shape variables $y$ are constructed such that spherical and multi-jet events
yield large values of $y$, while
two narrow back-to-back jets yield $y\simeq0$. Thrust $T$ is an exception to this rule. By using
$y=\thr$ instead the condition is fulfilled for all event shapes.
The $n$th, $n=1,2,\ldots$, moment of the distribution of the event
shape variable $y$ is defined by
\begin{equation}\momn{y}{n}=\int_0^{y_{max}} y^n \frac{1}{\sigma}
\frac{\dd\sigma}{\dd y} \dd y \,,\end{equation}
where $y_{max}$ is the
kinematically allowed upper limit of the variable $y$.
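\par
In practice the integral is estimated as a sample mean of $y^n$ over the
selected events. The toy sketch below (our illustration; the exponential
distribution is an arbitrary stand-in for an event shape such as $\thr$)
shows this estimator and why the higher moments are increasingly driven by
the multi-jet tail.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
y = rng.exponential(scale=0.06, size=100000)   # toy per-event values of y

for n in range(1, 6):
    print(n, np.mean(y**n))                    # <y^n> as a simple sample mean
# each moment is substantially smaller than the previous one, and the high
# moments are dominated by the large-y (multi-jet) part of the sample
\end{verbatim}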
Predictions have been calculated for the moments of event shapes. Their evolution
with c.m. energy allows direct tests of the predicted energy evolution
of the strong coupling \as. Furthermore we
determine \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ by evolving our measurements to the energy scale given by the mass of the \zzero\ boson.
The theoretical calculations
involve an integration over the full phase space, which implies that
a comparison with data always probes all of the available phase space.
This is in contrast to QCD predictions for distributions; these are
commonly only compared with data--e.g.\ in order to measure \as--in
restricted regions, where the theory is well defined and describes the data
well, see e.g.~\cite{jadenewas}.
Comparisons of QCD predictions for
moments of event shape distributions with data are thus complementary
to tests of the theory using distributions.
Uncertainties in the NNLO predictions for event shape distributions in the two-jet region~\cite{NNLOESs,Weinzierl} prevent the reliable calculation of moments to NNLO at present, and therefore we compare with NLO predictions only.
The QCD prediction of \momn{y}{n} at parton level, in next-to-leading order (NLO) perturbation theory, and with
$\asb\equiv\as/(2\pi)$, is
\begin{equation}
\momn{y}{n}^{\rm part,theo} = {\cal A}_n \,\asb + ({\cal B}_n-2{\cal A}_n) \,\asbsq\,.
\label{eq_qcdmom}
\end{equation}
The values of the coefficients\footnote{The \asbsq\ coefficient is written as ${\cal B}_n-2{\cal A}_n$
because the QCD calculations are normalized to the Born cross section $\sigma_0$,
while the data are normalized to the total hadronic cross section,
$\sigtot=\ensuremath{\sigma_{0}}(1+2\asb)$ in LO.} ${\cal A}_n$
and ${\cal B}_n$ can be obtained by numerical
integration of the QCD matrix
elements using the program EVENT2~\cite{event2}.
The coupling \asb\ and the \asbsq\ coefficient depend on the renormalisation scale $\mu$~\cite{ert}.
For the sake of clarity the renormalisation scale factor is defined as
$\xmu\equiv\mu/\rs$, so setting \xmu\ to one implies that the
renormalisation scale is \rs.
A truncated fixed order QCD calculation such
as~(\ref{eq_qcdmom}) will then depend on \xmu. The renormalisation
scale dependence is implemented by the replacement ${\cal
B}_n\rightarrow {\cal B}_n+\beta_0\ln(\xmu){\cal A}_n$ where
$\beta_0=11-\frac{2}{3}\nf$ is the leading order $\beta$-function
coefficient of the renormalisation group equation and $\nf=5$ is the
number of active quark flavours.
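\par
For illustration, the sketch below evaluates the prediction~(\ref{eq_qcdmom})
including the scale replacement; the numerical values of ${\cal A}_1$ and
${\cal B}_1$ used here are placeholders and not the EVENT2 coefficients
entering the fits.
\begin{verbatim}
import math

def moment_nlo(alpha_s, A_n, B_n, x_mu=1.0, n_f=5):
    """NLO moment prediction with B_n -> B_n + beta0*ln(x_mu)*A_n."""
    beta0 = 11.0 - 2.0*n_f/3.0
    ab = alpha_s/(2.0*math.pi)                 # "alpha_s bar"
    B_eff = B_n + beta0*math.log(x_mu)*A_n
    return A_n*ab + (B_eff - 2.0*A_n)*ab**2

A1, B1 = 2.0, 40.0                             # placeholder coefficients
for x_mu in (0.5, 1.0, 2.0):                   # conventional scale variation
    print(x_mu, moment_nlo(0.14, A1, B1, x_mu=x_mu))
\end{verbatim}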
\section{Analysis procedure}
\label{analysis_procedure}
\input{jadedetepj}
\input{jadedataepj}
\input{jademcepj}
\input{jadeevselepj}
\subsection{Corrections to the Data}
\label{detectorcorrection}
The data are corrected further for the effects of limited detector acceptance and resolution,
and residual ISR following \cite{VierjetJADE}.
All selected charged tracks, as well as the electromagnetic
calorimeter clusters remaining after the correction for
double counting of energy as described above, are used
in the evaluation of the event shape moments.
The values of the moments after the application of all selection cuts are said
to be at the detector level.
As the QCD predictions are calculated for massless quarks
we have to correct our data for the presence of events originating from \bbbar\ final states.
Especially at low \rs\ the large
mass of the b quarks and of the subsequently produced and decaying B hadrons will
influence the values of the event shape variables.
Therefore
in the \mbox{\rm JADE}\ analysis events from the process $\ensuremath{\mathrm{e^+e^-}}\rightarrow\bbbar$
(approximately 1/11 of the hadronic events) are
considered as background.
For the determination of the moments we calculate the sums $\sum_i
y_{i,\rm data}^n$ (for moment order $n=1,\ldots,5$) where $i$ counts all selected events.
The expected contribution of \bbbar\ background events $\sum_i
y^n_{i,\bbbar}$, as estimated by \py, is subtracted from the observed sum $\sum_i y^n_{i,\rm data}$.
By a multiplicative correction we then account for
the effects of detector imperfections and
of residual ISR and two photon background.
Two sets of sums $\sum_i y^n_i$ are calculated from MC simulated
signal events.
At detector level, MC events are treated
identically to the data. The
hadron level set is computed using the true momenta of the stable
particles in the event\footnote{ All charged and neutral particles
with a lifetime larger than $3\times 10^{-10}$~s are considered
stable.}, and uses only events where $\rsp$, the c.m.
energy of the event, reduced due to ISR, satisfies $\rs-\rsp<0.15$~GeV.
The
ratio of the MC hadron level moment over the MC detector level moment
is applied as a detector correction factor for the data; the corrected sums are
normalized by the corrected total number of selected events $N_{\rm tot}$ yielding the
final values of \momn{y}{n},
\begin{equation}
\momn{y}{n}^{\rm had} = \frac{\momn{y}{n}^{\rm had,MC}}{\momn{y}{n}^{\rm det,MC}}
\cdot \left( \sum_i{y}^{n}_{i,\rm det} - \sum_i{y}^{n}_{i,\rm \bbbar} \right)/N_{\rm tot}\,.
\label{detcor-eq}
\end{equation}
The corrected total number of events is
calculated from the number of selected events in the data in the same
way as for the moments.
There
is some disagreement between the detector corrections calculated using
\py\ and \hw\ at low \rs, while at larger \rs\ the correction factors
agree well for most observables. The difference in detector corrections
will be evaluated as an experimental systematic uncertainty, see Sect.~\ref{systematic}.
The detector correction factors\linebreak \cdet$={\momn{y}{n}^{\rm had,MC}}/{\momn{y}{n}^{\rm det,MC}}$ as determined using
PY\-THIA are shown in Fig.~\ref{detcor}.
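\par
Schematically, the correction of Eq.~(\ref{detcor-eq}) is applied per
observable and per energy point as in the sketch below (an illustration with
placeholder inputs, not the analysis code); the array of expected \bbbar\
values is assumed to be properly normalised to the data.
\begin{verbatim}
import numpy as np

def corrected_moment(y_data, y_bb, y_had_mc, y_det_mc, n, n_tot):
    """n-th moment corrected for b-bbar background and detector effects;
    all arguments except n and n_tot are per-event values of one event
    shape at one energy point."""
    c_det = np.mean(y_had_mc**n) / np.mean(y_det_mc**n)   # hadron/detector MC ratio
    raw = np.sum(y_data**n) - np.sum(y_bb**n)             # background-subtracted sum
    return c_det * raw / n_tot
\end{verbatim}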
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{cdet2.eps}
\end{center}
\caption{Detector correction factors
$C_{\rm det}$ as calculated
using the \py\ MC model (see text for details). Line types correspond to
moment order as shown in {\it top left} figure}
\label{detcor}
\end{figure}
\input{jadesystepj}
\section{Results}
\label{results}
\subsection{Values of event shape moments}
\label{momentvalues}
The first five moments of the six event shape variables after
subtraction of \bbbar\ background and correction for detector effects measured by \mbox{\rm JADE}\ are
listed in Tables~\ref{tabmomA} and \ref{tabmomB} and shown in Figs.~\ref{hadron} and
\ref{hadron2}. Superimposed we show
the moments predicted by the \py, \hw\ and \ar\ MC
models tuned by \mbox{\rm OPAL}\ to LEP~1 data. The moments become smaller by approximately one order of
magnitude with each increasing moment order; the higher moments
are more strongly suppressed
with centre-of-mass
energy. Statistical and experimental systematic uncertainties strongly increase with moment
order.
In order to make a clearer
comparison between data and models the lower plots in Figs.~\ref{hadron} and \ref{hadron2} show the
differences between data and each model divided by the combined
statistical and experimental error for $\rs=14$ and 35~GeV. The three
models are seen to describe the data fairly well; \py\ and \ar\ are
found to agree better with the data than \hw.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{plot_a2ESis=048.eps}
\end{center}
\caption{First five moments of \thr, \cp\ and \bt\ at hadron level
compared with predictions based on \py~6.158, \hw~6.2 and
\ar~4.11 MC events.
The inner error bars--where visible--show the statistical errors,
the outer bars show the total errors. Where no error bar is visible,
the total error is smaller than the point size.
The lower panels show the
differences between data and MC at $\rs=14$~ and 35~GeV,
divided by the total error}
\label{hadron}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\textwidth]{plot_a2ESis=9105.eps}
\end{center}
\caption{First five moments of \bw, \ytwothree\ and \mh\ at hadron level
compared with predictions based on \py~6.158, \hw~6.2 and
\ar~4.11 MC events.
The inner error bars--where visible--show the statistical errors,
the outer bars show the total errors. Where no error bar is visible,
the total error is smaller than the point size.
The lower panels show the
differences between data and MC at $\rs=14$~ and 35~GeV,
divided by the total error}
\label{hadron2}
\end{figure*}
\subsection{Determination of \boldmath{\as}}
\label{fitprocedure}
In order to measure the strong coupling \as,
we
fit the QCD predictions to the corrected moment values \momn{y}{n}, i.e. to the data
shown in Tables~\ref{tabmomA} and \ref{tabmomB}.
The theoretical predictions using the \oaa\ calculation described in Sect.~\ref{theory} provide
values at the parton level.
It is necessary to correct for hadronisation effects
in order to compare the theory with the hadron level data.
Therefore the moments are calculated at
hadron and parton level using large samples of \py\ 6.158 events and, as a
cross check,
samples obtained by simulation with \hw~6.2 and \ar~4.11.
The parton level is the stage at the end of the parton shower in the simulation of a hadronic event.
In order to correct for hadronisation
the data points are then multiplied by the ratio $\chad={\momn{y}{n}^{\rm part,MC}}/{\momn{y}{n}^{\rm had,MC}}$ of the parton over hadron
level moments; $\momn{y}{n}^{\rm part}=\chad\cdot\momn{y}{n}^{\rm had}$.
The models use cuts on quantities such as the invariant mass between
partons in order to regulate divergences in the predictions for the
parton shower evolution. As a consequence in some events no parton
shower is simulated and the original quark-antiquark pair enters the
hadronisation stage of the model directly. This leads to a bias in
the calculation of moments at the parton level, since $y=0$ in this
case for all observables considered here (\ytwothree\ cannot be calculated in this
case).
In order to avoid this bias
we exclude in the simulation at the parton level events without
gluon radiation, as in \cite{Daisuke}.
After this exclusion, the \rs\ evolution of the moments follows the QCD prediction;
the change of the prediction is comparable in size with the differences
between employed MC generators.
At the hadron and detector level all
events are used.
The hadronisation correction factors \chad\ as obtained from \py~6.158
are shown in Fig.~\ref{hadcor}. We find
that the hadronisation correction factors
can be as large as two at low \rs. For larger \rs\ the hadronisation corrections decrease as
expected.
The models \py~6.158, \hw~6.2 and \ar\ 4.11 do not agree well for
moments based on \bw, \ytwothree\ and \mh\ at low \rs. The
differences between the models are studied as a systematic
uncertainty in the fits.
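\par
The corresponding hadronisation step can be sketched in the same way; in the
illustration below the exclusion of parton-level events without gluon
radiation is implemented simply by dropping entries with $y=0$, which is our
shorthand for the procedure described above.
\begin{verbatim}
import numpy as np

def to_parton_level(moment_had, y_part_mc, y_had_mc, n):
    """Apply C_had = <y^n>(parton MC) / <y^n>(hadron MC) to a measured
    hadron-level moment.  Parton-level MC events without gluon radiation
    (entered here as y = 0) are excluded from the parton-level average."""
    y_part = y_part_mc[y_part_mc > 0.0]
    c_had = np.mean(y_part**n) / np.mean(y_had_mc**n)
    return c_had * moment_had
\end{verbatim}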
\begin{figure}
\includegraphics[width=0.47\textwidth]{momchad2.eps}
\caption{Hadronisation correction factors $C^{\rm had}$
as calculated using the MC model \py~6.158 (see text for details). Line types correspond to
moment order as shown in {\it top left} figure}
\label{hadcor}
\end{figure}
A \chisq\ value for each moment \momn{y}{n} is calculated using the
formula
\begin{equation}
\chi^{2} = \sum_i (\momn{y}{n}^{\rm part}_i-\momn{y}{n}^\mathrm{part,theo}_i)^{2}/\sigma_{i}^{2}\,,
\label{simplechi2}
\end{equation}
where $i$ counts the energy points, $\sigma_i$ denotes the
statistical errors and $\momn{y}{n}^{\rm part,theo}$ is taken from~(\ref{eq_qcdmom}).
The \chisq\ value is
minimized with respect to \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ for each moment $n$ separately.
The statistical uncertainty is found by varying the minimum value $\chisq_{\rm min}$ to $\chisq_{\rm min}+1$.
The
evolution of \as\ from \mz\ to c.m. energy $(\rs)_i$ is implemented in the fit in two-loop
precision \cite{ESW}.
The renormalisation scale factor \xmu, as
discussed in Sect.~\ref{theory}, is set to~1.
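\par
The fit can be summarised by the closure test below (a sketch on pseudo-data,
not the actual fit code): the coupling is evolved from \mz\ to each energy
point by integrating the two-loop renormalisation group equation numerically,
the prediction~(\ref{eq_qcdmom}) is evaluated at $\xmu=1$, and the \chisq\ of
Eq.~(\ref{simplechi2}) is minimised. The coefficients, errors and pseudo-data
are placeholders.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

MZ, NF = 91.1876, 5
BETA0 = (33 - 2*NF)/(12*np.pi)
BETA1 = (153 - 19*NF)/(24*np.pi**2)

def alpha_s(Q, alpha_mz):
    """Two-loop running of alpha_s from M_Z to the scale Q (t = ln Q^2)."""
    rge = lambda t, a: -(BETA0*a**2 + BETA1*a**3)
    sol = solve_ivp(rge, (np.log(MZ**2), np.log(Q**2)), [alpha_mz], rtol=1e-10)
    return sol.y[0, -1]

def moment_nlo(a_s, A_n, B_n):              # prediction at x_mu = 1
    ab = a_s/(2*np.pi)
    return A_n*ab + (B_n - 2*A_n)*ab**2

A1, B1 = 2.0, 40.0                          # placeholder coefficients
roots = np.array([14.0, 22.0, 34.6, 35.0, 38.3, 43.8])
sigma = 0.002                               # placeholder statistical error

# pseudo-data generated with a "true" alpha_s(M_Z) = 0.119 plus Gaussian noise
rng = np.random.default_rng(4)
pseudo = np.array([moment_nlo(alpha_s(q, 0.119), A1, B1) for q in roots])
pseudo += rng.normal(0.0, sigma, size=roots.size)

def chi2(alpha_mz):
    theo = np.array([moment_nlo(alpha_s(q, alpha_mz), A1, B1) for q in roots])
    return np.sum(((pseudo - theo)/sigma)**2)

fit = minimize_scalar(chi2, bounds=(0.08, 0.20), method='bounded')
print(fit.x, fit.fun)                       # recovers alpha_s(M_Z) close to 0.119
\end{verbatim}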
\subsection{Fits of \mbox{\rm\bf JADE}\ data}
\label{JadeFits}
Data and \mbox{NLO}\ prediction are compared, and this
is repeated for every systematic variation.
The results are\linebreak shown in Fig.~\ref{fit_plot} and listed in Table~\ref{JADE-note-dt}.
Figure \ref{fit_plot} also contains the
combination of the fit results discussed below.
The values of \chisqd\ are of the order of 1--10; the fitted predictions--including the
energy evolution of \as--are consistent with the data.
The fit to \momn{\mh}{1} does not converge and therefore no result is shown.\footnote{Equation \ref{eq_qcdmom} precludes a real solution \asb, if
${\cal B}_n - 2 {\cal A}_n < -{\cal A}_n^2/4\momn{y}{n}$. For \momn{\mh}{1} this relation
is fulfilled in the whole energy range 14--207~GeV, see Tables~\ref{tabmomA} and \ref{tabmomB}
and \cite{OPALPR404}. The \mbox{NLO}\ coefficient
is negative in the case of \momn{\bw}{1}, too.
This observable gives the maximum value of \chisqd=98.5/5, further
problems in the determination of \as\ using \momn{\bw}{1} show up in Subsect.~\ref{fitJO}.}
\begin{figure}[h]
\includegraphics[width=0.48\textwidth]{DissJADE588.eps}
\caption{Measurements of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ using fits to moments of six event
shape variables at \Petra\ energies.
The inner error bars--where visible--show the statistical errors,
the outer bars show the total errors.
The {\it dotted line} indicates
the weighted average described in Subsect.~\ref{ascombs}, the {\it shaded band} shows its error.
Only the measurements
indicated by {\it solid symbols} are used for this purpose}
\label{fit_plot}
\end{figure}
The fitted values of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ increase steeply with the order $n$ of the moment used,
for \momn{(1-T)}{n}, \momn{\cp}{n} and \momn{\bt}{n}.
This effect is less pronounced and systematic
for \momn{\bw}{n}, \momn{(\ytwothree)}{n} and \momn{\mh}{n}.
In Fig.~\ref{baplot}
we show the ratio $K={\cal B}_n/{\cal A}_n$ of NLO
and LO coefficients for the six observables used in our fits
to investigate the origin of this behaviour.
Steeply increasing values of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ with moment order $n$
for \momn{(1-T)}{n}, \momn{\cp}{n} and \momn{\bt}{n}
and
increasing values of $K$ with $n$
are clearly correlated. There is also a correlation with the rather large scale
uncertainties in the respective fits.
The other
observables \momn{\bw}{n}, \momn{(\ytwothree)}{n} and \momn{\mh}{n}
have more stable results for \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ and
correspondingly fairly
constant values of $K$.
The reason that the fit of \momn{\mh}{1} does not
converge is the large and negative value of $K$.
\subsection{Combined fits of \mbox{\rm\bf JADE}\ and \mbox{\rm\bf OPAL}\ data}
\label{fitJO}
For the most significant results we supplement the \mbox{\rm JADE}\ data
with the analogous \mbox{\rm OPAL}\ data~\cite{OPALPR404}, covering the energy range of 91 to 209~GeV.
The \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ detectors are very similar, both in
construction and in the values of many detector parameters.
The combined use of the \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ data can therefore be
expected to lead to consistent measurements, with small
systematic differences. Our analysis
procedure is therefore constructed to be similar to the
one used in the \mbox{\rm OPAL}\ analysis~\cite{OPALPR404}, in particular in the
estimate of the systematic errors.
The central values and statistical errors of the combined fits are found employing the \chisq\
calculation~(\ref{simplechi2})
as above.\footnote{For this reason systematic differences between the two experiments contribute
to the sometimes high \chisq\ values; in Figs.~\ref{JOIndiCP} and \ref{JOIndiBW}
the experimental
uncertainties are indicated separately.}
However, the systematic uncertainties in this case cannot be found by simple repetitions of the fits,
as the \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ systematic variations are not identical.
The systematic uncertainties are correlated between different energy points, and
including general correlations, the
\chisq\ calculation shown in~(\ref{simplechi2}) has to be generalised
to \cite{pdg06}
\begin{eqnarray}
\label{chi2corrIndiJO}
\chi^{2} = \sum_{i,j} &&(\momn{y}{n}_i^{\rm part}-\momn{y}{n}^{\rm part,theo}_i)\cdot\nonumber\\
& & V^{-1}_{ij} \cdot(\momn{y}{n}_j^{\rm part}-\momn{y}{n}^{\rm part,theo}_j) \; ,
\end{eqnarray}
where the $V_{ij}$ are the covariances
of the $n$-th moment at the energy points $i$ and $j$. They have the form
$V_{ij}=S_{ij}+E_{ij}$, with statistical covariances $S_{ij}$ and
experimental systematic covariances $E_{ij}$. The matrix $S_{ij}$ is diagonal,
$S_{ii} = \sigma_{{\rm stat.},\, i}^2\,$,
as data of different energy points are independent.
The experimental systematic covariances $E_{ij}$ are only partly known:
\begin{itemize}
\item The diagonal entries are given by
\[
E_{ii}= \sigma_{{\rm exp.},\, i}^2\,,
\]
denoting by $\sigma_{\rm exp.,i}$ the experimental uncertainty at energy point
$i$.
\item The off-diagonal entries can only be estimated from plausible assumptions.
We employ the {\it minimum overlap
assumption}\footnote{Fitting the low energy \mbox{\rm JADE}\ points (14, 22 GeV)
this assumption results
\cite{CHPphd} in a more accurate and more conservative error estimation
than the {\it full overlap assumption}
$E_{ij} = \mathrm{Max}\{\sigma_{{\rm exp.},\,i}^2\,,\;\sigma_{{\rm exp.},\,j}^2\}$
employed in \cite{OPALPR299}.}
\begin{equation}
E_{ij} = \mathrm{Min}\{\sigma_{{\rm exp.},\,i}^2\,,\;\sigma_{{\rm exp.},\,j}^2\}\,.
\label{indiJOmin}
\end{equation}
\end{itemize}
The total errors are found by fits employing the \chisq\ calculation (\ref{chi2corrIndiJO}).
We
use the relative experimental uncertainties
to determine the experimental uncertainties of
the central values obtained from the fits without correlations.
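A minimal sketch of the correlated $\chi^2$ of~(\ref{chi2corrIndiJO}) with the minimum overlap assumption~(\ref{indiJOmin}), again with placeholder numbers, reads:
\begin{verbatim}
import numpy as np

def chi2_correlated(mom, theo, stat, syst):
    # V = S + E: diagonal statistical part plus minimum-overlap
    # experimental covariances E_ij = min(syst_i^2, syst_j^2)
    V = np.diag(stat**2) + np.minimum.outer(syst**2, syst**2)
    d = mom - theo
    return d @ np.linalg.solve(V, d)

mom  = np.array([0.091, 0.080, 0.072, 0.056])   # placeholder values
theo = np.array([0.089, 0.081, 0.071, 0.057])
stat = np.array([0.002, 0.002, 0.001, 0.001])
syst = np.array([0.003, 0.002, 0.002, 0.001])
print(chi2_correlated(mom, theo, stat, syst))
\end{verbatim}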
\begin{figure}
\includegraphics[width=0.48\textwidth]{fitExp=JOpapermode=minovlDaVer=588LP=1HP=10CP.eps}
\caption{Fits of the NLO predictions to \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ measurements of moments of
\cp\ at parton level.
The {\it solid lines} show the \rs\ evolution of the \mbox{NLO}\ prediction based on the fitted value
of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}.
The inner error bars--where visible--show the statistical errors used in the fit,
the outer bars show the total errors. Where no error bar is visible,
the total error is smaller than the point size
}
\label{JOIndiCP}
\end{figure}
\begin{figure}
\includegraphics[width=0.48\textwidth]{fitExp=JOpapermode=minovlDaVer=588LP=1HP=10BW.eps}
\caption{Fits of the NLO predictions to \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ measurements of moments of \bw\ at parton level.
The {\it solid lines} show the \rs\ evolution of the \mbox{NLO}\ prediction based on the fitted value
of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}.
The inner error bars--where visible--show the statistical errors used in the fit,
the outer bars show the total errors. Where no error bar is visible,
the total error is smaller than the point size. Problems of the \mbox{NLO}\ prediction
at low \rs\ are discussed in the text
}
\label{JOIndiBW}
\end{figure}
Figures~\ref{JOIndiCP} and \ref{JOIndiBW} show the comparison of data points and
predictions for the moments of the C-parameter and the wide jet broadening \bw.
The
predictions for \momn{\cp}{n} are seen to be
in good agreement with the data and significantly confirm the QCD prediction of the energy dependence of $\as(\rs)$,
also known as asymptotic freedom.
The prediction slightly overshoots the higher moments of \thr,
\cp\ and \bt\ at 14\,GeV, and undershoots the moments
of \bw, \mh,
and sometimes \ytwothree.
At low \rs\ the \mbox{NLO}\ predictions of the
\bw, \ytwothree\ and \mh\ distributions are (unphysically) negative
in a large range of the two jet region
\cite{pedrophd}. Therefore the \mbox{NLO}\ prediction for the moments is likely to be incomplete
and too low to provide a satisfactory description of the data at low c.m.
energies. In the case of \momn{\bw}{1} the $\as^2$ coefficient is even negative, and we
do not show or use this fit.
The results are listed in Table~\ref{JOdpgminovl} and shown in
Fig.~\ref{fit_plot_JO}.
\begin{figure}[h]
\begin{center}
\hspace{-1.5cm}\includegraphics[width=0.48\textwidth]{DissJADE588OPAL.eps}
\end{center}
\caption
{Measurements of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ using fits to moments of six event
shape variables at \Petra\ and \mbox{LEP}\ energies.
The inner error bars--where visible--show the statistical errors,
the outer bars show the total errors.
The experimental systematic uncertainties
are estimated by the minimum overlap assumption.
The {\it dotted line} indicates
the weighted average described in the text, the {\it shaded band} shows its error.
Only the measurements
indicated by {\it solid symbols} are used for this purpose}\label{fit_plot_JO}
\label{fit_plotJO}
\end{figure}
\subsection{Combination of \boldmath{\as} determinations}
\label{ascombs}
To make full use of the data, we combine the measurements of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ from the
various moments and event shapes and determine a single value.
An extensive study was done by the LEP QCD working group on this
problem~\cite{MinovlConf,MAF,OPALPR404,STKrev,ALEPH}, and their procedure is adopted here.
A weighted mean of the \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ measurements is calculated
which minimizes the $\chi^{2}$ formed from the
measurements and the combined value.
This mean value, \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}, is given by
\begin{equation}
\ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}=\sum_{i} w_{i} \, \asi \;\;\;\; \mathrm{with}\;\;\;\;
w_{i}=\frac{\sum_{j}(V^{\prime~-1})_{ij}}{\sum_{jk}(V^{\prime~-1})_{jk}},
\end{equation}
where the measured values of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\
are denoted \asi, their covariance matrix $V^{\prime}$,
and the individual results are counted by $i$, $j$ and $k$.
The presence
of highly correlated and dominant systematic errors
makes
a reliable estimate of $V^{\prime}$ difficult.
Undesirable features (such as negative\linebreak weights)
can be caused by small
uncertainties in the estimation of these correlations.
Therefore only
experimental systematic errors--assumed to
be partially correlated by minimum overlap as
$V^{\prime}_{ij}=\min(\sigma^2_{{\rm exp},i},\sigma^2_{{\rm exp},j})$--and
statistical correlations are taken to contribute
to the off-diagonal elements of the covariance matrix.
The statistical correlations
are determined using MC simulations at the
parton level.\footnote{The result is identical if the correlations are determined
using \py, \hw\ or \ar\ at 14.0...43.8 GeV, or determined at hadron level instead of parton level.
The correlation values are cited
in~\cite{CHPphd}; at 14~GeV and parton level they vary between 29\% and 99\% and are mostly larger than 50\%.} The
diagonal elements
are calculated from
all error
contributions--statistical, experimental, hadronisation and theory uncertainties.
Using the weights derived from the covariance matrix $V^{\prime}$ the
theory uncertainties are computed by analogously combining the
\ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ values
from setting $\xmu=2.0$ or $\xmu=0.5$,
and the hadronisation uncertainties by combining the results obtained with the
alternative hadronisation models.
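The combination itself can be illustrated by a short sketch (placeholder inputs; in the analysis the statistical correlations are taken from the MC simulations described above):
\begin{verbatim}
import numpy as np

def combine_alphas(values, tot_err, exp_err, stat_err, rho_stat):
    # weights w_i = sum_j (V'^-1)_ij / sum_jk (V'^-1)_jk; the diagonal of V'
    # holds the squared total errors, the off-diagonal elements the
    # minimum-overlap experimental part plus the statistical correlations
    V = np.minimum.outer(exp_err**2, exp_err**2) \
        + rho_stat * np.outer(stat_err, stat_err)
    np.fill_diagonal(V, tot_err**2)
    Vinv = np.linalg.inv(V)
    w = Vinv.sum(axis=1) / Vinv.sum()
    return float(w @ values), w

vals = np.array([0.121, 0.124, 0.119, 0.123])   # placeholder alpha_s(MZ) results
tot  = np.array([0.007, 0.008, 0.006, 0.009])
exp  = np.array([0.002, 0.003, 0.002, 0.002])
stat = np.array([0.001, 0.001, 0.001, 0.002])
rho  = np.full((4, 4), 0.7)
mean, weights = combine_alphas(vals, tot, exp, stat, rho)
print(mean, weights)
\end{verbatim}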
To select observables with an
apparently converging perturbative prediction, we consider \cite{OPALPR404} only those results for which the NLO term in
equation~(\ref{eq_qcdmom}) is less than half the corresponding LO term (i.e.\
$|K\as/2\pi|<0.5$ or $|K|<25$), namely
\momone{\thr}, \momone{\cp}, \momone{\bt}, \momn{\bw}{n} and
\momn{(\ytwothree)}{n}, $n=1,\ldots,5$; and \momn{\mh}{n},
$n=2,\ldots,5$. These are results from 17 observables in total;
or 16 observables from \mbox{\rm JADE}\ and \mbox{\rm OPAL}, exclu\-ding \momn{\bw}{1}.
The $K$ values are
shown in Fig.~\ref{baplot}.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\textwidth]{Figure26.eps}
\end{center}
\caption
{The ratio $K=\mathcal{B}_n/\mathcal{A}_n$
of \mbox{NLO}\ and \mbox{LO}\ coefficients for the first five moments of the six event shape variables
used in the determination of \as, see also \cite{OPALPR404}
}
\label{baplot}
\end{figure}
Using only \mbox{\rm JADE}\ data, the result of the combination is
\resJlines
and is shown in Fig.~\ref{fit_plot}.
Combining \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ measurements, the result is
\resJOtot\
and is shown in Fig.~\ref{fit_plotJO}.
Both values are above, but still consistent with the world average of
$\ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}=0.1189\pm0.0010$~\cite{bethke06}. It has been observed previously
in comparisons of event shape distributions with NLO~\cite{OPALPR075} or\linebreak
NNLO~\cite{asNNLO}
QCD predictions with $\xmu=1$ that fitted values of \ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}\ tend to
be large compared to the world average.
To enable comparison with earlier and more specific analyses \cite{JADE-paper} we
combine the \mbox{\rm JADE}\ fit results from only the first\footnote{Because of the problems with the
\mbox{NLO}\ description of \momn{\mh}{1}$^{\rm part}$, \momn{\mh}{2} is often regarded as the first moment of \mh.}
moments \momone{\thr}, \momone{\cp}, \momone{\bt}, \momone{\bw}, \momone{\ytwothree} and \momn{\mh}{2}.
This yields a value of
\begin{eqnarray*}
\ensuremath{\alpha_\mathrm{S}(M_{\mathrm{Z^0}})}&=&0.1243\pm0.0001\ensuremath{\mathrm{(stat.)}}\pm0.0009\ensuremath{\mathrm{(exp.)}}\nonumber\\
& &\pm0.0010\had\pm0.0070\ensuremath{\mathrm{(theo.)}} \,.
\end{eqnarray*}
The slightly smaller error in this determination of \as\ reflects the fact that the lower
order moments are less sensitive to the multijet region of the
event shape distributions. This leads to a smaller statistical and theoretical uncertainty.
In all three measurements the scale uncertainty is dominant.
\section{Summary}
\label{summary}
In this paper we present measurements of moments of event shape distributions
at centre-of-mass
energies between 14 and 44~GeV using data of the \mbox{\rm JADE}\ experiment. The
predictions of the \py, \hw\ and \ar\ MC models tuned by \mbox{\rm OPAL}\
to LEP~1 data are found to be in reasonable agreement with the measured
moments.
From fits of \oaa\ predictions to selected event shape moments
corrected for experimental and hadronisation effects
we have
determined the strong coupling to be\linebreak
\resJtot\ using only \mbox{\rm JADE}\ data, and
\resJOtot\ using combined \mbox{\rm JADE}\ and \mbox{\rm OPAL}\ data.
Fits to moments of \mh, \bw\ and \ytwothree\ return large values of \chisqd;
the higher moments, in particular of the \thr, \cp\ and
\bt\ event shape variables, lead to systematically enlarged values of
\as.
Results where \as\ is steeply rising with moment order are strongly correlated with the relative
size of the \asbsq\ coefficient and thus are most likely affected by deficiencies of the \mbox{NLO}\ prediction.
The \mbox{\rm JADE}\ experiment covers an interesting energy range for the perturbative
analysis, since the energy evolution of the strong coupling is more pronounced at
low energies.
{\small
\section*{Acknowledgements}
\par
This research was supported by the DFG cluster of excellence `Origin
and Structure of the Universe'.
}
\bibliographystyle{iopart}
\section{Systematic uncertainties}
\label{systematic}
Several contributions
to the experimental uncertainties are estimated by repeating the
analysis with varied track or event selection cuts or varied procedures
as in~\cite{VierjetJADE}. For each systematic
variation the value of the event shape moment or of \as\ is determined
and then compared to the default value.
The experimental systematic uncertainty
quoted is the result of adding in quadrature all contributions.
In the fits of
the QCD predictions to the data two further systematic uncertainties
are evaluated:
\begin{itemize}
\item Using \hw~6.2\ and
\ar~4.11\ instead of \py~6.158\ we assess the uncertainties associated with the hadronisation
correction (Sect.~\ref{fitprocedure}).
The hadronisation systematic
uncertainty is defined by the larger change in \as\ resulting from these
alternatives.
\item By varying
the renormalisation scale factor \xmu\ we assess the theoretical uncertainty associated
with missing higher
order terms in the theoretical prediction.
The renormalisation scale factor \xmu\ is set to 2.0
and 0.5.
The theoretical systematic uncertainty is defined by the larger deviation from the default
value.
\end{itemize}
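A minimal sketch of these prescriptions (quadrature sum over the experimental variations, larger deviation for the hadronisation-model and scale variations), with placeholder numbers:
\begin{verbatim}
import numpy as np

def experimental_uncertainty(default, variations):
    # add in quadrature the deviations of all systematic variations
    devs = np.array(variations) - default
    return float(np.sqrt(np.sum(devs**2)))

def larger_deviation(default, alternatives):
    # e.g. HERWIG/ARIADNE vs. PYTHIA, or x_mu = 2.0 / 0.5 vs. x_mu = 1
    return max(abs(a - default) for a in alternatives)

print(experimental_uncertainty(0.1210, [0.1212, 0.1207, 0.1213]))
print(larger_deviation(0.1210, [0.1218, 0.1205]))
\end{verbatim}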
1,116,691,499,892 | arxiv | \section{Introduction}
The stability and basin of attraction of periodic orbits is an important problem in many applications. Already the determination of a periodic orbit is a non-trivial task as it involves solving the differential equation. The classical definition of stability, as well as its study using a Lyapunov function, requires knowledge of the position of the periodic orbit, which in many applications can only be approximated. An alternative way to study the stability and basin of attraction is contraction analysis, which is a local criterion and does not require us to know the location of the periodic orbit.
Throughout the paper we will study the autonomous ODE
\begin{eqnarray}
\dot{{\bf x}}&=&{\bf f}({\bf x})\label{ODE}
\end{eqnarray}
where ${\bf f}\in C^\sigma(\mathbb R^n,\mathbb R^n)$ with $\sigma\ge 1$. We denote the solution ${\bf x}(t)$ with initial condition ${\bf x}(0)={\bf x}_0$ by $S_t{\bf x}_0={\bf x}(t)$ and assume that it exists for all $t\ge 0$.
In the next definition we will define a contraction metric on $\mathbb R^n$.
Note that $M({\bf x})$ defines a point-dependent scalar product through $\langle {\bf v},{\bf w}\rangle_{M({\bf x})}={\bf v}^TM({\bf x}){\bf w}$ for all ${\bf v},{\bf w}\in\mathbb R^n$.
\begin{definition}[Contraction metric]\label{Riemannian}
A Riemannian metric is a function $M\in C^0(G,\mathbb S^n)$, where $G\subset \mathbb R^n$ is open and $\mathbb S^n$ denotes the symmetric $n\times n$ matrices, $M({\bf x})$ is positive definite for all ${\bf x}\in G$ and the orbital derivative of $M$ exists for all ${\bf x}\in G$ and is continuous, i.e.
$$M'({\bf x})=\frac{d}{dt} M(S_t{\bf x})\big|_{t=0}$$
exists and is continuous. A sufficient condition for the latter is that $M\in C^1(G,\mathbb S^n)$; then $M_{ij}'({\bf x})=\nabla M_{ij}({\bf x})\cdot {\bf f}({\bf x})$ for all $i,j\in\{1,\ldots,n\}$.
Define \begin{eqnarray}
L_M({\bf x};{\bf v})&:=&\frac{1}{2}{\bf v}^T\left(M({\bf x})D{\bf f}({\bf x})+D{\bf f}({\bf x})^TM({\bf x})+M'({\bf x})\right){\bf v}. \label{LM}
\end{eqnarray}
The Riemannian metric $M$ is called {\bf contraction metric in $K\subset G$ with exponent $-\nu<0 $} if $L_M({\bf x})\le -\nu$ for all ${\bf x}\in K$, where
\begin{eqnarray}
L_M({\bf x})&:=&\max_{{\bf v}^TM({\bf x}){\bf v}=1,{\bf v}^T M({\bf x}){\bf f}({\bf x})=0}L_M({\bf x};{\bf v}).\label{L_M}
\end{eqnarray}
\end{definition}
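For a given metric the quantity $L_M({\bf x})$ in \eqref{L_M} can be evaluated numerically: with the Cholesky factorisation $M({\bf x})=LL^T$ and the substitution ${\bf u}=L^T{\bf v}$, the constrained maximum becomes the largest eigenvalue of the transformed matrix restricted to the orthogonal complement of $L^T{\bf f}({\bf x})$. The following sketch is purely illustrative and not part of the theory; the planar system $\dot x=-y+x(1-x^2-y^2)$, $\dot y=x+y(1-x^2-y^2)$, which has the unit circle as periodic orbit, serves as an example and is evaluated with the Euclidean metric $M=I$, so that $M'=0$:
\begin{verbatim}
import numpy as np

def L_M(M, Mdot, Df, f):
    # max of 0.5 v^T (M Df + Df^T M + M') v over v^T M v = 1, v^T M f = 0
    Q = 0.5 * (M @ Df + Df.T @ M + Mdot)
    L = np.linalg.cholesky(M)                  # M = L L^T
    Linv = np.linalg.inv(L)
    Qt = Linv @ Q @ Linv.T                     # objective in u = L^T v
    w = L.T @ f
    w = w / np.linalg.norm(w)
    B = np.linalg.svd(np.atleast_2d(w))[2][1:].T   # basis of {u : u^T w = 0}
    return np.max(np.linalg.eigvalsh(B.T @ Qt @ B))

x = np.array([1.0, 0.0])                       # a point on the periodic orbit
f = np.array([-x[1] + x[0]*(1 - x[0]**2 - x[1]**2),
               x[0] + x[1]*(1 - x[0]**2 - x[1]**2)])
Df = np.array([[1 - 3*x[0]**2 - x[1]**2, -1 - 2*x[0]*x[1]],
               [1 - 2*x[0]*x[1],          1 - x[0]**2 - 3*x[1]**2]])
print(L_M(np.eye(2), np.zeros((2, 2)), Df, f))   # = -2: contraction
\end{verbatim}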
The following theorem shows the implications of the existence of such a contraction metric on a certain set in the phase space.
\begin{theorem}
Let $\varnothing\not= K \subset \mathbb R^n$ be
a compact, connected and positively invariant set which contains no equilibrium.
Let $M$ be a contraction metric in $K$ with exponent $-\nu<0 $, see Definition \ref{Riemannian}.
Then there exists one and only one periodic orbit $\Omega\subset K$. This periodic orbit
is exponentially asymptotically stable, and the real parts of all Floquet exponents --
except the trivial one -- are less than or equal to $-\nu$.
Moreover, the basin of attraction $A(\Omega)$ contains $K$.
\end{theorem}
This theorem goes back to Borg \cite{borg} with $M({\bf x})=I$, and has been extended to a general Riemannian metric \cite{stenstroem}. For more results on contraction analysis for a periodic orbit see \cite{hartman,hartmanbook,leonov3,leonov96}.
Note that a similar result holds with an equilibrium if the contraction takes place in all directions ${\bf v}$, i.e. if $L_M({\bf x})\le -\nu$ in \eqref{L_M} is replaced by
${\mathcal L}_M({\bf x}):=\max_{{\bf v}^TM({\bf x}){\bf v}=1}L_M({\bf x};{\bf v})\le -\nu$. For more references on contraction analysis see \cite{lohmiller}, and for the relation to Finsler-Lyapunov functions see \cite{finsler}.
Note that $L_M({\bf x})$ is continuous with respect to ${\bf x}$ and, as we will show in the paper, also locally Lipschitz-continuous. Due to the maximum, however, it is not differentiable in general.
In this paper we are interested in converse results, i.e. given an exponentially stable periodic orbit, does a Riemannian contraction metric as in Definition \ref{Riemannian} exist?
\cite{lohmiller} gives a converse theorem, but here $M(t,{\bf x})$ depends on $t$ and will, in general, become unbounded as $t\to\infty$. In \cite{giesl04} the existence of such a contraction metric was shown on a given compact subset of $A(\Omega)$, first on the periodic orbit, using Floquet theory, and then on $K$, using a Lyapunov function. The local construction, however, neglected the fact that the Floquet representation of solutions of the first variation equation along the periodic orbit is in general not real, but complex. We will show in this paper that, by choosing the complex Floquet representation appropriately, the constructed Riemannian metric is real-valued, thus justifying the arguments in \cite{giesl04}. Moreover, we will show the existence of a Riemannian metric on the whole, possibly unbounded basin of attraction by using a new construction. The contraction rate of the Riemannian metric will be arbitrarily close to the true rate of exponential attraction. Let us summarize the main result of the paper in the following theorem.
\begin{theorem}\label{main}
Let $\Omega$ be an exponentially stable periodic orbit of $\dot{{\bf x}}={\bf f}({\bf x})$, let $-\nu$ be the largest real part of all its non-trivial Floquet exponents and ${\bf f}\in C^\sigma(\mathbb R^n,\mathbb R^n)$ with $\sigma\ge 3$.
Then for all $\epsilon\in (0,\nu/2)$ there exists a contraction metric $M\in C^{\sigma-1}( A(\Omega), \mathbb S^{n})$ in $A(\Omega)$ as in Definition \ref{Riemannian} with exponent $-\nu+\epsilon<0 $, i.e.
\begin{eqnarray}L_M({\bf x})&=&\frac{1}{2}\max_{{\bf v}^TM({\bf x}){\bf v}=1,{\bf v}^T M({\bf x}){\bf f}({\bf x})=0}{\bf v}^T\left(M({\bf x})D{\bf f}({\bf x})+D{\bf f}({\bf x})^TM({\bf x})+M'({\bf x})\right){\bf v}
\nonumber\\
&\le& -\nu+\epsilon\label{esti}
\end{eqnarray}
holds for all ${\bf x}\in A(\Omega)$.
\end{theorem}
The metric is constructed in several steps: first on the periodic orbit, then in a neighborhood, and finally in the whole basin of attraction. In the proof, we define a projection of points ${\bf x}$ in a neighborhood of the periodic orbit onto the periodic orbit, namely onto ${\bf p}\in \Omega$, such that $({\bf x}-{\bf p} )^TM({\bf p}){\bf f}({\bf p})=0$. This is then used to synchronize the times of solutions through ${\bf x}$ and ${\bf p}$, and to define a time-dependent distance between these solutions, which decreases exponentially.
Let us compare our result with converse theorems for a contraction metric for an equilibrium. In \cite{converse}, three converse theorems were obtained: Theorem 4.1 constructs a metric on a given compact subset of the basin of attraction (see \cite{giesl04} for the case of a periodic orbit), Theorem 4.2 constructs a metric on the whole basin of attraction (see this paper for the case of a periodic orbit), while Theorem 4.4 constructs a metric as solution of a linear matrix-valued PDE (see \cite{other} for the case of a periodic orbit). The latter construction is beneficial for its computation by solving the PDE, and it also constructs a smooth function; however, the exponential rate of attraction cannot be recovered, which is an advantage of the approach in this paper.
Let us give an overview over the paper: In Section \ref{Floquet} we prove a special Floquet normal form to ensure that the contraction metric that we later construct on the periodic orbit is real-valued. In Section \ref{conv} we prove the main result of the paper, Theorem \ref{main}, showing the existence of a Riemannian metric on the whole basin of attraction. The section also contains Corollary \ref{help_other}, defining a projection onto the periodic orbit and related estimates. In the appendix we prove that $L_M$ is locally Lipschitz-continuous.
\section{Floquet normal form}
\label{Floquet}
Before we consider the Floquet normal form, we will prove a lemma which calculates $L_M({\bf x})$ for the Riemannian metric $M({\bf x})=e^{2V({\bf x})}N({\bf x})$.
\begin{lemma}\label{lem}
Let $N\colon \mathbb R^n\to \mathbb S^n$ be a Riemannian metric and $V\colon \mathbb R^n\to \mathbb R$ a continuous and orbitally continuously differentiable function.
Then $M({\bf x})=e^{2V({\bf x})}N({\bf x})$ is a Riemannian metric and
\begin{eqnarray*}
L_M({\bf x})&=&L_N({\bf x})+V'({\bf x}).
\end{eqnarray*}
\end{lemma}
\begin{proof}
It is clear that $M({\bf x})$ is positive definite for all ${\bf x}$ since $e^{2V({\bf x})}>0$.
We have
\begin{eqnarray*}
L_M({\bf x};{\bf v})&=&
\frac{1}{2}{\bf v}^T\left(M({\bf x})D{\bf f}({\bf x})+D{\bf f}({\bf x})^TM({\bf x})+M'({\bf x})\right){\bf v}\\
&=&
\frac{1}{2}{\bf v}^T\bigg(e^{2V({\bf x})}N({\bf x})D{\bf f}({\bf x})+e^{2V({\bf x})}D{\bf f}({\bf x})^TN({\bf x})\\
&&\hspace{1.2cm}+
e^{2V({\bf x})}(2V'({\bf x})N({\bf x})+N'({\bf x}))\bigg){\bf v}\\
&=&
\frac{1}{2}{\bf w}^T\left(N({\bf x})D{\bf f}({\bf x})+D{\bf f}({\bf x})^TN({\bf x})+
N'({\bf x})\right){\bf w}
+{\bf w}^TN({\bf x}){\bf w} \, V'({\bf x})
\end{eqnarray*}
with ${\bf w}=e^{V({\bf x})}{\bf v}$, so $L_M({\bf x};{\bf v})=L_N({\bf x}; {\bf w})+{\bf w}^TN({\bf x}){\bf w} \, V'({\bf x})$.
Thus,
\begin{eqnarray*}
L_M({\bf x})&=&\max_{{\bf v}^TM({\bf x}){\bf v}=1,{\bf v}^TM({\bf x}){\bf f}({\bf x})=0}L_M({\bf x};{\bf v})\\
&=&\max_{{\bf w}^TN({\bf x}){\bf w}=1,{\bf w}^T N({\bf x}){\bf f}({\bf x})=0}\left[L_N({\bf x};{\bf w})
+{\bf w}^TN({\bf x}){\bf w} V'({\bf x})\right]\\
&=&L_N({\bf x})+ V'({\bf x}).
\end{eqnarray*}
This shows the lemma.
\end{proof}
In order to show later that our constructed Riemannian metric $M$ is real-valued, we will construct a special Floquet normal form in Proposition \ref{prop} such that the matrix in \eqref{real} is real-valued. In Corollary \ref{coro} we will show estimates in the case that \eqref{var} is the first variation equation of a periodic orbit.
The proof of the following proposition is inspired by \cite{chicone}.
\begin{proposition}\label{prop}
Consider the periodic differential equation
\begin{eqnarray}\dot{{\bf y}}&=&F(t){\bf y}\label{var}
\end{eqnarray} where $F\in C^s(\mathbb R,\mathbb R^{n\times n})$ is $T$-periodic, $s\ge 1$ and denote by $\Phi\in C^s(\mathbb R,\mathbb R^{n\times n})$ its principal fundamental matrix solution with $\Phi(0)=I$.
Then there exists a $T$-periodic function $P\in C^s(\mathbb R,\mathbb C^{n\times n})$ with $P(0)=P(T)=I$ and a matrix $B\in \mathbb C^{n\times n}$ such that for all $t\in\mathbb R$
$$\Phi(t)=P(t)e^{Bt}.$$
Denote by $\lambda_1,\ldots,\lambda_r\in \mathbb R\setminus \{0\}$ the pairwise distinct real eigenvalues and by $\lambda_{r+1},\overline{\lambda_{r+1}},\ldots,$
$\lambda_{r+c},\overline{\lambda_{r+c}}\in\mathbb C\setminus \mathbb R$ the pairwise distinct pairs of complex conjugate complex eigenvalues of $\Phi(T)$ with algebraic multiplicity $m_j$ of $\lambda_j$.
For $\epsilon>0$ there exists a non-singular matrix $S\in \mathbb R^{n\times n}$ such that $B=SAS^{-1}$ with $A=\operatorname{blockdiag}(K_1,K_2,\ldots,K_{r+c})$ and $K_j\in \mathbb C^{m_j\times m_j}$ for $j=1,\ldots,r$ and $K_j\in \mathbb R^{2m_j\times 2m_j}$ for $j=r+1,\ldots,r+c$ as well as
$$\frac{1}{2}{\bf w}^*(A^*+A){\bf w}\le \sum_{j=1}^{r+c}c_j\sum_{i=1}^{m_j}|w_{i+\sum_{k=1}^{j-1}m_k}|^2
\text{ for all }{\bf w}\in \mathbb C^n,$$
where $c_j=\left(\frac{\ln |\lambda_j|}{T}+\epsilon\right)$ if $m_j\ge 2$ and $c_j=\frac{\ln |\lambda_j|}{T}$ if $m_j=1$.
Moreover, we have
\begin{eqnarray}
(P^{-1}(t))^*(S^{-1})^*S^{-1}P^{-1}(t)&\in& \mathbb R^{n\times n}\label{real}
\end{eqnarray}
for all $t\in \mathbb R$.
\end{proposition}
\begin{proof}
Since $F\in C^s$, we also have $\Phi\in C^s(\mathbb R,\mathbb R^{n\times n})$.
Noting that $\Psi(t):=\Phi(t+T)$ solves \eqref{var} with $\Psi(0)=\Phi(T)$, we obtain from the uniqueness of solutions that
\begin{eqnarray}
\Phi(t+T)&=&\Psi(t)=\Phi(t)\Phi(T)\text{ for all }t\in \mathbb R.\label{uni}
\end{eqnarray}
Consider $C:=\Phi(T)\in \mathbb R^{n\times n}$ which is non-singular and hence all eigenvalues of $\Phi(T)$ are non-zero. Let $\epsilon':=\frac{1}{2}\min\left( \frac{\epsilon T}{2},1\right)$ and $S\in \mathbb R^{n\times n}$ be such that
$S^{-1}CS=:J$ is in real Jordan normal form with the $1$ replaced by $\epsilon' |\lambda_j|$ for each eigenvalue $\lambda_j$, i.e. $J$ is a block-diagonal matrix with blocks $J_j$ of the form
$J_j=\left(\begin{array}{lllll}\lambda_j&\epsilon' |\lambda_j|&&&\\
&\lambda_j&\epsilon' |\lambda_j|&&\\
&&\ddots&\ddots&\\
&&&\lambda_j&\epsilon' |\lambda_j|\\
&&&&\lambda_j\end{array}\right)\in\mathbb R^{m_j\times m_j}$ for real eigenvalues $\lambda_j$ of $C$ and
$J_j=\left(\begin{array}{ccccccc}\alpha_j&-\beta_j&\epsilon' r_j&&&&\\
\beta_j&\alpha_j&&\epsilon' r_j&&&\\
&&\ddots&&\ddots&&\\
&&&\alpha_j&-\beta_j&\epsilon' r_j&\\
&&&\beta_j&\alpha_j&&\epsilon' r_j\\
&&&&&\alpha_j&-\beta_j\\
&&&&&
\beta_j&\alpha_j\end{array}\right)\in\mathbb R^{2m_j\times 2m_j}$ for each pair of complex eigenvalues $\alpha_j\pm i\beta_j$ of $C$, where $r_j=\sqrt{\alpha_j^2+\beta_j^2}$ and $m_j$ denotes the dimension of the generalized eigenspace of one of them; note we have pairs of complex conjugate eigenvalues since $C$ is real.
This can be achieved by letting $S_1\in\mathbb R^{n\times n}$ be an invertible matrix such that
$S^{-1}_1CS_1$ is the standard real Jordan Normal Form with $1$ on the super diagonal. Then define $S_2$ to be a matrix of blocks
$$\operatorname{diag}(1,\epsilon' |\lambda_j|,
(\epsilon')^2|\lambda_j|^2,\ldots,(\epsilon')^{m_j-1}|\lambda_j|^{m_j-1})$$
for real $\lambda_j$ and
$$\operatorname{diag}(1,1,\epsilon' |\lambda_j|,\epsilon' |\lambda_j|, \ldots,(\epsilon')^{m_j-1}|\lambda_j|^{m_j-1},(\epsilon')^{m_j-1}|\lambda_j|^{m_j-1})$$ for a pair of complex conjugate eigenvalues $\lambda_j$ and $\overline{\lambda_j}$. Setting $S=S_1S_2$ yields the result.
For each of the blocks, we will now construct a matrix $K_j\in\mathbb C^{m_j\times m_j}$ for real eigenvalues $\lambda_j$ and $K_j\in\mathbb R^{2m_j\times 2m_j}$ for each pair of complex eigenvalues $\alpha_j\pm i\beta_j$ such that
$$e^{K_j T}=J_j,$$
which shows with $B=SAS^{-1}$, where $A:=\operatorname{blockdiag}(K_1,\ldots,K_{r+c})$,
\begin{eqnarray}
e^{BT}&=&Se^{AT}S^{-1}=S\operatorname{blockdiag}(e^{K_1T},\ldots,e^{K_{r+c}T})S^{-1}\nonumber\\
&=&SJS^{-1}=C=\Phi(T).\label{*}
\end{eqnarray}
We distinguish between three cases: $\lambda_j$ being real positive, real negative or complex.
Using the series expansion of $
\ln (1+x)$ we obtain for a nilpotent matrix $M\in \mathbb R^{n\times n}$
\begin{eqnarray}
\exp\left(\sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}M^k\right)&=&I+M;\label{log}
\end{eqnarray}
note that the sum is actually finite.
\vspace{0.3cm}
\noindent \underline{\bf Case 1: $\lambda_j\in \mathbb R^+$}
Writing $J_j=\lambda_j(I+\epsilon' N)$ with the nilpotent matrix $N=
\left(\begin{array}{llll}0&1&&\\
&\ddots&\ddots&\\
&&0&1\\
&&&0\end{array}\right)\in\mathbb R^{m_j\times m_j}$, we define
$$K_j=\frac{1}{T}\left( (\ln \lambda_j)I+\sum_{k=1}^{m_j-1} \frac{(-1)^{k+1}}{k} (\epsilon')^kN^k\right)\in \mathbb R^{m_j\times m_j}.$$
Since $I$ and $N$ commute, we have with \eqref{log} and $N^k=0$ for $k\ge m_j$
\begin{eqnarray*}
\exp(K_j T)=\lambda_j \left(I+\epsilon' N\right)=J_j.
\end{eqnarray*}
\vspace{0.5cm}
\noindent \underline{\bf Case 2: $\lambda_j\in \mathbb R^-$}
With the nilpotent matrix $N=
\left(\begin{array}{llll}0&1&&\\
&\ddots&\ddots&\\
&&0&1\\
&&&0\end{array}\right)\in\mathbb R^{m_j\times m_j}$ we write $J_j=-|\lambda_j| (I-\epsilon' N)$ and define
$$K_j=\frac{1}{T}\left( (i\pi +\ln |\lambda_j|)I+\sum_{k=1}^{m_j-1} \frac{(-1)^{k+1}}{k}(-\epsilon')^kN^k\right)\in \mathbb C^{m_j\times m_j}.$$
Since $I$ and $N$ commute, and $N^k=0$ for $k\ge m_j$ we have with \eqref{log}
\begin{eqnarray*}
\exp(K_j T)=-|\lambda_j | \left(I-\epsilon' N\right)=J_j.
\end{eqnarray*}
\vspace{0.3cm}
\noindent
\underline{\bf Case 3: $\lambda_j=\alpha_j+i\beta_j$ with $\beta_j\not=0$ }
We only consider one of the two complex conjugate eigenvalues $\lambda_j$ and $\overline{\lambda_j}$ of $\Phi(T)$.
Writing $\lambda_j$ in polar coordinates gives
$\lambda_j=\alpha_j+i\beta_j=r_je^{i\theta_j}=r_j\cos\theta_j + i r_j\sin \theta_j$ with $r_j>0$ and $\theta_j \in (0,2\pi)$. Then,
defining $R_j=r_j\left(\begin{array}{cc}\cos \theta_j &-\sin \theta_j\\
\sin \theta_j&\cos \theta_j\end{array}\right)$, ${\mathcal R}=\operatorname{blockdiag} (R_j,R_j,\ldots,R_j)\in \mathbb R^{2m_j\times 2m_j}$ and the nilpotent matrix $\mathcal{N}
\in\mathbb R^{2m_j\times 2m_j}$ having $2\times 2$ blocks of $ \left(\begin{array}{cc}\cos \theta_j &\sin \theta_j\\
-\sin \theta_j&\cos \theta_j\end{array}\right)=\left(\begin{array}{cc}\cos \theta_j &-\sin \theta_j\\
\sin \theta_j&\cos \theta_j\end{array}\right)^{-1}$ above its diagonal, we have
$J_j={\mathcal R}(I+\epsilon' \mathcal{N})$.
We define $\Theta=\left(\begin{array}{cc}0 &-\theta_j\\
\theta_j&0\end{array}\right)$ and
$$K_j=\frac{1}{T}\left( (\ln r_j)I+\operatorname{blockdiag}(\Theta,\Theta,\ldots,\Theta)+ \sum_{k=1}^{2m_j-2}\frac{(-1)^{k+1}}{k} (\epsilon') ^k\mathcal{N}^k\right)\in \mathbb R^{2m_j\times 2m_j}.$$
Since $I$, $\operatorname{blockdiag}(\Theta,\Theta,\ldots,\Theta)$ and $\mathcal{N}$ commute, we have, using $\mathcal{N}^k=0$ for $k\ge 2m_j-1$ and \eqref{log}
\begin{eqnarray*}
\lefteqn{
\exp(K_j T)}\\&=&r_j \operatorname{blockdiag}\left( \left(\begin{array}{cc}\cos \theta_j &-\sin \theta_j\\
\sin \theta_j&\cos \theta_j\end{array}\right),\ldots,\left(\begin{array}{cc}\cos \theta_j &-\sin \theta_j\\
\sin \theta_j&\cos \theta_j\end{array}\right)\right) (I+\epsilon'\mathcal{N})\\
&=&J_j.
\end{eqnarray*}
We can now define $P\in C^s(\mathbb R, \mathbb C^{n\times n})$ by $P(t)=\Phi(t)e^{-Bt}$, which satisfies $P(0)=I$ and
\begin{eqnarray*}
P(t+T)&=&\Phi(t+T)e^{-BT}e^{-Bt}\\
&=&\Phi(t)\Phi(T)e^{-BT}e^{-Bt}\text{ by }\eqref{uni}\\
&=&P(t)\text{ by }\eqref{*}
\end{eqnarray*}
for all $t\ge 0$, so in particular $P(T)=P(0)=I$. We can now write
$$\Phi(t)=P(t)e^{Bt}.$$
This shows the first statement of the proposition.
\vspace{0.3cm}
We now evaluate $A^*+A=\operatorname{blockdiag}(K_1^*+K_1,\ldots,K_{r+c}^*+K_{r+c})$. Let us consider $K_j$ as in the three cases above. If $m_j=1$, then $K_j$ below does not contain the last sum with $\epsilon'$ and the form of $c_j$ is immediately clear.
\vspace{0.3cm}
\noindent
\underline{\bf Case 1: $\lambda_j\in \mathbb R^+$}
$$K_j=\frac{1}{T}\left( (\ln \lambda_j)I+\sum_{k=1}^{m_j-1} \frac{(-1)^{k+1}}{k}(\epsilon')^kN^k\right) \in\mathbb R^{m_j\times m_j};$$
hence, for ${\bf w}\in \mathbb C^{m_j}$
\begin{eqnarray*}
\lefteqn{
\frac{1}{2}{\bf w}^*(K_j^*+K_j){\bf w}}\\
&=&
\frac{\ln \lambda_j}{T}\sum_{i=1}^{m_j}|w_i|^2\\
&&+\epsilon'
\frac{1}{2T}
\left(\overline{w_1}w_2+w_1\overline{w_2}+\overline{w_2}w_3+w_2\overline{w_3}+\ldots+
\overline{w_{m_j-1}}w_{m_j}+w_{m_j-1}\overline{w_{m_j}}\right)\\
&&
-\frac{(\epsilon')^2}{2}
\frac{1}{2T}
\left(\overline{w_1}w_3+w_1\overline{w_3}+\overline{w_2}w_4+w_2\overline{w_4}+\ldots+
\overline{w_{m_j-2}}w_{m_j}+w_{m_j-2}\overline{w_{m_j}}\right)\\
&&+\ldots\\
&&
+(-1)^{m_j}\frac{(\epsilon')^{m_j-1}}{m_j-1}
\frac{1}{2T}
\left(\overline{w_1}w_{m_j}+w_1 \overline{w_{m_j}}\right)\,.
\end{eqnarray*}
Note that the Cauchy--Schwarz inequality implies $ \mathbb {R}\ni\overline{\xi}\eta+\xi\overline{\eta}\le |\xi|^2+|\eta|^2$, which yields that, using $\epsilon'=\frac{1}{2}\min\left(\frac{\epsilon T}{2},1\right)$
\begin{eqnarray*}
\frac{1}{2}{\bf w}^*(K_j^*+K_j){\bf w}
&\le & \frac{\ln \lambda_j}{T}\sum_{i=1}^{m_j}|w_i|^2\\
&&+\frac{\epsilon'+( \epsilon')^2 + \ldots+(\epsilon')^{m_j-1} }{T}\sum_{i=1}^{m_j}|w_i|^2\\
&\le &
\left(
\frac{\ln \lambda_j}{T} +\epsilon\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\ldots \right)\right)\sum_{i=1}^{m_j}|w_i|^2\\
&\le&
\left(
\frac{\ln \lambda_j}{T} +\epsilon\right)\sum_{i=1}^{m_j}|w_i|^2.
\end{eqnarray*}
\vspace{0.3cm}
\newpage
\noindent
\underline{\bf Case 2: $\lambda_j\in \mathbb R^-$}
$$K_j=\frac{1}{T}\left( (i\pi +\ln |\lambda_j|)I+\sum_{k=1}^{m_j-1} \frac{(-1)^{k+1}}{k}(-\epsilon')^kN^k\right)\in \mathbb C^{m_j\times m_j};$$
hence, for ${\bf w}\in \mathbb C^{m_j}$
\begin{eqnarray*}
\lefteqn{
\frac{1}{2}{\bf w}^*(K_j^*+K_j){\bf w}}\\
&=&
\frac{\ln |\lambda_j|}{T}\sum_{i=1}^{m_j}|w_i|^2\\
&&+\epsilon'
\frac{1}{2T}
\left(\overline{w_1}w_2+w_1\overline{w_2}+\overline{w_2}w_3+w_2\overline{w_3}+\ldots+
\overline{w_{m_j-1}}w_{m_j}+w_{m_j-1}\overline{w_{m_j}}\right)\\
&&
-\frac{(\epsilon')^2}{2}
\frac{1}{2T}
\left(\overline{w_1}w_3+w_1\overline{w_3}+\overline{w_2}w_4+w_2\overline{w_4}+\ldots+
\overline{w_{m_j-2}}w_{m_j}+w_{m_j-2}\overline{w_{m_j}}\right)\\
&&+\ldots\\
&&
+(-1)^{m_j}\frac{(\epsilon')^{m_j-1}}{m_j-1}
\frac{1}{2T}
\left(\overline{w_1}w_{m_j}+w_1 \overline{w_{m_j}}\right)\\
&\le&
\left(
\frac{\ln |\lambda_j|}{T} +\epsilon\right)\sum_{i=1}^{m_j}|w_i|^2
\end{eqnarray*}
similarly to case 1.
\vspace{0.3cm}
\noindent
\underline{\bf Case 3: $\lambda_j=\alpha_j+i\beta_j$ with $\beta_j\not=0$ }
Recall that
$$K_j=\frac{1}{T}\left( (\ln r_j)I+\operatorname{blockdiag}(\Theta,\Theta,\ldots,\Theta)+ \sum_{k=1}^{2m_j-2} \frac{(-1)^{k+1}}{k}(\epsilon' )^k\mathcal{N}^k\right)\in \mathbb R^{2m_j\times 2m_j};$$
where $\Theta=\left(\begin{array}{cc}0 &-\theta_j\\
\theta_j&0\end{array}\right)$ and the nilpotent matrix $\mathcal{N}$ has $2\times 2$ blocks of $ \left(\begin{array}{cc}\cos \theta_j &\sin \theta_j\\
-\sin \theta_j&\cos \theta_j\end{array}\right) $ on its super diagonal.
Note that all entries of $\mathcal{N}^k$, $k\in \mathbb N$ are real and have an absolute value of $\le 1$ as they are of the form
$\cos(k\theta_j)$ and $\pm \sin(k\theta_j)$ for $k=1,2,\ldots$.
Hence, for ${\bf w}\in \mathbb C^{2m_j}$
\begin{eqnarray*}
\frac{1}{2}{\bf w}^*(K_j^*+K_j){\bf w}
&=&
\frac{\ln r_j}{T}\sum_{i=1}^{2m_j}|w_i|^2\\
&&+\epsilon'
\frac{1}{2T}
\bigg(\cos\theta_j (\overline{w_1}w_3+w_1\overline{w_3})+\sin\theta_j (\overline{w_1}w_4+w_1\overline{w_4})\\
&&
-\sin\theta_j (\overline{w_2}w_3+w_2\overline{w_3})+\cos\theta_j (\overline{w_2}w_4+w_2\overline{w_4})+\ldots
\bigg)+\ldots
\end{eqnarray*}
\begin{eqnarray*}
&\le&
\frac{\ln r_j}{T}\sum_{i=1}^{2m_j}|w_i|^2\\
&&+2
\frac{\epsilon'+(\epsilon')^2+\ldots+(\epsilon')^{2m_j-1}}{T}\sum_{i=1}^{2m_j}|w_i|^2
\\
&\le &
\left(
\frac{\ln r_j}{T} +\epsilon\left(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\ldots \right)\right)\sum_{i=1}^{2m_j}|w_i|^2\\
&\le&
\left(
\frac{\ln r_j}{T} +\epsilon\right)\sum_{i=1}^{2m_j}|w_i|^2
\end{eqnarray*}
since $\epsilon'=\frac{1}{2}\min \left(\frac{\epsilon T}{2},1\right)$.
This shows the second statement of the proposition.
To show that $(P^{-1}(t))^*(S^{-1})^*S^{-1}P^{-1}(t)$ has real entries, note that
\begin{eqnarray*}
P^{-1}(t)&=&e^{Bt}\Phi^{-1}(t)\\
&=&Se^{At}S^{-1}\Phi^{-1}(t)
\end{eqnarray*}
so that
\begin{eqnarray*}
(P^{-1}(t))^*(S^{-1})^*S^{-1}P^{-1}(t)
&=&(\Phi^{-1}(t))^*(S^{-1})^*
(e^{At})^*e^{At}S^{-1}\Phi^{-1}(t).
\end{eqnarray*}
It is thus sufficient to show that $(e^{At})^*e^{At}$ is real-valued, since all other matrices are real-valued.
Note that since
$A=\operatorname{blockdiag}(K_1,\ldots,K_{r+c})$, we have
\begin{eqnarray*}
e^{At}&=&\operatorname{blockdiag}(e^{tK_1},\ldots,e^{tK_{r+c}}),\\
(e^{tA})^*
e^{tA}&=&\operatorname{blockdiag}((e^{tK_1})^*e^{tK_1},\ldots,(e^{tK_{r+c}})^*e^{tK_{r+c}})
\end{eqnarray*} and the
blocks where $K_j$ have only real entries are trivially real-valued (cases 1 and 3).
In case 2, $K_j=\frac{1}{T}((i\pi+ \ln |\lambda_j|)I+N')$, where $N'\in\mathbb R^{m_j\times m_j}$ is a nilpotent, upper triangular matrix. Then, noting that $I$ and $N'$ commute,
\begin{eqnarray*}
e^{tK_j}&=&e^{\frac{ t}{T}(i\pi+\ln |\lambda_j|)} \exp\left(\frac{t}{T}N'\right)\\
(e^{tK_j})^*&=&e^{\frac{ t}{T}(-i\pi+\ln |\lambda_j|)} \exp\left(\frac{t}{T}(N')^T\right)\\
(e^{tK_j})^*e^{tK_j}&=&e^{\frac{ 2t}{T}\ln |\lambda_j|}\exp\left(\frac{t}{T}(N')^T\right)\exp\left(\frac{t}{T}N'\right),
\end{eqnarray*}
which has real entries.
\end{proof}
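Numerically, the decomposition $\Phi(t)=P(t)e^{Bt}$ can be illustrated with one (in general complex) choice of the matrix logarithm instead of the particular real block structure constructed above. The sketch below (illustrative only) does this for the first variation equation along the unit-circle periodic orbit of the planar example system $\dot x=-y+x(1-x^2-y^2)$, $\dot y=x+y(1-x^2-y^2)$ with period $T=2\pi$; since $\Phi(T)$ has positive eigenvalues here, \texttt{scipy.linalg.logm} already returns a real matrix $B$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

T = 2*np.pi    # period of the orbit S_t q = (cos t, sin t)

def Df_on_orbit(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[-2*c**2,   -1 - 2*c*s],
                     [1 - 2*c*s, -2*s**2  ]])

def Phi(t):
    # principal fundamental matrix solution of y' = Df(S_t q) y, Phi(0) = I
    rhs = lambda s, y: (Df_on_orbit(s) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, [0.0, t], np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

B = logm(Phi(T)) / T                  # one choice of B with exp(B T) = Phi(T)
P = lambda t: Phi(t) @ expm(-B*t)     # Floquet factor
print(np.linalg.eigvals(Phi(T)))      # multipliers: 1 and exp(-2T)
print(np.max(np.abs(P(T) - np.eye(2))))  # P(T) = P(0) = I up to numerical error
\end{verbatim}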
\begin{corollary}\label{coro}
Consider the ODE $\dot{{\bf x}}={\bf f}({\bf x})$ with ${\bf f}\in C^\sigma(\mathbb R^n,\mathbb R^n)$, $\sigma\ge 2$ and let $S_t{\bf q}$ be an exponentially stable periodic solution with period $T$ and ${\bf q}\in \mathbb R^n$.
Then the first variation equation $\dot{{\bf y}}=D{\bf f}(S_t {\bf q}){\bf y}$ is of the form
as in the previous proposition with $s=\sigma-1$; 1 is a simple eigenvalue of $\Phi(T)$ with eigenvector ${\bf f}({\bf q})$ and all other eigenvalues of $\Phi(T)$ satisfy $|\lambda|<1$.
More precisely, if $-\nu<0$ is the maximal real part of all non-trivial Floquet exponents, we have $\frac{\ln |\lambda|}{T}\le -\nu$.
With the notations of Proposition \ref{prop} we can assume that $\lambda_1=1$ and $S{\bf e}_1={\bf f}({\bf q})$.
Then we have for all $\epsilon> 0$
\begin{eqnarray*}
{\bf f}(S_t{\bf q})&=&P(t)S{\bf e}_1\text{ for all }t\in\mathbb R\\
\text{ and }\frac{1}{2}{\bf w}^*(A^*+A){\bf w}&\le& \left(-\nu+\epsilon\right)(\|{\bf w}\|^2-|w_1|^2)
\end{eqnarray*}
for all ${\bf w}\in \mathbb C^n$, where $\|{\bf w}\|=\sqrt{{\bf w}^* {\bf w}}$.
\end{corollary}
\begin{proof}
Since ${\bf f}(S_t{\bf q})$ solves \eqref{var}, we have ${\bf f}(S_t{\bf q})=P(t)e^{Bt}{\bf f}({\bf q})$ and, in particular for $t=T$, ${\bf f}({\bf q})={\bf f}(S_T{\bf q})=e^{BT}{\bf f}({\bf q})$.
Hence, \begin{eqnarray*}
{\bf f}(S_t{\bf q})&=&P(t)
Se^{At}S^{-1}{\bf f}({\bf q})\\
&=&P(t)Se^{At}{\bf e}_1\\
&=&P(t) S{\bf e}_1
\end{eqnarray*}
since $K_1=0$ in the definition of $A$.
Proposition \ref{prop} shows the result taking $\lambda_1=1$ and $m_1=1$ into account.
\end{proof}
\section{Converse theorem}\label{conv}
We will prove Theorem \ref{main}, showing that a contraction metric exists for an exponentially stable periodic orbit in the whole basin of attraction. Moreover, we can achieve the bound $-\nu+\epsilon$ for $L_M$ for any fixed $\epsilon>0$, where $-\nu$ denotes the largest real part of all non-trivial Floquet exponents.
Note that we consider contraction in directions ${\bf v}$ perpendicular to ${\bf f}({\bf x})$ with respect to the metric $M$, i.e. ${\bf v}^TM({\bf x}){\bf f}({\bf x})=0$. One could alternatively consider directions perpendicular to ${\bf f}({\bf x})$ with respect to the Euclidean metric, i.e. ${\bf v}^T{\bf f}({\bf x})=0$, but then the function $L_M$ needs to reflect this, see \cite{other,leonov1}.
In the proof we will first construct $M=M_0$ on the periodic orbit $\Omega$ using Floquet theory. Then, we define a projection $\pi$ of points in a neighborhood $U$ of $\Omega$ onto $\Omega$ such that $({\bf x}-\pi({\bf x}))^TM_0(\pi({\bf x})){\bf f}(\pi({\bf x}))=0$, which will be used to synchronize the time of solutions such that
$\pi(S_\tau {\bf x})=S_{\theta_{\bf x}(\tau)}\pi({\bf x})$. Finally, $M$ will be defined through a scalar-valued function $V$ by
$M({\bf x})=M_1({\bf x})e^{2V({\bf x})}$, where $M_1=M_0$ on the periodic orbit.
\begin{proof} [of Theorem \ref{main}]
Note that we assume ${\bf f}\in C^\sigma(\mathbb R^n,\mathbb R^n)$ to achieve more detailed results concerning the smoothness and assume lower bounds on $\sigma$ as appropriate for each result; we always assume $\sigma\ge 2$.
\vspace{0.3cm}
\newpage
\noindent \underline{\bf I. Definition and properties of $M_0$ on $\Omega$}
\noindent We fix a point ${\bf q}\in\Omega$ and consider the first variation equation
\begin{eqnarray}
\dot{{\bf y}}&=&D{\bf f}(S_t{\bf q}){\bf y}\label{variation}
\end{eqnarray}
which is a $T$-periodic, linear equation for ${\bf y}$, and $D{\bf f}\in C^{\sigma-1}$. By Proposition \ref{prop} and Corollary \ref{coro} the principal fundamental matrix solution $\Phi\in C^{\sigma-1}(\mathbb R,\mathbb R^{n\times n})$ of (\ref{variation}) with $\Phi(0)=I$ can be written as
$$\Phi(t)=P(S_t{\bf q})e^{Bt},$$
where $B\in\mathbb C^{n\times n}$;
note that $P\in C^{\sigma-1}(\mathbb R, \mathbb C^{n\times n})$ can be defined on the periodic orbit as it is $T$-periodic. By the assumptions on $\Omega$, the eigenvalues of $B$ are $0$ with algebraic multiplicity one and the others have a real part $\le -\nu<0$.
We define $S$ as in Proposition \ref{prop} and define the $C^{\sigma-1} $-function
\begin{eqnarray}
M_0(S_t {\bf q})&=&P^{-1}(S_t{\bf q})^*(S^{-1})^*S^{-1}P^{-1}(S_t{\bf q})\in\mathbb R^{n\times n}.\label{M_0}
\end{eqnarray}
Note that $M_0(S_t{\bf q})$ is real by Proposition \ref{prop}, symmetric, since it is Hermitian and real, and positive definite by
\begin{eqnarray}
{\bf v}^TM_0(S_t{\bf q}){\bf v} &=&\|S^{-1}P^{-1}(S_t{\bf q}){\bf v}\|^2 \mbox{ for all } {\bf v}\in \mathbb {R}^n\label{N=M}
\end{eqnarray}
and since $S^{-1}P^{-1}(S_t{\bf q})$ is non-singular.
We will now calculate
$L_{M_0}(S_t{\bf q};{\bf v})$. First, we have for the orbital derivative
$$
M_0'(S_t{\bf q})=(({P}^{-1}(S_t{\bf q}))')^\ast (S^{-1})^\ast S^{-1}P^{-1}(S_t{\bf q}) +P^{-1}(S_t{\bf q})^\ast (S^{-1})^\ast S^{-1}(P^{-1}(S_t{\bf q}))'\,.
$$
Furthermore, by using $(P^{-1}(S_t{\bf q})P(S_t{\bf q} ))'=0$, we obtain $$(P^{-1}(S_t{\bf q}))'=-P^{-1}(S_t{\bf q})(P(S_t{\bf q}))' P^{-1}(S_t{\bf q}).$$ In addition, since $t\mapsto P(S_t{\bf q} )e^{Bt }$ is a solution of \eqref{variation}, we have
$(P(S_t{\bf q}))'=D{\bf f}(S_t{\bf q})P(S_t{\bf q})- P(S_t{\bf q} )B$. Altogether, we get
\begin{eqnarray}
(P^{-1}(S_t{\bf q} ))'&=&-P^{-1}(S_t{\bf q})D{\bf f}(S_t{\bf q}) +B P^{-1}(S_t{\bf q})\,.\label{alto}
\end{eqnarray}
Hence,
\begin{align*}
M_0'(S_t{\bf q}) &=-D{\bf f}(S_t{\bf q})^TP^{-1}(S_t{\bf q})^\ast (S^{-1})^\ast S^{-1}P^{-1}(S_t{\bf q})\\
&\quad \,+P^{-1}(S_t{\bf q})^\ast B^\ast (S^{-1})^\ast S^{-1}P^{-1}(S_t{\bf q})
\\
&\quad\, -P^{-1}(S_t{\bf q})^\ast (S^{-1})^\ast S^{-1}P^{-1}(S_t{\bf q})D{\bf f}(S_t{\bf q}) \\
&\quad\, +P^{-1}(S_t{\bf q})^\ast (S^{-1})^\ast S^{-1}B P^{-1}(S_t{\bf q}).
\end{align*}
Thus, we obtain
\begin{eqnarray*}
\lefteqn{M_0(S_t{\bf q}) D{\bf f}(S_t{\bf q})+ D{\bf f}(S_t{\bf q})^TM_0(S_t{\bf q})+ M_0'(S_t{\bf q})}
\\
&= & P^{-1}(S_t{\bf q})^\ast B^\ast (S^{-1})^\ast S^{-1}P^{-1}(S_t{\bf q}) +P^{-1}(S_t{\bf q})^\ast (S^{-1})^\ast S^{-1}B P^{-1}(S_t{\bf q})\,.
\end{eqnarray*}
Furthermore, we have for ${\bf v}\in \mathbb {R}^n$
\begin{eqnarray}
L_{M_0}(S_t{\bf q};{\bf v})&=&
\frac{1}{2} \lefteqn{ {\bf v}^T\left(M_0(S_t{\bf q}) D{\bf f}(S_t{\bf q})+ D{\bf f}(S_t{\bf q})^TM_0(S_t{\bf q})+ M_0'(S_t{\bf q})\right){\bf v}}\nonumber
\\
&=& {\bf v}^T P^{-1}(S_t{\bf q})^\ast (S^{-1})^\ast \left( \frac{1}{2}\left(S^\ast B^\ast (S^{-1})^\ast+S^{-1}BS\right)\right) S^{-1}P^{-1}(S_t{\bf q}){\bf v}\nonumber
\\
&=&{\bf w}^\ast\left(\frac{1}{2}\left(A^\ast+A\right)\right){\bf w}\,,\label{**}
\end{eqnarray}
where ${\bf w}:=S^{-1}P^{-1}(S_t{\bf q}){\bf v}\in \mathbb {C}^n$ and $A=S^{-1}BS$.
For ${\bf v}\in\mathbb R^n$ with ${\bf v}^TM_0(S_t{\bf q}){\bf v}=1$ and ${\bf f}(S_t{\bf q})^TM_0(S_t{\bf q}){\bf v}=0$ we have
\begin{eqnarray*}
{\bf w}^*{\bf w}&=&{\bf v}^T P^{-1}(S_t{\bf q})^* (S^{-1})^*
S^{-1}P^{-1}(S_t{\bf q}){\bf v}\\
&=&{\bf v}^TM_0(S_t{\bf q}){\bf v}\\
&=&1
\end{eqnarray*}
and, using ${\bf e}_1=S^{-1}P^{-1}(S_t{\bf q}){\bf f}(S_t{\bf q})$ from Corollary \ref{coro} \begin{eqnarray*}
w_1&=&{\bf e}_1^* {\bf w}\\
&=&{\bf f}(S_t{\bf q})^T P^{-1}(S_t{\bf q})^* (S^{-1})^*
S^{-1}P^{-1}(S_t{\bf q}){\bf v}\\
&=&{\bf f}(S_t{\bf q})^TM_0(S_t{\bf q}){\bf v}\\
&=&0.
\end{eqnarray*}
This shows with Corollary \ref{coro} and \eqref{**}
\begin{eqnarray}
L_{M_0}(S_t{\bf q})&=&
\max_{{\bf v}^TM_0(S_t{\bf q}){\bf v}=1,{\bf v}^TM_0(S_t{\bf q}){\bf f}(S_t{\bf q})=0}
L_{M_0}(S_t {\bf q};{\bf v})\nonumber \\
&\le&
\max_{{\bf w}\in \mathbb C^n,w_1=0,\|{\bf w}\|=1}
(-\nu+\epsilon)(\|{\bf w}\|^2-|w_1|^2)\nonumber\\
& \le&-\nu+\epsilon.\label{LMv}
\end{eqnarray}
\vspace{0.3cm}
\noindent \underline{\bf II. Projection}
\noindent
Fix a point ${\bf q}\in \Omega$ on the periodic orbit.
For ${\bf x}$ near the periodic orbit we define the projection
$\pi({\bf x})=S_\theta {\bf q}$ on the periodic orbit orthogonal
to ${\bf f}(S_\theta {\bf q})$ with respect to the scalar product $\langle {\bf v},{\bf w}\rangle_{M_0(S_\theta {\bf q})}={\bf v}^T M_0(S_\theta {\bf q}){\bf w}$ implicitly by \eqref{proj} below.
The following lemma is based on the implicit function theorem and shows that the projection can be defined in a neighborhood of the periodic orbit, not just locally.
\begin{lemma}\label{imp}
Let $\Omega$ be an exponentially stable periodic orbit of $\dot{{\bf x}}={\bf f}({\bf x})$ where ${\bf f}\in C^\sigma( \mathbb R^n,\mathbb R^n)$ with $\sigma\ge 2$.
Then there is a compact, positively invariant neighborhood $U$ of $\Omega$ with $U\subset A(\Omega)$ and a function $\pi\in C^{\sigma-1}(U,\Omega)$ such that $\pi({\bf x})={\bf x}$ if and only if ${\bf x}\in \Omega$. Moreover, for all ${\bf x}\in U$ we have
\begin{eqnarray}
( {\bf x}-\pi({\bf x}))^TM_0(\pi({\bf x})){\bf f}(\pi({\bf x}))&=&0.\label{proj}
\end{eqnarray}
\end{lemma}
\begin{proof} Fix a point ${\bf q}\in \Omega$ and define $M_0$ by \eqref{M_0}.
Define the $C^{\sigma-1}$ function
$$G({\bf x},\theta)=({\bf x}-S_\theta {\bf q})^TM_0(S_\theta {\bf q}){\bf f}(S_\theta {\bf q})$$
for ${\bf x}\in \mathbb R^n$, $\theta\in \mathbb R$.
Define the following constants:
\begin{eqnarray*}
\min_{{\bf p}\in \Omega}\|{\bf f}({\bf p})\|&= & c_1>0\\
\max_{{\bf p}\in \Omega}\|{\bf f}({\bf p})\|&= & c_2\\
\max_{{\bf p}\in \Omega}\|D{\bf f}({\bf p})\|&=&c_3\\
\max_{{\bf p}\in \Omega}\|P({\bf p})\|&=&p_1\\
\max_{{\bf p}\in \Omega}\|P^{-1}({\bf p})\|&=&p_2\\
\min_{{\bf p}\in \Omega}\|M_0({\bf p})\|&= & m_1>0\\
\max_{{\bf p}\in \Omega}\|M_0({\bf p})\|&= & m_2\\
\max_{{\bf p}\in \Omega}\|M_0'({\bf p})\|&= & m_3,
\end{eqnarray*}
with the matrix norm $\|\cdot\|=\|\cdot\|_2$, which is induced by the vector norm $\|\cdot\|=\|\cdot\|_2$ and is sub-multiplicative.
We will first prove the following quantitative version of the local implicit function theorem, using that $\theta$ is one-dimensional.
\begin{lemma}\label{implicit}
There are constants $\delta,\epsilon>0$ such that
for each point ${\bf x}_0=S_{\theta_0}{\bf q} \in \Omega$, there is a function $p_{{\bf x}_0}\in C^{\sigma-1}( B_\delta({\bf x}_0), B_\epsilon(\theta_0))$ such that
for all $({\bf x},\theta)\in B_\delta({\bf x}_0)\times B_\epsilon(\theta_0)$
$$G({\bf x},\theta)=0\Longleftrightarrow \theta=p_{{\bf x}_0}({\bf x}).$$
If ${\bf x}\in B_{\delta/2}({\bf x}_0)$, then $p_{{\bf x}_0}({\bf x})\in B_{\epsilon/2}(\theta_0).$
\end{lemma}
\begin{proof}
We have
\begin{eqnarray*}
G_\theta({\bf x},\theta)&=&\frac{d}{d\theta}( {\bf x}-S_\theta {\bf q})^TM_0(S_\theta {\bf q}){\bf f}(S_\theta {\bf q})\\
&=&-{\bf f}(S_\theta {\bf q})^TM_0(S_\theta{\bf q}){\bf f}(S_\theta {\bf q})\\
&&+
( {\bf x}-S_\theta {\bf q})^TM_0'(S_\theta {\bf q}){\bf f}(S_\theta {\bf q})\\
&&
+( {\bf x}-S_\theta {\bf q})^TM_0(S_\theta {\bf q})D{\bf f}(S_\theta
{\bf q}){\bf f}(S_\theta {\bf q})\,.
\end{eqnarray*}
With $\min_{\theta \in
[0,T]}{\bf f}(S_\theta {\bf q})^TM_0(S_\theta{\bf q}){\bf f}(S_\theta {\bf q})\ge c_1^2m_1>0$ we have for all
$ \|{\bf x}-S_\theta {\bf q}\|<\delta_2:= \frac{c_1^2m_1}{2c_2 (m_3+m_1c_3)}$
\begin{eqnarray*}
G_\theta({\bf x},\theta)&<& -c_1^2m_1
+\delta_2c_2 (m_3+m_1c_3)\
= \ -\frac{c_1^2m_1}{2}<0.
\end{eqnarray*}
Let $\delta_1:=\frac{\delta_2}{2} $ and $\epsilon_1:=\frac{\delta_2}{2c_2}$.
For
any ${\bf x}_0=S_{\theta_0}{\bf q}\in \Omega$ we have for all ${\bf x}\in \mathbb R^n$ with
$\|{\bf x}-{\bf x}_0\|<\delta_1$ and all $\theta\in \mathbb R$ with $|\theta-\theta_0|<\epsilon_1$
\begin{eqnarray}
G_\theta({\bf x},\theta)&<& -\frac{c_1^2m_1}{2}<0 \label{Ftheta}
\end{eqnarray} since
$$\|{\bf x}-S_\theta{\bf q}\|\le \|{\bf x}-{\bf x}_0\|+\|S_{\theta_0}{\bf q}-S_\theta {\bf q}\|<
\delta_1+|\theta_0-\theta| c_2<\delta_2.$$
Since $G({\bf x}_0,\theta_0)=0$ we have with $\epsilon:=\epsilon_1/2$ by \eqref{Ftheta} \begin{eqnarray*}
G({\bf x}_0,\theta_0+\epsilon)&<&-\frac{c_1^2m_1}{2}\epsilon,\\
G({\bf x}_0,\theta_0-\epsilon)&>&\frac{c_1^2m_1}{2}\epsilon.
\end{eqnarray*}
Furthermore, we have
\begin{eqnarray*}
\nabla_{\bf x} G({\bf x},\theta)&=&{\bf f}(S_\theta {\bf q})^TM_0(S_\theta {\bf q}),\\
\|\nabla_{\bf x} G({\bf x},\theta)\|&\le&c_2m_1
\end{eqnarray*}
for all ${\bf x}\in \mathbb R^n$ and $\theta\in \mathbb R$.
Define $\delta:=\min\left(\delta_1,\frac{c_1^2m_1}{4c_2m_1}\epsilon\right)$.
For $\|{\bf x}-{\bf x}_0\|< \delta$ we have
\begin{eqnarray*}G({\bf x},\theta_0+\epsilon)&\le&G({\bf x}_0,\theta_0+\epsilon)\\
&&+\int_0^1
\nabla_{\bf x} G({\bf x}_0+\lambda ({\bf x}-{\bf x}_0),\theta_0+\epsilon)\,d\lambda \cdot ({\bf x}-{\bf x}_0) \\
&<&-\frac{c_1^2m_1}{2}\epsilon+c_2m_1 \delta \\
&\le &-\frac{c_1^2m_1}{4}\epsilon \ < \ 0\\
\text{ and }G({\bf x},\theta_0-\epsilon)&>&\frac{c_1^2m_1}{4}\epsilon \ >\ 0.
\end{eqnarray*}
Since $G({\bf x},\theta)$ is strictly decreasing with respect to $\theta$ in $B_\epsilon(\theta_0)$ by \eqref{Ftheta} , the intermediate value theorem implies that there is a unique $\theta^*\in (\theta_0-\epsilon,\theta_0+\epsilon)$ such that $G({\bf x},\theta^*)=0$, which defines a function $p_{{\bf x}_0}({\bf x})=\theta^*$. The statement for $\epsilon/2$ and $\delta/2$ follows similarly.
The smoothness of $p_{{\bf x}_0}$ follows by the classical implicit function theorem, since $G\in C^{\sigma-1}$.
\end{proof}
Now we want to show the uniqueness of the function in a suitable neighborhood $\widetilde{U}$ of $\Omega$. Denote the minimal period of the periodic orbit by $T$; we can assume that $\epsilon<T$.
Define
$$c:=\min_{{\bf p}\in \Omega}\min_{\theta\in [-T/2,T/2]\setminus (-\epsilon/2,\epsilon/2)}\|S_\theta{\bf p}-{\bf p}\|>0.$$
We can conclude that if $\|S_\theta{\bf p}-{\bf p}\|\le c/2$ with ${\bf p}\in \Omega$ and $|\theta|\le T/2$, then $|\theta|<\epsilon/2$.
Let $\delta'=\min(\delta/2,c/4)$. Since $\Omega$ is compact and $\Omega\subset \bigcup_{{\bf x}_0\in \Omega}B_{\delta'}({\bf x}_0)$, there is a finite number of ${\bf x}_i=S_{\theta_i}{\bf q}\in \Omega$, $i=1\ldots,N$, with
\begin{eqnarray}
\Omega&\subset& \bigcup_{i=1}^N B_{\delta'}({\bf x}_i)=:\widetilde{U},\label{cover}
\end{eqnarray}
such that $\widetilde{U}$ is an open neighborhood of $\Omega$.
We want to show that the $p_{{\bf x}_i}=p_i$ define a unique function $p\colon \widetilde{U}\to S^1_T$, where $S^1_T$ are the reals modulo $T$ such that $p=p_i$ on $B_{\delta'}({\bf x}_i)$. We need to show that if ${\bf x}\in B_{\delta'}({\bf x}_i)\cap B_{\delta'}({\bf x}_j)$, then $p_i({\bf x})=p_j({\bf x})$.
Let ${\bf x}\in B_{\delta'}({\bf x}_i)\cap B_{\delta'}({\bf x}_j)$ and, without loss of generality, $|\theta_j-\theta_i|\le T/2$ since the $\theta_i$ and $\theta_j$ are uniquely defined only modulo $T$. Then
\begin{eqnarray*}
\|{\bf x}_i-S_{\theta_j-\theta_i}{\bf x}_i\|&=&
\|{\bf x}_i-{\bf x}_j\|\\&\le &\|{\bf x}_i-{\bf x}\|+\|{\bf x}-{\bf x}_j\|\\
&<&2\delta'\ \le\ \min(\delta,c/2).
\end{eqnarray*}
Hence, $|\theta_j-\theta_i|<\epsilon/2$.
Since ${\bf x}\in B_{\delta/2}({\bf x}_i)\cap B_{\delta/2}({\bf x}_j)$, we have $p_i({\bf x})\in B_{\epsilon/2}(\theta_i)$ and $p_j({\bf x})\in B_{\epsilon/2}(\theta_j)$ by Lemma \ref{implicit}.
Then
\begin{eqnarray*}
|p_i({\bf x})-\theta_j|
&\le&|p_i({\bf x})-\theta_i| +|\theta_i-\theta_j| \
< \ \epsilon
\end{eqnarray*}
and similarly $p_j({\bf x})\in B_\epsilon(\theta_i)$.
Moreover, ${\bf x}\in B_{\delta}({\bf x}_i)\cap B_{\delta}({\bf x}_j)$. Lemma \ref{implicit} implies that $\theta=p_i({\bf x})$ if and only if $G({\bf x},\theta)=0$ if and only if $\theta=p_j({\bf x})$, which shows $p_i({\bf x})=p_j({\bf x})$.
Since $\Omega$ is stable, we can choose $\Omega\subset U^\circ\subset U\subset \widetilde{U}$ such that $U$ is compact and positively invariant. For ${\bf x}\in U$ define $\pi({\bf x})=S_{p({\bf x})}{\bf q}$. Since $p$ is defined by $p_{{\bf x}_i}$, we have by Lemma \ref{implicit} $0=G({\bf x},p({\bf x}))=({\bf x}-\pi({\bf x}))^TM_0(\pi({\bf x})){\bf f}(\pi({\bf x}))$.
If ${\bf x}=S_\theta{\bf q} \in \Omega$, then there is a ${\bf x}_i=S_{\theta_i}{\bf q}\in \Omega$ by \eqref{cover} such that ${\bf x}\in B_{\delta'}({\bf x}_i)$ and thus, as above, $|\theta-\theta_i|<\epsilon/2$. Hence, by Lemma \ref{implicit}, as ${\bf x}\in B_\delta({\bf x}_i)$ and $\theta\in B_\epsilon(\theta_i)$, $p_i({\bf x})=\theta$ and thus $\pi({\bf x})={\bf x}$, as this satisfies $0=G({\bf x},\theta)$. If ${\bf x}\not\in \Omega$, then, since $\pi({\bf x})\in \Omega$, ${\bf x}\not=\pi({\bf x})$. This shows the lemma.
\end{proof}
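To illustrate the projection, consider again the planar example system with the unit circle as periodic orbit and take, for simplicity, $M_0=I$ (not the metric constructed above, only the simplest admissible illustration): equation \eqref{proj} can then be solved by one-dimensional root finding in $\theta$, and $\pi$ reduces to the radial projection ${\bf x}\mapsto{\bf x}/\|{\bf x}\|$. The helper functions below are hypothetical and only sketch this computation:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def G(x, theta):
    # G(x, theta) = (x - S_theta q)^T M_0 f(S_theta q) with M_0 = I,
    # S_theta q = (cos theta, sin theta), f(S_theta q) = (-sin theta, cos theta)
    p = np.array([np.cos(theta), np.sin(theta)])
    f = np.array([-np.sin(theta), np.cos(theta)])
    return (x - p) @ f

def project(x, theta_guess, eps=0.5):
    # solve G(x, theta) = 0 in a bracket around theta_guess
    # (assumes G changes sign there, cf. G_theta < 0 near the orbit)
    theta = brentq(lambda th: G(x, th), theta_guess - eps, theta_guess + eps)
    return np.array([np.cos(theta), np.sin(theta)]), theta

x = np.array([1.1, 0.2])
p, theta = project(x, 0.0)
print(p, theta)    # p is close to x/||x||, theta close to atan2(0.2, 1.1)
\end{verbatim}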
\noindent \underline{\bf III. Synchronization}
\noindent In this step we synchronize the time between the solution $S_t{\bf x}$ and the solution on the periodic orbit $S_\theta \pi({\bf x})$ such that \eqref{first} holds. This will enable us to later define a distance between $S_t{\bf x}$ and $\Omega$ in Step IV.
\begin{definition}
For ${\bf x}\in U$ we can define $\theta_{\bf x}\in C^{\sigma-1}(\mathbb R^+_0,\mathbb R)$ by $\theta_{\bf x}(0)=0$ and
\begin{eqnarray}
S_{\theta_{\bf x}(t)}\pi({\bf x})&=&\pi(S_t {\bf x})\label{first}
\end{eqnarray}
for all $t\ge 0$.
We have \begin{eqnarray}
\dot{\theta}_{\bf x} (t)
&=&({\bf f}(S_t{\bf x})^TM_0(S_{\theta_{\bf x}(t)} \pi({\bf x})){\bf f}(S_{\theta_{\bf x}(t)} \pi({\bf x})))\nonumber\\
&&\bigg({\bf f}(S_{\theta_{\bf x}(t)} \pi({\bf x}))^TM_0(S_{\theta_{\bf x}(t)} \pi({\bf x})){\bf f}(S_{\theta_{\bf x}(t)} \pi({\bf x}))\nonumber\\
&& -
( S_t{\bf x}-S_{\theta_{\bf x}(t)} \pi({\bf x}))^T
\big[M_0'(S_{\theta_{\bf x}(t)} \pi({\bf x})) {\bf f}(S_{\theta_{\bf x}(t)} \pi({\bf x}))\nonumber\\
&&\hspace{0.5cm} +M_0(S_{\theta_{\bf x}(t)} \pi({\bf x}))D{\bf f}(S_{\theta_{\bf x}(t)} \pi({\bf x})) {\bf f}(S_{\theta_{\bf x}(t)} \pi({\bf x}))\big]\bigg)^{-1}.\label{eqbruch}
\end{eqnarray}
The denominator of \eqref{eqbruch} is strictly positive for all $t\ge 0$ and ${\bf x} \in U$.
\end{definition}
\begin{proof}
Denote $\pi({\bf x})=:{\bf p}\in \Omega$. Observe that both sides of \eqref{first} are equal for $t=0$.
For any $t\ge 0$, $S_t{\bf x}\in U$ and $\pi(S_t {\bf x})$ denotes a point on the periodic orbit, so we can write it as
$\pi(S_t {\bf x})=S_{\theta_{\bf x}(t)}{\bf p}$. Note that $\theta_{\bf x}(t)$ is only uniquely defined modulo $T$, however, it is uniquely defined by the requirement that $\theta_{\bf x}$ is a continuous function.
By \eqref{proj}, we have
$$(S_t {\bf x}-S_{\theta_{\bf x}(t)}{\bf p})^TM_0(S_{\theta_{\bf x}(t)} {\bf p}){\bf f}(S_{\theta_{\bf x}(t)}{\bf p})=0.$$
Hence, $\theta_{\bf x}(t)$ is implicitly defined by
\begin{eqnarray}
Q(t,\theta)&=&(S_t{\bf x}-S_\theta {\bf p})^TM_0(S_\theta {\bf p}){\bf f}(S_\theta {\bf p})=0.
\label{theta}
\end{eqnarray}
Note that $\theta_{\bf x}\in C^{\sigma-1}(\mathbb R^+_0,\mathbb R)$ by the Implicit
Function Theorem which implies
\begin{eqnarray*}
\frac{ d\theta_{\bf x}}{dt}
&=&-\frac{\partial_t Q(t,\theta)}{\partial_\theta Q(t,\theta)}\bigg|_{\theta=\theta_{\bf x}(t)}
\nonumber\\
&=&({\bf f}(S_t{\bf x})^TM_0(S_{\theta_{\bf x}(t)} {\bf p}){\bf f}(S_{\theta_{\bf x}(t)} {\bf p}))\nonumber\\
&&\bigg({\bf f}(S_{\theta_{\bf x}(t)} {\bf p})^TM_0(S_{\theta_{\bf x}(t)} {\bf p}){\bf f}(S_{\theta_{\bf x}(t)} {\bf p})\nonumber\\
&&\hspace{0.3cm}-
( S_t{\bf x}-S_{\theta_{\bf x}(t)} {\bf p})^TM_0'(S_{\theta_{\bf x}(t)} {\bf p}) {\bf f}(S_{\theta_{\bf x}(t)} {\bf p})\nonumber\\
&&\hspace{0.3cm}-( S_t{\bf x}-S_{\theta_{\bf x}(t)} {\bf p})^TM_0(S_{\theta_{\bf x}(t)} {\bf p})D{\bf f}(S_{\theta_{\bf x}(t)} {\bf p}) {\bf f}(S_{\theta_{\bf x}(t)} {\bf p})\bigg)^{-1}.
\end{eqnarray*}
With the notations of the proof of Lemma \ref{imp}, for $S_t{\bf x}\in U$ there is a point ${\bf x}_i=S_{\theta_i} {\bf q} \in \Omega$ such that $S_t{\bf x}\in B_{\delta'}({\bf x}_i)$. We have $S_t{\bf x}\in B_\delta({\bf x}_i)$ and, modulo $T$, we have $p_i(S_t{\bf x})=\theta_{\bf x}(t)\in B_\epsilon(\theta_i)$. Hence, the denominator is $>\frac{c_1^2m_1}{2}$ by \eqref{Ftheta}.
\end{proof}
\begin{lemma}\label{change}
For ${\bf x}\in U$ we have
$$S_{\theta_{S_\tau {\bf x}}(t)}\pi(S_\tau {\bf x})=S_{\theta_{\bf x}(t+\tau)}\pi({\bf x})$$
for all $t,\tau\ge 0$.
\end{lemma}
\begin{proof}
We apply \eqref{first} to the point $S_\tau {\bf x}$ and the time $t$, obtaining
$$S_{\theta_{S_\tau {\bf x}}(t)}\pi(S_\tau {\bf x})=\pi(S_t S_\tau {\bf x}).$$
Now we apply \eqref{first} to the point $ {\bf x}$ and the time $t+\tau$, obtaining
$$S_{\theta_{\bf x}(t+\tau)}\pi({\bf x})=\pi(S_{t+\tau} {\bf x}).$$
As both right-hand sides are equal by the semi-flow property, this proves the
statement.
\end{proof}
\vspace{0.3cm}
\noindent \underline{\bf IV. Distance to the periodic orbit}
\noindent
In the following lemma we define a distance of points in $U$ to the periodic orbit, and we show that it decreases exponentially.
\begin{lemma}\label{def_d}
Let $\epsilon<\min(1,\nu/2)$ and $\sigma\ge 2$. Then there is a positively invariant, compact neighborhood $U$ of the periodic orbit $\Omega$ such that
the function $d\in C^{\sigma-1}(U,\mathbb R^+_0)$, defined by
$$d({\bf x})=({\bf x}-\pi({\bf x}))^TM_0(\pi({\bf x}))({\bf x}-\pi({\bf x}))$$
satisfies $d({\bf x})=0$ if and only if ${\bf x}\in \Omega$. Moreover, $d'({\bf x})<0$ for all ${\bf x}\in U\setminus \Omega$
and
\begin{eqnarray*}
d(S_t{\bf x})&\le& e^{2(-\nu+2\epsilon) t}d({\bf x})\text{ for all ${\bf x}\in U$ and all $t\ge 0$,}\\
1-\epsilon&\le& \dot{\theta}_{\bf x}(t)\ \le \ 1+\epsilon\text{ for all $t\ge 0$}.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Note that $d$ is $C^{\sigma-1}$ as all of its terms are. As $M_0({\bf x})$ is positive definite, $d({\bf x})=0$ if and only if ${\bf x}=\pi({\bf x})$, i.e. ${\bf x}\in \Omega$ by Lemma \ref{imp}.
Define
\begin{eqnarray}
c^*&:=&\frac{ \epsilon}{2 p_1\, p_2 \|S^{-1}\| \, \|S\| }>0,
\label{defc}\\
c_4&=&2c_2\frac{2c_3m_2+m_3+c^*m_2}{c_1^2m_1},\label{c4def}\\
v^*&:=&\frac{\epsilon}{2 p_1\, p_2 \|S^{-1}\| \, \|S\| \, c_4(c_3+\|B\|)}\,.\label{defv}
\end{eqnarray}
where the constants were defined in Step II, in the proof of Lemma \ref{imp}.
For ${\bf y}\in U$ we use the Taylor expansion around $ \pi({\bf y})\in \Omega$. Hence, there is a function ${\boldsymbol \psi}({\bf y})$ satisfying
\begin{eqnarray}
{\bf f}({\bf y})&=&{\bf f}(\pi({\bf y}))+D{\bf f}(\pi({\bf y}))({\bf y}-\pi({\bf y}))+{\boldsymbol \psi}({\bf y})\label{Taylor}
\end{eqnarray}
with $\|{\boldsymbol \psi}({\bf y})\|\le c^* \|{\bf y}-\pi({\bf y})\|$ for all ${\bf y} \in U$, noting that $\Omega$ is compact,
where we choose $U$ still to be a positively invariant, compact neighborhood of $\Omega$, possibly smaller than before, such that we also have
\begin{eqnarray}
\|{\bf y}-\pi({\bf y})\|&\le&\delta'=\min\left(v^*, \frac{c_1^2m_1}{2c_2[m_3+ m_2c_3] },\frac{\epsilon}{c_4},1\right)\text{ for all ${\bf y}\in U$.}\label{vest}
\end{eqnarray}
Recall that, due to the definition of $M_0$ and \eqref{first} we have
\begin{eqnarray*}
d({\bf x})&=&({\bf x}-\pi({\bf x}))^T(P^{-1}(\pi({\bf x})))^*(S^{-1})^*S^{-1}P^{-1}(\pi({\bf x}))({\bf x}-\pi({\bf x}))\\
d(S_t{\bf x})&=&(S_t{\bf x}-S_{\theta_{\bf x}(t)}\pi({\bf x}))^T(P^{-1}(S_{\theta_{\bf x}(t)}\pi({\bf x})))^*(S^{-1})^*\\
&&S^{-1}P^{-1}(S_{\theta_{\bf x}(t)}\pi({\bf x}))(S_t{\bf x}-S_{\theta_{\bf x}(t)}\pi({\bf x})).
\end{eqnarray*}
Now let us calculate the orbital derivative, denoting
$\theta(t):=\theta_{\bf x}(t)$.
\begin{eqnarray*}
d'(S_t{\bf x})&=&\bigg[\frac{d}{dt}\left(P^{-1}(S_{\theta(t)}\pi({\bf x}))\right)(S_t{\bf x}-S_{\theta(t)}\pi({\bf x}))\\
&&+P^{-1}(S_{\theta(t)}\pi({\bf x}))[{\bf f}(S_t{\bf x})-\dot{\theta}(t){\bf f}(S_{\theta(t)}\pi({\bf x}))]\bigg]^*\\
&&(S^{-1})^*S^{-1}P^{-1}(S_{\theta(t)}\pi({\bf x}))(S_t{\bf x}-S_{\theta(t)}\pi({\bf x}))\\
&&+
(S_t{\bf x}-S_{\theta(t)}\pi({\bf x}))^T(P^{-1}(S_{\theta(t)}\pi({\bf x})))^*(S^{-1})^*S^{-1}\\
&&
\bigg[\frac{d}{dt}\left(P^{-1}(S_{\theta(t)}\pi({\bf x}))\right)(S_t{\bf x}-S_{\theta(t)}\pi({\bf x}))\\
&&+P^{-1}(S_{\theta(t)}\pi({\bf x}))[{\bf f}(S_t{\bf x})-\dot{\theta}(t){\bf f}(S_{\theta(t)}\pi({\bf x}))]\bigg].
\end{eqnarray*}
We denote ${\bf p}:=\pi({\bf x})$ and ${\bf v}(t):=S_t{\bf x}-S_{\theta(t)}\pi({\bf x})=S_t{\bf x}-\pi(S_t{\bf x})$ by \eqref{first}.
Hence, using \eqref{vest} for ${\bf y}=S_t{\bf x}\in U$ since ${\bf x}\in U$, which is positively invariant, we have
\begin{eqnarray}
\|{\bf v}(t)\|&\le&\delta'=\min\left(v^*, \frac{c_1^2m_1}{2c_2[m_3+ m_2c_3] },\frac{\epsilon}{c_4},1\right)\label{vest2}
\end{eqnarray}
for all $t\ge 0$.
We have
$\frac{d}{dt}\left(P^{-1}(S_{\theta(t)}\pi({\bf x}))\right)=\dot{\theta}(t)(-P^{-1}(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p})+B P^{-1}(S_{\theta(t)}{\bf p}))$ by \eqref{alto}.
Thus,
\begin{eqnarray}
d'(S_t{\bf x})&=&
\bigg[\dot{\theta}(t)(-P^{-1}(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p})+B P^{-1}(S_{\theta(t)}{\bf p})){\bf v}(t)\nonumber\\
&&+P^{-1}(S_{\theta(t)}{\bf p})[{\bf f}(S_t{\bf x})-\dot{\theta}(t){\bf f}(S_{\theta(t)}{\bf p})]\bigg]^*\nonumber\\
&&(S^{-1})^*S^{-1}P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)\nonumber\\
&&+{\bf v}(t)^T(P^{-1}(S_{\theta(t)}{\bf p}))^*(S^{-1})^*S^{-1}\nonumber\\
&&
&&\bigg[\dot{\theta}(t)(-P^{-1}(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p})+B P^{-1}(S_{\theta(t)}{\bf p})){\bf v}(t)\nonumber\\
&&+P^{-1}(S_{\theta(t)}{\bf p})[{\bf f}(S_t{\bf x})-\dot{\theta}(t){\bf f}(S_{\theta(t)}{\bf p})]\bigg].\label{eq33}
\end{eqnarray}
Using the Taylor expansion \eqref{Taylor} for ${\bf y}=S_t{\bf x}$, we obtain with $\pi(S_t{\bf x})=S_{\theta(t)}{\bf p}$,
\begin{eqnarray}
{\bf f}(S_t{\bf x})&=&{\bf f}(S_{\theta(t)}{\bf p})+D{\bf f}(S_{\theta(t)}{\bf p}) {\bf v}(t)+{\boldsymbol \psi}(S_t{\bf x})
\label{Taylor2}
\end{eqnarray}
and thus with \eqref{eqbruch}
\begin{eqnarray*}
\lefteqn{
\dot{\theta}(t)-1}\nonumber\\
&=&\bigg( {\bf f}(S_t{\bf x})^TM_0(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})-{\bf f}(S_{\theta(t)}{\bf p})^TM_0(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\\
&&\hspace{0.5cm}+{\bf v}(t)^TM_0'(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})+{\bf v}(t)^TM_0(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\bigg)\\
&&\bigg({\bf f}(S_{\theta(t)}{\bf p})^TM_0(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})-{\bf v}(t)^TM_0'(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\\
&&\hspace{0.5cm}-{\bf v}(t)^TM_0(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\bigg)^{-1}\\
&=&\bigg( {\bf v}(t)^TD{\bf f}(S_{\theta(t)}{\bf p})^TM_0(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})+{\boldsymbol \psi}(S_t{\bf x})^TM_0(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\\
&&+\hspace{0.5cm} {\bf v}(t)^TM_0'(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})+{\bf v}(t)^TM_0(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\bigg)\\
&&\bigg({\bf f}(S_{\theta(t)}{\bf p})^TM_0(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})-{\bf v}(t)^TM_0'(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\\
&&\hspace{0.5cm}-{\bf v}(t)^TM_0(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})\bigg)^{-1}
\end{eqnarray*}
which shows, using \eqref{vest2} and \eqref{c4def},
\begin{eqnarray}
| \dot{\theta}(t)-1|&\le&\frac{\|{\bf v}(t)\| c_2[2c_3 m_2 +m_3]+
\|{\boldsymbol \psi}(S_t{\bf x})\| m_2c_2 }{c_1^2m_1-\|{\bf v}(t)\|c_2[m_3+ m_2c_3] }\nonumber\\
&\le&2c_2\frac{ 2c_3 m_2 +m_3+
c^* m_2}{c_1^2m_1 }\|{\bf v}(t)\|=c_4\|{\bf v}(t)\|\le \epsilon\,.\label{est}
\end{eqnarray}
In particular, we have
$1-\epsilon\le \dot{\theta}(t)\le 1+\epsilon$, which shows the existence of $\theta(t)$ for all $t\ge 0$, that $\dot{\theta}(t)>0$ for all $t\ge 0$, and that $\theta$ is a bijective function from $[0,\infty)$ to $[0,\infty)$ with $\lim_{t\to\infty}\theta(t)=\infty$.
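A short justification of the bijectivity, using only the bound just derived: for $t_2>t_1\ge 0$ we have
\begin{eqnarray*}
\theta(t_2)-\theta(t_1)&=&\int_{t_1}^{t_2}\dot{\theta}(s)\,ds\ \ge\ (1-\epsilon)(t_2-t_1)\ >\ 0,
\end{eqnarray*}
so $\theta$ is strictly increasing and hence injective; moreover $\theta(t)\ge (1-\epsilon)t\to\infty$, which together with $\theta(0)=0$ and continuity gives surjectivity onto $[0,\infty)$.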
Hence, we have from \eqref{eq33} and \eqref{Taylor2}
\begin{eqnarray*}
d'(S_t{\bf x})&=&\big[(1-\dot{\theta}(t))P^{-1}(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p}){\bf v}(t)
+B P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)\\
&&\hspace{0.5cm}-(1-\dot{\theta}(t))B P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)+(1-\dot{\theta}(t))
P^{-1}(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})
\\
&&\hspace{0.5cm}+P^{-1}(S_{\theta(t)}{\bf p}){\boldsymbol \psi}(S_t{\bf x})\big]^*
(S^{-1})^*S^{-1}P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)\\
&&+{\bf v}(t)^T(P^{-1}(S_{\theta(t)}{\bf p}))^*(S^{-1})^*S^{-1}\big[(1-\dot{\theta}(t))P^{-1}(S_{\theta(t)}{\bf p})D{\bf f}(S_{\theta(t)}{\bf p}){\bf v}(t)\\
&&\hspace{0.5cm}+B P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)-(1-\dot{\theta}(t))B P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)\\
&&\hspace{0.5cm}
+(1-\dot{\theta}(t))
P^{-1}(S_{\theta(t)}{\bf p}){\bf f}(S_{\theta(t)}{\bf p})+P^{-1}(S_{\theta(t)}{\bf p}){\boldsymbol \psi}(S_t{\bf x})\big]\\
&\le&2
\|S^{-1}P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)\|\, \|S^{-1}\| \, \|P^{-1}(S_{\theta(t)}{\bf p})\|\,\\
&&\hspace{0.5cm}
\left[|1-\dot{\theta}(t)|(\|D{\bf f}(S_{\theta(t)}{\bf p})\| +\|B\|)\|{\bf v}(t)\|+\|{\boldsymbol \psi}(S_t{\bf x})\|\right]\\
&&+{\bf v}(t)^*(P^{-1}(S_{\theta(t)}{\bf p}))^*\left[(S^{-1})^*S^{-1}B +B^*(S^{-1})^*S^{-1}
\right]P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)
\end{eqnarray*}
using $$0={\bf f}(S_{\theta(t)}{\bf p})^*M_0(S_{\theta(t)}{\bf p}){\bf v}(t)={\bf f}(S_{\theta(t)}{\bf p})^*(P^{-1}(S_{\theta(t)}{\bf p}))^*(S^{-1})^*S^{-1}P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)$$ by \eqref{theta}.
Setting ${\bf w}(t)=S^{-1}P(S_{\theta(t)}{\bf p})^{-1}{\bf v}(t)$, we obtain, using \eqref{est}
and \eqref{vest2}
\begin{eqnarray*}
d'(S_t{\bf x})
&\le&2\, p_2
\|{\bf w}(t)\|\, \|S^{-1}\| \|{\bf v}(t)\|
\left[c_4 (c_3+\|B\|) \|{\bf v}(t)\|+c^*\right]\\
&&+{\bf w}(t)^* \left[S^{-1}B S+S^*B^*(S^{-1})^*\right]{\bf w}(t)\\
&\le&2 p_1 p_2 \|S\| \,
\|S^{-1}\| \left[c_4 (c_3+\|B\|)\|{\bf v}(t)\|+c^*\right]\,\|{\bf w}(t)\|^2\\
&&+{\bf w}(t)^* \left[A+A^*\right]{\bf w}(t)\\
&\le&2\epsilon\,\|{\bf w}(t)\|^2+{\bf w}(t)^* \left[A+A^*\right]{\bf w}(t)
\end{eqnarray*}
by \eqref{defv} and \eqref{defc}.
Noting that
$$w_1(t)={\bf e}_1^*{\bf w}(t)={\bf f}(S_{\theta(t)}{\bf p})^*(P^{-1}(S_{\theta(t)}{\bf p}))^*(S^{-1})^*S^{-1}P^{-1}(S_{\theta(t)}{\bf p}){\bf v}(t)=0$$
we have with Corollary \ref{coro}
$${\bf w}(t)^* \left[A+A^*\right]{\bf w}(t) \le 2 (-\nu+\epsilon)
\|{\bf w}(t)\|^2.$$
Altogether, we have
\begin{eqnarray*}
d'(S_t{\bf x})
&\le&
\left[ 2\epsilon-2\nu+2\epsilon\right]\|{\bf w}(t)\|^2\\
&=&2(-\nu+2\epsilon) d(S_t{\bf x}),
\end{eqnarray*}which shows $d(S_t{\bf x})\le e^{2(-\nu+2\epsilon) t}d({\bf x})$ for all ${\bf x}\in U$ and $t\ge 0$.
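The last implication is a standard comparison argument; a short sketch: by the estimate above,
\begin{eqnarray*}
\frac{d}{dt}\left(e^{-2(-\nu+2\epsilon)t}d(S_t{\bf x})\right)&=&e^{-2(-\nu+2\epsilon)t}\left(d'(S_t{\bf x})-2(-\nu+2\epsilon)d(S_t{\bf x})\right)\ \le\ 0,
\end{eqnarray*}
so $e^{-2(-\nu+2\epsilon)t}d(S_t{\bf x})\le d({\bf x})$ for all $t\ge 0$.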
\end{proof}
Let us summarize the results obtained so far in the following corollary.
\begin{corollary}\label{help_other}
Let $\Omega$ be an exponentially stable periodic orbit of $\dot{{\bf x}}={\bf f}({\bf x})$ with ${\bf f}\in C^\sigma(\mathbb R^n,\mathbb R^n)$ and $\sigma\ge 2$, such that
$-\nu<0$ is the maximal real part of all non-trivial Floquet exponents.
For $\epsilon_0\in (0,\min(\nu,1))$ there is a compact, positively invariant neighborhood $U$ of $\Omega$ with $\Omega\subset U^\circ$ and $U\subset A(\Omega)$, and a map $\pi\in C^{\sigma-1}( U, \Omega)$ with $\pi({\bf x})={\bf x}$ if and only if ${\bf x}\in\Omega$.
Furthermore, for a fixed ${\bf x}\in U$, there is a bijective $C^{\sigma-1}$ map
$\theta_{\bf x}\colon [0,\infty)\to[0,\infty)$ with inverse
$t_{\bf x}=\theta_{\bf x}^{-1}\in C^{\sigma-1}( [0,\infty),[0,\infty))$
such that $\theta_{\bf x}(0)=0$ and
$$\pi(S_t{\bf x})=S_{\theta_{\bf x}(t)}\pi({\bf x})$$
for all $t\in [0,\infty)$.
We have $\dot{\theta}_{\bf x}(t)\in \left[1-\epsilon_0,1+\epsilon_0\right]$ for all $t\ge 0$ and
$\dot{t}_{\bf x}(\theta)\in \left[1-\epsilon_0,1+\epsilon_0\right]$ for all $\theta\ge 0$.
Finally, there is a constant $C>0$ such that
\begin{eqnarray}|\dot{t}_{\bf x}(\theta)-1|&\le& Ce^{(-\nu+\epsilon_0) \theta}\label{res1}\\
\|S_{t_{\bf x}(\theta)}{\bf x}-S_\theta \pi({\bf x})\|&\le& Ce^{(-\nu+\epsilon_0) \theta}\|{\bf x}-\pi({\bf x})\|\label{res2}
\end{eqnarray}
for all $\theta\ge 0$ and all ${\bf x}\in U$.
\end{corollary}
\begin{proof}
Setting $\epsilon:=\frac{\epsilon_0}{2(1+\nu)}\le\min\left( \frac{\epsilon_0}{2},\frac{1}{2}\right)\le \min\left( \frac{\nu}{2},1\right)$, all results follow directly from Lemma \ref{def_d} by using the inverse $t(\theta)$ of $\theta(t)$. Indeed, we have\begin{eqnarray*}
|\dot{t}_{\bf x}(\theta)-1|&=&\left|\frac{1-\dot{\theta}_{\bf x}(t(\theta))}{\dot{\theta}_{\bf x}(t(\theta))}\right|\\
&\le&\frac{\epsilon}{1-\epsilon}\\
&\le&2\epsilon\le\epsilon_0\,.
\end{eqnarray*}
Furthermore, we have by \eqref{est} and noting that
$ m_1 \|S_{t_{\bf x}(\theta)}{\bf x}-S_\theta \pi({\bf x})\|^2\le d(S_{t_{\bf x}(\theta)}{\bf x}) \le m_2 \|S_{t_{\bf x}(\theta)}{\bf x}-S_\theta \pi({\bf x})\|^2$
\begin{eqnarray*}
|\dot{t}_{\bf x}(\theta)-1|&\le&
\left|\frac{1-\dot{\theta}_{\bf x}(t(\theta))}{1/2}\right|\\
&\le&2c_4\|{\bf v}(t(\theta))\|\\
&\le&\frac{2c_4}{\sqrt{m_1}}\sqrt{d(S_{t(\theta)}{\bf x})}\\
&\le&Ce^{(-\nu+2\epsilon)t(\theta)}\sqrt{d({\bf x})}\\
&\le&Ce^{(-\nu+2\epsilon)(1-2\epsilon)\theta}\\
&\le&Ce^{(-\nu+2\epsilon(1+\nu)-4\epsilon^2)\theta}\\
&\le&Ce^{(-\nu+\epsilon_0)\theta},
\end{eqnarray*}
using $t(\theta)=\int_0^\theta \dot{t}(\tau)\,d\tau\ge \theta(1-2\epsilon)$ and that $d({\bf x})$ is bounded in $U$.
Similarly, we can prove \eqref{res2} from Lemma \ref{def_d}.
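For the reader's convenience, here is a sketch of the estimate behind \eqref{res2}, using only $m_1\|{\bf y}-\pi({\bf y})\|^2\le d({\bf y})\le m_2\|{\bf y}-\pi({\bf y})\|^2$, Lemma \ref{def_d} and $t_{\bf x}(\theta)\ge(1-2\epsilon)\theta$:
\begin{eqnarray*}
\|S_{t_{\bf x}(\theta)}{\bf x}-S_\theta\pi({\bf x})\|&\le&\frac{1}{\sqrt{m_1}}\sqrt{d(S_{t_{\bf x}(\theta)}{\bf x})}\ \le\ \frac{1}{\sqrt{m_1}}\,e^{(-\nu+2\epsilon)t_{\bf x}(\theta)}\sqrt{d({\bf x})}\\
&\le&\sqrt{\frac{m_2}{m_1}}\,e^{(-\nu+2\epsilon)(1-2\epsilon)\theta}\,\|{\bf x}-\pi({\bf x})\|\ \le\ Ce^{(-\nu+\epsilon_0)\theta}\|{\bf x}-\pi({\bf x})\|
\end{eqnarray*}
for a suitable constant $C>0$, where the first step uses $S_\theta\pi({\bf x})=\pi(S_{t_{\bf x}(\theta)}{\bf x})$ by \eqref{first}.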
\end{proof}
\vspace{0.3cm}
\noindent \underline{\bf V. Definition of $M_1$ and $M$ in $A(\Omega)$}
\noindent
For all ${\bf x}\in U$ we have defined the distance
$$d({\bf x})=({\bf x}-\pi({\bf x}))^TM_0(\pi({\bf x}))({\bf x}-\pi({\bf x}))$$
in Lemma \ref{def_d}
which is $C^{\sigma-1}$. Let $\iota>0$ be so small that
the set $\Omega_{2\iota}:=\{{\bf x}\in U\colon d({\bf x})\le 2\iota\}$ satisfies $\Omega_{2\iota}\subset U^\circ$. Define
the $C^\infty$ functions $h_1\colon \Omega_\iota \to [0,1]$, $h_2\colon \Omega_{2\iota} \to [0,1]$ such that $h_1({\bf x})=1$ for all $d({\bf x})\le \frac{\iota}{3}$ and $h_1({\bf x})=0$ for all $d({\bf x})\ge \frac{2}{3} \iota$, and $h_2({\bf x})=1$ for all $d({\bf x})\le \frac{4}{3}\iota$ and $h_2({\bf x})=0$ for all $d({\bf x})\ge \frac{5}{3} \iota$. Set
$$M_1({\bf x}):=\left\{\begin{array}{ll}
I&\mbox{ if }{\bf x}\not \in \Omega_{2\iota},\\
(1-h_2({\bf x}))I+h_2({\bf x})M_0(\pi({\bf x}))&\mbox{ if }{\bf x} \in \Omega_{2\iota}.\end{array}\right.$$
It is clear that $M_1({\bf x})$ is positive definite for all ${\bf x}\in \mathbb R^n$, $M_1$ is $C^{\sigma-1}$ and $M_1(\pi({\bf x}))=M_0(\pi({\bf x}))$ for all ${\bf x}\in \Omega_{\frac{4}{3}\iota}$.
We will define the Riemannian metric $M$ through $M_1$ and a scalar-valued function $V\colon A(\Omega)\to \mathbb R$, which will be defined later.
Let us denote $\mu:=\nu-\epsilon>0$. The function $V$ will be continuous and continuously orbitally differentiable and satisfy
\begin{eqnarray}
V'({\bf x})&=&-L_{M_1}({\bf x})+r({\bf x}), \text{ where }\label{eqV}\\
r({\bf x})&=&\left\{\begin{array}{ll}
-\mu&\mbox{ if }{\bf x}\not \in \Omega_\iota,\\
-\mu(1-h_1({\bf x}))+h_1({\bf x})L_{M_1}(\pi({\bf x}))&\mbox{ if }{\bf x} \in \Omega_\iota.\end{array}\right.
\end{eqnarray}
Note that $$r({\bf x})\le -\mu$$
for all ${\bf x}\in \mathbb R^n$. Indeed, for ${\bf x}\in\Omega_\iota$ we have $L_{M_1}(\pi({\bf x}))=L_{M_0}(\pi({\bf x}))\le -\mu$ as $\pi({\bf x})\in \Omega$, see \eqref{LMv}, and thus
\begin{eqnarray*}
r({\bf x})&=& -\mu+\underbrace{h_1({\bf x})}_{\ge 0}\underbrace{(\mu+L_{M_1}(\pi({\bf x})))}_{\le 0}
\ \le\ -\mu\,.
\end{eqnarray*}
Then we define
$$M({\bf x})=e^{2V({\bf x})}M_1({\bf x}).$$
We obtain by Lemma \ref{lem}
$$L_M({\bf x})=L_{M_1}({\bf x})+V'({\bf x})
=L_{M_1}({\bf x})-L_{M_1}({\bf x})+r({\bf x})\le-\mu.$$
This shows the theorem. In the last steps we will define the function $V$ and prove the properties stated above.
\vspace{0.3cm}
\noindent \underline{\bf VI. Definition of $V_{loc}$}
\noindent
We define $V_{loc}({\bf x})$ for $ {\bf x}\in \Omega_\iota$. Note that $\Omega_\iota$ is positively invariant by Lemma \ref{def_d}, so
$S_t {\bf x}\in \Omega_\iota$ for all $t\ge 0$. We define
\begin{eqnarray}
V_{loc}({\bf x})&=&\int_0^\infty [L_{M_1}(S_t {\bf x})-L_{M_1}(S_{\theta_{\bf x}(t)}
\pi({\bf x}))]\,dt.\label{Vdefi}
\end{eqnarray}
We have $V_{loc}({\bf x})=0$ for all ${\bf x}\in\Omega$.
We will show that $V_{loc}$ is well-defined, continuous and orbitally continuously differentiable for all ${\bf x}\in \Omega_\iota$ and that
(\ref{eqV}) holds for all ${\bf x}\in \overline{\Omega_{\iota/3}}$.
For ${\bf x}\in U$, define
\begin{eqnarray*}
g_T(\tau,{\bf x})&=&\int_\tau^{T+\tau} [L_{M_1}(S_t {\bf x})-L_{M_1}(S_{\theta_{\bf x}(t)}
\pi({\bf x}))]\,dt.
\end{eqnarray*}
By Lemma \ref{def_d} there is a constant $C>0$ such that, defining ${\bf p}:=\pi({\bf x})\in \Omega$,
\begin{eqnarray}
\label{expsta}
\|S_t{\bf x}-S_{\theta_{\bf x}(t)}{\bf p}\|&\le& Ce^{-\mu_0 t}
\end{eqnarray}
for all $t\ge 0$ and all ${\bf x}\in U$ with $\mu_0:=\nu-2\epsilon>0$; note that $S_{\theta_{\bf x}(t)}{\bf p}=\pi(S_t{\bf x})$ by \eqref{first}.
Now, we use Lemma \ref{Lipschitz} and $\sigma\ge 3$, showing that $L_{M_1}$ is Lipschitz-continuous on the compact set $U$; note that $\sigma-1\ge 2$.
Hence,
\begin{eqnarray*}
\left|L_{M_1}(S_t {\bf x})-L_{M_1}(S_{\theta_{\bf x}(t)}
\pi({\bf x}))\right|
&\le&L C_1\left\|S_t {\bf x}-S_{\theta_{{\bf x}}(t)}{\bf p}\right\|\\
&\le&L C_2 e^{-\mu_0 t}
\end{eqnarray*}
by \eqref{expsta},
which is integrable over $[0,\infty)$. Hence, by Lebesgue's dominated convergence theorem, the function $g_T(\tau,{\bf x})$ converges point-wise for $T\to \infty$ for all $\tau\ge 0$ and ${\bf x}\in U$.
Choose $\theta_0>0$ so small that $S_{-\theta_0}\Omega_{\iota}\subset U$.
We have that
\begin{eqnarray*}
\lefteqn{\frac{\partial }{\partial \tau}g_T(\tau,{\bf x})}\\&=&
[L_{M_1}(S_{T+\tau} {\bf x})-L_{M_1}(S_{\theta_{\bf x}(T+\tau)}
\pi({\bf x}))]
- \left(L_{M_1}(S_{\tau}{\bf x})-L_{M_1}(S_{\theta_{{\bf x}}(\tau)}{\bf p})\right)\\
&=&
[L_{M_1}(S_T(S_\tau {\bf x}))-L_{M_1}(S_{\theta_{S_\tau{\bf x}}(T)}
\pi(S_\tau{\bf x}))]
- \left(L_{M_1}(S_{\tau}{\bf x})-L_{M_1}(S_{\theta_{{\bf x}}(\tau)}{\bf p})\right)
\end{eqnarray*}
by Lemma \ref{change}. For ${\bf x}\in\Omega_\iota$, the right-hand side
converges uniformly in $\tau \in (-\theta_0,\theta_0)$
as $T\to \infty$ to
$- \left(L_{M_1}(S_{\tau}{\bf x})-L_{M_1}(S_{\theta_{{\bf x}}(\tau)}{\bf p})\right)$ by the same estimate as above. Hence, we can exchange $\frac{d}{d\tau}$ and $\lim_{T\to\infty}$.
Altogether, we thus have for all ${\bf x}\in\Omega_\iota$, using Lemma \ref{change}
\begin{eqnarray*}
V_{loc}'({\bf x})&=&\frac{d}{d\tau}V_{loc}(S_\tau {\bf x})\bigg|_{\tau=0}\\
&=&\frac{d}{d\tau}\int_0^\infty [L_{M_1}(S_{t+\tau} {\bf x})-L_{M_1}(S_{\theta_{S_\tau{\bf x}}
(t)}
\pi(S_\tau{\bf x}))]\,dt\bigg|_{\tau=0}\\
&=&\frac{d}{d\tau}\lim_{T\to\infty}\int_0^T [L_{M_1}(S_{t+\tau} {\bf x})-L_{M_1}(S_{\theta_{\bf x}(t+\tau)}
\pi({\bf x}))]\,dt\bigg|_{\tau=0}\\
&=&\frac{d}{d\tau}\lim_{T\to\infty}\int_\tau^{T+\tau} [L_{M_1}(S_{t} {\bf x})-L_{M_1}(S_{\theta_{\bf x} (t)}
\pi({\bf x}))]\,dt\bigg|_{\tau=0}\\
&=&\frac{d}{d\tau}\lim_{T\to \infty}g_T(\tau,{\bf x})\bigg|_{\tau=0}\\
&=&\lim_{T\to \infty}\frac{d}{d\tau}g_T(\tau,{\bf x})\bigg|_{\tau=0}\\
&=&- L_{M_1}({\bf x})+L_{M_1}({\bf p})
\end{eqnarray*}
and in particular, that $V_{loc}$ is continuously orbitally differentiable.
Note that $V_{loc}'({\bf x})=-L_{M_1}({\bf x})+r({\bf x})$
for all ${\bf x}\in\overline{\Omega_{\iota/3}}$.
\noindent \underline{\bf VII. Definition of $V_{glob}$ in $A(\Omega)$}
\noindent
For the global part note that
$V_{loc}$ is defined and smooth in $\Omega_{\iota}$ and we have
$V_{loc}'({\bf x})=-L_{M_1}({\bf x})+r({\bf x})$ for all ${\bf x}\in \overline{\Omega_{\iota/3}}$.
The global function $V_{glob}\colon A(\Omega)\setminus \Omega\to \mathbb R$ is defined as the solution of the non-characteristic Cauchy problem
\begin{eqnarray}
\begin{array}{lcl}
\nabla V_{glob}({\bf x})\cdot {\bf f}({\bf x})&=&-L_{M_1}({\bf x})+r({\bf x})
\mbox{ for }{\bf x}\in A(\Omega)\setminus
\Omega\\
V_{glob}({\bf x})&=&V_{loc}({\bf x}) \mbox{ for }
{\bf x}\in\Gamma,\end{array}\label{cauchy}
\end{eqnarray} where
$\Gamma=\{{\bf x}\in U\mid d({\bf x})=\iota/3\}$.
In particular, we can construct the solution by first defining the function $\tau\in C^{\sigma-1}(A(\Omega)\setminus \Omega,\mathbb R)$ implicitly by
$$d(S_{\tau}{\bf x})=\iota/3.$$
Since ${\bf x}\in A(\Omega)\setminus \Omega$, there exists a $\tau$ satisfying the equation, and since $d'({\bf x})<0$ for all ${\bf x}\in \Gamma$, $\tau({\bf x})$ is unique. The function $\tau$ is $C^{\sigma-1}$, since $d$ and $S_\tau$ are. We have
$\tau'({\bf x})=-1$; indeed, by uniqueness $\tau(S_\theta{\bf x})+\theta=\tau({\bf x})$ for all $\theta\ge 0$, and differentiating with respect to $\theta$ at $\theta=0$ yields $\tau'({\bf x})=-1$. Then the function
$$V_{glob}({\bf x})=\int_0^{\tau({\bf x})} q(S_t{\bf x})\,dt+V_{loc}(S_{\tau({\bf x})}({\bf x}))$$
with $q({\bf x}):=L_{M_1}({\bf x})-r({\bf x})$ is continuous and orbitally continuously differentiable and satisfies \eqref{cauchy}, noting that $S_{\tau({\bf x})}({\bf x})=S_{\tau(S_\theta{\bf x})}(S_\theta{\bf x})$ for all $\theta\ge 0$. Indeed, for ${\bf x}\in \Gamma$ we have $ V_{glob}({\bf x})=V_{loc}({\bf x})$ and we have
\begin{eqnarray*}
V_{glob}'({\bf x})&=&\frac{d}{d\theta}\left(\int_0^{\tau(S_\theta {\bf x})} q(S_{t+\theta}{\bf x})\,dt+V_{loc}(S_{\tau(S_\theta {\bf x})}(S_\theta {\bf x}))\right)\bigg|_{\theta=0}\\
&=&\frac{d}{d\theta}\left(\int_\theta^{\tau(S_\theta {\bf x})+\theta} q(S_{t}{\bf x})\,dt+V_{loc}(S_{\tau( {\bf x})}( {\bf x}))\right)\bigg|_{\theta=0}\\
&=&\left( q(S_{\tau(S_\theta {\bf x})+\theta}{\bf x})(\tau'( {\bf x})+1)
-q(S_\theta {\bf x})\right)\bigg|_{\theta=0}\\
&=&-q({\bf x})
\end{eqnarray*}
since $\tau'({\bf x})=-1$.
Note that we have $V_{glob}({\bf x})=V_{loc}({\bf x})$ for ${\bf x}\in \overline{\Omega_{\iota/3}}\setminus \Omega$, and hence $V_{glob}$ can be extended to a continuous and orbitally continuously differentiable function $V$ on $A(\Omega)$ satisfying \eqref{eqV} by setting $V_{glob}({\bf x}):=V_{loc}({\bf x})=0$ for all ${\bf x} \in \Omega$.
This proves the theorem.
\end{proof}
\section*{Conclusions}
In this paper we have proven a converse theorem, showing the existence of a contraction metric for an exponentially stable periodic orbit. The metric is defined in its basin of attraction and the bound on the function $L_M$ is arbitrarily close to the true exponential rate of attraction.
\begin{appendix}
\section{Local Lipschitz-continuity of $L_M$}
In the appendix we prove that the function $L_M$ is locally Lipschitz continuous.
\begin{lemma}\label{Lipschitz} Let ${\bf f}\in C^2(\mathbb R^n,\mathbb R^n)$ and $M\in C^2(\mathbb R^n,\mathbb S^n)$ such that $M({\bf x})$ is positive definite for all ${\bf x}\in \mathbb R^n$.
Then $L_M$ is locally Lipschitz continuous on $D=\{{\bf x}\in \mathbb R^n\mid {\bf f}({\bf x})\not={\boldsymbol 0}\}$.
\end{lemma}
\begin{proof}
For ${\bf y}\in D$ we define a projection $P_{\bf y}\colon \mathbb R^n\to\mathbb R^n$ onto the $(n-1)$-dimensional space of vectors ${\bf w}\in \mathbb R^n$ with ${\bf f}({\bf y})^TM({\bf y}){\bf w}=0$ by
$$P_{\bf y} {\bf v} ={\bf v} -\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}{\bf f}({\bf y})$$
for all ${\bf y}\in D$ and all ${\bf v}\in \mathbb R^n$.
Note that indeed
\begin{eqnarray*}
{\bf f}({\bf y})^TM({\bf y})P_{\bf y} {\bf v}
&=&{\bf f}({\bf y})^TM({\bf y}){\bf v} -\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}
{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})\\
&=&{\boldsymbol 0}.
\end{eqnarray*}
Fix ${\bf x}\in D$ and choose a basis ${\bf v}_1={\bf f}({\bf x}),{\bf v}_2,\ldots,{\bf v}_n$ of $\mathbb R^n$ such that ${\bf v}_i^TM({\bf x}){\bf v}_j=0$ for $i\not=j$.
Choose $\epsilon>0$ such that
\begin{eqnarray}
{\bf f}({\bf y})^TM({\bf x}){\bf f}({\bf x})&\not=&0\label{not0}
\end{eqnarray} holds for all ${\bf y}\in B_\epsilon({\bf x})$; note that for ${\bf y}={\bf x}$ we have ${\bf f}({\bf x})^TM({\bf x}){\bf f}({\bf x})\not=0$.
For ${\bf y}\in B_\epsilon({\bf x})$ we define ${\bf w}_1={\bf f}({\bf y})$ and ${\bf w}_i=P_{\bf y}{\bf v}_i$ for $i=2,\ldots,n$. We show that $({\bf w}_1,\ldots,{\bf w}_n)$ is a basis of $\mathbb R^n$.
Let us first show that ${\bf w}_i\not={\boldsymbol 0}$ for $i=2,\ldots,n$. Assuming the opposite, we have
\begin{eqnarray}
{\bf v}_i&=&\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}_i}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}{\bf f}({\bf y})\label{first_eq}\\
0&=&\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}_i}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}\nonumber
\end{eqnarray}
where the second equation follows by multiplying \eqref{first_eq} by ${\bf f}({\bf x})^TM({\bf x})$ from the left, using ${\bf f}({\bf x})^TM({\bf x}){\bf v}_i=0$ for $i\ge 2$ and dividing by ${\bf f}({\bf x})^TM({\bf x}){\bf f}({\bf y})\not=0$, see \eqref{not0}. This, however, implies by \eqref{first_eq} that ${\bf v}_i={\boldsymbol 0}$, which is a contradiction.
${\bf w}_1\not={\boldsymbol 0}$ follows directly from \eqref{not0}.
We express ${\bf f}({\bf y})=\sum_{j=1}^n\beta_j{\bf v}_j$ and note that multiplying this equation by ${\bf f}({\bf x})^TM({\bf x})$ from the left gives $$0\not={\bf f}({\bf x})^TM({\bf x}){\bf f}({\bf y})=\beta_1 {\bf f}({\bf x})^TM({\bf x}){\bf f}({\bf x})$$
by \eqref{not0}, i.e. in particular $\beta_1\not=0$.
To show that the ${\bf w}_i$ form a basis, we assume $\sum_{i=1}^n\alpha_i{\bf w}_i={\boldsymbol 0}$. Multiplying this equation by ${\bf f}({\bf y})^TM({\bf y})$ from the left gives
$\alpha_1 {\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})=0$ by the projection property, hence $\alpha_1=0$.
Hence,
\begin{eqnarray*}
{\boldsymbol 0}&=&\sum_{i=2}^n\alpha_i\left[{\bf v}_i-\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}_i}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}{\bf f}({\bf y})\right]\\
&=&\sum_{i=2}^n\alpha_i{\bf v}_i-
\sum_{i=2}^n\sum_{j=1}^n \alpha_i\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}_i}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}\beta_j{\bf v}_j.
\end{eqnarray*}
Using that ${\bf v}_j$ is a basis, we can conclude that the coefficient in front of ${\bf v}_1$ is zero, namely
$$
\sum_{i=2}^n \alpha_i\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}_i}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}\beta_1=0.$$
Since $\beta_1\not=0$, we have $
\sum_{i=2}^n \alpha_i\frac{{\bf f}({\bf y})^TM({\bf y}){\bf v}_i}{{\bf f}({\bf y})^TM({\bf y}){\bf f}({\bf y})}=0$.
Plugging this back in, we obtain $\sum_{i=2}^n\alpha_i{\bf v}_i={\boldsymbol 0}$, which shows $\alpha_2=\ldots=\alpha_n=0$ as the ${\bf v}_i$ are linearly independent.
Now define the matrix-valued function $Q\colon B_\epsilon({\bf x})\to \mathbb R^{n\times n}$ by the columns
$$Q({\bf y})=({\bf w}_1({\bf y}),\ldots,{\bf w}_n({\bf y})).$$
Note that
$Q\in C^2(B_\epsilon({\bf x}),\mathbb R^{n\times n})$ due to the smoothness of ${\bf f}$ and $M$, and $Q({\bf y})$ is invertible for every ${\bf y}$. We have ${\bf w}^TM({\bf y}){\bf f}({\bf y})=0$ if and only if ${\bf w}\in \operatorname{span} ({\bf w}_2({\bf y}),\ldots,{\bf w}_n({\bf y}))$, which in turn is equivalent to
${\bf u}\in \operatorname{span}({\bf e}_2,\ldots,{\bf e}_n)=:E_{n-1}$, where ${\bf u}=Q({\bf y})^{-1}{\bf w}$ and ${\bf e}_1,\ldots,{\bf e}_n$ denotes the standard basis in $\mathbb R^n$.
Now we write
\begin{eqnarray*}
\lefteqn{L_M({\bf y})}\\
&=&\max_{{\bf w}^TM({\bf y}){\bf w}=1,{\bf w}^TM({\bf y}){\bf f}({\bf y})=0}
\frac{1}{2}{\bf w}^T\left[M({\bf y})D{\bf f}({\bf y})+D{\bf f}({\bf y})^TM({\bf y})+M'({\bf y})\right]{\bf w}\\
&=&
\max_{{\bf u}^TQ({\bf y})^TM({\bf y})Q({\bf y}){\bf u}=1,{\bf u}\in E_{n-1}}
\frac{1}{2}{\bf u}^TQ({\bf y})^T\\
&&\hspace{1cm}\left[M({\bf y})D{\bf f}({\bf y})+D{\bf f}({\bf y})^TM({\bf y})+M'({\bf y})\right]Q({\bf y}){\bf u}.
\end{eqnarray*}
Denoting by $[A]_{n-1}\in \mathbb S^{n-1}$ the lower-right $(n-1)\times(n-1)$ submatrix of $A\in \mathbb S^{n}$ and
with ${\bf u}=\left(\begin{array}{l}0\\\widetilde{\bf u} \end{array}\right)$, where $\widetilde{\bf u}\in\mathbb R^{n-1}$ we get
\begin{eqnarray*}
L_M({\bf y})
&=&
\max_{\widetilde{\bf u}^T[Q({\bf y})^TM({\bf y})Q({\bf y})]_{n-1}\widetilde{\bf u}=1,\widetilde{\bf u}\in \mathbb R^{n-1}}
\frac{1}{2}\widetilde{\bf u}^T\bigg[Q({\bf y})^T\big[M({\bf y})D{\bf f}({\bf y})\\
&&\hspace{1cm}
+D{\bf f}({\bf y})^TM({\bf y})+M'({\bf y})\big]Q({\bf y})\bigg]_{n-1}\widetilde{\bf u}.
\end{eqnarray*}
Now denote by $\operatorname{Chol}(A)$ the unique Cholesky decomposition of the symmetric, positive definite matrix $A\in \mathbb S^{n-1}$, such that $\operatorname{Chol}(A)$ is an invertible, upper triangular matrix with $\operatorname{Chol}(A)^T\operatorname{Chol}(A)=A$. Denoting $C({\bf y}):=\operatorname{Chol}([Q({\bf y})^TM({\bf y})Q({\bf y})]_{n-1})\in \mathbb R^{(n-1)\times (n-1)}$ and $\widetilde{\bf v}=C({\bf y})\widetilde{\bf u}\in\mathbb R^{n-1}$ we have
\begin{eqnarray*}
L_M({\bf y})
&=&
\max_{\|\widetilde{\bf v}\|=1,\widetilde{\bf v}\in \mathbb R^{n-1}}
\frac{1}{2}\widetilde{\bf v}^T(C^{-1}({\bf y}))^T\\
&&\hspace{0.8cm}\left[Q({\bf y})^T\left[M({\bf y})D{\bf f}({\bf y})+D{\bf f}({\bf y})^TM({\bf y})+M'({\bf y})\right]Q({\bf y})\right]_{n-1}C^{-1}({\bf y})\widetilde{\bf v}\\
&=&
\max_{\|\widetilde{\bf v}\|=1,\widetilde{\bf v}\in \mathbb R^{n-1}}
\widetilde{\bf v}^TH({\bf y})\widetilde{\bf v}\\
&=&\lambda_{max}(H({\bf y}))
\end{eqnarray*}
where $H({\bf y})\in \mathbb S^{n-1}$ is defined by
\begin{eqnarray*}H({\bf y})&=&\frac{1}{2}(C^{-1}({\bf y}))^T\left[Q({\bf y})^T\left[M({\bf y})D{\bf f}({\bf y})+D{\bf f}({\bf y})^TM({\bf y})+M'({\bf y})\right]Q({\bf y})\right]_{n-1}\\
&&C^{-1}({\bf y}).
\end{eqnarray*}
The function ${\bf y}\to H({\bf y})$ is continuously differentiable as the Cholesky decomposition, the inverse, the operation $[\cdot]_{n-1}$, $Q$, $M$, $D{\bf f}$ and $M'$ are continuously differentiable by the assumptions. Hence, the function $H({\bf y})$ is locally Lipschitz-continuous. The function $\lambda_{max}$ is globally Lipschitz-continuous, hence, $L_M$ is locally Lipschitz-continuous.
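For completeness, the global Lipschitz continuity of $\lambda_{max}$ on the symmetric matrices can be seen, for instance, from the Rayleigh quotient characterization: for symmetric $A,B$,
$$\lambda_{max}(A)=\max_{\|\widetilde{\bf v}\|=1}\widetilde{\bf v}^TA\widetilde{\bf v}\le \max_{\|\widetilde{\bf v}\|=1}\widetilde{\bf v}^TB\widetilde{\bf v}+\max_{\|\widetilde{\bf v}\|=1}\widetilde{\bf v}^T(A-B)\widetilde{\bf v}\le \lambda_{max}(B)+\|A-B\|,$$
where $\|\cdot\|$ denotes the spectral norm, and exchanging $A$ and $B$ gives $|\lambda_{max}(A)-\lambda_{max}(B)|\le \|A-B\|$.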
\end{proof}
\end{appendix}
{\small
\bibliographystyle{plain}
\section{Introduction}
\label{sec:introduction}
The class of convolutional neural networks (CNNs)~\cite{cite_11} is one of the key deep learning architectures and currently a very active field of study. The basic CNN architecture~\cite{cite_12} comprises alternately stacked layers that perform hierarchical feature learning. Over the past decade, CNNs have achieved huge success compared to conventional machine learning techniques in solving a wide range of problems in different fields of application such as classification \cite{cite_11,cite_1,cite_28}, object detection \cite{cite_2,cite_3} and recognition~\cite{cite_4}, semantic segmentation~\cite{cite_5} and many others \cite{cite_30,cite_38}. The hierarchical feature learning process~\cite{cite_6} of CNNs starts with the inputs that are fed into the network and outputs class label scores at the end. Initial convolution layers learn basic level features, which are local patterns, directly from raw input image pixels by convolving with learnable filter kernels. The following layers of the network develop high-level abstractions from the extracted features in hierarchical order. Generally, these preliminary level features are edges, colors, and textures. Edges are defined as sharp variations or discontinuities in pixel values. These sharp variation points localize the edge contours, which are essential for developing high-level features.
This makes edge feature identification an important and fundamental step of the feature learning process. Many different types of edges exist, created by object boundaries, highlights, shadows, textures, occlusion, etc. Though the convolutions extract edge features during the preliminary learning, enhancing edge features at the input level can provide an additional boost. We propose two edge enhancing mechanisms using the wavelet transform and develop an additional preprocessing layer to implement each method. We then empirically investigate the performance of the proposed methods. The experimental setup uses the common and publicly available datasets MNIST, SVHN, and CIFAR-10. Our results demonstrate that both proposed methods outperform the previous work and the baselines.
The rest of this paper is organized as follows. We first review the related work in section~\ref{related_work}. Section~\ref{approach} discusses our approach, including the mathematical background of the wavelet based modulus maxima edge detection, and the proposed edge enhancement methods are presented in section~\ref{methodology}. Section~\ref{experimental_setup} describes the implementations and the datasets we have used. The obtained results and their discussion are in section~\ref{results}.
\section{Related Work}
\label{related_work}
The wavelet transform~\cite{cite_15} is capable of providing powerful time-frequency representations. Furthermore, in terms of images, the discrete wavelet transform has the ability to decompose an image into different levels of frequency information. There have been several attempts to use this technique in the context of machine learning and deep learning, as follows. The wavelet network in~\cite{cite_32} results from a combination of wavelets and a neural network \cite{cite_33,cite_34}; it is an auto-encoder constituted of three layers. The authors have examined the effect on classification tasks of using three different wavelet basis functions. Another work on handwritten digit recognition using SVM and KNN classifiers in~\cite{cite_35} is also powered by wavelet features.
In terms of CNNs, the wavelet transform has been used to solve computer vision problems because of its ability to extract diverse frequency information from images. Classification of images~\cite{cite_25,cite_24} and textures~\cite{cite_26}, and multi-scale face super-resolution~\cite{cite_27} are a few examples. The work in~\cite{cite_24} pre-processes the input data in the wavelet domain before feeding it to the network. The authors propose two methods depending on how the decomposed wavelet coefficients are fused together. Another work, in~\cite{cite_25}, applies the discrete wavelet transform to extract features, and a neural network performs classification on the resulting feature vector. The authors in~\cite{cite_26} employ the discrete wavelet transform to convert images into the wavelet domain and exploit important features to perform classification using a CNN. Furthermore, Liu \textit{et al.}~\cite{cite_37} introduced a multi-level wavelet CNN model for image restoration. However, most of this work uses the decomposed wavelet coefficients directly rather than processing them to enhance particular features such as edges.
Therefore, we introduce two mechanisms that apply the wavelet transform to enhance the edge features of input images and thereby improve the classification performance of CNNs. The concept of edge detection~\cite{cite_14,cite_7,cite_8,cite_13} refers to identifying edges, from the contours of small structures to the boundaries of large visual objects in an image. Mallat~\cite{cite_14} presented a multi-scale edge detection algorithm using the wavelet transform that develops edge representations by finding the local maxima of the wavelet transform modulus, which is equivalent to gradient-based edge detection. This is the base technique for one of our proposed methods. The other method is a naive edge enhancement mechanism, also based on the wavelet transform, that reconstructs the image from a limited set of coefficients after decomposition.
\section{Approach}
\label{approach}
\subsection{Discrete wavelet transform}
\label{dwt}
The discrete wavelet transform~\cite{cite_17,cite_16} is heavily used in image processing applications~\cite{cite_19,cite_18}. It decomposes an image into different levels of frequency interpretation by simultaneously passing it through a set of low pass and high pass filters. The resulting wavelet coefficients contain the decomposed image detail information. There are two main types of coefficients: detail coefficients and approximation coefficients. One level of decomposition down-samples the coefficients by a factor of two to prevent information redundancy. After the first level of decomposition, the resulting approximation coefficient can be subjected to further wavelet decompositions to generate second level coefficients and so on, as illustrated in Fig. \ref{fig.1}. The detail coefficients contain the high-frequency information and are generated by passing through the high pass filters. The approximation coefficients provide less detailed, low-frequency versions of the original image and are generated by passing through the low pass filters. Since an image is a two-dimensional (2D) signal, the discrete wavelet transform is applied in a 2D manner. This 2D transform is applied to an image as two operations of the one-dimensional discrete wavelet transform along the rows and columns separately, which results in four different wavelet coefficients. The resulting coefficients comprise three detail coefficients containing vertical, horizontal and diagonal details, and one approximation coefficient containing low-frequency details.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=\linewidth]{images/dwt_process.jpg}
\end{center}
\caption{Discrete wavelet transform decompositions.}
\label{fig.1}
\end{figure*}
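As a concrete illustration of this decomposition, the following minimal Python sketch uses the PyWavelets package (\texttt{pywt}); the random $32\times 32$ array is only a stand-in for a gray-scale input image.
\begin{verbatim}
import numpy as np
import pywt

img = np.random.rand(32, 32)      # stand-in for a 32x32 gray-scale image

# One level of the 2D DWT: approximation + (horizontal, vertical, diagonal)
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
print(cA.shape)                   # (16, 16): each level halves the resolution

# The approximation can be decomposed again for second-level coefficients
cA2, (cH2, cV2, cD2) = pywt.dwt2(cA, "haar")
\end{verbatim}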
\subsection{Mathematical background for modulus maxima edge detection}
\label{math_background}
The basic gradient-based edge detection process analyzes image points through their first or second order derivatives to detect the sharp variation points where edges are localized. Extrema of the first derivative and zero crossings of the second derivative correspond to sharp variation points where edges occur. Mallat's method~\cite{cite_14} establishes a systematic relationship between the wavelet transform and edge detection, and it is the base mechanism of our proposed modulus maxima method. It shows that the wavelet transform at each image point corresponds to the derivative at a given scale. This can be proven by relating the wavelet to a differentiable smoothing function whose integral is equal to $1$; one can easily choose a Gaussian as the differentiable smoothing function. Since this study is interested in image data, computations are performed two-dimensionally along the image axes $x$ and $y$. We define $\psi^x(x,y)$ and $\psi^y(x,y)$ as the first partial derivatives of the differentiable smoothing function $\theta(x,y)$, whose integral over $x$ and $y$ is equal to $1$ and which converges to $0$ at infinity. Considering multiple scales, the scaling factor $s$ is added as the scale dilation.
\begin{equation}\label{eq.1}
\psi_{s}^x(x,y)=\frac{\partial \theta(x,y)}{\partial x}\hspace{0.5cm}\mathrm{and}\hspace{0.5cm}\psi_{s}^y(x,y)=\frac{\partial \theta(x,y)}{\partial y}.
\end{equation}
The wavelet transform is computed by convolving an image $f(x,y)$ with a dilated wavelet $\psi_{s}(x,y)$ and is given by,
\begin{equation}\label{eq.2}
W_{s}f(x,y)=f*\psi_{s}(x,y).
\end{equation}
As we are processing discrete images, the discrete wavelet transform is considered. Therefore, the dilation can be restricted from a continuous scaling factor to the dyadic sequence $(2^{j})_{j\in\mathbb{Z}}$, which leads to the $2D$ dyadic wavelet transform. The $2D$ smoothing function is $\theta_{2^{j}}(x,y)$; the dilated wavelets and the wavelet transform of the image function $f(x,y)$ are then given by
\begin{equation}\label{eq.3}
\psi_{2^{j}}^x(x,y)=\frac{\partial {\theta_{2^{j}}}(x,y)}{\partial x}\hspace{0.5cm}\mathrm{and}\hspace{0.5cm}\psi_{2^{j}}^y(x,y)=\frac{\partial {\theta_{2^{j}}}(x,y)}{\partial y}.
\end{equation}
\begin{equation}\label{eq.4}
W_{2^{j}}^xf(x,y)=f*\psi_{2^{j}}^x(x,y)\hspace{0.5cm}\mathrm{and}\hspace{0.5cm}W_{2^{j}}^yf(x,y)=f*\psi_{2^{j}}^y(x,y).
\end{equation}
Then the gradient vector is given by,
\begin{equation}\label{eq.5}
\left(\begin{array}{c}{W_{2^{j}}^xf(x,y)}\\ {W_{2^{j}}^yf(x,y)}\end{array}\right)=2^{j}\left(\begin{array}{c}{\frac{\partial }{\partial x}(f*{\theta_{2^{j}}})(x,y)}\\ {\frac{\partial }{\partial y}(f*{\theta_{2^{j}}})(x,y)}\end{array}\right)=2^{j}\nabla(f*\theta_{2^{j}})(x,y).
\end{equation}
Local minima or maxima of the wavelet transform (the first-order derivative) correspond to variations of the pixel intensities. The absolute value of the first derivative attains either a maximum or a minimum, where local maxima correspond to sharp variations and local minima correspond to gradual variations. Both the modulus and the direction of the gradient can be computed at each image point:
\begin{equation}\label{eq.6}
M_{2^{j}}f(x,y)=\sqrt{|W_{2^{j}}^x(x,y)|^2+|W_{2^{j}}^y(x,y)|^2}.
\end{equation}
Direction of the gradient with the horizontal axis is given by,
\begin{equation}\label{eq.7}
A_{2^{j}}f(x,y)=\arctan\left(\frac{W_{2^{j}}^yf(x,y)}{W_{2^{j}}^xf(x,y)}\right).
\end{equation}
The locations of sharp variation points are given by the local maxima of $M_{2^{j}}f(x,y)$ and their directions are given by $A_{2^{j}}f(x,y)$. The modulus has a local maximum in the direction of the gradient at the locations where sharp intensity variations exist. In order to find these local maxima of the modulus at each image point, the modulus of the gradient is compared with its local neighborhood. The positions of the local modulus maxima correspond to the image edges, along with their directions. Fig. \ref{fig.2} graphically illustrates the outputs of the steps of the modulus maxima process.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=\linewidth]{images/5.jpg}
\end{center}
\caption{Graphical representation of modulus maxima process (sample image from MNIST dataset). (a): Original image; (b): horizontal detail coefficients obtained from wavelet transform; (c): vertical detail coefficients obtained from wavelet transform; (d): Edge representation development from local modulus maxima.}
\label{fig.2}
\end{figure*}
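In a discrete implementation, the quantities in (\ref{eq.6}) and (\ref{eq.7}) can be approximated from the first-level detail coefficients of the discrete wavelet transform. The sketch below assumes that the horizontal and vertical detail coefficients of a Haar decomposition stand in for $W_{2^{j}}^xf$ and $W_{2^{j}}^yf$, and uses \texttt{arctan2} instead of $\arctan$ to keep the quadrant of the direction.
\begin{verbatim}
import numpy as np
import pywt

def wavelet_modulus_angle(img, wavelet="haar"):
    # First-level detail coefficients; cH/cV play the role of W^x/W^y here
    _, (cH, cV, _) = pywt.dwt2(img.astype(float), wavelet)
    modulus = np.sqrt(cH ** 2 + cV ** 2)   # Eq. (6)
    angle = np.arctan2(cV, cH)             # Eq. (7), quadrant-aware
    return modulus, angle
\end{verbatim}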
\section{Methodology}
\label{methodology}
This study proposes two methods to develop edge feature enhanced input images for CNNs. First, a naive edge enhancement method is proposed; second, the modulus maxima method is proposed, based on the process explained in section \ref{math_background}.
\subsection{Naive method}
\label{naive_method}
The naive method is a primary mechanism to enhance the edge features of input images to CNNs using the wavelet transform. As illustrated in Fig. \ref{fig.3}, this process starts by applying the discrete wavelet transform to decompose an input into a series of wavelet coefficients. The coarsest approximation coefficient and several detail coefficients are generated, depending on the number of decomposition levels, using the `Haar' wavelet as the basis wavelet function. The Haar wavelet~\cite{cite_23} is the simplest wavelet basis function; it has a square shape and is commonly used in image processing applications. Secondly, the resulting coarsest approximation coefficient is discarded in order to remove the lowest frequency details. Then the input image is reconstructed by the inverse wavelet transform using the same wavelet basis function and the remaining detail coefficients. The reconstructed image is fed as the input to the CNN. The fed input is a basic level edge enhanced representation of the original image.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=\linewidth]{images/modulus_maxima_method.jpg}
\end{center}
\caption{Naive edge enhancement method.}
\label{fig.3}
\end{figure*}
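A minimal sketch of the naive method with PyWavelets could look as follows; the number of decomposition levels is a free parameter, and the cropping only guards against the one-pixel padding that can occur for odd image sizes.
\begin{verbatim}
import numpy as np
import pywt

def naive_edge_enhance(img, wavelet="haar", level=2):
    # Multi-level decomposition: [cA_level, details_level, ..., details_1]
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    # Discard the coarsest approximation (lowest-frequency content)
    coeffs[0] = np.zeros_like(coeffs[0])
    # Reconstruct from the remaining detail coefficients only
    rec = pywt.waverec2(coeffs, wavelet)
    return rec[:img.shape[0], :img.shape[1]]
\end{verbatim}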
\subsection{Modulus maxima method}
\label{modulus_maxima_method}
The modulus maxima methodology from section \ref{math_background} is the base mechanism for the second proposed method. It is able to develop edge feature enhanced representations of the original input images. The process is initiated by Gaussian smoothing of the input image for noise reduction. The wavelet transform of the smoothed input image is then performed using the `Haar' wavelet. Since the proposed method is interested in preserving the most salient edge information, only the first level of the wavelet transform is performed. While the discrete wavelet transform decomposes the input image into four different coefficients, as explained in section \ref{dwt}, only the detail coefficients containing high-frequency information, namely the horizontal, vertical, and diagonal coefficients, are taken into account from this point onwards. The modulus of the detail coefficients is then calculated at each image point according to (\ref{eq.6}) and the directions are calculated using (\ref{eq.7}). The edge representations are subsequently developed by finding the local maxima of the modulus along the gradient directions. This is done by applying non-maximal suppression over the local neighborhoods, as explained in Alg. \ref{alg.1}. As a post-processing step, a proper thresholding is applied to improve the developed edge map. The final output of the process is obtained by reconstructing the image by the inverse wavelet transform from the built edge representation and the approximation coefficient remaining from the wavelet transform of the original input. Fig. \ref{fig.4} graphically illustrates the described process. The output of the reconstruction becomes the input to the CNN for feature learning and classification.
\begin{algorithm}[h]
\KwIn{Modulus $(M)$ and angle $(A)$ of the gradient vector computed by (\ref{eq.6}) and (\ref{eq.7}) respectively}
\KwOut{Local maxima $(LM)$ of the modulus}
\For(\tcp*[f]{$i,j$ are image indices}){$i$}{
\For{$j$}{
\If{$A_{i,j}$ is in $\left[-\frac{\pi}{8},\frac{\pi}{8}\right]$ or $\left[\pi-\frac{\pi}{8}, \pi+\frac{\pi}{8}\right]$}{\If{$M_{i,j}>M_{i+1,j}$ and $M_{i,j}>M_{i-1,j}$}{$LM_{i,j}\gets M_{i,j}$}}
\ElseIf{$A_{i,j}$ is in $\left[\frac{\pi}{2}-\frac{\pi}{8},\frac{\pi}{2}+\frac{\pi}{8}\right]$ or $\left[\frac{3\pi}{2}-\frac{\pi}{8}, \frac{3\pi}{2}+\frac{\pi}{8}\right]$}{\If{$M_{i,j}>M_{i,j+1}$ and $M_{i,j}>M_{i,j-1}$}{$LM_{i,j}\gets M_{i,j}$}}
\ElseIf{$A_{i,j}$ is in $\left[\frac{\pi}{4}-\frac{\pi}{8},\frac{\pi}{4}+\frac{\pi}{8}\right]$ or $\left[\frac{5\pi}{4}-\frac{\pi}{8}, \frac{5\pi}{4}+\frac{\pi}{8}\right]$}{\If{$M_{i,j}>M_{i+1,j+1}$ and $M_{i,j}>M_{i-1,j-1}$}{$LM_{i,j}\gets M_{i,j}$}}
\Else{\If{$M_{i,j}>M_{i+1,j-1}$ and $M_{i,j}>M_{i-1,j+1}$}{$LM_{i,j}\gets M_{i,j}$}}
}
}
\caption{Non-maximal suppression.}
\label{alg.1}
\end{algorithm}
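The non-maximal suppression of Alg. \ref{alg.1} can be transcribed directly into Python; the sketch below is a plain loop-based version that follows the neighbor indexing of the algorithm as written, folds the direction into $[0,\pi)$ and leaves the one-pixel border untouched.
\begin{verbatim}
import numpy as np

def nonmax_suppress(modulus, angle):
    # Keep only points whose modulus is maximal along the gradient direction.
    h, w = modulus.shape
    out = np.zeros_like(modulus)
    ang = np.mod(angle, np.pi)           # direction is defined modulo pi
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < np.pi / 8 or a >= 7 * np.pi / 8:   # direction ~ 0 (or pi)
                n1, n2 = modulus[i + 1, j], modulus[i - 1, j]
            elif a < 3 * np.pi / 8:                   # direction ~ pi/4
                n1, n2 = modulus[i + 1, j + 1], modulus[i - 1, j - 1]
            elif a < 5 * np.pi / 8:                   # direction ~ pi/2
                n1, n2 = modulus[i, j + 1], modulus[i, j - 1]
            else:                                     # direction ~ 3*pi/4
                n1, n2 = modulus[i + 1, j - 1], modulus[i - 1, j + 1]
            if modulus[i, j] > n1 and modulus[i, j] > n2:
                out[i, j] = modulus[i, j]
    return out
\end{verbatim}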
\begin{figure*}[h]
\begin{center}
\includegraphics[scale=0.38]{images/modulus_maxima.jpg}
\end{center}
\caption{Graphical illustration of modulus maxima process.}
\label{fig.4}
\end{figure*}
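Putting the pieces together, the whole preprocessing step of Fig. \ref{fig.4} could be sketched as below, building on \texttt{nonmax\_suppress} from the previous listing. The Gaussian width \texttt{sigma}, the relative threshold \texttt{thresh}, and the choice to feed the suppressed modulus into all three detail slots of the inverse transform are our own assumptions; the paper leaves these implementation details open.
\begin{verbatim}
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def modulus_maxima_enhance(img, wavelet="haar", sigma=1.0, thresh=0.1):
    img = img.astype(float)
    # 1. Gaussian smoothing for noise reduction
    smoothed = gaussian_filter(img, sigma)
    # 2. One level of the DWT of the smoothed image
    _, (cH, cV, cD) = pywt.dwt2(smoothed, wavelet)
    # 3. Modulus and direction of the detail coefficients, Eqs. (6)-(7)
    modulus = np.sqrt(cH ** 2 + cV ** 2)
    angle = np.arctan2(cV, cH)
    # 4. Non-maximal suppression (Alg. 1) and a simple relative threshold
    edges = nonmax_suppress(modulus, angle)
    edges[edges < thresh * edges.max()] = 0.0
    # 5. Reconstruct from the approximation of the *original* input and the
    #    edge representation (assumption: same edge map in all detail slots)
    cA, _ = pywt.dwt2(img, wavelet)
    rec = pywt.idwt2((cA, (edges, edges, edges)), wavelet)
    return rec[:img.shape[0], :img.shape[1]]
\end{verbatim}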
\section{Experimental Setup}
\label{experimental_setup}
In order to evaluate how the proposed methods perform, both methods are implemented in CNNs, which are then trained and tested from scratch. The implementation of the naive edge enhancement method is labeled NEE-\textit{CNN} and the modulus maxima method is labeled MMEE-\textit{CNN} for ease of explanation. The annotation \textit{CNN} denotes the base CNN architecture and changes depending on the baseline CNN and the compared previous work. For example, if the base network is AlexNet, the proposed methods are labeled NEE-AlexNet and MMEE-AlexNet. For each method, a new data processing layer is developed by implementing the proposed method and is appended on top of the CNN architecture, as displayed in Fig. \ref{fig.5}. This layer develops edge feature enhanced input images and feeds them to the first convolution layer of the CNN. The complete system is then trained and tested on several different datasets.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=\linewidth]{images/CNN.jpg}
\end{center}
\caption{Overview of the CNN architecture implementation.}
\label{fig.5}
\end{figure*}
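The preprocessing layer itself is framework specific; as an illustration only, a framework-agnostic NumPy helper that applies either enhancement function channel-wise to a batch before it enters the first convolution layer could look like this (the function name and the $N\times H\times W\times C$ batch layout are our own choices).
\begin{verbatim}
import numpy as np

def preprocess_batch(batch, enhance):
    # batch: N x H x W x C array; enhance: e.g. naive_edge_enhance or
    # modulus_maxima_enhance, applied independently to every channel
    out = np.empty(batch.shape, dtype=float)
    for n in range(batch.shape[0]):
        for c in range(batch.shape[3]):
            out[n, :, :, c] = enhance(batch[n, :, :, c])
    return out
\end{verbatim}
For example, \texttt{preprocess\_batch(x\_train, modulus\_maxima\_enhance)} would produce the edge enhanced batch that is then fed to the network.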
Firstly, we employed AlexNet~\cite{cite_11} as the base CNN architecture to implement the proposed methods. AlexNet is composed of 7 layers, including 5 convolutional layers and 2 fully connected layers. In the AlexNet architecture, each convolution layer is followed by a ReLU activation function to introduce non-linearity. Furthermore, a batch normalization~\cite{cite_36} applied version of AlexNet is also employed as an additional baseline. We used dropout~\cite{cite_11} with a keep probability of 0.8 for the fully connected layers while training to prevent over-fitting. The developed preprocessing layer is appended just before the first convolution layer of each network. This layer processes raw input images to feed edge enhanced images into the network for further feature learning. We trained four architectures on three different datasets and the obtained results are discussed in section \ref{results}.
To investigate how the developed methods perform in classification tasks, all the networks are trained separately on three datasets and tested on the testing portion of each dataset. Common and publicly available datasets are selected: MNIST~\cite{cite_12}, SVHN~\cite{cite_22}, and CIFAR-10~\cite{cite_29}. MNIST is one of the most common preliminary datasets in machine learning practice. It is composed of 28 by 28 gray-scale handwritten digit images of the digits 0--9, with 60,000 training images and 10,000 testing images. The SVHN dataset consists of real-world street view house number images extracted from Google Maps street view. It contains 65,931 training images and 26,032 testing images. The CIFAR-10 dataset is composed of 32 by 32 images with 50,000 training examples and 10,000 testing examples in 10 classes of real-world objects such as car, airplane, dog, and horse. All datasets are used without any data augmentation.
\section{Results}
\label{results}
We trained both our implementations, the classic AlexNet, and AlexNet with BN from scratch on each dataset. Furthermore, we implemented and tested our methods on network models that have been used in previous work~\cite{cite_32,cite_24}. As an overview, the obtained results show that both proposed methods outperform the baselines and the previous work on all datasets.
Table \ref{results-table-cifar-10-alexnet} shows the classification results on CIFAR-10 for the network models NEE-AlexNet, MMEE-AlexNet, AlexNet and AlexNet-BN (the batch normalization applied version). Moreover, we compared the proposed methods with the previous work~\cite{cite_24} and the results are displayed in Table \ref{results-table-cifar-10-previous_work}. Note that this experiment employed the same network as the compared work for the CIFAR-10 dataset. As shown in Table \ref{results-table-cifar-10-alexnet}, our networks significantly outperform both AlexNet baselines. The results exceed the accuracy of the most accurate baseline, AlexNet-BN, by 0.63\% and 0.47\% for the modulus maxima method and the naive method respectively. The comparison with~\cite{cite_24} also exhibits nearly a 1.5\% accuracy gain for the modulus maxima method and a 1.2\% gain for the naive method over the best classification accuracy obtained there. Between our two proposed methods, the modulus maxima method shows the most success.
\setlength{\tabcolsep}{4pt}
\begin{table}[h]
\centering
\caption{Test accuracies on CIFAR-10 with AlexNet as baseline architecture.}
\label{results-table-cifar-10-alexnet}
\begin{tabular}{lc}
\hline\noalign{\smallskip}
Network & Accuracy \% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{MMEE-AlexNet (Proposed)} & \textbf{89.63} \\
\textbf{NEE-AlexNet (Proposed)} & \textbf{89.47} \\
AlexNet & 87 \\
AlexNet-BN & 89 \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
\begin{table}[h]
\centering
\caption{Results comparison with~\cite{cite_24} and proposed methods for CIFAR-10.}
\label{results-table-cifar-10-previous_work}
\begin{tabular}{lc}
\hline\noalign{\smallskip}
Network & Accuracy \% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
CNN & 77.53 \\
CNN-WAV 2 & 76.42 \\
CNN-WAV 4 & 85.67 \\
\textbf{MMEE-CNN (Proposed)} & \textbf{87.21} \\
\textbf{NEE-CNN (Proposed)} & \textbf{86.84 } \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
As we investigated for CIFAR-10, we also employed AlexNet as the baseline network architecture for MNIST. The obtained results are displayed in Table \ref{results-table-MNIST-alexnet}. Even though the proposed methods exceed the baselines, the gain is not as significant as for CIFAR-10: only 0.07\% and 0.04\% accuracy gains over the AlexNet-BN baseline for MMEE-AlexNet and NEE-AlexNet respectively. This is because of the simplicity of MNIST, on which AlexNet easily reaches high accuracies without much effort. However, the results still show that the proposed methods further improve the classification accuracies. Table \ref{results-table-MNIST-previous-work} compares the results from our methods with~\cite{cite_32}. This previous work has used different wavelet basis functions and developed a wavelet network to extract features. Nevertheless, our methods outperform this work by narrow margins.
\setlength{\tabcolsep}{4pt}
\begin{table}[h]
\centering
\caption{Test accuracies on MNIST with AlexNet as baseline architecture.}
\label{results-table-MNIST-alexnet}
\begin{tabular}{lc}
\hline\noalign{\smallskip}
Network & Accuracy \% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{MMEE-AlexNet (Proposed)} & \textbf{99.29} \\
\textbf{NEE-AlexNet (Proposed)} & \textbf{99.26} \\
AlexNet & 99.17 \\
AlexNet-BN & 99.22 \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
\setlength{\tabcolsep}{4pt}
\begin{table}[ht]
\centering
\caption{Results comparison with~\cite{cite_32} and proposed methods for MNIST.}
\label{results-table-MNIST-previous-work}
\begin{tabular}{lc}
\hline\noalign{\smallskip}
Network & Accuracy \% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{Modulus maxima edge enhancement approach (Proposed)} & \textbf{99.3} \\
\textbf{Naive edge enhancement approach (Proposed)} & \textbf{99.26} \\
Wavelet network approach (Mexican hat wavelet) & 94.2 \\
Wavelet network approach (Morlet wavelet) & 99.21 \\
Wavelet network approach (rasp wavelet) & 99.2 \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
For further evaluation, experimental results on the SVHN dataset are shown in Table \ref{results-table-SVHN-alexnet}. Our methods again outperform the baseline CNNs, with accuracy gains of 0.45\% and 0.65\% for the modulus maxima method and the naive method respectively. The magnitude of the accuracy gain indicates that the proposed methods successfully assist the feature learning process by providing edge-enhanced representations.
\setlength{\tabcolsep}{4pt}
\begin{table}[!b]
\centering
\caption{Test accuracies on SVHN with AlexNet as baseline architecture.}
\label{results-table-SVHN-alexnet}
\begin{tabular}{lc}
\hline\noalign{\smallskip}
Network & Accuracy \% \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{MMEE-AlexNet (Proposed)} & \textbf{94.43} \\
\textbf{NEE-AlexNet (Proposed)} & \textbf{94.16} \\
AlexNet & 93.24 \\
AlexNet-BN & 93.81 \\
\hline
\end{tabular}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Identifying visual objects in images mostly depends on their shapes, in other words on the combinations of edges of the visual objects. During classification, the learning of the first convolution layers mostly relies on these edge features. Hence the improved feature representations provided by the proposed methods effectively assist the learning and lead to better classification accuracy. The obtained results show that enhancing edge features can significantly affect the classification accuracy and hence improve the learning process. They also confirm that the modulus maxima method develops richer edge-enhanced representations and performs better than the naive method.
\section{Conclusion}
\label{conclusion}
We have proposed and empirically evaluated two wavelet-based edge enhancement mechanisms to pre-process the input images of convolutional neural networks. The aim of this preprocessing is to improve and assist the learning of the network by enhancing the edge features. The first method discards the coarsest approximation coefficients generated by the discrete wavelet transform of the original input image and then reconstructs the image by the inverse wavelet transform from the remaining detail coefficients. The second, more elaborate method detects edges by finding local maxima of the modulus of the wavelet coefficients, as discussed in section \ref{modulus_maxima_method}. The results of the experiments show that the proposed methods achieve better classification accuracies than the baselines and the previous work. Notably, the developed systems are successful in classifying images in which edges are prominent features to be learned during classification. The Haar wavelet is used as the base wavelet in both proposed methods; other wavelets available in the literature, such as Daubechies and Morlet, are also suitable for this application. The thresholding operation of the modulus maxima methodology could also be implemented as a learnable process alongside the CNN so that it produces better outputs~\cite{cite_39}.\\
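For illustration, the naive edge enhancement procedure recalled above can be sketched in a few lines of Python with the PyWavelets package. This is only a minimal sketch of the idea (a single-channel image and a two-level Haar decomposition are assumed); it is not the exact implementation used in our experiments.
\begin{verbatim}
import numpy as np
import pywt

def naive_edge_enhance(image, wavelet="haar", level=2):
    # Multi-level 2D discrete wavelet transform of the input image.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Discard the coarsest approximation coefficients (set them to zero)
    # so that only the detail (high-frequency, edge) information is kept.
    coeffs[0] = np.zeros_like(coeffs[0])
    # Reconstruct the edge-enhanced image with the inverse transform.
    return pywt.waverec2(coeffs, wavelet)
\end{verbatim}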
\bibliographystyle{splncs04}
|
1,116,691,499,894 | arxiv | \section{Introduction}
Since the discovery of protonic conductivity in aliovalent-doped SrCeO$_3$~\cite{Iwahara1981,Iwahara1983}, protonic conduction in perovskite-type oxides ABO$_3$ has been the subject of numerous studies, experimental as well as computational~\cite{Cherry1995,Iwahara1996,Kreuer1999,Kreuer2009,Norby2009}. The high protonic conductivity in perovskite oxides opens the way for a wide range of technological applications such as Protonic Ceramic Fuel Cells (PCFCs), hydrogen separators, etc.
However, if the diffusion of protons has been extensively explored by \textit{ab initio} calculations in cubic perovskites such as barium zirconate~\cite{Bjorketun2007,Sundell2007}, only very few works have studied this phenomenon in orthorhombic perovskites~\cite{Bilic2007}, although excellent proton conductors, such as SrCeO$_3$ or BaCeO$_3$, can be found among such systems.
Proton conductors are usually obtained by replacing some cations of a host oxide compound by cations with lower valence. In perovskite oxides having a tetravalent element on the B site (Ti, Zr, Ce, Sn), this can be done by inserting on this site a trivalent element. Such substitution creates charge-compensating oxygen vacancies that make the compound reactive with respect to water dissociation if it is put in contact with humid atmosphere. Such hydration reaction is commonly written, using Kr\"oger-Vink notations, as
\begin{equation}\label{hydration}
H_2O + V_O^{\bullet \bullet} + O_O^X \rightarrow 2 OH^{\bullet}_O.
\end{equation}
It generates protonic defects $OH^{\bullet}_O$, localized approximately along [100]-type directions inside the interoctahedral space of the perovskite network, which can move from one oxygen site to another by simple thermal activation. Three possible motions of the proton in the perovskite network are usually distinguished:
(i) the reorientation: the OH bond does not break and simply turns by $\approx$ 90$^{\circ}$ around the B-O-B axis containing the oxygen atom.
(ii) the intra-octahedral hopping: the proton leaves its oxygen site to move to another oxygen site of the same octahedron.
(iii) the inter-octahedral hopping: the proton leaves its oxygen site to move to another oxygen site that does not belong to the same octahedron.
In a previous work~\cite{Hermet2012}, we studied by density-functional theory calculations the thermodynamics of hydration and oxidation of Gd-doped barium cerate BaCe$_{1-{\delta}}$Gd$_{\delta}$O$_{3-\frac{{\delta}}{2}}$ (BCGO). In particular, we showed that hydration is an exothermic process and accurately determined the energy landscape of the proton near and far from the Gd dopant. We showed that this energy landscape can be well approximated by a surface with 16 kinds of local minima (8 in the close vicinity of the dopant, and 8 further away). This complexity is the consequence of the highly distorted geometry of the host BaCeO$_3$, which adopts the $Pnma$ space group in its ground state. Consequently, proton migration throughout such an energy surface involves many different energy barriers that need to be explored in order to gain insight into proton conduction at the macroscopic scale. Previous works have studied proton migration in BaCeO$_3$, but only in the cubic phase~\cite{Munch1996,Kreuer1998,Munch2000}. Therefore, in this work, we present an exhaustive study of the Minimum Energy Paths associated with the possible motions of the proton in orthorhombic BCGO, and the values of their energy barriers.
\section{Computational details\label{details}}
\subsection{Method}
We have performed density functional theory (DFT) calculations using the plane-wave code ABINIT~\cite{Gonze2009,Bottin2008}.
The Generalized Gradient Approximation (GGA-PBE~\cite{Perdew1996}) was employed to describe electronic exchange and correlation. The calculations were carried out in the framework of the projector augmented wave (PAW) approach~\cite{Blochl1994,Torrent2008}. The same supercell as that of Ref.~\onlinecite{Hermet2012} was used: it consists of
80 atoms and has an orthorhombic symmetry ($Pnma$ space group). The first Brillouin zone of this supercell was sampled by a 2$\times$2$\times$2 \textbf{k}-point grid, and the plane-wave cutoff was set to 20~Ha. The numerical accuracy on the total energies associated with this scheme is better than 1~mHa/atom. The cut-off radii of our PAW atomic data can be found in Ref.~\onlinecite{Hermet2012}.
In order to compute Minimum Energy Paths, the first task was to identify the stable sites of the proton in BCGO, which was previously achieved in Ref.~\onlinecite{Hermet2012}. This was performed by substituting one Ce by one Gd in the 80-atom supercell and introducing one hydrogen atom, which was placed in its different possible sites, close to the Gd dopant and far from it. In each configuration, the atomic positions were optimized until all the cartesian components of atomic forces were below 1$\times$10$^{-4}$~Ha/Bohr ($\approx$ 0.005~eV/{\AA}).
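As a quick consistency check of the force criterion quoted above, the conversion from atomic units to eV/\AA{} can be carried out with the usual Hartree and Bohr conversion factors, as in the short Python snippet below (given only for illustration):
\begin{verbatim}
HARTREE_TO_EV = 27.211386     # 1 Ha in eV
BOHR_TO_ANGSTROM = 0.529177   # 1 Bohr in angstroms

force_tol_au = 1.0e-4         # Ha/Bohr, criterion used for the relaxations
force_tol_ev_per_ang = force_tol_au * HARTREE_TO_EV / BOHR_TO_ANGSTROM
print(f"{force_tol_ev_per_ang:.4f} eV/Angstrom")  # ~0.0051 eV/Angstrom
\end{verbatim}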
The possible energy barriers between pairs of stable protonic sites have then been computed using the so-called simplified string method~\cite{E2002,E2007}.
The simplified string method is an iterative algorithm that allows one to find the Minimum Energy Path (MEP) between two stable configurations. It consists in discretizing the path into equidistant configurations, called ``images''. At each iteration, a two-step procedure is applied: first, each image is moved along the direction given by the atomic forces (evolution step), then the images are redistributed along the path in order to be kept equidistant (reparametrization step).
To determine the number of iterations of the string method, we used a convergence criterion related to the energy of the images: the optimization of the MEP is stopped when the difference in total energy (averaged over all the images) between one iteration and the previous one is lower than 1$\times$10$^{-5}$~Ha. In such an algorithm, the result should be carefully converged with respect to the number of images along the path, which forced us to use up to 19 images in the case of some intra-octahedral hopping processes. Once the MEP has been correctly converged, the maximum energy along the path provides the transition state, and thus the energy barrier of the corresponding process (hopping or reorientation). Finally, we point out that all the atoms of the supercell were allowed to move during the computation of the MEP, thus providing energy barriers in a ``fully-relaxed'' system.
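To make the two-step procedure explicit, the following minimal Python sketch applies the evolution and reparametrization steps to a toy two-dimensional double-well potential. It is meant only to illustrate the logic of the simplified string method; the potential, step size and number of images are arbitrary choices and do not correspond to the actual implementation used in ABINIT.
\begin{verbatim}
import numpy as np

def grad_v(x, y):
    # Gradient of a toy double-well potential V = (x^2 - 1)^2 + y^2.
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def simplified_string(n_images=11, n_steps=2000, dt=5.0e-3):
    # Initial string: straight line joining the two minima (-1,0) and (1,0).
    string = np.linspace([-1.0, 0.1], [1.0, 0.1], n_images)
    for _ in range(n_steps):
        # Evolution step: move every image along the force (-grad V).
        for i in range(n_images):
            string[i] -= dt * grad_v(*string[i])
        # Reparametrization step: redistribute the images so that they
        # remain equidistant along the string (linear interpolation).
        seg = np.linalg.norm(np.diff(string, axis=0), axis=1)
        arc = np.concatenate(([0.0], np.cumsum(seg)))
        uniform = np.linspace(0.0, arc[-1], n_images)
        string = np.column_stack(
            [np.interp(uniform, arc, string[:, k]) for k in range(2)])
    return string  # the image of highest energy approximates the saddle point
\end{verbatim}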
For the sake of numerical efficiency, we have used the three traditional levels of parallelization present in the ABINIT code (\textbf{k}-points, bands, plane waves) together with a fourth level over the images used to discretize the MEP. This fourth level has a quasi-linear scalability and, since the number of images used to discretize the path can be as large as 19, thousands of CPU cores can be used to compute and relax the MEP with high efficiency. Typical jobs were run on 3000~CPU cores using these four parallelization levels, allowing us to take maximal advantage of parallel supercomputers.
\subsection{Approximations to the computation of energy barriers}
Additional remarks have to be made about the limitations of our approach and the approximations used to compute the energy barriers.
First of all, the string method, like the Nudged Elastic Band method, allows one to compute the Minimum Energy Paths between two stable configurations and thus to obtain ``fully-relaxed'' (static) barriers, as opposed to ``dynamical'' barriers that would be obtained, for instance, by counting the occurrences of each event within a molecular dynamics run and fitting the rates by an Arrhenius law. Static barriers neglect some collective effects and the so-called recrossing processes. In principle, they make sense only if the whole structure is able to relax instantaneously when the proton moves from one stable position to another. However, the time scale associated with the hydrogen motion is much smaller than that of the deformation of the surrounding structure, which involves much softer phonon modes. The motion of protons in an unrelaxed environment would naturally lead to higher barriers than those calculated from fully-relaxed DFT calculations. Nevertheless, as shown by Li and Wahnstr{\"o}m~\cite{Li1992} in metallic palladium, the jump of the proton has to be considered the other way around. Owing to the vibrations of the surrounding atoms and to the high vibration frequency of hydrogen, protons commonly jump at a moment when the surrounding atoms are in a geometrical configuration close to the calculated relaxed one. That is why the calculated barriers can turn out to be very close to those actually observed. Further work would nevertheless be necessary to verify that the proton jump, for instance during {\it ab initio} Molecular Dynamics simulations, occurs for a geometry of the surrounding atoms close to that calculated in the fully relaxed DFT static scheme.
Second, the present barriers do not include quantum contributions from zero-point motions. They are valid in the limit where nuclei can be considered as classical particles. While this approximation is correct for heavy atoms in the temperature range relevant to PCFCs, it is not so obvious for the proton~\cite{Sundell2007}. Indeed, proton tunnelling might occur and thus significantly lower the barrier height, especially in the hopping case~\cite{Zhang2008}. This approximation therefore leads to overestimated barriers.
Last, the use of the Generalized Gradient Approximation tends to underestimate the activation energy for proton transfer in hydrogen-bonded systems~\cite{Bjorketun2007}. This underestimation is due to an over-stabilization of structures in which a hydrogen atom is equally shared between two electronegative atoms~\cite{Barone1996}.
Consequently, the barriers presented in this work purely reflect the GGA potential energy surface of the proton in its host compound. They are static barriers, free from collective, dynamical and quantum effects.
\section{Review of preliminary results: structure of BaCeO$_3$ and protonic sites}
\setcounter{subsubsection}{0}
\subsection{BaCeO$_{3}$ and BCGO structure}
Like many perovskites~\cite{Goudochnikov2007}, BaCeO$_3$ has an orthorhombic structure ($Pnma$ space group~\cite{Knight2001}) at room temperature (RT). At high temperature, it undergoes three structural phase transitions, the first one at $\approx$ 550 K towards an $Imma$ structure, and the second one at $\approx$ 670 K towards a rhombohedral $R\bar{3}c$ structure. At very high temperature ($\approx$ 1170 K), it eventually evolves towards the parent $Pm\bar{3}m$ cubic structure, that of the ideal perovskite. The presence of dopants randomly distributed throughout the matrix may change the transition temperatures.
However, Melekh {\it et al.}~\cite{Melekh1997} found that the first transition in 10\%-Gd-doped BaCeO$_3$ occurs around 480-540~K, close to the value of 533~K they found for pure BaCeO$_3$. At RT, Gd-doped BaCeO$_3$ is therefore orthorhombic.
Our calculations provide optimized configurations and Minimum Energy Paths. These computations are thus relevant when performed in combination with the ground state structure of BaCeO$_3$, {\it i.e.} the orthorhombic $Pnma$ structure, which was used as a starting point in all the calculations and was globally preserved during the optimization procedures. The computed energy barriers can therefore be used to understand proton diffusion in BCGO below $\approx$ 550 K. However, from a more general point of view, the present results provide a useful microscopic insight into proton diffusion in a low-symmetry perovskite compound, typical of those used as electrolytes in Protonic Ceramic Fuel Cells (the $Pnma$ structure is common to many perovskites such as cerates, zirconates, titanates or stannates).
The structural parameters obtained for BaCeO$_{3}$ and BCGO within the present scheme can be found in Ref.~\onlinecite{Hermet2012}. They are in excellent agreement with experiments, despite a slight overestimation of the lattice constants related to the use of the GGA.
\subsection{Protonic sites in perovskites: general considerations\label{stable-site}}
As previously explained, proton conduction in an ABO$_3$ perovskite compound -- where B is a tetravalent element -- might be obtained by substituting B atoms by trivalent elements such as Gd (this creates oxygen vacancies by charge compensation) and by subsequently exposing the new compound to a humid atmosphere. The protons acting as charge carriers then appear through the dissociation of water molecules into the oxygen vacancies, according to the well-known hydration reaction (see equation~\ref{hydration}).
The precise location of the stable protonic sites in the perovskite network seems to strongly depend on the lattice parameter and distortion of the host compound.
It is commonly accepted that protons are bonded to an oxygen atom and remain in the form of hydroxyl groups located on oxygen sites. However, the orientation of the O-H bond is not that clear. On the one hand, it was proposed that it could be oriented along the BO$_6$ octahedron edge because of its dipolar moment~\cite{Kreuer1995,Kreuer2009}, leading to 8 possible sites per oxygen atom. On the other hand, previous experimental~\cite{Hempelmann1998} and ab initio~\cite{Glockner1999,Davies1999,Tauer2011} studies have found only four sites per oxygen atom, oriented along the pseudo-cubic directions.
In fact, the stable protonic sites indeed seem to be
(i) along or close to the octahedron edge for perovskites with a relatively small lattice constant $a_0$, such as SrTiO$_3$~\cite{Cherry1995,Matsushita1999} or LaMnO$_3$~\cite{Cherry1995} ($a_0$=3.91~\AA), leading to the existence of 8 protonic sites per oxygen atom,
(ii) along the pseudo-cubic directions for perovskites with a large lattice constant, such as SrZrO$_3$~\cite{Davies1999} or BaCeO$_3$~\cite{Glockner1999,Tauer2011,Hermet2012} (pseudo-cubic lattice constant $a_0$=4.14 and 4.41~\AA{} respectively), leading to the existence of 4 protonic sites per oxygen atom.
This trend can easily be explained: as the lattice constant decreases, the nearest oxygen gets closer and closer to the proton, attracting it sufficiently (through a hydrogen bond) to bend the O-H bond towards the octahedron edge.
\subsection{Protonic sites in Gd-doped BaCeO$_3$}
In our previous calculations on BCGO~\cite{Hermet2012}, which has a large pseudo-cubic lattice constant of 4.41~\AA, we found indeed four stable protonic sites per oxygen atom. Considering that the $Pnma$ structure contains two inequivalent oxygen atoms O1 and O2, this leads to the existence of 8 inequivalent stable positions for the proton, if we ignore the symmetry-breaking caused by the presence of dopants. These positions have been labeled 1a, 1b, 1c and 1d for those attached to O1 (apical oxygen), and 2a, 2b, 2c and 2d for those attached to O2 (equatorial oxygen), see Fig.~\ref{8protons}.
\begin{figure}[h]
\includegraphics[scale=0.32]{Figure1.png}
\caption{The 8 stable positions for the proton around the Gd dopant.\label{8protons}}
\end{figure}
However, when one Ce atom is replaced by a Gd dopant, both the translational symmetry and the symmetry between the four equatorial oxygens O2 of the first coordination shell of this specific B site are broken. More precisely, the presence of Gd splits the four O2 into two pairs of symmetry-equivalent oxygen atoms (O2 and O2'). The four inequivalent protonic sites related to O2 (2a, 2b, 2c and 2d) are thus split into 8 inequivalent sites, called 2a, 2b, 2c, 2d, 2a', 2b', 2c' and 2d'. The first coordination shell of Gd therefore exhibits 12 kinds of inequivalent protonic sites. Beyond this shell, the symmetry-breaking is even more complex.
Nevertheless, we have shown in Ref.~\onlinecite{Hermet2012} that this complex protonic energy landscape can be very well approximated by a surface containing 16 kinds of inequivalent local minima: 8 corresponding to the 8 sites shown in Fig.~\ref{8protons} close to a Gd dopant, and 8 associated with the same sites ``far'' from the dopant, {\it i.e.} beyond its first oxygen coordination shell. Tab.~\ref{energy-proton} gives the relative energy associated with each site (taken from Ref.~\onlinecite{Hermet2012}): in the first coordination shell of Gd, only 8 sites among the 12 can be considered as non-equivalent. Beyond this shell as well, only the same 8 kinds of sites can be considered as non-equivalent, with a very good accuracy. In other words, the symmetry-breaking caused by the presence of dopants can be considered as having no significant influence on the energy landscape of the protonic defects. In order to distinguish the sites of these two families, we introduce another letter, ``n'' (for a site \textbf{near} the dopant), or ``f'' (for a site \textbf{far} from the dopant).
To summarize, the 16 kinds of stable positions are labeled by
\begin{itemize}
\item a number (1 or 2) corresponding to the oxygen type (apical and equatorial, respectively),
\item a letter (``a", ``b", ``c" or ``d") corresponding to the O-H direction (shown in figure~\ref{8protons}),
\item and another letter, ``n" for a site \textbf{near} the dopant, or ''f" for a site \textbf{far} from the dopant.
\end{itemize}
\begin{table}
\centering
\begin{tabular}{lp{2cm}lp{2cm}}
& Gd-OH-Ce & &Ce-OH-Ce \\
1an & 0.00 & 1af &0.09 \\
1bn & 0.01 & 1bf &0.08 \\
1cn & 0.11 & 1cf &0.25 \\
1dn & 0.00 & 1df &0.14 \\
2an (2a'n) & 0.17 (0.16) & 2af &0.25 \\
2bn (2b'n) & 0.05 (0.05) & 2bf &0.12 \\
2cn (2c'n) & 0.15 (0.13) & 2cf &0.29 \\
2dn (2d'n) & 0.08 (0.09) & 2df &0.23 \\
\end{tabular}
\caption{Energy (in eV) of the possible proton binding sites in BCGO relative to the most stable one (1an).}
\label{energy-proton}
\end{table}
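As an illustration of how the energies of Tab.~\ref{energy-proton} translate into relative site occupancies, the short Python snippet below evaluates Boltzmann weights for the eight site types far from the dopant at 500~K. Equal multiplicities are assumed and vibrational contributions are neglected, so the numbers are only indicative.
\begin{verbatim}
import numpy as np

# Relative energies (eV) of the site types far from Gd (Ce-OH-Ce column).
sites = {"1af": 0.09, "1bf": 0.08, "1cf": 0.25, "1df": 0.14,
         "2af": 0.25, "2bf": 0.12, "2cf": 0.29, "2df": 0.23}

kB = 8.617e-5  # Boltzmann constant (eV/K)
T = 500.0      # temperature (K)

weights = {s: np.exp(-e / (kB * T)) for s, e in sites.items()}
z = sum(weights.values())
for s in sorted(weights, key=weights.get, reverse=True):
    print(f"{s}: {weights[s] / z:.1%}")  # relative occupancy of each site type
\end{verbatim}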
In the presence of a dopant, the OH bond might slightly deviate from the pseudo-cubic direction: usually the proton is expected to bend towards the dopant due to the opposite formal charge of the corresponding defects (+1 for the protonic defect $OH_O^{\bullet}$ versus -1 for the dopant defect $Gd_{Ce}^{'}$). But it also depends on the dopant size~\cite{Davies1999,Bjorketun2007}.
In the present case, the proton has indeed a tendency to bend slightly towards the dopant, but with a deviation from the pseudo-cubic direction lower than 10\textdegree{} (see Tab.~\ref{angle-values}).
It is possible to divide the eight stable sites into two categories: either the proton is able to hop from one octahedron to another (a/b-type) or not (c/d-type). The c/d-type sites show a noticeable bending (around 5\textdegree) while the a/b-type sites are almost perfectly aligned along the pseudo-cubic direction. This absence of bending may be due to the stabilization of a/b-type sites by a hydrogen bond with the facing oxygen, which is in those cases rather close. This hydrogen bond would be dominant over the proton-dopant interaction, especially since the dopant is much further away than for a c/d-type site.
\begin{figure}
\centering
\includegraphics[scale=0.32]{Figure2.png}
\caption{Angles between the pseudo-cubic direction and the actual O-H bond for the two subcategories of protonic sites.}
\label{angle-picture}
\end{figure}
\begin{table}
\centering
\begin{tabular}{cdd}
Position & \multicolumn{1}{l}{$\theta$ near Gd} & \multicolumn{1}{l}{$\theta$ far from Gd}\\
1a & -0.1$\textdegree$ & 0.2$\textdegree$ \\
1b & -0.1$\textdegree$ & 0.2$\textdegree$ \\
1c & 5.3$\textdegree$ & 0.5$\textdegree$ \\
1d & 3.5$\textdegree$ & 0.5$\textdegree$ \\
2a & -0.5$\textdegree$ & 0.6$\textdegree$ \\
2b & 1.6$\textdegree$ & 1.2$\textdegree$ \\
2c & 5.0$\textdegree$ & 2.1$\textdegree$ \\
2d & 8.2$\textdegree$ & 4.9$\textdegree$ \\
\end{tabular}
\caption{Values of the angle described in figure~\ref{angle-picture}, for a proton near a dopant, and far from a dopant.}
\label{angle-values}
\end{table}
Note that in perovskites with smaller lattice constants, the bending is usually stronger, but also highly dopant-dependent. Bjorketun {\it et al.}~\cite{Bjorketun2007} have studied this dependence in BaZrO$_3$ and obtained bending angles ranging from 6.9\textdegree{} for Gadolinium up to 20.4\textdegree{} for Gallium. An even higher bending of around 30\textdegree{} has been found for Scandium, Yttrium or Ytterbium in SrZrO$_3$~\cite{Davies1999}.
\section{Energy barriers}
\setcounter{subsubsection}{0}
We have seen that the energy landscape of the stable protonic sites in Gd-doped BaCeO$_3$ is really complex, due to the distortions of the $Pnma$ structure and the presence of dopants. As a result, there are many different values of the energy barriers, associated with several diffusion mechanisms, even when considering the simplified energy landscape (with 16 minima) presented previously.
\subsection{The three different mechanisms: reorientation, intra-octahedral and inter-octahedral hopping}
In an ideal cubic perovskite, there are two kinds of processes for the proton motion: reorientation and transfer (or hopping)~\cite{Bjorketun2005}, to which only two different energy barriers can be associated, provided the proton is assumed to be far from any dopant. In BaZrO$_3$, the reorientation (resp. transfer) barrier is 0.14~eV (resp. 0.25~eV), while in cubic BaTiO$_3$~\cite{Gomez2005}, it is 0.19~eV (resp. 0.25~eV). In such simple systems, each proton in a stable site has four different possibilities to move: two reorientations and two intra-octahedral hoppings, the inter-octahedral hopping being considered as unlikely (because the oxygen atom facing the OH group is too far away).
However, the existence of tilts of the oxygen octahedra, very common in perovskite oxides~\cite{Goudochnikov2007} having a low tolerance factor $t=\frac{r_A+r_O}{\sqrt{2}(r_B+r_O)}$, makes the inter-octahedral hopping more likely in these strongly distorted structures (Fig.~\ref{3motions}), because some inter-octahedral oxygen-oxygen distances might be considerably lowered by the antiferrodistortive motions of the oxygen atoms. The proton may thus jump directly from one octahedron to another (one inter-octahedral hopping instead of two intra-octahedral hoppings), which might result in an increase of the macroscopic diffusion coefficients. Tab.~\ref{flip-inter} emphasizes the link between the tolerance factor $t$, the perovskite structure, and the possibility of inter-octahedral transfer according to the works mentioned.
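For reference, the tolerance factors quoted in Tab.~\ref{flip-inter} are close to the values obtained from tabulated Shannon ionic radii, as sketched in the Python snippet below (the radii used here are the commonly tabulated values for Ba$^{2+}$ in twelve-fold coordination and for Ce$^{4+}$, Zr$^{4+}$ and O$^{2-}$ in six-fold coordination; they are given only for illustration):
\begin{verbatim}
import math

# Shannon ionic radii (angstroms): Ba2+ (XII), Ce4+ (VI), Zr4+ (VI), O2- (VI).
r_Ba, r_Ce, r_Zr, r_O = 1.61, 0.87, 0.72, 1.40

def tolerance_factor(r_A, r_B, r_O=r_O):
    # Goldschmidt tolerance factor t = (r_A + r_O) / (sqrt(2) * (r_B + r_O)).
    return (r_A + r_O) / (math.sqrt(2.0) * (r_B + r_O))

print(f"BaCeO3: t = {tolerance_factor(r_Ba, r_Ce):.2f}")  # ~0.94
print(f"BaZrO3: t = {tolerance_factor(r_Ba, r_Zr):.2f}")  # ~1.00
\end{verbatim}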
\begin{figure}
\centering
\includegraphics[scale=0.45]{Figure3.png}
\caption{Possible motions of the proton in the perovskite $Pnma$ structure. (a) reorientation, (b) intra-octahedral hopping, (c) inter-octahedral hopping.}
\label{3motions}
\end{figure}
As explained in Sec.~\ref{stable-site}, in perovskites with a small lattice constant ($\leq$ 4.0 \AA), the proton in its stable site tends to bend towards one oxygen atom of one neighboring octahedron instead of remaining equidistant from both neighboring oxygens. In such systems, there are therefore twice as many stable sites as in perovskites with a larger lattice constant, so that an additional rotational mechanism might exist, corresponding to the slight reorientation of OH bending from the edge of one neighboring octahedron to the other. This mechanism was previously called ``flip''~\cite{Rasim2011}, ``bending''~\cite{Munch1996} or ``inter-octahedron hopping''~\cite{Matsushita1999} (but ``inter-octahedron reorientation'' would be less confusing, since the bond between O and H is not broken during this process).
However, the energy barrier of the flip is usually rather low ($\lesssim 0.1$~eV~\cite{Matsushita1999}), and thus most of the time neglected.
It can also be seen as part of the intra-octahedral transfer mechanism: before jumping from one oxygen to another, the proton undergoes a slight reorientation in order to reach an O-H...O alignment. The intra-octahedral transfer would thus be a two-step mechanism, bending then stretching.
Tab.~\ref{flip-inter} illustrates the possible correlation between the lattice parameter and the possibility of flipping for several proton-conducting perovskites.
Note that some studies found a possible inter-octahedral transfer in small cubic perovskites such as SrTiO$_3$~\cite{Kreuer1999,Munch1999a,Munch2000} or even in cubic perovskites with a large lattice constant such as BaZrO$_3$~\cite{Merinov2009}, in contradiction with other works~\cite{Munch2000,Gomez2005}.
\begin{table}
\centering
\begin{tabular}{lccccc}
Perovskite & a$_0$ (\AA)& t & Structure & Flip & Inter\\
SrCeO$_3$~\cite{Munch1999} & 4.29 & 0.89 & $Pnma$ & no & yes\\
CaZrO$_3$~\cite{Davies1999,Islam2001,Shi2005,Gomez2005} & 4.04 & 0.92 & $Pnma$ & no & yes \\
BaCeO$_3$~\cite{Munch1999} & 4.41 & 0.94 & $Pnma$ & no & yes \\
SrZrO$_3$~\cite{Davies1999,Shi2005,Gomez2007}& 4.14 & 0.95 & $Pnma$ & no & yes \\
CaTiO$_3$~\cite{Munch1999a,Munch2000,Gomez2005} & 3.85 & 0.97 & $Pnma$ & yes & yes \\
BaZrO$_3$~\cite{Shi2005,Gomez2005,Bjorketun2005} & 4.25 & 1.01 & $Pm\bar{3}m$ & no & no \\
SrTiO$_3$~\cite{Munch1999a,Munch2000} & 3.91 & 1.01 & $Pm\bar{3}m$ ($I4/mcm$) & yes & no \\
BaSnO$_3$~\cite{Bevillon2008a} & 4.16 & 1.03 & $Pm\bar{3}m$ & no & no \\
BaTiO$_3$~\cite{Gomez2005} & 4.06 & 1.07 & $Pm\bar{3}m$ ($R3m$) & no & no \\
\end{tabular}
\caption{Pseudo-cubic lattice constant a$_0$ from DFT calculations (GGA), tolerance factor $t$ calculated from Shannon ionic radii, crystal space group of different perovskite oxides, and whether flip or inter-octahedral hopping can occur or not. For BaTiO$_3$, the high-temperature cubic structure is considered, which is the one simulated in Ref.~\onlinecite{Gomez2005}. The cubic structure is also considered for SrTiO$_3$, rather than the low-temperature tetragonal structure. In those two cases, the ground state space group is given in parentheses.}
\label{flip-inter}
\end{table}
\subsection{Energy barriers and Minimum Energy Paths}
Using the string method, the Minimum Energy Paths joining the various stable sites have been computed, giving access to the transition states and thus to the energy barriers of the corresponding proton motions. These energy barriers are provided in Tab.~\ref{barrier-value}. Note that the barriers far from the dopants ({\it i.e.} from ``f'' to ``f'' sites) have also been computed in an 80-atom supercell without dopant and with a +1 charge state (to simulate the protonic defect), compensated by a uniform charged background. The energy barrier values obtained are identical to the ones obtained in the doped supercell within 0.01 eV and are presented in the Appendix.
\begin{table}
\centering
\begin{tabular}{c cc cc cc cc cc}
&\multicolumn{4}{c}{Reorientation} & \multicolumn{4}{c}{Intra} &\multicolumn{2}{c}{Inter} \\
From & To &\emph{{$\Delta$E}} & To & \emph{{$\Delta$E}} & To & \emph{{$\Delta$E}} & To & \emph{{$\Delta$E}} & To & \emph{{$\Delta$E}} \\
\hline
1an & 1bn & 0.50 & 1dn & 0.10 & 2dn & 0.37 & 2df & 0.58 & 1bf & 0.24 \\
1bn & 1cn & 0.30 & 1an & 0.49 & 2dn & 0.32 & 2df & 0.48 & 1af & 0.24 \\
1cn & 1dn & 0.05 & 1bn & 0.20 & 2bn & 0.29 & 2bf & 0.43 & & \\
1dn & 1an & 0.09 & 1cn & 0.16 & 2bn & 0.36 & 2bf & 0.52 & & \\
2an & 2bn & 0.31 & 2dn & 0.15 & 2cn & 0.22 & 2cf & 0.40 & 2af & 0.25 \\
2bn & 2cn & 0.28 & 2an & 0.43 & 1cn & 0.35 & 1cf & 0.51 & 2bf & 0.21 \\
& & & & & 1dn & 0.31 & 1df & 0.47 & & \\
2cn & 2dn & 0.03 & 2bn & 0.18 & 2an & 0.23 & 2af & 0.45 & & \\
2dn & 2an & 0.23 & 2cn & 0.09 & 1an & 0.29 & 1af & 0.44 & & \\
& & & & & 1bn & 0.24 & 1bf & 0.39 & & \\
\hline
1af & 1bf & 0.54 & 1df & 0.14 & 2df & 0.50 & 2dn & 0.44 & 1bf & 0.19 \\
& & & & & & & & & 1bn & 0.16 \\
1bf & 1cf & 0.33 & 1af & 0.54 & 2df & 0.45 & 2dn & 0.40 & 1af & 0.20 \\
& & & & & & & & & 1an & 0.16 \\
1cf & 1df & 0.06 & 1bf & 0.18 & 2bf & 0.36 & 2bn & 0.32 & & \\
1df & 1af & 0.08 & 1cf & 0.15 & 2bf & 0.42 & 2bn & 0.39 & & \\
2af & 2bf & 0.36 & 2df & 0.17 & 2cf & 0.39 & 2cn & 0.36 & 2af & 0.21 \\
& & & & & & & & & 2an & 0.17 \\
2bf & 2cf & 0.33 & 2af & 0.49 & 1cf & 0.47 & 1cn & 0.42 & 2bf & 0.16 \\
& & & & & 1df & 0.44 & 1dn & 0.39 & 2bn & 0.13 \\
2cf & 2df & 0.02 & 2bf & 0.17 & 2af & 0.36 & 2an & 0.28 & & \\
2df & 2af & 0.20 & 2cf & 0.08 & 1af & 0.37 & 1an & 0.34 & & \\
& & & & & 1bf & 0.31 & 1bn & 0.28 & & \\
\end{tabular}
\caption{Energy barriers (eV) for proton reorientation, intra-octahedral hopping (``intra'') and inter-octahedral hopping (``inter'').}
\label{barrier-value}
\end{table}
Starting from a given initial position, the possible motions for the proton are: two reorientations, two intra-octahedral hoppings, and possibly one inter-octahedral hopping if the configuration is favorable (which is the case for a- and b-type positions, where the oxygen atom facing the proton is close enough). Looking at Tab.~\ref{barrier-value}, we can notice that barriers between two ``near'' sites or two ``far'' sites, corresponding to reorientation barriers, are very similar (difference within 0.05 eV). This is expected, as the energy surface of protons bonded to an oxygen 1st neighbor of a dopant is almost simply shifted by 0.1~eV compared to that of protons far from the dopant, leading to a similar energy landscape. However, the case of hopping is more complicated, since the Coulomb interaction between H and Gd prevents hydrogen from easily escaping from the dopant neighborhood. Thus, hopping barriers from a ``near'' site to a ``far'' site are usually higher than those of the corresponding backward motion.
Fig.~\ref{profile} illustrates the energy profile for each of the three possible kinds of mechanisms (note this is not an exhaustive list of all possible profiles):
Fig.~\ref{profile}a shows the energy profile, as well as the evolution of the O-H distance and the angle $\phi$ from the initial O-H direction, in the case of a complete turn around an oxygen O1 near the dopant. Using the notations of Tab.~\ref{barrier-value}, it corresponds to the 4 reorientation mechanisms: $1an\Rightarrow1bn\Rightarrow1cn\Rightarrow1dn\Rightarrow1an$. These 4 reorientation barriers do not have the same profile at all: not only can the barrier height differ by a factor of 5, but the angle between two stable sites also varies from 60\textdegree{} to 120\textdegree{} instead of being fixed at 90\textdegree{} (the value in an ideal cubic perovskite). Figs.~\ref{profile}b and \ref{profile}c give similar information but for intra-octahedral and inter-octahedral hoppings respectively.
Both mechanisms seem to occur in two steps: first a reorientation, slight for inter-octahedral hopping ($\approx$ 5\textdegree) and larger for intra-octahedral hopping ($\approx$ 45\textdegree) in order to get O-H-O aligned, then the jump between both oxygen atoms. This reorientation can be related to what we mentioned as ``flip'' in the previous section.
\begin{center}
\begin{figure*}
\centering
\includegraphics[scale=0.625]{Figure4.png}
\caption{Energy profiles and evolution of some geometric quantities along typical Minimum Energy Paths. The angle $\phi$ is between the initial and current O-H direction.}
\label{profile}
\end{figure*}
\end{center}
\section{Discussion}
\subsection{Comparison between Gd-doped BaCeO$_3$ and In-doped CaZrO$_3$}
The present results on Gd-doped BaCeO$_3$ can be compared with previous values computed in In-doped CaZrO$_3$~\cite{Islam2001,Bilic2007,Bilic2008}, as both materials exhibit the same kind of structural distortion: BaCeO$_3$ and CaZrO$_3$ have the same perovskite structure with very close Goldschmidt tolerance factors (0.94 and 0.92 respectively) and thus share the same orthorhombic structure with P$nma$ space group. However, owing to its larger tolerance factor, BaCeO$_3$ should be slightly less distorted from the cubic structure, and inter-octahedral transfer may thus be harder than in CaZrO$_3$. Tab.~\ref{structural-bco-vs-czo} confirms that BaCeO$_3$ is a bit closer to an ideal cubic structure than CaZrO$_3$.
\begin{table}
\centering
\begin{tabular}{cccc}
& BaCeO$_3$ & CaZrO$_3$[\onlinecite{Bilic2007}] & cubic \\
a$_c$ (\AA)& 4.44 & 4.06 & --\\
a/a$_c$& 1.41 & 1.39 & 1.41 \\
b/a$_c$& 1.42 & 1.44 & 1.41 \\
c/a$_c$& 2.00 & 2.00 & 2.00 \\
$\overline{\text{A-O}}$/a$_c$ ($\pm\sigma$)& 0.71 ($\pm 0.21$) & 0.72 ($\pm 0.22$)& 0.71 \\
$\overline{\text{B-O}}$/a$_c$ ($\pm\sigma$)& 0.51 ($\pm 0.00$)& 0.52 ($\pm 0.00$) & 0.50\\
A-O-A (deg) & 153.85 & 144.74 & 180.00 \\
B-O-B (deg) & 156.45 & 145.49 & 180.00 \\
\end{tabular}
\caption{Structural parameters (lattice parameters, cation-oxygen distances and angles) for BaCeO$_3$, CaZrO$_3$ and a fictitious cubic perovskite.}
\label{structural-bco-vs-czo}
\end{table}
The same tendency is indeed observed, with a very large range of possible values for the energy barriers, from a few hundredths of an eV up to nearly 1~eV.
For instance, in BCGO, reorientation barriers can take a wide range of different values, starting below 0.1~eV for the barrier between c-type and d-type sites and reaching 0.5~eV for the barrier between a-type and b-type sites. Similar results have been found for In-doped CaZrO$_3$~\cite{Bilic2007}, except that the largest barrier can reach up to 0.9~eV.
The very small barrier between c and d sites might explain why position 1c is not considered at all in the work of Bilic and Gale~\cite{Bilic2007} (only 7 different positions instead of our 8 positions near a specific B-atom), and why position 2c is missing near some specific oxygen atoms O2. According to Tab.~\ref{energy-proton}, 1c and 2c are much higher in energy than the nearby positions, which is why the reorientation barriers out of c-type sites are so small.
In both materials, possible inter-octahedral hoppings have a smaller energy barrier than intra-octahedral hoppings. This follows from the ability of any oxygen octahedron to bend towards another in the orthorhombic $Pnma$ structure, so that two facing oxygens (belonging to different octahedra) can be brought very close to each other. Each octahedron nevertheless remains rigid, so that its own oxygen atoms cannot be brought closer to each other (though a little distortion during the transfer is observed, in agreement with previous calculations~\cite{Cherry1995}). Of course, the inter-octahedral hoppings are possible only when the oxygen atoms involved are close to each other (this corresponds to a/b type within our notation). The c/d type oxygens, which are moved further from each other as a result of the tilting process, are excluded from the inter-octahedral motions.
According to those common tendencies, we can suggest that all orthorhombic perovskites behave alike and make some assumptions:
\begin{enumerate}[i/]
\item rather low barriers ($\lesssim$ 0.2~eV) for inter-octahedral hopping depending on the level of distortion (barrier is smaller as distortion increases)
\item higher barriers ($\sim$ 0.3-0.6~eV) for intra-octahedral hopping
\item a wide range of values for reorientation, from less than 0.1~eV up to 0.8~eV, depending on the type of protonic site.
\end{enumerate}
Finally, there is a quantitative difference between both materials concerning the attractive power of the dopant: it seems much harder to escape from indium in CaZrO$_3$ than from Gd in BaCeO$_3$. The barrier to escape from indium is on average three times higher than the backward barrier, while in BaCeO$_3$ the escape barrier is higher by only 50\%. This may be due to the nature of the dopant, as suggested by Bjorketun et al.~\cite{Bjorketun2007}, who have shown that energy barriers for proton migration near a dopant can strongly depend on its nature. Therefore gadolinium seems to be a good candidate as a dopant, since its power of attraction is low enough to let the proton escape relatively easily.
\subsection{Rate-limiting events}
The rate-limiting process in such a distorted system is not so obvious. Contrary to what could be expected, the reorientation is not necessarily much faster than the hopping. Munch and co-workers have found that the proton transfer step is indeed rate-limiting in BaCeO$_3$, but of the same order of magnitude as reorientation for SrCeO$_3$~\cite{Munch1999}. More precisely, they computed an activation energy for rotational diffusion in BaCeO$_3$ of 0.07 eV for O$_1$ and 0.11 eV for O$_2$, close to the values we get for the lowest reorientation barriers. In earlier work~\cite{Munch1997}, they found for Ba\{Ce,Zr,Ti\}O$_3$ that reorientation happens much faster, with a time scale of $\sim 10^{-12}$~s, while proton transfer occurs at a time scale of $10^{-9}$~s. However, the three materials have been studied in their cubic structure, thus preventing the low-barrier inter-octahedral transfer. Gomez and co-workers~\cite{Gomez2007} point out that the rate-limiting process in the orthorhombic structure is an intra-octahedral transfer. The fact that most of these studies only focus on the cubic structure might explain why the transfer step has been thought to be rate-limiting.
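
As an illustration of how such barriers translate into the time scales quoted above, the sketch below (Python) converts a barrier $\Delta E$ into a rate $\Gamma = \nu_0 \exp(-\Delta E/k_B T)$ and a characteristic residence time $\tau \approx 1/\Gamma$, and performs a single kinetic Monte-Carlo event selection of the kind that will be used with the full barrier table; the attempt frequency $\nu_0 = 10^{13}$~s$^{-1}$ is an assumed typical value, not a result of the present calculations.
\begin{verbatim}
import math, random

K_B = 8.617e-5   # eV/K
NU0 = 1e13       # assumed attempt frequency (s^-1)

def rate(barrier_ev, temperature_k):
    return NU0 * math.exp(-barrier_ev / (K_B * temperature_k))

# A few representative events taken from the barrier table (eV), at 600 K
events = {"reorientation (low)": 0.05, "reorientation (high)": 0.50,
          "intra-octahedral hop": 0.35, "inter-octahedral hop": 0.20}
T = 600.0
rates = {name: rate(e, T) for name, e in events.items()}
for name, g in rates.items():
    print(f"{name:22s} Gamma ~ {g:.2e} s^-1   tau ~ {1.0 / g:.2e} s")

# One kinetic Monte-Carlo step: pick the next event with probability Gamma_i / sum(Gamma)
total = sum(rates.values())
r, acc = random.uniform(0.0, total), 0.0
for name, g in rates.items():
    acc += g
    if r <= acc:
        dt = -math.log(1.0 - random.random()) / total   # exponential waiting time
        print("next event:", name, "after ~", dt, "s")
        break
\end{verbatim}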
\section{Conclusion}
In this work, we have performed density-functional calculations on fully hydrated Gd-doped barium cerate and computed in an exhaustive way the Minimum Energy Paths between stable protonic sites close and far from the Gd dopant.
Proton transport in perovskites is usually described as a two-step Grotthuss-type diffusion mechanism: a quick reorientation, followed by a transfer to another oxygen~\cite{Kreuer1999}. However, even if this is correct in principle, we have found that in Gd-doped BaCeO$_3$, the reorientation is not necessarily a fast process compared to transfer. In this distorted perovskite with orthorhombic $Pnma$ space group, inter-octahedral hoppings with rather low barriers $\sim 0.2$~eV do exist.
Reorientation mechanisms can also be very different from one site to another, with barriers spanning a wide range of values, from 0.02~eV up to 0.54~eV. To a lesser extent, the same argument applies to intra-octahedral hopping, for which the energy barrier varies between 0.22 and 0.58~eV.
All these results are qualitatively comparable with a previous work focused on the orthorhombic perovskite In-doped CaZrO$_3$~\cite{Bilic2007}. The low barriers found for inter-octahedral hopping in these orthorhombic structures suggest that protonic diffusion could be much faster in such structures than in the cubic one, since an inter-octahedral hopping is equivalent to two intra-octahedral transfers but with a higher rate. All the barrier values will be exploited in Kinetic Monte-Carlo simulations to check the actual rate of reorientation versus hopping, and to simulate proton trajectories on larger space and time scales.
Finally, gadolinium in barium cerate seems to be interesting as a dopant as it acts like a shallow trap for protons, with a rather low escape barrier (compared to indium in calcium zirconate), enabling the proton to diffuse quite easily. However, other trivalent dopants could be tested to check whether they have better properties for protonic diffusion.
\section{Acknowledgements}
This work was performed using the HPC resources of the TERA-100 supercomputer of CEA/DAM and from GENCI-CCRT/CINES (Grants 2010-096468 and 2011-096468).
We acknowledge that some contributions to the present work have been achieved using the PRACE Research Infrastructure resource (machine CURIE) based in France at Bruy\`eres-le-Chatel (Preparatory Access 2010PA0397).
\section{Appendix: computation of energy barriers far from dopants using a charged supercell}
The barriers corresponding to motions far from the dopant, {\it i.e.} from a ``f'' configuration to another ``f'' configuration, have been recomputed using an undoped supercell in which the charge of the proton is compensated by a uniform charged background (jellium), as frequently done for the simulation of charged defects. In such cases, there are 16 different motions: 8 reorientations, 5 intra-octahedral hoppings and 3 inter-octahedral hoppings. This corresponds to 30 barrier values. The energy barriers obtained using this method are compared to the ones obtained using the doped supercell in Tab.~\ref{barrier-bco}: the values obtained using the two methods are the same within 0.01 eV, confirming that a 80-atom supercell is large enough to contain a region ``close'' to the dopant and a region ``far'' from it. The proton ``far'' from the dopant does not feel the influence of Gd atoms, and can be considered as in pure BaCeO$_3$. Besides, the fact that we get the same values in both cases suggests that the jellium only induces a systematic shift in total energies, but does not affect energy differences.
\begin{table}
\centering
\begin{tabular}{ccccc}
Barrier & \multicolumn{2}{c}{pure BaCeO$_3$} & \multicolumn{2}{c}{``far'' BaCeGdO$_3$} \\
Reorientation & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ \\
1a-1b & 0.54 & 0.54 & 0.54 & 0.54 \\
1b-1c & 0.33 & 0.18 & 0.33 & 0.18 \\
1c-1d & 0.06 & 0.15 & 0.06 & 0.15 \\
1d-1a & 0.09 & 0.14 & 0.08 & 0.14 \\
2a-2b & 0.36 & 0.49 & 0.36 & 0.49 \\
2b-2c & 0.33 & 0.17 & 0.33 & 0.17 \\
2c-2d & 0.03 & 0.08 & 0.02 & 0.08 \\
2d-2a & 0.20 & 0.17 & 0.20 & 0.17 \\
Hopping & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ \\
1a-2d & 0.50 & 0.37 & 0.50 & 0.37 \\
1b-2d & 0.45 & 0.31 & 0.45 & 0.31 \\
1c-2b & 0.36 & 0.47 & 0.36 & 0.47 \\
1d-2b & 0.42 & 0.44 & 0.42 & 0.44 \\
2a-2c & 0.39 & 0.36 & 0.39 & 0.36 \\
1a-1b & 0.19 & 0.20 & 0.19 & 0.20 \\
2a-2a & 0.21 & -- & 0.21 & -- \\
2b-2b & 0.16 & -- & 0.16 & -- \\
\end{tabular}
\caption{Comparison of barrier values ``far'' from the dopant in BCGO and in pure charged BaCeO$_3$.}
\label{barrier-bco}
\end{table}
\subsection{GPU SM Architecture}
\label{sec:sm_arch}
\noindent
\newedit{Figure~\ref{fig:gpu1} illustrates a representative SM architecture where shared memory may share a single on-chip memory structure with L1D cache~\cite{nvidia2009nvidia, nvidia2012nvidia}.
The single on-chip memory structure consists of 32 banks with 512 rows, where 128 or 384 contiguous rows can be allocated to shared memory (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, 16KB or 48KB) based on user configuration and the remaining rows are allocated as L1D cache~\cite{gebhart2012unifying}.
While all 32 L1D cache banks operate in tandem for a single contiguous 32$\times$4-byte (128-byte) L1D cache request, all 32 shared memory banks can be accessed independently and serve up to 32 shared memory requests in parallel.
L1D cache buffers data from the underlying memory and keeps a separate tag array to identify data hits. In such an architecture, an L1D cache access is serialized; that is, the tag array is accessed before the banks are accessed~\cite{edmondson1995internal}.
In contrast, as shared memory stores intermediate results generated by the ALUs for each Cooperative Thread Array (CTA) and is explicitly managed by programmers, it neither needs tags nor accesses data in the underlying memory.
Hence, there is no datapath between shared memory and L2 cache, and no cache write/eviction policies are applied in shared memory~\cite{nvidia2009nvidia, nvidia2012nvidia}. In addition, to manage the shared memory space, each SM keeps an independent Shared Memory Management Table (SMMT)~\cite{yang2012shared}
where each CTA reserves one entry to store the size and base address of allocated shared memory.
}
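
As a rough model of this bookkeeping, the sketch below (Python; the class and field names are our own simplification, not taken from vendor documentation) represents the 512-row on-chip array split between shared memory and L1D cache, together with an SMMT in which each CTA reserves one entry holding the base address and size of its shared-memory allocation.
\begin{verbatim}
TOTAL_ROWS = 512
ROW_BYTES = 32 * 4     # 32 banks x 4 bytes per row segment = 128 bytes per row

class OnChipMemory:
    def __init__(self, shared_kb=16):
        assert shared_kb in (16, 48)                        # user-selectable split
        self.shared_rows = shared_kb * 1024 // ROW_BYTES    # 128 or 384 rows
        self.l1d_rows = TOTAL_ROWS - self.shared_rows       # remainder is L1D cache
        self.smmt = []                                      # one entry per resident CTA

    def allocate_shared(self, cta_id, size_bytes):
        """Reserve shared memory for a CTA and record it in the SMMT."""
        base = sum(e["size"] for e in self.smmt)
        assert base + size_bytes <= self.shared_rows * ROW_BYTES, "out of shared memory"
        self.smmt.append({"cta": cta_id, "base": base, "size": size_bytes})
        return base

    def unused_shared_bytes(self):
        return self.shared_rows * ROW_BYTES - sum(e["size"] for e in self.smmt)

mem = OnChipMemory(shared_kb=16)
mem.allocate_shared(cta_id=0, size_bytes=4096)
print("unused shared memory:", mem.unused_shared_bytes(), "bytes")   # 12288
\end{verbatim}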
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/gpu1.eps}
\caption{GPU SM architecture.}
\label{fig:gpu1}
\end{figure}
\begin{comment}
\end{comment}
\begin{figure}[b]
\centering
\subfloat[]{\label{fig:l1_contention}\rotatebox{0}{\includegraphics[width=0.4\linewidth]{figs/l1_contention}}}
\hspace{4pt}
\subfloat[]{\label{fig:VTA_mech}\rotatebox{0}{\includegraphics[width=0.57\linewidth]{figs/VTA_mech}}}
\caption{(a) An example of locality and interference and (b) VTA structure.}
\end{figure}
\subsection{Cache Interference}
\label{sec:interfere}
\noindent
As many warps share small L1D cache, they often contend for the same cache line.
Hence, cached data of an active warp are frequently evicted by cache accesses of other active warps.
This phenomenon is referred to as \textit{cache interference} which often changes supposedly a regular memory access pattern into an irregular one.
Figure~\ref{fig:l1_contention} depicts an example of how the cache interference worsens data locality in L1D cache,
where warps \texttt{W0} and \texttt{W1} send memory requests to get data \texttt{D0} and \texttt{D4}, respectively.
However, since \texttt{D0} and \texttt{D4} are mapped to the same cache set \texttt{S0},
repeated memory requests from \texttt{W0} and \texttt{W1} to get \texttt{D0} and \texttt{D4} keep evicting \texttt{D4} and \texttt{D0} at cycles \texttt{(a)}, \texttt{(b)}, \texttt{(e)}, and \texttt{(f)}.
Unless the memory requests from \texttt{W1} and \texttt{W0} evicted \texttt{D0} and \texttt{D4}, respectively,
they should have been L1D cache hits.
Such a cache hit opportunity is also called \textit{potential of data locality}, which can be quantified by the frequency of re-referencing the same data unless cache interference occurs.
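
The following toy sketch (Python) replays an access pattern of this kind on a single cache set and counts the re-references that would have hit had the other warp not evicted the line; the five-access trace mirrors Figure~\ref{fig:l1_contention} and is our own illustrative example.
\begin{verbatim}
# Toy single-entry cache set: each access either hits or evicts the resident block.
trace = [("W0", "D0"), ("W1", "D4"), ("W0", "D0"), ("W1", "D4"), ("W0", "D0")]

resident = None      # (warp, data) currently resident in the set
seen = set()         # data blocks referenced at least once before
hits, potential = 0, 0

for warp, data in trace:
    if resident is not None and resident[1] == data:
        hits += 1                    # real L1D hit
    else:
        if data in seen:
            potential += 1           # would have hit without interference
        resident = (warp, data)      # the interfering warp evicts the resident block
    seen.add(data)

print("actual hits:", hits, "   potential (lost) hits:", potential)   # 0 and 3
\end{verbatim}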
\subsection{Potential of Data Locality Detection}
\label{sec:vta}
\noindent
To detect the potential of data locality described in Section~\ref{sec:interfere}, we may leverage a Victim Tag Array (VTA)~\cite{rogers2012cache} where
we store a Warp ID (WID) in each cache tag, as shown in Figure~\ref{fig:VTA_mech}.
A WID in a cache tag is to track which warp brought current data in a cache line.
When a memory request of a warp evicts data in a cache line, we first take
(1) the address in the cache tag associated with the evicted data and
(2) the WID of the warp evicting the data.
Then we store (1) and (2) in a VTA entry which is indexed by the WID stored in the cache tag (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, the WID of the warp which brought the evicted data in the cache line).
When memory requests of an active warp repeatedly incur VTA hits,
they exhibit potential of data locality.
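
A minimal sketch of this bookkeeping is given below (Python; the data-structure names are ours, and the per-warp VTA size of 8 entries is the value adopted later in this work). On every eviction, the victim tag and the evicting warp's WID are pushed into the VTA entry indexed by the warp that originally brought the victim line in; a later miss of that warp on the same address then counts as a VTA hit, i.e., as evidence of data locality destroyed by interference.
\begin{verbatim}
from collections import defaultdict, deque

VTA_ENTRIES_PER_WARP = 8

vta = defaultdict(lambda: deque(maxlen=VTA_ENTRIES_PER_WARP))  # WID -> recent victims
vta_hits = defaultdict(int)                                    # per-warp VTA hit counters

def on_eviction(owner_wid, victim_tag, evicting_wid):
    """Record (victim tag, evicting WID) in the VTA entry of the owner warp."""
    vta[owner_wid].append((victim_tag, evicting_wid))

def on_miss(wid, tag):
    """On a cache miss, a matching VTA entry means this warp was interfered with."""
    for victim_tag, evicting_wid in vta[wid]:
        if victim_tag == tag:
            vta_hits[wid] += 1
            return evicting_wid        # the warp that destroyed the would-be hit
    return None

on_eviction(owner_wid=0, victim_tag=0xABC, evicting_wid=1)  # W1 evicts W0's data
print(on_miss(wid=0, tag=0xABC), vta_hits[0])                # -> 1 1
\end{verbatim}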
\subsection{Methodology}
\label{sec:method}
\noindent\textbf{GPU architecture.}
We use GPGPU-Sim 3.2.2~\cite{aaamodt2012gpgpu} and configure it to model a GPU similar to NVIDIA GTX 480;
see Table \ref{tab:config} for the detailed GPGPU-Sim configuration parameters~\cite{nvidia2009nvidia}.
Besides, we enhance the baseline L1D and L2 caches with a XOR-based set index hashing technique~\cite{nugteren2014detailed}, making it close to the real GPU device's configuration.
Subsequently, we implement seven different warp schedulers:
(1) \texttt{GTO} (GTO scheduler with set-index hashing \cite{nugteren2014detailed});
(2) \texttt{CCWS};
(3) \texttt{Best-SWL} (best static wavefront limiting);
(4) \texttt{statPCAL} (representative implementation of bypass scheme\cite{li2015priority} that performs similar or better than \cite{li2015locality,tian2015adaptive});
(5) \texttt{CIAO-P} (\texttt{CIAO} with only redirecting memory requests of interfering warp to shared memory);
(6) \texttt{CIAO-T} (\texttt{CIAO} with only selective warp throttling); and
(7) \texttt{CIAO-C} (\texttt{CIAO} with both \texttt{CIAO-T} and \texttt{CIAO-P}).
Note that \texttt{CCWS}, \texttt{Best-SWL}, and \texttt{CIAO-P/T/C} leverage \texttt{GTO} to decide the order of execution of warps.
\texttt{CCWS} and \texttt{CIAO-T/C} stall a varying number of warps depending on memory access characteristics monitored at runtime.
In contrast, \texttt{Best-SWL} stalls a fixed number of warps throughout execution of a benchmark; we profile each benchmark to determine the number of stalled warps giving the highest performance for each benchmark; see column $\rm N_{wrp}$ in Table~\ref{tab:workload_charac}.
\noindent \textbf{Benchmarks.} We evaluate a large collection of benchmarks from \texttt{PolyBench}~\cite{grauer2012auto}, \texttt{Mars}~\cite{he2008mars} and \texttt{Rodinia}~\cite{che2009rodinia}
which are categorized into three classes:
(1) large-working set (LWS), (2) small-working set (SWS), and (3) compute-intensive (CI).
Table~\ref{tab:workload_charac} tabulates chosen benchmarks and their characteristics.
\begin{figure*}
\centering
\subfloat[IPC]{\label{fig:ATAX_back_IPC_fig}\rotatebox{0}{\includegraphics[width=0.33\linewidth]{figs/ATAX_back_IPC_fig}}}
\subfloat[Number of active warps]{\label{fig:ATAX_back_AW_fig}\rotatebox{0}{\includegraphics[width=0.33\linewidth]{figs/ATAX_back_AW_fig}}}
\subfloat[Cache interference]{\label{fig:ATAX_back_interf_fig}\rotatebox{0}{\includegraphics[width=0.33\linewidth]{figs/ATAX_back_interf_fig}}}
\caption{Comparison between \texttt{Best-SWL}, \texttt{CCWS} and \texttt{CIAO-T} over time: \texttt{ATAX} and \texttt{Backprop}}
\label{fig:IPCtrace_ATAX_BACK}
\end{figure*}
\begin{figure*}
\centering
\subfloat[IPC]{\label{fig:SYRK_KMN_IPC}\rotatebox{0}{\includegraphics[width=0.33\linewidth]{figs/SYRK_KMN_IPC}}}
\subfloat[Number of active warps]{\label{fig:SYRK_KMN_activewarp}\rotatebox{0}{\includegraphics[width=0.33\linewidth]{figs/SYRK_KMN_activewarp}}}
\subfloat[Cache interference]{\label{fig:SYRK_KMN_interference}\rotatebox{0}{\includegraphics[width=0.33\linewidth]{figs/SYRK_KMN_interference}}}
\caption{Comparison of \texttt{CIAO-T}, \texttt{CIAO-P} and \texttt{CIAO-C} over time: \texttt{SYRK} and \texttt{KMN}.}
\label{fig:IPCtrace_SYRK_seperate}
\end{figure*}
\ignore{
\begin{figure*}
\centering
\subfloat[IPC]{\label{fig:ATAX_back_IPC_fig}\rotatebox{0}{\includegraphics[width=1\linewidth]{figs/ATAX_back_SYRK_KMN_IPC_fig}}}
\subfloat[Number of active warps]{\label{fig:ATAX_back_AW_fig}\rotatebox{0}{\includegraphics[width=1\linewidth]{figs/ATAX_back_SYRK_KMN_AW_fig}}}
\subfloat[Cache Interference]{\label{fig:ATAX_back_interf_fig}\rotatebox{0}{\includegraphics[width=1\linewidth]{figs/ATAX_back_SYRK_KMN_interf_fig}}}
\caption{Performance analysis of \texttt{ATAX}, \texttt{Backprop}, \texttt{SYRK}, and \texttt{KMN}.
}
\label{fig:IPCtrace}
\end{figure*}
}
\subsection{Performance Analysis}
\label{sec:analy}
\noindent
Figure~\ref{fig:overall_ipc}
plots the IPC values with the seven warp schedulers and the \textbf{geometric-mean} IPC values of three benchmark classes (LWS, SWS, and CI), respectively,
normalized to those with \texttt{GTO}.
Overall, \texttt{CCWS}, \texttt{Best-SWL}, \texttt{statPCAL}, and \texttt{CIAO-C} provide 2\%, 16\%, 24\% and 56\% higher performance than \texttt{GTO}, respectively.
\texttt{GTO} performs worst among all evaluated schedulers because it only shuffles the order of executed warps and does not notably reduce the cache thrashing caused by many active warps accessing the small L1D cache.
In contrast, \texttt{Best-SWL} outperforms \texttt{GTO} as it throttles some warps, reducing the number of memory accesses to small L1D and thus cache thrashing.
Nonetheless, as \texttt{Best-SWL} must decide the number of throttled warps before execution of a given application,
it cannot effectively capture the optimal number of throttled warps varying within an application compared to warp schedulers that dynamically throttle the number of executed warps such as \texttt{CCWS} and \texttt{CIAO}.
For example, as \texttt{ATAX} exhibits very dynamic cache access patterns at runtime,
\texttt{CCWS} outperforms \texttt{Best-SWL} by 49\%.
Note that \texttt{CCWS} gives notably lower performance than \texttt{Best-SWL},
especially for CI benchmarks.
That is because running more active warps achieves higher performance for CI benchmarks, whereas \texttt{CCWS} unnecessarily stalls some active warps to give a higher priority to a few warps exhibiting high data locality.
\texttt{statPCAL} gives up to 37\% higher performance than \texttt{Best-SWL} because \texttt{statPCAL} offers higher TLP.
Specifically, when \texttt{statPCAL} detects under-utilization of L2 and/or main memory bandwidth, it activates throttled warps and makes these warps directly access the underlying memory (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, bypassing L1D cache).
Due to the long access latency and limited bandwidth of underlying memory, however,
\texttt{statPCAL} cannot significantly improve performance of LWS and SWS workloads such as \texttt{KMN}, \texttt{SYRK}, etc.
\texttt{CIAO-T} provides 32\% and 34\% higher performance than \texttt{CCWS} and \texttt{GTO}, respectively.
Furthermore, \texttt{CIAO-T} offers 22\% higher performance than \texttt{Best-SWL} for every benchmark except \texttt{SYR2K}, \texttt{II}, and \texttt{KMN}, which exhibit static cache access patterns at runtime.
Both \texttt{CIAO-T} and \texttt{CCWS} dynamically stall some active warps at runtime, but
our evaluation shows that it is often more effective to throttle the warps that considerably interfere with other warps than to throttle the warps with low potential of data locality, as \texttt{CCWS} does.
Furthermore, for CI benchmarks, \texttt{CIAO-T} offers as high performance as \texttt{GTO} in contrast to \texttt{CCWS};
refer to our earlier comparison between \texttt{GTO} and \texttt{CCWS} for CI benchmarks.
\texttt{CIAO-P}
gives 34\% higher performance than \texttt{GTO}.
\newedit{We observe that \texttt{CIAO-P} offers the highest TLP among all seven warp schedulers, entailing 28\% higher performance than \texttt{CIAO-T} for SWS class benchmarks. This is because \texttt{CIAO-P} fully utilizes the unused space of shared memory (cf. Figure~\ref{fig:shmutil_fig}). }
Nonetheless, its benefits can be limited for LWS class benchmarks in which the redirected memory requests of interfering warps are often too intensive and thus thrash the shared memory as well.
In such a case, \texttt{CIAO-T} can perform better than \texttt{CIAO-P}, giving 48\% and 66\% higher performance than \texttt{CIAO-P} and \texttt{CCWS}, respectively, as shown in Figure \ref{fig:IPC_fig}.
Lastly, \texttt{CIAO-C}, which synergistically integrates \texttt{CIAO-T} and \texttt{CIAO-P}, provides 56\%, 54\%, 17\% and 16\% higher performance than \texttt{GTO}, \texttt{CCWS}, \texttt{CIAO-T}, and \texttt{CIAO-P}, respectively.
\begin{figure}[b]
\centering
\subfloat[Various epoches.]{\label{fig:epoch}\rotatebox{0}{\includegraphics[width=0.48\linewidth]{figs/epoch}}}
\hspace{2pt}
\subfloat[Various high-cutoff thresholds.]{\label{fig:highcutoff}\rotatebox{0}{\includegraphics[width=0.48\linewidth]{figs/highcutoff}}}
\caption{Sensitivity analysis
}
\label{fig:sensi_scheduler}
\end{figure}
\begin{figure*}
\centering
\subfloat[IPC comparison of varying L1D cache configurations.]{\label{fig:IPC_fig_sens}\rotatebox{0}
{\includegraphics[width=0.49\linewidth]{figs/IPC_fig_sens.eps}}}
\subfloat[IPC comparison of varying DRAM bandwidths.]{\label{fig:IPC_fig_sens1}\rotatebox{0}
{\includegraphics[width=0.49\linewidth]{figs/IPC_fig_sens1.eps}}}
\caption{IPC of different L1D cache and DRAM configurations.
}
\label{fig:sensi1_cache}
\end{figure*}
\subsection{Effectiveness of Interference Awareness}
\noindent
Figure~\ref{fig:IPCtrace_ATAX_BACK}
shows the IPC, the number of active warps, and cache interference over time of \texttt{ATAX} as a representative application that exhibits distinct execution phases in a single kernel execution.
Specifically, \texttt{ATAX} exhibits two distinct execution phases.
The first phase comprised of the first 40-million instructions is very memory-intensive, whereas the second phase is very compute-intensive.
Figure \ref{fig:ATAX_back_IPC_fig} shows that \texttt{CIAO-T} outperforms \texttt{CCWS} and \texttt{Best-SWL} for the first 40-million instructions executed.
\texttt{CIAO-T} exhibits higher performance during this phase because \texttt{CIAO-T} more effectively reduces cache interference by throttling severely interfering warps, as shown in Figure \ref{fig:ATAX_back_interf_fig}.
After the first phase, \texttt{ATAX} starts a compute-intensive phase, performing the computation by fully exploiting data locality on the GPU caches.
As \texttt{Best-SWL} cannot capture these dynamics at runtime, it executes only 2 warps for the second-phase execution of \texttt{ATAX}.
In contrast, \texttt{CCWS} and \texttt{CIAO-C} dynamically reduce the number of stalled warps as they observe fewer cache misses and less cache interference,
giving 4$\times$ higher geometric-mean performance than \texttt{Best-SWL}.
We choose \texttt{Backprop} as a representative application that is very compute-intensive but also experiences many cache misses.
Figure \ref{fig:IPCtrace_ATAX_BACK} shows the performance change of \texttt{Backprop} over time.
\texttt{Best-SWL} and \texttt{CIAO-T} provide 500 IPC on average.
However, \texttt{CCWS} notably degrades the performance, ranging from 320 to 150 IPC because \texttt{CCWS} ends up giving a higher priority to warps with higher data locality and stalling more than 40 warps (or significantly reducing TLP).
In contrast, \texttt{CIAO-T}, which offers performance similar to \texttt{Best-SWL}, more selectively throttles warps than \texttt{CCWS} (i.e., only 10$\sim$20 most interfering warps), better preserving TLP.
\subsection{Sensitivity to Working Set Size}
\label{sec:tsa}
\noindent \textbf{Small-working set.}
Figure~\ref{fig:IPCtrace_SYRK_seperate} shows the performance of the three \texttt{CIAO} schemes for \texttt{SYRK}, a representative application with SWS.
Specifically, it illustrates the IPC, the number of active warps, and the number of cache conflicts of \texttt{SYRK} over time with the three \texttt{CIAO} schemes.
As shown in Figure \ref{fig:SYRK_KMN_IPC}, \texttt{CIAO-P} offers higher IPC than \texttt{CIAO-T} overall.
This is because it can secure higher TLP (\textit{cf}\onedot} \def\Cf{\textit{Cf}\onedot Figure \ref{fig:SYRK_KMN_activewarp}), whereas \texttt{CIAO-T} alone hurts TLP by throttling many active warps.
Using the unused shared memory space, \texttt{CIAO-P} can effectively reduce cache interference without sacrificing TLP in contrast to \texttt{CIAO-T}.
As expected, \texttt{CIAO-C} selectively stalls very few warps.
\noindent \textbf{Large-working set.}
Figure~\ref{fig:IPCtrace_SYRK_seperate} also depicts the performance of the three \texttt{CIAO} schemes for \texttt{KMN}, a representative application with LWS.
As shown in Figure~\ref{fig:SYRK_KMN_IPC}, \texttt{CIAO-T} provides 50\% higher IPC than \texttt{CIAO-P}, and
\texttt{CIAO-C} always achieves the highest performance during the entire execution period amongst all three schemes.
This is because, as shown in Figure \ref{fig:SYRK_KMN_interference}, \texttt{CIAO-P} still suffers from severe shared memory interference
as the amount of data requested by the partitioned warps exceeds the amount that shared memory can efficiently accommodate.
In contrast, \texttt{CIAO-C} can better utilize shared memory by selectively throttling only the warps that cause severe interference.
\ignore{
\begin{figure*}
\centering
\subfloat[IPC comparison]{\label{fig:IPC_fig_16112}\rotatebox{0}
{\includegraphics[width=0.8\linewidth]{figs/IPC_fig_16112.eps}}}
\subfloat[Geo-mean IPC]{\label{fig:IPC_fig2_16112}\rotatebox{0}
{\includegraphics[width=0.18\linewidth]{figs/IPC_fig2_16112.eps}}}
\caption{IPC of 16KB L1D cache and 112KB shared mem.
}
\label{fig:sensi_cache}
\end{figure*}
}
\ignore{the highest reduction in L2 miss rate comes from LRR + CIAO -- compared to LRR and LRR + \texttt{CCWS} by 52.7\% and 48.9\%, respectively. This dramatic decrease in miss rate results from a combined effect of the active warp number throttling and the consideration of cache interference upon warp scheduling. On the other hand, \texttt{GTO} + CIAO reduces the L2 miss rate by 28.3\% and 13.8\% over \texttt{GTO} and \texttt{GTO} + \texttt{CCWS}, respectively. Due to the high number of active warps, even restricting to the oldest warps, \texttt{GTO} still allows too much data to be contained in L2 cache. Even though, \texttt{CCWS} can further alleviate the L2-level cache interference by strictly limiting the active warp number, warps with high potential of data locality, which are prioritized in \texttt{CCWS}, can still contend with each other. The scheduling policy of CIAO successfully exploits this critical observation regarding the potential for further reduction in L2-level interference.
}
\ignore{
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/L1Dynenergy_fig.eps}
\caption{L1 total energy analysis.
}
\label{fig:L1Dynenergy_fig}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/L2dynenergy_fig.eps}
\caption{L2 total energy analysis.
}
\label{fig:L2dynenergy_fig}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figs/DRAMdynenergy_fig.eps}
\caption{DRAM dynamic energy analysis.
}
\label{fig:DRAMdynenergy_fig}
\end{figure}
}
\subsection{Sensitivity Study}
\label{subsubsec:sensi}
\noindent \textbf{Epoch value.}
Figure~\ref{fig:epoch} shows the effect of varying \texttt{high-cutoff} epoch values on the IPC for all the memory-intensive workloads.
As we increase the epoch from 1K to 50K instructions, the change in IPC is within 15\%.
Note that different workloads can achieve best performance with different epoch values.
That is because the epoch length determines how frequently \texttt{CIAO} checks for cache interference.
A shorter epoch provides fast response to cache interference, while a longer epoch can more accurately detect the warp causing most interference.
Taking this trade-off into account, we choose 5K instructions as \texttt{high-cutoff} epoch value.
An adaptive scheme is left for future work.
\noindent \textbf{High-cutoff threshold.}
Figure~\ref{fig:highcutoff} depicts performance corresponding to different \texttt{high-cutoff} thresholds,
where the \texttt{low-cutoff} threshold is fixed to half of it.
All benchmarks show steady performance within 5\% change during the entire execution period.
This is because \texttt{CIAO} throttles the active warps causing the most interference, whose IRS easily exceeds the thresholds we set. We choose 1\% as the \texttt{high-cutoff} threshold in this paper.
\newedit{
\noindent \textbf{L1D cache/DRAM configurations.} Figure~\ref{fig:sensi1_cache} illustrates the performance of LWS and SWS workloads by configuring various L1D cache/DRAM design parameters:
(1) \texttt{GTO};
(2) \texttt{GTO-cap} (\texttt{GTO} but increase L1D cache capacity to 48 KB and reduce shared memory size to 16 KB);
(3) \texttt{GTO-8way} (\texttt{GTO} but increase L1D cache associativity to 8 way);
(4) \texttt{statPCAL-2X} (\texttt{statPCAL} but double DRAM bandwidth from 177 GB/s to 340 GB/s);
(5) \texttt{CIAO-C};
(6) \texttt{CIAO-C-2X} (\texttt{CIAO-C} but double DRAM bandwidth).
As shown in Figure \ref{fig:IPC_fig_sens}, while increasing L1D cache capacity (\texttt{GTO-cap}) and associativity (\texttt{GTO-8way}) can effectively improve the overall performance by 108\% and 51\% compared to \texttt{GTO}, \texttt{CIAO-C} still outperforms \texttt{GTO-cap} and \texttt{GTO-8way} by 14\% and 57\%, respectively. This is because \texttt{GTO-cap} and \texttt{GTO-8way} cannot fully eliminate cache interference, as they cannot distinguish the requests of interfering warps from those of interfered warps and effectively isolate them. On the other hand, while \texttt{statPCAL-2X} can benefit from the increased DRAM bandwidth, bypassing requests to the underlying DRAM still suffers from the long DRAM delay, as the latency of a DRAM access is much longer than that of an L1D cache access. Hence, as shown in Figure \ref{fig:IPC_fig_sens1}, \texttt{CIAO-C-2X} outperforms \texttt{statPCAL-2X} by 16\% on average.
}
\ignore{
\noindent \textbf{Varying L1D cache sizes.}
Figure~\ref{fig:sensi1_cache}
illustrates the performance of all workloads for the various L1D cache configurations shown in Table~\ref{tab:config}.
For 48KB L1D cache and 16KB shared memory, \texttt{CIAO-C} gives 29\%, 27\%, 13\%, and 12\% higher IPC than \texttt{GTO}, \texttt{CCWS}, \texttt{Best-SWL}, and \texttt{statPCAL}, respectively.
Since 48KB L1D cache is much larger than the default 16KB size, the performance of \texttt{GTO} improves greatly and leaves less room for improvement by \texttt{CIAO}\xspace.
}
\subsection{Overhead Analysis}
\noindent
Implementing the interference detector, \texttt{CIAO} leverages the VTA structure originally proposed by \texttt{CCWS}~\cite{rogers2012cache}, but employs only 8 VTA entries for each warp (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, half of the VTA entries that \texttt{CCWS} uses).
Using CACTI 6.0~\cite{muralimanohar2009cacti}, we estimate that the area of one VTA structure is only 0.65 $mm^2$ for 15 SMs, which accounts for only 0.12\% of the total chip size of NVIDIA GTX480 (529 $mm^2$~\cite{geforce-gtx-480}).
In addition, \texttt{CIAO} uses 48 registers as VTA-hit counters (one for each warp).
Since each VTA-hit counter resets at the start of each kernel, a 32-bit counter is sufficient to prevent its overflow.
The interference and pair lists are implemented with SRAM arrays indexed by WIDs.
Since the total number of active warps in a CTA does not exceed 64 (usually, 48 active warps in each SM), we configure the interference and pair lists with 64 entries.
Each entry of the interference list requires 8 (= 6+2) bits to store one warp index and saturation counter value, while each entry of pair list requires 12 (=6 + 6) bits to store two warp indices.
Using CACTI 6.0, we estimate that the combined area of the VTA-hit counters, interference list, and pair lists is 549 $\mu m^{2}$ per SM (8235 $\mu m^{2}$ for 15 SMs).
On the other hand, Equation \ref{eq:irs} is implemented with a few adders, a shifter, and a comparator, which also requires very low cost (2112 gates).
For our shared memory modification, the translation unit, multiplexer and MSHR only need 4500 gates and 64B storage per SM.
We also track the power consumption of new components employed in \texttt{CIAO} by leveraging GPUWattch~\cite{leng2013gpuwattch},
which reveals the average power is around 79mW.
Overall, \texttt{CIAO} improves the performance by more than 50\% with a negligible area cost (less than 2\% of the total GTX480 chip area) and power consumption (only 0.3\% of GTX480 overall power).
\subsection{Cache Interference Detection}
\label{sec:schedule}
\noindent \textbf{Estimation of cache interference.}
A level of cache interference experienced by a warp can be quantified by
an Individual Re-reference Score (IRS)
which can be expressed by:
\begin{equation}
\label{eq:irs}
IRS_i = \frac{F^i_{VTA-hits}}{N_{executed-inst}/N_{active-warp}}
\end{equation}
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth]{figs/vta.eps}
\caption{Microarchitecture adaptation for \texttt{CIAO}\xspace.}
\label{fig:vta}
\end{figure}
\noindent where $i$ is the index of an active warp, $F^i_{VTA-hits}$ is the number of VTA hits for warp $i$, $N_{executed-inst}$ is the total number of executed instructions, and $N_{active-warp}$ is the number of active warps running on an SM.
$IRS_i$ represents VTA hits per instruction (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, intensity of VTA hits) for warp $i$.
High $IRS_i$ indicates warp $i$ has experienced severe cache interference in a given epoch.
Based on $IRS_i$, \texttt{CIAO}\xspace (1) decides whether it isolates warps interfering with warp $i$, (2) stalls these interfering warps, or (3) reactivates the stalled warps.
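
A literal transcription of Eq.~(\ref{eq:irs}) and of the three decisions it drives is sketched below (Python); the two cutoff values anticipate the thresholds introduced in the next paragraph.
\begin{verbatim}
HIGH_CUTOFF = 0.01    # severe interference
LOW_CUTOFF  = 0.005   # light interference or completed execution

def irs(vta_hits_i, executed_insts, active_warps):
    """Individual Re-reference Score of warp i: VTA hits per executed instruction."""
    return vta_hits_i / (executed_insts / active_warps)

def decision(irs_i):
    if irs_i > HIGH_CUTOFF:
        return "isolate or stall the warp most interfering with warp i"
    if irs_i < LOW_CUTOFF:
        return "reactivate stalled warps / redirect them back to L1D cache"
    return "keep the current state"

score = irs(vta_hits_i=30, executed_insts=100_000, active_warps=48)
print(f"IRS = {score:.4f} -> {decision(score)}")  # IRS = 0.0144 -> isolate or stall ...
\end{verbatim}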
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{figs/shm.eps}
\caption{GPU on-chip memory structure adaptation.}
\label{fig:shm}
\end{figure*}
\noindent \textbf{Decision thresholds.}
For these aforementioned three decisions we introduce two threshold values: (1) \texttt{high-cutoff} and (2) \texttt{low-cutoff}.
$IRS_i$ over \texttt{high-cutoff} indicates that warp $i$ has experienced severe cache interference.
Subsequently, \texttt{CIAO}\xspace decides to isolate or stall the warp that most recently and severely interfered with warp $i$.
$IRS_i$ below \texttt{low-cutoff} often indicates that warp $i$ has experienced light cache interference and/or completed its execution.
Then, \texttt{CIAO}\xspace decides to reactivate previously stalled warps or
redirect memory requests of these warps back to L1D cache.
As these two thresholds influence the efficacy of \texttt{CIAO}\xspace, we sweep these two values, evaluate diverse memory-intensive applications, and determine that \texttt{high-cutoff} and \texttt{low-cutoff}, which minimize cache interference and maximize performance, are 0.01 and 0.005, respectively.
See Section~\ref{subsubsec:sensi} for our sensitivity analysis.
\noindent \textbf{Epochs.}
As $IRS_i$ changes over time, \texttt{CIAO}\xspace should track the latest $IRS_i$ and compare it against \texttt{high-cutoff} and \texttt{low-cutoff} to precisely determine whether a warp needs to be isolated, stalled, or reactivated. However, updating the $IRS_i$ calculation consumes more than 6 cycles, which can put it on the critical path of performance.
To this end, \texttt{CIAO}\xspace divides the execution time into \texttt{high-cutoff} and \texttt{low-cutoff} epochs, respectively.
At the end of each \texttt{high-cutoff} (or \texttt{low-cutoff}) epoch, \texttt{CIAO}\xspace updates $IRS_i$ and compares it against \texttt{high-cutoff} (or \texttt{low-cutoff}).
The \texttt{low-cutoff} epoch should be shorter than the \texttt{high-cutoff} epoch for the following reason.
As preserving high TLP is key to improving GPU performance, \texttt{CIAO}\xspace attempts to minimize the negative effect of stalling warps by reactivating stalled warps as soon as these warps no longer notably interfere with other warps at runtime.
To validate this strategy, we sweep \texttt{high-cutoff} and \texttt{low-cutoff} epoch values, evaluate diverse memory-intensive applications, and determine that the best \texttt{high-cutoff} and \texttt{low-cutoff} epoch values are every 5000 and 100 instructions, respectively.
See Section~\ref{subsubsec:sensi} for our in-depth sensitivity analysis.
\noindent \textbf{Microarchitecture support.}
Figure~\ref{fig:vta} depicts the necessary hardware, which is built upon the existing VTA organization~\cite{rogers2012cache}, to implement a cache interference detector.
To capture different levels of cache interference experienced by individual warps, we implement a VTA-hit counter per warp and a total instruction counter per SM (\texttt{VTACount0-k} and \texttt{Inst-total} in the figure) atop a VTA.
Each VTA-hit counter records the number of VTA hits for each warp, and the total instruction counter tracks the total number of instructions executed by a given SM (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, $N_{executed-inst}$ in Eq.(\ref{eq:irs})).
To compare $IRS_i$ against \texttt{high-cutoff} and \texttt{low-cutoff}, we implement the cutoff testing unit which can be implemented by registers, a shifter, and simple comparison logic.
Lastly, we implement the samplers to count the number of executed instructions and determine whether or not the end of a \texttt{high-cutoff} or \texttt{low-cutoff} epoch has been reached.
To manage the information related to tracking interfering warps for each warp, we implement the interference list.
Each entry is indexed by WID of a given warp and stores a 6-bit WID of an interfering warp and a 2-bit saturation counter (\texttt{C} in the figure).
When a VTA hit occurs, the corresponding entry of interference list is updated, as described in Section~\ref{sec:interference_detection}.
\texttt{CIAO}\xspace checks the interference list for warp $i$ whenever it needs to isolate or stall an interfering warp based on $IRS_i$.
To facilitate this, we also augment a 1-bit active flag (\texttt{V}) and 1-bit isolation flag (\texttt{I}) with each ready warp entry in the warp list (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, a component of warp scheduler).
Using \texttt{V} and \texttt{I} bits, the warp scheduler can identify whether a given warp is in active (\texttt{V}=\texttt{1}, \texttt{I}=\texttt{0}), isolated (\texttt{V}=\texttt{1}, \texttt{I}=\texttt{1}), or stalled state (\texttt{V}=\texttt{0}).
We also implement a \textit{pair list}.
Each entry is indexed by the WID of a warp at the front of the warp list and composed of two fields
to record which interfered warp triggered to redirect memory requests of the warp or stall the warp in the past.
Suppose that warp $i$ is at the front of the warp list.
Based on WIDs from the first or second field of the entry indexed by warp $i$,
\texttt{CIAO}\xspace checks $IRS_k$ where $k$ is the WID of the interfered warp that previously triggered to either redirect memory requests of warp $i$ or stalling warp $i$.
Then \texttt{CIAO}\xspace decides whether it reactivates warp $i$ or redirects memory requests of warp $i$ back to L1D cache based on $IRS_k$.
For example, as \texttt{W0} is severely interfered by \texttt{W1}, \texttt{CIAO}\xspace decides to redirect memory requests of \texttt{W1} to unused shared memory space.
Then \texttt{W0} is recorded in the first field of the entry indexed by \texttt{W1} and \texttt{I} associated with \texttt{W1} is set, as depicted in Figure~\ref{fig:vta}.
Subsequently, \texttt{W1} begins to send memory requests to the shared memory, but \texttt{CIAO}\xspace observes that \texttt{W1} also severely interferes with \texttt{W3} that sends its memory requests to the shared memory.
As \texttt{CIAO}\xspace decides to stall \texttt{W1}, \texttt{W3} is recorded in the second field of the entry indexed by \texttt{W1} and \texttt{V} associated with \texttt{W1} is cleared.
When \texttt{CIAO}\xspace needs to reactivate \texttt{W1} later, the second field of the pair list entry and \texttt{V} corresponding to \texttt{W1} are cleared to
inform the warp scheduler of the event that the warp is active.
When \texttt{CIAO}\xspace needs to make \texttt{W1} send its memory request back to L1D cache, the corresponding field in the pair list entry and \texttt{I} are cleared.
See Section~\ref{sec:pat} for more details on the pair list.
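
The two flag bits and the pair list can be summarized by the small state-transition sketch below (Python; the encoding is a simplification of the hardware tables, and the warp IDs follow the example above).
\begin{verbatim}
# Hypothetical encoding of the warp-list flags (V, I) and the two-field pair list.
warps = {wid: {"V": 1, "I": 0} for wid in range(4)}     # all warps start active
pair_list = {wid: [None, None] for wid in range(4)}     # [redirect trigger, stall trigger]

def redirect_to_shared(interfering, interfered):
    warps[interfering]["I"] = 1              # isolated: requests go to shared memory
    pair_list[interfering][0] = interfered

def stall(interfering, interfered):
    warps[interfering]["V"] = 0              # stalled
    pair_list[interfering][1] = interfered

def state(wid):
    v, i = warps[wid]["V"], warps[wid]["I"]
    return "stalled" if v == 0 else ("isolated" if i == 1 else "active")

# Scenario from the text: W1 interferes with W0, then with W3 while using shared memory.
redirect_to_shared(interfering=1, interfered=0)
print("W1:", state(1), pair_list[1])         # isolated [0, None]
stall(interfering=1, interfered=3)
print("W1:", state(1), pair_list[1])         # stalled  [0, 3]
\end{verbatim}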
\subsection{Shared Memory Architecture}
\label{sec:shared_mem_arch}
\noindent
Figure~\ref{fig:shm}a and b illustrate \texttt{CIAO}\xspace on-chip memory architecture and its data placement layout, respectively.
\noindent \textbf{Determination of unused shared memory space.}
One challenge to utilize unused shared memory space is that shared memory is managed by programmers and the used amount of shared memory space varies across implementations of a kernel.
To make \texttt{CIAO}\xspace on-chip memory architecture transparent to programmers, we leverage the existing SMMT structure to determine the unused shared memory space.
When a CTA is launched, \texttt{CIAO}\xspace checks the corresponding SMMT entry to determine the amount of unused shared memory space (\textit{cf}\onedot} \def\Cf{\textit{Cf}\onedot Section~\ref{sec:sm_arch}).
Then, \texttt{CIAO}\xspace inserts a new entry in the SMMT with the start address and size of unused shared memory to reserve the space for storing 128-byte data blocks and tags.
\noindent \textbf{Placement of tags and data.}
In contrast to L1D cache, shared memory does not have a separate memory array to accommodate tags~\cite{gebhart2012unifying}.
In this work, instead of employing an additional tag array, we propose to place both 128-byte data blocks and their tags into the shared memory.
This is to minimize the modification of the current on-chip memory structure architected to be configured as both L1D cache and shared memory.
As shown in Figure~\ref{fig:shm}b, we partition 32 shared memory banks into two bank groups and stripe a 128-byte data block across 16 banks within one bank group.
Each 128-byte data block can be accessed in parallel since each shared memory bank allows 64-bit accesses~\cite{nvidia2012nvidia}.
Since a tag and a WID require only 31 bits (= 25 + 6 bits), two tags can be placed in a single bank which is different from banks storing the corresponding data blocks.
Then 32 tags can be grouped together to better utilize a row of one bank group (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, 16 banks).
This design strategy, which puts a tag and the corresponding data block into two different bank groups, shuns bank conflicts and thus allows accesses of a tag and a data block in parallel.
Furthermore, we only use the unused shared memory space as direct-mapped cache so that a pair of a 128-byte data block and the corresponding tag can be accessed with a single shared memory access.
\noindent \textbf{Address translation unit.}
As shown in Figure~\ref{fig:shm}b, we introduce a hardware address translation unit in front of shared memory to determine where a target 128-byte data block and its tag exist in the shared memory.
In practice, a global memory address can be decomposed by cache-related information such as a tag, block index and byte offset.
However, as the usage of shared memory can vary based on the needs of each CTA, we put an 8-bit mask into the translation unit to decide how many rows will be used for each CTA at runtime.
Figure~\ref{fig:shm}c shows how our translation unit determines locations of a target data block and its tag;
the data block address (of shared memory) consists of four fields, the byte offset (``\texttt{F}''), bank index (``\texttt{B}''), bank group (``\texttt{G}''), and row index (``\texttt{R}''), which are presented from LSB to MSB.
Specifically, we have 8-byte rows per bank, 16 banks per group, two bank groups and 256 rows (at most),
which in turn 3, 4, 1, and 8 bits for \texttt{F}, \texttt{B}, \texttt{G} and \texttt{R}, respectively.
The remaining bits (16 bits in this example) are used as part of the tag.
Note that our tags also contain 6-bit WID and 9-bit data block index as the number of cache lines required can be greater than the number of rows.
In \texttt{CIAO}\xspace, one row within a bank group can hold 32 tags since a physical row per bank contains two tags.
That is, the actual position of a tag can be indicated by 5 bits (\textit{i.e}\onedot} \def\Ie{\textit{I.e}\onedot, 1 \texttt{F} and 4 \texttt{B} bits),
which are also used for the row index of the corresponding data block.
To access a data block and the corresponding tag in parallel, \texttt{G} of the data block will be flipped and assigned to such tag's 5 bits as a significant bit.
The remaining \texttt{R} bits are assigned to the row index of the target tag.
Note that, as shown in the figure, the start of index for both a data block and a tag can be rearranged by considering the data block and tag offset registers,
which are used to adapt the unused shared memory size allocated for cache.
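
The field decomposition can be made explicit with the bit-manipulation sketch below (Python). It only extracts the F/B/G/R fields of a block address and flips the bank-group bit for the tag lookup; the packing of the 5-bit tag position is one possible choice and should be read as illustrative rather than as the exact hardware layout.
\begin{verbatim}
F_BITS, B_BITS, G_BITS, R_BITS = 3, 4, 1, 8  # byte offset, bank, bank group, row (LSB -> MSB)

def split_fields(addr):
    f = addr & ((1 << F_BITS) - 1)
    b = (addr >> F_BITS) & ((1 << B_BITS) - 1)
    g = (addr >> (F_BITS + B_BITS)) & ((1 << G_BITS) - 1)
    r = (addr >> (F_BITS + B_BITS + G_BITS)) & ((1 << R_BITS) - 1)
    tag = addr >> (F_BITS + B_BITS + G_BITS + R_BITS)
    return f, b, g, r, tag

def tag_location(f, b, g):
    """Place the tag in the opposite bank group so data and tag are read in parallel."""
    position = ((f >> (F_BITS - 1)) << B_BITS) | b   # 1 F bit + 4 B bits (one packing)
    return {"group": g ^ 1, "position_in_row": position}

addr = 0x12B45
f, b, g, r, tag = split_fields(addr)
print(f"F={f} B={b} G={g} R={r} tag={tag:#x}")
print("tag lookup:", tag_location(f, b, g))
\end{verbatim}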
\noindent \textbf{Datapath connection.}
When we leverage unused shared memory as cache, we need a datapath between shared memory and L2 cache.
Since the shared memory is disconnected from the global memory in the conventional GPU,
we need to adapt the on-chip memory structure, which is partitioned between L1D cache and shared memory, to share some resources of the L1D cache with the shared memory (\textit{e.g}\onedot} \def\Eg{\textit{E.g}\onedot, datapath to L2 cache, MSHR, etc.).
As illustrated in Figure~\ref{fig:shm}a,
a multiplexer is implemented to connect the write queue (WQ) and response queue (RespQ) to either L1D cache or shared memory.
The \texttt{CIAO}\xspace cache control logic controls the multiplexer based on the isolation flag bit (\texttt{I}) and the result of checking cache tags associated with accessing L1D cache or shared memory serving as cache.
We also augment an extra field with each MSHR entry to store the shared memory address of a memory request from the aforementioned address translation unit.
Once the shared memory issues a fill request after a miss, the request reserves one MSHR entry by filling in its global and translated shared memory addresses.
If the response from L2 cache matches the global address recorded in the corresponding MSHR entry, the filling data can be directly stored in the shared memory based on the translated shared memory address.
\noindent \textbf{Performance optimization and coherence.}
When \texttt{CIAO}\xspace redirects memory requests of an interfering warp from L1D cache to shared memory, the shared memory does not have any data.
This can incur (1) performance degradation because of cold misses and (2) some coherence issues.
To address these two issues,
when \texttt{CIAO}\xspace needs to access the shared memory, the cache controller first checks the tag array of L1D cache.
If a target data resides in L1D cache (not in shared memory), the L1D cache will evict the data directly to the response queue,
which is used to buffer the fetched data from L2 cache and invalidate the corresponding cache line in L1D cache.
Note that checking the tag array and accessing L1D cache are serialized as described in Section~\ref{sec:sm_arch}.
Meanwhile, the shared memory issues a fill request to MSHR, as the shared memory does not have the data yet.
During this process, the target data will be directly fetched from the response queue to the shared memory (\textit{cf}\onedot} \def\Cf{\textit{Cf}\onedot Figure~\ref{fig:shm}a).
In this way, we naturally migrate data from L1D cache to shared memory, hiding the penalty of cold cache misses and coherence issues.
\begin{algorithm}[t]
\scriptsize
\DontPrintSemicolon
i := getWarpToBeScheduled()\;
InstNo := getNumInstructions()\;
ActiveWarpNo := getNumActiveWarp()\;
\uIf{Warp(i).V == 0 \textbf{and} end of low cut-off epoch}{
\tcc{Warp(i) is stalled}
k := Pair\_List[i][1]\;
$IRS_k$ := $^{VTAHit[k]}/_{InstNo/ActiveWarpNo}$\;
\uIf{$IRS_k$ \textgreater low-cutoff \textbf{and} Warp(k) needs executing}{
\textbf{continue}\;
}
\uElse{
Warp(i).V := 1\;
Pair\_List[i][1] := -1 \tcp{cleared} } }
\uElseIf{Warp(i).I == 1 \textbf{and} end of low cut-off epoch}{
\tcc{Warp(i) redirects to access shared memory}
k := Pair\_List[i][0]\;
$IRS_k$ := $^{VTAHit[k]}/_{InstNo/ActiveWarpNo}$\;
\uIf{$IRS_k$ \textgreater low-cutoff \textbf{and} Warp(k) needs executing}{
\textbf{continue}\;
}
\uElse{
Warp(i).I := 0\;
Pair\_List[i][0] := -1 \tcp{toggling} } }
\uIf{Warp(i).V == 1 \textbf{and} end of high cut-off epoch }{
\tcc{Warp(i) is active}
$IRS_i$ := $^{VTAHit[i]}/_{InstNo/ActiveWarpNo}$\;
j := Interference\_List[i]\;
\uIf{$IRS_i$ \textgreater high-cutoff \textbf{and} $j$ != $i$ }{
\uIf{ Warp(j).I == 1}{
Warp(j).V := 0\;
Pair\_List[j][1] := i\;
}
\uElseIf{ Warp(j).I == 0}{
Warp(j).I := 1\;
Pair\_List[j][0] := i\;
}
}
}
\caption{\texttt{CIAO}\xspace scheduling algorithm}
\label{algo:CIAO}
\end{algorithm}
\subsection{Putting It All Together}
\label{sec:pat}
\noindent
Algorithm~\ref{algo:CIAO} describes how \texttt{CIAO}\xspace schedules warps.
For every \texttt{low-cutoff} epoch, the warp at the front of the warp list (\textit{e.g}\onedot} \def\Eg{\textit{E.g}\onedot, warp $i$), is examined
to decide whether \texttt{CIAO}\xspace redirects memory requests of warp $i$ back to L1D cache or reactivates warp $i$.
More specifically, \texttt{CIAO}\xspace first checks the first or second field of the \textit{pair list} entry corresponding to warp $i$.
Once \texttt{CIAO}\xspace confirms that either \texttt{CIAO}\xspace previously redirected memory requests of warp $i$ to shared memory or stalled warp $i$ because warp $i$ severely interfered with another warp (\textit{e.g}\onedot} \def\Eg{\textit{E.g}\onedot, warp $k$),
it redirects the memory requests of warp $i$ back to L1D cache or reactivates warp $i$,
unless the following two conditions are satisfied:
(1) $IRS_k$ is still higher than \texttt{low-cutoff} and (2) warp $k$ has not completed its execution.
Every \texttt{high-cutoff} epoch, \texttt{CIAO}\xspace examines $IRS_i$.
If warp $i$ is in the active warp list and $IRS_i$ is higher than \texttt{high-cutoff},
\texttt{CIAO}\xspace looks up the \textit{interference} list to determine which warp has most severely interfered with warp $i$.
Once \texttt{CIAO}\xspace determines the most interfering warp (e.g., warp $j$) for warp $i$,
\texttt{CIAO}\xspace checks whether it has redirected memory requests of warp $j$ to shared memory or stalled warp $j$.
If \texttt{CIAO}\xspace sees that warp $j$ still sends memory requests to L1D cache, it isolates warp $j$, redirects memory requests of warp $j$ to shared memory,
and records warp $i$ in the first field of the pair list entry corresponding to warp $j$ to indicate that warp $i$ has triggered the redirection of memory requests of warp $j$.
If \texttt{CIAO}\xspace has already redirected memory requests of warp $j$, then
\texttt{CIAO}\xspace starts to stall warp $j$ and records warp $i$ in the second field of the pair list entry corresponding to warp $j$.
This record is referenced later when \texttt{CIAO}\xspace decides whether to reactivate warp $j$.
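For readers who prefer a software-level restatement, the following Python sketch mirrors the end-of-\texttt{high-cutoff}-epoch decision of Algorithm~\ref{algo:CIAO}. It is a simplified illustration only: the dictionary fields follow the names used in the algorithm, but the code is not the hardware implementation.
\begin{verbatim}
# Simplified restatement of the high-cutoff-epoch branch of Algorithm 1.
def irs(vta_hit, warp, inst_no, active_warps):
    # IRS metric as used in Algorithm 1: VTAHit[warp] / (InstNo / ActiveWarpNo)
    return vta_hit[warp] / (inst_no / active_warps)

def high_cutoff_epoch(i, state, vta_hit, inst_no, active_warps, high_cutoff):
    if state["V"][i] != 1:                       # only active warps are examined
        return
    if irs(vta_hit, i, inst_no, active_warps) <= high_cutoff:
        return
    j = state["interference_list"][i]            # most interfering warp for warp i
    if j == i:
        return
    if state["I"][j] == 0:                       # j still uses L1D: redirect it
        state["I"][j] = 1
        state["pair_list"][j][0] = i             # remember who triggered the redirection
    else:                                        # j is already redirected: stall it
        state["V"][j] = 0
        state["pair_list"][j][1] = i

state = {"V": {0: 1, 1: 1}, "I": {0: 0, 1: 0},
         "interference_list": {0: 1, 1: 0},
         "pair_list": {0: [-1, -1], 1: [-1, -1]}}
high_cutoff_epoch(0, state, vta_hit={0: 40, 1: 5}, inst_no=2000,
                  active_warps=2, high_cutoff=0.02)
print(state["I"][1], state["pair_list"][1])      # warp 1 is now redirected by warp 0
\end{verbatim}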
\newcommand{\sect}[1]{\section{#1}}
\newcommand{\subsect}[1]{\subsection{#1}}
\newcommand{\subsubsect}[1]{\subsubsection{#1}}
\newcommand{\mysect}[1]{\subsect{#1}}
\newtheorem{ourtask}{Task}
\subsection{Cache Interference Detection}
\label{sec:interference_detection}
\noindent
As introduced in Section~\ref{sec:interfere}, some warps incur more severe cache interference than other warps (i.e., non-uniform cache interference).
However, it is non-trivial to capture such non-uniform interference occurring during the execution of applications
at compile time \cite{chenadaptive}.
Thus, we need to determine severely interfering and interfered warps at runtime.
Severely interfered warps can be tracked by leveraging a VTA structure (cf.\ Section~\ref{sec:vta}).
A na\"ive way to determine severely interfering warps for each warp, however, demands a high storage cost,
because each warp needs to keep track of cache misses incurred by all other $n-1$ warps.
This in turn requires a storage structure with $n(n-1)$ entries where $n$ is the number of active warps per SM (i.e., 48 warps).
Searching for a cost-effective way to determine severely interfering warps, we exploit our following observation on an important characteristic of cache interference.
\begin{figure}
\centering
\subfloat[]{\label{fig:unbalance_arrow}\rotatebox{0}{\includegraphics[width=0.34\linewidth]{figs/unbalance_arrow}}}
\subfloat[]{\label{fig:kmeans_un1}\rotatebox{0}{\includegraphics[width=0.61\linewidth]{figs/kmeans_un1}}}
\subfloat[]{\label{fig:sat_counter}\rotatebox{0}{\includegraphics[width=0.92\linewidth]{figs/sat_counter}}}
\caption{(a) Warps interfering with warp W34 and their interference frequency.
(b) Min and max interference frequencies experienced by each warp and each evaluated workload. (c) Interference detection example.}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{figs/toggle_thrott_exam.eps}
\caption{\texttt{CIAO}\xspace execution flow.}
\label{fig:thrott_exam}
\end{figure*}
\newedit{ Figure~\ref{fig:unbalance_arrow} shows that \texttt{W32} interferes with \texttt{W34} more than two thousand times, whereas some warps (e.g., \texttt{W2}) do not interfere with \texttt{W34} at all in \texttt{KMEANS}~\cite{che2009rodinia};
we observe a similar trend on cache interference in all other benchmarks that we tested (cf. Figure~\ref{fig:kmeans_un1}). }
Observing such an interference characteristic, we propose to track only the most recently and frequently interfering warp for each warp.
This significantly reduces the storage cost required to track every interfering warp for each warp.
Specifically, \texttt{CIAO}\xspace keeps a small memory structure denoted by \textit{interference list}
where each entry is indexed by the WID of a currently executed warp.
To track the most recently and frequently interfering warp for a currently executed warp,
we may augment each list entry with a 2-bit saturation counter.
Figure~\ref{fig:sat_counter} illustrates how \texttt{CIAO}\xspace utilizes the counter to track an interfering warp.
Suppose that a previously executed warp (\texttt{W32}) interfered with a currently executed warp (\texttt{W34}).
That is, \texttt{W32} is an interfering WID and \texttt{W34} is an interfered WID.
Subsequently, the interfering WID is stored in the list entry indexed by the interfered WID,
and the counter in the list entry is set to \texttt{00};
the interfering WID is provided by a VTA entry field that tracks which warp incurred the last eviction (cf.\ Section~\ref{sec:vta}).
Whenever \texttt{W32} interferes with \texttt{W34} (not shown in the figure), the counter is incremented by 1.
Suppose that the counter has already reached \texttt{11} (\redcircled{\small{1}}) at a given cycle.
When another warp (\texttt{W42}) interferes with warp \texttt{W34} in a subsequent cycle, the counter is decremented by 1 (\redcircled{\small{2}}).
Then, if warp \texttt{W32} interferes with \texttt{W34} again, the counter is incremented by 1 (\redcircled{\small{3}}).
The interfering WID in the list entry is replaced with the most recent interfering WID only when its saturation counter has decreased to \texttt{00},
so that the warp with the most frequent cache interference can be kept in the interference list.
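The interference list update can be summarized by the following Python sketch of the 2-bit saturating counter policy described above; it is a behavioural model for illustration only, not the hardware structure.
\begin{verbatim}
# Behavioural sketch of the per-warp interference list with a 2-bit
# saturating counter (illustration only).
class InterferenceList:
    def __init__(self, num_warps):
        # each entry: [interfering WID or None, 2-bit saturating counter]
        self.entries = [[None, 0] for _ in range(num_warps)]

    def record_eviction(self, interfered, interfering):
        """Called when `interfering` evicts a VTA-tracked line of `interfered`."""
        wid, cnt = self.entries[interfered]
        if wid is None:
            self.entries[interfered] = [interfering, 0]        # first record: counter = 00
        elif wid == interfering:
            self.entries[interfered] = [wid, min(cnt + 1, 3)]  # same warp: count up (saturate at 11)
        elif cnt > 0:
            self.entries[interfered] = [wid, cnt - 1]          # other warp: count down, keep WID
        else:
            self.entries[interfered] = [interfering, 0]        # counter at 00: replace WID

    def most_interfering(self, interfered):
        return self.entries[interfered][0]

il = InterferenceList(48)
for _ in range(4):
    il.record_eviction(34, 32)     # W32 keeps interfering with W34
il.record_eviction(34, 42)         # W42 interferes once: counter is decremented
print(il.most_interfering(34))     # still 32
\end{verbatim}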
\subsection{CIAO On-Chip Memory Architecture}
\label{sec:warp_partitioning}
\noindent
An effective way to reduce cache interference is to isolate cache accesses of interfering warps from those of interfered warps
after partitioning the cache space and allocating separate cache lines to the interfering warps.
Prior work proposed various techniques to partition the cache space for CPUs (e.g., \cite{qureshi2006utility, srikantaiah2008adaptive}).
However, the L1D cache is too small to apply such techniques to GPUs,
as the number of GPU threads sharing L1D cache lines is very large compared with that of CPU threads.
For example, only two or three cache lines can be allocated to each warp, if we apply a CPU-based cache partitioning technique to the L1D cache of GTX480.
Such a small number of cache lines per warp can even worsen cache thrashing.
\newedit{Meanwhile, we observe that programmers prefer L1D cache to shared memory for programming simplicity, and that the limited number of concurrently running GPU threads constrains the usage of shared memory, leaving
a large fraction of shared memory unused
(cf.\ $F_{smem}$ of Table~\ref{tab:workload_charac} in Section~\ref{sec:method}).
This agrees with the analysis of prior work~\cite{hayes2014unified, virtualthread}. }
Exploiting such unused shared memory space,
we propose to redirect memory requests of severely interfering warps to the unused shared memory space.
As there is no cache interference at the beginning of kernel execution,
memory requests of all the warps are directed to L1D cache, as depicted in Figure~\ref{fig:thrott_exam}a.
However, as the kernel execution progresses, cache accesses begin to compete with one another for specific cache lines in L1D cache.
Once the intensity of cache interference exceeds a threshold, \texttt{CIAO}\xspace determines the severely interfering warps (cf.\ Section~\ref{sec:interference_detection}).
Subsequently, \texttt{CIAO}\xspace redirects memory requests of these interfering warps to unused shared memory space,
isolating the interfering warps from the interfered warps in terms of cache accesses, as depicted in Figure~\ref{fig:thrott_exam}b.
This in turn can significantly reduce cache contention without throttling warps (i.e., without hurting TLP).
\newedit{After the redirection, the memory requests are forwarded from L1D cache to shared memory, but the data may already be present in the L1D cache (cf.\ W3/D3 in Figure~\ref{fig:thrott_exam}b). To guarantee cache coherence between L1D cache and shared memory, a single data copy needs to be exclusively stored in either shared memory or L1D cache.
This challenge can be addressed by migrating the data copy from L1D cache to shared memory, which takes the following steps: 1) a data miss signal is raised for shared memory, 2) the data copy in L1D cache is evicted to the response queue, and 3) a new MSHR entry is filled with a pointer referring to the location of the single data copy in the response queue. Later on, to fill the data miss, shared memory fetches the data from the response queue based on the location information recorded in the MSHR.
}
When \texttt{CIAO}\xspace detects significant decrease in cache contentions due to a change in cache access patterns or completion of execution of some warps,
it redirects the memory requests of these interfering warps from shared memory back to L1D cache (cf.\ Figure~\ref{fig:thrott_exam}c).
To exploit the unused shared memory space for the aforementioned purpose, however, two challenges need to be addressed.
First, the shared memory has its own address space separated from the global memory,
and there is no hardware support that translates a global memory address to a shared memory address.
Second, the shared memory does not have a direct datapath to L2 cache and main memory~\cite{jamshidi2014d}.
That is, it always receives and sends data only through the register file.
To overcome these limitations, we propose to adapt shared memory architecture as follows.
First, we implement an address translation unit in front of shared memory to translate a given global memory address to a local shared memory address.
Second, we slightly adapt the datapath between L1D and L2 caches such that the shared memory can also access L2 cache when the unused shared memory space serves as cache.
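As an illustration only, the following Python sketch assumes a simple direct-mapped, line-granularity mapping from global addresses onto the unused shared-memory lines; the concrete mapping used by the translation unit, as well as the line size and line count below, are assumptions made for this sketch.
\begin{verbatim}
# Hypothetical global-to-shared address translation (illustration only).
LINE_SIZE = 128          # bytes per line (assumed)
UNUSED_LINES = 96        # unused shared-memory lines available as cache (assumed)

def global_to_shared(global_addr):
    """Map a global address to (shared line index, tag) under the assumed mapping."""
    line_addr = global_addr // LINE_SIZE
    return line_addr % UNUSED_LINES, line_addr // UNUSED_LINES

shared_tags = [None] * UNUSED_LINES      # tag array placed in front of shared memory

def shared_lookup(global_addr):
    index, tag = global_to_shared(global_addr)
    if shared_tags[index] == tag:
        return "hit", index
    shared_tags[index] = tag             # allocate; the data would be fetched from
    return "miss", index                 # L2 over the adapted L1D/L2 datapath

print(shared_lookup(0x1A2B40))           # ('miss', 54): first access allocates the line
print(shared_lookup(0x1A2B40))           # ('hit', 54)
\end{verbatim}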
\subsection{CIAO Warp Scheduling}
\label{sec:warp_throttling}
\noindent
Although \texttt{CIAO}\xspace on-chip memory architecture
can effectively isolate cache accesses of interfering warps from those of interfered warps, its efficacy depends on various run-time factors, such as the number of interfering warps and the amount of unused shared memory space.
For example,
the interfering warps end up thrashing the shared memory as well when the amount of unused shared memory space
is insufficient to handle a large number of memory requests from the interfering warps in a short time period (cf. Figure~\ref{fig:thrott_exam}d).
To efficiently handle such a case, we propose to throttle interfering warps \textit{only} when it is not effective to redirect memory requests of interfering warps to the shared memory.
Specifically, sharing the same cache interference detector used for \texttt{CIAO}\xspace on-chip memory architecture,
\texttt{CIAO}\xspace monitors the intensity of interference at the shared memory at runtime.
Once the intensity of interference at the shared memory exceeds a threshold, \texttt{CIAO}\xspace stalls
the most severely interfering warp at the shared memory (e.g., \texttt{W2} in Figure~\ref{fig:thrott_exam}e).
\texttt{CIAO}\xspace repeats this step until the intensity of interference at the shared memory falls below the threshold.
As some warps complete their execution and subsequently the intensity of interference at the shared memory falls below the threshold,
\texttt{CIAO}\xspace starts to reactivate the stalled warp(s) in the reverse order to keep high TLP and maximize the utilization of shared memory (cf.\ Figure~\ref{fig:thrott_exam}f).
Note that \texttt{CIAO}\xspace warp scheduling shares the same interference detector with \texttt{CIAO}\xspace on-chip memory architecture, instead of
keeping two separate interference detectors for L1D and shared memory, respectively.
This is because isolated interfering warps do not compete for L1D cache with warps that exclusively access L1D cache, and memory accesses of isolated interfering warps often interfere with one another.
In other words, L1D cache and shared memory interferences do not affect each other.
Hence, L1D cache and shared memory can share the same VTA array to detect interferences.
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{Background}
\label{sec:background}
\input{background}
\section{Architecture and Scheduling}
\label{sec:overview}
\input{overview}
\section{Implementation}
\label{sec:implementation}
\input{implementation}
\section{Evaluation}
\label{sec:result}
\input{evaluation}
\section{Discussion and Related Work}
\label{sec:relatedwork}
\input{related}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion}
\section{Acknowledgement}
\label{sec:ack}
\input{acknowledge}
\bibliographystyle{IEEEtran}
\section{Introduction}
\textquotedblleft Planck's constant $h$ is a `quantum
constant\textquotedblright' is what I am told by my students.
\ \textquotedblleft Planck's constant is not allowed in a classical
theory\textquotedblright\ is the view of many physicists. \ I believe that
these views misunderstand the role of physical constants and, in particular,
the role of Planck's constant $h$ within classical and quantum theories. \ It
is noteworthy that despite the current orthodox view which restricts the
constant $h$ to quantum theory, Planck searched for many years for a way to
fit his constant $h$ into classical electrodynamics, and felt he could find no
place. \ Late in life, Planck\cite{SA} acknowledged that his colleagues felt
that his long futile search \textquotedblleft bordered on a
tragedy.\textquotedblright\ \ However, Planck's constant $h$ indeed has a
natural place in classical electrodynamics. \ Yet even today, most physicists
are unaware of this natural place, and some even wish to suppress information
regarding this role. \ In this article, we review the historical\cite{history}
appearance of Planck's constant, and then emphasize its contrasting roles in
classical and quantum theories. \
\section{Physical Constants as Scales for Physical Phenomena}
Physical constants appear in connection with measurements of the scale of
natural phenomena. \ Thus, for example, Cavendish's constant $G$ appeared
first in connection with gravitational forces using the Newtonian theory of
gravity. \ At the end of the 18th century, Cavendish's experiment using a
torsion balance in combination with Newton's theory provided the information
needed for an accurate evaluation of the constant. \ However, because physical
constants refer to natural phenomena and are not the exclusive domain of any
one theory, Cavendish's constant $G$ also reappears in Einstein's 20th-century
general-relativistic description of gravitational phenomena. \
In a similar fashion, Planck's constant $h$ sets the scale of electromagnetic
phenomena at the atomic level. \ It was first evaluated in 1899 in connection
with a fit of experimental data to Wien's theoretical suggestion for the
spectrum of blackbody radiation. \ Subsequently, the constant has reappeared
both in quantum theory and in classical electrodynamics. \ In its role as a
scale, the constant $h$ can appear in any theory which attempts to explain
aspects of atomic physics.
\section{Physical Constants of the 19th Century}
The 19th century saw developments in theories involving a number of physical
constants which are still in use today. \ The kinetic theory of gases involves
the gas constant $R$ and Avogadro's number $N_{A}$. \ The unification of
electricity and magnetism in Maxwell's equations involves an implicit or
explicit (depending on the units) appearance of the speed of light in vacuum
$c.$ \ Measurements of blackbody radiation led to the introduction of new
physical constants. \ Thus the appearance of the Stefan-Boltzmann law
$\mathcal{U}_{T}=a_{S}T^{4}V$ for the thermal energy $\mathcal{U}_{T}$ of
radiation in a cavity of volume $V$ at temperature $T$ introduced a new
constant $a_{S},$ Stefan's constant, in 1879. Today this constant is
reexpressed in terms of later constants as $a_{S}=\pi^{2}k_{B}^{4}/(15\hbar^{3}c^{3}).$ \ In the 1890s, careful experimental work on the
spectrum of blackbody radiation led to Wien's theoretical suggestion for the
blackbody radiation spectrum with its two constants (labeled here as $\alpha$
and $\beta$),
\begin{equation}
\rho_{W}(\nu,T)=\alpha\nu^{3}\exp[-\beta\nu/T].\label{Wien}
\end{equation}
Also at the end of the century, the measurement of the ratio of charge to mass
for cathode rays led to new constants involving the charge and mass of the electron.
\section{Appearance of Planck's Constant}
Planck's great interest in thermodynamics led him to consider the equilibrium
of electric dipole oscillators when located\ in random classical radiation.
\ Planck found that the random radiation spectrum could be connected to the
average energy $U(\nu,T)$ of an electric dipole oscillator of natural
frequency $\nu$ as $\rho(\nu,T)=(8\pi\nu^{2}/c^{3})U(\nu,T)$.~\ \ Introducing
Wien's suggestion $\rho_{W}$ in (\ref{Wien}) for the spectral form, Planck
found for the average energy of an oscillator
\begin{equation}
U_{W}(\nu,T)=h\nu\exp[-\beta\nu/T].
\end{equation}
Here is where Planck's constant $h$\ first appeared in physics. \ At the
meeting of the Prussian Academy of Sciences on May 18, 1899, Planck
reported\cite{May1899} the value $\beta$ as $\beta=0.4818\times10^{-10}$
sec$\cdot$K$^{o}$ and the value of the constant $h$ as $h=6.885\times10^{-27}$
erg-sec. \ Thus initially, Planck's constant appeared as a numerical fit to
the experimental blackbody data when using theoretical ideas associated with
Wien's proposed form for the radiation spectrum. \ Planck's constant $h$ had
nothing to do with quanta in its first appearance in physics.
In the middle of the year 1900, experimentalists Rubens and Kurlbaum found
that their measurements of the blackbody radiation spectrum departed from the
form suggested by Wien in Eq. (\ref{Wien}). \ It was found that at low
frequencies (long wavelengths) the spectrum $\rho(\nu,T)$ seemed to be
proportional to $\nu^{2}T.$ \ Planck learned of the new experimental results,
and, using ideas of energy and entropy for a dipole oscillator, introduced a
simple interpolation between the newly-suggested low-frequency form and the
well-established Wien high-frequency form. \ His interpolation gave him the
Planck radiation form corresponding to a dipole oscillator energy
\begin{equation}
U_{P}(\nu,T)=\frac{h\nu}{\exp[\beta\nu/T]-1}. \label{Planck}
\end{equation}
Planck reported\cite{Oct1900} this suggested blackbody radiation spectrum to
the German Physical Society on October 19, 1900. \ The Planck spectrum
(\ref{Planck}) involved exactly the same constants as appeared in his work
when starting from Wien's spectrum. \ The experimentalists confirmed that
Planck's new suggested spectrum was an excellent fit to the data. \ Once again
Planck's constant $h$ appeared as a parameter in a fit to experimental data
when working with an assumed theoretical form. \ There was still no suggestion
of quanta in connection with Planck's constant $h.$
Planck still needed a theoretical justification for his new spectral form.
\ Although Planck had hoped initially that his dipole oscillators would act as
black particles and would bring random radiation into the equilibrium
blackbody spectrum, he had come to realize that his small linear oscillators
would not change the frequency spectrum of the random radiation; the incident
and the scattered radiation were at the same frequency. \ In late 1900 in an
\textquotedblleft act of desperation,\textquotedblright\ Planck turned to
Boltzmann's statistical work which he had earlier \textquotedblleft vehemently
rejected.\textquotedblright\ \ It was in connection with the use of
statistical ideas that Planck found that the constant $\beta$ in the radiation
spectra could be rewritten as $\beta=h/k_{B}$, where $k_{B}=R/N_{A}$, with $R$
and $N_{A}$ being the constants which had already appeared in the kinetic
theory of gases. \ Indeed, it was Planck who introduced Boltzmann's constant
$k_{B}$ into physics. \ Boltzmann had always stated that entropy $S$ was
proportional to the logarithm of probability $W$ without ever giving the
constant of proportionality. \ Planck introduced the equality $S=k_{B}\ln W.$
\ Also, in his calculations of the probability, Planck departed from
Boltzmann's procedures in retaining the energy connection $\mathcal{E}=h\nu$
without taking the expected limit $\mathcal{E\rightarrow}0.$ \ Only by
avoiding the limit, could Planck recover his radiation spectrum. \ It was here
in retaining $\mathcal{E}=h\nu,$ rather than taking Boltzmann's limit, that
the association of Planck's constant $h$\ with quanta first appeared. \
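As a quick numerical illustration of the relation $\beta=h/k_{B}$, the short calculation below combines Planck's 1899 value of $\beta$ with the modern value of Boltzmann's constant; this pairing is an anachronism made only for illustration (Planck's own value of $k$ differed slightly, which is why the product does not reproduce his $h=6.885\times10^{-27}$ erg-sec exactly).
\begin{verbatim}
# Numerical check of beta = h/k_B, i.e., h = beta * k_B.
beta = 0.4818e-10        # sec*K, Planck's 1899 value
k_B  = 1.380649e-16      # erg/K, modern value (assumption: not Planck's own value)
h = beta * k_B
print("h = beta*k_B = %.3e erg-sec" % h)   # ~6.65e-27, close to the modern 6.626e-27
\end{verbatim}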
\section{Planck's Constant in Current Quantum Theory}
Although Planck's constant did not originally appear in connection with
quantum theory, the constant became a useful scale factor when dealing with
the photoelectric effect, specific heats of solids, and the Bohr atom. \ The
theoretical context for all these phenomena was quantum theory. \ Quantum
theory developed steadily during the first third of the 20th century. \ Today
quantum theory is regarded as the theory which gives valid results for
phenomena at the atomic level. \
Our current textbooks emphasize that quantum theory incorporates Planck's
constant $h$ into the essential aspects of the theory involving non-commuting
operators. \ Thus position and momentum are associated with operators
satisfying the commutation relation $[\widehat{x},\widehat{p}_{x}
]=ih/(2\pi)=i\hbar.$ \ The non-commutativity is associated with a non-zero
value of Planck's constant $h$ and disappears when $h$ is taken to zero.
\ Associated with the non-commuting operators is the zero-point energy
$U=(1/2)h\nu_{0}$ of a harmonic oscillator of natural frequency $\nu_{0}.$
\ The oscillator zero-point energy vanishes along with a vanishing value for
$h.$
\section{Planck's Constant in Classical Electrodynamics}
The natural place for Planck's constant within classical electrodynamics is as
the scale factor of the source-free contribution to the general solution of
Maxwell's equations. \ Maxwell's equations for the electromagnetic fields
$\mathbf{E}(\mathbf{r},t)$ and $\mathbf{B}(\mathbf{r,}t)$ can be rewritten in
terms of the potentials $\Phi(\mathbf{r},t)$ and $\mathbf{A}(\mathbf{r},t)$
where $\mathbf{E}=-\nabla\Phi-(1/c)\partial\mathbf{A}/\partial t$ and
$\mathbf{B}=\nabla\times\mathbf{A.}$ \ In the Lorenz gauge, Maxwell's
equations for the potentials become wave equations with sources in the charge
density $\rho(\mathbf{r},t)$ and current density $\mathbf{J}(\mathbf{r},t)$,
\begin{equation}
\left( \nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right)
\Phi(\mathbf{r},t)=-4\pi\rho(\mathbf{r},t), \label{scalar}
\end{equation}
\begin{equation}
\left( \nabla^{2}-\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}\right)
\mathbf{A}(\mathbf{r},t)=-4\pi\frac{\mathbf{J}(\mathbf{r},t)}{c}.
\label{vector}
\end{equation}
The general solutions of these differential equations in all spacetime with
outgoing wave boundary conditions are
\begin{equation}
\Phi(\mathbf{r},t)=\Phi^{in}(\mathbf{r},t)+
{\textstyle\int}
d^{3}r^{\prime}
{\textstyle\int}
dt^{\prime}\frac{\delta(t-t^{\prime}-|\mathbf{r-r}^{\prime}|/c)}
{|\mathbf{r-r}^{\prime}|}\rho(\mathbf{r}^{\prime},t^{\prime}),
\end{equation}
\begin{equation}
\mathbf{A}(\mathbf{r},t)=\mathbf{A}^{in}(\mathbf{r},t)+
{\textstyle\int}
d^{3}r^{\prime}
{\textstyle\int}
dt^{\prime}\frac{\delta(t-t^{\prime}-|\mathbf{r-r}^{\prime}|/c)}
{|\mathbf{r-r}^{\prime}|}\frac{\mathbf{J}(\mathbf{r}^{\prime},t^{\prime})}{c},
\end{equation}
where $\Phi^{in}(\mathbf{r},t)$ and $\mathbf{A}^{in}(\mathbf{r},t)$ are the
(homogeneous) source-free contributions to the general solutions of Eqs.
(\ref{scalar}) and (\ref{vector}). \ It is universally accepted that these are
the accurate solutions. \ However, all the textbooks\cite{texts} of
electromagnetism omit the source-free contributions $\Phi^{in}(\mathbf{r},t)$
and $\mathbf{A}^{in}(\mathbf{r},t),$ and present only the contributions due to
the charge and current sources $\rho(\mathbf{r},t)$ and $\mathbf{J}
(\mathbf{r},t).$ \
Within a laboratory situation, the experimenter's sources correspond to
$\rho(\mathbf{r},t)$ and $\mathbf{J}(\mathbf{r},t)$ while the source-free
terms $\Phi^{in}(\mathbf{r},t)$ and $\mathbf{A}^{in}(\mathbf{r},t)$ correspond
to contributions the experimenter does not control which are present when the
experimenter enters his laboratory. \ The terms $\Phi^{in}(\mathbf{r},t)$ and
$\mathbf{A}^{in}(\mathbf{r},t)$ might correspond to radio waves coming from a
nearby broadcasting station, or might correspond to thermal radiation from the
walls of the laboratory. \ In a shielded laboratory held at zero temperature,
there is still random classical radiation present which can be measured using
the Casimir force\cite{Casimir} between two parallel conducting plates.
\ Interpreted within classical electromagnetic theory, this Casimir force
responds to all the classical radiation surrounding the conducting parallel
plates, and it is found experimentally\cite{exp} that this force does not go
to zero as the temperature goes to zero. \ The Casimir force, interpreted
within classical electromagnetic theory, indicates that there is classical
electromagnetic zero-point radiation present throughout spacetime. \ The
zero-point spectrum is Lorentz invariant with a scale set by Planck's constant
$h.$ \ Thus in a shielded laboratory at zero temperature, the vector potential
$\mathbf{A}(\mathbf{r},t)$ should be written as
\begin{align*}
\mathbf{A}(\mathbf{r},t)  = &
{\textstyle\sum_{\lambda=1}^{2}}
{\textstyle\int}
d^{3}k\widehat{\epsilon}(\mathbf{k},\lambda)\left( \frac{h}{4\pi^{3}\omega
}\right) ^{1/2}\sin\left[ \mathbf{k}\cdot\mathbf{r}-\omega t+\theta
(\mathbf{k},\lambda)\right] \\
& +
{\textstyle\int}
d^{3}r^{\prime}
{\textstyle\int}
dt^{\prime}\frac{\delta(t-t^{\prime}-|\mathbf{r}-\mathbf{r}^{\prime}
|/c)}{|\mathbf{r}-\mathbf{r}^{\prime}|}\frac{\mathbf{J}(\mathbf{r}^{\prime
},t^{\prime})}{c},
\end{align*}
including a source-free contribution which is a Lorentz-invariant spectrum of
plane waves with random phases $\theta(\mathbf{k},\lambda)$ and with Planck's
constant $h$\ setting the scale.\cite{Breview}
As indicated here, there is a natural place for Planck's constant $h$ within
classical electromagnetic theory. \ Once this zero-point radiation is present
in the theory, it causes zero-point energy for dipole oscillators, and it can
be used to explain Casimir forces, van der Waals forces, oscillator specific
heats, diamagnetism, the blackbody radiation spectrum, and the absence of
\textquotedblleft atomic collapse.\textquotedblright\cite{any} \
\section{Contrasting Roles for Planck's Constant}
Planck's constant $h$\ sets the scale for atomic phenomena. \ However, the
constant plays contrasting roles in classical and quantum theories. \ Within
quantum theory, Planck's constant $h$ is embedded in the essential aspects of
the theory. \ If one sets Planck's constant to zero, then the quantum
character of the theory disappears; the non-commuting operators become simply
commuting c-numbers, and the zero-point energy of a harmonic oscillator drops
to zero.
On the other hand, within classical electrodynamics, Planck's constant does
not appear in Maxwell's fundamental differential equations. \ Rather, Planck's
constant appears only in the (homogeneous) source-free contribution to the
general solution of Maxwell's equations. \ Thus classical electrodynamics can
exist in two natural forms. \ In one form Planck's constant is taken as
non-zero, and one can explain a number of natural phenomena at the atomic
level. \ In the other form, Planck's constant is taken to vanish. \ It is only
this second form which appears in the textbooks of electromagnetism and modern
physics. \ However, even this form is quite sufficient to account for our
macroscopic electromagnetic technology.
\section{Comments on Planck's Constant in Classical Theory}
Planck was a \textquotedblleft reluctant revolutionary\textquotedblright\ who
tried to conserve as much of 19th century physics as possible. \ Late in life
Planck wrote\cite{SA} in his scientific autobiography, \textquotedblleft My
futile attempts to fit the elementary quantum of action somehow into the
classical theory continued for a number of years, and they cost me a great
deal of effort. \ Many of my colleagues saw in this something bordering on a
tragedy.\textquotedblright\ \ Within his lifetime, there seems to have been no
recognition of a natural place for Planck's constant $h$ within classical
electrodynamics. \ It was only in the 1960s, beginning with careful work by
Marshall,\cite{Marshall} that it became clear that Planck's constant $h$ could
be incorporated in a natural way within classical electrodynamics.
Despite the realization that Planck's constant is associated with atomic
phenomena and that at least some of atomic phenomena can be described within
either classical or quantum theory, there seems great reluctance on the part
of some physicists to acknowledge the possibility of Planck's constant
appearing within classical theory. \ One referee wrote the following
justification\cite{personal} in rejecting the idea of Planck's constant
appearing within classical electromagnetism: \textquotedblleft But as a
pedagogical matter, doesn't it muddy the distinction between classical and
quantum physics? \ The traditional dividing line may be in some aspects
arbitrary, but at least it is clear ($\hbar\implies$
quantum).\textquotedblright\ \ This referee clearly misunderstands the nature
of physical constants and seems willing to sacrifice scientific accuracy to
pedagogical simplicity.
I believe that many physicists would prefer truth to convenient pedagogy.
\ Certainly the introduction of Planck's constant as the scale factor for
source-free classical zero-point radiation expands the range of phenomena
described by classical electromagnetic theory.\cite{any} \ The recognition
that Planck's constant has a natural place within classical theory may provide
a broadening perspective beyond a confining quantum orthodoxy.\ \ In any case,
one suspects that Planck would have been pleased to find that there is indeed
a natural role for his constant $h$ within classical electromagnetic theory.
\section{Introduction}
\label{sec:intro}
The security of cyber-physical systems, modelled at the abstraction level of events~\cite{WMW10}, has attracted much research interest from the discrete-event system community, with most of the existing works devoted to attack detection and security verification \cite{CarvalhoEnablementAttacks}-\cite{WP}, synthesis of attackers \cite{Goes2017}-\cite{ZSL2021}, and synthesis of resilient supervisors \cite{Su2018}, \cite{Su20}-\cite{LS20BJ}.
The problem of covert sensor attacker synthesis has been studied extensively \cite{Goes2017}-\cite{Mohajerani20}, \cite{Su2018}, \cite{ZSLG2020}. In \cite{Su2018}, it is shown that, under a normality assumption on the sensor attackers, the supremal covert sensor attacker exists and can be effectively synthesized. In \cite{Goes2017}-\cite{Mohajerani20}, a game-theoretic approach is presented to synthesize covert sensor attackers, without imposing the normality assumption.
The problem of covert (sensor-)actuator attacker synthesis has been addressed in \cite{Lin2018}-\cite{Kh19}, by employing a reduction to the (partial-observation) supervisor synthesis problem.
However, in all the previous works, it can be restrictive to assume that the model of the supervisor is known to the adversary, which is unlikely unless the adversary is an insider.
Recently, we have considered a more practical setup where the model of the supervisor is not available to the adversary \cite{LTZS20}.
To compensate for the lack of knowledge of the model of the supervisor, it is assumed in \cite{LTZS20} that the adversary has recorded a (prefix-closed)
finite set of observations of the runs of the closed-loop system. In this more challenging setup, a covert attacker needs to be synthesized based solely on the model of the plant and the given set of observations. From the adversary's point of view, any supervisor that is consistent with the given set of observations may have been deployed. And the synthesized attacker needs to ensure the damage-reachability and the covertness against all the supervisors that are consistent with the given set of observations. The difficulty of this synthesis problem lies in the fact that there can be in general an infinite number of supervisors which are consistent with the observations, rendering the synthesis approaches developed in the existing works ineffective. In \cite{LTZS20}, we have proposed a technique to compute covert damage-reachable attackers by formulating it as an instance of the supervisor synthesis problem on certain surrogate plant model, which is constructed without using the model of the supervisor. Due to the over-approximation in the surrogate plant model, the synthesized attacker in \cite{LTZS20} cannot ensure the supremality in general. It is worth noting that there is a gap between the de facto supremality, assuming the model of the supervisor is known, and the supremality (from the adversary's point of view) that could be attained with a limited knowledge of the model of the supervisor. It is the supremality from the adversary's point of view that is of interest in this work.
In this paper, we also assume the model of the supervisor is not available to the adversary and the adversary can use the observations to assist the synthesis of covert attackers.
The main contributions of this work are listed as follows.
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item We provide a sound and complete procedure for the synthesis of covert damage-reachable attackers, given the model of the plant and the finite set of observations. The solution methodology is to reduce it to the problem of partial-observation supervisor synthesis for certain transformed plant, which shows the decidability of the observation-assisted covert (damage-reachable) attacker synthesis problem. We allow sensor replacement/deletion attacks\footnote{It is also possible to deal with sensor insertion attacks by using our approach, which requires some modifications in our constructions, and the detailed model is given in \cite{TLZS21}. The reason why we do not consider insertion attack is: In the standard DES under supervisory control, which is assumed to run strings of the form $(\Gamma(\Sigma - \Sigma_{o})^{*}\Sigma_{o})^{*}$, following Ramadge-Wonham framework, we know that any string that is not of this format immediately reveals the existence of an attacker and leads to the system halting, making the insertion attacks not useful for the standard DES. Naturally, sensor insertion attacks shall be allowed in practice, but the Ramadge-Wonham framework is an idealization of networked control systems (with zero delays) and makes the interpretation of sensor insertions attacks not that natural.} and actuator enablement/disablement attacks. In comparison, there are two limitations regarding the approach proposed in \cite{LTZS20}: 1) it only provides a sound, but generally incomplete, heuristic algorithm for the synthesis of covert damage-reachable attackers due to the use of over-approximation in the surrogate plant, and 2) it cannot deal with actuator enablement attacks.
\item The approach proposed in this work can synthesize the supremal covert damage-reachable attacker, among those attackers which can ensure the damage-reachability and the covertness against all the supervisors consistent with the set of observations, under the assumption $\Sigma_{c} \subseteq \Sigma_{o}$. We provide a formal proof of the supremality and the correctness of the synthesized attackers, by reasoning on the model of the attacked closed-loop system, adapted from \cite{LS20}, \cite{LS20J}, \cite{LS20BJ}. In comparison, supremality is not guaranteed in \cite{LTZS20}, due to the use of over-approximation in the surrogate plant.
\end{itemize}
In practice, one may observe the closed-loop system for a sufficiently long time, i.e., obtain a sufficient number of observations of the runs of the closed-loop system, and hope to learn an exact observable model of the closed-loop system, that is, the natural projection of the closed-loop system, also known as the monitor~\cite{LS20J}. However, this approach has two problems. First of all, it is not efficient, indeed infeasible, to learn the observable model of the closed-loop system, as in theory an infinite number of runs needs to be observed. We can never guarantee the correctness of the learnt model for any finite set of observations, without an oracle for confirming the correctness of the learnt model. Secondly, even if we obtain an exact observable model of the closed-loop system, the model in general has insufficient information for us to extract a model of the supervisor and use, for example, the technique developed in \cite{LZS19,LS20} for synthesizing covert attackers. A much more viable and efficient approach is to observe the closed-loop system for just long enough, by observing as few runs of the closed-loop system as possible, to extract just enough information to carry out the synthesis of a non-empty covert attacker. If a given set of observations is verified to be
sufficient for us to synthesize a non-empty covert attacker, then we know that more observations will only allow a more permissive covert attacker to be synthesized.
The solution proposed in this work can determine if any given set of observations contains enough information for the synthesis of a non-empty covert attacker and can directly synthesize a covert attacker from the set of observations whenever it is possible.
This paper is organized as follows. In Section \ref{sec:Preliminaries}, we recall the preliminaries which are needed for understanding this paper. In Section \ref{sec:Component models under sensor-actuator attack}, we then introduce the system setup and present the model constructions. The proposed synthesis solution as well as the correctness proof are presented in Section \ref{sec:Synthesis of Maximally Permissive Covert Attackers Against Unknown Supervisors}. Finally, in Section \ref{sec:Conclusions}, the conclusions are drawn. A running example is given throughout the paper.
\section{Preliminaries}
\label{sec:Preliminaries}
In this section, we introduce some basic notations and terminologies that will be used in this work, mostly following~\cite{WMW10, CL99, HU79}.
Given a finite alphabet $\Sigma$, let $\Sigma^{*}$ be the free monoid over $\Sigma$ with the empty string $\varepsilon$ being the unit element.
A language $L \subseteq \Sigma^{*}$ is a set of strings.
The event set $\Sigma$ is partitioned into $\Sigma = \Sigma_{c} \dot{\cup} \Sigma_{uc} = \Sigma_{o} \dot{\cup} \Sigma_{uo}$, where $\Sigma_{c}$ (respectively, $\Sigma_{o}$) and $\Sigma_{uc}$ (respectively, $\Sigma_{uo}$) are defined as the sets of controllable (respectively, observable) and uncontrollable (respectively, unobservable) events, respectively. As usual, $P_{o}: \Sigma^{*} \rightarrow \Sigma_{o}^{*}$ is the natural projection defined as: 1) $P_{o}(\varepsilon) = \varepsilon$, 2) $(\forall \sigma \in \Sigma) \, P_{o}(\sigma) = \sigma$ if $\sigma \in \Sigma_{o}$, otherwise, $P_{o}(\sigma) = \varepsilon$, 3) $(\forall s\sigma \in \Sigma^{*}) \, P_{o}(s\sigma) = P_{o}(s)P_{o}(\sigma)$.
We sometimes also write $P_o$ as $P_{\Sigma_o}$, to explicitly illustrate the co-domain $\Sigma_o^*$.
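As a small illustration of the natural projection (with an assumed $\Sigma_{o}=\{a,b\}$ and strings encoded as Python lists of event labels), consider the following sketch.
\begin{verbatim}
# Natural projection P_o: erase the events outside Sigma_o (illustration only).
SIGMA_O = {"a", "b"}                       # assumed observable events

def P_o(s, sigma_o=SIGMA_O):
    return [e for e in s if e in sigma_o]

print(P_o(["a", "u", "b", "u", "a"]))      # ['a', 'b', 'a']: 'u' is erased
\end{verbatim}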
A finite state automaton $G$ over $\Sigma$ is given by a 5-tuple $(Q, \Sigma, \xi, q_{0}, Q_{m})$, where $Q$ is the state set, $\xi: Q \times \Sigma \rightarrow Q$ is the (partial) transition function, $q_{0} \in Q$ is the initial state, and $Q_{m}$ is the set of marker states.
We write $\xi(q, \sigma)!$ to mean that $\xi(q, \sigma)$ is defined. We define $En_{G}(q) = \{\sigma \in \Sigma|\xi(q, \sigma)!\}$.
$\xi$ is also extended to the (partial) transition function $\xi: Q \times \Sigma^{*} \rightarrow Q$ and the transition function $\xi: 2^{Q} \times \Sigma \rightarrow 2^{Q}$ \cite{WMW10}, where the latter is defined as follows: for any $Q' \subseteq Q$ and any $\sigma \in \Sigma$, $\xi(Q', \sigma) = \{q' \in Q|(\exists q \in Q')q' = \xi(q, \sigma)\}$.
Let $L(G)$ and $L_{m}(G)$ denote the closed-behavior and the marked behavior, respectively. $G$ is said to be marker-reachable if some marker state of $G$ is reachable~\cite{WMW10}. $G$ is marker-reachable iff $L_m(G) \neq \emptyset$. When $Q_{m} = Q$, we shall also write $G = (Q, \Sigma, \xi, q_{0})$ for simplicity.
The ``unobservable reach'' of the state $q \in Q$ under the subset of events $\Sigma' \subseteq \Sigma$ is given by $UR_{G, \Sigma - \Sigma'}(q) := \{q' \in Q|[\exists s \in (\Sigma - \Sigma')^{*}] \, q' = \xi(q,s)\}$.
We shall abuse the notation and define $P_{\Sigma'}(G)$ to be the finite state automaton $(2^{Q} - \{\emptyset\}, \Sigma, \delta, UR_{G, \Sigma - \Sigma'}(q_{0}))$ over $\Sigma$, where $UR_{G, \Sigma - \Sigma'}(q_{0}) \in 2^Q-\{\emptyset\}$ is the initial state, and the (partial) transition function $\delta: (2^{Q} - \{\emptyset\}) \times \Sigma \rightarrow (2^{Q} - \{\emptyset\})$ is defined as follows:
\begin{enumerate}[(1)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item For any $\emptyset \neq Q' \subseteq Q$ and any $\sigma \in \Sigma'$, if $\xi(Q', \sigma) \neq \emptyset$, then $\delta(Q', \sigma) = UR_{G, \Sigma - \Sigma'}(\xi(Q', \sigma))$, where $UR_{G, \Sigma - \Sigma'}(Q'') = \bigcup\limits_{q \in Q''}UR_{G, \Sigma - \Sigma'}(q)$
for any $\emptyset \neq Q'' \subseteq Q$
\item For any $\emptyset \neq Q' \subseteq Q$ and any $\sigma \in \Sigma - \Sigma'$, $\delta(Q', \sigma) = Q'$.
\end{enumerate}
We note that the construction of $P_{\Sigma'}(G)$ is equivalent to carrying out a chaining of natural projection, determinization, and self-loop addition on $G$.
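The following Python sketch illustrates the unobservable reach and the construction of $P_{\Sigma'}(G)$ on a toy automaton; the dictionary-based automaton encoding is our own choice, used only for illustration.
\begin{verbatim}
# Unobservable reach and the automaton P_{Sigma'}(G) (illustration only).
def UR(delta, states, hidden):
    """Unobservable reach of a set of states under the events Sigma - Sigma'."""
    reach, stack = set(states), list(states)
    while stack:
        q = stack.pop()
        for e in hidden:
            q2 = delta.get((q, e))
            if q2 is not None and q2 not in reach:
                reach.add(q2)
                stack.append(q2)
    return frozenset(reach)

def observer(delta, q0, sigma, sigma_obs):
    """Build P_{Sigma'}(G): subset construction with self-loops on Sigma - Sigma'."""
    hidden = sigma - sigma_obs
    init = UR(delta, {q0}, hidden)
    trans, todo, seen = {}, [init], {init}
    while todo:
        S = todo.pop()
        for e in sigma:
            if e in sigma_obs:
                nxt = {delta[(q, e)] for q in S if (q, e) in delta}
                if not nxt:
                    continue
                T = UR(delta, nxt, hidden)
            else:
                T = S                      # Case (2): self-loop on unobserved events
            trans[(S, e)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    return init, trans

# Toy example: Q = {0,1,2}, Sigma = {a,u}, Sigma' = {a}, u unobservable.
delta = {(0, "u"): 1, (1, "a"): 2}
init, trans = observer(delta, 0, {"a", "u"}, {"a"})
print(sorted(init))                        # [0, 1]: unobservable reach of state 0
print(sorted(trans[(init, "a")]))          # [2]
\end{verbatim}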
As usual, for any two finite state automata $G_{1} = (Q_{1}, \Sigma_{1}, \xi_{1}, q_{1,0}, Q_{1,m})$ and $G_{2} = (Q_{2}, \Sigma_{2}, \xi_{2}, q_{2,0}, Q_{2,m})$, where $En_{G_{1}}(q) = \{\sigma \in \Sigma_1|\xi_{1}(q, \sigma)!\}$ and $En_{G_{2}}(q) = \{\sigma \in \Sigma_2|\xi_{2}(q, \sigma)!\}$, their synchronous product \cite{CL99} is denoted as $G_{1}||G_{2} := (Q_{1} \times Q_{2}, \Sigma_{1} \cup \Sigma_{2}, \zeta, (q_{1,0}, q_{2,0}), Q_{1,m} \times Q_{2,m})$, where the (partial) transition function $\zeta$ is defined as follows, for any $(q_{1}, q_{2}) \in Q_{1} \times Q_{2}$ and $\sigma \in \Sigma = \Sigma_1 \cup \Sigma_2$:
\[
\begin{aligned}
& \zeta((q_{1}, q_{2}), \sigma) := \\ & \left\{
\begin{array}{lcl}
(\xi_{1}(q_{1}, \sigma), \xi_{2}(q_{2}, \sigma)) & & {\rm if} \, {\sigma \in En_{G_{1}}(q_{1}) \cap En_{G_{2}}(q_{2}),}\\
(\xi_{1}(q_{1}, \sigma), q_{2}) & & {\rm if} \, {\sigma \in En_{G_{1}}(q_{1}) \backslash \Sigma_{2},}\\
(q_{1}, \xi_{2}(q_{2}, \sigma)) & & {\rm if} \, {\sigma \in En_{G_{2}}(q_{2}) \backslash \Sigma_{1},}\\
{\rm not \, defined} & & {\rm otherwise.}
\end{array} \right.
\end{aligned}
\]
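The synchronous product can be computed by a straightforward reachability procedure; the following Python sketch (reusing the same illustrative dictionary encoding as above) shows one such computation on a toy example.
\begin{verbatim}
# Synchronous product G1 || G2 on reachable states (illustration only).
def sync(delta1, q1_0, sigma1, delta2, q2_0, sigma2):
    sigma = sigma1 | sigma2
    init = (q1_0, q2_0)
    trans, todo, seen = {}, [init], {init}
    while todo:
        q1, q2 = todo.pop()
        for e in sigma:
            d1, d2 = (q1, e) in delta1, (q2, e) in delta2
            if e in sigma1 and e in sigma2:
                if not (d1 and d2):
                    continue                       # shared event: both must move
                nxt = (delta1[(q1, e)], delta2[(q2, e)])
            elif e in sigma1:
                if not d1:
                    continue
                nxt = (delta1[(q1, e)], q2)        # event private to G1
            else:
                if not d2:
                    continue
                nxt = (q1, delta2[(q2, e)])        # event private to G2
            trans[((q1, q2), e)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return init, trans

delta1 = {(0, "a"): 1, (1, "b"): 0}
delta2 = {(0, "a"): 0, (0, "c"): 1}
init, trans = sync(delta1, 0, {"a", "b"}, delta2, 0, {"a", "c"})
print(trans[((0, 0), "a")])    # (1, 0): the shared event 'a' is synchronised
print(trans[((0, 0), "c")])    # (0, 1): 'c' is private to G2
\end{verbatim}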
For convenience, for any two finite state automata $G_{1}$ and $G_{2}$, we write $G_1=G_2$ iff $L(G_{1}) = L(G_{2})$ and $L_{m}(G_{1}) = L_{m}(G_{2})$. We also write $G_1 \sqsubseteq G_2$ iff $L(G_{1}) \subseteq L(G_{2})$ and $L_{m}(G_{1}) \subseteq L_{m}(G_{2})$. It then follows that $G_1=G_2$ iff $G_1 \sqsubseteq G_2$ and $G_2 \sqsubseteq G_1$.
\textbf{Notation.} Let $\Gamma = \{\gamma \subseteq \Sigma|\Sigma_{uc} \subseteq \gamma\}$ denote the set of all the possible control commands. In this work, it is assumed that when no control command is received by plant $G$, then only uncontrollable events could be executed.
For a set $\Sigma$, we use $\Sigma^{\#}$ to denote a copy of $\Sigma$ with superscript ``$\#$'' attached to each element in $\Sigma$. Intuitively speaking, ``$\#$'' denotes the message tampering due to the sensor attacks; the specific meanings of the relabelled events will be introduced later in Section \ref{sec:Component models under sensor-actuator attack}. Table \ref{tab:notations} summarizes the notations of main components and symbols that would be adopted in this work.
\begin{table}[htbp]
\centering
\caption{NOTATIONS}
\label{tab:notations}
\begin{tabular}{ll}
\hline
\hline\\ [-0.34cm]
Notation & Meaning\\
\hline
$AC$ & Sensor attack constraints\\
\hline
$\mathcal{A}$ & Sensor-actuator attacker\\
\hline
$G$ & Plant\\
\hline
$CE^{A}$ & Command execution under actuator attack\\
\hline
$S$ & Supervisor\\
\hline
$BT(S)^{A}$ & Bipartite supervisor under attack\\
\hline
$NS$ & \tabincell{l}{Supremal safe command non-deterministic \\ supervisor} \\
\hline
$OCNS^{A}$ & \tabincell{l}{Supremal safe and observation-consistent \\command non-deterministic supervisor under \\ attack} \\
\hline
$S^{\downarrow}$ & \tabincell{l}{The least permissive supervisor consistent \\ with observations} \\
\hline
$\overline{S^{\downarrow,A}}$ & \tabincell{l}{The least permissive supervisor consistent \\ with observations under attack \\ (a complete automaton)} \\
\hline
$\Sigma_{s,a}$ & \tabincell{l}{the set of compromised observable events for \\the attacker}\\
\hline
$\Sigma_{c,a}$ & \tabincell{l}{the set of actuator attackable events for the \\attacker}\\
\hline
$\Sigma_{s,a}^{\#}$ & \tabincell{l}{the set of events of sending compromised \\ events to the supervisor by the attacker}\\
\hline
\hline
\end{tabular}
\end{table}
\section{Component models under sensor-actuator attack}
\label{sec:Component models under sensor-actuator attack}
In this section, we shall introduce the system architecture under sensor-actuator attack~\cite{LS20J} and the model of each component. The system architecture is shown in Fig. \ref{fig:System architecture under attack}, which consists of the following components:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Plant $G$.
\item Command execution $CE^{A}$ under actuator attack.
\item Sensor attack subject to sensor attack constraints $AC$.
\item Unknown supervisor $BT(S)^{A}$ under attack (with an explicit control command sending phase).
\end{itemize}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.6cm]{System_Architecture.pdf}
\caption{System architecture under sensor-actuator attack}
\label{fig:System architecture under attack}
\end{center}
\end{figure}
In this work, we shall assume that $\Sigma_{c} \subseteq \Sigma_{o}$, which can be easily satisfied in practice. The reason for adopting this assumption is that it guarantees the existence of the least permissive supervisor consistent with the collected observations, which is a critical point in proving the decidability of the problem studied in this work, as analyzed later in Theorems IV.3-IV.8.
Even if this assumption is relaxed, the proposed synthesis algorithm is still guaranteed to be sound and it in general generates more permissive solutions than the heuristic algorithm proposed in~\cite{LTZS20}, but it is then generally incomplete as well. For more details, the reader is referred to Remark IV.1 in Section IV. In the following, we explain how the models shown in the system architecture of Fig. 1 can be constructed.
\subsection{Sensor attack constraints $AC$}
\label{subsec:sensor attack constraints}
In this work, the basic assumptions on the sensor attacker\footnote{We simply refer to the sensor attack decision making part of the sensor-actuator attacker as the sensor attacker.} are given as follows: 1) The sensor attacker can only observe the events in $\Sigma_{o}$, which is the set of observable events of the plant; the set of compromised observable events for the sensor attacker is denoted as $\Sigma_{s,a} \subseteq \Sigma_{o}$. 2) The sensor attacker can implement deletion or replacement attacks w.r.t. the events in $\Sigma_{s,a}$. 3) The sensor attack action (deletion or replacement) is instantaneous. When an attack is initiated for a specific observation, it will be completed before the next event can be executed by the plant $G$.
Briefly speaking, to encode the tampering effects of sensor attack on $\Sigma_{s,a}$, 1) We make a (relabelled) copy $\Sigma_{s,a}^{\#} = \{\sigma^{\#}|\sigma \in \Sigma_{s,a}\}$ of $\Sigma_{s,a} \subseteq \Sigma_{o}$ such that events in $\Sigma_{s,a}$ are executed by the plant, while events in $\Sigma_{s,a}^{\#}$ are those attacked copies sent by the sensor attacker and received by the supervisor. 2) Each transition labelled by $\sigma \in \Sigma_{s,a}$ in the bipartite supervisor is relabelled to $\sigma^{\#}$, in order to reflect the receiving of the attacked copy $\sigma^{\#}$ instead of $\sigma$. The above two techniques allow us to capture the effects of sensor attack.
Next, we shall introduce the model of sensor attack constraints, which serves as a ``template'' to describe the capabilities of the sensor attack. The sensor attack constraints are modelled as a finite state automaton $AC$, shown in Fig. \ref{fig:Sensor attack constraints}\footnote{$\Gamma$ can be viewed as a set of events, where each $\gamma \in \Gamma$ denotes the event of sending (and receiving) the control command $\gamma$ itself.}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.2cm]{Sensor_Attack_Constraints.pdf}
\caption{The (schematic) model for sensor attack constraints $AC$}
\label{fig:Sensor attack constraints}
\end{center}
\end{figure}
\[
AC = (Q_{ac}, \Sigma_{ac}, \xi_{ac}, q_{ac}^{init})
\]
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{ac} = \{q_{ac}^{init}, q_{0}, q_{1}\}$
\item $\Sigma_{ac} = \Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$
\item $\xi_{ac}: Q_{ac} \times \Sigma_{ac} \rightarrow Q_{ac}$
\end{itemize}
The (partial) transition function $\xi_{ac}$ is defined as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item For any $\sigma \in \Sigma_{uo} \cup \Gamma$, $\xi_{ac}(q_{ac}^{init}, \sigma) = q_{ac}^{init}$. (occurrence of an unobservable event)
\item For any $\sigma \in \Sigma_{s,a}$, $\xi_{ac}(q_{ac}^{init}, \sigma) = q_{0}$. (observation of a compromised event)
\item For any $\sigma \in \Sigma_{o} - \Sigma_{s,a}$, $\xi_{ac}(q_{ac}^{init}, \sigma) = q_{1}$. (observation of an observable and non-compromised event)
\item For any $\sigma \in \Sigma_{s,a}$, $\xi_{ac}(q_{0}, \sigma^{\#}) = q_{1}$. (sensor replacement)
\item For any $n \in \{0,1\}$, $\xi_{ac}(q_{n}, stop) = q_{ac}^{init}$. (end of attack)
\end{enumerate}
We shall briefly explain the model $AC$. For the state set, the initial state $q_{ac}^{init}$ denotes that the sensor attacker has not observed any event in $\Sigma_{o}$ since the system initiation or the last attack operation.
$q_{0}$ ($q_{1}$, respectively) is a state denoting that the sensor attacker has observed some event in $\Sigma_{s,a}$ ($\Sigma_{o} - \Sigma_{s,a}$, respectively).
For the event set, any event $\sigma^{\#}$ in $\Sigma_{s,a}^{\#}$ denotes an event of sending a compromised observable event $\sigma$ to the supervisor by the sensor attacker. Thus, due to the existence of sensor attack, the supervisor can only observe the relabelled copy $\Sigma_{s,a}^{\#}$ instead of $\Sigma_{s,a}$. Any event $\gamma \in \Gamma$ denotes an event of sending a control command $\gamma$ by the supervisor, which will be introduced later in Section \ref{subsec:unknown supervisor}. The event $stop$ denotes the end of the current round of sensor attack operation. In this work, we shall treat any event in $\Sigma_{o} \cup \Sigma_{s,a}^{\#} \cup \{stop\}$ as being observable to the sensor attacker.
For the (partial) transition function $\xi_{ac}$,
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Case 1 says that the occurrence of any event in $\Sigma_{uo} \cup \Gamma$, which is unobservable to the sensor attacker and cannot be attacked, would only lead to a self-loop at the state $q_{ac}^{init}$. The purpose of adding Case 1 is to ensure 1) the alphabet of $AC$ is $\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$, and 2) any event $\sigma \in \Sigma_{uo} \cup \Gamma$ is not defined at non-$q_{ac}^{init}$ states and thus any event in $\Sigma_{o}$ is immediately followed by an event in $\Sigma_{s,a}^{\#} \cup \{stop\}$ to simulate the immediate attack operation or the end of the attack operation following the observation of an event in $\Sigma_{o}$.
\item Case 2 and Case 3 say that the observation of any event in $\Sigma_{s,a}$ ($\Sigma_{o} - \Sigma_{s,a}$, respectively) would lead to a transition to the state $q_{0}$ ($q_{1}$, respectively), where the sensor attacker may perform some attack operations (cannot perform attack operations, respectively).
\item Case 4 says that at the state $q_{0}$, i.e., the sensor attacker has just observed some compromised observable event in $\Sigma_{s,a}$, it can implement sensor replacement attacks by replacing what it observes with any compromised observable event in $\Sigma_{s,a}$.
\item Case 5 says that at the state $q_{0}$ or $q_{1}$, the sensor attacker can end the current round of sensor attack operation.
Since the supervisor could only observe its relabelled copy in $\Sigma_{s,a}^{\#}$ for any compromised event in $\Sigma_{s,a}$, after observing an event in $\Sigma_{s,a}$, if the sensor attacker decides to end the current round of operation at state $q_{0}$, denoted by $stop$, then it indeed implements the deletion attack.
\end{itemize}
Based on the model of $AC$, we know that $|Q_{ac}| = 3$.
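The construction of $AC$ from Cases 1--5 is mechanical; the following Python sketch builds its transition function on small, assumed alphabets, purely for illustration (the dictionary encoding and the concrete alphabets are not part of the formal development).
\begin{verbatim}
# Building the sensor attack constraints AC from Cases 1-5 (illustration only).
SIGMA_O  = {"a", "b"}                                 # observable events (assumed)
SIGMA_UO = {"u"}                                      # unobservable events (assumed)
SIGMA_SA = {"a"}                                      # compromised observable events (assumed)
GAMMA    = {frozenset({"u"}), frozenset({"a", "u"})}  # control commands (assumed)

def build_AC():
    delta = {}
    q_init, q0, q1 = "q_init", "q0", "q1"
    for e in SIGMA_UO | GAMMA:                        # Case 1: self-loops at q_init
        delta[(q_init, e)] = q_init
    for e in SIGMA_SA:                                # Case 2: compromised observation
        delta[(q_init, e)] = q0
    for e in SIGMA_O - SIGMA_SA:                      # Case 3: non-compromised observation
        delta[(q_init, e)] = q1
    for e in SIGMA_SA:                                # Case 4: sensor replacement
        delta[(q0, e + "#")] = q1
    for q in (q0, q1):                                # Case 5: end of the attack operation
        delta[(q, "stop")] = q_init
    return delta, q_init

delta, q_init = build_AC()
print(delta[("q_init", "a")])        # q0: a compromised event was observed
print(delta[("q0", "a#")])           # q1: it is replaced by the attacked copy a#
print(delta[("q0", "stop")])         # q_init: stopping at q0 realises a deletion
\end{verbatim}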
\subsection{Plant $G$}
\label{subsec:Plant}
Plant is modelled by a finite state automaton $G = (Q, \Sigma, \xi, q^{init})$. We use $Q_{d} \subseteq Q$ to denote the set of bad (unsafe) states in $G$, which is the goal state set for the sensor-actuator attacker. We shall assume each state in $Q_{d}$ is deadlocked, since damage cannot be undone\footnote{Since each state in $Q_d$ is deadlocked, we can also merge these equivalent states into one deadlocked state.}.
\subsection{Command execution $CE^{A}$ under actuator attack}
\label{subsec:Command execution}
In supervisory control, the input to the plant is the control commands in $\Gamma$, while the output of the plant is the events in $\Sigma$. There is thus a ``transduction'' from the input $\gamma \in \Gamma$ of $G$ to the output $\sigma \in \Sigma$ of $G$, which requires an automaton model over $\Sigma \cup \Gamma$ that describes the phase from using a control command to executing an event at the plant. This automaton model is referred to as the command execution automaton $CE$~\cite{LZS19},~\cite{LS20J},~\cite{zhu2019}, which is given as follows:
\[
CE = (Q_{ce}, \Sigma_{ce}, \xi_{ce}, q_{ce}^{init})
\]
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{ce} = \{q^{\gamma}|\gamma \in \Gamma\} \cup \{q_{ce}^{init}\}$
\item $\Sigma_{ce} = \Gamma \cup \Sigma$
\item $\xi_{ce}: Q_{ce} \times \Sigma_{ce} \rightarrow Q_{ce}$
\end{itemize}
The (partial) transition function $\xi_{ce}$ is defined as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item For any $\gamma \in \Gamma$, $\xi_{ce}(q_{ce}^{init}, \gamma) = q^{\gamma}$. (command reception)
\item For any $\sigma \in \gamma \cap \Sigma_{uo}$, $\xi_{ce}(q^{\gamma}, \sigma) = q^{\gamma}$. (unobservable event execution)
\item For any $\sigma \in \gamma \cap \Sigma_{o}$, $\xi_{ce}(q^{\gamma}, \sigma) = q_{ce}^{init}$. (observable event execution)
\end{enumerate}
We shall briefly explain the model $CE$. For the state set, 1) $q_{ce}^{init}$ is the initial state, denoting that $CE$ is not using any control command; 2) $q^{\gamma}$ is a state denoting that $CE$ is using the control command $\gamma$.
For the (partial) transition function $\xi_{ce}$, Case 1 says that once $CE$ starts to use $\gamma$, it will transit to the state $q^{\gamma}$. Cases 2 and 3 say that at the state $q^{\gamma}$, the execution of any event in $\gamma \cap \Sigma_{uo}$ will lead to a self-loop, that is, $\gamma$ will be reused, and the execution of any event in $\gamma \cap \Sigma_{o}$ will lead to the transition to the initial state, that is, $CE$ will wait for the next control command to be issued from the supervisor. We note that only events of $G$ can happen from $q^{\gamma}$ in $CE$.
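Analogously, Cases 1--3 fully determine $CE$; the following Python sketch builds its transition function on the same style of assumed alphabets, again for illustration only.
\begin{verbatim}
# Building the command execution automaton CE from Cases 1-3 (illustration only).
SIGMA_O  = {"a", "b"}                                 # observable events (assumed)
SIGMA_UO = {"u"}                                      # unobservable events (assumed)
GAMMA    = [frozenset({"u"}), frozenset({"a", "u"})]  # control commands (assumed)

def build_CE():
    delta = {}
    q_init = "q_init"
    for gamma in GAMMA:
        q_gamma = ("q", gamma)
        delta[(q_init, gamma)] = q_gamma              # Case 1: command reception
        for e in gamma & SIGMA_UO:
            delta[(q_gamma, e)] = q_gamma             # Case 2: unobservable event execution
        for e in gamma & SIGMA_O:
            delta[(q_gamma, e)] = q_init              # Case 3: observable event execution
    return delta, q_init

delta, q_init = build_CE()
g = frozenset({"a", "u"})
print(delta[(q_init, g)])          # CE moves to the state q^gamma for this command
print(delta[(("q", g), "u")])      # self-loop: gamma is reused after an unobservable event
print(delta[(("q", g), "a")])      # back to q_init after an observable event
\end{verbatim}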
Next, we shall construct the command execution automaton under actuator attack, denoted by $CE^{A}$, where the superscript ``$A$'' indicates that this component is the version of the command execution automaton that takes the effects of the attack into account. The same naming convention for the superscript will be adopted in the remainder of the text. In this work, we consider a class of actuator attackers that can implement both enablement and disablement attacks, that is, the actuator attacker is capable of modifying the control command $\gamma$ (issued by the supervisor) by enabling or disabling some events in a specified attackable subset $\Sigma_{c,a} \subseteq \Sigma_{c}$, where $\Sigma_{c}$ is the set of controllable events~\cite{LZS19}. Then, based on $CE$, we shall encode the impact of the actuator attack on the event execution phase and generate the command execution automaton $CE^{A}$ under actuator attack~\cite{LS20},~\cite{LS20J}, which is shown in Fig. \ref{fig:Command execution automaton}. Compared with $CE$, the changes are marked in blue.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.2cm]{Command_Execution.pdf}
\caption{The (schematic) model for command execution automaton $CE^A$ under actuator attack}
\label{fig:Command execution automaton}
\end{center}
\end{figure}
\[
CE^{A} = (Q_{ce,a}, \Sigma_{ce,a}, \xi_{ce,a}, q_{ce,a}^{init})
\]
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{ce,a} = Q_{ce}$
\item $\Sigma_{ce,a} = \Sigma_{ce} = \Gamma \cup \Sigma$
\item $\xi_{ce,a}: Q_{ce,a} \times \Sigma_{ce,a} \rightarrow Q_{ce,a}$
\item $q_{ce,a}^{init} = q_{ce}^{init}$
\end{itemize}
The (partial) transition function $\xi_{ce,a}$ is defined as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item For any $q, q' \in Q_{ce,a}$ and any $\sigma \in \Sigma_{ce,a}$, $\xi_{ce}(q, \sigma) = q' \Rightarrow \xi_{ce,a}(q, \sigma) = q'$. (transitions retaining)
\item For any $q \in \{q^{\gamma}|\gamma \in \Gamma\}$ and any $\sigma \in \Sigma_{c,a} \cap \Sigma_{uo}$, $\xi_{ce,a}(q, \sigma) = q$. (attackable and unobservable event enablement)
\item For any $q \in \{q^{\gamma}|\gamma \in \Gamma\}$ and any $\sigma \in \Sigma_{c,a} \cap \Sigma_{o}$, $\xi_{ce,a}(q, \sigma) = q_{ce,a}^{init}$. (attackable and observable event enablement)
\item For any $\sigma \in \Sigma_{uc}$, $\xi_{ce,a}(q_{ce,a}^{init}, \sigma) = q_{ce,a}^{init}$. (uncontrollable event execution)
\end{enumerate}
In the above definition of $\xi_{ce,a}$,
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Case 1 retains all the transitions defined in $CE$.
\item Due to the existence of actuator attack, which can enable the events in $\Sigma_{c,a}$, in Case 2 and Case 3 we need to model the occurrences of any attackable event in $\Sigma_{c,a}$, where the execution of an unobservable event in $\Sigma_{c,a} \cap \Sigma_{uo}$ will lead to a self-loop and the execution of an observable event in $\Sigma_{c,a} \cap \Sigma_{o}$ will lead to the transition back to the initial state $q_{ce,a}^{init}$.
\item In Case 4, we need to add the transitions labelled by the uncontrollable events at the initial state $q_{ce,a}^{init}$ because the sensor attacker considered in this work can carry out a sensor deletion attack on some compromised observable event in $\Sigma_{s,a}$, so that the occurrence of this event cannot be observed by the supervisor and no control command is issued in response; in this case, although the command execution automaton receives no control command from the supervisor, it can still execute uncontrollable events, if they are defined at the current state of the plant $G$, since uncontrollable events are always allowed to fire\footnote{For the model of $CE$, we do not need to add the self-loops labelled by the uncontrollable events at the initial state, since $CE$ describes the execution model in the absence of attack. That is, once the plant fires an observable event, the supervisor will definitely observe the event and immediately issue a control command containing all the uncontrollable events.}.
\end{itemize}
Here we remark that the actuator disablement would be automatically taken care of by the synthesis procedure in Section \ref{sec:Synthesis of Maximally Permissive Covert Attackers Against Unknown Supervisors} as the attackable event set $\Sigma_{c,a}$ is controllable by the actuator attacker, i.e., the actuator attack could always disable these events.
Based on the model of $CE^A$, we know that $|Q_{ce,a}| = |\Gamma| + 1$.
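Continuing the dictionary-based sketch of $CE$ above, Cases 1--4 of $\xi_{ce,a}$ amount to a simple transformation of $\xi_{ce}$; in the sketch below, the two extra arguments stand for $\Sigma_{c,a}$ and $\Sigma_{uc}$ and are, again, only illustrative.
\begin{verbatim}
# A sketch of deriving CE^A from CE (Cases 1-4 of
# xi_{ce,a}); xi_ce is the dict built in the previous
# sketch, Sigma_ca and Sigma_uc are the attackable and
# uncontrollable event sets.
def build_CE_A(xi_ce, Gamma, Sigma_o, Sigma_uo,
               Sigma_ca, Sigma_uc, init="q_init"):
    xi = dict(xi_ce)                   # Case 1: keep CE
    for g in Gamma:
        q_g = ("q", g)
        for s in Sigma_ca & Sigma_uo:
            xi[(q_g, s)] = q_g         # Case 2
        for s in Sigma_ca & Sigma_o:
            xi[(q_g, s)] = init        # Case 3
    for s in Sigma_uc:
        xi[(init, s)] = init           # Case 4
    return xi
\end{verbatim}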
\vspace{0.1cm}
\textbf{Example III.1} We adapt the water tank example from \cite{Su2018} as a running example, whose schematic diagram is shown in Fig. \ref{fig:schematic diagram of the water tank}. The system consists of a water supply with a constant supply rate, a water tank, and a control valve at the bottom of the tank that regulates the outgoing flow rate. We assume the valve can only be fully open or fully closed, resulting in the two events $open$ and $close$. The water level can be measured, and its value triggers predefined events denoting the water level: low ($L$), high ($H$), extremely low ($EL$) and extremely high ($EH$). Our control goal is to adjust the valve operation such that the water level never becomes extremely low or extremely high. We assume all the events are observable, i.e., $\Sigma_{o} = \Sigma = \{L, H, EL, EH, close, open\}$. $\Sigma_{c,a} = \Sigma_{c} = \{close, open\}$. $\Sigma_{s,a} = \{L, H, EL, EH\}$. $\Gamma = \{v_{1}, v_{2}, v_{3}, v_{4}\}$. $v_{1} = \{L, H, EL, EH\}$. $v_{2} = \{close, L, H, EL, EH\}$. $v_{3} = \{open, L, H, EL, EH\}$. $v_{4} = \{close, open, L, H, EL, EH\}$. The model of the plant $G$ (the state marked by a red cross is the bad state), the command execution automaton $CE$, the command execution automaton $CE^{A}$ under actuator attack, and the sensor attack constraints $AC$ are shown in Fig. \ref{fig:Plant G} - Fig. \ref{fig:Example_Sensor attack constraints AC}, respectively.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.4cm]{Water_Tank.pdf}
\caption{The schematic diagram of the water tank operation scenario}
\label{fig:schematic diagram of the water tank}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.2cm]{G.pdf}
\caption{Plant $G$}
\label{fig:Plant G}
\end{center}
\end{figure}
\begin{figure}[htbp]
\centering
\subfigure[]{
\begin{minipage}[t]{0.43\linewidth}
\centering
\includegraphics[height=1.5in]{CE.pdf}
\end{minipage}
}
\subfigure[]{
\begin{minipage}[t]{0.43\linewidth}
\centering
\includegraphics[height=1.5in]{CE_A.pdf}
\end{minipage}
}
\centering
\caption{(a) Command execution automaton $CE$. (b) Command execution automaton $CE^{A}$ under actuator attack (after automaton minimization), where $v_{1} - v_{4}$ means $v_{1}, v_{2}, v_{3},v_{4}$.}
\label{fig:Example_command execution}
\end{figure}
\begin{figure}[htp]
\begin{center}
\includegraphics[height=2.4cm]{AC.pdf}
\caption{Sensor attack constraints $AC$}
\label{fig:Example_Sensor attack constraints AC}
\end{center}
\end{figure}
\subsection{Unknown supervisor $BT(S)^{A}$ under attack}
\label{subsec:unknown supervisor}
In the absence of attacks, a supervisor $S$ over the control constraint $\mathcal{C}=(\Sigma_{c}, \Sigma_{o})$ is often modelled by a finite state automaton $S = (Q_{s}, \Sigma_{s} = \Sigma, \xi_{s}, q_{s}^{init})$, which satisfies the controllability and observability constraints \cite{B1993}:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item (Controllability) For any state $q \in Q_{s}$ and any event $\sigma \in \Sigma_{uc}$, $\xi_{s}(q, \sigma)!$;
\item (Observability) For any state $q \in Q_{s}$ and any event $\sigma \in \Sigma_{uo}$, if $\xi_{s}(q, \sigma)!$, then $\xi_{s}(q, \sigma) = q$.
\end{itemize}
The control command issued by the supervisor $S$ at state $q \in Q_{s}$ is defined to be $\Gamma(q) = En_{S}(q) = \{\sigma \in \Sigma|\xi_{s}(q,\sigma)!\}$. We assume the supervisor $S$ will immediately issue a control command to the plant whenever an event $\sigma \in \Sigma_{o}$ is received or when the system initiates.
Based on the command execution automaton $CE$ and the plant $G$, we note that while $CE$ can model the transduction from $\Gamma$ to $\Sigma$, the transduction needs to be restricted by the behavior of $G$. Thus, only $CE||G$ models the transduction from the input $\Gamma$ of $G$ to the output $\Sigma$ of $G$. The diagram of the supervisory control feedback loop (in the absence of attack) can then be refined as in Fig. \ref{fig:Supervisory_Control_Bipartite_Supervisor}, where $BT(S)$, to be introduced shortly, is a bipartite\footnote{Strictly speaking, $BT(S)$ is not bipartite as unobservable events in $\Sigma_{uo}$ would lead to self-loops. In this work, for convenience, we shall always call supervisors with such structures bipartite ones.} supervisor that is control equivalent to $S$ and explicitly models the control command sending phase.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.6cm]{Supervisory_Control_Bipartite_Supervisor.pdf}
\caption{The refined diagram of the supervisory control feedback loop}
\label{fig:Supervisory_Control_Bipartite_Supervisor}
\end{center}
\end{figure}
Next, we shall show how to model this bipartite supervisor $BT(S)$ based on $S$~\cite{LZS19}.
For any supervisor $S = (Q_{s}, \Sigma_{s} = \Sigma, \xi_{s}, q_{s}^{init})$, the procedure to construct $BT(S)$ is given as follows:
\[
BT(S) = (Q_{bs}, \Sigma_{bs}, \xi_{bs}, q_{bs}^{init})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bs} = Q_{s} \cup Q_{s}^{com}$, where $Q_{s}^{com}:= \{q^{com} \mid q \in Q_s\}$
\item $\Sigma_{bs} = \Sigma \cup \Gamma$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q^{com} \in Q_{s}^{com}) \, \xi_{bs}(q^{com}, \Gamma(q)) = q$. (command sending)
\item $(\forall q \in Q_{s})(\forall \sigma \in \Sigma_{uo}) \, \xi_{s}(q, \sigma)! \Rightarrow \xi_{bs}(q, \sigma) = \xi_{s}(q, \sigma) =q$. (occurrence of an unobservable event)
\item $(\forall q \in Q_{s})(\forall \sigma \in \Sigma_{o}) \, \xi_{s}(q, \sigma)! \Rightarrow \xi_{bs}(q, \sigma) = (\xi_{s}(q, \sigma))^{com}$. (observation of an observable event)
\end{enumerate}
\item $q_{bs}^{init} = (q_{s}^{init})^{com}$
\end{enumerate}
We shall explain the above construction procedure. For the state set, we add $Q_{s}^{com}$, which is a relabelled copy of $Q_{s}$ with the superscript ``com'' attached to each element of $Q_{s}$. Any state $q^{com} \in Q_{s}^{com}$ is a control state denoting that the supervisor is ready to issue the control command $\Gamma(q)$. Any state $q \in Q_{s}$ is a reaction state denoting that the supervisor is ready to react to an event $\sigma \in \Gamma(q)$.
For the (partial) transition function $\xi_{bs}$, Step 3.a says that at any control state $q^{com} \in Q_{s}^{com}$, after issuing the control command $\Gamma(q)$, the supervisor would transit to the reaction state $q$. Step 3.b says that at any reaction state $q \in Q_{s}$, the occurrence of any unobservable event $\sigma \in \Sigma_{uo}$, if it is defined, would lead to a self-loop, i.e., the state still remains at the reaction state $q$. Step 3.c says that at any reaction state $q \in Q_{s}$, the occurrence of any observable event $\sigma \in \Sigma_{o}$, if it is defined,
would lead to a transition to the control state $(\xi_{s}(q, \sigma))^{com}$. The initial state of $BT(S)$ is changed to the initial control state $(q_{s}^{init})^{com}$, which issues the initial control command $\Gamma(q_{s}^{init})$ when the system initiates. Thus, if we abstract $BT(S)$ by merging each pair of states $q^{com}$ and $q$, treated as equivalent states, then we can recover $S$. In this sense, $BT(S)$ is control equivalent to $S$~\cite{LS20J}.
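Under the same dictionary-based encoding, the above four steps can be prototyped as follows; the helper computing $\Gamma(q) = En_{S}(q)$ and the pair consisting of $q$ and the tag ``com'' standing for $q^{com}$ are our own encoding conventions, not part of the formal development.
\begin{verbatim}
# A sketch of building BT(S) from S (Steps 1-4);
# xi_s is the partial transition function of S as a
# dict, and (q, "com") plays the role of q^com.
def build_BT(Q_s, Sigma_o, Sigma_uo, xi_s, q0):
    def Gamma_of(q):                   # En_S(q)
        return frozenset(s for s in Sigma_o | Sigma_uo
                         if (q, s) in xi_s)
    xi = {}
    for q in Q_s:
        q_com = (q, "com")
        xi[(q_com, Gamma_of(q))] = q   # Step 3.a
        for s in Sigma_uo:
            if (q, s) in xi_s:
                xi[(q, s)] = q         # Step 3.b
        for s in Sigma_o:
            if (q, s) in xi_s:         # Step 3.c
                xi[(q, s)] = (xi_s[(q, s)], "com")
    Q_bs = set(Q_s) | {(q, "com") for q in Q_s}
    return Q_bs, xi, (q0, "com")       # Step 4
\end{verbatim}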
In this work, the model of the supervisor is unknown to the adversary, but we assume a safe supervisor has been implemented, that is, in $G||CE||BT(S)$, we assume no plant state in $Q_{d}$ can be reached. Since the attacker can only observe events in $\Sigma_{o}$, the only prior knowledge available to the adversary is the model of the plant $G$ and a set of observations $O \subseteq P_{o}(L(G||CE||BT(S)))$, where $P_{o}: (\Sigma \cup \Gamma)^{*} \rightarrow \Sigma_{o}^{*}$ (or $O \subseteq P_{o}(L(G||S))$, where\footnote{We here abuse the notation $P_o$ for two different natural projections from different domains. But it shall be clear which natural projection we refer to in each case. } $P_{o}: \Sigma^{*} \rightarrow \Sigma_{o}^{*}$) \cite{LTZS20} of the system executions under the unknown supervisor. The set of the attacker's observations $O$ is captured by a finite state automaton $M_{o} = (Q_{o}, \Sigma_{o}, \xi_{o}, q_{o}^{init})$, i.e., $O = L(M_{o})$. We refer to $M_o$ as the observation automaton. Since $O$ is finite, without loss of generality, we assume there is exactly one deadlocked state $q_{o}^{dl} \in Q_{o}$ in $M_{o}$ and, for any maximal string $s \in O$ (in the prefix ordering \cite{WMW10}), we have $\xi_{o}(q_{o}^{init}, s) = q_{o}^{dl}$~\cite{LTZS20}. Then, we have the following definition.
\emph{Definition III.1 (Consistency)} Given the plant $G$, a supervisor $S$ is said to be consistent with a set of observations $O$ if $O \subseteq P_{o}(L(G||CE||BT(S)))$, where $P_{o}: (\Sigma \cup \Gamma)^{*} \rightarrow \Sigma_{o}^{*}$ (or $O \subseteq P_{o}(L(G||S))$, where $P_{o}: \Sigma^{*} \rightarrow \Sigma_{o}^{*}$).
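For concreteness, the natural projection $P_{o}$ used above (for either of the two domains) simply erases every symbol outside $\Sigma_{o}$ from a string; a one-line sketch is given below, where the event names in the usage comment are illustrative.
\begin{verbatim}
# Natural projection P_o: erase symbols outside
# Sigma_o; strings are encoded as tuples of events.
def P_o(s, Sigma_o):
    return tuple(e for e in s if e in Sigma_o)

# e.g. P_o(("v1", "L", "u", "close"),
#          {"L", "H", "close"}) == ("L", "close")
\end{verbatim}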
\vspace{0.1cm}
\textbf{Example III.2} We shall continue with the water tank example. We assume the attacker has collected a set of observations $O$, which is captured by $M_{o}$ shown in Fig. \ref{fig:Observations M_o}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.15cm]{M_o.pdf}
\caption{Observations $M_{o}$}
\label{fig:Observations M_o}
\end{center}
\end{figure}
In this work, since we take the attack into consideration and aim to synthesize a covert sensor-actuator attacker against the unknown supervisor, we shall modify $BT(S)$ to generate a new bipartite supervisor $BT(S)^{A}$ under attack by modelling the effects of the sensor-actuator attack on the supervisor. The construction of $BT(S)^{A}$ consists of \textbf{Step 1} and \textbf{Step 2}:
\textbf{Step 1:} Firstly, in this work, we assume the monitoring \cite{LS20} function is embedded into the supervisor, that is, the supervisor is able to compare its online observations of the system execution with the ones that can be observed in the absence of attack, and once some information inconsistency happens, the supervisor can assert the existence of an attacker and halt the system operation. To embed the monitoring mechanism into the supervisor in the absence of attack, we adopt what we refer to as the universal monitor $P_{\Sigma_{o} \cup \Gamma}(G||CE)$ to refine $BT(S)$ by synchronous product and obtain $BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE)$.
To see that $P_{\Sigma_{o} \cup \Gamma}(G||CE)$ is a universal monitor that works for any supervisor $S$, we perform the diagrammatic reasoning as follows (see Fig. \ref{fig:Monitor_Embedding_Reasoning}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4cm]{Monitor_Embedding_Reasoning.pdf}
\caption{The supervisory control feedback loop with an embedded monitor}
\label{fig:Monitor_Embedding_Reasoning}
\end{center}
\end{figure}
\begin{enumerate}[1.]
\item Since the monitor observes the output and the input of the supervisor $S$, it can observe the events in $\Sigma_o \cup \Gamma$.
\item The model of the universal monitor is then exactly the observable model $P_{\Sigma_o \cup \Gamma}(G || CE)$ of everything that is external to $S$, i.e., of $G || CE$.
\end{enumerate}
We refer to $P_{\Sigma_o \cup \Gamma}(G || CE)$ as a universal monitor as it only observes what is external to $S$ and thus does not depend on the model of $S$; intuitively, it is a monitor that works for any supervisor $S$. Then,
when the monitoring mechanism is embedded into the supervisor, we simply refine the universal monitor $P_{\Sigma_o \cup \Gamma}(G || CE)$ with the supervisor model $BT(S)$ to obtain $BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE)$. We record this automaton as $BT(S)^{M}$. Thus,
\[
BT(S)^{M} = BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE)
\]
We denote $BT(S)^{M} = (Q_{bs,1}, \Sigma \cup \Gamma, \xi_{bs,1}, q_{bs,1}^{init})$. It is noteworthy that $BT(S)^{M}$ is bipartite as $BT(S)$ is bipartite. Thus, we could partition the state set $Q_{bs,1}$ into two parts, $Q_{bs,1} = Q_{bs,1}^{rea} \cup Q_{bs,1}^{com}$, where at any state of $Q_{bs,1}^{rea}$, only events in $\Sigma$ are defined, and at any state of $Q_{bs,1}^{com}$, only events in $\Gamma$ are defined. Then, we write
\[
BT(S)^{M} = (Q_{bs,1}^{rea} \cup Q_{bs,1}^{com}, \Sigma \cup \Gamma, \xi_{bs,1}, q_{bs,1}^{init})
\]
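Computationally, the projection $P_{\Sigma_{o} \cup \Gamma}(\cdot)$ used to build the universal monitor is the standard observer (subset) construction; the sketch below indicates how it can be computed in the same dictionary-based encoding, and is not meant to reproduce any particular synthesis tool.
\begin{verbatim}
# A sketch of the projection P_E(A) of an automaton
# A = (Q, Sigma, xi, init) onto an event subset E,
# via the standard subset construction.
def project(A, E):
    Q, Sigma, xi, init = A
    def closure(V):              # reach via Sigma - E
        V, stack = set(V), list(V)
        while stack:
            q = stack.pop()
            for s in Sigma - E:
                q2 = xi.get((q, s))
                if q2 is not None and q2 not in V:
                    V.add(q2); stack.append(q2)
        return frozenset(V)
    i = closure({init})
    states, trans, stack = {i}, {}, [i]
    while stack:
        V = stack.pop()
        for s in E:
            W = {xi[(q, s)] for q in V if (q, s) in xi}
            if not W:
                continue
            W = closure(W)
            trans[(V, s)] = W
            if W not in states:
                states.add(W); stack.append(W)
    return states, set(E), trans, i
\end{verbatim}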
We have the following useful results.
\vspace{0.1cm}
\emph{Proposition III.1.} $L(BT(S)^{M}||G||CE) = L(BT(S)||G||CE)$.
\emph{Proof:} LHS = $L(BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE)||G||CE) = L(BT(S)||G||CE)$ = RHS. \hfill $\blacksquare$
\vspace{0.1cm}
\vspace{0.1cm}
\emph{Corollary III.1.} $O \subseteq P_{o}(L(BT(S)^{M}||G||CE))$.
\emph{Proof:} This directly follows from \emph{Proposition III.1} and the fact that $O \subseteq P_{o}(L(BT(S)||G||CE))$. \hfill $\blacksquare$
\vspace{0.1cm}
\vspace{0.1cm}
\emph{Proposition III.2.} $BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE||BT(S)) = BT(S)^{M}$.
\emph{Proof:} It is clear that LHS $= BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE||BT(S)) \sqsubseteq BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE) =$ RHS. We also have RHS $= BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE) \sqsubseteq BT(S)||P_{\Sigma_{o} \cup \Gamma}(G||CE||BT(S)) =$ LHS, as the sequences in $P_{\Sigma_{o} \cup \Gamma}(G||CE)$ that can survive the synchronous product with $BT(S)$ must come from
$P_{\Sigma_{o} \cup \Gamma}(G||CE||BT(S))$.\hfill $\blacksquare$
\vspace{0.1cm}
Based on \emph{Proposition III.2}, $BT(S)^{M}$ indeed embeds the monitor $P_{\Sigma_{o} \cup \Gamma}(G||CE||BT(S))$ \cite{LS20}, which is adopted to detect the attacker by comparing the online observations with the ones that can be observed in the absence of attack.
\textbf{Step 2:} We shall encode the effects of the sensor-actuator attack into $BT(S)^{M}$ to generate $BT(S)^{A}$. The effects of the sensor-actuator attack include the following: 1) due to the existence of the sensor attack, for any event $\sigma \in \Sigma_{s,a}$, the supervisor cannot observe it but can observe the relabelled copy $\sigma^{\#} \in \Sigma_{s,a}^{\#}$ instead; 2) any event in $\Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a})$ might be enabled by the actuator attack and its occurrence is unobservable to the supervisor; and 3) covertness-breaking situations can happen, i.e., information inconsistency between the online observations and the ones that can be observed in the absence of attack can arise. The construction procedure of $BT(S)^{A}$ is given as follows:
\[
BT(S)^{A} = (Q_{bs,a}, \Sigma_{bs,a}, \xi_{bs,a}, q_{bs,a}^{init})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{bs,a} = Q_{bs,1} \cup \{q^{detect}\}= Q_{bs,1}^{rea} \cup Q_{bs,1}^{com} \cup \{q^{detect}\}$
\item $\Sigma_{bs,a} = \Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma$
\item \begin{enumerate}[a.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $(\forall q, q' \in Q_{bs,1})(\forall \sigma \in \Sigma_{s,a}) \, \xi_{bs,1}(q, \sigma) = q' \Rightarrow \xi_{bs,a}(q, \sigma^{\#}) = q' \wedge \xi_{bs,a}(q, \sigma) = q$. (compromised event relabelling)
\item $(\forall q \in Q_{bs,1}^{rea}) (\forall \sigma \in \Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a})) \, \xi_{bs,a}(q, \sigma) = q$. (occurrence of an attackable but unobservable event)
\item $(\forall q, q' \in Q_{bs,1})(\forall \sigma \in (\Sigma - \Sigma_{s,a}) \cup \Gamma) \, \xi_{bs,1}(q, \sigma) = q' \Rightarrow \xi_{bs,a}(q, \sigma) = q'$. (transitions retaining)
\item $(\forall q \in Q_{bs,1}^{rea})(\forall \sigma \in \Sigma_{o} - \Sigma_{s,a}) \, \neg \xi_{bs,1}(q, \sigma)! \Rightarrow \xi_{bs,a}(q, \sigma) = q^{detect}$. (covertness-breaking)
\item $(\forall q \in Q_{bs,1}^{rea})(\forall \sigma \in \Sigma_{s,a}) \, \neg \xi_{bs,1}(q, \sigma)! \Rightarrow \xi_{bs,a}(q, \sigma^{\#}) = q^{detect}$. (covertness-breaking)
\end{enumerate}
\item $q_{bs,a}^{init} = q_{bs,1}^{init}$
\end{enumerate}
We shall briefly explain the above procedure for constructing $BT(S)^{A}$. Firstly, at Step 1, all the states in $BT(S)^{M}$ are retained, and we add a new state $q^{detect}$ into the state set to explicitly model that the presence of the attacker is detected. Then, for the (partial) transition function $\xi_{bs,a}$, at Step 3.a, we perform the following: 1) all the transitions labelled by events in $\Sigma_{s,a}$ are replaced with their copies in $\Sigma_{s,a}^{\#}$, denoted by $\xi_{bs,a}(q, \sigma^{\#}) = q'$, and 2) the transitions labelled by events in $\Sigma_{s,a}$ and originally defined in $BT(S)^{M}$ at state $q$ become self-loops since these events can be fired but are unobservable to the supervisor, denoted by $\xi_{bs,a}(q, \sigma) = q$. At Step 3.b, at any reaction state $q \in Q_{bs,1}^{rea}$, we add self-loop transitions labelled by the events in $\Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a})$ since such events can be enabled by the actuator attack at the state $q$ and are unobservable to the supervisor. At Step 3.c, all the other transitions, labelled by events in $(\Sigma - \Sigma_{s,a}) \cup \Gamma$, in $BT(S)^{M}$ are retained. Step 3.d and Step 3.e are defined to encode the covertness-breaking situations: at any reaction state $q \in Q_{bs,1}^{rea}$, for any observable event $\sigma \in \Sigma_{o}$ with $\neg \xi_{bs,1}(q, \sigma)!$, we add a transition, labelled by $\sigma \in \Sigma_{o} - \Sigma_{s,a}$ or by the relabelled copy $\sigma^{\#} \in \Sigma_{s,a}^{\#}$, to the state $q^{detect}$. Intuitively, the event $\sigma$ should not be observed at the state $q$ in the absence of attack\footnote{To the supervisor, once an event $\sigma^{\#} \in \Sigma_{s, a}^{\#}$ is observed, which is issued by the attacker, the supervisor believes that the event $\sigma \in \Sigma_{s, a}$ has been executed in the plant.
This is the reason why we relabel $\Sigma_{s,a}$ to $\Sigma_{s,a}^{\#}$ in the supervisor model and this does not change its control function.
}.
Based on the model of $BT(S)^{A}$, we know that $|Q_{bs,a}| \leq 2|Q_{s}| + 1$.
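Steps 1 and 3.a--3.e above are a purely syntactic transformation of $BT(S)^{M}$ and can be sketched as follows; in the sketch, the set of reaction states is passed explicitly and the relabelled copy $\sigma^{\#}$ is represented by appending the symbol $\#$ to the event name, which are our encoding choices only.
\begin{verbatim}
# A sketch of encoding the sensor-actuator attack
# into BT(S)^M; xi_m is its transition dict and
# Q_rea its set of reaction states.
def encode_attack(xi_m, Q_rea, Sigma_o, Sigma_uo,
                  Sigma_sa, Sigma_ca,
                  detect="q_detect"):
    xi = {}
    for (q, s), q2 in xi_m.items():
        if s in Sigma_sa:              # Step 3.a
            xi[(q, s + "#")] = q2
            xi[(q, s)] = q
        else:                          # Step 3.c
            xi[(q, s)] = q2
    for q in Q_rea:
        for s in Sigma_ca & (Sigma_uo | Sigma_sa):
            xi[(q, s)] = q             # Step 3.b
        for s in Sigma_o - Sigma_sa:   # Step 3.d
            if (q, s) not in xi_m:
                xi[(q, s)] = detect
        for s in Sigma_sa:             # Step 3.e
            if (q, s) not in xi_m:
                xi[(q, s + "#")] = detect
    return xi
\end{verbatim}
The construction of $OCNS^{A}$ in Section \ref{subsec:Solution methodology} (\textbf{Step 2.3}) follows exactly the same pattern, with the reaction states $Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\})$ and the detection state $q_{cov}^{brk}$ in place of $Q_{bs,1}^{rea}$ and $q^{detect}$.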
\textbf{Example III.3} We shall continue with the water tank example. For the supervisor $S$ shown in Fig. \ref{fig:Example_S_To_BT(S)A}. (a), the step-by-step constructions of $BT(S)$, $BT(S)^{M}$, and $BT(S)^{A}$ are illustrated in Fig. \ref{fig:Example_S_To_BT(S)A}. (b), (c), (d), respectively.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.7cm]{S_To_BTSA.pdf}
\caption{(a) $S$. (b) $BT(S)$. (c) $BT(S)^{M}$. (d) $BT(S)^{A}$.}
\label{fig:Example_S_To_BT(S)A}
\end{center}
\end{figure}
Specifically, the detailed construction procedure of $BT(S)$ for the supervisor $S$ given in Fig. \ref{fig:Example_S_To_BT(S)A}. (a) (also Fig. \ref{fig:R3C13_S_To_BT(S)}. (i)) is presented as follows:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Firstly, since the state set of $S$ is $Q_{s} = \{0\}$, based on Step 1 in the construction of $BT(S)$, we have $Q_{bs} = Q_{s} \cup Q_{s}^{com} = \{0^{com},0\}$, where the state $0^{com}$ is the initial state of $BT(S)$ according to Step 4. The generated state set of $BT(S)$ is shown in Fig. \ref{fig:R3C13_S_To_BT(S)}. (ii).
\item Secondly, based on Step 3.a and the fact that $\Gamma(0) = \{L,H,EL,EH\} = v_{1}$, we have $\xi_{bs}(0^{com}, \Gamma(0) = v_{1}) = 0$, which is shown by the added transition in Fig. \ref{fig:R3C13_S_To_BT(S)}. (iii).
\item Finally, based on Step 3.c, since $\xi_{s}(0,L)!$ and $\xi_{s}(0,L) = 0$, we have $\xi_{bs}(0,L) = (\xi_{s}(0,L))^{com} = 0^{com}$, which is shown by the added transition in Fig. \ref{fig:R3C13_S_To_BT(S)}. (iv). Similarly, we could generate the transitions labelled as $H$, $EL$ and $EH$ from state 0 to state $0^{com}$.
\end{enumerate}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=1.88cm]{R3C13_S_To_BTS.png}
\caption{The construction procedure of $BT(S)$ based on $S$. (i) is $S$ and (iv) is $BT(S)$.}
\label{fig:R3C13_S_To_BT(S)}
\end{center}
\end{figure}
\subsection{Sensor-actuator attacker}
\label{subsec:sensor-actuator attack}
The sensor-actuator attacker is modelled by a finite state automaton $\mathcal{A} = (Q_{a}, \Sigma_{a}, \xi_{a}, q_{a}^{init})$, where $\Sigma_{a} = \Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$. In addition, there are two conditions that need to be satisfied:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item (A-controllability) For any state $q \in Q_{a}$ and any event $\sigma \in \Sigma_{a,uc} := \Sigma_{a} - (\Sigma_{c,a} \cup \Sigma_{s,a}^{\#} \cup \{stop\})$, $\xi_{a}(q, \sigma)!$
\item (A-observability) For any state $q \in Q_{a}$ and any event $\sigma \in \Sigma_{a,uo} := \Sigma_{a} - (\Sigma_{o} \cup \Sigma_{s,a}^{\#} \cup \{stop\})$, if $\xi_{a}(q, \sigma)!$, then $\xi_{a}(q, \sigma) = q$.
\end{itemize}
A-controllability states that the sensor-actuator attacker can only disable events in $\Sigma_{c,a} \cup \Sigma_{s,a}^{\#} \cup \{stop\}$. A-observability states that the sensor-actuator attacker can only make a state change after observing an event
in $\Sigma_{o} \cup \Sigma_{s,a}^{\#} \cup \{stop\}$.
In the following text, we shall refer to
\[
\begin{aligned}
\mathscr{C}_{ac} = (\Sigma_{c,a} \cup \Sigma_{s,a}^{\#} \cup \{stop\}, \Sigma_{o} \cup \Sigma_{s,a}^{\#} \cup \{stop\})
\end{aligned}
\]
as the attacker's control constraint, and $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$ as the attack constraint.
It is apparent that the attacker's control constraint $\mathscr{C}_{ac}$ is uniquely determined by the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$. Based on the assumption $\Sigma_{c} \subseteq \Sigma_{o}$, we have $\Sigma_{c,a} \cup \Sigma_{s,a}^{\#} \cup \{stop\} \subseteq \Sigma_{o} \cup \Sigma_{s,a}^{\#} \cup \{stop\}$, implying the existence of the supremal sensor-actuator attacker.
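As a small sanity check, the derivation of $\mathscr{C}_{ac}$ from the attack constraint can be written down directly; in the sketch below the suffix $\#$ again encodes relabelling, and the assertion reflects the assumption $\Sigma_{c} \subseteq \Sigma_{o}$.
\begin{verbatim}
# The attacker's control constraint C_ac derived from
# the attack constraint (Sigma_o, Sigma_sa, Sigma_ca).
def attacker_constraint(Sigma_o, Sigma_sa, Sigma_ca):
    hashed = {s + "#" for s in Sigma_sa}
    ctrl = Sigma_ca | hashed | {"stop"}
    obs  = Sigma_o  | hashed | {"stop"}
    assert ctrl <= obs   # since Sigma_c is assumed
                         # to be a subset of Sigma_o
    return ctrl, obs
\end{verbatim}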
\section{Synthesis of Supremal Covert Attackers Against Unknown Supervisors}
\label{sec:Synthesis of Maximally Permissive Covert Attackers Against Unknown Supervisors}
In this section, we shall present the solution methodology for the synthesis of the supremal covert sensor-actuator attacker against unknown (safe) supervisors by using observations. Firstly, in this work, since we focus on the synthesis of sensor-actuator attacker that aims to cause damage-infliction, we shall denote the marker state set of $G$ as $Q_{d}$, and still denote the modified automaton as $G = (Q, \Sigma, \xi, q^{init}, Q_{d})$ in the following text. Then, based on the above-constructed component models in Section \ref{sec:Component models under sensor-actuator attack}, we know that, given any plant $G$, command execution automaton $CE^{A}$ under actuator attack, sensor attack constraints $AC$, sensor-actuator attacker $\mathcal{A}$, bipartite supervisor $BT(S)^{A}$ under attack\footnote{To be precise, $BT(S)^{A}$ is not an attacked supervisor as it also embeds the model of the attacked monitor.}, the closed-loop behavior is (cf. Fig. 1)
\[
\begin{aligned}
\mathcal{B} = G||CE^{A}||AC||BT(S)^{A}||\mathcal{A} = (Q_{b}, \Sigma_{b}, \xi_{b}, q_{b}^{init}, Q_{b,m})
\end{aligned}
\]
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{b} = Q \times Q_{ce,a} \times Q_{ac} \times Q_{bs,a} \times Q_{a}$
\item $\Sigma_{b} = \Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$
\item $\xi_{b}: Q_{b} \times \Sigma_{b} \rightarrow Q_{b}$
\item $q_{b}^{init} = (q^{init}, q_{ce,a}^{init}, q_{ac}^{init}, q_{bs,a}^{init}, q_{a}^{init})$
\item $Q_{b,m} = Q_{d} \times Q_{ce,a} \times Q_{ac} \times Q_{bs,a} \times Q_{a}$
\end{itemize}
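The closed-loop behavior $\mathcal{B}$ is obtained by repeated synchronous product of the five components; the standard product routine, in our dictionary-based encoding and with marker states omitted for brevity, is sketched below (the component names in the trailing comment are placeholders).
\begin{verbatim}
# A sketch of the synchronous product A1 || A2; each
# automaton is (Q, Sigma, xi, init) with xi a dict.
def sync(A1, A2):
    (Q1, S1, xi1, i1), (Q2, S2, xi2, i2) = A1, A2
    S, xi = S1 | S2, {}
    seen, stack = {(i1, i2)}, [(i1, i2)]
    while stack:                 # reachable part only
        q1, q2 = stack.pop()
        for s in S:
            n1 = xi1.get((q1, s)) if s in S1 else q1
            n2 = xi2.get((q2, s)) if s in S2 else q2
            if n1 is None or n2 is None:
                continue         # a shared event is blocked
            xi[((q1, q2), s)] = (n1, n2)
            if (n1, n2) not in seen:
                seen.add((n1, n2)); stack.append((n1, n2))
    return seen, S, xi, (i1, i2)

# B could then be assembled as, e.g.,
# sync(sync(sync(sync(G, CE_A), AC), BTS_A), A).
\end{verbatim}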
Then, we have the following definitions~\cite{LS20J}.
\vspace{0.1cm}
\emph{Definition IV.1 (Covertness)} Given any plant $G$, command execution automaton $CE^{A}$ under actuator attack, sensor attack constraints $AC$, and bipartite supervisor $BT(S)^A$ under attack, the sensor-actuator attacker $\mathcal{A}$ is said to be covert against the supervisor $S$ w.r.t. the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$ if any state in
\[
\begin{aligned}
Q_{bad} = \{(q, q_{ce,a}, q_{ac}, q_{bs,a}, q_{a}) \in Q_{b}|q \notin Q_{d} \wedge q_{bs,a} = q^{detect}\}
\end{aligned}
\]
is not reachable in $\mathcal{B}$.
\vspace{0.1cm}
\emph{Definition IV.2 (Damage-reachable)} Given any plant $G$, command execution automaton $CE^{A}$ under actuator attack, sensor attack constraints $AC$, and bipartite supervisor $BT(S)^A$ under attack, the sensor-actuator attacker $\mathcal{A}$ is said to be damage-reachable against the supervisor $S$ w.r.t. the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$ if some marker state in $Q_{b,m}$ is reachable in $\mathcal{B}$, that is, $L_{m}(\mathcal{B}) \neq \emptyset$.
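Given a transition-dictionary encoding of $\mathcal{B}$ whose states are flattened into the $5$-tuples $(q, q_{ce,a}, q_{ac}, q_{bs,a}, q_{a})$, Definitions IV.1 and IV.2 reduce to plain reachability tests, as the following sketch indicates.
\begin{verbatim}
# Reachability-based checks of covertness and
# damage-reachability; xi_b is the transition dict of
# B and each state is a flattened 5-tuple.
def reachable(xi_b, init):
    seen, stack = {init}, [init]
    while stack:
        q = stack.pop()
        for (p, s), p2 in xi_b.items():
            if p == q and p2 not in seen:
                seen.add(p2); stack.append(p2)
    return seen

def covert(xi_b, init, Q_d, detect="q_detect"):
    return all(not (q[0] not in Q_d and q[3] == detect)
               for q in reachable(xi_b, init))

def damage_reachable(xi_b, init, Q_d):
    return any(q[0] in Q_d for q in reachable(xi_b, init))
\end{verbatim}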
\vspace{0.1cm}
\emph{Definition IV.3 (Successful)} Given any plant $G$, a set of observations $O$, and the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$, a sensor-actuator attacker $\mathcal{A}$ is said to be successful if it is covert and damage-reachable against any safe supervisor that is consistent with $O$.
\vspace{0.1cm}
\emph{Definition IV.4 (Supremality)} Given any plant $G$, a set of observations $O$, and the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$, a successful sensor-actuator attacker $\mathcal{A}$ is said to be supremal if for any other successful sensor-actuator attacker $\mathcal{A}'$, we have $L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}') \subseteq L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A})$, for any safe supervisor $S$ that is consistent with $O$.
\vspace{0.1cm}
\emph{Remark IV.1} If $L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}') \subseteq L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A})$, then we have
\[
\begin{aligned}
& L_{m}(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}') \\= & L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}')||L_{m}(G) \\ \subseteq & L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A})||L_{m}(G) \\ = & L_{m}(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A})
\end{aligned}
\]
Hence, a supremal attacker in the sense of Definition IV.4 also maximizes the marked, i.e., damage-inflicting, behavior.
Based on the above definitions, the observation-assisted covert attacker synthesis problem to be solved in this work is formulated as follows:
\vspace{0.1cm}
\textbf{Problem 1:} Given the plant $G$, a set of observations $O$ and the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$, synthesize the supremal successful, i.e., covert and damage-reachable, sensor-actuator attacker.
Next, we shall present our solution methodology for \textbf{Problem 1}.
\subsection{Main idea}
\label{subsec:Main idea of the Solution Methodology}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5.7cm]{Methodology_Procedure.pdf}
\caption{The procedure of the proposed solution methodology}
\label{fig:The procedure of solution methodology}
\end{center}
\end{figure}
Before we delve into the detailed solution methodology, which will be presented in Section \ref{subsec:Solution methodology}, we explain the high-level idea of our method, shown in Fig. \ref{fig:The procedure of solution methodology}, in the following. We chain two synthesis constructions in order to synthesize the supremal covert damage-reachable sensor-actuator attacker, assisted by the finite set of observations $O$.
\begin{enumerate}[1.]
\item At the first step, based on the plant $G$ and the command execution automaton $CE$, we shall synthesize $NS$, the supremal safe command non-deterministic supervisor~\cite{zhu2019, Linnetworked} that embeds all the safe partial-observation supervisors in the sense of~\cite{YL16}, by using the normality property based synthesis~\cite{WMW10, zhu2019, Linnetworked, WLLW18}.
Then, by using the observations $O$, we perform a direct pruning and attack modelling on the synthesized command non-deterministic supervisor $NS$ to obtain $OCNS^{A}$, the supremal safe and observation-consistent command non-deterministic supervisor under sensor-actuator attack. The covertness-breaking states will be encoded in $OCNS^{A}$. In addition, based on the observations $O$, we shall construct $\overline{S^{\downarrow,A}}$, whose marked behavior encodes the least permissive supervisor (that is consistent with $O$) under attack, to make sure that the synthesized sensor-actuator attacker is damage-reachable against any safe supervisor that is consistent with the observations~\cite{LTZS20}.
\item At the second step, based on $G$, $OCNS^{A}$, $\overline{S^{\downarrow,A}}$, $AC$, and $CE^{A}$, we can employ techniques similar to those of~\cite{LS20, LS20J} to reduce the synthesis of the supremal covert damage-reachable attacker to the synthesis of the supremal safe partial-observation supervisor.
\end{enumerate}
In particular, the model of the unknown supervisor $S$ is not needed for the above constructions. Intuitively speaking, the synthesized attacker $\mathcal{A}$ ensures covertness and damage-reachability for the following reasons:
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item It ensures covertness against all the safe supervisors which are consistent with the observations, since it already ensures covertness against $OCNS^{A}$, the supremal safe and observation-consistent command non-deterministic supervisor (under attack), which embeds all the possible safe (partial-observation) supervisors that are consistent with the observations.
\item It ensures the damage-reachability against all the supervisors that are consistent with the observations, since it already ensures damage-reachability against $\overline{S^{\downarrow,A}}$~\cite{LTZS20}, which induces the smallest marked behavior.
\end{itemize}
It follows that we can use the many tools and techniques \cite{Susyna}--\cite{Malik07} that have been developed for the synthesis of the supremal partial-observation supervisor to synthesize the supremal covert damage-reachable attacker assisted by the observations.
\subsection{Solution methodology}
\label{subsec:Solution methodology}
\noindent \textbf{Step 1: Construction of $NS$}
Firstly, we shall synthesize $NS$, the supremal safe command non-deterministic supervisor\footnote{We can refer to Fig. 8. Instead of employing a command deterministic supervisor $BT(S)$ (to control $G|| CE$), which issues a unique control command at each control state, we can employ $NS$ for the control of $G|| CE$. In particular, $NS$ has the choice of issuing different control commands at each control state.}. The procedure is given as follows:
\noindent \textbf{Procedure 1:}
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Compute $\mathcal{P} = G||CE = (Q_{\mathcal{P}}, \Sigma_{\mathcal{P}} = \Sigma \cup \Gamma, \xi_{\mathcal{P}}, q_{\mathcal{P}}^{init})$\footnote{By definition, the marker state set of $\mathcal{P}$ should be $Q_{d} \times Q_{ce}$, but here, we shall mark all the states of $\mathcal{P}$ since we only care about safe supervisors.}.
\item Generate $\mathcal{P}_{r} = (Q_{\mathcal{P}_{r}}, \Sigma_{\mathcal{P}_{r}}, \xi_{\mathcal{P}_{r}}, q_{\mathcal{P}_{r}}^{init})$
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{\mathcal{P}_{r}} = Q_{\mathcal{P}} - \{(q, q_{ce}) \in Q_{\mathcal{P}}|\, q \in Q_{d}\}$
\item $\Sigma_{\mathcal{P}_{r}} = \Sigma_{\mathcal{P}} = \Sigma \cup \Gamma$
\item $(\forall q, q' \in Q_{\mathcal{P}_{r}})(\forall \sigma \in \Sigma_{\mathcal{P}_{r}}) \, \xi_{\mathcal{P}}(q, \sigma) = q' \Leftrightarrow \xi_{\mathcal{P}_{r}}(q, \sigma) = q'$
\item $q_{\mathcal{P}_{r}}^{init} = q_{\mathcal{P}}^{init}$
\end{itemize}
\item Synthesize the supremal supervisor $NS = (Q_{ns}, \Sigma_{ns} = \Sigma \cup \Gamma, \xi_{ns}, q_{ns}^{init})$ over the control constraint $(\Gamma - \{\Sigma_{uc}\}, \Sigma_{o} \cup \Gamma)$ by treating $\mathcal{P}$ as the plant and $\mathcal{P}_{r}$ as the requirement such that $\mathcal{P}||NS$ is safe w.r.t. $\mathcal{P}_{r}$.
\end{enumerate}
We shall briefly explain \textbf{Procedure 1}. As illustrated in Fig. \ref{fig:Supervisory_Control_Bipartite_Supervisor}, at Step 1, we construct a lifted plant $\mathcal{P} = G||CE$, in which the issuing of different control commands is modelled and can be controlled. In addition, since we only consider safe supervisors, at Step 2 we remove every state of $\{(q, q_{ce}) \in Q_{\mathcal{P}}|\, q \in Q_{d}\}$ in $\mathcal{P}$ to generate the requirement $\mathcal{P}_{r}$. Thus, by treating $\mathcal{P}$ as the plant and $\mathcal{P}_{r}$ as the requirement, we can synthesize the supremal safe command non-deterministic supervisor $NS$ over the control constraint $(\Gamma - \{\Sigma_{uc}\}, \Sigma_{o} \cup \Gamma)$, whose existence is guaranteed since $\Gamma - \{\Sigma_{uc}\} \subseteq \Sigma_{o} \cup \Gamma$~\cite{WMW10},~\cite{GLM20},~\cite{zhu2019},~\cite{Linnetworked}. Here, the control command $\Sigma_{uc}$ is not controllable to the supervisor because it entirely consists of uncontrollable events, which are always allowed to be fired at the plant $G$. We note that $NS$ is a deterministic automaton, but command non-deterministic in the sense that, at each control state, more than one control command may be issued.
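Step 2 of \textbf{Procedure 1} is again a simple syntactic restriction, sketched below in the dictionary-based encoding; the supremal synthesis in Step 3 is left to an off-the-shelf tool (such as those cited in Section \ref{subsec:Main idea of the Solution Methodology}) and is not reproduced here.
\begin{verbatim}
# A sketch of Step 2 of Procedure 1: delete from
# P = G||CE every state whose plant component lies in
# Q_d, together with its incident transitions; states
# of P are pairs (q, q_ce) as produced by sync().
def restrict_to_safe(Q_p, xi_p, init_p, Q_d):
    Q_r = {q for q in Q_p if q[0] not in Q_d}
    xi_r = {(q, s): q2 for (q, s), q2 in xi_p.items()
            if q in Q_r and q2 in Q_r}
    return Q_r, xi_r, init_p
\end{verbatim}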
Based on the structure of $G$ and $CE$, the synthesized $NS$ is a bipartite structure (introduced in Section \ref{subsec:unknown supervisor}). For technical convenience, we shall write the state set of $NS$ as $Q_{ns} = Q_{ns}^{rea} \cup Q_{ns}^{com}$ ($Q_{ns}^{rea}$ and $Q_{ns}^{com}$ denote the set of reaction states and control states, respectively), where
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item At any state of $Q_{ns}^{rea}$, no event in $\Gamma$ is defined.
\item At any state of $Q_{ns}^{rea}$, any event in $\Sigma_{uo}$, if defined, leads to self-loops, and any event in $\Sigma_{o}$, if defined, would lead to a transition to a control state.
\item At any state of $Q_{ns}^{com}$, only events in $\Gamma$ are defined.
\item At any state of $Q_{ns}^{com}$, any event in $\Gamma$, if defined, would lead to a transition to a reaction state.
\end{itemize}
We shall briefly explain why these 4 cases hold. We know that: 1) since the closed behavior of $\mathcal{P}$ is a subset of $\overline{(\Gamma(\Sigma - \Sigma_{o})^{*}\Sigma_{o})^{*}}$, the closed behavior of $NS$ is also a subset of $\overline{(\Gamma(\Sigma - \Sigma_{o})^{*}\Sigma_{o})^{*}}$, and 2) any transition labelled by an unobservable event in $\Sigma - \Sigma_{o}$ is a self-loop in $NS$, while any transition labelled by an event in $\Sigma_{o} \cup \Gamma$ makes $NS$ change state. Thus, we can always divide the state set of $NS$ into two disjoint parts: 1) the set of control states $Q_{ns}^{com}$, where only the control commands in $\Gamma$ are defined (Case 3), and 2) the set of reaction states $Q_{ns}^{rea}$, where no event in $\Gamma$ is defined (Case 1) and the events, if defined at such a state, belong to $\Sigma$. In addition, based on the format of the closed behavior of $\mathcal{P}$, Case 2 and Case 4 naturally hold. Based on \textbf{Procedure 1}, we know that $|Q_{ns}| \leq 2^{|Q| \times |Q_{ce}|}$.
\textbf{Example IV.1} We shall continue with the water tank example, whose setup is shown in \textbf{Example III.1}. Based on the plant $G$ and command execution automaton $CE$ illustrated in Fig. \ref{fig:Plant G} and Fig. \ref{fig:Example_command execution}, respectively, the synthesized supremal safe command-nondeterministic supervisor $NS$ is illustrated in Fig. \ref{fig:Example_NS}. At the initial state 0 of $NS$, it could issue any control command in $v_{1} = \{L, H, EL, EH\}$, $v_{2} = \{close, L, H, EL, EH\}$, $v_{3} = \{open, L, H, EL, EH\}$, $v_{4} = \{close, open, L, H, EL, EH\}$, after which it would transit to state 1. Then, after the plant $G$ receives the control command issued from the supervisor at the initial state, it would always execute the event $L$ or $H$ no matter which control command it receives. Next, there are two cases:
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item If $NS$ receives the observation $L$, it would transit to state 2, at which it could issue the control command $v_{1}$ or $v_{2}$. If it issues $v_{1}$ and transits to state 6, then $G$ would not execute any event under $v_{1}$ as $v_{1}$ does not contain any event that could be executed w.r.t. the state of $G$, thus, no event occurs at state 6 of $NS$; otherwise, i.e., if it issues $v_{2}$ and transits to state 4, then $G$ would execute the event $close$. After $NS$ receives the observation $close$, it would transit to state 0, at which any control command of $v_{1} - v_{4}$ could be issued.
\item If $NS$ receives the observation $H$, it would transit to state 3, at which it could issue the control command $v_{1}$ or $v_{3}$. If it issues $v_{1}$ and transits to state 6, then $G$ would not execute any event under $v_{1}$ as $v_{1}$ does not contain any event that could be executed w.r.t. the state of $G$, thus, no event occurs at state 6 of $NS$; otherwise, i.e., if it issues $v_{3}$ and transits to state 5, then $G$ would execute the event $open$. After $NS$ receives the observation $open$, it would transit to state 0, at which any control command of $v_{1} - v_{4}$ could be issued.
\end{enumerate}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.3cm]{NS.pdf}
\caption{The synthesized supremal safe command-nondeterministic supervisor $NS$}
\label{fig:Example_NS}
\end{center}
\end{figure}
\vspace{0.1cm}
\noindent \textbf{Step 2: Construction of $OCNS^{A}$}
Next, as stated in the main idea of Section \ref{subsec:Main idea of the Solution Methodology}, based on the synthesized $NS$ and observations $O$, we shall construct $OCNS^{A}$, which encodes the supremal safe and observation-consistent command non-deterministic supervisor under sensor-actuator attack. The step-by-step construction procedure is given as follows, including \textbf{Step 2.1} - \textbf{Step 2.3}.
\vspace{0.1cm}
\noindent \textbf{Step 2.1: Construction of $OC$}
\vspace{0.1cm}
Based on $M_{o} = (Q_{o}, \Sigma_{o}, \xi_{o}, q_{o}^{init})$ which captures the observations $O$, we shall construct a bipartite structure $OC$ to embed all the supervisors consistent with $O$. The construction of $OC$ is similar to that of a bipartite supervisor $BT(S)$ shown in Section \ref{subsec:unknown supervisor}, which is given as follows:
\[
OC = (Q_{oc}, \Sigma_{oc}, \xi_{oc}, q_{oc}^{init})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{oc} = Q_{o} \cup Q_{o}^{com} \cup \{q_{oc}^{dump}\}$, where $Q_{o}^{com}:= \{q^{com}|q \in Q_{o}\}$
\item $\Sigma_{oc} = \Sigma \cup \Gamma$
\item $(\forall q^{com} \in Q_{o}^{com})(\forall \gamma \in \Gamma) \, En_{M_{o}}(q) \subseteq \gamma \Rightarrow \xi_{oc}(q^{com}, \gamma) = q$. (ensure the command consistency with observations)
\item $(\forall q \in Q_{o})(\forall \sigma \in \Sigma_{o}) \, \xi_{o}(q, \sigma)! \Rightarrow \xi_{oc}(q, \sigma) = (\xi_{o}(q, \sigma))^{com}$. (observation of an observable event consistent with observations)
\item $(\forall q \in Q_{o})(\forall \sigma \in \Sigma_{o}) \, \neg \xi_{o}(q, \sigma)! \Rightarrow \xi_{oc}(q, \sigma) = q_{oc}^{dump}$. (observation of an observable event inconsistent with observations)
\item $(\forall q \in Q_{o})(\forall \sigma \in \Sigma_{uo}) \, \xi_{oc}(q, \sigma) = q$. (occurrence of an unobservable event)
\item $(\forall \sigma \in \Sigma \cup \Gamma) \, \xi_{oc}(q_{oc}^{dump}, \sigma) = q_{oc}^{dump}$.
\item $q_{oc}^{init} = (q_{o}^{init})^{com}$
\end{enumerate}
Firstly, at Step 1, the state set $Q_{oc}$ consists of the reaction state set $Q_{o}$ and the control state set $Q_{o}^{com}$. In addition, we add a new state $q_{oc}^{dump}$ to denote that some event sequence which is not collected in the observations $O$ happens. At Step 3, for any control state $q^{com}$, we shall allow the issuing of any control command $\gamma$ satisfying the condition $En_{M_{o}}(q) \subseteq \gamma$, i.e., $\gamma$
can generate the event executions $En_{M_{o}}(q)$ that have been collected at the state $q$.
At Step 4, all the transitions originally defined at the state $q$ in $M_{o}$ are retained and would drive the state change to the control state $(\xi_{o}(q,\sigma))^{com}$. At Step 5, for any reaction state $q$, any event in $\Sigma_{o}$, which has not been collected in the current observations, would lead to a transition to the dump state $q_{oc}^{dump}$.
At Step 6, at any reaction state, all the events in $\Sigma_{uo}$ will lead to self-loops because they are unobservable.
At Step 7, any event in $\Sigma \cup \Gamma$ is defined at the state $q_{oc}^{dump}$, since $q_{oc}^{dump}$ denotes that the execution has gone beyond the observations collected by the attacker, implying that any event in $\Sigma \cup \Gamma$ might happen afterwards.
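The whole of Steps 1--8 can be prototyped directly in the dictionary-based encoding used earlier; in the sketch below, the helper computing $En_{M_{o}}(q)$ and the fresh string constant standing for $q_{oc}^{dump}$ are our own conventions.
\begin{verbatim}
# A sketch of the OC construction from M_o (Steps
# 1-8); xi_o is the transition dict of M_o and
# (q, "com") plays the role of q^com.
def build_OC(Q_o, xi_o, q0, Sigma_o, Sigma_uo, Gamma):
    dump, xi = "q_dump", {}
    def En(q):                         # En_Mo(q)
        return {s for s in Sigma_o if (q, s) in xi_o}
    for q in Q_o:
        q_com = (q, "com")
        for g in Gamma:
            if En(q) <= g:
                xi[(q_com, g)] = q     # Step 3
        for s in Sigma_o:
            if (q, s) in xi_o:         # Step 4
                xi[(q, s)] = (xi_o[(q, s)], "com")
            else:                      # Step 5
                xi[(q, s)] = dump
        for s in Sigma_uo:
            xi[(q, s)] = q             # Step 6
    for s in Sigma_o | Sigma_uo | set(Gamma):
        xi[(dump, s)] = dump           # Step 7
    Q_oc = set(Q_o) | {(q, "com") for q in Q_o} | {dump}
    return Q_oc, xi, (q0, "com")       # Step 8
\end{verbatim}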
\vspace{0.1cm}
\noindent \textbf{Step 2.2: Construction of $OCNS$}
\vspace{0.1cm}
Although $OC$ embeds all the supervisors consistent with $O$, we need to ensure they are safe supervisors. Thus, we shall adopt the above-synthesized $NS = (Q_{ns}^{rea} \cup Q_{ns}^{com}, \Sigma_{ns} = \Sigma \cup \Gamma, \xi_{ns}, q_{ns}^{init})$, which encodes all the possible safe bipartite supervisors, to refine the structure of $OC$. To achieve this goal, we compute the synchronous product $OCNS = NS||OC = (Q_{ocns}, \Sigma_{ocns}, \xi_{ocns}, q_{ocns}^{init})$, where it can be easily checked that $Q_{ocns} \subseteq (Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\})) \cup (Q_{ns}^{com} \times (Q_{o}^{com} \cup \{q_{oc}^{dump}\}))$. We shall refer to $OCNS$ as the observation-consistent command non-deterministic supervisor.
\vspace{0.1cm}
\noindent \textbf{Step 2.3: Construction of $OCNS^{A}$}
\vspace{0.1cm}
Based on $OCNS$, we shall encode the effects of the sensor-actuator attacks to generate $OCNS^{A}$, which is similar to the construction procedure of $BT(S)^A$ given in \textbf{Step 2} of Section \ref{subsec:unknown supervisor}.
\[
OCNS^{A} = (Q_{ocns}^{a}, \Sigma_{ocns}^{a}, \xi_{ocns}^{a}, q_{ocns}^{init,a})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{ocns}^{a} = Q_{ocns} \cup \{q_{cov}^{brk}\}$
\item $\Sigma_{ocns}^{a} = \Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma$
\item $(\forall q, q' \in Q_{ocns}^{a})(\forall \sigma \in \Sigma_{s,a}) \, \xi_{ocns}(q, \sigma) = q' \Rightarrow \xi_{ocns}^{a}(q, \sigma^{\#}) = q' \wedge \xi_{ocns}^{a}(q, \sigma) = q$
\item $(\forall q \in Q_{ocns}^{a})(\forall \sigma \in \Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a})) \, q \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\}) \Rightarrow \xi_{ocns}^{a}(q, \sigma) = q$
\item $(\forall q, q' \in Q_{ocns}^{a})(\forall \sigma \in (\Sigma - \Sigma_{s,a}) \cup \Gamma) \, \xi_{ocns}(q, \sigma) = q' \Rightarrow \xi_{ocns}^{a}(q, \sigma) = q'$
\item $(\forall q \in Q_{ocns}^{a})(\forall \sigma \in \Sigma_{o} - \Sigma_{s,a}) \, q \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\}) \wedge \neg \xi_{ocns}(q, \sigma)! \Rightarrow \xi_{ocns}^{a}(q, \sigma) = q_{cov}^{brk}$
\item $(\forall q \in Q_{ocns}^{a})(\forall \sigma \in \Sigma_{s,a}) \, q \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\}) \wedge \neg \xi_{ocns}(q, \sigma)! \Rightarrow \xi_{ocns}^{a}(q, \sigma^{\#}) = q_{cov}^{brk}$
\item $q_{ocns}^{init,a} = q_{ocns}^{init}$
\end{enumerate}
At Step 1, all the states in $OCNS$ are retained, and we shall add a new state $q_{cov}^{brk}$ to denote the covertness-breaking situations.
At Step 3, due to the existence of the sensor attack, at any state $q$, any transition labelled by $\sigma \in \Sigma_{s,a}$ in $OCNS$ is relabelled with $\sigma^{\#} \in \Sigma_{s,a}^{\#}$ because the supervisor observes $\Sigma_{s,a}^{\#}$ instead of $\Sigma_{s,a}$; in addition, a self-loop labelled by $\sigma$ is added at the state $q$ because such an event can happen and is unobservable to the supervisor. At Step 4, for any state $q \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\})$, we shall add the self-loop transitions labelled by events in $\Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a})$ since they can be enabled due to the actuator attack and are unobservable to the supervisor. At Step 5, all the other transitions, labelled by events in $(\Sigma-\Sigma_{s,a}) \cup \Gamma$, defined in $OCNS$ are kept.
At Step 6 and Step 7, we shall explicitly encode the covertness-breaking situations: at any state $q \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\})$, any event in $\Sigma_{o}$, which should not have been observed in the absence of attack, denoted by $\neg \xi_{ocns}(q, \sigma)!$, would lead to a transition labelled as $\sigma \in \Sigma_{o} - \Sigma_{s,a}$ or $\sigma^{\#} \in \Sigma_{s,a}^{\#}$ to the state $q_{cov}^{brk}$, meaning that the existence of the sensor-actuator attacker is exposed.
Based on the model of $OCNS^{A}$, we know that $|Q_{ocns}^{a}| \leq |Q_{ns}| \times (2|Q_{o}| + 1) + 1 \leq 2^{|Q| \times |Q_{ce}|} \times (2|Q_{o}| + 1) + 1$.
\vspace{0.1cm}
\textbf{Example IV.2} We shall continue with the water tank example. Based on \textbf{Step 2.1} - \textbf{Step 2.3}, the observation automaton $M_{o}$ and the synthesized supremal safe command-nondeterministic supervisor $NS$ shown in Fig. \ref{fig:Observations M_o} and Fig. \ref{fig:Example_NS}, respectively, we can obtain $OC$, $OCNS$ and $OCNS^{A}$, which are illustrated in Fig. \ref{fig:Example_OC}, Fig. \ref{fig:R3C19_OCNS} and Fig. \ref{fig:Example_OCNS^{A}}, respectively. To help readers understand the construction procedure, we also present the detailed explanations about how to construct these models step-by-step as well as the meaning of each model.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.2cm]{OC.pdf}
\caption{The constructed $OC$}
\label{fig:Example_OC}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=10cm]{R3C19_OC.png}
\caption{The construction procedure of $OC$ from $M_{o}$. (i) is $M_{o}$. (iii) is $OC$.}
\label{fig:R3C19_OC}
\end{center}
\end{figure}
Based on \textbf{Step 2.1}, we shall explain how to construct $OC$ (Fig. \ref{fig:R3C19_OC}. (iii)) from $M_{o}$ (Fig. \ref{fig:R3C19_OC}. (i)).
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item We need to generate the state set of $OC$ based on step 1. Since $Q_{o} = \{0,1,2,3\}$, we have $Q_{oc} = Q_{o} \cup Q_{o}^{com} \cup \{q_{oc}^{dump}\} = \{0,1,2,3,0^{com},1^{com},2^{com},3^{com},q_{oc}^{dump}\}$, which is illustrated in Fig. \ref{fig:R3C19_OC}. (ii).
\item We need to construct the transitions based on step 3. For example, if $q^{com} = 1^{com}$, then we have $En_{M_{o}}(1) = \{close\} \subseteq v_{2} \Rightarrow \xi_{oc}(1^{com}, v_{2}) = 1$ and $En_{M_{o}}(1) = \{close\} \subseteq v_{4} \Rightarrow \xi_{oc}(1^{com}, v_{4}) = 1$, which are illustrated by the transitions labelled as $v_{2}$ and $v_{4}$ from state $1^{com}$ to state $1$ in Fig. \ref{fig:R3C19_OC}. (ii).
\item We need to construct the transitions based on step 4. For example, if $q = 1$, then we have $\xi_{o}(1,close) = 3 \Rightarrow \xi_{oc}(1,close) = (\xi_{o}(1,close))^{com} = 3^{com}$, which is illustrated by the transition labelled as $close$ from state $1$ to state $3^{com}$ in Fig. \ref{fig:R3C19_OC}. (iii).
\item We need to construct the transitions based on step 5. For example, if $q = 1$, then we have $\neg \xi_{o}(1,H)! \Rightarrow \xi_{oc}(1,H) = q_{oc}^{dump}$, which is illustrated by the transition labelled as $H$ from state 1 to state $q_{oc}^{dump}$ in Fig. \ref{fig:R3C19_OC}. (iii).
\item Since there are no unobservable events in this example, the construction for step 6 is not needed.
\item Based on step 7, we need to add the self-loop transitions labelled as the events in $\Sigma \cup \Gamma = \{H,L,EH,EL,close,open,v_{1},v_{2},v_{3},v_{4}\}$ at state $q_{oc}^{dump}$, which are illustrated in Fig. \ref{fig:R3C19_OC}. (iii).
\end{enumerate}
The meaning of $OC$ is the following: $OC$ encodes all the bipartite supervisors consistent with the collected observations $M_{o}$ because the condition $En_{M_{o}}(q) \subseteq \gamma$ at step 3 filters out those control commands that could not generate the obtained observations. $OC$ can be interpreted in a similar way as $NS$ in Example IV.1. For example, at the initial state $0^{com}$ of $OC$, the bipartite supervisor that is consistent with the observations could issue any control command in $v_{1}$, $v_{2}$, $v_{3}$ and $v_{4}$ because $v_{1}$, $v_{2}$, $v_{3}$ and $v_{4}$ could generate the observations $H$ and $L$ at the initial state 0 of $M_{o}$, after which $OC$ would transit to state 0. Then, if the observation $H$ is received, $OC$ would transit to state $2^{com}$, at which the bipartite supervisor that is consistent with the observations could issue $v_{3}$ or $v_{4}$ because only $v_{3}$ and $v_{4}$ could generate the observation $open$ at state 2 of $M_{o}$. Similarly, the transition labelled $L$ at state 0 of $OC$ and the transitions labelled $v_{2}$ and $v_{4}$ at state $1^{com}$ of $OC$ can be interpreted in this way. At state 0 of $OC$, the events $EH$, $EL$, $close$ and $open$ are not collected at state 0 of the model $M_{o}$, and the transitions labelled by those events lead to state $q_{oc}^{dump}$ because it is still possible that they could happen even though they are not collected in $M_{o}$, due to the finiteness of the observations. In addition, after the occurrence of these uncollected observations,
since they are not collected in $M_{o}$, we are not sure what event could happen next; hence any event in $\Sigma \cup \Gamma$ is defined at state $q_{oc}^{dump}$, so that any bipartite supervisor that is consistent with the observations $O$ is embedded in $OC$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.6cm]{R3C16_NS.png}
\caption{The computed $NS$}
\label{fig:R3C16_NS}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.5cm]{R3C19_only_OC.png}
\caption{The constructed $OC$}
\label{fig:R3C19_only_OC}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3.5cm]{R3C19_OCNS.png}
\caption{The constructed $OCNS$}
\label{fig:R3C19_OCNS}
\end{center}
\end{figure}
For the water tank example, based on $NS$ and $OC$ shown in Fig. \ref{fig:R3C16_NS} and Fig. \ref{fig:R3C19_only_OC}, respectively, the constructed $OCNS$ is illustrated in Fig. \ref{fig:R3C19_OCNS}. Based on the construction of $OCNS$, we know that it embeds any safe command non-deterministic supervisor that is consistent with the observations $O$. Thus, it can be interpreted in a similar way as $NS$ and $OC$. For example, at the initial state $(0,0^{com})$, it could issue any control command in $v_{1}$, $v_{2}$, $v_{3}$ and $v_{4}$ as these control commands can ensure the safety of the plant and the generation of the collected observations $H$ and $L$ at the initial state of $M_{o}$. After the command sending, $OCNS$ would transit to state $(1,0)$, at which it may observe $H$ or $L$, denoted by the transition labelled as $H$ and $L$ at state $(1,0)$. If it receives the observation $H$ and transits to state $(3,2^{com})$, then it could issue the control command $v_{3}$ as $v_{3}$ can ensure the safety of the plant and the generation of the collected observations $open$ at state $2$ of $M_{o}$; otherwise, i.e., if it receives the observation $L$ and transits to state $(2,1^{com})$, then it could issue the control command $v_{2}$ as $v_{2}$ can ensure the safety of the plant and the generation of the collected observations $close$ at state $1$ of $M_{o}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=4.5cm]{R3C19_OCNS_A.pdf}
\caption{The constructed $OCNS^{A}$}
\label{fig:Example_OCNS^{A}}
\end{center}
\end{figure}
Based on \textbf{Step 2.3}, we shall explain how to construct $OCNS^{A}$ (Fig. \ref{fig:Example_OCNS^{A}}).
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Based on step 1, we shall add a new state $q_{cov}^{brk}$ to explicitly denote that the supervisor has detected the existence of an attack and the covertness of the attacker has been broken.
\item We need to construct the transitions based on step 3. For example, if $q = (1,0)$ and $q^{'} = (3,2^{com})$, then we have $\xi_{ocns}(q = (1,0), H) = q' = (3,2^{com}) \Rightarrow \xi_{ocns}^{a}(q = (1,0), H^{\#}) = q' = (3,2^{com}) \wedge \xi_{ocns}^{a}(q = (1,0), H) = (1,0)$, which are illustrated by the transition labelled as $H^{\#}$ from state $(1,0)$ to state $(3,2^{com})$ and the self-loop transition labelled as $H$ at state $(1,0)$ in Fig. \ref{fig:Example_OCNS^{A}}.
\item We need to construct the transitions based on step 4. However, since $\Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a}) = \{close,open\} \cap (\emptyset \cup \{H,L,EH,EL\}) = \emptyset$, the construction for step 4 is not needed.
\item We need to construct the transitions based on step 5, that is, retain all the transitions of $OCNS$ labelled as events in $(\Sigma - \Sigma_{s,a}) \cup \Gamma$.
\item We need to construct the transitions based on step 6. For $NS$, its state set is divided into two sets, the set of reaction states $Q_{ns}^{rea}$ and the set of control states $Q_{ns}^{com}$. For any state of $Q_{ns}^{rea}$, no event in $\Gamma$ is defined; thus, for $NS$ shown in Fig. \ref{fig:R3C16_NS}, we know that $Q_{ns}^{rea} = \{1,4,5,6\}$. Then, based on step 6 and taking $q = (1,0)$ as an instance, we have $q = (1,0) \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\}) \wedge \neg \xi_{ocns}(q = (1,0), close)! \Rightarrow \xi_{ocns}^{a}(q = (1,0), close) = q_{cov}^{brk}$, which is illustrated by the transition labelled as $close$ from state $(1,0)$ to state $q_{cov}^{brk}$ in Fig. \ref{fig:Example_OCNS^{A}}. Similarly, the transition labelled as $open$ at state $(1,0)$ would also lead to a transition to state $q_{cov}^{brk}$ in Fig. \ref{fig:Example_OCNS^{A}}.
\item We need to construct the transitions based on step 7. For example, if $q = (1,0)$, then we have $q = (1,0) \in Q_{ns}^{rea} \times (Q_{o} \cup \{q_{oc}^{dump}\}) \wedge \neg \xi_{ocns}(q = (1,0), EH)! \Rightarrow \xi_{ocns}^{a}(q = (1,0), EH^{\#}) = q_{cov}^{brk}$, which is illustrated by the transition labelled as $EH^{\#}$ from state $(1,0)$ to state $q_{cov}^{brk}$ in Fig. \ref{fig:Example_OCNS^{A}}. Similarly, the transition labelled as $EL^{\#}$ at state $(1,0)$ would also result in a transition to state $q_{cov}^{brk}$ in Fig. \ref{fig:Example_OCNS^{A}}.
\end{enumerate}
The meaning of $OCNS^{A}$ is: $OCNS^{A}$ embeds any safe command non-deterministic supervisor that is consistent with observations $O$, where the effects of sensor-actuator attacks, e.g., event relabelling for $\Sigma_{s,a}$, and the covertness-breaking situations of attackers are also encoded. For example, at the initial state $(0,0^{com})$ of $OCNS^{A}$ in Fig. \ref{fig:Example_OCNS^{A}}, it could issue any control command in $\{v_{1}, v_{2}, v_{3}, v_{4}\}$, as these control commands can ensure the safety of the plant and the generation of the observations $H$ and $L$ collected at the initial state of $M_{o}$. After sending the command, it would transit to state $(1,0)$, at which the event $H$ or $L$ could be executed at the plant. However, due to the existence of the sensor attack, events in $\Sigma_{s,a}$ are executed by the plant, while events in $\Sigma_{s,a}^{\#}$ are the attacked copies sent by the sensor attacker and received by the supervisor. Thus, the execution of $H$ or $L$ at the plant leads to a self-loop transition at state $(1,0)$, as these events are unobservable to the supervisor, and only the transition labelled as $H^{\#}$ or $L^{\#}$ leads to a state change, denoted by the transition labelled as $H^{\#}$ ($L^{\#}$, respectively) from state $(1,0)$ to state $(3,2^{com})$ (state $(2,1^{com})$, respectively). In addition, in this work, we assume that the supervisor is embedded with a monitor to detect the existence of attacks, and once an information inconsistency happens, it can assert that there exists an attack and halt the system execution. Thus, at state $(1,0)$, i.e., right after the supervisor has issued the initial control command, it knows\footnote{$OCNS^{A}$ is derived from the synchronous product of $NS$ and $OC$, where $NS$ already encodes the information that should be observed under the absence of attack, i.e., the monitoring mechanism is implicitly encoded in $NS$.} that only $H^{\#}$ or $L^{\#}$ could be observed, and once any other event in $\{EL^{\#},EH^{\#},close,open\}$ is observed, it can assert that an attack has happened, denoted by the transitions labelled with events in $\{EL^{\#},EH^{\#},close,open\}$ from state $(1,0)$ to state $q_{cov}^{brk}$, i.e., the covertness of attackers has been broken.
\emph{Theorem IV.1:} Given a set of observations $O$, for any safe supervisor $S$ that is consistent with $O$, i.e., $O \subseteq P_{o}(L(G||CE||BT(S)))$, it holds that $L(BT(S)^{A}) \subseteq L(OCNS^{A})$.
\emph{Proof:} See Appendix \ref{appendix: 1}. \hfill $\blacksquare$
\vspace{0.1cm}
\emph{Corollary IV.1:} For any safe supervisor $S$ consistent with $O$, if $s \in L(BT(S)^{A})$, it holds that $\xi_{bs,a}(q_{bs,a}^{init}, s) = q^{detect} \Leftrightarrow \xi_{ocns}^{a}(q_{ocns}^{init,a}, s) = q_{cov}^{brk}$.
\emph{Proof:} This follows from the analysis given in the proof of \emph{Theorem IV.1}. \hfill $\blacksquare$
\vspace{0.1cm}
\noindent \textbf{Step 3: Construction of $\overline{S^{\downarrow,A}}$}
\vspace{0.1cm}
Next, as stated in the main idea of Section \ref{subsec:Main idea of the Solution Methodology}, we shall construct $\overline{S^{\downarrow,A}}$~\cite{LTZS20}, a complete automaton whose marked behavior models the least permissive supervisor (that is consistent with $O$) under attack. The step-by-step construction procedure is given as follows, including \textbf{Step 3.1} - \textbf{Step 3.3}.
\vspace{0.1cm}
\noindent \textbf{Step 3.1: Construction of $S^{\downarrow}$}
\vspace{0.1cm}
Based on the model $M_{o} = (Q_{o}, \Sigma_{o}, \xi_{o}, q_{o}^{init})$ that captures the observations $O$, we shall construct the least permissive supervisor $S^{\downarrow}$~\cite{LTZS20} which is consistent with $O$. Let
\[
S^{\downarrow} = (Q_{s}^{\downarrow}, \Sigma_{s}^{\downarrow}, \xi_{s}^{\downarrow}, q_{s}^{init,\downarrow})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{s}^{\downarrow} = Q_{o}$
\item $\Sigma_{s}^{\downarrow} = \Sigma$
\item $(\forall q, q' \in Q_{o})(\forall \sigma \in \Sigma_{o}) \, \xi_{o}(q, \sigma) = q' \Rightarrow \xi_{s}^{\downarrow}(q, \sigma) = q'$. (transitions retaining)
\item $(\forall q \in Q_{o})(\forall \sigma \in \Sigma_{uc} \cap \Sigma_{uo} = \Sigma_{uo}) \, \xi_{s}^{\downarrow}(q, \sigma) = q$. (controllability and observability requirement)
\item $(\forall q \in Q_{o})(\forall \sigma \in \Sigma_{uc} \cap \Sigma_{o}) \, \neg \xi_{o}(q, \sigma)! \Rightarrow \xi_{s}^{\downarrow}(q, \sigma) = q_{o}^{dl}$. (controllability requirement)
\item $q_{s}^{init,\downarrow} = q_{o}^{init}$
\end{enumerate}
At Step 3, we shall retain all the transitions originally defined in $M_{o}$. Then, at any state $q \in Q_{o}$, Step 4 and Step 5 would complete the undefined transitions labelled by events in $\Sigma_{uc}$ to satisfy the controllability, where the unobservable parts would lead to self-loops at Step 4 to satisfy the observability, and the observable parts would transit to the deadlocked state $q_{o}^{dl}$ at Step 5.
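The construction can be phrased compactly in code. The following Python sketch implements steps 1--6 above; the \texttt{Automaton} record and the explicit argument \texttt{q\_dl} for the deadlocked state $q_{o}^{dl}$ are illustrative assumptions.
\begin{verbatim}
def build_S_down(M_o, Sigma, Sigma_uc, Sigma_o, Sigma_uo, q_dl):
    Q = set(M_o.Q)                              # step 1: same state set as M_o
    xi = dict(M_o.xi)                           # step 3: retain all transitions of M_o
    for q in M_o.Q:
        for sigma in Sigma_uc & Sigma_uo:       # step 4: unobservable (uncontrollable)
            xi[(q, sigma)] = q                  #         events self-loop
        for sigma in Sigma_uc & Sigma_o:        # step 5: uncollected observable
            if (q, sigma) not in M_o.xi:        #         uncontrollable events go to q_dl
                xi[(q, sigma)] = q_dl
    return Automaton(Q, Sigma, xi, M_o.q_init)  # steps 2 and 6
\end{verbatim}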
\emph{Theorem IV.2:} Given a set of observations $O$, $S^{\downarrow}$ is the least permissive supervisor among all the supervisors that are consistent with $O$.
\emph{Proof:} See Appendix \ref{appendix: 2}. \hfill $\blacksquare$
\vspace{0.1cm}
\noindent \textbf{Step 3.2: Construction of $S^{\downarrow,A}$}
\vspace{0.1cm}
Based on $S^{\downarrow}$, we shall construct $S^{\downarrow,A}$, whose behavior encodes the least permissive supervisor (consistent with $O$) under the effects of the sensor-actuator attack. The following procedure is similar to the procedure of $BT(S)^A$ given in \textbf{Step 2} of Section \ref{subsec:unknown supervisor}.
\[
S^{\downarrow,A} = (Q_{s}^{\downarrow,a}, \Sigma_{s}^{\downarrow,a}, \xi_{s}^{\downarrow,a}, q_{s}^{init,\downarrow,a})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{s}^{\downarrow,a} = Q_{s}^{\downarrow} \cup \{q^{risk}\}$
\item $\Sigma_{s}^{\downarrow,a} = \Sigma \cup \Sigma_{s,a}^{\#}$
\item $(\forall q, q' \in Q_{s}^{\downarrow,a})(\forall \sigma \in \Sigma_{s,a}) \, \xi_{s}^{\downarrow}(q, \sigma) = q' \Rightarrow \xi_{s}^{\downarrow,a}(q, \sigma^{\#}) = q' \wedge \xi_{s}^{\downarrow,a}(q, \sigma) = q$.
\item $(\forall q \in Q_{s}^{\downarrow,a})(\forall \sigma \in \Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a})) \, \neg \xi_{s}^{\downarrow}(q, \sigma)! \Rightarrow \xi_{s}^{\downarrow,a}(q, \sigma) = q$.
\item $(\forall q, q' \in Q_{s}^{\downarrow,a})(\forall \sigma \in \Sigma - \Sigma_{s,a}) \, \xi_{s}^{\downarrow}(q, \sigma) = q' \Rightarrow \xi_{s}^{\downarrow,a}(q, \sigma) = q'$.
\item $(\forall q \in Q_{s}^{\downarrow,a})(\forall \sigma \in \Sigma_{o} - \Sigma_{s,a}) \, \neg \xi_{s}^{\downarrow}(q, \sigma)! \Rightarrow \xi_{s}^{\downarrow,a}(q, \sigma) = q^{risk}$.
\item $(\forall q \in Q_{s}^{\downarrow,a})(\forall \sigma \in \Sigma_{s,a}) \, \neg \xi_{s}^{\downarrow}(q, \sigma)! \Rightarrow \xi_{s}^{\downarrow,a}(q, \sigma^{\#}) = q^{risk}$.
\item $q_{s}^{init,\downarrow,a} = q_{s}^{init,\downarrow}$
\end{enumerate}
The construction of $S^{\downarrow,A}$ is similar to that of $BT(S)^{A}$ in \textbf{Step 2} of Section \ref{subsec:unknown supervisor}, where $q^{risk}$ in $S^{\downarrow,A}$ serves as the same role as $q^{detect}$ in $BT(S)^{A}$.
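As a companion to the definition above, the following Python sketch implements steps 1--8, with the attacked copy of an event $\sigma$ written as the pair \texttt{(sigma, '\#')}; the conventions are the same illustrative ones used earlier.
\begin{verbatim}
def build_S_down_A(S_down, Sigma, Sigma_o, Sigma_uo, Sigma_sa, Sigma_ca):
    attacked = lambda sigma: (sigma, "#")                 # the attacked copy sigma^#
    Q = set(S_down.Q) | {"q_risk"}                        # step 1
    xi = {}
    for (q, sigma), q2 in S_down.xi.items():
        if sigma in Sigma_sa:                             # step 3: relabel defined attackable
            xi[(q, attacked(sigma))] = q2                 #         events, self-loop the genuine
            xi[(q, sigma)] = q                            #         occurrence
        else:
            xi[(q, sigma)] = q2                           # step 5: keep events outside Sigma_sa
    for q in S_down.Q:
        for sigma in Sigma_ca & (Sigma_uo | Sigma_sa):    # step 4
            if (q, sigma) not in S_down.xi:
                xi[(q, sigma)] = q
        for sigma in Sigma_o - Sigma_sa:                  # step 6: undefined observable
            if (q, sigma) not in S_down.xi:               #         events lead to q_risk
                xi[(q, sigma)] = "q_risk"
        for sigma in Sigma_sa:                            # step 7: undefined attackable
            if (q, sigma) not in S_down.xi:               #         events lead to q_risk
                xi[(q, attacked(sigma))] = "q_risk"
    alphabet = Sigma | {attacked(s) for s in Sigma_sa}    # step 2
    return Automaton(Q, alphabet, xi, S_down.q_init)      # step 8
\end{verbatim}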
\vspace{0.1cm}
\noindent \textbf{Step 3.3: Construction of $\overline{S^{\downarrow,A}}$}
\vspace{0.1cm}
Based on $S^{\downarrow,A}$, we shall construct $\overline{S^{\downarrow,A}}$ by performing the completion to make $\overline{S^{\downarrow,A}}$ become a complete automaton, where now only the marked behavior encodes the least permissive supervisor (consistent with $O$) under attack. Intuitively speaking, as long as the attacker makes use of the marked behavior of $\overline{S^{\downarrow,A}}$ to implement attacks, it can ensure damage-infliction against any (unknown) safe supervisor that is consistent with the observations.
\[
\overline{S^{\downarrow,A}} = (\overline{Q_{s}^{\downarrow,a}}, \overline{\Sigma_{s}^{\downarrow,a}}, \overline{\xi_{s}^{\downarrow,a}}, \overline{q_{s}^{init,\downarrow,a}}, \overline{Q_{s,m}^{\downarrow,a}})
\]
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $\overline{Q_{s}^{\downarrow,a}} = Q_{s}^{\downarrow,a} \cup \{q^{dump}\}$
\item $\overline{\Sigma_{s}^{\downarrow,a}} = \Sigma \cup \Sigma_{s,a}^{\#}$
\item $(\forall q, q' \in \overline{Q_{s}^{\downarrow,a}})(\forall \sigma \in \Sigma \cup \Sigma_{s,a}^{\#}) \, \xi_{s}^{\downarrow,a}(q, \sigma) = q' \Rightarrow \overline{\xi_{s}^{\downarrow,a}}(q, \sigma) = q'$. (transitions retaining)
\item $(\forall q \in \overline{Q_{s}^{\downarrow,a}})(\forall \sigma \in \Sigma \cup \Sigma_{s,a}^{\#}) \, \neg \xi_{s}^{\downarrow,a}(q, \sigma)! \Rightarrow \overline{\xi_{s}^{\downarrow,a}}(q, \sigma) = q^{dump}$. (transitions completion)
\item $(\forall \sigma \in \Sigma \cup \Sigma_{s,a}^{\#}) \, \overline{\xi_{s}^{\downarrow,a}}(q^{dump}, \sigma) = q^{dump}$. (transitions completion)
\item $\overline{q_{s}^{init,\downarrow,a}} = q_{s}^{init,\downarrow,a}$
\item $\overline{Q_{s,m}^{\downarrow,a}} = Q_{s}^{\downarrow,a}$
\end{enumerate}
Based on the model of $\overline{S^{\downarrow,A}}$, we know that $|\overline{Q_{s}^{\downarrow,a}}| = |Q_{o}| + 2$.
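The completion step admits an equally short Python sketch (same illustrative conventions; the marked-state set is passed as an additional field of the \texttt{Automaton} record).
\begin{verbatim}
def complete_S_down_A(S_down_A, alphabet):
    Q = set(S_down_A.Q) | {"q_dump"}              # step 1
    xi = dict(S_down_A.xi)                        # step 3: retain existing transitions
    for q in Q:                                   # steps 4-5: every missing transition,
        for sigma in alphabet:                    #   including those at q_dump itself,
            xi.setdefault((q, sigma), "q_dump")   #   is sent to q_dump
    Q_m = set(S_down_A.Q)                         # step 7: only the original states are marked
    return Automaton(Q, alphabet, xi, S_down_A.q_init, Q_m)
\end{verbatim}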
\vspace{0.1cm}
\textbf{Example IV.3} We shall continue with the water tank example. Based on \textbf{Step 3.1} - \textbf{Step 3.3} and the observation automaton $M_{o}$ shown in Fig. \ref{fig:Observations M_o}, we generate $S^{\downarrow}$, $S^{\downarrow,A}$ and $\overline{S^{\downarrow,A}}$, which are illustrated in Fig. \ref{fig:Example_S_downarrow}, Fig. \ref{fig:Example_S_downarrow_A} and Fig. \ref{fig:Example_S_downarrow_A_complete}, respectively. To help readers understand the construction procedure, we also present the detailed explanations about how to construct these models step-by-step as well as the meaning of each model.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=2.5cm]{S_downarrow.pdf}
\caption{The constructed $S^{\downarrow}$}
\label{fig:Example_S_downarrow}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=3cm]{R3C20_S_downarrow.png}
\caption{The construction procedure of $S^{\downarrow}$ from $M_{o}$. (i) is $M_{o}$. (ii) is $S^{\downarrow}$.}
\label{fig:R3C20_S_downarrow}
\end{center}
\end{figure}
Based on \textbf{Step 3.1}, we shall explain how to construct $S^{\downarrow}$.
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Based on step 1, the state set of $S^{\downarrow}$ is the same as that of $M_{o}$, that is, $Q_{s}^{\downarrow} = Q_{o} = \{0,1,2,3\}$.
\item We need to construct the transitions based on step 3, that is, retain all the transitions defined in $M_{o}$, which are illustrated in Fig. \ref{fig:R3C20_S_downarrow}.
\item We need to construct the transitions based on step 4. Since all the events in this water tank example are observable, the construction for step 4 is not needed.
\item We need to construct the transitions based on step 5. For example, if $q = 0$, then we have $\neg \xi_{o}(q = 0, EH)! \Rightarrow \xi_{s}^{\downarrow}(q = 0, EH) = q_{o}^{dl} = 3$ and $\neg \xi_{o}(q = 0, EL)! \Rightarrow \xi_{s}^{\downarrow}(q = 0, EL) = q_{o}^{dl} = 3$, which are illustrated by the transitions labelled as $EH$ and $EL$ from state 0 to state 3 in Fig. \ref{fig:R3C20_S_downarrow}.
\end{enumerate}
The meaning of $S^{\downarrow}$: $S^{\downarrow}$ encodes the least permissive supervisor that is consistent with observations $O$, which has been proved in Theorem IV.2 of the manuscript. We shall take the initial state of $S^{\downarrow}$ as an instance to explain its meaning. At the initial state 0 in Fig. \ref{fig:R3C20_S_downarrow}. (ii), $H$ and $L$ are defined because they are collected in the observations, implying that the supervisor must enable $H$ and $L$ at the initial state to ensure that $S^{\downarrow}$ is consistent with the observations. In addition, to satisfy the controllability, i.e., any uncontrollable event should always be enabled by the supervisor, two more transitions labelled as $EH$ and $EL$ should also be added at the initial state 0 of $S^{\downarrow}$, leading to state 3.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5.3cm]{S_downarrow_A.pdf}
\caption{The constructed $S^{\downarrow,A}$}
\label{fig:Example_S_downarrow_A}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=11cm]{R3C20_S_downarrow_A.png}
\caption{The construction procedure of $S^{\downarrow,A}$ from $S^{\downarrow}$. (i) is $S^{\downarrow}$. (iv) is $S^{\downarrow,A}$.}
\label{fig:R3C20_S_downarrow_A}
\end{center}
\end{figure}
Based on \textbf{Step 3.2}, we shall explain how to construct $S^{\downarrow,A}$.
\begin{enumerate}[1.]
\item Based on step 1, a new state $q^{risk}$ is added to the state set, thus, $Q_{s}^{\downarrow,a} = Q_{s}^{\downarrow} \cup \{q^{risk}\} = \{0,1,2,3,q^{risk}\}$, illustrated in Fig. \ref{fig:R3C20_S_downarrow_A}. (ii).
\item We need to construct the transitions based on step 3. For example, if $q = 0$ and $q^{'} = 1$, then we have $\xi_{s}^{\downarrow}(q = 0, L) = q' = 1 \Rightarrow \xi_{s}^{\downarrow,a}(q = 0, L^{\#}) = q' = 1 \wedge \xi_{s}^{\downarrow,a}(q = 0, L) = q = 0$, which is illustrated by the transition labelled as $L^{\#}$ from state 0 to state 1 and the self-loop transition labelled as $L$ at state 0 in Fig. \ref{fig:R3C20_S_downarrow_A}. (ii). Similarly, we could construct the transition labelled as $H^{\#}$ from state 0 to state 2, the transitions labelled as $EL^{\#}$ and $EH^{\#}$ from state 0 to state 3, and the self-loop transitions labelled as $H$, $EL$ and $EH$ at state 0 in Fig. \ref{fig:R3C20_S_downarrow_A}. (ii).
\item We need to construct the transitions based on step 4. However, since $\Sigma_{c,a} \cap (\Sigma_{uo} \cup \Sigma_{s,a}) = \{close,open\} \cap (\emptyset \cup \{H,L,EH,EL\}) = \emptyset$, the construction for step 4 is not needed.
\item We need to construct the transitions based on step 5. For example, if $q = 1$ and $q^{'} = 3$, then we have $\xi_{s}^{\downarrow}(q = 1, close) = q' = 3 \Rightarrow \xi_{s}^{\downarrow,a}(q = 1, close) = q' = 3$, which is illustrated by the transition labelled as $close$ from state 1 to state 3 in Fig. \ref{fig:R3C20_S_downarrow_A}. (iii). Similarly, we could construct the transition labelled as $open$ from state 2 to state 3 in Fig. \ref{fig:R3C20_S_downarrow_A}. (iii).
\item We need to construct the transitions based on step 6. For example, if $q = 1$, then we have $\neg \xi_{s}^{\downarrow}(q = 1, open)! \Rightarrow \xi_{s}^{\downarrow,a}(q = 1, open) = q^{risk}$, which is illustrated by the transition labelled as $open$ from state 1 to state $q^{risk}$ in Fig. \ref{fig:R3C20_S_downarrow_A}. (iv). Similarly, we could construct 1) the transitions labelled as $close$ and $open$ from state 0 to state $q^{risk}$, 2) the transition labelled as $close$ from state 2 to state $q^{risk}$, and 3) the transitions labelled as $close$ and $open$ from state 3 to state $q^{risk}$ in Fig. \ref{fig:R3C20_S_downarrow_A}. (iv).
\item We need to construct the transitions based on step 7. It can be checked that there are no new transitions satisfying the condition given at this step.
\end{enumerate}
The meaning of $S^{\downarrow,A}$: $S^{\downarrow,A}$ represents the least permissive supervisor that is consistent with the collected observations, where the effects of sensor-actuator attacks are encoded. For example, at the initial state 0 of $S^{\downarrow,A}$ in Fig. \ref{fig:R3C20_S_downarrow_A}. (iv), $H$ and $L$ are enabled by the supervisor because they are collected in the observations, and $EH$ and $EL$ are also enabled by the supervisor because they are uncontrollable events, which should always be enabled. However, due to the existence of sensor attacks, events in $\Sigma_{s,a}$ are the ones executed in the plant and only the attacked copies in $\Sigma_{s,a}^{\#}$ are the events sent by the sensor attacker and observed by the supervisor. Thus, at the initial state 0 of $S^{\downarrow,A}$, the transitions labelled as $H$, $L$, $EH$ and $EL$ are self-loops, while the transitions labelled as $H^{\#}$, $L^{\#}$, $EH^{\#}$ and $EL^{\#}$ enable $S^{\downarrow,A}$ to make a state transition, meaning that $H^{\#}$, $L^{\#}$, $EH^{\#}$ and $EL^{\#}$ are observed by the supervisor. The transitions labelled as $close$ and $open$ from the initial state 0 to state $q^{risk}$ mean the following: although $close$ and $open$ are not collected at the initial state of $M_{o}$, due to the finite observations, it is still possible that they are enabled by some supervisor. However, making use of such uncollected event-enabling information is risky for the attacker, as it might not remain covert against every supervisor that is consistent with the collected observations. Thus, the transitions labelled as $close$ and $open$ at state 0 lead to state $q^{risk}$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5.3cm]{S_Down_A_complete.pdf}
\caption{The constructed $\overline{S^{\downarrow,A}}$}
\label{fig:Example_S_downarrow_A_complete}
\end{center}
\end{figure}
Based on \textbf{Step 3.3}, we shall explain how to construct $\overline{S^{\downarrow,A}}$.
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Based on step 1, a new state $q^{dump}$ is added to the state set, thus, $\overline{Q_{s}^{\downarrow,a}} = Q_{s}^{\downarrow,a} \cup \{q^{dump}\} = \{0,1,2,3,q^{risk},q^{dump}\}$, illustrated in Fig. \ref{fig:Example_S_downarrow_A_complete}.
\item We need to construct the transitions based on step 3, that is, retain all the transitions defined in $S^{\downarrow,A}$, which are illustrated in Fig. \ref{fig:Example_S_downarrow_A_complete}.
\item We need to construct the transitions based on step 4 and step 5, that is, for any state of $S^{\downarrow,A}$, we shall add the transitions, labelled as events in $\Sigma \cup \Sigma_{s,a}^{\#} = \{H,L,EH,EL,close,open,H^{\#},L^{\#},EH^{\#},EL^{\#}\}$, that are not defined at that state to make $\overline{S^{\downarrow,A}}$ become a complete automaton.
\end{enumerate}
The meaning of $\overline{S^{\downarrow,A}}$: $\overline{S^{\downarrow,A}}$ can be interpreted in a similar way to $S^{\downarrow,A}$, and the only difference is that $\overline{S^{\downarrow,A}}$ is a complete automaton while $S^{\downarrow,A}$ is not. Our motivation for constructing such a complete automaton, where only the marked behavior encodes the least permissive supervisor (consistent with $O$) under attack, is to provide convenience when we prove the decidability result in Theorem IV.5. Intuitively speaking, as long as the attacker makes use of the marked behavior of $\overline{S^{\downarrow,A}}$ to implement attacks, it can ensure damage-infliction against any (unknown) safe supervisor that is consistent with the observations.
\vspace{0.1cm}
\noindent \textbf{Step 4: Synthesis of the sensor-actuator attacker $A$}
\vspace{0.1cm}
Now, we are ready to provide the procedure for the synthesis of the supremal covert damage-reachable sensor-actuator attacker against all the safe supervisors that are consistent with the observations, which is given as follows:
\noindent \textbf{Procedure 2:}
\begin{enumerate}[1.]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Compute $\mathcal{P} = G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}} = (Q_{\mathcal{P}}, \Sigma_{\mathcal{P}}, \xi_{\mathcal{P}}, q_{\mathcal{P}}^{init}, Q_{\mathcal{P},m})$.
\item Generate $\mathcal{P}_{r} = (Q_{\mathcal{P}_{r}}, \Sigma_{\mathcal{P}_{r}}, \xi_{\mathcal{P}_{r}}, q_{\mathcal{P}_{r}}^{init}, Q_{\mathcal{P}_{r},m})$.
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{\mathcal{P}_{r}} = Q_{\mathcal{P}} - Q_{1}$
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $Q_{1} = \{(q, q_{ce,a}, q_{ac}, q_{ocns}^{a}, \overline{q_{s}^{\downarrow,a}}) \in Q_{\mathcal{P}}|\, q \notin Q_{d} \wedge q_{ocns}^{a} = q_{cov}^{brk}\}$
\end{itemize}
\item $\Sigma_{\mathcal{P}_{r}} = \Sigma_{\mathcal{P}}$
\item $(\forall q, q' \in Q_{\mathcal{P}_{r}})(\forall \sigma \in \Sigma_{\mathcal{P}_{r}}) \, \xi_{\mathcal{P}}(q, \sigma) = q' \Leftrightarrow \xi_{\mathcal{P}_{r}}(q, \sigma) = q'$
\item $q_{\mathcal{P}_{r}}^{init} = q_{\mathcal{P}}^{init}$
\item $Q_{\mathcal{P}_{r},m} = Q_{\mathcal{P},m} - Q_{1}$
\end{itemize}
\item Synthesize the supremal supervisor $\mathcal{A} = (Q_{a}, \Sigma_{a}, \xi_{a}, q_{a}^{init}, Q_{a,m})$ over the attacker's control constraint $\mathscr{C}_{ac}$ by treating $\mathcal{P}$ as the plant and $\mathcal{P}_{r}$ as the requirement such that $\mathcal{P}||\mathcal{A}$ is marker-reachable and safe w.r.t. $\mathcal{P}_{r}$.
\end{enumerate}
We shall briefly explain \textbf{Procedure 2}. At Step 1, we generate a new plant $\mathcal{P} = G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}$. At Step 2, we generate the requirement $\mathcal{P}_{r}$ from $\mathcal{P}$ by removing those states where the covertness is broken, denoted by $q \notin Q_{d} \wedge q_{ocns}^{a} = q_{cov}^{brk}$. Then we synthesize the supremal sensor-actuator attacker at Step 3. Intuitively speaking, 1) since $OCNS^{A}$ has encoded all the safe bipartite supervisors that are consistent with the observations $O$, removing those covertness-breaking states in the requirement $\mathcal{P}_{r}$ can enforce the covertness against any unknown safe supervisor that is consistent with $O$, and 2) since the marked behavior of $\overline{S^{\downarrow,A}}$ encodes the least permissive supervisor under attack, ensuring the marker-reachability of $\mathcal{P}||\mathcal{A}$ can enforce that the attacker can always cause damage-infliction against any (unknown) safe supervisor that is consistent with $O$.
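For concreteness, \textbf{Procedure 2} can be sketched as the following Python-style pseudocode. The helper routines \texttt{parallel\_composition}, \texttt{prune\_states} and \texttt{synthesize\_supremal\_supervisor} (e.g., as offered by a synthesis tool such as \textbf{SuSyNA}) and the attribute names are illustrative assumptions, not an actual tool interface.
\begin{verbatim}
def procedure_2(G, CE_A, AC, OCNS_A, S_down_A_bar, C_ac):
    # Step 1: the new plant P = G || CE^A || AC || OCNS^A || overline(S^{down,A})
    P = parallel_composition(G, CE_A, AC, OCNS_A, S_down_A_bar)
    # Step 2: the requirement P_r removes the covertness-breaking states,
    #         i.e. states with q not in Q_d and q_ocns^a = q_cov^brk
    bad = {s for s in P.states
           if s.q not in G.damage_states and s.q_ocns_a == "q_cov_brk"}
    P_r = prune_states(P, bad)
    # Step 3: supremal supervisor over the attacker's control constraint C_ac,
    #         such that P || A is marker-reachable and safe w.r.t. P_r
    return synthesize_supremal_supervisor(plant=P, requirement=P_r,
                                          constraint=C_ac)
\end{verbatim}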
Next, we shall formally prove the correctness of the proposed solution methodology.
\vspace{0.1cm}
\emph{Theorem IV.3:} Given a set of observations $O$, the sensor-actuator attacker $\mathcal{A}$ generated in \textbf{Procedure 2}, if non-empty, is covert for any safe supervisor that is consistent with $O$.
\emph{Proof:} See Appendix \ref{appendix: 3}. \hfill $\blacksquare$
\vspace{0.1cm}
\emph{Theorem IV.4:} Given a set of observations $O$, the sensor-actuator attacker $\mathcal{A}$ generated in \textbf{Procedure 2}, if non-empty, is damage-reachable for any safe supervisor that is consistent with $O$.
\emph{Proof:} See Appendix \ref{appendix: 4}. \hfill $\blacksquare$
Now, we are ready to show that \textbf{Problem 1}, the main problem to be solved in this work, can be reduced to a Ramadge-Wonham supervisory control problem.
\vspace{0.1cm}
\emph{Theorem IV.5:} Given the plant $G$ and a set of observations $O$, there exists a covert damage-reachable sensor-actuator attacker $\mathcal{A} = (Q_{a}, \Sigma_{a}, \xi_{a}, q_{a}^{init})$ w.r.t. the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$ against any safe supervisor that is consistent with $O$ if and only if there exists a supervisor $S'$ over the attacker's control constraint $\mathscr{C}_{ac}$ for the plant $\mathcal{P} = G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}$ such that
\begin{enumerate}[a)]
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item Any state in $\{(q, q_{ce,a}, q_{ac}, q_{ocns}^{a}, \overline{q_{s}^{\downarrow,a}}, q_{s}') \in Q \times Q_{ce,a} \times Q_{ac} \times Q_{ocns}^{a} \times \overline{Q_{s}^{\downarrow,a}} \times Q_{s}'| \, q \notin Q_{d} \wedge q_{ocns}^{a} = q_{cov}^{brk}\}$ is not reachable in $\mathcal{P}||S'$, where $Q_{s}'$ is the state set of $S'$.
\item $\mathcal{P}||S'$ is marker-reachable.
\end{enumerate}
\emph{Proof:} See Appendix \ref{appendix: 5}. \hfill $\blacksquare$
\vspace{0.1cm}
\emph{Theorem IV.6:} The sensor-actuator attacker $\mathcal{A}$ generated in \textbf{Procedure 2}, if non-empty, is a solution for \textbf{Problem 1}.
\emph{Proof:} Based on \emph{Theorem IV.5}, the problem of synthesizing a covert damage-reachable sensor-actuator attacker $\mathcal{A}$ w.r.t. the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$ against any safe supervisor that is consistent with the observations $O$ has been reduced to a Ramadge-Wonham supervisory control problem formulated at Step 3 of \textbf{Procedure 2}. Thus, \textbf{Procedure 2} can synthesize the supremal covert damage-reachable sensor-actuator attacker against any (unknown) safe supervisor that is consistent with the observations $O$ for \textbf{Problem 1}. \hfill $\blacksquare$
\vspace{0.1cm}
\emph{Theorem IV.7:} The supremal covert damage-reachable sensor-actuator attacker against any safe supervisor that is consistent with $O$ exists.
\emph{Proof:} This is straightforward based on the attacker's control constraint $\mathscr{C}_{ac} = (\Sigma_{c,a} \cup \Sigma_{s,a}^{\#} \cup \{stop\}, \Sigma_{o} \cup \Sigma_{s,a}^{\#} \cup \{stop\})$ and the fact that $\Sigma_{c, a} \subseteq \Sigma_c \subseteq \Sigma_o$. \hfill $\blacksquare$
\vspace{0.1cm}
\emph{Theorem IV.8:} \textbf{Problem 1} is decidable.
\emph{Proof:} Based on \emph{Theorem IV.5} and \emph{Theorem IV.6}, and the fact that \textbf{Procedure 2} terminates within finite steps, we immediately have this result. \hfill $\blacksquare$
\vspace{0.1cm}
Next, we shall analyze the computational complexity of the proposed algorithm to synthesize a covert damage-reachable sensor-actuator attacker, which depends on the complexity of two synthesis steps: \textbf{Procedure 1} and \textbf{Procedure 2}. By using the normality-based synthesis approach \cite{WMW10,WLLW18}, the complexities of \textbf{Procedure 1} and \textbf{Procedure 2} are $O((|\Sigma|+|\Gamma|)2^{|Q|\times|Q_{ce}|})$ and $O((|\Sigma| + |\Gamma|)2^{|Q_{\mathcal{P}}|})$, respectively, where
\begin{itemize}
\setlength{\itemsep}{3pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item $|Q_{\mathcal{P}}| = |Q| \times |Q_{ce,a}| \times |Q_{ac}| \times |Q_{ocns}^{a}| \times |\overline{Q_{s}^{\downarrow,a}}|$
\item $|Q_{ce}| = |Q_{ce,a}| = |\Gamma| + 1$
\item $|Q_{ac}| = 3$
\item $|Q_{ocns}^{a}| \leq 2^{|Q| \times |Q_{ce}|} \times (2|Q_{o}| + 1) + 1$
\item $|\overline{Q_{s}^{\downarrow,a}}| = |Q_{o}| + 2$
\end{itemize}
Thus, the computational complexity of the proposed algorithm is $O((|\Sigma|+|\Gamma|)2^{|Q|\times|Q_{ce}|} + (|\Sigma| + |\Gamma|)2^{|Q_{\mathcal{P}}|}) = O((|\Sigma| + |\Gamma|)2^{|Q_{\mathcal{P}}|})$, which is doubly exponential due to the partial observation supervisor synthesis algorithm in \textbf{Procedure 2} and the exponential blowup in the state size of $OCNS^{A}$. However, we are not sure whether the above doubly exponential complexity upper bound is tight; this is left as future work.
\emph{Remark IV.1:} If we remove the assumption that $\Sigma_c \subseteq \Sigma_o$, then \textbf{Procedure 2} is still a sound procedure for \textbf{Problem 1} but it is in general not complete, as now $S^{\downarrow}$ is only an under-approximation of any supervisor that is consistent with $O$~\cite{LTZS20}.
\textbf{Example IV.4} We shall continue with the water tank example. Based on \textbf{Procedure 2},
we can use \textbf{SuSyNA} \cite{Susyna} to synthesize the supremal covert damage-reachable sensor-actuator attacker $\mathcal{A}$, which is illustrated in Fig. \ref{fig:Example_A}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=8.5cm]{A.pdf}
\caption{The synthesized supremal covert damage-reachable sensor-actuator attacker $\mathcal{A}$. The state marked blue is a state denoting that the damage infliction has been caused to $G$.}
\label{fig:Example_A}
\end{center}
\end{figure}
Intuitively, the attack strategy of the synthesized sensor-actuator attacker $\mathcal{A}$ is explained as follows. Upon the observation of $H$ ($L$, respectively), it would immediately replace it with the fake sensing information $L^{\#}$ ($H^{\#}$, respectively). Once the supervisor receives the fake information, it will issue the inappropriate control command $v_{2}$ ($v_{3}$, respectively). When the water tank system receives such a control command, it will execute the event $close$ ($open$, respectively), so that the water level becomes $EH$ ($EL$, respectively), that is, the damage-infliction goal is achieved. In addition, such an attack strategy always allows the attacker to remain covert against any safe supervisor that is consistent with the observations $O$ shown in Fig. \ref{fig:Observations M_o}.
\section{Conclusions}
\label{sec:Conclusions}
This work investigates the problem of synthesizing the supremal covert sensor-actuator attacker to ensure damage reachability against unknown supervisors, where only a finite set of collected observations instead of the supervisor model is needed. We have shown the decidability of the observation-assisted covert damage-reachable attacker synthesis problem. Our solution methodology is to reduce the original problem into the Ramadge-Wonham supervisory control problem, which allows several existing synthesis tools \cite{Susyna}-\cite{Malik07} to be used for the synthesis of covert damage-reachable attackers against unknown supervisors. In future work, we shall relax the assumption $\Sigma_{c} \subseteq \Sigma_{o}$ and study the decidability problem, and explore more powerful synthesis approaches to achieve the damage-nonblocking goal against unknown supervisors.
Another interesting topic is to find the minimal supervisor information needed to synthesize an attacker, which is naturally related to an optimization problem; one possible way is to introduce a cost function w.r.t. the supervisor information, similar to \cite{RKL2021}.
\begin{appendices}
\section{Proof of Theorem IV.1}
\label{appendix: 1}
Firstly, we prove $L(BT(S)^{M}) \subseteq L(OCNS) = L(NS||OC)$. To prove this result, we only need to show $L(BT(S)) \subseteq L(OCNS) = L(NS||OC)$ as $L(BT(S)^{M}) \subseteq L(BT(S))$. It is straightforward that $L(BT(S)) \subseteq L(NS)$ as $BT(S)$ is safe and $NS$ is the supremal safe command-nondeterministic supervisor. It is also clear that $L(BT(S)) \subseteq L(OC)$, as $BT(S)$ is consistent with the observations $O$ and $OC$ embeds any supervisor that is consistent with $O$. Thus, $L(BT(S)^{M}) \subseteq L(BT(S)) \subseteq L(NS) \cap L(OC) = L(OCNS)$.
For any string $s \in L(BT(S)^{M})$ of the form $s_1\gamma$, where $\gamma \in \Gamma$, based on the above analysis, we have $s \in L(OCNS) = L(OC) \cap L(NS)$.
Thus, we have $En_{OCNS}(\xi_{ocns}(q_{ocns}^{init}, s)) = En_{OC}(\xi_{oc}(q_{oc}^{init}, s)) \cap En_{NS}(\xi_{ns}(q_{ns}^{init}, s)) = En_{NS}(\xi_{ns}(q_{ns}^{init}, s))$ as any event in $\Sigma$ is defined at the state $\xi_{oc}(q_{oc}^{init}, s)$ of $OC$ by construction. Since $NS \Vert P_{\Sigma_o \cup \Gamma}(G\lVert CE) = NS$ and $BT(S)^{M} = BT(S) \lVert P_{\Sigma_o \cup \Gamma}(G\lVert CE)$, we have $En_{OCNS}(\xi_{ocns}(q_{ocns}^{init}, s)) = En_{NS}(\xi_{ns}(q_{ns}^{init}, s)) = En_{BT(S)^{M}}(\xi_{bs,1}(q_{bs,1}^{init}, s))$. Since \textbf{Step 2} of constructing $BT(S)^{A}$ based on $BT(S)^{M}$ in Section \ref{subsec:unknown supervisor} and \textbf{Step 2.3} of constructing $OCNS^{A}$ based on $OCNS$ follow the same procedures, we conclude that $L(BT(S)^{A}) \subseteq L(OCNS^{A})$. This completes the proof. \hfill $\blacksquare$
\section{Proof of Theorem IV.2}
\label{appendix: 2}
Firstly, we prove $S^{\downarrow}$ is consistent with $O$. Since $O$ is a finite set of observations of the executions of $G||S$, we have $O \subseteq P_o(L(G|| S))$. Thus, $O \subseteq P_o(L(G))$. Since $L(S^{\downarrow}) = P_{o}^{-1}(O(\Sigma_{uc} \cap \Sigma_o)^*)$, we have $P_o(L(G|| S^{\downarrow})) = P_o(L(G) \cap P_{o}^{-1}(O(\Sigma_{uc} \cap \Sigma_o)^*)) = P_o(L(G)) \cap O(\Sigma_{uc} \cap \Sigma_o)^*\supseteq O$.
Secondly, we prove $S^{\downarrow}$ is the least permissive supervisor that is consistent with $O$. We use the fact that every supervisor $S$ over the control constraint $(\Sigma_c, \Sigma_o)$, where $\Sigma_c \subseteq \Sigma_o$, satisfies $L(S)=P_o^{-1}(P_o(L(S))(\Sigma_{uc} \cap \Sigma_{o})^*)~$\cite{Lin15}.
Thus, for any supervisor $S$ consistent with $O$, since $O \subseteq P_{o}(L(G||S))= P_{o}(L(G) \cap P_{o}^{-1}(P_{o}(L(S))(\Sigma_{uc} \cap \Sigma_{o})^{*}))= P_{o}(L(G)) \cap P_{o}(L(S))(\Sigma_{uc} \cap \Sigma_{o})^{*}$, we have $O \subseteq P_{o}(L(S))(\Sigma_{uc} \cap \Sigma_{o})^{*}$. Thus, $L(S^{\downarrow}) = P_{o}^{-1}(O(\Sigma_{uc} \cap \Sigma_{o})^{*}) \subseteq P_{o}^{-1}(P_{o}(L(S))(\Sigma_{uc} \cap \Sigma_{o})^{*}) = L(S)$. This completes the proof. \hfill $\blacksquare$
\section{Proof of Theorem IV.3}
\label{appendix: 3}
We need to prove that, for any safe supervisor $S$, any state in $\{(q, q_{ce,a}, q_{ac}, q_{bs,a}, q_{a}) \in Q \times Q_{ce,a} \times Q_{ac} \times Q_{bs,a} \times Q_{a}|\, q \notin Q_{d} \wedge q_{bs,a} = q^{detect}\}$ is not reachable in $G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}$.
We proceed by contradiction. Suppose that some above-mentioned state, where $q \in Q - Q_{d}$ and $q_{bs,a} = q^{detect}$, can be reached in $G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}$ via some string $s \in L(G||CE^{A}||AC||BT(S)^{A}||\mathcal{A})$. Then, $s$ can be executed in $G$, $CE^{A}$, $AC$, $BT(S)^{A}$, and $\mathcal{A}$, after we lift their alphabets to $\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$. Then, based on \emph{Theorem IV.1} and the construction of $\overline{S^{\downarrow,A}}$, which is a complete automaton, we know that $s$ can be executed in $OCNS^{A}$ and $\overline{S^{\downarrow,A}}$. Thus, $s$ can also be executed in $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$. Next, we shall check what state is reached in $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$ via the string $s$. Clearly, state $q \in Q - Q_{d}$ is reached in $G$ and state $q_{cov}^{brk}$ is reached in $OCNS^{A}$ according to \emph{Corollary IV.1}. Thus, the state $(q, q_{ce,a}, q_{ac}, q_{cov}^{brk}, \overline{q_{s}^{\downarrow,a}}, q_{a})$, where $q \in Q - Q_{d}$, is reached in $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$ via the string $s$, which contradicts the fact that $\mathcal{A}$ is a safe supervisor for the plant $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}$ based on Step 3 of \textbf{Procedure 2}. Thus, the supposition does not hold and the proof is completed. \hfill $\blacksquare$
\section{Proof of Theorem IV.4}
\label{appendix: 4}
Firstly, the sensor-actuator attacker $\mathcal{A}$ generated in \textbf{Procedure 2}, if non-empty, must satisfy that $\mathcal{P}||\mathcal{A}$ is marker-reachable, i.e., some state $(q, q_{ce,a}, q_{ac}, q_{ocns}^{a}, \overline{q_{s}^{\downarrow,a}}, q_{a}) \in Q_{d} \times Q_{ce,a} \times Q_{ac} \times Q_{ocns}^{a} \times \overline{Q_{s,m}^{\downarrow,a}} \times Q_{a}$ can be reached in $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$ via some string $s \in L(G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A})$ such that $P_{\Sigma \cup \Sigma_{s,a}^{\#}}(s) \in L_{m}(\overline{S^{\downarrow,A}})$, where $P_{\Sigma \cup \Sigma_{s,a}^{\#}}: (\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\})^{*} \rightarrow (\Sigma \cup \Sigma_{s,a}^{\#})^{*}$. Since, according to \emph{Theorem IV.2}, $S^{\downarrow}$ is the least permissive supervisor that is consistent with $O$, we know that for any other supervisor $S$ that is consistent with $O$, there always exists a string $s' \in (\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\})^{*}$ such that $P_{\Sigma \cup \Sigma_{s,a}^{\#}}(s') = P_{\Sigma \cup \Sigma_{s,a}^{\#}}(s)$ and $s'$ can be executed in $G$, $CE^{A}$, $AC$, $BT(S)^{A}$, and $\mathcal{A}$ after we lift their alphabets to $\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$. Thus, $s'$ can be executed in $G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}$ and the state $q \in Q_{d}$ is reached in $G$ via the string $s'$, i.e., some marker state is reachable in $G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}$. This completes the proof. \hfill $\blacksquare$
\section{Proof of Theorem IV.5}
\label{appendix: 5}
(If) Suppose there exists a supervisor $S'$ over the attacker's control constraint $\mathscr{C}_{ac}$ for the plant $\mathcal{P} = G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}$ such that the above Condition a) and Condition b) are satisfied. Then, based on \emph{Theorem IV.3} and \emph{Theorem IV.4}, we know that $\mathcal{A}=S'$ is a covert damage-reachable sensor-actuator attacker w.r.t. the attack constraint $(\Sigma_{o}, \Sigma_{s,a}, \Sigma_{c,a})$ against any safe supervisor that is consistent with $O$. This completes the proof of sufficiency.
(Only if) We need to prove that $\mathcal{A}$ can satisfy the Condition a) and Condition b) w.r.t. the plant $\mathcal{P}$ and thus we can choose $S'=\mathcal{A}$. Firstly, we shall show that any state in $\{(q, q_{ce,a}, q_{ac}, q_{ocns}^{a}, \overline{q_{s}^{\downarrow,a}}, q_{a}) \in Q \times Q_{ce,a} \times Q_{ac} \times Q_{ocns}^{a} \times \overline{Q_{s}^{\downarrow,a}} \times Q_{a}| \, q \notin Q_{d} \wedge q_{ocns}^{a} = q_{cov}^{brk}\}$ is not reachable in $\mathcal{P}||\mathcal{A} = G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$. We carry out the proof by contradiction and suppose that some state $(q, q_{ce,a}, q_{ac}, q_{ocns}^{a}, \overline{q_{s}^{\downarrow,a}}, q_{a}) \in Q \times Q_{ce,a} \times Q_{ac} \times Q_{ocns}^{a} \times \overline{Q_{s}^{\downarrow,a}} \times Q_{a}$, where $q \in Q - Q_{d}$, $q_{ocns}^{a} = q_{cov}^{brk}$, can be reached in $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$ via a string $s$. Thus, $s$ can be executed in $G$, $CE^{A}$, $AC$, $OCNS^{A}$, and $\mathcal{A}$ after we lift their alphabets to $\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$. Since $OCNS^{A}$ only embeds all the safe bipartite supervisors that are consistent with $O$ (under attack), we can always find a safe supervisor $S$ that is consistent with $O$ such that $s$ can be executed in $BT(S)^{A}$ and the state $q^{detect}$ is reached in $BT(S)^{A}$ via the string $s$. Thus, in $G||CE^{A}||AC||BT(S)^{A}||\mathcal{A}$, the state $(q, q_{ce,a}, q_{ac}, q_{bs,a}, q_{a}) \in Q \times Q_{ce,a} \times Q_{ac} \times Q_{bs,a} \times Q_{a}$, where $q \in Q - Q_{d}$, $q_{bs,a} = q^{detect}$, can be reached via the string $s$, and this contradicts the fact that $\mathcal{A}$ is covert against any safe supervisor that is consistent with $O$. Thus, the supposition does not hold.
Secondly, since $\mathcal{A}$ is damage-reachable against any safe supervisor that is consistent with $O$, we know that $\mathcal{A}$ is also damage-reachable against $S^{\downarrow}$, the least permissive supervisor that is consistent with $O$ based on \emph{Theorem IV.2}. Thus, $G||CE^{A}||AC||BT(S^{\downarrow})^{A}||\mathcal{A}$ is marker-reachable, i.e., some state $(q, q_{ce,a}, q_{ac}, q_{bs,a}, q_{a}) \in Q_{d} \times Q_{ce,a} \times Q_{ac} \times Q_{bs,a} \times Q_{a}$ can be reached in $G||CE^{A}||AC||BT(S^{\downarrow})^{A}||\mathcal{A}$ via some string $s \in L(G||CE^{A}||AC||BT(S^{\downarrow})^{A}||\mathcal{A})$. Then, $s$ can be executed in $G$, $CE^{A}$, $AC$, $BT(S^{\downarrow})^{A}$, and $\mathcal{A}$, after we lift their alphabets to $\Sigma \cup \Sigma_{s,a}^{\#} \cup \Gamma \cup \{stop\}$. Based on \emph{Theorem IV.1} and the construction of $\overline{S^{\downarrow,A}}$, which is a complete automaton, we know that $s$ can be executed in $OCNS^{A}$ and $\overline{S^{\downarrow,A}}$, and $P_{\Sigma \cup \Sigma_{s,a}^{\#}}(s) \in L_{m}(\overline{S^{\downarrow,A}})$. Thus, some state $(q, q_{ce,a}, q_{ac}, q_{ocns}^{a}, \overline{q_{s}^{\downarrow,a}}, q_{a}) \in Q_{d} \times Q_{ce,a} \times Q_{ac} \times Q_{ocns}^{a} \times \overline{Q_{s,m}^{\downarrow,a}} \times Q_{a}$ is reached in $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$ via the string $s$, i.e., $G||CE^{A}||AC||OCNS^{A}||\overline{S^{\downarrow,A}}||\mathcal{A}$ is marker-reachable. This completes the proof of necessity. \hfill $\blacksquare$
\end{appendices}
\section{Introduction}
\vspace{-.3mm}
Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the \textit{transformer}~\citep{vaswani17attentionisallyouneed}.
Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 \citep{radford2018gpt2}, BERT \citep{devlin2018bert} and Transformer-XL \citep{dai2019transformerxl}, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks.
The key difference between transformers and previous methods, such as recurrent neural networks \citep{lstm97} and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence.
This is made possible thanks to the \textit{attention mechanism}---originally introduced in Neural Machine Translation to better handle long-range dependencies%
~\citep{Bahdanau2015attention}.
With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest.
Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks.
Self-attention was first added to CNN by either using channel-based attention \citep{DBLP:conf/cvpr/HuSS18} or non-local relationships across the image \citep{nonlocal2018wang}.
More recently, \citet{belloAttentionAugmentedConvolutional2019} augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks.
Interestingly, \cite{ramachandran2019standaloneselfattention} noticed that, even though state-of-the art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-\emph{only} architectures also reach competitive image classification accuracy.
\textit{These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers?}
From a theoretical perspective, one could argue that transformers have the capacity to simulate any function---including a CNN. Indeed, \cite{perez2019turingcomplete} showed that a multi-layer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic.
Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open.
\paragraph{Contributions.}
In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similar to convolutional layers:
\vspace{-2mm}
\begin{itemize}
\item[I.] From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers.
\end{itemize}
\vspace{-2mm}
Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer. %
\vspace{-2mm}
\begin{itemize}
\item[II.] Our experiments show that the first few layers of attention-only architectures~\citep{ramachandran2019standaloneselfattention} do learn to attend on grid-like pattern around each query pixel, similar to our theoretical construction.
\end{itemize}
Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned.
Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network.
We provide an interactive website\footnote{\ificlrfinal \href{https://epfml.github.io/attention-cnn}{\tt epfml.github.io/attention-cnn}{} \else URL available after deanonymization, preview at \url{https://drive.google.com/file/d/1METSetroUA2qd2slol9wt7YxucJslAmF/} \fi} to explore how self-attention exploits localized position-based attention in lower layers and content-based attention in deeper layers.
For reproducibility purposes, our code is publicly available. %
\section{Background on Attention Mechanisms for Vision}
\label{sec:background}
We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings.
\subsection{The Multi-Head Self-Attention Layer}
\label{ssec:background_self_attention}
Let ${\bm{X}} \in \mathbb{R}^{T\times D_{\textit{in}}}$ be an input matrix consisting of $T$ tokens of ${D_{\textit{in}}}$ dimensions each.
While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{\textit{in}}$ to $D_{\textit{out}}$ dimensions as follows:
\begin{align}
\operatorname{Self-Attention}({\bm{X}})_{t,:} &:=
\mathrm{softmax}
\left(
{\bm{A}}_{t,:}
\right)
{\bm{X}}
{\bm{W}}_{\!\textit{val}},
\label{eq:attention}
\end{align}
where we refer to the elements of the $T \times T$ matrix
\begin{align}
{\bm{A}} &:=
{\bm{X}} {\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top {\bm{X}}^\top
\label{eq:att_coeff}
\end{align}
as \textit{attention scores} and the softmax %
output\footnote{$ \mathrm{softmax}\left({\bm{A}}_{t,:}\right)_{k} = \text{exp}({{\bm{A}}_{t,k}}) / \sum_{p} \text{exp}({{\bm{A}}_{t,p}})$} as \textit{attention probabilities}.
The layer is parametrized by a query matrix ${\bm{W}}_{\!\textit{qry}} \in \mathbb{R}^{D_{\textit{in}} \times D_{k}}$, a key matrix ${\bm{W}}_{\!\textit{key}} \in \mathbb{R}^{D_{\textit{in}} \times D_{k}}$ and a value matrix ${\bm{W}}_{\!\textit{val}} \in \mathbb{R}^{D_{\textit{in}} \times D_{\textit{out}}}$.%
For simplicity, we exclude any residual connections, batch normalization and constant factors.%
A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled.
This is problematic for cases where we expect the order of things to matter.
To alleviate the limitation, a \emph{positional encoding} is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention
\begin{align}
{\bm{A}} &:=
({\bm{X}} + {\bm{P}}) {\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top ({\bm{X}} + {\bm{P}})^\top,
\end{align}
where ${\bm{P}} \in \mathbb{R}^{T \times D_{\textit{in}}}$ contains the embedding vectors for each position. More generally, ${\bm{P}}$ may be substituted by any function that returns a vector representation of the position.
It has been found beneficial in practice to replicate this self-attention mechanism into \emph{multiple heads}, each being able to focus on different parts of the input by using different query, key and value matrices.
In multi-head self-attention, the output of the $N_h$ heads of output dimension $D_h$ are concatenated and projected to dimension $D_{\textit{out}}$ as follows:
\begin{align}
\operatorname{MHSA}({\bm{X}}) :=
\concat_{h \in {[N_h]}}\big[\operatorname{Self-Attention}_h({\bm{X}})\big] \; {\bm{W}}_{\!\textit{out}}\, + {\bm{b}}_{\textit{out}}
\label{eq:multi-head}
\end{align}
and two new parameters are introduced: the projection matrix ${\bm{W}}_{\!\textit{out}} \in \mathbb{R}^{N_h D_h \times D_{\textit{out}}}$ and a bias term ${\bm{b}}_{\textit{out}}\in \mathbb{R}^{D_{\textit{out}}}$.
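For readers who prefer code, a minimal NumPy sketch of \cref{eq:attention,eq:att_coeff,eq:multi-head} (without positional encodings) is given below; variable names mirror the notation above.
\begin{verbatim}
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_qry, W_key, W_val):
    # X: (T, D_in); W_qry, W_key: (D_in, D_k); W_val: (D_in, D_out) or (D_in, D_h)
    A = X @ W_qry @ W_key.T @ X.T            # attention scores, shape (T, T)
    return softmax(A) @ X @ W_val            # one output row per query token

def multi_head_self_attention(X, heads, W_out, b_out):
    # heads: list of (W_qry, W_key, W_val) triples; W_out: (N_h * D_h, D_out)
    concat = np.concatenate([self_attention(X, *h) for h in heads], axis=-1)
    return concat @ W_out + b_out
\end{verbatim}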
\subsection{Attention for Images}
Convolutional layers are the \textit{de facto} choice for building neural networks that operate on images. We recall that, given an image tensor ${\tens{X}}~\in~\mathbb{R}^{W\times H \times D_{\textit{in}}}$ of width $W$, height $H$ and $D_{\textit{in}}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by
\begin{align}
\operatorname{Conv}({\bm{X}})_{i,j,:} :=
\sum_{(\delta_1, \delta_2) \in \Delta\!\!\!\!\Delta_K}
{\tens{X}}_{i+\delta_1, j+\delta_2, :}
{\tens{W}}_{\delta_1,\delta_2,:,:}
+ {\bm{b}},
\label{eq:conv}
\end{align}
where ${\tens{W}}$ is the $K \times K \times D_{\textit{in}} \times D_{\textit{out}}$ weight tensor
\footnote{To simplify notation, we index the first two dimensions of the tensor from $-\lfloor K / 2 \rfloor$ to $\lfloor K / 2 \rfloor$.}, ${\bm{b}} \in \mathbb{R}^{D_{\textit{out}}}$ is the bias vector and the set
$$
{\Delta\!\!\!\!\Delta}_K := \left[-\left\lfloor\frac{K}{2} \right\rfloor, \cdots, \left\lfloor\frac{K}{2} \right\rfloor \right] \times \left[ -\left\lfloor\frac{K}{2} \right\rfloor, \cdots, \left\lfloor\frac{K}{2} \right\rfloor \right]
$$
contains all possible shifts appearing when convolving the image with a $K\times K$ kernel.
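The following NumPy sketch spells out \cref{eq:conv}, using \texttt{"SAME"} zero padding so that the output keeps the spatial size of the input (cf. the padding discussion in \Cref{sec:attention_can_implement_cnn}); the kernel tensor is stored with its first two dimensions re-indexed to start at $0$.
\begin{verbatim}
import numpy as np

def conv2d(X, W, b):
    # X: (W_img, H_img, D_in); W: (K, K, D_in, D_out); b: (D_out,)
    K = W.shape[0]
    half = K // 2
    W_img, H_img, _ = X.shape
    Xp = np.pad(X, ((half, half), (half, half), (0, 0)))   # "SAME" zero padding
    out = np.zeros((W_img, H_img, W.shape[-1])) + b
    for d1 in range(-half, half + 1):
        for d2 in range(-half, half + 1):
            # the pixel at shift (d1, d2) from every query pixel, times W_{d1,d2,:,:}
            patch = Xp[half + d1: half + d1 + W_img,
                       half + d2: half + d2 + H_img, :]
            out += patch @ W[d1 + half, d2 + half]
    return out
\end{verbatim}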
In the following, we review how self-attention can be adapted from 1D sequences to images.
With images, rather than tokens, we have query and key pixels ${\bm{q}}, {\bm{k}} \in [W] \times [H]$. Accordingly, the input is a tensor ${\tens{X}}$ of dimension $W \times H \times D_{\textit{in}}$ and each
attention score associates a query and a key pixel. %
To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if ${\bm{p}} = (i,j)$, we write ${\tens{X}}_{{\bm{p}},:}$ and ${\tens{A}}_{{\bm{p}},:}$ to mean ${\tens{X}}_{i, j,:}$ and ${\tens{A}}_{i, j,:,:}$, respectively.
With this notation in place, the multi-head self attention layer output at pixel ${\bm{q}}$ can be expressed as follows:
\begin{align}
\operatorname{Self-Attention}({\bm{X}})_{{\bm{q}},:} &=
\sum_{{\bm{k}}} \mathrm{softmax}
\left(
{\tens{A}}_{{\bm{q}},:}
\right)_{{\bm{k}}}
{\tens{X}}_{{\bm{k}},:}\,
{\bm{W}}_{\!\textit{val}}
\end{align}
and accordingly for the multi-head case.
\subsection{Positional Encoding for Images}
\label{ssec:relative_position_encoding}
There are two types of positional encoding that have been used in transformer-based architectures: the \textit{absolute} and \textit{relative} encoding (see also \Cref{tab:relwork_attention} in the Appendix).
With absolute encodings, a (fixed or learned) vector ${\tens{P}}_{{\bm{p}},:}$ is assigned to each pixel ${\bm{p}}$. The computation of the attention scores we saw in \cref{eq:att_coeff} can then be decomposed as follows:
\begin{align}
{\tens{A}}_{{\bm{q}}, {\bm{k}}}^{\mathrm{abs}}
&= ({\tens{X}}_{{\bm{q}},:} + {\tens{P}}_{{\bm{q}},:}) {\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top ({\tens{X}}_{{\bm{k}},:} + {\tens{P}}_{{\bm{k}},:})^\top \notag \\
&=
{{\tens{X}}_{{\bm{q}},:} {\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top {\tens{X}}_{{\bm{k}},:}^\top}
+
{{\tens{X}}_{{\bm{q}},:} {\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top {\tens{P}}_{{\bm{k}},:}^\top}
+
{{\tens{P}}_{{\bm{q}},:}{\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top {\tens{X}}_{{\bm{k}},:}}
+
{{\tens{P}}_{{\bm{q}},:} {\bm{W}}_{\!\textit{qry}} {\bm{W}}_{\!\textit{key}}^\top {\tens{P}}_{{\bm{k}},:}}
\label{eq:decompose_att_coef}
\end{align}
where ${\bm{q}}$ and ${\bm{k}}$ correspond to the query and key pixels, respectively.
The relative positional encoding was introduced by \cite{dai2019transformerxl}. %
The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel:
\begin{align}
{\tens{A}}_{{\bm{q}}, {\bm{k}}}^{\mathrm{rel}} &:=
{\tens{X}}_{{\bm{q}},:}^{\top} {\bm{W}}_{\!\textit{qry}}^{\top} {\bm{W}}_{\!\textit{key}} \, {\tens{X}}_{{\bm{k}},:}
+
{\tens{X}}_{{\bm{q}},:}^{\top} {\bm{W}}_{\!\textit{qry}}^{\top} \widehat{{\bm{W}}}_{\!\textit{key}} \, {\bm{r}}_{{\bm{\delta}}}
+
{ {\bm{u}}^{\top}} {\bm{W}}_{\!\textit{key}} \, {\tens{X}}_{{\bm{k}},:}
+
{ {\bm{v}}^{\top}} \widehat{{\bm{W}}}_{\!\textit{key}} \, {\bm{r}}_{{\bm{\delta}}}
\label{eq:att_rel}
\end{align}
In this manner, %
the attention scores only depend on the shift ${\bm{\delta}} := {\bm{k}} - {\bm{q}}$.
Above, the learnable vectors ${\bm{u}}$ and ${\bm{v}}$ are unique for each head, whereas for every shift ${\bm{\delta}}$ the relative positional encoding ${\bm{r}}_{{\bm{\delta}}} \in \mathbb{R}^{D_p}$ is shared by all layers and heads.
Moreover, now the key weights are split into two types: ${\bm{W}}_{\!\textit{key}}$ pertain to the input and $\widehat{{\bm{W}}}_{\!\textit{key}}$ to the relative position of pixels.
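A small NumPy sketch of the decomposition in \cref{eq:att_rel} for a single head is shown below, assuming ${\bm{W}}_{\!\textit{qry}}, {\bm{W}}_{\!\textit{key}} \in \mathbb{R}^{D_{\textit{in}} \times D_k}$, $\widehat{{\bm{W}}}_{\!\textit{key}} \in \mathbb{R}^{D_p \times D_k}$ and ${\bm{u}}, {\bm{v}} \in \mathbb{R}^{D_k}$; the dictionary \texttt{r} mapping a shift to its encoding is an implementation convenience.
\begin{verbatim}
import numpy as np

def relative_attention_score(x_q, x_k, delta, W_qry, W_key, W_key_hat, u, v, r):
    # x_q, x_k: (D_in,) query / key pixel features; delta = k - q; r[delta]: (D_p,)
    q_emb = x_q @ W_qry                      # query embedding, (D_k,)
    k_emb = W_key.T @ x_k                    # key (content) embedding, (D_k,)
    p_emb = W_key_hat.T @ r[delta]           # key (position) embedding, (D_k,)
    return (q_emb @ k_emb                    # content-content term
            + q_emb @ p_emb                  # content-position term
            + u @ k_emb                      # global content bias
            + v @ p_emb)                     # global position bias
\end{verbatim}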
\section{Self-Attention as a Convolutional Layer}
\label{sec:attention_can_implement_cnn}
This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer.
Our main result is the following:
\begin{theorem}
A multi-head self-attention layer with $N_h$ heads of dimension $D_h$, output dimension~$D_{\textit{out}}$ and a relative positional encoding of dimension $D_p \geq 3$
can express any convolutional layer of kernel size $\sqrt{N_h} \times \sqrt{N_h}$
and $\min(D_h, D_{\textit{out}})$ output channels.
\label{thm:the_theorem}
\end{theorem}
The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer.
In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta\!\!\!\!\Delta_K = \{-\lfloor K/2 \rfloor, \dots, \lfloor K/2 \rfloor\}^2$ of all pixel shifts in a $K\times K$ kernel. The exact condition can be found in the statement of Lemma~\ref{lemma:1}.
Then, Lemma~\ref{lemma:2} shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the \textit{quadratic encoding}:
\begin{align}\label{eq:quadposembedding}
{\bm{v}}^{(h)} \!:= -\alpha^{(h)} \, (1, -2{\bm{\Delta}}_1^{(h)}, -2{\bm{\Delta}}_2^{(h)})
\quad
{\bm{r}}_{\bm{\delta}} &\!:= (\|{\bm{\delta}}\|^2 , {\bm{\delta}}_1, {\bm{\delta}}_2)
\quad {\bm{W}}_{{\!\textit{qry}}} \!=\! {\bm{W}}_{{\!\textit{key}}} \!:= \boldsymbol{0} \quad \widehat{{\bm{W}}_{{\!\textit{key}}}} \!:= {\bm{I}}
\end{align}
The learned parameters ${\bm{\Delta}}^{(h)} = ({\bm{\Delta}}^{(h)}_1, {\bm{\Delta}}^{(h)}_2)$ and $\alpha^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, ${\bm{\delta}} = ({\bm{\delta}}_1, {\bm{\delta}}_2)$ is fixed and expresses the relative shift between query and key pixels.
It is important to stress that the above encoding is not the only one for which the
conditions of Lemma~\ref{lemma:1} are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_p = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one).
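For illustration, the quadratic encoding can be generated programmatically. The NumPy sketch below (hypothetical helper \texttt{quadratic\_scores}) builds ${\bm{r}}_{\bm{\delta}}$ and ${\bm{v}}^{(h)}$ as in \cref{eq:quadposembedding} and recovers scores of the form $-\alpha(\|{\bm{\delta}}-{\bm{\Delta}}\|^2 - \|{\bm{\Delta}}\|^2)$, maximized at ${\bm{\delta}} = {\bm{\Delta}}$:
\begin{verbatim}
import numpy as np

def quadratic_scores(center, alpha, K=7):
    """Scores -alpha * (||delta - center||^2 - ||center||^2) on a K x K grid of shifts."""
    v = -alpha * np.array([1.0, -2.0 * center[0], -2.0 * center[1]])
    shifts = [(d1, d2) for d1 in range(-(K // 2), K // 2 + 1)
                       for d2 in range(-(K // 2), K // 2 + 1)]
    r = np.array([[d1 ** 2 + d2 ** 2, d1, d2] for (d1, d2) in shifts])   # r_delta, D_p = 3
    return dict(zip(shifts, r @ v))

scores = quadratic_scores(center=(1, -1), alpha=5.0)
assert max(scores, key=scores.get) == (1, -1)        # the largest score sits at delta = center
\end{verbatim}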
The theorem covers the general convolution operator as defined in \cref{eq:conv}.
However, machine learning practitioners using differential programming frameworks \citep{paszke2017automatic,tensorflow2015-whitepaper} might question if the theorem holds for all hyper-parameters of 2D convolutional layers:
\begin{itemize}
\item \emph{Padding}: a multi-head self-attention layer uses by default the \texttt{"SAME"} padding while a convolutional layer would decrease the image size by $K - 1$ pixels. The correct way to alleviate these boundary effects is to pad the input image with $\lfloor K / 2 \rfloor$ zeros on each side. In this case, the cropped output of a MHSA and a convolutional layer are the same.
\item \emph{Stride}: a strided convolution can be seen as a convolution followed by a fixed pooling operation---with computational optimizations. \Cref{thm:the_theorem} is defined for stride 1, but a fixed pooling layer could be appended to the Self-Attention layer to simulate any stride.
\item \emph{Dilation}: a multi-head self-attention layer can express any dilated convolution as each head can attend a value at any pixel shift and form a (dilated) grid pattern.
\end{itemize}
\paragraph{Remark for the 1D case.} Convolutional layers acting on sequences are commonly used in the literature for text~\citep{kim-2014-convolutional}, as well as audio~\citep{oord2016wavenet} and time series~\citep{franceschi2019unsupervised}.
Theorem~\ref{thm:the_theorem} can be straightforwardly extended to show that multi-head self-attention with $N_h$ heads can also simulate a 1D convolutional layer with a kernel of size $K=N_h$ with $\min(D_h, D_{\textit{out}})$ output channels using a positional encoding of dimension $D_p \geq 2$. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence---only that it has the capacity to do so.
\subsection*{Proof of Main Theorem}
\begin{figure}[t]
\centering
\vspace{-2mm}
\includegraphics[width=1.01\linewidth]{figs/attention_cnn_v3}
\caption{Illustration of a Multi-Head Self-Attention layer applied to a tensor image ${\tens{X}}$.
Each head~$h$ attends to pixel values around shift ${\bm{\Delta}}^{(h)}$ and learns a filter matrix ${\bm{W}}_{{\!\textit{val}}}^{(h)}$.
We show attention maps computed for a query pixel at position ${\bm{q}}$.\vspace{-3mm}}
\label{fig:attention}
\end{figure}
The proof follows directly from Lemmas~\ref{lemma:1} and~\ref{lemma:2} stated below:
\begin{lemma}
Consider a multi-head self-attention layer consisting of $N_h = K^2$ heads, $D_h \geq D_{\textit{out}}$ and let ${\bm{f}}~:~[N_h]~\rightarrow~{\Delta\!\!\!\!\Delta}_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: %
\begin{align}
\mathrm{softmax}({\bm{A}}^{(h)}_{{\bm{q}},:})_{{\bm{k}}} =
\begin{cases}
1 & \text{ if } {\bm{f}}(h) = {\bm{q}} - {\bm{k}} \\
0 & \text{ otherwise}.
\end{cases}
\label{eq:shift_attention_prob}
\end{align}
Then, for any convolutional layer with a $K \times K$ kernel and $D_{\textit{out}}$ output channels, there exists $\{{\bm{W}}_{\!\textit{val}}^{(h)}\}_{h \in [N_h]}$ such that
$
\operatorname{MHSA}({\bm{X}}) = \operatorname{Conv}({\bm{X}})
$
for every ${\bm{X}} \in \mathbb{R}^{W \times H \times D_{\textit{in}}}$.%
\label{lemma:1}
\end{lemma}
\begin{proof}
Our first step will be to rework the expression of the Multi-Head Self-Attention operator from \eqref{eq:attention} and \eqref{eq:multi-head} such that the effect of the multiple heads becomes more transparent:
\begin{align}
\operatorname{MHSA}({\bm{X}})
&=
{\bm{b}}_{\textit{out}} +
\sum_{h \in [N_h]}
\mathrm{softmax}({\bm{A}}^{(h)}){\bm{X}}
\underbrace{
{\bm{W}}_{\!\textit{val}}^{(h)} {\bm{W}}_{\textit{out}}[(h-1)D_h + 1:h D_h +1]
}_{{\bm{W}}^{(h)}}
\end{align}
Note that each head's value matrix ${\bm{W}}_{\!\textit{val}}^{(h)} \in \mathbb{R}^{D_{\textit{in}} \times D_{h}}$ and each block of the projection matrix ${\bm{W}}_{\textit{out}}$ of dimension $D_h \times D_{\textit{out}}$ are learned.
Assuming that $D_h \geq D_{\textit{out}}$, we can replace each pair of matrices by a learned matrix ${\bm{W}}^{(h)}$ for each head.
We consider one output pixel of the multi-head self-attention:
\begin{align}
\operatorname{MHSA}({\bm{X}})_{{\bm{q}},:}
=
\sum_{h \in [N_h]}
\left(
\sum_{{\bm{k}}}
\mathrm{softmax}({\tens{A}}^{(h)}_{{\bm{q}},:})_{{\bm{k}}}
{\tens{X}}_{{\bm{k}},:}
\right)
{\bm{W}}^{(h)}
+ {\bm{b}}_{\textit{out}}
\end{align}
Due to the conditions of the Lemma, for the $h$-th attention head the attention probability is one when
$
{\bm{k}} = {\bm{q}} - {\bm{f}}(h)
$
and zero otherwise.
The layer's output at pixel ${\bm{q}}$ is thus equal to
\begin{align}
\operatorname{MHSA}({\tens{X}})_{{\bm{q}}}
&= \sum_{h \in [N_h]} {\tens{X}}_{{\bm{q}} - {\bm{f}}(h),:} {\bm{W}}^{(h)} + {\bm{b}}_{\textit{out}} %
\end{align}
For $K = \sqrt{N_h}$, the above can be seen to be equivalent to a convolutional layer expressed in eq.~\ref{eq:conv}: there is a one-to-one mapping (implied by the map ${\bm{f}}$) between the matrices ${\bm{W}}^{(h)}$ for $h \in [N_h]$ and the matrices ${\tens{W}}_{k_1,k_2,:,:}$ for all $(k_1,k_2) \in [K]^2.$
\end{proof}
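The construction can also be verified numerically. The sketch below (PyTorch, arbitrary sizes, not part of our released code) builds the hard, shift-wise attention of the lemma by directly gathering the pixel at each relative shift, and checks that the result matches \texttt{torch.nn.functional.conv2d} (cross-correlation with \texttt{"SAME"} zero padding) when $W^{(h)}$ is set to the corresponding kernel slice:
\begin{verbatim}
import torch
import torch.nn.functional as F

H, W_, D_in, D_out, K = 6, 6, 4, 5, 3
X = torch.randn(H, W_, D_in)
weight = torch.randn(D_out, D_in, K, K)            # a K x K convolutional kernel

# Reference: convolutional layer with "SAME" zero padding.
conv = F.conv2d(X.permute(2, 0, 1).unsqueeze(0), weight, padding=K // 2)
conv = conv.squeeze(0).permute(1, 2, 0)            # back to H x W x D_out

# MHSA with one-hot attention: head h attends only to the pixel at shift (d1, d2)
# from the query and applies W^(h), the matching kernel slice.
X_pad = F.pad(X.permute(2, 0, 1), (K // 2,) * 4).permute(1, 2, 0)
mhsa = torch.zeros(H, W_, D_out)
for d1 in range(-(K // 2), K // 2 + 1):
    for d2 in range(-(K // 2), K // 2 + 1):
        W_h = weight[:, :, d1 + K // 2, d2 + K // 2].T        # D_in x D_out
        mhsa += X_pad[K // 2 + d1 : K // 2 + d1 + H,
                      K // 2 + d2 : K // 2 + d2 + W_, :] @ W_h

assert torch.allclose(conv, mhsa, atol=1e-4)
\end{verbatim}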
\paragraph{Remark about $D_h$ and $D_{\textit{out}}$.}
It is frequent in transformer-based architectures to set $D_h~=~D_{\textit{out}}/N_h$, hence $D_h < D_{\textit{out}}$. In that case, ${\bm{W}}^{(h)}$ has rank at most $D_h$, which does not suffice to express every convolutional layer with $D_{\textit{out}}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{\textit{out}}$ outputs of $\operatorname{MHSA}({\bm{X}})$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min(D_h, D_{\textit{out}})$. In practice, we advise concatenating heads of dimension $D_h = D_{\textit{out}}$ instead of splitting the $D_{\textit{out}}$ dimensions among heads, to have an exact re-parametrization and no ``unused'' channels.
\begin{lemma}
There exists a relative encoding scheme $\{{\bm{r}}_{\bm{\delta}} \in \mathbb{R}^{D_p}\}_{{\bm{\delta}} \in \mathbb{Z}^2}$
with $D_p \geq 3$ and parameters ${\bm{W}}_{\!\textit{qry}}, {\bm{W}}_{\!\textit{key}}, \widehat {\bm{W}}_{\!\textit{key}},{\bm{u}}$ with $D_p \leq D_k$ such that, for every ${\bm{\Delta}} \in \Delta\!\!\!\!\Delta_K$ there exists some vector ${\bm{v}}$ (conditioned on ${\bm{\Delta}}$) yielding
$ \mathrm{softmax}({\tens{A}}_{{\bm{q}},:})_{{\bm{k}}} = 1 $ if $ {\bm{k}} - {\bm{q}} = {\bm{\Delta}}$ and zero, otherwise.
\label{lemma:2}
\end{lemma}
\vspace{-1em}
\begin{proof}
We show by construction the existence of a $D_p=3$ dimensional relative encoding scheme yielding the required attention probabilities.
As the attention probabilities are independent of the input tensor ${\tens{X}}$, we set ${\bm{W}}_{\!\textit{key}}={\bm{W}}_{\!\textit{qry}}=\boldsymbol{0}$ which leaves only the last term of \cref{eq:att_rel}.
Setting $\widehat {\bm{W}}_{\!\textit{key}} \in \mathbb{R}^{D_k \times D_p}$ to the identity matrix (with appropriate row padding) yields
${\tens{A}}_{{\bm{q}}, {\bm{k}}} = {\bm{v}}^{\top} {\bm{r}}_{{\bm{\delta}}}$ where ${\bm{\delta}} := {\bm{k}} - {\bm{q}}$.
Above, we have assumed that $D_p \leq D_k$ such that no information from ${\bm{r}}_{{\bm{\delta}}}$ is lost.
Now, suppose that we could write:
\begin{align}
{\tens{A}}_{{\bm{q}}, {\bm{k}}}
=
-\alpha (\|{\bm{\delta}} - {\bm{\Delta}}\|^2 + c)
\label{eq:iso_att_decomposed}
\end{align}
for some constant $c$.
In the above expression, the maximum attention score over ${\tens{A}}_{{\bm{q}}, :}$ is $-\alpha c$ and it is reached for ${\tens{A}}_{{\bm{q}}, {\bm{k}}}$ with ${\bm{\delta}} = {\bm{\Delta}}$. On the other hand, the $\alpha$ coefficient can be used to scale arbitrarily the difference between ${\tens{A}}_{{\bm{q}},{\bm{\Delta}}}$ and the other attention scores.
In this way, for ${\bm{\delta}} = {\bm{\Delta}}$, we have
\begin{align*}
\lim_{\alpha \rightarrow \infty} \mathrm{softmax}({\tens{A}}_{{\bm{q}},:})_{{\bm{k}}}
&= \lim_{\alpha \rightarrow \infty} \frac{e^{-\alpha (\|{\bm{\delta}} - {\bm{\Delta}}\|^2+c)}}{ \sum_{{\bm{k}}'} e^{-\alpha (\|({\bm{k}}' - {\bm{q}})- {\bm{\Delta}}\|^2+c)}}\\
&= \lim_{\alpha \rightarrow \infty} \frac{e^{-\alpha \|{\bm{\delta}} - {\bm{\Delta}}\|^2}}{ \sum_{{\bm{k}}'} e^{-\alpha \|({\bm{k}}' - {\bm{q}})- {\bm{\Delta}}\|^2}}
= \frac{1}{ 1 + \lim_{\alpha \rightarrow \infty} \sum_{{\bm{k}}' \neq {\bm{k}}} e^{-\alpha \|({\bm{k}}' - {\bm{q}})- {\bm{\Delta}}\|^2}}
= 1
= 1
\end{align*}
and for ${\bm{\delta}} \neq {\bm{\Delta}}$, the equation becomes
$
\lim_{\alpha \rightarrow \infty} \mathrm{softmax}({\tens{A}}_{{\bm{q}},:})_{{\bm{k}}} = 0,
$
exactly as needed to satisfy the lemma statement.
What remains is to prove that there exist ${\bm{v}}$ and $\{{\bm{r}}_{\bm{\delta}}\}_{{\bm{\delta}} \in \mathbb{Z}^2}$ for which \cref{eq:iso_att_decomposed} holds.
Expanding the RHS of the equation, we have
$
-\alpha (\|{\bm{\delta}} - {\bm{\Delta}}\|^2 + c)
=
-\alpha
(
\|{\bm{\delta}}\|^2 + \|{\bm{\Delta}}\|^2 - 2\langle {\bm{\delta}}, {\bm{\Delta}} \rangle + c
)\,.
$
Now if we set
$
{\bm{v}} = -\alpha \, (1, -2{\bm{\Delta}}_1, -2{\bm{\Delta}}_2)
$
and
$
{\bm{r}}_{\bm{\delta}} = (\|{\bm{\delta}}\|^2 , {\bm{\delta}}_1, {\bm{\delta}}_2),
$
then
$$
{\tens{A}}_{{\bm{q}}, {\bm{k}}} = {\bm{v}}^\top {\bm{r}}_{\bm{\delta}}
= -\alpha ( \|{\bm{\delta}}\|^2 - 2{\bm{\Delta}}_1 {\bm{\delta}}_1 - 2{\bm{\Delta}}_2 {\bm{\delta}}_2)
= -\alpha ( \|{\bm{\delta}}\|^2 - 2 \langle {\bm{\delta}}, {\bm{\Delta}} \rangle)
= -\alpha (\|{\bm{\delta}} - {\bm{\Delta}}\|^2 - \|{\bm{\Delta}}\|^2),
$$
which matches \cref{eq:iso_att_decomposed} with $c = -\| {\bm{\Delta}} \|^2$ and the proof is concluded.
\end{proof}
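Numerically, the convergence towards this hard attention pattern is fast. The short NumPy sketch below evaluates the softmax mass assigned to the selected shift on a $7\times7$ grid for increasing $\alpha$ (the constant $c$ is omitted since softmax is invariant to it):
\begin{verbatim}
import numpy as np

center = np.array([1.0, -1.0])                          # attended shift Delta
shifts = np.array([(d1, d2) for d1 in range(-3, 4) for d2 in range(-3, 4)])
scores = -np.sum((shifts - center) ** 2, axis=1)        # -||delta - Delta||^2
for alpha in (1.0, 5.0, 20.0):
    p = np.exp(alpha * scores - (alpha * scores).max())
    p /= p.sum()
    print(alpha, p[np.all(shifts == center, axis=1)][0])  # mass on delta = Delta -> 1
\end{verbatim}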
\paragraph{Remark on the magnitude of $\alpha$.}
The exact representation of one pixel requires $\alpha$ (or the matrices ${\bm{W}}_{{\!\textit{qry}}}$ and ${\bm{W}}_{{\!\textit{key}}}$) to be arbitrarily large, despite the fact that the attention probabilities of all other pixels converge exponentially to 0 as $\alpha$ grows.
Nevertheless, practical implementations always rely on finite precision arithmetic for which a constant $\alpha$ suffices to satisfy our construction. For instance, since the smallest positive \texttt{float32} scalar is approximately $10^{-45}$, setting $\alpha = 46 \ln 10 \approx 106$ (so that $e^{-\alpha} \approx 10^{-46}$ underflows) would suffice to obtain hard attention.
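This can be checked directly (NumPy; the exact threshold depends on whether subnormal numbers are flushed to zero):
\begin{verbatim}
import numpy as np
alpha = 46 * np.log(10)                   # about 106
print(np.float32(np.exp(-alpha)))         # 0.0: the off-center weights underflow
print(np.float32(np.exp(-46.0)))          # about 1e-20: alpha = 46 alone is not small enough
\end{verbatim}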
\section{Experiments}
\label{sec:experiments}
The aim of this section is to validate the applicability of our theoretical results---which state that self-attention \emph{can} perform convolution---and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers when trained on standard image classification tasks.
In particular, we study the relationship between self-attention and convolution with \textit{quadratic} and \textit{learned} relative positional encodings. We find that, for both cases, the attention probabilities learned tend to respect the conditions of Lemma~\ref{lemma:1}, supporting our hypothesis.
\subsection{Implementation Details}
\label{ssec:experiment_setup}
We study a fully attentional model consisting of six multi-head self-attention layers.
As it has already been shown by \cite{belloAttentionAugmentedConvolutional2019} that
combining attention features with convolutional features improves performance on CIFAR-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier, we compare it to the standard ResNet18 \citep{He2015resnet} on the CIFAR-10 dataset \citep{cifar10}.
In all experiments, we use a $2\times2$ invertible down-sampling \citep{jacobsen2018irevnet} on the input to reduce the size of the image. As the size of the attention coefficient tensors (stored during forward) scales quadratically with the size of the input image, \emph{full} attention cannot be applied to bigger images. %
The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier.
We used the PyTorch library \citep{paszke2017automatic} and based our implementation on PyTorch Transformers\footnote{\href{https://github.com/huggingface/pytorch-transformers}{\tt github.com/huggingface/pytorch-transformers}}.
We release our code on Github\footnote{\ificlrfinal \href{https://github.com/epfml/attention-cnn}{\tt github.com/epfml/attention-cnn}{} \else URL available after deanonymization.\fi} and hyper-parameters are listed in \Cref{tab:hyper-parameter} (Appendix).
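For orientation, the overall pipeline can be summarized by the schematic sketch below. It is a stand-in built from stock PyTorch modules (content-only attention, hypothetical class name) that mirrors the structure used here, namely a $2\times2$ down-sampling, six attention layers, average pooling and a linear classifier, but it does not reproduce the 2D relative positional encodings of our actual layers:
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionClassifier(nn.Module):
    """Schematic stand-in: 2x2 down-sampling, 6 self-attention layers, pooling, linear head."""
    def __init__(self, d_model=396, n_layers=6, n_heads=9, n_classes=10):
        super().__init__()
        self.embed = nn.Linear(3 * 2 * 2, d_model)        # one token per 2x2 block of RGB pixels
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True) for _ in range(n_layers))
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                 # x: (B, 3, 32, 32) CIFAR images
        B = x.shape[0]
        patches = x.unfold(2, 2, 2).unfold(3, 2, 2)       # (B, 3, 16, 16, 2, 2)
        tokens = patches.permute(0, 2, 3, 1, 4, 5).reshape(B, 16 * 16, -1)
        h = self.embed(tokens)                            # positional information omitted here
        for attn in self.layers:
            h, _ = attn(h, h, h)
        return self.head(h.mean(dim=1))                   # average pooling + linear classifier

logits = AttentionClassifier()(torch.randn(2, 3, 32, 32))  # shape (2, 10)
\end{verbatim}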
\begin{figure}
\begin{floatrow}%
\ffigbox{%
\includegraphics[width=0.95\linewidth]{plots/learning_curve/learning_curve_small.pdf}%
\vspace{-1em}
}{%
\caption{Test accuracy on CIFAR-10.\vspace{-1em}}%
\label{fig:learning_curve}%
}
\capbtabbox{%
\resizebox{1\linewidth}{!}{%
\begin{tabular}{lrll}
\toprule
Models & accuracy & \# of params & \# of FLOPS \\
\midrule
ResNet18 & 0.938 & 11.2M & 1.1B \\
SA quadratic emb. & 0.938 & 12.1M & 6.2B \\
SA learned emb. & 0.918 & 12.3M & 6.2B \\
SA learned emb. + content & 0.871 & 29.5M & 15B \\
\bottomrule
\end{tabular}%
}%
}{%
\vspace{3em}%
\caption{Test accuracy on CIFAR-10 and model sizes. SA stands for Self-Attention.\vspace{-1em}}%
\label{tab:parameter_size}%
}
\end{floatrow}
\end{figure}
\vspace{-2mm}
\paragraph{Remark on accuracy.}
To verify that our self-attention models perform reasonably well, we display in \Cref{fig:learning_curve} the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training for our self-attention models against a small ResNet (\Cref{tab:parameter_size}).
The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures.
Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and significantly reduce the number of FLOPS.
We observed that learned embeddings with content-based attention were harder to train, probably due to their increased number of parameters.
We believe that the performance gap can be bridged to match the ResNet performance, but this is not the focus of this work.
\vspace{-2mm}
\subsection{Quadratic Encoding}
\label{ssec:verifying_theory}
\vspace{-2mm}
As a first step, we aim to verify that, with the relative position encoding introduced in \eqref{eq:quadposembedding}, attention layers learn to behave like convolutional layers.
We train nine attention heads at each layer to be on par with the $3\times 3$ kernels used predominantly by the ResNet architecture.
The center of attention of each head $h$ is initialized to ${\bm{\Delta}}^{(h)} \sim \mathcal{N}(\boldsymbol{0}, 2{\bm{I}}_2)$.
\Cref{fig:iso_during_training} shows how the initial positions of the heads (different colors) at layer 4 changed during training.
We can see that after optimization, the heads attend to specific pixels of the image, forming a grid around the query pixel.
Our intuition that Self-Attention applied to images learns convolutional filters around the queried pixel is confirmed.
\begin{figure}
\includegraphics[width=.9\linewidth]{plots/epochs_iso_layer_4_small.png}
\caption{Centers of attention of each attention head (different colors) at layer 4 during the training with quadratic relative positional encoding.
The central black square is the query pixel, whereas solid and dotted circles represent the 50\% and 90\% percentiles of each Gaussian, respectively.\vspace{-3mm}}
\label{fig:iso_during_training}
\end{figure}
\Cref{fig:iso_attention_final} displays all attention heads at each layer of the model at the end of the training.
It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position.
We also include in the Appendix a plot of the attention positions for a higher number of heads ($N_h=16$).
\Cref{fig:iso_many_heads} displays both local patterns similar to CNN and long range dependencies.
Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space.
\begin{figure}
\includegraphics[width=1\linewidth]{plots/final_iso_small.png}
\caption{Centers of attention of each attention head (different colors) for the 6 self-attention layers using quadratic positional encoding.
The central black square is the query pixel, whereas solid and dotted circles represent the 50\% and 90\% percentiles of each Gaussian, respectively.\vspace{-1.5mm}}
\label{fig:iso_attention_final}
\end{figure}
\subsection{Learned Relative Positional Encoding}
We move on to study the positional encoding used in practice by fully-attentional models on images.
We implemented the 2D relative positional encoding scheme used by \citep{ramachandran2019standaloneselfattention,belloAttentionAugmentedConvolutional2019}:
we learn a $\lfloor D_p / 2 \rfloor$ position encoding vector for each row and each column pixel shift.
Hence, the relative positional encoding of a key pixel at position ${\bm{k}}$ with a query pixel at position ${\bm{q}}$ is the concatenation of the row shift embedding ${\bm{\delta}}_1$ and the column shift embedding ${\bm{\delta}}_2$ (where ${\bm{\delta}} = {\bm{k}} - {\bm{q}}$).
We chose $D_p = D_{\textit{out}} = 400$ in the experiment.
We differ from their (unpublished) implementation in the following points:
(\emph{i}) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer \citep{jacobsen2018irevnet} at input,
(\emph{ii}) we use $D_h = D_{\textit{out}}$ instead of $D_h = D_{\textit{out}} / N_h$ backed by our theory that the effective number of learned filters is $\min(D_h, D_{\textit{out}})$.
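A sketch of this encoding scheme is given below (PyTorch, hypothetical module name); row and column shift embeddings are learned separately and concatenated:
\begin{verbatim}
import torch
import torch.nn as nn

class Learned2dRelativeEncoding(nn.Module):
    """r_delta = [row_emb[delta_1] ; col_emb[delta_2]] for shifts in [-max_shift, max_shift]."""
    def __init__(self, d_pos=400, max_shift=15):
        super().__init__()
        self.max_shift = max_shift
        self.row = nn.Embedding(2 * max_shift + 1, d_pos // 2)   # one vector per row shift
        self.col = nn.Embedding(2 * max_shift + 1, d_pos // 2)   # one vector per column shift

    def forward(self, delta):                     # delta: (..., 2) integer shifts k - q
        idx = delta + self.max_shift              # shift the indices to be non-negative
        return torch.cat([self.row(idx[..., 0]), self.col(idx[..., 1])], dim=-1)

enc = Learned2dRelativeEncoding()
r = enc(torch.tensor([[1, -2], [0, 0]]))          # shape (2, 400)
\end{verbatim}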
At first, we discard the input data and compute the attention scores solely as the last term of eq.~(\ref{eq:att_rel}).
The attention probabilities of each head at each layer are displayed on \Cref{fig:learned_attention_map}.
The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the positional encoding scheme from randomly initialized vectors, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma 1 and thus Theorem 1.
At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies.
\begin{figure}
\RawFloats
\centering
\includegraphics[width=.95\linewidth]{plots/learned_attention_maps_small.png}
\caption{Attention probabilities of each head (\emph{column}) at each layer (\emph{row}) using learned relative positional encoding without content-based attention. The central black square is the query pixel. We reordered the heads for visualization and zoomed on the 7x7 pixels around the query pixel.
\vspace{-3mm}}
\label{fig:learned_attention_map}
\vspace*{\floatsep}
\vspace*{\floatsep}
\vspace*{\floatsep}
\includegraphics[width=.95\linewidth]{plots/average_attention_with_content.pdf}
\caption{Attention probabilities for a model with 6 layers (\emph{rows}) and 9 heads (\emph{columns}) using learned relative positional encoding and content-content based attention.
Attention maps are averaged over 100 test images to display head behavior and remove the dependence on the input content.
The black square is the query pixel.
More examples are presented in Appendix~\ref{ssec:appendix_content}.
\vspace{-5mm}}
\label{fig:learned_attention_map_data}
\end{figure}
We move on to a more realistic setting where the attention scores are computed using both positional and content-based attention (i.e., $q^\top k + q^\top r$ in \citep{ramachandran2019standaloneselfattention}) which corresponds to a full-blown standalone self-attention model.
The attention probabilities of each head at each layer are displayed in \Cref{fig:learned_attention_map_data}.
We average the attention probabilities over a batch of 100 test images to outline the focus of each head and remove the dependency on the input image.
Our hypothesis is confirmed for some heads of layer 2 and~3: even when left to learn the encoding from the data, certain self-attention heads only exploit position-based attention to attend to distinct pixels at a fixed shift from the query pixel reproducing the receptive field of a convolutional kernel.
Other heads use more content-based attention (see \Cref{fig:extra_data_1,fig:extra_data_2,fig:extra_data_3} in Appendix for non-averaged probabilities), leveraging an advantage of self-attention over CNNs; this does not contradict our theory.
In practice, it was shown by \cite{belloAttentionAugmentedConvolutional2019} that combining CNN and self-attention features outperforms each taken separately.
Our experiments show that such a combination is learned when optimizing an unconstrained fully-attentional model.
The similarity between convolution and multi-head self-attention is striking when the query pixel is slid over the image:
the localized attention patterns visible in \Cref{fig:learned_attention_map_data} follow the query pixel.
This characteristic behavior materializes when comparing \Cref{fig:learned_attention_map_data} with the attention probabilities at a different query pixel (see \Cref{fig:average_attention_3_3} in Appendix).
Attention patterns in layers 2 and 3 are not only localized but stand at a constant shift from the query pixel, similarly to convolving the receptive field of a convolutional kernel over an image.
This phenomenon is made evident on our interactive website\footnote{\ificlrfinal \href{https://epfml.github.io/attention-cnn}{\tt epfml.github.io/attention-cnn}{} \else URL available after deanonymization, preview at \url{https://drive.google.com/file/d/1METSetroUA2qd2slol9wt7YxucJslAmF/} \fi}.
This tool is designed to explore different components of attention for diverse images with or without content-based attention.
We believe that it is a useful instrument to further understand how MHSA learns to process images.
\section{Related Work}
In this section, we review the known differences and similarities between CNNs and transformers.
The use of CNNs for text---at word level \citep{DBLP:journals/corr/GehringAGYD17} or character level \citep{kim-2014-convolutional}---is less common than that of transformers (or RNNs).
Transformers and convolutional models have been extensively compared empirically on Natural Language Processing and Neural Machine Translation tasks.
It was observed that transformers have a competitive advantage over convolutional models applied to text \citep{vaswani17attentionisallyouneed}.
It is only recently that \cite{belloAttentionAugmentedConvolutional2019,ramachandran2019standaloneselfattention} used transformers on images and showed that they achieve similar accuracy to ResNets.
However, their comparison only covers performance, number of parameters and FLOPS, but not expressive power.
Beyond performance and computational-cost comparisons of transformers and CNN, the study of expressiveness of these architectures has focused on their ability to capture long-term dependencies \citep{dai2019transformerxl}.
Another interesting line of research has demonstrated that transformers are Turing-complete \citep{universalTransformers,perez2019turingcomplete}, which is an important theoretical result but is not informative for practitioners.
To the best of our knowledge, we are the first to show that the class of functions expressed by a layer of self-attention encloses all convolutional filters.
The closest work in bridging the gap between attention and convolution is due to \cite{outerproduct2019andreoli}.
They cast attention and convolution into a unified framework leveraging tensor outer-product.
In this framework, the receptive field of a convolution is represented by a ``basis'' tensor ${\tens{A}}~\in~\mathbb{R}^{K\times K \times H \times W \times H \times W}$.
For instance, the receptive field of a classical $K\times K$ convolutional kernel would be encoded by ${\tens{A}}_{{\bm{\Delta}}, {\bm{q}}, {\bm{k}}} = \mathbbm{1}\{{\bm{k}} - {\bm{q}} = {\bm{\Delta}}\}$ for ${\bm{\Delta}} \in \Delta\!\!\!\!\Delta_K$.
The author distinguishes this \emph{index-based} convolution from \emph{content-based} convolution, where ${\tens{A}}$ is computed from the values of the input, e.g., using key/query dot-product attention.
Our work moves further and presents sufficient conditions for relative positional encoding injected into the input content (as done in practice) to allow \emph{content-based} convolution to express any \emph{index-based} convolution.
We further show experimentally that such behavior is learned in practice.
\section{Conclusion}
\label{sec:discussion}
We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and
that fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content.
More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters---similar to deformable convolutions \citep{dai2017deformable,Zampieri2019masterthesis}.
Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series.
\subsubsection*{Acknowledgments}
Jean-Baptiste Cordonnier is thankful to the Swiss Data Science Center (SDSC) for funding this work.
Andreas Loukas was supported by the Swiss National Science Foundation (project “Deep Learning for Graph Structured Data”, grant number PZ00P2 179981).
\newpage
\section{Introduction}
AutoDock is a protein-ligand docking tool which has now been distributed to more than 29000 users around the world~\cite{trott2010autodock}. The application for this software arises from problems in computer-aided drug design. The ideal procedure would optimize the interaction energy between the substrate and the target protein. AutoDock employs a set of meta-heuristic and local search optimization methods to meet this demand, putting protein-ligand docking at the disposal of non-expert users who often do not know how to choose among the dozens of configurations for such algorithms. This motivated us to bring the benefits of hyperparameter tuning to users of AutoDock.
\par
The global optimization problem arises in many real-world applications, molecular docking among them. A standard continuous optimization problem seeks a parameter vector \( \mathbf x^{*}\) that minimizes an objective function \(f(\mathbf x): \mathbb{R}^{D}\rightarrow \mathbb{R}\), i.e. \(f(\mathbf x^{*}) \leq f(\mathbf x)\) for all \(\mathbf x \in \Omega\), where \(\Omega=\mathbb{R}^{D}\) denotes the search domain (a maximization problem can be obtained by negating \(f\)). Over the years, optimization algorithms have been effectively applied to tackle this problem. These algorithms, however, may need to be fine-tuned, which is a major challenge to their successful application. This is primarily due to the highly nonlinear and complex properties associated with the objective function. In this context, the advantages of algorithm configuration techniques become clear.
\par
The hyperparameter tuning domain has been dominated by model-based optimization \cite{hutter2011sequential}, which adopts probabilistic surrogate models to replace in part the original computationally expensive evaluations. These methods construct computationally cheap-to-evaluate surrogate models in order to provide a fast approximation of the expensive fitness evaluations during the search process.
Surrogate-based techniques try to model the conditional probability \(p(y|\varphi)\) of the outcome \(y\) given an \(m\)-dimensional configuration \(\varphi\), based on \(n\) observations \(\textbf{S}\):
\begin{equation}
\resizebox{.90\hsize}{!}{$\mathbf{S}=\left [ \mathbf{x}^{(1)},...,\mathbf{x}^{(n)} \right ]^{\textup{T}} \in \mathbb{R}^{n\times m}, \mathbf{x}=\left \{ x_{1},...,x_{m} \right \} \in \mathbb{R}^{m}$}
\label{eq_2}
\end{equation}
with the corresponding evaluation metrics \(\mathbf{y}\):
\begin{equation}
\resizebox{.90\hsize}{!}{$\mathbf{y}=\left [y^{(1)},...,y^{(n)} \right ]^{\textup{T}}=\left [y(\mathbf{x}^{(1)}),...,y(\mathbf{x}^{(n)}) \right ]^{\textup{T}} \in \mathbb{R}^{n}$}
\label{eq_3}
\end{equation}
\par
The essential question that arises in model-based algorithms is which individuals should be chosen to be evaluated using the exact fitness function. This is most characterized by making a good balance between the exploration and exploitation capabilities. One of the earliest studies in this direction was performed by Jones et al., who adopted a Kriging surrogate to fit the data and make a balance between exploration and exploitation \cite{jones1998efficient}. The exploration property of their proposed efficient global optimization (EGO) algorithm is enhanced by the fact that the expected improvement (EI) (an acquisition function) is conditioned on points with large uncertainty and low values of the surrogate. In another study, Booker et al. \cite{booker1999rigorous} sought out a balanced search strategy by taking into account sample points with low surrogate predictions and high mean square error. Moreover, Wang et al. \cite{wang2004mode} introduced the mode-pursuing approach, which favors trial points with low surrogate values according to a probability function. Regis and Shoemaker \cite{regis2005constrained} also put forward an approach according to which the next candidate point is chosen to be the one that minimizes the surrogate value subject to a distance constraint from previously evaluated points. The distance starts from a high value (global search) and ends with a low value (local search). They also proposed the Stochastic Response Surface (SRS) \cite{regis2007stochastic} algorithm, which cycles from emphasis on the objective to emphasis on the distance using a weighting strategy. The SRS, moreover, mitigated some of the requirements for inner acquisition function optimization. For this reason, we focused on proposing a new algorithm configuration approach based on the SRS model. The proposed algorithm, called MO-SRS, modifies the SRS so as to be able to handle multi-objective problems. More precisely, MO-SRS is equipped with the idea of multi-objective particle swarm optimization (PSO)\cite{1004388}. We used the MO-SRS to balance the intermolecular energy and the Root Mean Square Deviation (RMSD) during the hyperparameter optimization in Autodock.
\par
The rest of the paper is organized as follows. Section II provides a brief review of related work. Section III gives a brief description of the docking problem. Section IV elaborates the technical details of our proposed approach. In Section V, the performance of the introduced components is investigated by conducting a set of experiments. The last section summarizes the paper and draws conclusions.
\section{Related works}
Sequential Model-based Algorithm Configuration (SMAC) \cite{hutter2011sequential}, Hyperband \cite{li2017hyperband}, Spearmint \cite{snoek2012practical}, F-race \cite{birattari2010f} and the Tree-structured Parzen Estimator (TPE) \cite{bergstra2011algorithms} are examples of well-known methods for hyperparameter optimization. SMAC adopts a random forest model and Expected Improvement (EI) to compute \(p(y|\varphi)\). Similarly, TPE \cite{bergstra2011algorithms} defines a configuration algorithm based on a tree-structured Parzen estimator and EI. To tackle the \textit{curse of dimensionality}, TPE assigns particular values to the hyperparameters which are known to be irrelevant given the values of the other elements. Ilievski et al. \cite{ilievski2017efficient} proposed a deterministic method which employs dynamic coordinate search and radial basis functions (RBFs) to find the most promising configurations. By using RBFs \cite{park1991universal} as the surrogate model, they mitigated some of the requirements for inner acquisition function optimization. In another work \cite{snoek2015scalable}, the authors put forward neural networks as an alternative to Gaussian processes for modeling distributions over functions. Interestingly, \textit{Google} introduced \textit{Google Vizier} \cite{golovin2017google}, an internal service for surrogate-based optimization which incorporates Batched Gaussian Process Bandits along with the EI acquisition function.
\par
Although the above-mentioned approaches have
been proven successful, they are not able to handle several objective functions during the tuning process. Consequently, this paper presents a novel multi-objective algorithm which integrates the idea of multi-objective meta-heuristics with the SRS method to find a subset of promising configurations.
\section{Molecular docking}
Molecular docking enables us to find an optimal conformation between a ligand $ L $ and a receptor $ R $. In other words, it predicts the preferred orientation of $ L $ with respect to $ R $ when the two are bound to each other in order to form a stable complex. A schematic example of this procedure is illustrated in Fig.~1. This problem can be formulated as a multi-objective problem consisting of minimizing the RMSD score and the binding energy $ E_{inter} $. In Autodock, the energy function $ E_{inter} $ is defined as follows \cite{trott2010autodock}:
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.35\textwidth]{Docking_representation_2.png}}
\caption{Schematic example of docking a ligand to a protein}
\vspace{-0.5cm}
\end{figure}
\begin{equation}
E_{inter}=Q_{bound}^{R-L}+Q_{unbound}^{R-L}
\end{equation}
\begin{multline*}
Q=W_{vdw}\sum_{i,j} \left ( \frac{A_{ij}}{r_{ij}^{12}}-\frac{B_{ij}}{r_{ij}^{6}} \right )+
\\
W_{hbond}\sum_{i,j} E(t) \left ( \frac{C_{ij}}{r_{ij}^{12}}-\frac{D_{ij}}{r_{ij}^{10}} \right )+
\\
W_{elec}\sum_{i,j} \frac{q_{i}q_{j}}{\varepsilon (r_{ij})r_{ij}}+W_{sol}\sum_{i,j}(S_{j}V_{i}+S_{i}V_{j})e^{(-\frac{r_{ij}^{2}}{2\sigma^{2} })}
\end{multline*}
In (3),
$ Q_{bound}^{R-L} $ and $ Q_{unbound}^{R-L} $ represent the states of the ligand-protein complex in the bound and unbound modes, respectively.
The pairwise energetic terms account for dispersion/repulsion ($ vdw $), electrostatics ($ elec $), desolvation ($ sol $) and hydrogen bonding ($ hbond $). The constant weights $ W_{vdw} $, $ W_{elec} $, $ W_{hbond} $ and $ W_{sol} $ correspond to Van der Waals, electrostatic, hydrogen-bond and desolvation interactions, respectively. Moreover, $ r_{ij} $ is the interatomic distance, $ A_{ij} $, $ B_{ij} $, $ C_{ij} $ and $ D_{ij}$ denote Lennard-Jones parameters, and the function $ E(t) $ provides the angle-dependent directionality of hydrogen bonds. Also, $ V $ is the volume of the atoms that surround a given atom, weighted by a solvation parameter ($ S $). For a detailed report of all the variables please refer to \cite{morris2009autodock4}.
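For illustration only, the pairwise terms of (3) can be evaluated as in the sketch below (NumPy, hypothetical function name). The weights shown are indicative AutoDock4-style defaults, and the dielectric function is a simple placeholder rather than the sigmoidal model actually used by Autodock:
\begin{verbatim}
import numpy as np

def pairwise_energy(r, A, B, C, D, E_t, q_i, q_j, S_i, S_j, V_i, V_j,
                    W_vdw=0.1662, W_hbond=0.1209, W_elec=0.1406, W_sol=0.1322,
                    eps=lambda r: 4.0 * r, sigma=3.5):
    """Illustrative evaluation of the pairwise terms in Eq. (3) for arrays of atom pairs."""
    vdw   = W_vdw   * np.sum(A / r**12 - B / r**6)
    hbond = W_hbond * np.sum(E_t * (C / r**12 - D / r**10))
    elec  = W_elec  * np.sum(q_i * q_j / (eps(r) * r))
    sol   = W_sol   * np.sum((S_j * V_i + S_i * V_j) * np.exp(-r**2 / (2 * sigma**2)))
    return vdw + hbond + elec + sol
\end{verbatim}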
\section{The proposed method}
This section discusses in detail the main components of the proposed MO-SRS. We extend the idea of the stochastic RBF method to the multi-objective case so as to make it suitable for the Autodock application. Compared to evolutionary algorithms like the genetic algorithm, MO-SRS needs less computational time by virtue of surrogate modeling techniques. A workflow of the introduced approach is illustrated in Fig 2.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.35\textwidth]{Untitled_New.png}}
\caption{Workflow of the introduced automatic hyperparameter tuning approach for Autodock}
\vspace{-0.5cm}
\end{figure}
The MO-SRS starts the optimization procedure by generating a set of random configurations (i.e., the initial population). During this phase, it is possible to miss a considerable portion of the promising area due to the high dimensionality of the configuration space. It should be noticed that we have a small and fixed computational budget, and increasing the size of the initial population cannot remedy the issue. Furthermore, it is crucial for a model-based algorithm to efficiently explore the search space so as to approximate the nonlinear behavior
of the objective function. Design of Computer Experiments (DoCE) methods are often used to partially mitigate the high dimensionality of the search space. Among them, MO-SRS utilizes Latin Hypercube Sampling (LHS) \cite{mckay1979comparison}, a DoCE method that provides a uniform cover of the search space using a minimal number of samples. The main advantage of LHS is that it does not require an increased initial population size for higher dimensions.
As the next step, we evaluate all the generated configurations \(\mathbf{x}_{i}\ (i=1,2,\cdots,n)\) using Autodock to yield the outcomes \(y_{i}^{(1)}=f^{(1)}(\mathbf{x_{i}})\) and \(y_{i}^{(2)}=f^{(2)}(\mathbf{x_{i}})\). Here, $ f^{(1)} $ and $ f^{(2)} $ denote the energy and RMSD objective functions, respectively. Thereafter, we adopt surrogate modeling techniques to approximate these functions using this data set.
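A minimal sketch of this initialization and evaluation step is shown below (Python rendition; the experiments in Section V were run in Matlab). The bounds correspond to the GA hyperparameters of Table I, \texttt{run\_autodock} is a hypothetical wrapper that performs one docking run and returns the energy and RMSD, and integer or binary parameters would be rounded before use:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

lower = np.array([50, 0, 0.2, 0.2])      # ga_pop_size, ga_elitism, mutation rate, crossover rate
upper = np.array([500, 1, 0.99, 0.99])

sampler = qmc.LatinHypercube(d=len(lower), seed=0)
X = qmc.scale(sampler.random(n=20), lower, upper)      # n_0 = 20 initial configurations

# y1, y2 = np.array([run_autodock(x) for x in X]).T    # energy and RMSD per configuration
\end{verbatim}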
\par
Surrogate models offer a set of mathematical tools for predicting the output of an expensive objective function \(f\). In particular, they try to predict the fitness function \(f\) at any unseen input vector \(\hat{x}\) from the computed data points \((x_{i},y_{i})\). Given a solution \(\boldsymbol{\hat{x}}\) and an objective function \(f\), a surrogate model \(\tilde{f}\) can be defined as in (4), where \(\epsilon \) is the approximation error.
\begin{equation}
\tilde{f}(\boldsymbol{\hat{x}})=f(\boldsymbol{\hat{x}})+\epsilon \label{eq}
\end{equation}
\par
Among different surrogate models, we utilized the Radial Basis Function (RBF), which is a good model for high-dimensional problems \cite{diaz2011selection,park1991universal}. It is an
interpolation method for scattered multivariate data which considers all the sample points. In this light, it introduces linear combinations based on an RBF \(h(\mathbf{x})\) to approximate the desired response function \(f\), as presented in (5).
\begin{equation}
\tilde{f}(\hat{\mathbf{x}})=\sum_{i=1}^{n} w_{i} h(\left | \hat{\mathbf{x}}-\mathbf{x}_{i} \right |) + \mathbf{b}^{\top}\hat{\mathbf{x}}+a \label{eq}
\end{equation}
In (5), \(w_{i}\) is the \(i\)-th unknown weight coefficient, \(h\) is a radial basis function and \(\hat{\mathbf{x}}\) is an unseen point. A radial function \(h: \mathbb{R}^{m} \rightarrow \mathbb{R}\) has the property \(h(\mathbf{x})=h(\left \| \mathbf{x} \right \|)\). Given a suitable kernel \(h\), the unknown
parameters \(a, \mathbf{b}\) and \(\mathbf{w}\) can be obtained by solving the
following system of linear equations:
\begin{equation}
\mathbf{}
\begin{pmatrix}
\mathbf{\Phi} & \mathbf{P}\\
\mathbf{P}^{\top}& 0
\end{pmatrix} \begin{pmatrix}
\mathbf{w}
\\
\mathbf{c}
\end{pmatrix}=\begin{pmatrix}
\mathbf{y}
\\
0
\end{pmatrix}
\label{eq}
\end{equation}
Here, \(\mathbf{\Phi}\) is an \( n \times n \) matrix with \(\Phi_{i,j}=h(\left \| \mathbf{x}_{i} - \mathbf{x}_{j} \right \|)\), \(\mathbf{c}=(\mathbf{b}^{\top}, a)^{\top}\) and
\begin{equation}
\mathbf{}
\mathbf{P}^{\top}=\begin{pmatrix}
\mathbf{x}_{1} & \mathbf{x}_{2} & \cdots & \mathbf{x}_{n} \\
1 & 1 & \cdots & 1
\end{pmatrix}
\label{eq}
\end{equation}
The linear system (6) has a unique solution and it can be used to approximate function values.
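A compact NumPy sketch of this fit (hypothetical function name \texttt{fit\_cubic\_rbf}) solves system (6) directly and returns a predictor implementing (5):
\begin{verbatim}
import numpy as np

def fit_cubic_rbf(X, y):
    """Cubic RBF interpolant with a linear polynomial tail, obtained by solving system (6)."""
    n, m = X.shape
    Phi = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1) ** 3    # h(r) = r^3
    P = np.hstack([X, np.ones((n, 1))])
    A = np.block([[Phi, P], [P.T, np.zeros((m + 1, m + 1))]])
    sol = np.linalg.solve(A, np.concatenate([y, np.zeros(m + 1)]))
    w, c = sol[:n], sol[n:]                                              # weights and (b, a)

    def predict(x_new):
        phi = np.linalg.norm(x_new - X, axis=-1) ** 3
        return phi @ w + np.append(x_new, 1.0) @ c
    return predict

f_hat = fit_cubic_rbf(np.random.rand(30, 4), np.random.rand(30))
print(f_hat(np.random.rand(4)))
\end{verbatim}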
As opposed to the standard SRS method, we have two main objective functions which should be optimized: the intermolecular energy and the RMSD. Hence, we repeat the above-mentioned procedure for both objective functions. More precisely, we will train two surrogate models $ \hat{f}^{(1)} $ and $ \hat{f}^{(2)} $ to approximate the energy and RMSD outputs, respectively.
Now we are ready to use the trained surrogate models inside a pre-selection strategy. Accordingly, we generate a set of $ N $ candidates to be evaluated using $ \hat{f}^{(1)} $ and $ \hat{f}^{(2)}$. In the single-objective SRS, the following equation generates neighborhoods of the current best solution $\boldsymbol{x}_{best}$ using a randomly produced vector $ \textup{{v}} \in \mathbb{R}^{m} $ and an adaptive parameter $ \gamma $:
\begin{equation}
\boldsymbol{\sigma}_{j}=\boldsymbol{x}_{best}+\gamma \otimes \textup{{v}}, j=1... N\label{eq}
\end{equation}
Generally speaking, in multi-objective optimization the best solutions need to be chosen based on both objectives and there is no single best solution. Hence, we cannot directly apply the standard search operators of the SRS. Moreover, we have to update our surrogate models $ \hat{f}^{(1)} $ and $ \hat{f}^{(2)}$ based on a best obtained solution, which is another problem for the MO-SRS. With this in mind, we borrowed the idea of leader selection from the multi-objective PSO algorithm \cite{1004388}, which handles the same problem (the update equation of the PSO also depends on the global best solution, and finding a leader particle is an important task). Following \cite{1004388}, MO-SRS first builds an archive which contains the \textit{Pareto-optimal} solutions. A solution vector is \textit{Pareto-optimal} if there is no other solution that dominates it. In mathematical terms, a solution $x_{1}$ is said to dominate another solution $x_{2}$ if:
\begin{itemize}
\item$ f_{i}(x_{1}) \leqslant f_{i}(x_{2}) $ for all $ i \in \left \{ 1,2,...,k \right \} $ and
\item $ f_{i}(x_{1}) < f_{i}(x_{2}) $ for at least one $ i \in \left \{ 1,2,...,k \right \} $
\end{itemize}
The MO-SRS divides the objective space into hypercubes and assigns a fitness value to each hypercube based on the number of Pareto-optimal solutions that lie in it. Then, it employs the roulette-wheel to find the superior hypercube. A randomly selected solution from that hypercube is determined to be the best solution $ \boldsymbol{x}_{best} $.
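The dominance test and the hypercube-based leader selection can be summarized by the following sketch (NumPy, hypothetical function names, two objectives to be minimized):
\begin{verbatim}
import numpy as np

def pareto_front(F):
    """Indices of non-dominated rows of F; each row holds (energy, RMSD) of one solution."""
    nd = []
    for i, f in enumerate(F):
        dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
        if not dominated:
            nd.append(i)
    return np.array(nd)

def select_leader(F, n_bins=5, rng=np.random.default_rng(0)):
    """Pick x_best from a hypercube chosen by roulette wheel, favouring sparse cells."""
    front = pareto_front(F)
    cells = np.floor((F[front] - F[front].min(0)) /
                     (np.ptp(F[front], 0) + 1e-12) * (n_bins - 1)).astype(int)
    keys, inverse, counts = np.unique(cells, axis=0, return_inverse=True, return_counts=True)
    probs = (1.0 / counts) / np.sum(1.0 / counts)
    cell = rng.choice(len(keys), p=probs)
    return front[rng.choice(np.flatnonzero(inverse == cell))]

F = np.random.rand(50, 2)
print(select_leader(F))                    # index of the chosen x_best in F
\end{verbatim}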
All the $N$ candidates are evaluated by the previously trained surrogates $ \hat{f}^{(1)} $ and $ \hat{f}^{(2)} $. Again, we select the best solution among them according to the above-mentioned strategy. This best candidate is then evaluated with the exact objective functions, and the new observation is used to update our surrogate models.
All the aforementioned steps continue until some stopping criteria are met. To sum up, the MO-SRS approach is summarized in Fig. 3. The MO-SRS will be available soon on https://github.com/ML-MHs/Auto-Autodock.
\begin{algorithm}[H]
\centering \normalsize
\caption{MO-SRS} \label{alg:MyAlgorithm}
\begin{spacing}{0.8}
\begin{algorithmic}[0.6]
\State $\text{Use LHS to initialize a population } \boldsymbol{X} \leftarrow \left \{ \mathbf{x}_{1},\cdots,\mathbf{x}_{n_{0}} \right \} $
\State $n \gets n_{0}$
\State $y_{i}^{(1)}=f^{(1)}(\mathbf{x_{i}}): i\leftarrow 1\cdots n $
\State $y_{i}^{(2)}=f^{(2)}(\mathbf{x_{i}}): i\leftarrow 1\cdots n $
\Repeat
\State Find $ \textit{Pareto-optimal } P \text{ using } y_{i}^{(1)} \text{ and } y_{i}^{(2)} $
\State Set $ \boldsymbol{x}_{best} $ using $ P $ and described hypercube technique
\State Fit surrogate $ \hat{f}^{(1)}$ by $ \mathcal{B}_{n} \leftarrow \left \{ \left ( x_{i},y_{i}^{(1)} \right ) :i\leftarrow 1\cdots n \right \} $
\State Fit surrogate $ \hat{f}^{(2)}$ by $ \mathcal{C}_{n} \leftarrow \left \{ \left ( x_{i},y_{i}^{(2)} \right ) :i\leftarrow 1\cdots n \right \} $
\State Generate new solutions $\boldsymbol{\sigma}_{j} : j\leftarrow 1\cdots N $ as in (8)
\State Apply the bound constraints on each solution $\boldsymbol{\sigma}_{j}$
\State $\hat{y}_{j}^{(1)}=\hat{f}^{(1)}(\mathbf{\sigma_{j}}): j\leftarrow 1\cdots N $
\State $\hat{y}_{j}^{(2)}=\hat{f}^{(2)}(\mathbf{\sigma_{j}}): j\leftarrow 1\cdots N $
\State Find $ \textit{Pareto-optimal } \hat{P} \text{ using } \hat{y}^{(1)} \text{ and } \hat{y}^{(2)} $
\State Set $ \boldsymbol{x}_{n+1} $ using $ \hat{P} $ and described hypercube technique
\State $y_{n+1}^{(1)}=f^{(1)}(\mathbf{x_{n+1}})$
\State $y_{n+1}^{(2)}=f^{(2)}(\mathbf{x_{n+1}})$
\State $\mathcal{B}_{n+1}\leftarrow \mathcal{B}_{n} \cup \left \{ (\mathbf{x}_{n+1},y_{n+1}^{(1)}) \right \} $
\State $\mathcal{C}_{n+1}\leftarrow \mathcal{C}_{n} \cup \left \{ (\mathbf{x}_{n+1},y_{n+1}^{(2)}) \right \} $
\State $n\leftarrow n+1$
\Until{stopping criteria are met}
\end{algorithmic}
\end{spacing}
\end{algorithm}
\begingroup
\centerline{Fig. 3. Pseudocode of the MO-SRS method}
\endgroup
\section{Experimental results}
In this section, the performance of the MO-SRS hyperparameter tuning method on the Autodock task is investigated. We considered two docking scenarios from the Autodock documentation, 1DWD and HSG1, in which both objectives should be minimized. All simulations are performed using Matlab. We adopted the same implementation and parameter configuration for the SRS as suggested in \cite{muller2014matsumoto}. The cubic function is used as the kernel in the RBF.
\par
We applied the MO-SRS to tune the hyperparameters of Autodock for the genetic algorithm (GA) \cite{morris1998automated}, simulated annealing (SA) \cite{kirkpatrick1983optimization}, a local search (LS) called pseudo-Solis-Wets \cite{trott2010autodock} and a hybrid method (HB) which combines the GA and LS. The GA is a well-known evolutionary optimization algorithm which is highly sensitive to the initial values of its parameters. The performance of the GA depends on the structure of the problem at hand and does not scale well with complexity. Consequently, GA parameters like the crossover rate, population size and mutation rate should be chosen with care. A small population size will lead to premature convergence, while using a large population increases the computational cost. The configuration tuning of the GA becomes even more challenging when there is a correlation between its hyperparameters. For example, mutation is more effective with smaller population sizes, while crossover is likely to benefit from large populations. All the mentioned reasons make the GA a suitable algorithm for benchmarking the performance of MO-SRS. The same situation applies to the considered LS and SA algorithms. From another point of view, the adopted algorithms allow us to measure the search performance of the MO-SRS under different dimensionalities: the LS and GA give rise to low-dimensional problems, the SA to a medium-dimensional one, and the HB algorithm to a high-dimensional hyperparameter tuning problem. Detailed information on the optimized configurations for the GA, SA and LS is presented in Tables I-III, respectively.
\par
The obtained results are reported in Tables IV and V. The MO-SRS should achieve a reliable search performance using a limited computational budget, so the number of evaluations is set to 100. For each algorithm, the search bounds are reported in Tables I-III. To reduce the influence of stochastic error, experiments are repeated 10 times for each problem. In Tables IV-V, the \textit{tuned} prefix denotes the algorithms optimized using the MO-SRS. The MO-SRS offers a set of final solutions for each of the cases, and we used a \textit{Pareto-optimal} graph to illustrate such solutions, as depicted in Fig. 4. However, it should be noticed that the results offered by the default methods in Autodock are based on single-objective algorithms. For this reason, we compared the obtained results of the MO-SRS according to each of those objectives. The subindex 1 in Tables IV-V shows the performance of the compared algorithms for the binding energy, and subindex 2 denotes the same for the RMSD criterion. The results are averaged over 10 runs. The energy unit is kcal/mol and the RMSD unit is Angstroms.
\par
The results of the first case study are presented in Table IV. As can be seen, MO-SRS converges closer to the global optimum. In the case of the LS method, we can see that the tuned algorithm yields a minimized energy of -17.1530 and an RMSD of 0.0860. In the docking domain, solutions with RMSD $ < $ 1.5 Angstroms are generally considered acceptable. Similarly, MO-SRS provides more accurate results for the second scenario, according to the results in Table V.
\par
In these tables, we can see how the introduced method optimizes more than one objective simultaneously.
In this regard, there are two main points. The first one is that the diversity of the obtained solutions strongly depends on the performance of the algorithm at hand. For example, LS performs better than the GA and consequently the \textit{Pareto-optimal} set for LS contains more solutions. The second point is that all \textit{Pareto-optimal} solutions are obtained after only 100 evaluations. This confirms our hypothesis that MO-SRS can effectively be used to tune the hyperparameters of Autodock. The main components of MO-SRS are the adopted multi-objective and surrogate modeling techniques, which can be further improved in future works.
\begin{table}[H]
\centering \small
\captionsetup{font=scriptsize}
\caption{Details for the optimized hyperparameters of GA}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{|l|l|l|l|}
\hline
Hyperparameter & Type & Range & Default \\ \hline
seed & Integer & {[}0,10000000{]} & Random \\ \hline
ga\_pop\_size & Integer & {[}50,500{]} & 150 \\ \hline
ga\_elitism & Binary & 0,1 & 1 \\ \hline
ga\_mutation\_rate & Continuous & {[}0.2,0.99{]} & 0.02 \\ \hline
ga\_crossover\_rate & Continuous & {[}0.2,0.99{]} & 0.80 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\begin{table}[H]
\centering \small
\captionsetup{font=scriptsize}
\caption{Details for the optimized hyperparameters of SA}
\vspace{-0.2cm}
\begin{center}
\scalebox{0.70}{
\begin{tabular}{|l|l|l|l|}
\hline
Hyperparameter & Type & Range & Default \\ \hline
seed & Integer & {[}0,10000000{]} & Random \\ \hline
tstep & Continuous & {[}-2.0,2.0{]} & 2.0 \\ \hline
qstep & Continuous & {[}-5.0,5.0{]} & 2.0 \\ \hline
dstep & Continuous & {[}-5.0,5.0{]} & 2.0 \\ \hline
rtrf & Continuous & {[}0.0001,0.99{]} & 0.80 \\ \hline
trnrf & Continuous & {[}0.0001,0.99{]} & 1.0 \\ \hline
quarf & Continuous & {[}0.0001,0.99{]} & 1.0 \\ \hline
dihrf & Continuous & {[}0.0001,0.99{]} & 1.0 \\ \hline
accs & Integer & {[}100,30000{]} & 30000 \\ \hline
rejs & Integer & {[}100,30000{]} & 30000 \\ \hline
linear\_schedule & Binary & 0,1 & 1 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\begin{table}[H]
\centering \small
\captionsetup{font=scriptsize}
\caption{Details for the optimized hyperparameters of the adopted local search}
\vspace{-0.2cm}
\begin{center}
\scalebox{0.6}{
\begin{tabular}{|l|l|l|l|}
\hline
Hyperparameter & Type & Range & Default \\ \hline
seed & Integer & {[}0,10000000{]} & Random \\ \hline
sw\_max\_its & Integer & {[}100,1000{]} & 300 \\ \hline
sw\_max\_succ & Integer & {[}2,10{]} & 4 \\ \hline
sw\_max\_fail & Integer & {[}2,10{]} & 4 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\vspace{-0.7cm}
\begin{table}[H]
\centering \small
\captionsetup{font=scriptsize}
\caption{The average docking results of the SA, GA, LS and HB algorithms after 10 runs for the first case study.}
\vspace{-0.2cm}
\begin{center}
\scalebox{0.6}{
\begin{tabular}{|l|l|l|}
\hline
Algorithm \hspace{1cm} & Energy \hspace{1cm} & RMSD \hspace{1cm} \\ \hline
Tuned $ \text{SA}^{*}_{1} $ & -13.4990 & 4.0450 \\
Tuned $ \text{SA}^{*}_{2} $ & -10.9280 & 3.4140 \\
$ \text{SA}_{1} $ & -10.9260 & 3.5460 \\
$ \text{SA}_{2} $ & -10.9260 & 3.5460 \\ \hline
Tuned $ \text{GA}^{*}_{1} $ & -13.7110 & 3.4960 \\
Tuned $ \text{GA}^{*}_{2} $ & -11.9070 & 2.4550 \\
$ \text{GA}_{1} $ & -10.3610 & 5.2730 \\
$ \text{GA}_{2} $ & -10.1120 & 3.2880 \\ \hline
Tuned $ \text{LS}^{*}_{1} $ & -17.4270 & 0.2190 \\
Tuned $ \text{LS}^{*}_{2} $ & -17.1530 & 0.0860 \\
$ \text{LS}_{1} $ & -17.2990 & 0.1980 \\
$ \text{LS}_{2} $ & -17.1920 & 0.1400 \\ \hline
Tuned $ \text{HB}^{*}_{1} $ & -13.7110 & 3.4960 \\
Tuned $ \text{HB}^{*}_{2} $ & -11.9070 & 2.4550 \\
$ \text{HB}_{1} $ & -12.6610 & 2.6910 \\
$ \text{HB}_{2} $ & -12.6610 & 2.6910 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\vspace{-0.7cm}
\begin{table}[H]
\centering \small
\captionsetup{font=scriptsize}
\caption{The average docking results of the SA, GA, LS and HB algorithms after 10 runs for the second case study.}
\vspace{-0.2cm}
\begin{center}
\scalebox{0.6}{
\begin{tabular}{|l|l|l|}
\hline
Algorithm \hspace{1cm} & Energy \hspace{1cm} & RMSD \hspace{1cm} \\ \hline
Tuned $ \text{SA}^{*}_{1} $ & -15.0124 & 6.2430 \\
Tuned $ \text{SA}^{*}_{2} $ & -13.0790 & 6.0120 \\
$ \text{SA}_{1} $ & -14.0790 & 6.5380 \\
$ \text{SA}_{2} $ & -12.9610 & 6.4640 \\ \hline
Tuned $ \text{GA}^{*}_{1} $ & -12.728 & 6.722 \\
Tuned $ \text{GA}^{*}_{2} $ & -12.728 & 6.722 \\
$ \text{GA}_{1} $ & -14.0030 & 6.6350 \\
$ \text{GA}_{2} $ & -12.1460 & 6.4200 \\ \hline
Tuned $ \text{LS}^{*}_{1} $ & -15.6910 & 6.6340 \\
Tuned $ \text{LS}^{*}_{2} $ & -13.4850 & 6.4690 \\
$ \text{LS}_{1} $ & -14.8500 & 6.6480 \\
$ \text{LS}_{2} $ & 0.1520 & 6.3570 \\ \hline
Tuned $ \text{HB}^{*}_{1} $ & -16.9080 & 6.6600 \\
Tuned $ \text{HB}^{*}_{2} $ & -15.6030 & 6.4730 \\
$ \text{HB}_{1} $ & -16.0210 & 6.7040 \\
$ \text{HB}_{2} $ & -15.8590 & 6.6160 \\ \hline
\end{tabular}}
\end{center}
\end{table}
\section{Conclusion}
A typical docking scenario in Autodock involves applying optimization algorithms, and users must select a set of appropriate hyperparameters to maximize the quality of the final results. As this is often beyond the abilities of novice users, we developed a multi-objective approach for automatically tuning the highly parametric algorithms of Autodock. Automating the end-to-end tuning process with the proposed MO-SRS yields more accurate solutions. The experimental results clearly show that the results obtained by MO-SRS outperform hand-tuned models. We hope that the introduced MO-SRS helps non-expert users to apply Autodock more effectively to their applications.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
In the last decade, surveys at sub-millimetre (sub-mm) wavelengths have revolutionized our understanding of the formation and evolution of galaxies by revealing an unexpected population of high-redshift, dust-obscured galaxies called sub-mm galaxies (SMGs) which are forming stars at a tremendous rate \citep[i.e. star formation rate, SFR$\gtrsim1000~M_\odot yr^{-1}$; ][]{Bla99}. Data collected before the advent of the European \textit{Herschel} Space Observatory \citep[\textit{Herschel}; ][]{Pil10} and the South Pole Telescope \citep{Cal11}, suggested that the number density of SMGs drops off abruptly at relatively bright sub-mm flux densities ($\sim50$ mJy at $500~\mu m$), indicating a steep luminosity function and a strong cosmic evolution for this class of sources.
Several authors have argued that the bright tail of the sub-mm number counts may contain a significant fraction of strongly-lensed galaxies \citep[SLGs;][]{Bla96,Neg07}. The \textit{Herschel} Multi-tiered Extragalactic Survey \citep[HerMES; ][]{Oli12} and the \textit{Herschel} Astrophysical Terahertz Large Area Survey \citep[H-ATLAS;][]{Eal10} are wide-field surveys ($\sim\!380$ deg$^2$ and $\sim\!610$ deg$^2$, respectively) conducted by the \textit{Herschel} satellite. Thanks to their sensitivity and frequency coverage both surveys have led to the discovery of several lensed SMGs (\citet{Neg17} and references therein). The selection of SLGs at these wavelengths is made possible by the steep number counts of SMGs \citep{Bla96,Neg07}; in fact, almost only those galaxies whose flux density has been boosted by an event of lensing can be observed above a certain threshold, namely $\sim\!100$ mJy at $500~\mu m$. Similarly, at mm wavelengths, the SPT survey has already discovered several tens of SLGs \citep[e.g.][]{Vie13,Spi16} and other lensing events have been found in the Planck all-sky surveys \citep{Can15,Har16}.
With \textit{Herschel} data, \citet{Neg10} produced the first sample of five SLGs by means of a simple selection in flux density at $500~\mu m$. Preliminary source catalogues derived from the full H-ATLAS were then used to identify the sub-mm brightest candidate lensed galaxies for follow-up observations with both ground-based and space telescopes to measure their redshifts (see \citet{Neg17}, with 80 SLG candidates, and references therein) and confirm their nature \citep{Neg10,Bus12,Bus13,Fu12,Cal14}. Using the same methodology, i.e. a cut in flux density at $500~\mu m$, \citet{War13} identified 11 SLGs over 95 deg$^2$ of HerMES, while, more recently, \citet{Nay16} published a catalogue of 77 candidate lensed galaxies with $S_{500~\mu m}\gtrsim100$ mJy extracted from the HerMES Large Mode Survey \citep[HeLMS; ][]{Oli12} and the \textit{Herschel} Stripe 82 Survey \citep[HerS; ][]{Vie14}, over an area of 372 deg$^2$. Altogether, the extragalactic surveys carried out with \textit{Herschel} are expected to deliver a sample of $\sim\!200$ sub-mm bright SLGs.
Moreover, as argued by \citet{GN12}, this number might increase to over a thousand if the selection is based on the steepness of the luminosity function of SMGs \citep{Lap12} rather than that of the number counts. The HALOS (\textit{Herschel}--ATLAS Lensed Objects Selection) method relies on the fact that SLGs tend to dominate the brightest end of the high-$z$ luminosity function. This method was demonstrated by looking for close associations (within 3.5 arcsec) with VIKING galaxies \citep{Fle12} that may qualify as being the lenses, after a primary selection based on \textit{Herschel} photometry ($S_{350\mu m}>85$mJy, $S_{250\mu m}>35$mJy, $S_{350\mu m}/S_{250\mu m}>0.6$ and $S_{500\mu m}/S_{350\mu m}>0.6$). To be conservative, the candidates were further restricted to objects whose VIKING counterparts have redshifts $z>0.2$. After comparing both SLG candidate lists, it was shown that about 70\% of SMGs with luminosities in the top 2 per cent were also identified with the second method.
Although the HALOS method is a step forward in increasing the number of SLG candidates, its conclusions are based on a sample with very restrictive selection criteria, which makes it difficult to extrapolate its performance to a more general case. Moreover, the main parameter of the method, the top luminosity percentile, does not have a clear optimal value, and the choice of such a value makes the method rather subjective.
For the above reasons, and taking into account the slow pace of confirmation by follow-up campaigns, in this work we propose a new methodology based on the similarity between the probability distributions of pairs of galaxies from two different catalogues (one for the potential foreground galaxies acting as lenses and another for the potential background sources), associated with a set of observables such as the redshift or the angular separation. The characteristics of this method make it more objective and easily reproducible, providing a final probability-ranked list of SLG candidates. Moreover, with very few initial constraints, the statistical properties of the SLG candidates are not biased and can be studied statistically before the observational confirmation of each individual case. The data sets and the initial selection criteria are presented in Section \ref{sec:cats}. The general methodology is discussed in Section \ref{sec:methodology}, while the details of the particular implementation of the general methodology for the identification of SLGs and the main results are described in Section \ref{sec:shalos}. Some of the statistical properties of the SHALOS SLG candidates are estimated and discussed in Section \ref{sec:induction}. Finally, the main conclusions are presented in Section \ref{sec:conclusions}.
\section{Data}
\label{sec:cats}
In this work we use the official H-ATLAS catalogues, the largest area extragalactic survey carried out by the \textit{Herschel} space observatory \citep{Pil10}. With its two instruments PACS \citep[Photoconductor Array Camera and Spectrometer;][]{Pog10} and SPIRE \citep[Spectral and Photometric Imaging Receiver;][]{Gri10} operating between 100 and 500 $\mu$m, it covers about 610 deg$^{2}$. The survey is comprised of five different fields, three of which are located on the celestial equator \citep[GAMA fields or G09, G12 and G15;][]{Val16,Bou16,Rig11,Pas11,Iba10} covering in total an area of 161.6 deg$^2$. The other two fields are centred on the North and South Galactic Poles \citep[NGP and SGP fields;][]{Smi17,Mad18, Fur18} covering areas of 180.1 deg$^2$ and 317.6 deg$^2$, respectively. As described in detail in \citet{Bou16}, for the GAMA fields, and \citet{Fur18}, for the NGP field, a likelihood ratio method was used to identify counterparts in the Sloan Digital Sky Survey \citep[SDSS; ][]{Aba09} within a search radius of 10 arcsec of the H-ATLAS sources with a $4\sigma$ detection at $250~\mu m$. We were not able to use the SGP field in this work because there is no overlap with the SDSS survey.
We are going to focus on those sources with a cross-matched optical counterpart and, therefore, there is an implicit initial selection criterion of a $4\sigma$ detection at $250~\mu m$. In addition, we discard sources flagged as stars and those galaxies without an optical redshift estimation.
\subsection{Sub-mm photometric redshifts}
\label{sec:photoz}
Photometric redshifts are provided in the H-ATLAS catalogues, but they are based on the cross-matched optical information. This means that, if the cross-matched sources are in fact different galaxies, the estimated redshifts tend to correspond to the one at lower redshift. We have used the spectroscopic redshift when available.
To have an independent estimation of the redshift for the potential SMGs (the high-redshift counterparts) we follow the usual approach to derive sub-mm photometric redshifts. Following previous works \citep{Lap11,Pea13,GN12,GN14,Ivi16,GN17,Bon19}, the sub-mm photometric redshifts were estimated by means of a minimum $\chi^2$ fit of a template SED to the SPIRE data (using PACS data when possible). The SED of SMM J2135-0102 \citep[`The Cosmic Eyelash' at $z = 2.3$;][]{Ivi10,Swi10} is known to be the best overall template to describe the SMG population, at least for $z>0.8$. When compared with spectroscopic redshifts, this template provides the best performance, with a mean offset of $\Delta z/(1 + z) = -0.07$ and a dispersion of 0.153 \citep{Ivi16,GN12,Lap11}.
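As an illustration of this step, the following minimal Python sketch (assuming a tabulated version of the template SED; all function and variable names are illustrative, not the actual pipeline) shows how a single-template minimum-$\chi^2$ photometric redshift can be obtained from the SPIRE photometry:
\begin{verbatim}
import numpy as np

def submm_photoz(bands_um, flux_mjy, err_mjy, tpl_um, tpl_flux,
                 z_grid=np.arange(0.0, 6.01, 0.01)):
    """Minimum-chi^2 photo-z from a single template SED (illustrative sketch)."""
    chi2 = np.empty_like(z_grid)
    for i, z in enumerate(z_grid):
        # redshift the template and evaluate it at the observed bands
        model = np.interp(bands_um, tpl_um * (1.0 + z), tpl_flux)
        # best-fit normalisation at this redshift (from d(chi2)/dA = 0)
        amp = np.sum(model * flux_mjy / err_mjy**2) / np.sum((model / err_mjy)**2)
        chi2[i] = np.sum(((flux_mjy - amp * model) / err_mjy)**2)
    best = np.argmin(chi2)
    ok = chi2 <= chi2[best] + 1.0          # delta(chi2) = 1 interval
    return z_grid[best], z_grid[ok].min(), z_grid[ok].max()
\end{verbatim}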
In order to obtain more reliable sub-mm photometric redshifts, we restrict ourselves to those sources with at least $3\sigma$ photometric measurements at 350 and $500~\mu m$. Moreover, we further focus only on those sources with estimated sub-mm photometric redshifts $z>0.8$.
Finally, using the estimated photometric redshift and the SED of SMM J2135-0102, we calculate the bolometric luminosity of each of the SMGs.
\section{Methodology}
\label{sec:methodology}
For our purpose we need a method to compare two different probability distributions that derives a quantity which can be interpreted as a probability, so that the information obtained from different comparisons can be combined. Among the various statistical distances defined between distributions, we find that the Bhattacharyya distance fulfills our requirements.
In statistics, the Bhattacharyya distance \citep{Bat43} measures the similarity of two discrete or continuous probability distributions. It is closely related to the Bhattacharyya coefficient \citep[BC;][]{Bat43}, i.e. the overlap estimate of two probability distributions. Among other applications, the Bhattacharyya distance is widely used in research of feature extraction and selection \citep[e.g,][]{Ray89, Cho03}.
The Bhattacharyya distance for two continuous probability distributions $p$ and $q$ can be expressed as:
\begin{equation}
D_B (p,q) = -\ln\left(BC(p,q)\right) = -\ln\left(\int \sqrt{p(x)q(x)}\,dx\right),
\end{equation}
\noindent
where BC(p,q) denotes the Bhattacharyya kernel or Bhattacharyya Coefficient,
with $0\leq D_B \leq \infty$ and $0\leq BC\leq1$. When $p$ and $q$ are two normal distributions, the Bhattacharyya distance can be computed as:
\begin{equation}
\label{eq:2normal}
D_B(p,q) = \frac{1}{4} \ln\left( \frac{1}{4} \left( \frac{\sigma^2_p}{\sigma^2_q} + \frac{\sigma^2_q}{\sigma^2_p} + 2\right)\right) + \frac{1}{4} \left( \frac{(\mu_p-\mu_q)^2}{\sigma_p^2+\sigma_q^2}\right),
\end{equation}
\noindent
where $\sigma^2_p$ ($\sigma^2_q$) is the variance of the $p$ ($q$) distribution and $\mu_p$ ($\mu_q$) is the mean of the $p$ ($q$) distribution.
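For reference, a minimal Python sketch of eq. \ref{eq:2normal} (function and variable names are illustrative) is:
\begin{verbatim}
import numpy as np

def bhattacharyya_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """Bhattacharyya distance D_B and coefficient BC = exp(-D_B)
    for two one-dimensional normal distributions."""
    var_p, var_q = sigma_p**2, sigma_q**2
    d_b = (0.25 * np.log(0.25 * (var_p / var_q + var_q / var_p + 2.0))
           + 0.25 * (mu_p - mu_q)**2 / (var_p + var_q))
    return d_b, np.exp(-d_b)

# e.g. two Gaussians of widths 0.1 and 2.4 separated by 1.0 (arbitrary units)
d_b, bc = bhattacharyya_gaussian(0.0, 0.1, 1.0, 2.4)
\end{verbatim}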
The usage of the Bhattacharyya distance, or of a statistical distance in general, is a novel approach to the identification of specific source characteristics or events based on cross-matched pairs of galaxies. Moreover, it has some advantages with respect to more traditional approaches:
\begin{itemize}
\item The calculation of a distance between two probability distributions relies just on the parameters describing such distributions, which are determined by observations (such as the beam size, positional uncertainty, or redshift uncertainties), and therefore does not require any assumption based on previous knowledge, \textit{priors}, or limits.
\item As an extension of the previous point, the usage of the distance avoids complicated calculations of several probabilities of the samples and model parameters, most of which depend on \textit{a priori} assumptions, required in Likelihood Ratio (LR) approaches \citep[e.g., ][]{Bou12}. LR was successfully implemented to cross-match optical catalogues, where the assumptions and probability or parameter estimations were reasonably acceptable. However, the implementation of the LR technique to cross-match catalogues observed in different wavelength bands becomes very complex.
\item There are other statistical methods for similarity measurements between probability distributions. Most of them consist of statistical hypothesis tests, such as the two-sample t-test. However, they work with $p$-values, a measure of the statistical evidence for the validation of a certain hypothesis, which is usually misunderstood and wrongly used as a measurement of probability. On the contrary, the BC gives a similarity measurement that can be safely interpreted as a probability.
\item Moreover, if we have two or more similarity measurements it is not clear how to combine the $p$-values obtained from each measurement. In the distance approach, the combination of different similarity measurements is straightforward, being simply the product of the estimated $BC$ values (similar to the general product rule for probabilities).
\item Finally, it should be noted that the Bayesian alternative to classical hypothesis testing also has some limitations. Bayes factors can be individually applied to each observational property, but this raises the issue of how to combine the ``strength of the evidence'' for each individual observable.
In general, Bayes factors are used as a Bayesian model comparison methodology (a generalization of the LR technique). With this approach, an ideal model has to be defined for comparison, and it requires knowledge of prior distributions \citep{Bud08,Bud11}. For our purpose, such characteristics make the Bayesian alternative a limited or biased approach.
Some improvements were introduced to overcome these limitations, such as the Intrinsic Bayes Factor presented in \citet{Ber96}, but it requires the estimation of intrinsic priors and over-complicates the calculation.
\end{itemize}
Therefore, we propose the combination of various distance measurements between two probability distributions (associated with different observable quantities related to the pair of galaxies) as a new simple, objective (without any prior and based on observational probability distributions), modular (additional information can be added at any time to revise the overall final probabilities) and flexible (it can be adapted for different purposes: identifying strong lensing events, discriminating sub-populations, star-galaxy classification, etc.) methodology to identify particular kinds of sources or events by cross-matching different catalogues. A natural extension of this methodology could be to implement a neural network trained to perform the same task, as already done in other contexts \citep[e.g., ][]{Ode92,Sto92}.
\section{SHALOS}
\label{sec:shalos}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figs/Prob_hists_G12}
\caption{\label{fig:P_hist} Comparison of the variation of the number of sources with the probabilities associated with the different observables considered in this work. The total probability is shown as a thick black line, while the estimated probability of random pairs is shown as a grey line.}
\end{figure}
Our main objective in this work is to identify a list of the most probable cases of a strong lensing event between optical galaxies acting as lenses and SMGs acting as sources. We name SHALOS\footnote{In modern urban English slang, `shalo' means ``share your location'', which is also fitting for our purpose.} (``Statistical \textit{Herschel}-ATLAS Lensed Object Selection'') the specific implementation of the methodology of Sec. \ref{sec:methodology} for this particular scientific task.
The intention of SHALOS is to produce a probability-ranked list of potential SLGs. This ranked list aims to be as objective as possible and easily reproducible.
In general, gravitational lensing events between two different samples share the following characteristics: two different close objects (small angular separation) at different distances (different redshifts), with the background flux density amplified with respect to the rest of the source population (higher luminosity). Therefore, we focus on the following observables: angular separation, optical vs. sub-mm flux density ratio, redshift, and background luminosity.
\begin{itemize}
\item \textit{Angular separation ($BC_{pos}$)}.- The closer in the sky the lens-source pair, the higher the gravitational lensing probability. For this observable we compare the positional uncertainty distributions of each pair of galaxies, described as Gaussian distributions centred on the galaxy positions with a dispersion equal to the positional uncertainty (eq. \ref{eq:2normal}). In this case, a higher overlap, i.e. a higher BC value, implies a potentially higher lensing probability. The global astrometric RMS precision of SDSS is $\sim0.1$ arcsec\footnote{https://www.sdss.org/dr12/scope/} while it is $\sim 2.4$ arcsec for the H-ATLAS catalogues \citep{Bou16,Fur18}. Due to the huge difference between the dispersions of the two distributions, the maximum overlap is $BC_{pos}\sim0.3$, for zero angular separation. Therefore, for aesthetic purposes (i.e., in order to have the best candidates near $BC_{pos}\sim1.0$), we normalize $BC_{pos}$ to the maximum overlap value.
\item \textit{Redshift ($1-BC_z$)}.- In this case, we are interested in objects at different redshifts and, therefore, with a minimum overlap of the redshift probability distributions, i.e. a large $1-BC_z$. Similarly to the previous case, we compare the redshift uncertainties of a pair of galaxies described as Gaussian distributions centred on the best lens/source redshift values with a dispersion equal to the redshift uncertainties.
As the source has to be at higher redshift than the lens, any residual of the source redshift probability distribution at lower redshift than the lens is also considered part of the overlap. In particular, we consider as overlap any residual probability distribution area of the source galaxy at redshift lower than $\mu_{z,lens} - 3\sigma_{z,lens}$, where $\mu_{z,lens}$ and $\sigma_{z,lens}$ are the mean redshift and its associated Gaussian dispersion. This modification becomes important when dealing with spectroscopic redshifts with very small uncertainties. For the lens candidates, the uncertainty is 0.01 when a spectroscopic redshift measurement is available, or the $1\sigma$ error of the photometric estimation otherwise. For the sources, the uncertainty is the maximum of the $1\sigma$ error of the photometric estimation method and the statistical one, 0.153 \citep{Ivi16}.
\item \textit{Optical vs. sub-mm flux density ratio ($1-BC_r$)}.- To help distinguish whether the source and lens are actually the same galaxy, we also consider the ratio between the optical $r$-band and the sub-mm $350~\mu m$ flux densities. This additional information can be useful when the redshifts are similar (typically $z\sim 0.8-1.0$). For each matched galaxy pair, we estimate the flux density ratio and its uncertainty and compare it with the one expected from \citet{Smi12}, a stacked SED for typical galaxies at $z<0.5$. If the measured ratio is similar to that of the stacked SED, the matched galaxies are probably the same galaxy, with redshift $z<0.8$. Therefore, as in the redshift case, we are interested in those cases with minimum overlap of the probability distributions, i.e. a large $1-BC_r$.
\item \textit{Luminosity percentile ($L_{perc}$)}.- A source galaxy amplified by a strong lensing effect will tend to have a higher luminosity with respect to other galaxies at similar redshift \citep{GN12}. The bolometric luminosity of each source galaxy candidate is compared with those at similar redshift ($\mu_z-\sigma_z < z < \mu_z+\sigma_z$): the higher the associated percentile, the more probable the hypothesis of a strong lensing event. Taking into account the results from \citet{GN14, GN17} and \citet{Bon19}, most of the event candidates will be produced by weak lensing with typical amplifications below 50\%. In these cases, we expect luminosity percentiles fluctuating around $\sim0.5$.
\end{itemize}
Finally, we combine the information from the four observables to obtain the total strong lensing probability associated with each SLG candidate:
\begin{equation}
P_{tot} = BC_{pos} \cdot (1 - BC_z) \cdot (1 - BC_r) \cdot L_{perc}
\end{equation}
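To make the combination step concrete, the following Python sketch computes $P_{tot}$ for a single lens-source pair under the simplifying assumptions described above (Gaussian positional and redshift distributions, a flux ratio compared with a reference SED value, and the luminosity percentile taken as given); all names and inputs are illustrative, and the residual-overlap modification of the redshift term is omitted:
\begin{verbatim}
import numpy as np

def gaussian_bc(mu_p, sigma_p, mu_q, sigma_q):
    """Bhattacharyya coefficient of two 1-D normal distributions."""
    var_p, var_q = sigma_p**2, sigma_q**2
    d_b = (0.25 * np.log(0.25 * (var_p / var_q + var_q / var_p + 2.0))
           + 0.25 * (mu_p - mu_q)**2 / (var_p + var_q))
    return np.exp(-d_b)

def p_tot(sep, sig_pos_lens, sig_pos_src,        # arcsec
          z_lens, sig_z_lens, z_src, sig_z_src,  # redshifts
          log_ratio, sig_log_ratio,              # measured log(S_r / S_350)
          log_ratio_sed, sig_log_ratio_sed,      # expected from the z<0.5 stacked SED
          lum_percentile):                       # percentile of L_bol at similar z
    # positional overlap, normalised to its value at zero separation
    bc_pos = (gaussian_bc(0.0, sig_pos_lens, sep, sig_pos_src)
              / gaussian_bc(0.0, sig_pos_lens, 0.0, sig_pos_src))
    bc_z = gaussian_bc(z_lens, sig_z_lens, z_src, sig_z_src)
    bc_r = gaussian_bc(log_ratio, sig_log_ratio, log_ratio_sed, sig_log_ratio_sed)
    return bc_pos * (1.0 - bc_z) * (1.0 - bc_r) * lum_percentile
\end{verbatim}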
\subsection{SHALOS produced catalogues and usage}
The SHALOS methodology can be applied to any pair of catalogues, starting the cross-matching process from scratch. However, we decided to apply it using the cross-match information already present in the official H-ATLAS catalogues (see sec. \ref{sec:cats}).
There are some pros and cons to this decision. On the one hand, the H-ATLAS cross-match was limited to pairs of objects within an angular distance of $<10$ arcsec. In addition, when multiple counterparts were possible, the LR technique was used to choose the most probable one. We consider that initializing the SHALOS methodology with a pair list limited to angular separations $<10$ arcsec does not introduce any bias in identifying SLGs: taking into account the typical positional uncertainties, separation distances larger than this limit are severely penalized within the proposed methodology. This is no longer true when trying to study the weak lensing regime, with potential gravitational effects at even larger angular separations, depending on the lens mass.
On the other hand, the H-ATLAS catalogues provide not only spectroscopic redshifts (when available), but also photometric ones for most of the optical counterparts, which we would otherwise have had to compute.
Overall, using the H-ATLAS catalogues provided us with the opportunity to compare our results with the LR ones. This comparison is very interesting, allowing a discussion of the differences between the two methodologies and their optimal ranges of applicability.
Therefore, for each entry in one of the H-ATLAS catalogues with a cross-matched optical galaxy, we estimate the associated $P_{tot}$ as described before (Sec. \ref{sec:shalos}). Then all the entries with $P_{tot}<0.1$ are removed and the remaining ones are sorted by their associated $P_{tot}$ value in decreasing order. The SHALOS catalogues can be found as online material accompanying this publication. From the official H-ATLAS catalogues, we have kept the most critical information: name, the \textit{Herschel} flux densities and r magnitude, angular separation, LR Reliability, and the optical spectroscopic and photometric redshifts. Then we added the SHALOS intermediate information: sub-mm redshifts, bolometric luminosity, the four probabilities associated with the observables, and the final total one.
We foresee the usage of the produced SHALOS catalogues mainly as the ranked input list of sub-mm strong lensing targets for follow-up campaigns with high-resolution facilities such as HST, Keck or ALMA. The SHALOS ranked list can be used to easily select the `best' event candidates that comply with the observational campaign criteria, such as sky region, flux density limits, redshift range, etc.
However, the SHALOS method is an approach based mainly on observable measured quantities with minimal assumptions and minimal \textit{a priori} limits.
This means that we can statistically consider most of the top-ranked selected events as real and safely perform their analysis, also comparing with previous results in this field. This comparison can be used as a validation by induction of the SHALOS method, and new results can be obtained with respect to previous analyses (which are based only on confirmed events).
\subsection{SHALOS results}
\begin{table*}
\caption{PCA loadings for each of the considered observables ($BC_{pos}$, $(1-BC_r)$, $(1-BC_z)$, $L_{perc}$), and their corresponding influence on each of the components.}
\label{table:PCA1}
\centering
\begin{tabular}{c | c c c c}
\hline\hline
& Comp.1 & Comp.2 & Comp.3 & Comp.4 \\
\hline
$(1-BC_r)$ & -0.07567495 & -0.004521857 & -0.050656304 & -0.99583472 \\
$BC_{pos}$ & 0.81154329 & -0.560949927 & -0.155269678 & -0.05122494 \\
$(1-BC_z)$ & -0.12034156 & 0.093595030 & -0.986553689 & 0.05890413 \\
$L_{perc}$ & -0.56673512 & -0.822529454 & -0.006089687 & 0.04711173 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{PCA components relevancy considering standard deviation, proportion of explained variance and the cumulative proportion of variance.}
\label{table:PCA2}
\centering
\begin{tabular}{c | c c c c}
\hline\hline
& Comp.1 & Comp.2 & Comp.3 & Comp.4 \\
\hline
Standard deviation & 0.2646 & 0.2086 & 0.1212 & 0.09205 \\
Proportion of Variance & 0.5122 & 0.3183 & 0.1075 & 0.06198 \\
Cumulative Proportion & 0.5122 & 0.8305 & 0.9380 & 1.00000 \\
\hline
\end{tabular}
\end{table*}
Most of the galaxies detected in the H-ATLAS catalogue do not even have an optical counterpart within $10$ arcsec. As a consequence, they are not considered by the SHALOS method. From this point on, we will focus only on those event candidates with at least $P_{tot}> 0.1$. We consider that below this value of the associated probability, the probability of the event being an SLG is completely negligible.
Figure \ref{fig:P_hist} summarizes the behaviour of the four probabilities, related to the observable quantities previously described, considered in the SHALOS method. The variation of the number of selected galaxies with the associated probability is an indication of their relative importance. The redshift (dot-dashed red line) and flux ratio (dashed magenta line) observables are introduced to ensure that the pair of galaxies are different objects at different distances. They are not very restrictive because the criteria used to select the initial sample were already able to discard potentially dubious pairs and low-redshift candidates. Their effect is more important for those cases with background redshift near the imposed lower limit, $z>0.8$.
On the contrary, the luminosity percentile (dotted green line) and the angular separation (blue line) information are the most restrictive. As anticipated by the HALOS method \citep{GN12}, the former helps to select those candidates with a higher probability of a stronger gravitational lensing effect. The positional one simply prefers the closest pairs, which normally translates into higher lensing amplifications.
The total associated probability, $P_{tot}$, is shown as a thick black line, indicating that the number of SLGs decreases with $P_{tot}$, as expected. The estimated number of potential random pairs that fulfill all the methodology criteria is shown as a solid grey line: for $P_{tot}>0.1$ it can be considered negligible. It was estimated by maintaining the same lens galaxy sample and simulating the background sources. The simulated background sample mimics the real background sample statistics \citep[redshift distribution; source number counts at $250~\mu m$, \citet{Lap11}; `The Cosmic Eyelash' SED,][]{Ivi10} but with random positions. Then, we applied the same sample selection criteria and cross-matched it with the lens sample using the same 10 arcsec maximum angular distance. Finally, for each of the random event candidates we applied the SHALOS methodology to obtain the associated total probability, shown in Fig. \ref{fig:P_hist}. This process was repeated 10 times to derive a mean value for each $P_{tot}$ and its dispersion.
Similar conclusions can be obtained from a Principal Component Analysis (PCA). It is performed in order to assess the relative relevance of the four considered probabilities by determining their separate influence on the principal components. For the PCA, only the $P_{tot}>0.1$ cases were considered. In the PCA, each principal component is a linear combination of the (standardized) input variables: the loadings are the coefficients of this linear combination and, for each component, the sum of their squared values gives the eigenvalue (the component's variance).
The PCA results show that, for the two most relevant components (components 1 and 2), $BC_{pos}$ and $L_{perc}$ are the most influential observables. In particular, $BC_{pos}$ is the most important for component 1 and $L_{perc}$ for component 2. The other two principal components correspond almost entirely to $(1-BC_z)$ (component 3) and to $(1-BC_r)$ (component 4), whose weights are the highest in absolute value (see Table \ref{table:PCA1}).
The importance of each observable can be inferred from the proportion of variance explained by each principal component, considering the information obtained from the loadings. According to the proportion of variance shown in Table \ref{table:PCA2}, component 1 is the one that explains most of the variance (51.22\%), followed by component 2 (31.83\%). Thus, the most relevant observables are $BC_{pos}$ and $L_{perc}$. The proportion of variance for components 3 and 4 is lower, and consequently the observables $(1-BC_z)$ and, especially, $(1-BC_r)$ are less important.
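For completeness, a minimal Python sketch of such a PCA on the four probability columns (here read from a hypothetical text file, one column per observable and one row per $P_{tot}>0.1$ pair) could look as follows; whether the columns are only centred or also rescaled is an analysis choice:
\begin{verbatim}
import numpy as np

# columns: (1-BC_r), BC_pos, (1-BC_z), L_perc; one row per selected pair
X = np.loadtxt('shalos_observables.txt')      # hypothetical input file
Xc = X - X.mean(axis=0)                       # centre the variables
cov = np.cov(Xc, rowvar=False)                # 4x4 covariance matrix
eigval, eigvec = np.linalg.eigh(cov)          # eigh: symmetric matrix
order = np.argsort(eigval)[::-1]              # decreasing variance
loadings = eigvec[:, order]                   # one column per component
explained = eigval[order] / eigval.sum()      # proportion of variance
\end{verbatim}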
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figs/Ptot_vs_R_NGP}
\caption{\label{fig:P_vs_R} Comparison between P$_{tot}$ and the likelihood Reliability.}
\end{figure}
On the other hand, in Figure \ref{fig:P_vs_R} the estimated total probability for sources with P$_{tot}>0.1$ is compared with the Reliability (the quantity used to assess the goodness of the cross-matched SDSS local galaxies) estimated in the official H-ATLAS catalogues based on the LR cross-match approach. It is clear that both quantities differ and show an almost bimodal distribution. Cases with low Reliability values, $R<0.3$, also have relatively low associated values, $P_{tot}<0.5$. This is mainly due to the effect of the angular separation in both methodologies. However, more than half of the sources shown in Figure \ref{fig:P_vs_R} have high Reliability ($R>0.8$) with P$_{tot}> 0.1$. The reason is that those ``matches'' have a smaller angular separation. In the LR methodology, a small angular separation results in a higher Reliability. However, at the same time, this is also one of the required characteristics of a gravitational lensing event. Without additional information, such as redshift or luminosity, the LR method lacks the information needed to assign a low Reliability in the case of SLGs, as already pointed out in previous works \citep{Neg10,GN12,GN14,Bou14}.
The redshift distributions of the sources (\textit{red}) and lenses (\textit{blue}) identified with P$_{tot}>0.5$ in the G09 zone are shown in Figure \ref{fig:dNdz}. The other areas have almost identical redshift distributions. The redshift distribution of the sources covers a wide range: from $\sim 0.9$ to $\sim 3.6$ with a mean value of $z\sim 2.3$. Sources below $z \simeq 1.5$ are penalized mainly due to their photometric redshift uncertainties. On the other hand, lenses show a redshift distribution with a mean value of $z\sim0.5$, as expected from theoretical estimations (see \citet{Lap12} for more details) for sources around $z\sim 2.5$. It is interesting that the SHALOS method also identifies several events with lenses at $z < 0.2$, since there is no initial constraint in this respect (contrary to previous works).
\begin{table*}
\caption{Summary of the SHALOS results stats.}
\label{table:stats}
\centering
\begin{tabular}{c c c c c c}
\hline\hline
Zone & Initial & Sample & $P_{tot}>0.1$ & $P_{tot}>0.5$ & $P_{tot}>0.7$ \\
& (\#) & (\#)[\%] & (\#)[\%] & (\#)[\%] & (\#)[\%] \\
\hline
G09 & 39660 & 2808 [7.08\%] & 1374 [3.46\%] & 240 [0.61\%] & 73 [0.18\%] \\
G12 & 38961 & 2924 [7.50\%] & 1377 [3.53\%] & 213 [0.55\%] & 68 [0.17\%] \\
G15 & 41609 & 3059 [7.35\%] & 1506 [3.62\%] & 243 [0.58\%] & 70 [0.17\%] \\
NGP & 118980 & 8437 [7.09\%] & 4129 [3.47\%] & 755 [0.63\%] & 236 [0.20\%] \\
ALL & 239210 & 17228 [7.20\%] & 8386 [3.51\%] & 1451 [0.61\%] & 447 [0.19\%] \\
\hline
\end{tabular}
\end{table*}
Table \ref{table:stats} summarizes the number of galaxies initially present in the H-ATLAS catalogues for each of the four zones considered. Also shown are the number of selected high-redshift sources with reliable flux density measurements and the number of identified SLGs at different $P_{tot}$ values. Several interesting conclusions can already be extracted from these results:
\begin{enumerate}
\item Only $\sim 7\%$ of the global H-ATLAS sources are considered reliable high-redshift sources ($z>0.8$) with our current selection criteria.
\item The results are homogeneous among the different zones, with minimal percentage variations.
\item More than half of the selected high-redshift sources have a close low-z optical counterpart and, therefore, a non-negligible associated probability, $P_{tot}>0.1$, of being an SLG. This result is in agreement with the strong magnification bias signal measured by \citet{GN14, GN17}, which implies that many of the H-ATLAS high-z sources are slightly enhanced by weak gravitational lensing.
\item The probability of a stronger gravitational effect is boosted by increasing the $P_{tot}$ limit, due to the effect of the luminosity percentile observable. The number of candidates with $P_{tot}>0.5$ is greater than 1000, confirming the HALOS prediction \citep{GN12} that with more complex selection procedures it is possible to reach such numbers.
\item Finally, the most probable candidates, $P_{tot}>0.7$, amount to 447 (or 0.19\%), which is $\sim5$ times the number of H-ATLAS candidates found with flux density above 100 mJy at $500\mu m$ \citep{Neg17}.
\end{enumerate}
As a check of our results, we compare SHALOS SLG candidates with those found in \citet{Neg17}.
In the common NGP and GAMA fields, \citet{Neg17} found 50 SLG candidates, but only 32 are identified in SHALOS. The other 18 objects are excluded by us either because they have no estimated optical redshift (needed by SHALOS) or because they are flagged as stars (and we are not interested in such objects).
On the one hand, following the \citet{Neg17} selection criteria, we select those SHALOS SLG candidates with a flux density at 500 $\mu m$ greater than 90 mJy and redshift greater than 0.1, to avoid very local objects. We use a lower flux density limit than \citet{Neg17}, $S_{500~\mu m}\geq100$ mJy, to take into account small flux density variations between the different versions of the H-ATLAS catalogues. By applying such redshift and flux density selections in SHALOS, we obtain a list of 50 objects with $P_{tot}>0.1$, again including the common 32 SLG candidates. Of the 18 new objects that are in SHALOS and not in \citet{Neg17}, 6 have $P_{tot}<0.5$, i.e. a very low probability of being actual SLGs, 8 have $S_{500~\mu m}<100$ mJy and were therefore excluded from the \citet{Neg17} list, 3 are identified as blazars (see Table 1 in \citet{Neg17}), and one is a local extended source (NGC5705), so that its SPIRE photometry is not reliable for the possible background source, if there is any.
Therefore, not only is the SHALOS method as effective as the \citet{Neg17} approach for $S_{500~\mu m}>100$ mJy, but it is also able to extend the identification methodology to lower flux density limits.
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figs/dNdz_hists}
\caption{\label{fig:dNdz} Comparison between the redshift distributions of the lenses (\textit{blue}) and sources (\textit{red}) selected by SHALOS with P$_{tot}>0.5$.}
\end{figure}
\section{Validation by induction}
\label{sec:induction}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figs/shalos_amplifications_twoPtot}
\caption{\label{fig:mus} Tentative amplification factors derived assuming galactic and cluster halo masses (see text for more details).}
\end{figure}
Only a follow-up campaign using instruments with high resolution and sensitivity could establish the overall performance of the proposed methodology, by studying each individual SLG candidate one by one. However, even if observing time on such facilities were obtained, it would take months, if not years, to build a database large enough to derive meaningful statistics.
For this reason, we propose an alternative and complementary approach to validate the SHALOS methodology: validation by induction. In this Section we assume that all the lensing event candidates are confirmed SLGs and study some of their statistical properties. Then, we can compare such properties with previous results or theoretical expectations to check if they are in agreement. If this is the case, we can conclude that the list provided by SHALOS is mainly composed of SLGs. Therefore, we can use the SHALOS list to obtain additional valuable statistical information about this kind of event, thanks to its less restrictive limits.
\subsection{Amplification factors}
The first statistical property calculated is a tentative amplification factor, $\mu$, produced by the gravitational lensing effect: there is enough information in the SHALOS list to derive an approximate $\mu$ for each of the event candidates. Following mainly the same procedure as in \citet{GN14}, we estimate for each lens the stellar mass, $M_\star$, from the r-band luminosity, $L_r$. We considered two different scenarios: i) the gravitational lensing effect is produced mainly by the galactic halo surrounding the lens galaxy; ii) the lens galaxies are typically the central galaxies of a group or cluster of galaxies, as indicated by the conclusions obtained by \citet{GN14} and \citet{GN17}. In this case we thus estimate the halo mass of a group or cluster of galaxies.
In the first case, we consider a `Singular Isothermal Sphere' (SIS) mass density profile and we can derive the galactic halo mass, $M_h$, directly from the r-band luminosity \citep{Sha06,Ber03}:
\begin{equation}
M_h=3\times 10^{11}\left(\left(\frac{L_r}{1.3\times 10^{10}} \right)^{0.35}+ \left(\frac{L_r}{1.3\times 10^{10}} \right)^{1.65}\right)\times 10^{-0.19z}
\end{equation}
For the second scenario, we considered a `Navarro-Frenk-White' (NFW) mass density profile \citep{NFW96}. The stellar mass is calculated using a modified version of the luminosity-stellar mass relationship \citep{Ber03,Ber10}:
\begin{equation}
M_\star/L_r=3\times(L_r/10^{10.31})^{0.15} \times 10^{-0.19z},
\end{equation}
with $M_\star$ and $L_r$ in $M_\odot$ and $L_\odot$, respectively. Then the cluster halo mass is estimated by applying the stellar to halo mass relationship derived by \citet{Mos10}.
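As an illustration of this step, the two relations above can be evaluated as in the following Python sketch; the inversion of the stellar-to-halo mass relation is only indicative, and its default parameters are placeholders that should be taken from \citet{Mos10}:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def galactic_halo_mass(L_r, z):
    """SIS galactic halo mass (M_sun) from the r-band luminosity (L_sun)."""
    x = L_r / 1.3e10
    return 3e11 * (x**0.35 + x**1.65) * 10**(-0.19 * z)

def stellar_mass(L_r, z):
    """Stellar mass (M_sun) from the modified luminosity-mass relation."""
    return 3.0 * (L_r / 10**10.31)**0.15 * 10**(-0.19 * z) * L_r

def cluster_halo_mass(M_star, logM1=11.9, N=0.028, beta=1.06, gamma=0.56):
    """Numerically invert a Moster et al. (2010)-type stellar-to-halo mass
    relation; the default parameters are indicative values only."""
    def mstar_of(logMh):
        Mh = 10**logMh
        r = 2.0 * N / ((Mh / 10**logM1)**(-beta) + (Mh / 10**logM1)**gamma)
        return Mh * r
    f = lambda lm: np.log10(mstar_of(lm)) - np.log10(M_star)
    return 10**brentq(f, 10.0, 16.0)
\end{verbatim}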
Finally, the amplification factors for both scenarios are estimated following the traditional gravitational lensing framework \citep[see for example][]{Sch05}, taking into account the derived halo masses and the source and lens redshifts \citep[we have used the concentration formula derived by][]{Pra12}.
The results, for all the different areas together, are shown in Fig. \ref{fig:mus} for two different $P_{tot}$ cuts. Even with these tentative estimates of the amplification factors, the results are encouraging. The mean (median) value for the SHALOS list with $P_{tot}>0.5$ is 1.90 (1.26) for the galactic halo case and 2.51 (1.39) for the cluster case. For a more conservative cut, $P_{tot}>0.7$, the obtained amplification factors are on average larger: 2.28 (1.33) for the galactic halo and 3.12 (1.47) for the cluster case. With these estimated amplification factors, the fractions of SHALOS candidates with $P_{tot}>0.5$ and $\mu>2$ are 17.2\% and 25.8\% for the galactic and cluster scenarios, respectively. These percentages increase to 24.3\% and 31.5\%, respectively, for the $P_{tot}>0.7$ cut.
\subsection{Auto-correlation function}
\label{sec:acorr}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figs/shalos_ac}
\caption{\label{fig:ac} Auto-correlation of SHALOS SLGs with P$_{tot}>0.5 ~\& ~0.7$ compared with the theoretical estimation using the \citet{GN17} observed cross-correlation parameters. The \citet{GN17} measured auto-correlation of the H-ATLAS high-z sources (\textit{black circles}) is also shown as a comparison.}
\end{figure}
The number of SHALOS candidates is large enough to measure their two-point correlation function. If the SHALOS candidates were simply random associations, their correlation function would be negligible or noise dominated. At most, it could resemble the correlation of the SMG or background sample. On the contrary, if they are real SLGs, their correlation function will be in agreement with the one expected for a foreground sample with the derived lens masses.
In order to check these possibilities, we estimate the two-point correlation function of the SHALOS candidates using the \citet{Lan93} estimator:
\begin{equation}
w(\theta)=\frac{DD(\theta)- 2DR(\theta)+ RR(\theta)}{RR(\theta)},
\end{equation}
where DD, DR and RR are the normalized numbers of unique data-data, data-random and random-random pairs of galaxies, respectively.
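A minimal Python sketch of this estimator, assuming the angular separations of the data-data, data-random and random-random pairs have already been computed (names are illustrative), is:
\begin{verbatim}
import numpy as np

def landy_szalay(theta_dd, theta_dr, theta_rr, n_d, n_r, bins):
    """w(theta) from pre-computed pair separations (same units as `bins`)."""
    dd, _ = np.histogram(theta_dd, bins=bins)
    dr, _ = np.histogram(theta_dr, bins=bins)
    rr, _ = np.histogram(theta_rr, bins=bins)
    # normalise each count by the total number of pairs of that kind
    dd = dd / (n_d * (n_d - 1) / 2.0)
    dr = dr / (n_d * n_r)
    rr = rr / (n_r * (n_r - 1) / 2.0)
    return (dd - 2.0 * dr + rr) / rr
\end{verbatim}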
The measured correlation functions for $P_{tot}>$0.5 \& 0.7 are shown in Fig. \ref{fig:ac}. Although the uncertainties are significant, the SHALOS candidates show a correlation function that is neither zero nor noise dominated. This result immediately discards the random association hypothesis. Moreover, the correlation of the SHALOS candidates is stronger than that measured by \citet{GN17} for the \textit{Herschel} SMGs at $z>1.2$ (\textit{black diamonds}). These are the same galaxies that constitute our background sample. Therefore, it is confirmed that there is something special about the SHALOS-selected background galaxies.
Finally, we calculate, for comparison, the correlation function expected for a sample of lenses with the observed redshift distribution (blue histogram in Fig. \ref{fig:dNdz}) and the mass and halo occupation distribution properties derived by \citet{GN17}: a minimum halo mass of $\sim1.3\times 10^{13} M_\odot$, a pivotal mass of $\sim3.7\times 10^{14} M_\odot$ to have at least one satellite galaxy, and a slope of the number of satellites of $\sim2$. By using the same halo model formalism as \citet{GN17}, mainly based on \citet{Coo02}, we derive the dashed black line, which is in good agreement with our measured correlation functions.
Therefore, we can conclude that the angular correlation properties of the SHALOS selected candidates closely resemble the expected ones for the sample of foreground lenses. It is not a direct validation of the gravitational lensing nature of the SHALOS candidates but it is an additional statistical property that agrees with the expectations.
\subsection{Source number counts}
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{./figs/shalos_snc}
\caption{\label{fig:snc} Integrated source number counts at $500~\mu m$ of the SHALOS candidates with P$_{tot}>0.5~\&~0.7$ (\textit{red circles} and \textit{blue squares}, respectively). They are compared with the integrated number counts of candidate lensed galaxies derived by \citet{Neg17} from all the H-ATLAS fields (\textit{grey diamonds}). The error bars correspond to the 95\% confidence interval. Also shown are the model source number counts for the unlensed SMGs \citep[\textit{black line},][]{Lap11,Cai13}.}
\end{figure}
The integral source number counts at $500\mu m$ of the SHALOS candidates, combining the results for all four H-ATLAS fields, are shown in Fig. \ref{fig:snc}. We apply two different $P_{tot}$ cuts to check the dependence of the number counts on the associated probability. Above 100 mJy, we can compare the SHALOS source number counts with those derived with the simpler but robust identification methodology of \citet{Neg17} (\textit{grey diamonds}) using the same H-ATLAS catalogues. \citet{Nay16} obtained almost identical source number counts with the same methodology but for the HeLMS+HerS survey (not shown in the Figure). Taking into account that the flux densities of the latest H-ATLAS catalogues have been updated with respect to the ones used in \citet{Neg17}, there is good agreement between both sets of lensed candidate source number counts.
However, the SHALOS methodology allows us to extend the measurement of the source number counts down to 50 mJy. It is at these fainter flux densities that the effect of the different $P_{tot}$ cuts is more relevant: a lower probability cut tends to select more lensed candidates, but mainly at fainter flux limits, $S_{500~\mu m}<100$ mJy. Although we are reaching flux densities that start to be dominated by the unlensed SMGs, the SHALOS methodology seems to be effective at discriminating between the lensed and unlensed nature of the considered SMGs, at least for the $P_{tot}>0.7$ cut, as also indicated by the results of Sec. \ref{sec:acorr}.
We can conclude that the SHALOS candidate source number counts at $500~\mu m$ above 100 mJy are in good agreement with previous estimations (where many of the candidates were confirmed by follow-up observations) and, therefore, both methodologies are equivalent at such flux densities. The advantage of the SHALOS approach is that it is able to extend the identification of reliable SLG candidates down to lower flux densities, $\sim 50$ mJy.
\section{Conclusions}
\label{sec:conclusions}
We propose a new methodology for the identification of objects with particular properties by cross-matching different catalogues, based on the similarity of the probability distributions (quantified by the Bhattacharyya Coefficient) associated with different observables. This new approach is simpler, more objective and more flexible than traditional approaches to the problem, such as the LR or the Bayes factor.
As a practical application, in this work we have focused on the identification of SMGs observed by \textit{Herschel} whose flux density was strongly amplified by the gravitational lensing effect produced by SDSS galaxies at $z<0.8$, acting as the lenses. In particular, we derived the total estimated probability, $P_{tot}$, of being lensed based on four observables: the angular separation, the bolometric luminosity percentile compared with SMGs at similar redshift, the redshift difference, and the ratio between the optical and the sub-mm emissions. The results indicate, as also confirmed by a PCA, that the first two are the most discriminating for the identification task. The other two help to confirm that the cross-matched pairs are not the same galaxy, but two galaxies at different redshifts.
The SHALOS method identified 1451 SLG candidates with $P_{tot}>0.5$, corresponding to 0.61\% of the H-ATLAS sources. This number decreases to 447 (or 0.19\%) with a more conservative cut, $P_{tot}>0.7$, which is still $\sim5$ times the number of SLGs found by \citet{Neg17}. When comparing both SLG lists, the SHALOS method was able to identify 32 of the 50 SLGs with flux density at $500\mu m$ greater than $\sim90$ mJy (a lower limit chosen to take into account small flux density variations between the different versions of the H-ATLAS catalogues). The remaining 18 SLG candidates were excluded by SHALOS because of the lack of an optical redshift estimation or for being flagged as stars. On the contrary, the SHALOS method found 12 SLG candidates with $P_{tot}>0.5$ not in \citet{Neg17}: 8 have flux density at $500~\mu m$ smaller than $\sim100$ mJy, 3 are identified blazars \citep[see Table 1 in ][]{Neg17} and the last one is a local extended galaxy (NGC5705).
Finally, we have studied some characteristic statistical properties of the SHALOS SLG candidates, such as the estimated amplification factors, the two-point correlation function and the source number counts. For $P_{tot}>0.7$, the tentative amplification factors were found to have a mean (median) of 2.28 (1.33) for a galactic halo mass and 3.12 (1.47) for a cluster halo mass. The fractions of SHALOS candidates with $P_{tot}>0.7$ and $\mu>2$ are 24.3\% and 31.5\% for the galactic and cluster scenarios, respectively.
Moreover, the SHALOS candidates have a non-zero correlation function that is stronger than the one measured for the background SMG sample in \citet{GN17}. It is in agreement with the correlation function expected for the foreground lenses \citep[massive elliptical galaxies or even groups of galaxies, as anticipated by \citet{GN14} and confirmed by][]{GN17}. The SHALOS candidate source number counts at $500~\mu m$ above 100 mJy are in good agreement with previous results, confirming that both methodologies are equivalent. However, the SHALOS one allows us to reach much lower flux densities, $\sim50$ mJy. At such faint flux densities, the total source number counts start to be dominated by unlensed SMGs, but the derived source number counts seem to indicate the effectiveness of the SHALOS methodology even in distinguishing between lensed and unlensed SMGs.
\begin{acknowledgements}
JGN, LB, FA, LT and SLSG acknowledge financial support from the I+D 2015 project AYA2015-65887-P (MINECO/FEDER). JGN also acknowledges financial support from the Spanish MINECO for a ``Ramon y Cajal'' fellowship (RYC-2013-13256).
DH, FA and LT acknowledge financial support from the I+D 2015 project AYA2015-64508-P (MINECO/FEDER).
DH also acknowledges partial financial support from the RADIOFOREGROUNDS project, funded by the European Comission’s H2020 Research Infrastructures under the Grant Agreement 687312.
JDCJ acknowledges financial support from the I+D 2017 project AYA2017-89121-P and support from the European Union's Horizon 2020 research and innovation programme under the H2020-INFRAIA-2018-2020 grant agreement No 210489629.
\\
The \textit{Herschel}-ATLAS is a project with \textit{Herschel}, which is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. The H-ATLAS website is http://www.h-atlas.org/
\\
This research has made use of \texttt{TopCat} \citep{topcat}, and the python packages \texttt{ipython} \citep{ipython}, \texttt{matplotlib} \citep{matplotlib}, \texttt{Scipy} \citep{scipy}, and \texttt{Astropy}, a community-developed core Python package for Astronomy \citep{astropy}.
\end{acknowledgements}
\section{Introduction}
For heavy ion collisions at the AGS energies, transport model
studies \cite{liko95} have indicated that both density and
temperature of the participant region are high. Heavy ion
experiments at AGS thus offer the possibility to study not only the
hadron to quark-gluon plasma transition but also the properties of
hadrons, such as their masses and lifetimes, in dense medium
\cite{brown2,fang,kl96}. Such medium effects have recently
attracted much attention as they may be related to the precursor
effects due to chiral symmetry restoration \cite{br91,kkl97}. In
particular, knowledge on the properties of kaons in the nuclear
medium is important for understanding both chiral symmetry
restoration and neutron star properties. Since the suggestion by
Kaplan and Nelson on the possibility of kaon condensation in dense
matter \cite{kap86}, there have been many theoretical studies on
the in-medium properties of kaons, using various models, such as
the chiral Lagrangian \cite{lee95}, the boson exchange model
\cite{schaff,schaf97}, the Nambu-Jona-Lasino model \cite{lutz}, and
the coupled-channel approach \cite{koch94,waas96}. These studies
have all shown that $K^+$ feels a weak repulsive potential while
$K^-$ sees a strong attractive potential in the nuclear medium. As
a result, a condensation of antikaons in neutron stars becomes
plausible \cite{kap86,bro87}, which would then lead to the possible
existence of many mini black holes in the galaxies
\cite{bro94,brown}.
A number of experiments have recently been carried out at the AGS
\cite{HIPAGS96}, and preliminary data from these experiments seem
to indicate that medium effects associated with kaons are already
present in some of the observed phenomena \cite{ogilvie}. While the
analysis of experimental data is being finalized, a critical
theoretical examination of both the production mechanism for $K^+$
and $K^-$ and the medium effects on experimental observables will
be very useful. For $K^+$, we have already used the ART model
\cite{liko95} to study its production in heavy ion collisions at
AGS and the dependence of its momentum spectra on its in-medium
properties \cite{liko96}. In particular, we have found that the
$K^+$ transverse collective flow is sensitive to the kaon
dispersion relation in dense nuclear matter. In the present work,
we shall report a similar study for $K^-$.
In Section II, we will briefly describe the ART model used in our
previous study of particle production and signatures of chiral
symmetry restoration and/or QGP formation in heavy ion collisions
at AGS energies \cite{liko95,liko96,liko96b}. We will then discuss
the details of implementing various reaction channels for $K^-$
production. These include $K^-$ production from baryon-baryon,
meson-baryon, and meson-meson interactions. Also, $K^-$ absorption
and its final-state elastic scattering will be considered. In
Section III, results from this study are presented. In particular,
we shall discuss the relative contributions of different reaction
channels to the $K^-$ yield, its rapidity and transverse mass
distribution and collective flow. Medium effects due to both the
nuclear and Coulomb potentials on these observables will also be
investigated. The beam energy dependence of the medium effects on
both $K^+$ and $K^-$ will be studied. Finally, a summary is given
in Section IV.
\section{Antikaon production, absorption and rescattering in the ART model}
The ART model is a pure hadronic transport model developed for
modeling relativistic heavy ion collisions up to the AGS energies
\cite{liko95}. For completeness, we summarize here the main
features of this model and refer the reader to Ref. \cite{liko95}
for its details. The ART 1.0 includes the following baryons:
$N,~\Delta(1232),~N^{*}(1440),~N^{*}(1535), ~\Lambda,~\Sigma$; and
mesons: $\pi,~K,~\eta,~\rho,~\omega$; as well as their explicit
charge states. Both elastic and inelastic collisions among most of
these particles are modeled as best as we can by using as inputs
the experimental data from hadron-hadron collisions. Most inelastic
hadron-hadron collisions are modeled through the formation of
baryon and meson resonances. Although we have only explicitly
included three baryon resonances, effects of heavier baryon
resonances with masses up to 2 GeV are partially taken into account
through the formation of these resonances in the intermediate
states of meson-baryon reactions. We have also included in the
model optional, self-consistent mean field potentials for both
baryons and kaons.
The treatment of the antikaon in the ART model 1.0 is, however,
incomplete as only antikaon production from meson-meson
interactions has been included. Although this has negligible
effects on the reaction dynamics and experimental observables
associated with nucleons, pions and kaons, we have been unable to
study in detail the production mechanism for the antikaon and its
dependence on the medium effects. In this section, we shall first
discuss the improvement we have made in the ART model for treating
antikaon production, absorption, rescattering, and propagation.
\subsection{Antikaon production from baryon-baryon interactions}
There are few experimental data on $K^{-}$ production from
nucleon-nucleon interactions in the energy range we are
considering. In particular, its total production cross section from
$pp$ collisions is practically unknown. There are a number of
parameterizations of the antikaon production cross section from the
nucleon-nucleon interaction \cite{efr94,sib97,li97,zwer84}. For
example, the inclusive $K^{-}$ production cross section from the
proton-proton interaction, i.e., $pp \to K^{-}X$, has been
parameterized in Ref. \cite{efr94} using phase space consideration.
We choose this one in the present work as it fits better the
available data at high energies. Specifically, the $K^-$ production
cross section from $pp$ collisions is given by
\begin{equation}
\sigma_{pp \to K^-X}(s) = \left (1-{{s_{\rm 0}}\over{s}}\right )^3
\left [2.8F_1\left ({{s}\over{s_{\rm 0}}}\right )+
7.7F_2\left ({{s}\over{s_{\rm 0}}}\right )\right ]+
3.9F_3\left ({{s}\over{s_{\rm 0}}}\right ) \ [{\rm mb}],
\end{equation}
with
\begin{eqnarray}
F_1(x)&=&(1+1/\sqrt{x}){\rm ln}(x)-4(1-1/\sqrt{x}),\nonumber\\
F_2(x)&=&1-(1/\sqrt{x})(1+{\rm ln}(x)/2),\nonumber\\
F_3(x)&=&\left ({{x-1}\over {x^2}}\right )^{3.5}.\nonumber
\end{eqnarray}
In the above,
${s_{\rm 0}}^{1/2}$ = 2($m_p+m_K$)=2.8639\ GeV is the threshold energy.
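For reference, a direct transcription of this parameterization (a minimal Python sketch with illustrative names) is:
\begin{verbatim}
import numpy as np

M_P, M_K = 0.9383, 0.4937           # proton and kaon masses (GeV)
S0 = (2.0 * (M_P + M_K))**2         # threshold: sqrt(s0) = 2.8639 GeV

def sigma_pp_kminus(sqrt_s):
    """Inclusive pp -> K- X cross section (mb) from the parameterization above."""
    s = sqrt_s**2
    if s <= S0:
        return 0.0
    x = s / S0
    f1 = (1.0 + 1.0/np.sqrt(x)) * np.log(x) - 4.0 * (1.0 - 1.0/np.sqrt(x))
    f2 = 1.0 - (1.0/np.sqrt(x)) * (1.0 + 0.5 * np.log(x))
    f3 = ((x - 1.0) / x**2)**3.5
    return (1.0 - S0/s)**3 * (2.8*f1 + 7.7*f2) + 3.9*f3
\end{verbatim}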
At AGS energies the final state is expected to be dominated by two
nucleons and a kaon-antikaon pair. We thus assume that the cross
section for $pp\to ppK^+K^-$ is the same as that for $pp\to K^-X$.
Since there are no data for $K^-$ production from $np$, $nn$ and
other baryon-baryon interactions involving one or two resonances,
determining their cross sections thus requires models for these
interactions. In the present study, we make the minimum assumption
that they all have the same $K^-$ production cross section as in
$pp$ collisions at the same center of mass energy.
To determine the momentum distribution of the produced $K^-$, we
make use of the empirical observation that the momentum
distributions of final particles in high energy $pp$ collisions all
have the following form
\begin{equation}\label{dis}
\frac{d^2\sigma}{dp_T^2dp_L}\propto e^{-A{x^*}^2}e^{-Bp_T^2},
\end{equation}
where $x^*=p_{\rm L}/p_{\rm L_{max}}$ with $p_{\rm L_{max}}$ being
the maximum longitudinal momentum of the particle. In the ART
model, this is carried out by first obtaining the $K^-$ and $K^+$
transverse and longitudinal momenta from the above distribution,
assuming that the azimuthal angle of the transverse momentum is uniformly
distributed. Then, the longitudinal momenta of the two baryons are
obtained using also a similar distribution. From energy and
momentum conservation, the transverse momenta of both baryons can
be determined. We find that the limited experimental data
\cite{diddens} are reasonably fitted by using the following values:
$A=12.5$ and $B=4.15$ (GeV/c)$^2$ for $K^-$; $A=5.3$ and $B=3.68$
(GeV/c)$^2$ for $K^+$; and $A=2.76$ (GeV/c)$^2$ for $N$.
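A minimal Python sketch of this sampling step (with illustrative names; it uses the fact that $\exp(-Bp_T^2)\,dp_T^2$ corresponds to an exponential distribution in $p_T^2$) is:
\begin{verbatim}
import numpy as np

def sample_momentum(A, B, p_l_max, rng=np.random.default_rng()):
    """Draw (p_x, p_y, p_z) from d2(sigma)/dpT2 dpL ~ exp(-A x*^2) exp(-B pT^2)."""
    # longitudinal part: Gaussian in x* = pL/pL_max, truncated to |x*| <= 1
    while True:
        x_star = rng.normal(0.0, 1.0 / np.sqrt(2.0 * A))
        if abs(x_star) <= 1.0:
            break
    # transverse part: pT^2 exponentially distributed with mean 1/B
    p_t = np.sqrt(rng.exponential(1.0 / B))
    phi = rng.uniform(0.0, 2.0 * np.pi)     # uniform azimuthal angle
    return p_t * np.cos(phi), p_t * np.sin(phi), x_star * p_l_max
\end{verbatim}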
\subsection{Antikaon production from meson-baryon interactions}
The cross section for $K^-$ production from pion-nucleon
interactions has been studied in Ref. \cite{sib97} using a
boson-exchange model. Reactions with one or more pions in the final
state are neglected as their cross sections are small in the energy
range we consider. Following Ref. \cite{sib97}, we have
\begin{eqnarray}
2\sigma(\pi^{-}p\to p K^{0} K^{-})&=&
\sigma(\pi^{-}p\to n K^{+} K^{-})=
\sigma(\pi^{-}n\to n K^{0} K^{-})\nonumber\\
=4\sigma(\pi^{0}p\to p K^{+} K^{-})&=&
4\sigma(\pi^{0}n\to n K^{+} K^{-})=
\sigma(\pi^{0}n \to p K^{0} K^{-})\nonumber\\
=\sigma(\pi^{+}n \to p K^{+} K^{-})&=&
\sigma_0,
\end{eqnarray}
where $\sigma_0$ is given by \cite{sib97}
\begin{equation}
\sigma_0 = 1.21(1-s_0/s)^{1.86}(s_0/s)^2 \ [{\rm mb}].
\end{equation}
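A sketch of this cross section, together with the isospin relations of the preceding equation, follows; since $s_0$ is not restated here, the threshold is taken as $\sqrt{s_0}=m_N+2m_K$, the natural value for $\pi N \to N K \bar{K}$ (an assumption of this illustration):
\begin{verbatim}
import numpy as np

M_N, M_K = 0.9383, 0.4937
S0_PIN = (M_N + 2.0 * M_K)**2        # assumed threshold for pi N -> N K Kbar

def sigma0_piN(sqrt_s):
    """sigma_0 (mb) for K- production in pi N collisions."""
    s = sqrt_s**2
    if s <= S0_PIN:
        return 0.0
    return 1.21 * (1.0 - S0_PIN/s)**1.86 * (S0_PIN/s)**2

# isospin factors relative to sigma_0, e.g. pi- p -> p K0 K- is sigma_0 / 2,
# pi0 p -> p K+ K- is sigma_0 / 4, and pi- p -> n K+ K- is sigma_0.
\end{verbatim}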
For $\pi$-baryon resonance and $\rho (\omega)$-baryon collisions
there are no experimental data. We again make the minimal
assumption that their cross sections are the same as that of the
pion-nucleon interaction at the same center of mass energy.
The momentum distribution of the produced $K^-$ from meson-baryon
interactions is determined by the three-body phase space.
\subsection{Antikaon production from meson-meson interactions}
As in the original ART model \cite{liko95}, antikaon productions
from all meson-meson collisions are modeled through the process $MM
\to K\bar{K}$. The cross section used for $\pi\pi \to K\bar{K}$ is
calculated from the $K^*$ exchange model of Ref. \cite{brown2}. All
other meson-meson reactions, such as $\rho\rho \to K\bar{K}$ or
$\pi\rho\to K\bar{K}$, are taken to have a constant value of 0.3
mb. Contrary to antikaon production from baryon-baryon and
meson-baryon interactions, the momentum of produced antikaon from
meson-meson interaction is trivially fixed by kinematics as the
final state consists of only two particles.
\subsection{Antikaon absorption and its production from meson-hyperon
interactions}
Antikaons produced in hot dense matter can be absorbed by nucleons
via strange-exchange reactions. For final states consisting of a
$\Sigma$ particle, we have the following reactions:
\begin{eqnarray}
K^{-}p&\to& \Sigma^{0}\pi^{0}, \Sigma^{-}\pi^{+},
\Sigma^{+}\pi^{-},\nonumber\\
K^{-}n&\to& \Sigma^{0}\pi^{-}, \Sigma^{-}\pi^{0}.
\end{eqnarray}
Their cross sections are taken from Ref. \cite{cug90}, i.e.,
\begin{eqnarray}
\sigma(K^-p \to \Sigma^0\pi^0)& = &0.6p^{-1.8}\ [{\rm mb}];\ 0.2
\leq p \leq 1.5~{\rm GeV/c},\nonumber\\
\sigma(K^-n \to \Sigma^0\pi^-)& = & \left\{ \begin{array}{ll}
1.2p^{-1.3} \ [{\rm mb}], &\mbox{if $0.2 \leq p \leq 1$~GeV/c;}\\
1.2p^{-2.3}\ [{\rm mb}], &\mbox{if $1 \leq p \leq 6$~GeV/c.} \end{array}
\right.
\end{eqnarray}
where $p$ is the $K^-$ momentum in the laboratory frame.
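These piecewise parameterizations translate directly into code; the sketch
below returns the two cross sections in mb as functions of the laboratory
$K^-$ momentum and, as a choice of ours, returns zero outside the quoted
fit ranges.
\begin{verbatim}
def sigma_Kp_to_Sigma0_pi0(p):
    """K^- p -> Sigma^0 pi^0 cross section (mb); p in GeV/c."""
    return 0.6*p**-1.8 if 0.2 <= p <= 1.5 else 0.0

def sigma_Kn_to_Sigma0_pim(p):
    """K^- n -> Sigma^0 pi^- cross section (mb); p in GeV/c."""
    if 0.2 <= p <= 1.0:
        return 1.2*p**-1.3
    if 1.0 < p <= 6.0:
        return 1.2*p**-2.3
    return 0.0

print(sigma_Kp_to_Sigma0_pi0(0.5), sigma_Kn_to_Sigma0_pim(2.0))
\end{verbatim}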
For $K^-$ absorption by resonances, we take the cross sections for
$N^{*+}$ and $\Delta^{+}$ to be the same as for $p$, while those
for $N^{*0}$ and $\Delta^0$ the same as for $n$. We also include
the reactions $K^{-}\Delta^{++}$ $\to$ $\Sigma^{+}\pi^{0}$,
$\Sigma^{0}\pi^{+}$ by assuming that their cross sections are the
same as that for $K^{-}p$. On the other hand, the cross section for
$K^{-}\Delta^{-}$ $\to$ $\Sigma^{-}\pi^{-}$ is taken to be the same
as that for $K^{-}n$.
For final states with a $\Lambda$, the following reactions are possible,
\begin{eqnarray}
K^{-}p &\to& \Lambda + \pi^{0}, \nonumber\\
K^{-}n &\to& \Lambda + \pi^{-}.
\end{eqnarray}
The $K^-p$ cross section is parameterized as in Ref. \cite{cug90}, i.e.,
\begin{equation}
\sigma_{K^-p\to \Lambda\pi^0} = \left\{ \begin{array}{ll}
50p^2-67p+24\ [{\rm mb}], &\mbox{if $0.2 \leq p \leq 0.9$~GeV/c;} \\
3p^{-2.6}\ [{\rm mb}], &\mbox{if $0.9 \leq p \leq 10$~GeV/c.}
\end{array}
\right.
\end{equation}
The other cross sections involving $n$ and baryon resonances are
assumed to be the same as that for $K^-p\to\Lambda\pi^0$.
In the inverse reactions, i.e., $\pi+\Lambda(\Sigma)$, antikaons can
be regenerated, and their cross sections are deduced from the
antikaon absorption cross sections using the detailed balance
relations. A hyperon can also interact with other mesons, such as
the rho and omega, to produce a $K^-$. The corresponding cross sections
are assumed to be the same as those for a pion at the same center
of mass energy.
\subsection{Antikaon final-state interactions}
Because of final-state interactions, not all antikaons produced in
hadron-hadron interactions can escape freely from the reaction zone
in heavy ion collisions. The absorption reactions have already been
described in the previous section. Here, we are mainly concerned
with the $K^-$ elastic scatterings with baryons. These cross
sections are taken from Ref. \cite{cug90}, i.e.,
\begin{eqnarray}
\sigma_{K^-p\to K^-p}& = &13p^{-0.9}\ [{\rm mb}], \ 0.25
\leq p \leq 4.0\ {\rm GeV/c}. \nonumber\\
\sigma_{K^-n\to K^-n}& = & \left\{ \begin{array}{ll}
20.0p^{-2.74}\ [{\rm mb}], & \mbox{if $0.5 \leq p \leq 1.0$ GeV/c;} \\
20.0p^{-1.8}\ [{\rm mb}], & \mbox{if $1.0 < p \leq 4.0$ GeV/c.}
\end{array}
\right.
\end{eqnarray}
\subsection{Mean field potentials for kaons and antikaons}
In transport models, the imaginary part of the self energy of a
hadron is approximately treated by its scatterings with other
hadrons, while the real part of the self energy is given by the
mean field potential. Various approaches have been used to evaluate
the kaon mean field potential in the nuclear medium
\cite{lee95,schaff,schaf97,lutz,koch94,waas96}; here we use the one
determined from the kaon-nucleon scattering length $a_{KN}$ in the
impulse approximation \cite{brown}, i.e.,
\begin{equation}
\omega(p,\rho_b)=\left[m_K^2+p^2-4\pi\left(1+\frac{m_K}{m_N}\right)
a_{KN}\rho_b\right]^{1/2},
\end{equation}
where $m_K$ and $m_N$ are the kaon and nucleon masses,
respectively; $p$ is the kaon momentum; $\rho_b$ is the baryon
density; and $a_{KN}\approx -0.255$ fm is the isospin-averaged
kaon-nucleon scattering length. The kaon potential in the nuclear
medium is given by
\begin{equation}
U(p,\rho_b)=\omega(p,\rho_b)-(m_K^2+p^2)^{1/2}.
\end{equation}
At normal nuclear density, a kaon at rest has a repulsive potential
of about 30 MeV.
For the $K^-$ potential, we use the following expression
\begin{equation}
U(\rho_b)=-0.12\,(\rho_b/\rho_0)~[{\rm GeV}].
\end{equation}
The magnitude of this potential is similar to that either extracted
from the experimental data on kaonic atom or predicted by
theoretical models \cite{lee95,schaff,schaf97,lutz,koch94,waas96}.
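The following Python sketch evaluates both potentials; the numerical values
of the masses, of $\hbar c$ and of $\rho_0$ are standard inputs supplied by
us, and the printed numbers reproduce the roughly $+30$~MeV repulsion for a
kaon at rest and the $-120$~MeV attraction for an antikaon at normal nuclear
density quoted above.
\begin{verbatim}
import numpy as np

HBARC = 0.1973              # GeV fm
M_K, M_N = 0.494, 0.939     # kaon and nucleon masses (GeV)
A_KN = -0.255               # isospin-averaged KN scattering length (fm)
RHO_0 = 0.16                # normal nuclear matter density (fm^-3)

def kaon_potential(p, rho_b):
    """Repulsive kaon potential U(p, rho_b) in GeV; p in GeV, rho_b in fm^-3."""
    a = A_KN / HBARC                     # fm -> GeV^-1
    rho = rho_b * HBARC**3               # fm^-3 -> GeV^3
    omega = np.sqrt(M_K**2 + p**2 - 4.0*np.pi*(1.0 + M_K/M_N)*a*rho)
    return omega - np.sqrt(M_K**2 + p**2)

def antikaon_potential(rho_b):
    """Attractive antikaon potential in GeV, linear in the baryon density."""
    return -0.12 * rho_b / RHO_0

# a kaon at rest at normal nuclear density: about +0.03 GeV (repulsive);
# the corresponding antikaon potential is -0.12 GeV (attractive)
print(kaon_potential(0.0, RHO_0), antikaon_potential(RHO_0))
\end{verbatim}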
One of the main purposes of this work is to explore the effects of
kaon potentials on experimental observables. We shall thus compare
in the next section results from calculations with and without mean
field potentials for kaons and antikaons. Although the model at the
present stage probably cannot distinguish the different forms of
kaon and antikaon potentials from various theories, it does predict
that some observables are appreciably affected by the presence of
mean field potentials as shown below.
\section{Results and discussions}
We now turn to the results of our study. After including all $K^-$
production channels outlined above, our previous predictions for the
nucleon, pion and kaon observables, which are based on ART 1.0, do
not show significant changes, since antikaons are only a small
perturbation to the collision dynamics and to the other hadrons. We
thus present only results for antikaons in the following.
\subsection{Production mechanism for antikaons}
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig1.eps}
\vskip 0.3truein
\caption{Center of mass energy distribution of hadron pairs with
energies above the $K^-$ production threshold in the reaction of Au
+ Au at $p_{\rm beam}$/A = 11.6 GeV/c and impact parameter b = 4
fm, and with the $K^-$ mean field potential.}}
\end{figure}
To identify the sources for antikaon production, we show in Fig. 1
the center of mass energy distribution of hadron pairs with
energies above the $K^-$ production threshold in the reaction of Au
+ Au at $p_{\rm beam}$/A = 11.6 GeV/c and impact parameter of 4 fm.
It is seen that contributions from meson-meson and meson-baryon
collisions are more important than that from baryon-baryon
collisions. More quantitatively, about 40\%, 40\% and 20\% of
produced $K^-$ are from meson-meson, meson-baryon, and
baryon-baryon collisions, respectively. The relative importance of
different $K^-$ sources seen here thus does not agree completely
with results from either RQMD \cite{rqmd}, where meson-baryon
interactions seem to contribute most, or the ARC \cite{arc} model,
which shows a main contribution from baryon-baryon interactions. We
attribute the different conclusions to the fact that different
assumptions about the cross sections for the elementary $K^-$
production reactions are introduced in these models. Similar
differences have also been seen previously in transport model
studies of $K^+$ production \cite{liko95}.
To learn about the dynamics of antikaon production, the primordial
$K^-$ multiplicity is shown in Fig. 2 as a function of time for
various $K^-$ production channels by turning off the $K^-$
absorption reactions in the calculation. It is seen that $K^-$
production starts at about t=0.5 fm/c after the contact of the
colliding nuclei and is initially dominated by the contribution
from baryon-baryon collisions. Soon after that, meson-baryon
collisions begin to contribute significantly. The meson-meson
collisions do not contribute until around 2 fm/c and become the
dominant ones after 4 fm/c. Both $K^-$ production and absorption
from meson-meson collisions and their inverse reactions are seen to
last longer than all other reactions. We note that $K^-$ production
from baryon-baryon collisions practically ceases after about 4
fm/c.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig2.eps}
\vskip 0.3truein
\caption{Primordial $K^-$ number from
different reaction channels as functions of time in the same
reaction as in Fig. 1.}}
\end{figure}
Including $K^-$ absorption we have repeated the calculations for
the same reaction as in Fig. 2, and the results are shown in Fig.
3. Comparing Fig. 2 and Fig. 3 one sees that close to half of the
$K^-$'s produced from baryon-baryon collisions are absorbed during
their propagation through the system. For both meson-baryon and
meson-hyperon collisions, about 40\% of the primordial $K^-$ are
absorbed, while it is only about one fourth for those created
through meson-meson collisions. This is mainly because $K^-$'s from
baryon-baryon collisions are produced earlier in the collisions, so
they spend a longer time in the dense matter and thus have a higher
chance to get absorbed. Another reason is that $K^-$'s from
baryon-baryon collisions generally have higher kinetic energies and
thus have velocities comparable to those of baryons, so it is
easier for them to be absorbed.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig3.eps}
\vskip 0.3truein
\caption{Net $K^-$ number after absorption for the same reaction as
in Fig. 1.}}
\end{figure}
\subsection{Mean field effects on antikaon spectra and yields}
To study the effects of mean field potentials on kaons and
antikaons, we have calculated their transverse mass spectra and
rapidity distributions with and without mean field potentials. We
shall also compare them with recent data from the E802/E866
collaboration \cite{ags93}.
Fig. 4 shows the $K^+$ and $K^-$ transverse mass distributions from
the reaction of Au + Au at $p_{\rm beam}$/A = 11.6 GeV/c and impact
parameters $b \leq$ 4 fm. It is seen that the attractive mean field
potential pulls $K^-$ to lower values of transverse momentum,
causing its slope to increase, while for $K^+$, the effect seems to
be opposite. Since the mean field potential is stronger for $K^-$
than for $K^+$, the effect is also larger for $K^-$ than for $K^+$.
While the $K^+$ data can be reproduced reasonably well by both
calculations with and without the mean field potential, the
calculations with the mean field potential for $K^-$ seem to better
reproduce the data.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig4.eps}
\vskip 0.3truein
\caption{Transverse mass spectra of $K^+$ and $K^-$ in the reaction
of Au + Au at $p_{\rm beam}$/A = 11.6 GeV/c and impact parameters
$b \leq$ 4 fm.}}
\end{figure}
It has been proposed that the $K^+/K^-$ ratio as a function of
their transverse mass is a sensitive probe of mean field effects
\cite{liko96}. This is because in the ratio the effects of the average
thermal velocities of $K^+$ and $K^-$, which are much larger than the
velocity change caused by medium effects, largely cancel out. Indeed, as shown
in Fig. 5 for the same reaction as in Fig. 4 this ratio decreases
with the transverse mass $m_{\rm t}$ in the case without mean field
potentials but increases with $m_{\rm t}$ once mean field
potentials are included.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig5.eps}
\vskip 0.3truein
\caption{Transverse mass spectra of $K^+/K^-$ in the same reaction as
in Fig. 4.}}
\end{figure}
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig6.eps}
\vskip 0.3truein
\caption{Rapidity distributions of $K^+$ and $K^-$ in the same reaction
as in Fig. 4.}}
\end{figure}
The $K^+$ and $K^-$ rapidity distributions for the same Au+Au
reaction as in Fig. 4 are shown in Fig. 6. The rapidity
distribution for $K^-$ is seen to be narrower than that for $K^+$.
This may also be a signature of the mean field effect as the strong
attractive $K^-$ mean field potential makes $K^-$ less energetic,
leading thus to a narrower rapidity distribution around the
mid-rapidity than in the case without a potential. Although these
low energy $K^-$'s at the central rapidity are more easily absorbed
by nucleons, the inverse reactions of $K^-$ production from
pion-hyperon interactions are also more important, as most pions and
hyperons are concentrated at the central rapidity. One thus expects
an increase of the $K^-$ yield after including the mean field
potential. The theoretical results support such a picture. Indeed,
without mean field potentials the rapidity distribution for $K^+$
is narrower than that for $K^-$ but becomes much wider after the
mean field potentials are included. Also, the increase in the $K^-$
yield at central rapidity is more than its decrease at the
projectile and target rapidities, leading to an increased total
$K^-$ yield when the mean field potential is introduced. This
effect can be more clearly seen in Fig. 7 where the ratio $K^+/K^-$
is shown as a function of rapidity $y$. It shows that the ratio
decreases from about 4.2 around the mid-rapidity to about 3.5
around the target and projectile rapidities if no kaon mean field
is included. Medium effects change this ratio dramatically; it
increases from about 3.0 at the mid-rapidity to about 4.0 at the
target and projectile rapidities.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig7.eps}
\vskip 0.3truein
\caption{Rapidity distributions of the $K^+/K^-$ ratio in
the same reaction as in Fig. 4.}}
\end{figure}
We note that both $K^+$ and $K^-$ production from Au+Au collisions
at $p_{\rm beam}$/A = 11.6 GeV/c have also been studied in the HSD
model \cite{cassing}, which includes both initial string dynamics
and subsequent hadron cascade. Contrary to our results as well as
those from the ARC and RQMD models, the HSD model underpredicts
the yields of both $K^+$ and $K^-$. This may be due to the
introduction in the model of a finite formation time, which is not
included in either our model or the ARC model. Although a finite
formation time is included in the RQMD model, the resulting large
yield of $K^+$ and $K^-$ is probably due to the inclusion of high
mass resonances, which are neglected in the HSD model. More work is
thus required to clarify the effects due to different physical
assumptions in these transport models.
\subsection{Transverse flow analysis for antikaons}
It was first demonstrated in Ref. \cite{lkl95} that kaon transverse
flow is a powerful probe of kaon in-medium potentials in heavy ion
collisions at SIS/GSI energies, which are an order of magnitude
lower than that at AGS. Subsequently, using ART 1.0 the kaon
transverse flow has also been found to be the most sensitive
observable for studying the kaon dispersion relation in the dense
medium formed in relativistic heavy ion collisions at AGS energies
\cite{liko96}. It is thus interesting to compare the transverse
flow of kaons and antikaons as their mean field potentials have
opposite signs.
First, we perform the standard transverse flow analysis for $K^-$
for the same Au+Au reaction as in Fig. 4, and the results are shown
in Fig. 8. It is seen that without the $K^-$ potential (shown by open
circles) antikaons flow in the direction opposite to that of
nucleons, since antikaons flowing with nucleons are absorbed due to
strong strange-exchange reactions. Such a shadowing effect due to
the spectator matter has also been seen in the transverse flow of
pions \cite{gos89,bal91,bal94,kin97}. However, when the attractive
mean field potential is included (shown by solid circles), the flow
of antikaons changes its direction toward that of nucleons.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig8.eps}
\vskip 0.3truein
\caption{Transverse flow of $K^-$ in the same reaction as in Fig. 4.}}
\end{figure}
For comparison we show in Fig. 9 the transverse flow of $K^+$
calculated with and without mean field potentials for the same
Au+Au reaction as in Fig. 8. For both $K^+$ and $K^-$, the
transverse flow is reversed once mean field potentials are
introduced in the transport model. The transverse flow of antikaons
is, however, found to be more sensitive to the mean field potential
due to the stronger antikaon potential compared to that for kaons.
This makes antikaon flow analysis an even more valuable tool for
studying the in-medium properties of antikaons. It is interesting
to mention that the preliminary data from the E866 collaboration on
the $K^+$ and $K^-$ flow is consistent with those predicted by an
attractive mean field potential for $K^-$ and a repulsive one for
$K^+$ \cite{ogilvie97}.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig9.eps}
\vskip 0.3truein
\caption{Transverse flow of $K^+$ in the same reaction as in
Fig. 4.}}
\end{figure}
\subsection{Coulomb effects on the transverse momentum
spectra of kaons and antikaons}
Besides the mean field potential due to strong interaction, both
$K^+$ and $K^-$ are also affected by their Coulomb potentials.
While the nuclear mean field potential is attractive for $K^-$ and
repulsive for $K^+$, the Coulomb potential acts in the same sense
for each of them. To see the relative importance of mean field and Coulomb
potentials on the spectra of $K^+$ and $K^-$, the following four
different calculations have been carried out: with Coulomb
potential only, with mean field potential only, with both
potentials, and without any potential. Results of these studies are
shown in Fig. 10 for the same reaction as in Fig. 4. It is seen
that for $K^-$ the effect due to the mean field potential is much
stronger than that due to the Coulomb potential. This is different
from the $K^+$ case, where the mean field potential is weaker than
that for $K^-$, and its effect is thus comparable to that due to
the Coulomb potential. We conclude that mean field effects on the
$K^-$ transverse mass spectra, especially at lower masses, are
distinguishable from that due to the Coulomb potential.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig10.eps}
\vskip 0.3truein
\caption{Coulomb and nuclear mean field effects on transverse
mass distributions of $K^+$ and $K^-$ in the same reaction as in
Fig. 4.}}
\end{figure}
\subsection{Beam energy dependence of the mean field effects on antikaons}
Heavy ion collision dynamics is governed by both individual
hadron-hadron collisions and mean field potentials. For baryons,
mean field effects have been found to be more important in
collisions at lower energies. However, it is not clear how the mean
field effects on produced particles depend on the beam energy. To
answer this question we have carried out an analysis
of the rapidity distribution and transverse mass spectrum of both
kaons and antikaons in the beam energy range of 2 to 16 GeV/nucleon
with and without mean field potentials. In this section, the beam
energy dependence of mean field effects on antikaons is presented.
Fig. 11 shows the $K^-$ transverse mass spectra calculated with and
without the mean field potential for Au+Au reactions at an impact
parameter of 4 fm and a beam energy of 4, 10.7 and 16 GeV/nucleon,
respectively. It is seen that the mean field effect is almost the
same at all three beam energies. To be more quantitative the $K^-$
inverse slope has been extracted by fitting the $m_{\rm t}$ spectra
with exponential functions. We note that this parameter should not
be identified as the temperature as effects due to collective
radial flow have not been corrected. In Fig. 12, the inverse slope
parameter of the $K^-$ transverse mass spectrum is shown as a
function of beam energy. It is seen that it increases with beam
energy in both cases as one would expect. Moreover, the mean field
potential reduces the $K^-$ inverse slope parameter by about 20\%
in the whole beam energy range.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig11.eps}
\vskip 0.3truein
\caption{Transverse mass spectra of $K^-$ with and without the
$K^-$ mean field potential in the reaction of Au + Au at different
beam energies and an impact parameter $b$ = 4 fm. Solid and open
circles are, respectively, for the case with and without mean field
potentials.}}
\end{figure}
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig12.eps}
\vskip 0.3truein
\caption{Inverse slope or apparent temperature of $K^-$
with and without $K^-$ mean field potential in collisions of Au +
Au at different beam energies and impact parameter $b$ = 4 fm.}}
\end{figure}
The rapidity distributions of antikaons from the same reaction as
in Fig. 11 are compared in Fig. 13. It is again seen that the mean
field effect on $K^-$ rapidity distribution changes very little as
the beam energy varies. We note that a similar observation has also
been found for kaons \cite{liko95}. We thus conclude that the mean
field effects on kaons and antikaons are essentially independent of
the beam energy in the energy range of 2 to 16 GeV/nucleon. Several
experimental collaborations are currently studying the beam energy
dependence of particle production at the AGS, so our results can be
tested in the near future.
\begin{figure}[htp]
\setlength{\epsfxsize=3.5truein}
\centerline{
\vskip 0.2truein
\hskip 1.25truein
\epsffile{songfig13.eps}
\vskip 0.3truein
\caption{Same as Fig. 11 for the rapidity distribution of $K^-$ with
and without the mean field potential.}}
\end{figure}
\section{Summary}
In summary, we have studied $K^-$ production in Au+Au collisions at
different beam energies, based on an extension of the relativistic
transport model ART 1.0. Since $K^-$ has not been well treated in a
previous version of this model, we have improved the model by
including various reaction channels for both $K^-$ production and
scattering. Furthermore, an attractive $K^-$ mean field potential,
which is consistent with that extracted from the kaonic atom data,
has been included. Our results suggest that $K^-$ is mainly
produced from meson-meson and meson-baryon interactions. The
baryon-baryon interactions are less important, contributing only
one fifth of the total $K^-$ yield. Our model is able to describe
reasonably well the observed $K^-$ rapidity and transverse momentum
distributions. Furthermore, the mean field effect is much more
clearly seen on the $K^-$ rapidity distribution and transverse mass
spectrum, compared to those for $K^+$. Also, the $K^+/K^-$ ratio as
a function of transverse momentum or rapidity offers another
possibility for studying the medium effects.
We have also used the model to study the $K^-$ flow and found that
due to the attractive mean field potential the $K^-$ flow is
similar to the nucleon flow. Furthermore, the medium effect on
$K^-$ flow is much stronger than that on the $K^+$ flow, which was
previously studied using the ART 1.0 model.
We have compared the effect due to the nuclear potential to that
due to the Coulomb potential. Our results show that, unlike the
case of $K^+$ where the two effects are comparable, the nuclear
mean field potential has a much stronger effect on $K^-$ than the
Coulomb potential. However, both nuclear and Coulomb potentials
affect the $K^-$ in a similar way, i.e., they both tend to shift
the $K^-$ from high momentum to a lower value.
We have also carried out an extensive calculation for the $K^+$ and
$K^-$ yields, rapidity distributions and $m_t$ spectra at different
beam energies with and without mean field potentials. We have found
that for $K^-$ the medium effects are almost independent of the
incident energy: although $K^-$'s produced at lower beam energies
are more susceptible to the influence of the mean field potential,
the potential itself becomes weaker as the beam energy is reduced.
To conclude, it will be very useful to have more experimental data
from heavy ion collisions at the AGS energies to test the
predictions from our theoretical studies. Such a study will help
improve our understanding of medium effects on antikaons.
\bigskip
\centerline{\bf Acknowledgement}
\bigskip
We would like to thank C. Ogilvie for his interest in this study
and B. Zhang for a critical reading of the manuscript. This work
was supported in part by NSF Grant No. PHY-9509266 and PHY-9870038,
the Robert A Welch foundation under Grant A-1358, and the Texas
Advanced Research Program.
|
1,116,691,499,902 | arxiv | \section*{\sc introduction}
\indent \indent At the time of writing, over 2 000 000 variable stars are catalogued in the Variable Star Index (VSX), currently run
by the American Association of Variable Stars Observers \footnote{Obtained from \url{https://www.aavso.org/vsx/index.php}}. Only a handful of these objects have been studied in full detail, with measurements of their physical parameters such as their mass, radii or temperature. Further, these variables are normally studied only once at the time of submission to the catalog \cite{Tkachenko2016}. These studies are usually performed with photometry since spectroscopic studies are more costly than those done with CCDs. Consequently, these studies adopt phenomenological approaches rather than physical ones. These analysis, required by the catalog previous to submission, classify stars by parameters such as its period $P$, initial epoch $T_0$ and magnitude range $m_{\text{max}} - m_{\text{min}}$. Variability type classification is performed manually by visual inspection of the light curve or phase plot, where the variability cycles are displayed with higher resolution.
The detection of variable star systems is important in astrophysics since it aids areas across the field: Cepheid variables are used as standard candles to calculate distances \cite{Majaess2009}. Further, the detection of eclipsing binaries is essential to exoplanet hunting research due to their frequent confusion with exoplanet transits by detection software \cite{Lissauer2014}.
In addition, the amateur community can easily contribute to the detection of variable stars, since it obtains many CCD images for a variety of purposes (astrophotography, comet tracking, etc.). Since aperture selection can be tedious work, these images are often not inspected photometrically by the astronomer, especially if he or she is not interested in variable star detection. Furthermore, most of the time the astronomer will not encounter unknown variability in the brightness of the stars, so this time-consuming work would serve no purpose.
In this work we present \texttt{VarStar Detect}, a Python library dedicated to the semi-automatic detection of stellar variability. This library is intended to be the basis code of the \texttt{VarStar Detect} program (with a GUI) still to be written. This program will automatically perform aperture photometry of all the targets in the FOV and present the possible variable star candidates for later visual inspection by the astronomer. This way, the effort and time spent mining for variability in the astronomer's images will be significantly reduced, encouraging astronomers to look for variability in their data and submit it to VSX. This paper presents the functionality and assessment of the \texttt{VarStar Detect} package available on PyPI\footnote{To access the package see \url{https://pypi.org/project/varstardetect/} for installation instructions or type \texttt{pip install varstardetect} in your command line. Full documentation of available functions and tutorials is available on the VarStar Detect github repository: \url{https://github.com/VarStarDetect/varstardetect}}.
In the first part of this paper, a mathematical overview of the \texttt{amplitude\_test} function design is presented. The current version of \texttt{VarStar Detect} is optimized for the mining of TESS data. In the latter part, a short investigation on sector 1 TESS light curves with \texttt{VarStar Detect} is shown to determine its performance.
\section*{\sc \texttt{VarStar Detect} design and mathematical background}
\indent \indent \texttt{VarStar Detect} is designed to detect variability in photometric light curves. To do that, it must accomplish four tasks:
\begin{enumerate}
\item Fit a trigonometric polynomial of degree $s$ to the light curve (using the weighted least squares method) and calculate its corresponding reduced $\chi^2$ parameter ($\chi^2_r$).
\item Choose the trigonometric polynomial fit for which $\chi^2_r$ is closest to 1.
\item Calculate the amplitude of the polynomial fit.
\item Determine whether the star is variable.
\end{enumerate}
\indent \indent In the case we are dealing with, we are looking to identify a functional relationship between the flux and time. This relationship should describe the behaviour of the obtained data and, from it, the characteristics of the star in question.
\subsection*{\sc data processing flow}
\indent \indent The primary inputs for \texttt{VarStar Detect} are lists of time, flux and flux uncertainty measurements\footnote{These are filtered by the program to trim errors and non-existent data.}. Time is supported in any unit, although Heliocentric Julian Date (HJD) is recommended for submission to VSX. Brightness measurements (flux) are recommended to be input in magnitudes, which can be easily obtained with:
\begin{equation}
m = \overline{m} - 2.5\log{\left(\frac{F}{\overline{F}}\right)}
\end{equation}
where $m$ is the magnitude, $F$ is the flux, $\overline{m}$ is the average magnitude of the star, easily obtained from VizieR\footnote{\href{https://vizier.u-strasbg.fr}{https://vizier.u-strasbg.fr}}, and $\overline{F}$ is the average flux value of the whole data set.
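A minimal sketch of this conversion (including a first-order propagation of the flux uncertainty, which the text does not spell out) could look as follows; the function name is ours and is not necessarily the one used in the package.
\begin{verbatim}
import numpy as np

def flux_to_magnitude(flux, flux_err, mean_mag):
    """Convert fluxes to magnitudes using the star's catalogued mean magnitude."""
    mean_flux = np.mean(flux)
    mag = mean_mag - 2.5 * np.log10(flux / mean_flux)
    mag_err = 2.5 / np.log(10) * flux_err / flux   # first-order propagation
    return mag, mag_err
\end{verbatim}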
Given the physical context of the problem, it is known that the relationship sought must be sinusoidal \cite{Andronov2012}. Therefore, we adopt the following parametrization of the Fourier series as the approximating function:
\begin{equation}
P_s(t) = C_1 + \sum_{i = 1}^s \left( C_{i,1} \cos(i \omega (t - T_0)) + C_{i,2} \sin(i \omega (t - T_0)) \right)
\end{equation}
where $\omega$ is the angular frequency of the function (given by the Lomb-Scargle periodogram \cite{VanderPlas2018}) and $T_0$ the initial epoch, characteristics of the star. In summary, it consists of an $s$-degree trigonometric polynomial with coefficients $C_1,C_{1,1},\dots,C_{s,2}$ to be determined through regression.
Due to the sheer amount of data and the knowledge of the uncertainty of each flux measurement, the regression method of weighted least squares is applied. This is not only because of its computational ease, but also because using interpolating techniques would lead to Runge's phenomenon, producing a spurious trend in the data (its variability should be taken into account).
The weighted least squares method is a generalization of the widely-known least squares method. By taking into account the uncertainty of each data point, it provides a more accurate approximating function which, unlike the original method, minimizes the $\chi^2$ parameter:
\begin{equation}
\chi^2 = \sum_{j = 1}^n \left( \frac{y_j - P_s(x_j)}{\Delta y_j} \right)^2
\end{equation}
where $n$ is the number of data points and $\Delta y_j$ the uncertainty of the $y_j$ measurement.
Setting the derivative of $\chi^2$ with respect to each parameter to zero, the following linear system, given by the normal equations, is obtained:
\begin{equation}
(X^T W X)\cdot \beta = (X^T W Y)
\end{equation}
where $\beta =$\scalebox{0.75}{$
\begin{pmatrix} C_1 \\ C_{1,1} \\ \vdots \\ C_{s,2} \end{pmatrix}$} is the coefficients matrix,
$X =$ \scalebox{0.75}{$
\begin{pmatrix}
1 & \cos(\omega(t_1 - T_0)) & \sin(\omega(t_1 - T_0)) & \hdots & \sin(s\omega(t_1 - T_0))\\
1 & \cos(\omega(t_2 - T_0)) & \sin(\omega(t_2 - T_0)) & \hdots & \sin(s\omega(t_2 - T_0))\\
\vdots & \vdots&\vdots &\ddots &\vdots \\
1 & \cos(\omega(t_n - T_0)) & \sin(\omega(t_n - T_0)) & \hdots & \sin(s\omega(t_n - T_0))
\end{pmatrix}$}
the function matrix,
$W =$\scalebox{0.75}{$
\begin{pmatrix}
{1}/{\Delta y_1 ^2} & 0 & \hdots & 0 \\
0 &{1}/{\Delta y_2 ^2} & \hdots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & 0& \hdots & {1}/{\Delta y_n ^2}
\end{pmatrix}$}
the weights matrix and $Y =$\scalebox{0.75}{$
\begin{pmatrix}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{pmatrix}$}
is the flux matrix.
The solution to this system provides the optimal coefficients ($\beta$) which minimize $\chi^2$.
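A minimal numpy sketch of this fitting step is shown below; the function name and the exact return values are illustrative choices of ours and need not coincide with the implementation in the package.
\begin{verbatim}
import numpy as np

def fit_trig_polynomial(t, y, y_err, omega, T0, s):
    """Weighted least squares fit of an s-degree trigonometric polynomial.

    Returns the coefficient vector beta = (C_1, C_11, C_12, ..., C_s2)
    and the function matrix X used to build the normal equations.
    """
    phase = omega * (t - T0)
    cols = [np.ones_like(t)]
    for i in range(1, s + 1):
        cols.append(np.cos(i * phase))
        cols.append(np.sin(i * phase))
    X = np.column_stack(cols)                          # function matrix
    W = np.diag(1.0 / y_err**2)                        # weights matrix
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # normal equations
    return beta, X
\end{verbatim}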
\texttt{VarStar Detect} then fits the data to several polynomials of degree $s\in [1,30]$ and calculates their respective $\chi^2_r$ parameters (the current version of \texttt{VarStar Detect} is optimized for TESS, which observes stars for about 30 days, so the maximum detectable period is 30 days).
Although the principle of maximum likelihood suggests $\chi^2$ minimization, to determine the degree $s$ of the polynomial, the $\chi^2_r$ parameter will be considered:
\begin{equation}
\chi^2_r = \frac{\sum_{j = 1}^n \left( \frac{y_j - P_s(x_j)}{\Delta y_j} \right)^2}{n - 2s - 1}
\end{equation}
\indent \indent This parameter takes into account the degrees of freedom of the fit ($n - 2s - 1$, since the $s$-degree polynomial has $2s + 1$ terms).
Instead of minimising $\chi^2_r$, we pursue $\chi^2_r \to 1$ in order to prevent over-fitting.
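Reusing the fitting routine sketched above, this degree selection can be written as a simple loop; again, this is an illustrative sketch rather than the package code.
\begin{verbatim}
def best_degree(t, y, y_err, omega, T0, max_degree=30):
    """Pick the fit whose reduced chi-squared is closest to 1."""
    best = None
    for s in range(1, max_degree + 1):
        beta, X = fit_trig_polynomial(t, y, y_err, omega, T0, s)
        resid = (y - X @ beta) / y_err
        chi2_r = np.sum(resid**2) / (len(t) - 2*s - 1)
        if best is None or abs(chi2_r - 1.0) < abs(best[1] - 1.0):
            best = (s, chi2_r, beta, X)
    return best
\end{verbatim}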
\texttt{VarStar Detect} then obtains the amplitude of the polynomial by subtracting the lowest fitted value from the highest:
\begin{equation}
A = \texttt{max}(P_s(x_j)) - \texttt{min}(P_s(x_j))
\end{equation}
To obtain the uncertainty of the amplitude, knowledge of the uncertainty of the fitted values is necessary and therefore the uncertainty of the fitting parameters are needed. In a generic regression, to identify the parameter's uncertainty, the covariance matrix ($cov$) needs to be constructed, gathering the covariance of each pair of parameters.
In our case, we will only be interested in the diagonal elements of $cov$, and following the method described \footnote{Obtained from \url{https://www.gnu.org/software/gsl/doc/html/lls.html\#c.gsl_multifit_wlinear}}, we can calculate
\begin{equation}
cov = (X^T W X)^{-1}
\end{equation}
easily obtained with the \textit{numpy} package \footnote{\href{https://numpy.org}{https://numpy.org}}.
\indent\indent Following the general theory described in \cite{MesandUncer}, to calculate the uncertainty of the fitted function, it is enough to consider that the given function is a vectorial function in which, aside from the time as an independent variable, the parameters are also considered as variables:
$$
P_s(t) \Rightarrow P_s(t, C_1, C_{1,1}, \dots, C_{s,2})
$$
Then, since $\Delta t = 0$ (it's not a measurement), we will consider:
\begin{equation}
\Delta P_s(z) = \sqrt{\left( \frac{\partial P_s}{\partial C_1}(z)\cdot \Delta C_1 \right)^2 +
\left( \frac{\partial P_s}{\partial C_{1,1}}(z)\cdot \Delta C_{1,1} \right)^2 +
\dots +
\left( \frac{\partial P_s}{\partial C_{s,2}}(z)\cdot \Delta C_{s,2} \right)^2
}
\end{equation}
For each parameter, the derivative is just a single element of the given vectorial function evaluated at $z$ and, therefore, this uncertainty can be easily computed with a nested loop.
\indent\indent With a similar expression to the above, the amplitude uncertainty can be calculated taking into account the accumulated error on a subtraction.
\begin{equation}
\Delta A = \sqrt{\left(\Delta\texttt{max}(P_s(x_j))\right) ^ 2 + \left(\Delta\texttt{min}(P_s(x_j))\right) ^ 2 }
\end{equation}
Knowing these values, \texttt{VarStar Detect} proceeds with the amplitude variability test.
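Putting the covariance matrix and the two propagation formulas together, a compact numpy sketch (using only the diagonal of the covariance matrix, as described above) could read as follows; it assumes the function matrix and coefficients returned by the fitting sketch given earlier.
\begin{verbatim}
def fit_uncertainties(X, y_err, beta):
    """Propagate coefficient uncertainties to the fit and the amplitude."""
    W = np.diag(1.0 / y_err**2)
    cov = np.linalg.inv(X.T @ W @ X)     # covariance of the coefficients
    dbeta = np.sqrt(np.diag(cov))        # coefficient uncertainties
    fit = X @ beta
    # dP_s/dC_k at each epoch is simply the k-th column of X
    dfit = np.sqrt((X**2) @ dbeta**2)
    i_max, i_min = np.argmax(fit), np.argmin(fit)
    amplitude = fit[i_max] - fit[i_min]
    d_amplitude = np.hypot(dfit[i_max], dfit[i_min])
    return amplitude, d_amplitude
\end{verbatim}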
\subsection*{\sc amplitude test for stellar variability detection}
\indent \indent The purpose of the amplitude test is to decide whether the star in question is a potential variable candidate that deserves to be visually inspected. In essence, it nominates as potential variables all stars which have an amplitude greater than or equal to the threshold amplitude. The question is how to select the threshold amplitude. This will depend on the equipment used and the quality of the night sky, as it measures the smallest amplitude the equipment can detect before measuring just noise. This work does not go into the physical meaning of this threshold; we encourage further investigations to do so. In the testing of the code, an empirical approach was used to determine the threshold value for the data.
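The test itself then reduces to a comparison against the chosen threshold; the default value below corresponds to the TESS threshold adopted later in this paper, and the signature is our own simplified sketch rather than the package's \texttt{amplitude\_test} interface.
\begin{verbatim}
def amplitude_test(amplitude, threshold=20.0):
    """Nominate a star as a variable candidate when the fitted amplitude
    reaches the detection threshold (same units as the flux)."""
    return amplitude >= threshold
\end{verbatim}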
\section*{\sc test on tess}
\subsection*{\sc transiting exoplanet survey satellite (tess)}
\indent \indent TESS is a space telescope launched by NASA on the 18$^{\text{th}}$ of April 2018 aboard a SpaceX Falcon 9 rocket \cite{ricker2014}. Its scientific objectives are similar to those of its predecessor (Kepler): to find extrasolar planets and to measure their masses and atmospheric compositions.
To achieve the previously outlined objectives, the telescope carries four identical cameras that together monitor sectors of 24$^\circ$ $\times$ 96$^\circ$ of the sky. Each sector is observed for a total of about 27 days, corresponding to two full orbits of the spacecraft around the Earth. During this time, the telescope produces photometry with a 2-minute cadence.
Furthermore, TESS data are available at different processing levels \cite{Fausnaugh2018}. The Simple Aperture Photometry (SAP) flux is the raw photometry, without removal of systematic variations, i.e., of the variability signals common to all stars. The data with this level of correction applied are the Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) flux. In this study, PDCSAP flux is used for data mining, and SAP flux is used when determining whether a feature has been erroneously introduced by the photometry extraction process or is of astrophysical origin.
In this study we have analysed the first 500 light curves of the first sector observed by TESS to determine the efficiency of \texttt{VarStar Detect}.
\subsection*{\sc Results}
\indent \indent Firstly, 3$\sigma$ outliers were identified in the data. All of these values were removed from the light curve files downloaded using the \texttt{lightkurve} package \footnote{\href{https://docs.lightkurve.org}{https://docs.lightkurve.org}}. PDCSAP flux was used.
Following the outlier extraction process, the evaluation test function was applied to obtain possible variable star candidates. For TESS data, a threshold of 20 e$^-$s$^{-1}$ was selected. This threshold was chosen after visual inspection of variable star light curves discovered through TESS. Random light curves (selected with the \texttt{random} Python module\footnote{\url{https://docs.python.org/3/library/random.html\#module-random}}) were inspected; the highest noise values were about 20 e$^-$s$^{-1}$.
500 light curves were analysed with the amplitude test function. Out of the 500 stars surveyed, a total of 169 were nominated as variables. After visual inspection of the candidate light curves, a total of 163 present periodic variability in brightness over time. This results in an efficiency of 96.45$\%$ for the program.
This test run also detected the transit of exoplanet WASP-126 c, which had been previously discovered \cite{Pollacco2006}.
All these detected variable stars will be submitted to VSX after checking on VizieR whether they have been previously discovered by other teams.
\section*{\sc conclusion and discussion}
\indent \indent In this work, we have presented \texttt{VarStar Detect}: a Python library dedicated to the semi-automatic detection of stellar variability. The mathematical background behind the Python package was described, the functionality of the amplitude test was introduced, and the program as a whole was tested on Sector 1 TESS PDCSAP data, downloaded with the data download function designed and available in the Python repository. The program was shown to be 96.45$\%$ efficient, detecting a total of 163 variable stars in the imported Sector 1 database. Following up on this analysis, cataloging of these objects is being performed.
\texttt{VarStar Detect} is a program which focuses on amateur variable star detection. Since submission to the catalog requires visual inspection of the data, the program is dedicated to candidate extraction from an imported database. The current version of \texttt{VarStar Detect} is optimised for stellar variability detection using TESS. Future versions of \texttt{VarStar Detect} will include an aperture photometry function, not yet tested at the moment, which will allow the import of databases in the form of FITS images.
Furthermore, as previously mentioned, this investigation does not present a quantitative method for determining the value for the threshold of the amplitude test in an objective manner and a further study is encouraged.
In sum, \texttt{VarStar Detect} is a simple python package that facilitates variable star detection at high precision.
\section*{\sc acknowledgement}
\indent \indent The development of this software comes from the voluntary initiative of the three undergraduate authors. It did not have any formal academic supervision except for the advice of the following academics, whom we want to acknowledge: Dr. Isabel Llorente García (UCL), Dr. Pablo Pérez Riera (University of Oviedo) and Dr. Carlos Enrique Carleos Artime (University of Oviedo). Also, thank you Ellen for your support in writing this paper.
|
1,116,691,499,903 | arxiv | \section*{Acknowledgments}
RV thanks the Institut f\"{u}r Theoretische Physik, Heidelberg for their kind hospitality and the Excellence Initiative of Heidelberg University for a Guest Professorship during the period when this work was
initiated; we thank Juergen Berges, Jan Pawlowski, and Michael Schmidt for encouraging this effort. We thank Cristina Manuel and Naoki Yamamoto for valuable comments. We thank Fiorenzo Bastianelli for many helpful discussions and for sharing his deep insights into
the world-line formulation of quantum field theory. RV would also like to thank the attendees of a seminar on this work at Stony Brook for their helpful comments; in particular, he would like to thank Dima Kharzeev, Ho-Ung Yee, Yi Yin and Ismail Zahed.
NM acknowledges support by the Studienstiftung des Deutschen Volkes and by the DFG Collaborative Research Centre SFB 1225 (ISOQUANT). This material is partially based upon work supported by the U.S. Department of Energy,
Office of Science, Office of Nuclear Physics, under contract No. DE-SC0012704, and within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration.
|
1,116,691,499,904 | arxiv | \section{Introduction}
The rich and accessible labeled data fuel the revolutionary success of deep learning \cite{deng2009imagenet,zhao2017multi,li2018support}. However, in many specific real applications, only limited labeled data are available. This motivates the investigation of few-shot learning (FSL), where we need to learn the concepts of new classes based on a few labeled samples. To combat the deficiency of labeled data, some FSL methods resort to enhancing the discriminability of the feature representations such that a simple linear classifier learned from a few labeled samples can reach satisfactory classification results \cite{vinyals2016matching,snell2017prototypical,triantafillou2017few}. Another category of methods investigates techniques for quickly and effectively updating a deep neural network with a few labeled data, either by learning a meta-network and the corresponding updating rules \cite{finn2017model,li2017meta,ravi2016optimization,munkhdalai2017meta}, or by learning a meta-learner model that generates some components of a classification network directly from the labeled samples \cite{li2019novel,gidaris2018dynamic,rusu2018meta}. Alternatively, the third group of methods addresses this problem with data augmentation by distorting the labeled images or synthesizing new images/features based on the labeled ones \cite{Chen2019Image,gao2018low,schwartz2018delta,chen2018semantic}.
Our proposed method falls into the data augmentation based category. The basic assumption of approaches in this category is that the intra-class cross-sample relationship learned from seen (training) classes can be applied to unseen (test) classes. Once the cross-sample relationship is modeled and learned from seen classes, it can be applied on the few labeled samples of unseen class to hallucinated new ones. It is believed that the augmented samples can diversify the intra-class variance and thus help reach sharper classification boundaries \cite{zhang2018metagan}. Whatever data augmentation technique is used, it is critical to secure discriminability of the augmented samples, as otherwise they shall cast catastrophic impact on the classifier. On the other hand, the decision boundary of a classifier can be determined precisely only when labeled samples exhibit sufficient intra-class variance. Thus, diversity of the augmented samples is also of a crucial role. This is in fact the essential motivation of investigating data augmentation for FSL, as a few labeled samples encapsulate limited intra-class variance.
Though various data augmentation based FSL methods have been proposed recently, they fail to simultaneously guarantee discriminability and diversity of the synthesized samples. Some methods learn a finite set of transformation mappings between samples in each base (label-rich) class and directly apply them to seed samples of novel (label-scarce) classes. However, the arbitrary mappings may destroy the discriminability of the synthesized samples \cite{Chen2019multi,hariharan2017low,schwartz2018delta}. Other methods synthesize samples specifically for certain tasks, which regularizes the synthesis process \cite{wang2018low,munkhdalai2017meta}. Thus, these methods can guarantee the discriminability of the synthesized samples. But the task constrains the synthesis process and consequently the synthesized samples tend to collapse into certain modes, thus failing to secure diversity.
To avoid limitations of the existing methods, we propose Adversarial Feature Hallucination Networks (AFHN) which consists of a novel conditional Wasserstein Generative Adversarial Networks (cWGAN) \cite{gulrajani2017improved} based feature synthesis framework and two novel regularizers. Unlike many other data augmentation based FSL approaches that perform data augmentation in the image space \cite{Chen2019ImageAAAI,Chen2019multi,Chen2019Image}, our cWGAN based framework hallucinates new features by using the features of the seed labeled samples as the conditional context. To secure discriminability of the synthesized features, AFHN incorporates a novel classification regularizer that constrains \textit{the synthesized features being of high correlation with features of real samples from the same class while of low correlation with those from the different classes}. With this constraint, the generator is encouraged to generate features encapsulating discriminative information of the class used as the conditional context.
It is more complicated to ensure diversity of the synthesized features, as conditional GANs are notoriously susceptible to the mode collapse problem, where only samples from a limited number of distribution modes are synthesized. This is caused by the fact that using the usually high dimensional and structured data as the condition tends to make the generator ignore the latent code, which controls diversity. To avoid this problem, we propose a novel anti-collapse regularizer which assigns a high penalty to the case where mode collapse is likely to occur. It is derived from the observation that \textit{noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to the feature space}. We directly penalize the ratio of the dissimilarity of two synthesized feature vectors to the dissimilarity of the two noise vectors generating them. With this constraint, the generator is forced to explore minor distribution modes, thus encouraging diversity of the synthesized features.
With discriminative and diverse features synthesized, we can get highly effective classifiers and accordingly appealing recognition results. In summary, the contributions of this paper are as follows: (1) We propose a novel cWGAN based FSL framework which synthesizes fake features by taking those of the few labeled samples as the conditional context. (2) We propose two novel regularizers that guarantee discriminability and diversity of the synthesized features. (3) The proposed method reaches the state-of-the-art performance on three common benchmark datasets.
\section{Related Work}
Regarding the perspective of addressing FSL, existing algorithms can generally be divided into three categories. The first category of methods aim to enhance the discriminability of the feature representations extracted from images. To this goal, a number of methods resort to deep metric learning and learn deep embedding models that produce discriminative feature for any given image \cite{ren2018meta,vinyals2016matching,snell2017prototypical,triantafillou2017few}. The difference lies in the loss functions used. Other methods following this line focus on improving the deep metric learning results by learning a separate similarity metric network \cite{yang2018learning}, task dependent adaptive metric \cite{oreshkin2018tadam}, patch-wise similarity weighted metric \cite{hao2019collect}, neural graph based metric \cite{kim2019edge,liu2018learning}, etc.
A more common category of algorithms address FSL by enhancing flexibility of a model such that it can be readily updated using a few labeled samples. These methods utilize meta-learning, also called learning to learn, which learns an algorithm (meta-learner) that outputs a model (the learner) that can be applied on a new task when given some information (meta-data) about that task. Following this line, some approaches aim to optimize a meta-learned classification model such that it can be easily fine-tuned using a few labeled data \cite{ravi2016optimization,finn2017model,li2017meta,li2017meta,ravi2016optimization,munkhdalai2017meta,nichol2018first}. Other approaches adopt neural network generation and train a meta-learning network which can adaptively generate entire or some components of a classification neural network from a few labeled samples of novel classes \cite{qiao2017few,gidaris2018dynamic,li2019rethinking,li2019novel}. The generated neural network is supposed to be more effective to classify unlabeled samples from the novel classes, as it is generated from the labeled samples and encapsulates discriminative information about these classes.
The last category of methods combat deficiency of the labeled data directly with data augmentation. Some methods try to employ additional samples by some forms of transfer learning from external data \cite{ren2018meta,wang2016learning}. More popular approaches perform data augmentation internally by applying transformations on the labeled images or the corresponding feature representations. Naively distorting the images with common transformation techniques (e.g., adding Gaussian perturbation, color jittering, etc.) is particularly risky as it likely jeopardizes the discriminative content in the images. This is undesirable for FSL as we only have a very limited number of images to be utilized; quality control of the synthesizing results for any single image is crucial as otherwise the classifier could be ruined by the low-quality images. Chen et al. propose a series of methods of performing quality-controlled image distortions by applying perturbation in the semantic feature space \cite{Chen2019multi}, shuffling image patches \cite{Chen2019ImageAAAI} and explicitly learning an image transformation network \cite{Chen2019Image}. Performing data augmentation in the feature space seems more promising as the feature variance directly affects the classifier. Many approaches with this idea have been proposed by hallucinating new samples for novel class based on seen classes \cite{schwartz2018delta,hariharan2017low}, composing synthesized representations \cite{chen2018semantic,Yu2018Low}, and using GANs \cite{gao2018low,zhang2018metagan}.
This paper proposes Adversarial Feature Hallucination Networks (AFHN), a new GAN-based FSL model that augments labeled samples by synthesizing fake features conditioned on those of the labeled ones. AFHN significantly differs from the two existing GAN based models \cite{zhang2018metagan,gao2018low} in the following aspects. First, AFHN builds upon Wasserstein GAN (WGAN) model which is known for more stable performance, while \cite{zhang2018metagan,gao2018low} adopt the conventional GAN framework. Second, neither \cite{zhang2018metagan} nor \cite{gao2018low} has a classification regularizer. The most similar optimization objective in \cite{gao2018low} is the one which optimizes the synthesized features as the outlier class (relative to the real class), while that in \cite{zhang2018metagan} is a cycle-consistency objective. We instead regularize the synthesized features of being high correlation with real features from the same classes and low correlation with those from the different classes. Third, After training the generator, we learn a standard Softmax classifier using the synthesize features, while \cite{zhang2018metagan,gao2018low} utilize them to enhance existing FSL methods. Last, we further propose the novel anti-collapse regularizer to encourage diversity of synthesized features, while \cite{zhang2018metagan,gao2018low} do not.
AFHN also bears some similarity with an existing feature hallucination based FSL method \cite{wang2018low}. But we adopt the GAN framework, in which the discriminator regularizes the features produced by the generator, while \cite{wang2018low} uses a simple generative model. Besides, AFHN synthesizes new features to learn a standard Softmax classifier for new classes, while \cite{wang2018low} utilizes them to enhance an existing FSL classifier. Moreover, we aim to hallucinate diverse features with the novel anti-collapse regularizer, while \cite{wang2018low} does not have such an objective.
\section{Algorithm}
In this section, we first briefly introduce Wasserstein GAN and then elaborate the details of how we build the proposed AFHN model upon it.
\subsection{Wasserstein GAN}
GAN is a recently proposed generative model that has shown impressive performance on synthesizing realistic images. The generative process in GAN is modeled as a game between two competing models, the generator and the discriminator. The generator aims to generate from noise fake samples as realistic as possible, such that the discriminator cannot tell whether they are real or fake. The discriminator instead tries its best to make the correct judgment. This adversarial game pushes the generator to extensively explore the data distribution and consequently produce more visually appealing samples than conventional generative models. However, it is known that GAN training is highly unstable.
\cite{arjovsky2017wasserstein} analyzes the convergence properties of the objective function of GAN and proposes the Wasserstein GAN (WGAN), which utilizes the Wasserstein distance in the objective function and is shown to have better theoretical properties than the vanilla GAN.
We adopt the improved variant of WGAN \cite{gulrajani2017improved}, which optimizes the following min-max problem,
\begin{equation}
\begin{array}{cl}
\underset{G}{\min} \hspace{1pt} \underset{D}{\max} & \underset{\tilde{\mathbf{x}}\sim\mathbb{P}_g}{\mathbb{E}}[D(\tilde{\mathbf{x}})]
- \underset{\mathbf{x}\sim\mathbb{P}_r}{\mathbb{E}} [D(\textbf{x})] \\
& + \lambda \underset{\hat{\mathbf{x}}\sim\mathbb{P}_{\hat{\mathbf{x}}}}{\mathbb{E}}[(\|\nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}})\|_2-1)^2],
\end{array}
\label{wgan}
\end{equation}
where $\mathbb{P}_r$ is the data distribution and $\mathbb{P}_g$ is the model distribution defined by $\tilde{\mathbf{x}}\sim G(\mathbf{z})$, with $\mathbf{z}\sim p(\mathbf{z})$ randomly sampled from noise distribution $p$. $\mathbb{P}_{\hat{\mathbf{x}}}$ is defined by sampling uniformly along straight lines between pairs of points sampled from the data distribution $\mathbb{P}_r$ and the generator distribution $\mathbb{P}_g$, i.e., $\hat{\mathbf{x}} = \alpha \mathbf{x}+(1-\alpha)\tilde{\mathbf{x}}$ with $\alpha \sim U (0, 1)$. The first two terms approximate the Wasserstein distance and the third term penalizes the gradient norm of $\hat{\mathbf{x}}$.
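
As a concrete illustration, the gradient-penalty term of Eq.~\eqref{wgan} can be computed as in the following PyTorch-style sketch (our own illustrative code, assuming the critic \texttt{D} operates on batches of feature vectors; it is not taken from any released implementation):
\begin{verbatim}
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    # Interpolate between real and fake samples: x_hat = a*x + (1-a)*x_tilde.
    alpha = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (alpha * real + (1.0 - alpha) * fake).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True)[0]
    # Penalize the deviation of the gradient norm from 1 (the third term above).
    return lam * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
\end{verbatim}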
\subsection{Adversarial Feature Hallucination Networks}
Following the literature, we formally define FSL as follows: given a distribution of tasks $P(\mathcal{T})$, a sampled task $\mathcal{T}\sim P(\mathcal{T})$
is a tuple $\mathcal{T}=(S_\mathcal{T}, Q_\mathcal{T})$ where the support set
$S_\mathcal{T}=\{\{\textbf{x}_{i, j}\}^{K}_{i=1}, y_j\}^{N}_{j=1}$
contains $K$ labeled samples from each of the $N$ classes. This is usually known as $K$-shot $N$-way
classification.
$Q_\mathcal{T}=\{(\textbf{x}_q, y_q)\}^Q_{q=1}$ is the query set where the samples come from the same $N$ classes as the support set $S_\mathcal{T}$. The learning objective is to minimize the classification prediction risk of $Q_\mathcal{T}$, according to $S_\mathcal{T}$.
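
To make the episodic protocol concrete, a $K$-shot $N$-way task can be sampled from a labeled pool as in the following sketch (purely illustrative; the per-class image container and the number of query samples per class are our own assumptions):
\begin{verbatim}
import random

def sample_task(examples_by_class, N=5, K=1, Q=15):
    # examples_by_class: dict mapping a class label to its list of images.
    classes = random.sample(list(examples_by_class.keys()), N)
    support, query = [], []
    for y in classes:
        imgs = random.sample(examples_by_class[y], K + Q)
        support += [(x, y) for x in imgs[:K]]  # K labeled shots per class
        query += [(x, y) for x in imgs[K:]]    # Q query samples per class
    return support, query
\end{verbatim}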
The proposed AFHN approaches this problem by proposing a general conditional WGAN based FSL framework and two novel regularization terms. Figure~\ref{fremework} illustrates the training pipeline.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{./figs/framework_refine.pdf}
\end{center}
\caption{Framework of the proposed AFHN.
AFHN takes as input a support set and a query set, where images in the query set belong to the classes sampled in the support set. Each image in the support set is fed to the feature extraction network $F$, resulting in the feature embedding $\textbf{s}$. With $\textbf{s}$, the feature generator $G$ synthesizes two fake features $\tilde{\textbf{s}}_1$ and $\tilde{\textbf{s}}_2$ by combining $\textbf{s}$ with two randomly sampled variables $\textbf{z}_1$ and $\textbf{z}_2$. The discriminator $D$ discriminates the real feature $\textbf{s}$ from the fake features $\tilde{\textbf{s}}_1$ and $\tilde{\textbf{s}}_2$, resulting in the GAN loss $L_{GAN}$. By analyzing the relationship between ($\textbf{z}_1$, $\textbf{z}_2$) and ($\tilde{\textbf{s}}_1$, $\tilde{\textbf{s}}_2$), we get the anti-collapse loss $L_{ar}$. The proposed few-shot classifier classifies the features of the query images based on the fake features $\tilde{\textbf{s}}_1$ and $\tilde{\textbf{s}}_2$, resulting in the classification loss $L_{cr}$.}
\label{fremework}
\end{figure*}
\noindent\textbf{FSL framework with conditional WGAN}.
For a typical FSL task $\mathcal{T}=(S_\mathcal{T}, Q_\mathcal{T})$,
the feature extraction network $F$ produces a representation vector for each image.
Specifically for an
image from the support set $(\textbf{x}, y)\in S_\mathcal{T}$, $F$ generates
\begin{equation}
\mathbf{s} = F(\mathbf{x}).
\end{equation}
When there are multiple samples for class $y$, i.e., $K > 1$, we simply average the feature vectors and take the averaged vector as the prototype of class $y$ \cite{snell2017prototypical}. Conditioned on $\mathbf{s}$, we synthesize fake features for the class.
Unlike previous GAN models which sample a single random noise variable from some distribution, we sample two noise variables $\textbf{z}_1$ and $\textbf{z}_2\sim N(0, 1)$. The generator $G$ synthesizes fake feature $\tilde{\textbf{s}}_1$ ($\tilde{\textbf{s}}_2$) taking as input $\textbf{z}_1$ ($\textbf{z}_2$) and the class prototype $\mathbf{s}$,
\begin{equation}
\tilde{\textbf{s}}_i = G(\textbf{s}, \textbf{z}_i), \hspace{8pt} i = 1, 2.
\label{generator}
\end{equation}
The generator $G$ aims to synthesize $\tilde{\textbf{s}}_i$ to be as similar as possible to $\textbf{s}$.
The discriminator $D$ tries to discern $\tilde{\textbf{s}}_i$ as fake and $\textbf{s}$ as real.
Within the WGAN framework, the adversarial training objective is as follows,
\begin{equation}
\begin{array}{cl}
L_{GAN_i}=\underset{(\textbf{x},y)\sim S_\mathcal{T}}{\mathbb{E}}[D(\tilde{\mathbf{s}}_i, \mathbf{s})]
- \underset{(\textbf{x},y)\sim S_\mathcal{T}}{\mathbb{E}} [D(\textbf{s}, \mathbf{s})] \\
+ \lambda \underset{(\textbf{x},y)\sim S_\mathcal{T}}{\mathbb{E}}[(\|\nabla_{\hat{\mathbf{s}}_i} D(\hat{\mathbf{s}}_i, \mathbf{s})\|_2-1)^2],
\hspace{3pt} \hspace{3pt} i = 1, 2.
\end{array}
\label{cwgan}
\end{equation}
Simply training the model with the above GAN loss does not guarantee the generated features are well suited for learning a discriminative classifier because it neglects the inter-class competing information among different classes. Moreover, since the conditioned feature vectors are of high dimension and structured, it is likely that the generator will neglect the noise vectors and all synthesized features collapse to a single or few points in the feature space, i.e., the so-called mode collapse problem. To avoid these problems, we append the objective function with a classification regularization term and an anti-collapse regularization term, aiming to encourage both diversity and discriminability of the synthesized features.
\noindent\textbf{Classification regularizer}.
As our training objective is to correctly classify samples in the query set $Q_\mathcal{T}$ given the support set $S_\mathcal{T}$, we encourage discriminability of the synthesized features by requiring them to serve the classification task as well as the real features do.
Inspired by \cite{snell2017prototypical}, we define a non-parametric FSL classifier which calculates the probability that a query image $(\textbf{x}_q, y_q)\in Q_\mathcal{T}$ belongs to the same class as the synthesized feature $\tilde{\textbf{s}}_i$ as
\begin{equation}
P(y_q=y|\mathbf{x}_q) = \frac{\exp(\cos(\tilde{\mathbf{s}}^y_i, \mathbf{q}))}{\sum_{j=1}^N \exp(\cos(\tilde{\mathbf{s}}^j_i, \mathbf{q}))},
\label{probability}
\end{equation}
where $\textbf{q}=F(\textbf{x}_q)$, $\tilde{\mathbf{s}}^j_i$ is the synthesized feature for the $j$-th class, and $\cos(\textbf{a}, \textbf{b})$ is the cosine similarity of two vectors.
The adoption of cosine similarity, rather than the Euclidean distance used in \cite{snell2017prototypical}, is inspired by a recent FSL algorithm \cite{gidaris2018dynamic} which shows that cosine similarity can bound and reduce the variance of the features and results in models with better generalization.
With the proposed FSL classifier, the classification regularizer in a typical FSL task is defined as follows:
\begin{equation}
L_{cr_i}=-\underset{{\tiny(\mathbf{x}_q, y_q)\sim Q_{\mathcal{T}}}}{\mathbb{E}}\Big[\frac{1}{N}\sum^N_{y=1}\mathbb{I}(y_q=y)\log P(y_q=y|\mathbf{x}_q)\Big],
\end{equation}
for $i=1,2$, where $\mathbb{I}(\cdot)$ is the indicator function. This regularizer explicitly encourages the synthesized features to have high correlation with features from the same class (the conditional context) and low correlation with features from the other classes. To achieve this, the synthesized features must encapsulate discriminative information about the conditioned class, which secures discriminability.
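
In code, the classifier of Eq.~\eqref{probability} and the regularizer $L_{cr_i}$ reduce to a cosine-similarity softmax followed by cross-entropy, as in the following sketch (our own illustration; \texttt{fake} is assumed to hold one synthesized feature per class, with rows ordered by class index):
\begin{verbatim}
import torch
import torch.nn.functional as nnf

def classification_regularizer(fake, query, query_labels):
    # fake: (N, d) synthesized features, one per class; query: (M, d) query features.
    logits = nnf.normalize(query, dim=1) @ nnf.normalize(fake, dim=1).t()
    # Softmax over the N classes and cross-entropy against the query labels.
    return nnf.cross_entropy(logits, query_labels)
\end{verbatim}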
\noindent\textbf{Anti-collapse regularizer}.
GAN models are known to suffer from the notorious mode collapse problem, especially conditional GANs where structured and high-dimensional data (e.g., images) are usually used as the conditional contexts. As a consequence, the generator likely ignores the latent code (noise) that accounts for diversity and focuses only on the conditional contexts, which is undesirable. In our case, the goal is to augment the few labeled samples in the feature space; when mode collapse occurs, all synthesized features may collapse to a single or a few points in the feature space, failing to diversify the labeled samples. Observing that noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to the feature space, we directly constrain the ratio between the dissimilarity of two synthesized feature vectors and the dissimilarity of the two noise vectors that generate them.
Remember that we sample two random variables $\textbf{z}_1$ and $\textbf{z}_2$. We generate two fake feature vectors $\tilde{\textbf{s}}_1$ and $\tilde{\textbf{s}}_2$ from them.
When $\textbf{z}_1$ and $\textbf{z}_2$ are closer, $\tilde{\textbf{s}}_1$ and $\tilde{\textbf{s}}_2$ are more likely to be collapsed into the same mode.
To mitigate this, we define the anti-collapse regularization term as
\begin{equation}
L_{ar} = \underset{(\mathbf{x}, y)\sim S_{\mathcal{T}}}{\mathbb{E}}\Big[ \frac{1-\cos(\tilde{\mathbf{s}}_1, \tilde{\mathbf{s}}_2)}{1-\cos(\mathbf{z}_1, \mathbf{z}_2)}\Big].
\label{reg_loss}
\end{equation}
We can observe that this term amplifies the dissimilarity of
the two fake feature vectors when the latent codes generating them are highly similar. Since the cases where mode collapse is more likely to occur receive a higher penalty, the generator is forced to mine minor modes in the feature space during training. The discriminator will also handle fake features from the minor modes. Thus, more diverse features are expected to be synthesized when applying the generator to novel classes.
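
The regularizer of Eq.~\eqref{reg_loss} amounts to only a few lines, sketched below with the same conventions as above (the small constant added to the denominator is our own numerical safeguard):
\begin{verbatim}
import torch.nn.functional as nnf

def anti_collapse(s1, s2, z1, z2, eps=1e-8):
    # Ratio of feature dissimilarity to noise dissimilarity, averaged over the batch.
    num = 1.0 - nnf.cosine_similarity(s1, s2, dim=1)
    den = 1.0 - nnf.cosine_similarity(z1, z2, dim=1)
    return (num / (den + eps)).mean()
\end{verbatim}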
With the above two regularization terms, we reach our final training objective as
\begin{equation}
\underset{G}{\min} \hspace{1pt} \underset{D}{\max} \hspace{2pt} \hspace{1pt} \sum^2_{i=1} L_{GAN_i} + \alpha \sum^2_{i=1}L_{cr_i}
+ \beta \frac{1}{L_{ar}},
\label{Obj}
\end{equation}
where $\alpha$ and $\beta$ are two hyper-parameters. \textbf{Algorithm 1} outlines the main training steps of the proposed method.
\begin{table}[t]
\small
\centering
\begin{tabular}{l}
\hline
\noindent \textbf{Algorithm 1.} Proposed FSL algorithm \\\hline
\textbf{Input:} Training set $\mathcal{D}_t=\{\mathcal{X}_t, \mathcal{Y}_t\}$, parameters $\lambda$, $\alpha$, and $\beta$. \\
\textbf{Output:} Feature extractor $F$, generator $G$, discriminator $D$. \\\hline
1. Train $F$ as a standard classification task using $\mathcal{D}_t$. \\
\textbf{while} not done \textbf{do}\\
\hspace{3mm} // \textit{Fix $G$ and update $D$}. \\
\hspace{3mm} 2. Sample from $\mathcal{D}_t$ a batch of FSL tasks $\mathcal{T}^d_i\sim p(\mathcal{D}_t)$.\\
\hspace{3mm} \textbf{For} each $\mathcal{T}^d_i$ \textbf{do} \\
\hspace{6mm} 3. Sample a support set $S_\mathcal{T}=\{\{\textbf{x}_{i, j}\}^{K}_{i=1}, y_j\}^{N}_{j=1}$ and \\
\hspace{10mm} query set $Q_\mathcal{T}=\{\{\textbf{x}_{k, j}\}^{Q}_{k=1}, y_j\}^{N}_{j=1}$. \\
\hspace{6mm} 4. Compute prototypes of the $N$ classes $\mathcal{P}=\{\textbf{s}_j\}^{N}_{j=1}$, \\
\hspace{10mm} where $\mathbf{s}_j = \frac{1}{K} \sum_{i=1}^{K} F(\textbf{x}_{i, j})$. \\
\hspace{6mm} 5. Sample $N$ noise variables $\mathcal{Z}_1=\{\textbf{z}^j_1\}_{j=1}^N$ and \\
\hspace{10mm} variables $\mathcal{Z}_2=\{\textbf{z}^j_2\}_{j=1}^N$. \\
\hspace{6mm} 6. Generate fake feature sets $\tilde{\mathcal{S}}_1=\{\tilde{\textbf{s}}^j_1\}_{j=1}^N$ \\
\hspace{10mm} and $\tilde{\mathcal{S}}_2=\{\tilde{\textbf{s}}^j_2\}_{j=1}^N$ according to Eq. \eqref{generator}. \\
\hspace{6mm} 7. Update $D$ by maximizing Eq. \eqref{Obj}. \\
\hspace{3mm} \textbf{end For} \\\vspace{1pt}
\hspace{3mm} // \textit{Fix $D$ and update $G$}. \\
\hspace{3mm} 8. Sample from $\mathcal{D}_t$ a batch of FSL tasks $\mathcal{T}^g_i\sim p(\mathcal{D}_t)$. \\
\hspace{3mm} \textbf{For} each $\mathcal{T}^g_i$ \textbf{do} \\
\hspace{6mm} 9. Execute steps 3 - 7. \\
\hspace{6mm} 10. Update $G$ by minimizing Eq. \eqref{Obj}. \\
\hspace{3mm} \textbf{end For} \\
\textbf{end while}
\\ \hline
\vspace{0.5pt}
\end{tabular}
\vspace{-15pt}
\end{table}
\subsection{Classification with Synthesized Samples}
In the test stage, given an FSL task $\mathcal{T}'=(S'_\mathcal{T}, Q'_\mathcal{T})$ randomly sampled from the test set, whose classes have no overlap with those in the training set, we first augment the labeled support set $S'_\mathcal{T}$ with the learned generator $G$. Then, we train a classifier with the augmented support set. The classifier is used to classify samples from the query set $Q'_\mathcal{T}$.
Specifically, suppose that after data augmentation we obtain an enlarged support set
$\hat{S}'_\mathcal{T} = \{(\textbf{s}_1, y_1 ), (\textbf{s}_2 , y_2), \cdots, (\textbf{s}_{N\times K'} , y_{N\times K'})\}$,
where $K'$ is the number of samples synthesized for each class.
With $\hat{S}'_\mathcal{T}$, we train a standard Softmax classifier $\textbf{f}_c$ as
\begin{equation}
\min_{\theta} \underset{(\mathbf{s}, y)\sim \hat{S}'_\mathcal{T}}{\mathbb{E}}\big[-\log P(y|\mathbf{s}; \theta)\big],
\label{cls_loss}
\end{equation}
where $\theta$ is the parameter of $\textbf{f}_c$. With $\textbf{f}_c$, we classify samples from $Q'_\mathcal{T}$.
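
The test-time procedure can be sketched as follows (an illustrative simplification; the generator interface, the feature dimension of 512 for ResNet18 features, and the classifier's training schedule are assumptions on our side):
\begin{verbatim}
import torch
import torch.nn as nn

def fit_classifier(prototypes, labels, G, num_syn=300, dim=512, n_classes=5):
    # Augment each class prototype with synthesized features,
    # then fit a standard Softmax (linear) classifier on them.
    feats, ys = [], []
    for s, y in zip(prototypes, labels):
        z = torch.randn(num_syn, s.numel())
        feats.append(G(s.expand(num_syn, -1), z))
        ys.append(torch.full((num_syn,), y, dtype=torch.long))
    feats, ys = torch.cat(feats).detach(), torch.cat(ys)
    clf = nn.Linear(dim, n_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    for _ in range(100):                  # a small number of epochs suffices
        opt.zero_grad()
        nn.functional.cross_entropy(clf(feats), ys).backward()
        opt.step()
    return clf
\end{verbatim}
The classifier returned here is then applied to the extracted features of the query images in $Q'_\mathcal{T}$.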
\section{Experiments}
We evaluate AFHN on three common benchmark datasets, namely, \textit{Mini-ImageNet} \cite{vinyals2016matching}, \textit{CUB} \cite{wah2011multiclass} and \textit{CIFAR100} \cite{krizhevsky2009learning}.
The \textit{Mini-ImageNet} dataset is a subset of ImageNet. It has 60,000 images from 100 classes, 600 images for each class. We follow previous methods and use the splits in \cite{ravi2016optimization} for evaluation, i.e., 64, 16, 20 classes as training, validation, and testing sets, respectively. The \textit{CUB} dataset is a fine-grained dataset of totally 11,788 images from 200 categories of birds. We use the split in \cite{hilliard2018few} and 100, 50, 50 classes for training, validation, and testing, respectively. The CIFAR-100 dataset contains 60,000 images from 100 categories. We use the same data split as in \cite{zhou2018deep}. In particular, 64, 16 and 20 classes are used for training, validation and testing, respectively.
Following previous methods, we evaluate 5-way 1-shot and 5-way 5-shot classification tasks where each task instance involves classifying test images from 5 sampled classes with 1 or 5 randomly sampled images for each class as the support set. In order to reduce variance, we repeat the evaluation task 600 times and report the mean of the accuracy with a 95\% confidence interval.
\subsection{Implementation Details}
Following the previous data augmentation based methods \cite{schwartz2018delta,Chen2019multi,Chen2019Image}, we use ResNet18 \cite{he2016deep} as our feature extraction network $F$.
We implement the generator $G$ as a two-layer MLP, with LeakyReLU activation for the first layer and ReLU activation for the second one. The dimension of the hidden layer is 1024. The discriminator is also a two-layer MLP, with LeakyReLU as the activation function for the first layer. The dimension of the hidden layer is also 1024. The noise vectors $\textbf{z}_1$ and $\textbf{z}_2$ are drawn from a unit Gaussian with the same dimensionality as the feature embeddings.
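
A minimal PyTorch sketch of these two networks is given below; the concatenation of the prototype with the noise (and with the candidate feature for the discriminator), the LeakyReLU slope and the one-dimensional critic output are our own assumptions, as the text does not specify them:
\begin{verbatim}
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, feat_dim, hidden=1024):
        super().__init__()
        # Two-layer MLP: LeakyReLU after the first layer, ReLU after the second.
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.LeakyReLU(0.2),
                                 nn.Linear(hidden, feat_dim), nn.ReLU())

    def forward(self, s, z):
        return self.net(torch.cat([s, z], dim=1))

class Discriminator(nn.Module):
    def __init__(self, feat_dim, hidden=1024):
        super().__init__()
        # Two-layer MLP with LeakyReLU after the first layer.
        self.net = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.LeakyReLU(0.2),
                                 nn.Linear(hidden, 1))

    def forward(self, s_tilde, s):
        return self.net(torch.cat([s_tilde, s], dim=1))
\end{verbatim}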
Following the data augmentation based FSL methods \cite{schwartz2018delta,Chen2019multi}, we adopt a two-step training procedure. In the first step, we train only the feature extraction network $F$, as a multi-class classification task using the training split. We use the Adam optimizer with an initial learning rate of $10^{-3}$ which decays by half every 10 epochs. We train $F$ for 100 epochs with a batch size of 128. In the second training stage, we train the generator and discriminator alternately, using features extracted by $F$,
and update $G$ after every 5 updates of $D$. We again use the Adam optimizer, with an initial learning rate of $10^{-5}$ that decays by half every 20 epochs, for both $G$ and $D$. We train the whole network for 100 epochs with 600 randomly sampled FSL tasks in each epoch. For the hyper-parameters, we set $\lambda=10$ as suggested by \cite{gulrajani2017improved}, and $\alpha=\beta=1$ for all three datasets. During the test stage, we synthesize 300 fake features for each class.
The code is developed based on PyTorch.
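
The second-stage schedule described above can be written down as the following sketch (the loss callbacks assembling Eq.~\eqref{Obj} are left abstract and are hypothetical helpers):
\begin{verbatim}
import torch

def train_gan(G, D, sample_tasks, critic_loss, generator_loss, epochs=100):
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-5)
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-5)
    # Halve the learning rates every 20 epochs.
    sch_D = torch.optim.lr_scheduler.StepLR(opt_D, step_size=20, gamma=0.5)
    sch_G = torch.optim.lr_scheduler.StepLR(opt_G, step_size=20, gamma=0.5)
    for _ in range(epochs):
        for it, task in enumerate(sample_tasks()):   # 600 FSL tasks per epoch
            opt_D.zero_grad(); critic_loss(G, D, task).backward(); opt_D.step()
            if it % 5 == 4:                          # one G update per 5 D updates
                opt_G.zero_grad(); generator_loss(G, D, task).backward(); opt_G.step()
        sch_D.step(); sch_G.step()
\end{verbatim}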
\begin{table}
\small
\renewcommand{\tabcolsep}{5pt}
\begin{center}
\begin{tabular}{|l|ccccc|}\hline
cWGAN & \xmark & \xmark & \Checkmark & \Checkmark & \Checkmark \\
CR & \xmark & \Checkmark & \xmark & \Checkmark & \Checkmark \\
AR & \xmark & \xmark & \xmark & \xmark & \Checkmark \\ \hline
Accuracy (\%) & 52.73 & 55.65 & 57.58 & 60.56 & 62.38 \\ \hline
\end{tabular}
\end{center}
\vspace{-5pt}
\caption{Ablation study on the \textit{Mini-ImageNet} dataset for the 5-way 1-shot setting. cWGAN, CR, and AR represent the conditional WGAN framework, the classification regularizer, and the anti-collapse regularizer, respectively.
The baseline result (52.73) is obtained by applying the SVM classifier directly on ResNet18 features without data augmentation. The result (55.65) with only CR added is obtained from the synthesized features produced by the generator without the discriminator and AR during training.}
\label{Table_ablation}
\end{table}
\begin{table*}
\small
\renewcommand{\tabcolsep}{10pt}
\begin{center}
\begin{tabular}{|l|l|c|c|c|c|} \hline
& & Backbone & Reference & 1-shot & 5-shot \\ \hline\hline
& ResNet18 + SVM (baseline)
& ResNet18
&
& 52.73$\pm$1.44
& 73.31$\pm$0.81
\\ \hline
\multirow{10}{*}{MetricL}
& Matching Net \cite{vinyals2016matching}
& Conv-64F
& NeurIPS'16
& 43.56$\pm$0.84
& 55.31$\pm$0.73
\\
& PROTO Net \cite{snell2017prototypical}
& Conv-64F
& NeurIPS'17
& 49.42$\pm$0.78
& 68.20$\pm$0.66
\\
& MM-Net \cite{cai2018memory}
& Conv-64F
& CVPR'18
& 53.37$\pm$0.48
& 66.97$\pm$0.35
\\
& GNN \cite{garcia2017few}
& Conv-256F
& Arxiv'17
& 50.33$\pm$0.36
& 66.41$\pm$0.63
\\
& RELATION NET \cite{yang2018learning}
& Conv-64F
& CVPR'18
& 50.44$\pm$0.82
& 65.32$\pm$0.70
\\
& DN4 \cite{li2019revisiting}
& Conv-64F
& CVPR'19
& 51.24$\pm$0.74
& 71.02$\pm$0.64
\\
& TPN \cite{liu2018learning}
& ResNet8
& ICLR'19
& 55.51$\pm$0.86
& 69.86$\pm$0.65
\\
& PARN \cite{wu2019parn}
& Conv-64F
& ICCV'19
& 55.22$\pm$0.84
& 71.55$\pm$0.66
\\
& SAML \cite{hao2019collect}
& Conv-64F
& ICCV'19
& 57.69$\pm$0.20
& 73.03$\pm$0.16
\\
& DCEM \cite{dvornik2019diversity}
& ResNet18
& ICCV'19
& 58.71$\pm$0.62
& 77.28$\pm$0.46
\\ \hline\hline
\multirow{9}{*}{MetaL}
& MAML \cite{finn2017model}
& Conv-32F
& ICML'17
& 48.70$\pm$1.84
& 63.11$\pm$0.92
\\
& META-LSTM \cite{ravi2016optimization}
& Conv-32F
& ICLR'17
& 43.44$\pm$0.77
& 60.60$\pm$0.71
\\
& SNAIL \cite{mishra2017simple}
& ResNet-256F
& ICLR'18
& 55.71$\pm$0.99
& 68.88$\pm$0.92
\\
& MACO \cite{hilliard2018few}
& Conv-32F
& Arxiv'18
& 41.09$\pm$0.32
& 58.32$\pm$0.21
\\
& DFSVL \cite{gidaris2018dynamic}
& Conv-64F
& CVPR'18
&55.95$\pm$0.89
&73.00$\pm$0.68
\\
& META-SGD \cite{li2017meta}
& Conv-32F
& Arxiv'17
& 50.47$\pm$1.87
& 64.03$\pm$0.94
\\
& PPA \cite{qiao2017few}
& WRN-28-10
& CVPR'18
& 59.60$\pm$0.41
& 73.74$\pm$0.19
\\
& UFDA \cite{li2019novel}
& ResNet18
& CIKM'19
& 60.51
& 77.08
\\
& LEO \cite{rusu2018meta}
& WRN-28-10
& ICLR'19
& 61.76$\pm$0.08
& 77.59$\pm$0.12
\\ \hline\hline
\multirow{5}{*}{DataAug}
& MetaGAN \cite{zhang2018metagan}
& Conv-32F
& NeurIPS'18
& 52.71$\pm$0.64
& 68.63$\pm$0.67
\\
& Dual TriNet \cite{Chen2019Image}
& ResNet18
& TIP'19
& 58.80$\pm$1.37
& 76.71$\pm$0.69
\\
& $\Delta$-encoder \cite{schwartz2018delta}
& ResNet18
& NeurIPS'18
& 59.90
& 69.70
\\
& IDeMe-Net \cite{Chen2019Image}
& ResNet18
& CVPR'19
& 59.14$\pm$0.86
& 74.63$\pm$0.74
\\ \cline{2-6}
& AFHN (Proposed)
& ResNet18
&
& \textbf{62.38$\pm$0.72}
& \textbf{78.16$\pm$0.56}
\\ \hline
\end{tabular}
\end{center}
\vspace{-5pt}
\caption{Few-shot classification accuracy on \textit{Mini-Imagenet}.
``MetricL'', ``MetaL'' and ``DataAug'' represent metric learning based category, meta-learning based category and data augmentation based category, respectively.
The “$\pm$” indicates 95\% confidence intervals over tasks. The best results are in \textbf{bold}.}
\label{result_fsl_mini}
\vspace{-5pt}
\end{table*}
\begin{table*}
\small
\renewcommand{\tabcolsep}{6pt}
\begin{center}
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline
& & \multirow{2}{*}{Backbone} & \multirow{2}{*}{Reference} & \multicolumn{2}{c|}{\textit{CUB}} & \multicolumn{2}{c|}{\textit{CIFAR100}} \\ \cline{5-8}
& & & & 1-shot & 5-shot & 1-shot & 5-shot \\ \hline
& ResNet18 + SVM (baseline)
& ResNet18
&
& 66.54$\pm$0.53
& 82.38$\pm$0.43
& 59.65$\pm$0.78
& 76.75$\pm$0.73
\\ \hline\hline
\multirow{4}{*}{MetricL}
& Matching Net \cite{vinyals2016matching}
& Conv-64F
& NeurIPS'16
& 49.34
& 59.31
& 50.53$\pm$0.87
& 60.30$\pm$0.82
\\
& PROTO Net \cite{snell2017prototypical}
& Conv-64F
& NeurIPS'17
& 45.27
& 56.35
& -
& -
\\
& DN4 \cite{li2019revisiting}
& Conv-64F
& CVPR'19
& 53.15$\pm$0.84
& 81.90$\pm$0.60
& -
& -
\\
& SAML \cite{hao2019collect}
& Conv-64F
& ICCV'19
& 69.33$\pm$0.22
& 81.56$\pm$0.15
& -
& -
\\ \hline\hline
\multirow{4}{*}{MetaL}
& MAML \cite{finn2017model}
& Conv-32F
& ICML'17
& 38.43
& 59.15
& 49.28$\pm$0.90
& 58.30$\pm$0.80
\\
& META-LSTM \cite{ravi2016optimization}
& Conv-32F
& ICLR'17
& 40.43
& 49.65
& -
& -
\\
& MACO \cite{hilliard2018few}
& Conv-32F
& Arxiv'18
& 60.76
& 74.96
& -
& -
\\
& META-SGD \cite{li2017meta}
& Conv-32F
& Arxiv'17
& 66.90
& 77.10
& 61.60
& 77.90
\\ \hline\hline
\multirow{3}{*}{DataAug}
& Dual TriNet \cite{Chen2019multi}
& ResNet18
& TIP'19
& 69.61
& 84.10
& 63.41$\pm$0.64
& 78.43$\pm$0.64
\\
& $\Delta$-encoder \cite{schwartz2018delta}
& ResNet18
& NeurIPS'18
& 69.80$\pm$0.46
& 82.60$\pm$0.35
& 66.70
& 79.80
\\ \cline{2-8}
& AFHN (Proposed)
& ResNet18
&
& \textbf{70.53}$\pm$1.01
& 83.95$\pm$0.63
& \textbf{68.32$\pm$0.93}
& \textbf{81.45$\pm$0.87}
\\ \hline
\end{tabular}
\vspace{-3pt}
\end{center}
\caption{Few-shot classification accuracy on \textit{CUB} and \textit{CIFAR100}. Please refer to Table \ref{result_fsl_mini} for details.}
\label{result_fsl_cub_cifar100}
\end{table*}
\subsection{Ablation Study}
The proposed AFHN consists of the novel conditional WGAN (cWGAN) based feature synthesis framework and the two regularizers that encourage diversity and discriminability of the synthesized features, i.e., the Classification Regularizer (CR) and the Anti-collapse Regularizer (AR).
To evaluate the effectiveness and impact of these components, we conduct an ablation study on the \textit{Mini-ImageNet} dataset for the 5-way 1-shot setting. The results are shown in Table \ref{Table_ablation}.
\begin{figure*}
\begin{center}
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=1.\linewidth]{./figs/tsne_wt_cls.pdf}
\end{minipage}%
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=1.\linewidth]{./figs/tsne_wt_reg.pdf}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=1.\linewidth]{./figs/tsne_full.pdf}
\end{minipage} \\
\begin{minipage}{.33\textwidth}
\centering
\text{cWGAN}
\end{minipage}%
\begin{minipage}{0.33\textwidth}
\centering
\text{cWGAN + CR}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\text{cWGAN + CR + AR}
\end{minipage}
\vspace{-8pt}
\end{center}
\caption{t-SNE \cite{maaten2008visualizing} visualization of synthesized feature embeddings.
The real features are indicated by $\star$. Different colors represent different classes.}
\label{tsne}
\vspace{-5pt}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=0.3\textwidth]{./figs/analyze_syn_sample1.pdf}
\end{center}
\vspace{-10pt}
\caption{Impact of the number of synthesized samples for each class on the \textit{Mini-ImageNet} dataset.}
\label{impact_of_sample_number}
\vspace{-8pt}
\end{figure}
\noindent\textbf{CR}. This regularizer constrains the synthesized features to have desirable classification properties so that a discriminative classifier can be trained from them. We can see that when it is used as the only regularization for the generator, it raises the baseline result from 52.73 to 55.65. On the other hand, when it is used along with cWGAN (where the discriminator regularizes the generated features, resulting in the GAN loss), it helps further boost the performance from 57.58 to 60.56. Therefore, in both cases (with and without cWGAN), CR helps enhance the discriminability of the synthesized features and leads to a performance boost.
\noindent\textbf{cWGAN}. Compared with the baseline (without data augmentation), cWGAN helps raise the accuracy from 52.73 to 57.58. This is because the synthesized features enhance the intra-class variance, which makes classification decision boundaries much sharper. Moreover, with CR as the regularizer, our cWGAN based generative model boosts the performance of the naive generative model from 55.65 to 60.56. This further substantiates the effectiveness of the proposed cWGAN framework. The performance gain is due to the adversarial game between the generator and the discriminator, which enhances the generator's capability of modeling complex data distribution among training data. The enhanced generator is therefore able to synthesize features of both higher diversity and discriminability.
As mentioned in the related work, one of the major differences of the proposed AFHN from the other feature hallucination based FSL method \cite{wang2018low} is that AFHN is an adversarial generative model while \cite{wang2018low} uses a naive generative model. This study thus evidences the advantage of AFHN over \cite{wang2018low}.
\noindent\textbf{AR}. AR aims to encourage the diversity of the synthesized features by explicitly penalizing the cases where mode collapse is more likely to occur. Table \ref{Table_ablation} shows that it brings a further performance gain of about 2\%, which confirms its effectiveness.
\subsection{Comparative Results}
\noindent\textbf{Mini-Imagenet}.
\textit{Mini-Imagenet} is the most extensively evaluated dataset. From Table \ref{result_fsl_mini} we can observe that AFHN attains a new state of the art for both the 1-shot and 5-shot settings. Compared with the other four data augmentation based methods, AFHN achieves significant improvements: it beats $\Delta$-encoder \cite{schwartz2018delta} by more than 8\% for the 5-shot setting and Dual TriNet \cite{Chen2019multi} by more than 3\% for the 1-shot setting. Compared with MetaGAN \cite{zhang2018metagan}, which is also based on GAN, AFHN achieves about 10\% improvements for both the 1-shot and 5-shot settings. Besides the significant advantages over the peer data augmentation based methods, AFHN also exhibits remarkable advantages over the other two categories of methods. It beats the best metric learning based method DCEM \cite{dvornik2019diversity} by about 3.5\% for the 1-shot setting, and it also performs better than the state-of-the-art meta-learning based algorithms.
Compared with the baseline method, ``ResNet18+SVM'', AFHN reaches about 10\% and 5\% improvements for the 1-shot and 5-shot settings, respectively. This substantiates the effectiveness of our proposed data augmentation techniques.
\noindent\textbf{CUB}. This is a fine-grained bird dataset widely used for fine-grained classification. It has only recently been employed for few-shot classification evaluation, so relatively few results are reported on this dataset. From Table \ref{result_fsl_cub_cifar100} we can see that AFHN reaches results comparable with the other two data augmentation based methods, Dual TriNet and $\Delta$-encoder. It beats the best metric learning based method SAML \cite{hao2019collect} by 2.4\% for the 5-shot setting, and performs significantly better than the meta-learning based methods. Compared with the baseline, we only obtain a moderate improvement in the 1-shot setting and a marginal boost for the 5-shot setting. We speculate the reason is that this dataset is relatively small, with fewer than 60 images per class on average, and a large number of classes only have about 30 images. Due to the small scale of this dataset, the intra-class variance is less significant than that of the \textit{Mini-Imagenet} dataset, such that 5 labeled samples are sufficient to capture most of the intra-class variance; performing data augmentation is therefore less crucial than for the other datasets.
\noindent\textbf{CIFAR100}.
This dataset has the same structure as the \textit{Mini-ImageNet} dataset. Table \ref{result_fsl_cub_cifar100} shows that AFHN performs the best among all existing methods, and the advantages are sometimes significant. AFHN beats Dual TriNet by 5\% and 3\% for the 1-shot and 5-shot settings, respectively. Compared with the best meta-learning based method, we obtain 7\% and 4\% improvements for the 1-shot and 5-shot settings, respectively. Compared with the baseline method, AFHN also reaches remarkable gains of about 10\% and 5\% for 1-shot and 5-shot, respectively. This improvement convincingly substantiates the effectiveness of our GAN based data augmentation method for solving the FSL problem.
In summary, we achieve significant improvements over existing state-of-the-art methods on two of the three datasets, while being comparable on the remaining one. On all datasets, our method yields a significant boost over the baseline without data augmentation. These experiments substantiate the effectiveness and superiority of the proposed method.
\subsection{Further Analysis}
\noindent\textbf{Impact of the number of synthesized features}.
Figure \ref{impact_of_sample_number} shows, for the \textit{Mini-ImageNet} dataset, how the recognition accuracy varies with the number of features synthesized for each class during test.
We can observe that the classification accuracy keeps improving as more features are synthesized at the beginning, and then remains stable when even more samples are synthesized.
This is reasonable because the class variance encapsulated by the few labeled samples has an upper bound; data augmentation based on these labeled samples can enlarge the variance to some extent, but it is still bounded by the few labeled samples themselves. Once this bound is reached, the performance reasonably turns stable.
\noindent\textbf{Visualization of synthesized features}.
We showed quantitatively in the ablation study that, owing to the CR and AR regularizers, we can generate diverse and discriminative features which bring significant performance gains. Here we further study the effect of the two regularizers through the t-SNE visualization of the synthesized features. As shown in Figure \ref{tsne}, the synthesized features of different classes mix together when using only cWGAN for augmentation. As analyzed before, cWGAN does not guarantee synthesizing semantically meaningful features. The problem is substantially resolved when we train cWGAN with CR: the synthesized features exhibit a clear clustering structure, which helps train a discriminative classifier. Furthermore, with AR added, the synthesized features still exhibit a favorable clustering structure. Taking a closer look at the visualization, we can find that the features synthesized with AR are more diverse than those without it: the clusters are less compact, stretched over larger regions, and even contain some noise. This shows that AR indeed helps diversify the synthesized features.
\section{Conclusions}
We introduce the Adversarial Feature Hallucination Networks (AFHN), a new data augmentation based few-shot learning approach. AFHN consists of a novel conditional Wasserstein GAN (cWGAN) based feature synthesis framework, the classification regularizer (CR) and the anti-collapse regularizer (AR).
Based on cWGAN, our framework synthesizes fake features for new classes by using the features of the few labeled samples as the conditional context. CR secures feature discriminability by requiring the synthesized features to be of high similarity with features of samples from the same classes, and of low similarity with those from different classes. AR aims to enhance the diversity of the synthesized features by directly penalizing the cases where the mode collapse problem is likely to occur. The ablation study shows the effectiveness of the cWGAN based feature synthesis framework, as well as of the two regularizers. Comparative results verify the superiority of AFHN over the existing data augmentation based FSL approaches as well as other state-of-the-art ones.
\vspace{5pt}
\noindent\textbf{Acknowledgement}: This research is supported by the U.S. Army Research Office Award W911NF-17-1-0367.
\clearpage
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The study of the Quark-Gluon Plasma (QGP) \cite{pasechnik2017} in heavy-ion collisions requires a precise knowledge of the initial state of the system in order to disentangle QGP-induced phenomena from other nuclear effects. In this regard, the weakly-interacting Z and W$^\pm$ bosons, when detected through their leptonic decay channels, provide a medium-blind reference that allows us to probe initial-state effects such as the nuclear modifications of the Parton Distribution Functions (PDFs). The various sets of nuclear PDFs (nPDFs) available currently suffer from large uncertainties due to the lack of experimental constraints in the $x$-range probed at the LHC \cite{paukkunen2011}.
Thanks to the high collision energies and luminosities delivered by the LHC, the measurement of the production of electroweak bosons is now accessible in heavy-ion collisions \cite{aliceZPbPb5tev, aliceWZpPb5tev, atlasZPbPb2tev, atlasZpPb5tev, lhcbZpPb5tev, cmsWPbPb2tev, cmsZPbPb2tev, cmsZpPb5tev}. The four main LHC experiments have complementary kinematic coverages that give access to a wide range of Bjorken-$x$ values (from $10^{-4}$ to almost unity) in a region of high virtuality ($Q^2 \sim M^2_{Z,W}$) where the nPDFs are poorly constrained by other experiments.
\section{Analysis and results}
\subsection{Procedure}
The Z and W$^\pm$ bosons are detected in their (di)muonic decay channels from data recorded with the ALICE muon spectrometer \cite{spectrometer}. The spectrometer is a conical-shaped detector consisting of five stations of tracking chambers, two stations of trigger chambers and a set of absorbers that shield the detector from various background sources, such as hadrons produced in the collision and the products of interactions of large-$\eta$ primary particles with the beam-pipe. A complete description of the ALICE detector can be found in \cite{alice}. The spectrometer acceptance covers the pseudo-rapidity interval $-4 < \eta < -2.5$, corresponding to a rapidity acceptance in the range $2.5 < y < 4$ for Pb--Pb collisions.\footnote{In the ALICE reference frame, the spectrometer covers negative $\eta$. However, positive values are used when referring to $y$.} In proton-lead collisions, the proton and lead beams have different energies; the nucleon-nucleon centre-of-mass frame is thus boosted with respect to the laboratory frame by $\Delta y = 0.465$ in the direction of the proton beam. The rapidity acceptance of the spectrometer is then $2.03 < y_{cms} < 3.53$ ($-4.46 < y_{cms} < -2.96$) in the p--going (Pb--going) direction, when the proton (Pb) beam moves toward the spectrometer.
In the following, results from three different periods are presented. The collision systems, energies in the centre-of-mass, integrated luminosities and performed measurements are summarized in Table \ref{periods}.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Collision system} & \textbf{Year} & $\mathbf{\sqrt{s_{_{\rm \textbf{NN}}}}}$ & \textbf{Luminosity} & \textbf{Analyses} \\
\hline \hline
p--Pb & \multirow{2}{*}{2013} & \multirow{2}{*}{5.02 TeV} & 5.03 $\pm$ 0.18 nb\textsuperscript{-1} & \multirow{2}{*}{Z, W$^\pm$} \\
Pb--p & & & 5.81 $\pm$ 0.20 nb\textsuperscript{-1} & \\
\hline
Pb--Pb & 2015 & 5.02 TeV & $\sim$ 225 $\rm \mu$b\textsuperscript{-1} & Z \\
\hline
p--Pb & \multirow{2}{*}{2016} & \multirow{2}{*}{8.16 TeV} & 8.47 $\pm$ 0.18 nb\textsuperscript{-1} & \multirow{2}{*}{Z} \\
Pb--p & & & 12.75 $\pm$ 0.25 nb\textsuperscript{-1} & \\
\hline
\end{tabular}
\caption{Analyses performed in the different data taking periods.}
\label{periods}
\end{table}
In order to ensure the quality of the data sample, a selection is applied to each muon track based on the trigger-tracker matching (each track reconstructed in the tracking chambers has to match a track segment in the trigger chambers), the distance at the end of the front absorber, and the product of the track momentum and its distance of closest approach (i.e. the distance to the primary vertex of the track extrapolated to the plane transverse to the beam axis and containing the vertex itself). This aims at reducing the background by removing tracks that are poorly reconstructed, that cross the high-Z material part of the detector, or that do not come from the interaction vertex.
The W$^\pm$ candidates are extracted by fitting the single-muon transverse momentum distribution, starting at $\ensuremath{p_{\rm T}} ^\mu > 10$ GeV/$c$ to remove muons from decays of low-mass particles. The fit is performed using a combination of Monte Carlo templates to account for the various contributions. In the Z case, a selection on the muon pseudo-rapidity ($-4 < \eta_\mu < -2.5$) and transverse momentum ($\ensuremath{p_{\rm T}} ^\mu > 20$ GeV/$c$), as well as on the dimuon invariant mass ($60 < m_{\mu\mu} < 120$ GeV/$c^2$), leaves a nearly background-free sample from which the Z signal is obtained by simply counting the number of opposite-sign dimuons in the fiducial region defined by the selection.
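
For illustration only, the Z-candidate counting in this fiducial region can be expressed as the following selection sketch (the muon container and its fields are hypothetical; the actual analysis is performed within the ALICE software framework):
\begin{verbatim}
import numpy as np

def count_z_candidates(mu):
    # mu: structured array of muon tracks with fields pt, eta, charge, px, py, pz, e.
    sel = mu[(mu['pt'] > 20.0) & (mu['eta'] > -4.0) & (mu['eta'] < -2.5)]
    n = 0
    for i in range(len(sel)):
        for j in range(i + 1, len(sel)):
            if sel['charge'][i] * sel['charge'][j] < 0:      # opposite-sign pair
                e = sel['e'][i] + sel['e'][j]
                p = [sel[k][i] + sel[k][j] for k in ('px', 'py', 'pz')]
                m = np.sqrt(max(e**2 - np.dot(p, p), 0.0))   # dimuon invariant mass
                if 60.0 < m < 120.0:
                    n += 1
    return n
\end{verbatim}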
Finally, the systematic uncertainties on the measurements are evaluated by combining the systematics on the signal extraction, including its potential contamination by background sources, the detector performances and the simulations from which the acceptance $\times$ efficiency factor is estimated.
The strategies for the Z and W$^\pm$ analyses are described in more detail in \cite{aliceZPbPb5tev, aliceWZpPb5tev}.
\subsection{Pb--Pb collisions}
The measured Z yield, normalized to the average nuclear overlap function $\left< T_{AA} \right>$ obtained from the Glauber model \cite{glauber}, is displayed in the left panel of Figure \ref{PbPbYield}, for a collision centrality in the $0-90\%$ range and integrated in rapidity ($2.5 < y < 4$). The measurement is compared to several predictions, from free-PDF only, using CT14 \cite{ct14}, or using three different parametrizations for the nuclear modification of the PDFs: EPS09 \cite{eps09}, EPPS16 \cite{epps16} (both with CT14 as baseline PDF) or nCTEQ15 \cite{ncteq15, ncteq15vecBos}. Calculations using the free-PDF set alone are found to overestimate the measured yield by $2.3\sigma$, while the predictions from nPDFs are all in agreement with the measurement within uncertainties.
\begin{figure}
\centering
\includegraphics[height=7cm]{ZPbPb5tev_yield.png}
\includegraphics[height=6.4cm]{ZPbPb5tev_RaaVsCent.png}
\caption{\textbf{Left}: Z yield in Pb--Pb collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 5.02 TeV, divided by the average nuclear overlap function, in the rapidity region $2.5 < y < 4$ and a $0-90\%$ collision centrality, compared to predictions from free-PDF and various nPDFs. The horizontal bar and boxes correspond to the statistical and model uncertainties, while the filled band indicates the systematic uncertainty of the experimental value. \textbf{Right}: Centrality dependence of the Z nuclear modification factor. The vertical bars represent the statistical uncertainty, while the boxes represent the systematic uncertainty on the measurements. The filled box located at $R_{AA} = 1$ shows the normalisation uncertainty. See text for details about the models.}
\label{PbPbYield}
\end{figure}
Figure \ref{PbPbYield} (right) shows the centrality dependence of the Z nuclear modification factor $R_{AA}$, compared to calculations including a centrality-dependent nuclear modification of the PDFs. The nuclear modification factor is defined as the ratio of the yield in Pb--Pb collisions to the cross section in p--p collisions scaled by the average nuclear overlap function:
\begin{equation}
R_{AA} = \frac{1}{\left< T_{AA} \right>} \frac{\mbox{d} N_{AA} / \mbox{d} y}{\mbox{d} \sigma _{pp} / \mbox{d} y}.
\end{equation}
A significant deviation by $3\sigma$ from unity is observed for the 20\% most central collisions. Theoretical calculations including the nuclear modifications of the PDFs are found to reproduce the measured ratio within uncertainties.
\subsection{p--Pb collisions}
Figure \ref{pPb5tev} (left) shows the Z production cross sections measured in p--Pb and Pb--p collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 5.02 TeV. They are compared to theoretical calculations at NLO and NNLO, both with and without nuclear modifications of the PDF. The NLO calculations are obtained from pQCD, using CT10 \cite{ct10} as free-PDF. The NNLO predictions are computed in the FEWZ \cite{fewz} framework with MSTW2008 \cite{mstw2008} as baseline PDF. In both cases, the nuclear modifications are implemented with EPS09 \cite{eps09}. The ratios of the measured cross sections to the predictions with nuclear modification are shown in the middle (bottom) panel of the figure for NLO (NNLO) computations.
Figure \ref{pPb5tev} (right) presents the W lepton-charge asymmetry, a quantity that gives access to the down over up quark ratio while allowing for a partial cancellation of the uncertainties. It is compared to predictions obtained from the same methods as for the Z cross section computation. The measurement-to-theory ratio can be seen in the lowest panels of each figure.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{ZpPb5tev_XsecVsRap.png}
\includegraphics[width=0.45\linewidth]{WpPb5tev_ChargeAsymmetry.png}
\caption{\textbf{Left}: Z production as a function of rapidity in p--Pb collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 5.02 TeV. The bars and boxes around the data points correspond to statistical and systematic uncertainties respectively. The horizontal width of the boxes indicates the measured rapidity range. The measurements are compared to theoretical predictions, horizontally shifted for readability. \textbf{Right}: W lepton charge asymmetry in p--Pb collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 5.02 TeV compared to theoretical calculations, at forward and backward rapidities. The statistical (systematic) uncertainties are displayed as bars (boxes), the predictions are shifted horizontally for readability. In both figures, the vertical middle (bottom) panels display the ratio of the data and NLO (NNLO) calculations with nuclear modifications over the NLO (NNLO) calculations from free-PDF only.}
\label{pPb5tev}
\end{figure}
The effect of the nuclear modifications is smaller in p--Pb collisions than in the Pb--Pb case. In addition, the measured yields are found to be smaller, leading to a higher associated relative uncertainty. The combination of those effects prevents any firm conclusion on nuclear effects, as the measured cross sections and charge asymmetry are found to be consistent, within uncertainties, with predictions both including and excluding nuclear modifications.\\
A new preliminary cross section of the Z production has been measured in p--Pb and Pb--p collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 8.16 TeV. The results are displayed in Figure \ref{pPb8tev} (left), where they are compared to the 5.02 TeV measurement. The same backward--forward asymmetry is observed, which results from the change in Bjorken-$x$ induced by the rapidity shift. Figure \ref{pPb8tev} (right) shows the measured cross sections, compared with theoretical calculations at NLO, with and without including nuclear effects. The predictions with nuclear modifications are calculated using the two most recent nPDF sets available, EPPS16 \cite{epps16}, using CT14 \cite{ct14} as baseline PDF, and nCTEQ15 \cite{ncteq15}. Within uncertainties, the measurements are reproduced both with and without nuclear modifications of the PDFs.
\begin{figure}
\centering
\includegraphics[width=0.42\linewidth]{ZpPb8tev_prelim_XsecVs5tev.png}
\includegraphics[width=0.45\linewidth]{ZpPb8tev_prelim_XsecVsTheory.png}
\caption{Z production cross section as a function of rapidity in p--Pb collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 8.16 TeV, compared to the measured production at 5.02 TeV (left panel) and theoretical calculations (right panel). The bars and boxes around the data points correspond to statistical and systematic uncertainties respectively. The theoretical points are horizontally shifted for readability, the close (open) symbols correspond to predictions with (without) nuclear modification of the PDF.}
\label{pPb8tev}
\end{figure}
\section{Conclusion}
The electroweak-boson measurements performed by the ALICE collaboration have been presented. In Pb--Pb collisions at \ensuremath{\sqrt{s_{_{\rm NN}}}}{} = 5.02 TeV, the measured Z production is well reproduced by theoretical calculations including nuclear modifications of the PDFs, while a significant deviation from free-PDF predictions is found, of $2.3\sigma$ for the centrality-integrated yield. In p--Pb collisions, at centre-of-mass energies of 5.02 and 8.16 TeV, the measurements are well reproduced by calculations, but the statistical limitations and the small magnitude of nuclear effects prevent any firm conclusion from being drawn on nuclear modifications. Further measurements with better precision are needed to provide more stringent constraints on the nPDFs.
\bibliographystyle{JHEP}
\section{Introduction}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic1.pdf}
\caption{}
\label{fig:motivation:A}
\end{subfigure}
\quad \quad \quad \quad
\begin{subfigure}{0.17\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic2.pdf}
\caption{}
\label{fig:motivation:B}
\end{subfigure}
\quad \quad \quad \quad
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic3.pdf}
\caption{}
\label{fig:motivation:C}
\end{subfigure}
\quad \quad \quad \quad
\begin{subfigure}{0.15\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic4.pdf}
\caption{}
\label{fig:motivation:D}
\end{subfigure}
\\
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic5.pdf}
\caption{Tied filtering}
\label{fig:motivation:E}
\end{subfigure}
\quad \quad
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic6.pdf}
\caption{Ranking filtering}
\label{fig:motivation:F}
\end{subfigure}
\quad \quad
\begin{subfigure}{0.20\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic7.pdf}
\caption{Gaussian-induced filtering}
\label{fig:motivation:G}
\end{subfigure}
\quad \quad
\begin{subfigure}{0.17\textwidth}
\centering
\includegraphics[width=1\textwidth]{pic8.pdf}
\caption{$3\times 3$ regular filtering}
\label{fig:motivation:H}
\end{subfigure}
\caption{Different filtering operations on graph vertices. Examples of one-hop subgraphs are given in (a)-(d), where $v_0$ is the reference vertex and each vertex is assigned a signal. The tied filtering (e) summarizes all neighbor vertices and generates the same responses to (a) and (b) under the filter $f$, i.e., $f(\sum \widetilde{w_{0i}}x_i)=f(1.9)$, although the two graphs are completely different in structure. The ranking filtering (f) sorts/prunes neighbor vertices and then performs different filtering on them. It might result in the same responses $f_1(1)+f_2(3)+f_3(1)+f_4(4)$ to different graphs such as (b) and (c), where the digits in red boxes denote the ranked indices and the vertex in the dashed box in (b) is pruned. Moreover, the vertex ranking is uncertain/non-unique for equal connections in (d). To address these problems, we derive the edge-induced GMM to coordinate subgraphs as shown in (g). Each Gaussian model can be viewed as one variation component (or direction) of the subgraph. Like the standard convolution (h), the Gaussian encoding is sensitive to different subgraphs, e.g., (a)-(d) will have different responses. Note that $f, f_i$ are linear filters, and the non-linear activation functions are put on their responses.
}
\label{fig:motivation}
\end{figure*}
As witnessed by widespread applications, the graph is one of the most successful models for handling structured and semi-structured data, ranging from text~\cite{defferrard2016convolutional}, bioinformatics~\cite{yanardag2015deep,niepert2016learning,song2018eeg} and social networks~\cite{gomez2017dynamics,orsini2017shift} to images/videos~\cite{marino2016more,cui2018context,cui2017spectral}. Among these applications, learning robust representations from structured graphs is the main topic. To this end, various methods have emerged in recent years. Graph kernels~\cite{yanardag2015deep} and recurrent neural networks (RNNs)~\cite{scarselli2009graph} are the most representative ones. Graph kernels usually take the classic R-convolution strategy~\cite{haussler1999convolution} to recursively decompose graphs into atomic sub-structures and then define local similarities between them. RNN based methods sequentially traverse neighbors with tied parameters in depth. With the increase of graph size, graph kernels would suffer from the diagonal dominance of kernels~\cite{scholkopf2002kernel}, while RNNs would face an explosive number of combinatorial paths in the recursive stage.
Recently, convolutional neural networks (CNNs)~\cite{lecun2015deep} have achieved breakthrough progress in representing grid-shaped image/video data. In contrast, graphs have irregular structures and are fully coordinate-free on vertices and edges. The vertices/edges are not strictly ordered, and cannot be explicitly matched between two graphs. To generalize the idea of CNNs to graphs, we need to solve the problem that the same responses should be produced for homomorphic graphs/subgraphs when performing convolutional filtering. To this end, recent graph convolution methods~\cite{defferrard2016convolutional,atwood2016diffusion,hamilton2017inductive} attempted to aggregate neighbor vertices as shown in Fig.~\ref{fig:motivation:E}. This kind of method actually employs a fuzzy filtering (i.e., a tied/shared filter) on neighbor vertices because only the first-order statistic (mean) is used. Two examples are shown in Fig.~\ref{fig:motivation:A} and Fig.~\ref{fig:motivation:B}: although they have different structures, the responses on them are exactly equal. Oppositely, Niepert et al.~\cite{niepert2016learning} ranked neighbor vertices according to the weights of edges, and then used different filters on these sorted vertices, as shown in Fig.~\ref{fig:motivation:F}. However, this rigid ranking method suffers from several limitations: i) possibly identical responses to different structures (e.g., Fig.~\ref{fig:motivation:B} and Fig.~\ref{fig:motivation:C}) because the weights of edges are out of consideration after ranking; ii) information loss from node pruning for a fixed-size receptive field, as shown in Fig.~\ref{fig:motivation:B}; iii) ranking ambiguity for equal connections, as shown in Fig.~\ref{fig:motivation:D}; and iv) ranking sensitivity to (slight) changes of edge weights/connections.
In this paper we propose a Gaussian-induced graph convolution framework to learn graph representations. For a coordinate-free subgraph region, we design an \textit{edge-induced} Gaussian mixture model (EI-GMM) to implicitly coordinate the vertices therein. Specifically, the edges are used to regularize the Gaussian models such that the variations of the subgraph can be well encoded. In analogy to the standard convolutional kernel shown in Fig.~\ref{fig:motivation:H}, EI-GMM can be viewed as a coordinate normalization that projects the variations of a subgraph onto several Gaussian components. For example, the four subgraphs w.r.t. Fig.~\ref{fig:motivation:A}$\sim$\ref{fig:motivation:D} will have different representations\footnote{Suppose three Gaussian models are $\mcN(0,1), \mcN(0,2)$ and $\mcN(0,3)$; then we can compute the responses on (a)-(d) respectively as $f_1([0.49, -0.93])+f_2([0.17, -0.65])+f_3([0.07, -0.44])$, $f_1([0.35, -0.73])+f_2([0.15, -0.58])+f_3([0.10, -0.64])$, $f_1([0.35, -0.71])+f_2([0.15, -0.39])+f_3([0.10, -0.43])$, $f_1([0.46, -0.99])+f_2([0.18, -0.62])+f_3([0.08, -0.42])$. Please refer to the following section.} through our Gaussian encoding in Fig.~\ref{fig:motivation:G}. To keep the network inference feed-forward, we transform the Gaussian components of each subgraph into the gradient space of the multivariate Gaussian parameters, instead of employing the sophisticated EM algorithm. Then the filters (or transform functions) are performed on different Gaussian components, like latticed kernels on different directions in Fig.~\ref{fig:motivation:H}. Further, we derive a \textit{vertex-induced} Gaussian mixture model (VI-GMM) to enable dynamic coarsening of the graph. We also theoretically analyze the approximate equivalence of VI-GMM to the weighted graph cut~\cite{dhillon2007weighted}. Finally, EI-GMM and VI-GMM can be alternately stacked into an end-to-end optimization network.
In summary, our main contributions are four-fold: i) we propose an end-to-end Gaussian-induced convolutional neural network for graph representation; ii) we propose the edge-induced GMM to encode the variations of different subgraphs; iii) we derive the vertex-induced GMM to perform dynamic coarsening of graphs, which is an approximation to the weighted graph cut; and iv) we verify the effectiveness of our method and report state-of-the-art results on several graph datasets.
\begin{figure*}[t]
\centering
\includegraphics[width=0.83\textwidth]{pic9.pdf}
\caption{The GIC network architecture. GIC mainly contains two modules: the convolution layer (EI-GMM) and the coarsening layer (VI-GMM). GIC stacks several convolution and coarsening layers alternately and iteratively. More details can be found in the following section.}
\label{fig:network:A}
\end{figure*}
\section{Related Work}
Graph CNNs mainly fall into two categories: spectral and spatial methods. Spectral methods~\cite{bruna2013spectral,scarselli2009graph,henaff2015deep,such2017robust,li2018action,li2018spatio} construct a series of spectral filters by decomposing the graph Laplacian, which often suffers from a high computational burden. To address this problem, the fast local spectral filtering method~\cite{defferrard2016convolutional} parameterizes the frequency responses as a Chebyshev polynomial approximation. However, as shown in Fig.~\ref{fig:motivation:E}, after summarizing all nodes, this method discards the topological structure of a local receptive field. This kind of method usually requires equal sizes of graphs, like the same sizes of images for CNNs~\cite{kipf2016semi}. Spatial methods attempt to define spatial structures of adjacent vertices and then perform filtering on the structured graphs. Diffusion CNNs~\cite{atwood2016diffusion} scan a diffusion process across each node. PATCHY-SAN~\cite{niepert2016learning} linearizes neighbors by sorting the weights of edges and then derives convolutional filtering on graphs, as shown in Fig.~\ref{fig:motivation:F}. As an alternative, random walk based approaches are also used to define the neighborhoods~\cite{perozzi2014deepwalk}. For the linearized neighbors, RNNs~\cite{li2015gated} can be used to model the structured sequences. Similarly, NgramCNN~\cite{luo2017deep} serializes each graph by introducing the concept of the $n$-gram block.
GAT~\cite{velickovic2017graph} attempts to weight edges through the attention mechanism. WSC~\cite{jiang2018walk} aggregates walk fields defined by random walks into Gaussian mixture models. Zhao~\cite{zhao2018work} attempts to define a standard network with different graph convolutions. Besides, some variants~\cite{hamilton2017inductive,duran2017learning,zhang2018tensor} employ the aggregation or propagation of local neighbor nodes. Different from these tied filtering or ranking filtering methods, we use Gaussian models to encode local variations of the graph. Also different from the recent mixture model~\cite{monti2017geometric}, which uses GMM only to learn the importance of adjacent nodes, our method uses a weighted GMM to encode the distributions of local graph structures.
\section{The GIC Network}
\subsection{Attribute Graph}
Here we consider an undirected attribute graph $\mcG=(\mcV,\A,\X)$ of $m$ vertices (or nodes), where $\mcV=\{v_i\}_{i=1}^{m}$ is the set of vertices, $\A$ is a (weighted) adjacency matrix, and $\X$ is a matrix of graph attributes (or signals). The adjacency matrix $\A\in\mbR^{m\times m}$ records the connections between vertices. If $v_i, v_j$ are not connected, then $A(v_i,v_j)=0$, otherwise $A(v_i,v_j)\neq 0$. We sometimes abbreviate $A(v_i,v_j)$ as $A_{ij}$. The attribute matrix $\X\in\mbR^{m\times d}$ is associated with the vertex set $\mcV$, whose $i$-th row $\X_{i}$ (or $\X_{v_i}$) denotes a $d$-dimensional attribute of the $i$-th node (i.e., $v_i$).
The graph Laplacian matrix $\L$ is defined as $\L = \D-\A$, where $\D\in\mbR^{m\times m}$ is the diagonal degree matrix with $D_{ii}=\sum_{j}A_{ij}$. The normalized version is written as $\L^{norm} = \D^{-1/2}\L\D^{-1/2}= \I-\D^{-1/2}\A\D^{-1/2}$, where $\I$ is the identity matrix. Unless otherwise specified, we use the latter. We next give the definition of a subgraph, which is used in the following.
\begin{defn} \label{def:graph}
Given an attribute graph $\mcG=(\mcV,\A,\X)$, the attribute graph $\mcG'=(\mcV',\A',\X')$ is a subgraph of $\mcG$, denoted $\mcG'\subseteq\mcG$, if (i) $\mcV'\subseteq\mcV$, (ii) $\A'$ is the submatrix of $\A$ w.r.t. the subset $\mcV'$, and (iii) $\X'=\X_{\mcV'}$.
\end{defn}
\subsection{Overview}
The GIC network architecture is shown in Fig.~\ref{fig:network:A}. Given an attribute graph $\mcG^{(0)}=(\mcV^{(0)},\A^{(0)},\X^{(0)})$, where the superscript denotes the layer number, we construct multi-scale receptive fields for each vertex based on the adjacency matrix $\A^{(0)}$. Each receptive field records the $k$-hop neighborhood relationships around the reference vertex and forms a local centralized subgraph. To encode the centralized subgraph, we project it into edge-induced Gaussian models, each of which defines one variation ``direction'' of the subgraph. We perform different filtering operations on different Gaussian components and aggregate all responses as the convolutional output. After the convolutional filtering, the input graph $\mcG^{(0)}$ is transformed into a new graph $\mcG^{(1)}=(\mcV^{(1)},\A^{(1)},\X^{(1)})$, where $\mcV^{(1)}=\mcV^{(0)}$ and $\A^{(1)}=\A^{(0)}$. To further abstract graphs, we next stack a coarsening layer on the graph $\mcG^{(1)}$. The proposed vertex-induced GMM is used to downsample the graph $\mcG^{(1)}$ into the low-resolution graph $\mcG^{(2)}=(\mcV^{(2)},\A^{(2)},\X^{(2)})$. Taking the convolution and coarsening modules, we may alternately stack them into a multi-layer GIC network. With the increase of layers, the receptive field of the filters becomes larger, so higher layers can extract more global graph information. In the supervised case of graph classification, we finally append a fully connected layer followed by a softmax loss layer.
\subsection{Multi-Scale Receptive Fields}
In a standard CNN, receptive fields may be conveniently defined as latticed spatial regions, so convolution kernels on such grid-shaped structures are readily available. However, constructing convolutional kernels on graphs is intractable because graphs are coordinate-free, \eg, vertices are unordered and the number of adjacent edges/vertices is not fixed. To address this problem, we resort to the adjacency matrix $\A$, which expresses the connections between vertices. Since $\A^k$ exactly records the vertices reachable in $k$ steps, we may construct a $k$-neighbor receptive field by using a $k$-order polynomial of $\A$, denoted as $\psi_k(\A)$. In the simplest case, $\psi_k(\A)=\A^k$ reflects the $k$-hop neighborhood relationships. In order to remove the scale effect, we may normalize $\psi_k(\A)$ as $\psi_k(\A) \text{diag} (\psi_k(\A)\1)^{-1}$, which describes the reachable possibility in a $k$-hop walk. Formally, we define the $k$-th scale receptive field as a subgraph.
\begin{defn}\label{def:receptive}
The $k$-th scale receptive field around a reference vertex $v_i$ is a subgraph $\mcG_{v_i}^k=(\mcV',\A',\X')$ of the $k$-th order graph $(\mcV, \tbA=\psi_k(\A), \X)$, where $\mcV'=\{v_j|\wtA_{ij}\neq 0\}\cup\{v_i\}$, $\A'$ is the submatrix of $\tbA$ w.r.t. the subset $\mcV'$, and $\X'=\X_{\mcV'}$.
\end{defn}
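For illustration only, this receptive-field construction can be sketched in a few lines of NumPy-style code (the function name and the dense-matrix implementation are ours and purely expository, not part of any released code; an efficient implementation would use sparse matrices):
\begin{verbatim}
import numpy as np

def receptive_field(A, X, i, k):
    # psi_k(A) = A^k, then normalise as psi_k(A) diag(psi_k(A) 1)^{-1}
    A_k = np.linalg.matrix_power(A, k)
    s = A_k.sum(axis=1)                          # psi_k(A) 1
    A_k = A_k / np.maximum(s[None, :], 1e-12)
    # V' = {v_j : A~_ij != 0} U {v_i}; A', X' are the corresponding restrictions
    nbrs = np.union1d(np.flatnonzero(A_k[i]), [i])
    return nbrs, A_k[np.ix_(nbrs, nbrs)], X[nbrs]
\end{verbatim}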
\subsection{Convolution: Edge-Induced GMM}
Given a reference vertex $v_i$, we can construct the centralized subgraph $\mcG_{v_i}^k$ of the $k$-th scale.
To coordinate the subgraph, we introduce Gaussian mixture models (GMMs), whose components may each be understood as one principal direction of the subgraph's variations. To encode the variations accurately, we jointly formulate the attributes of vertices and the connections of edges into the Gaussian models. The edge weight $A'(v_i,v_j)$ indicates the relevance of $v_j$ to the central vertex $v_i$: the higher the weight, the stronger the impact on $v_i$. The weights can therefore be incorporated into a Gaussian model by treating $\X'_{v_j}$ as if it were observed $A'(v_i,v_j)$ times. In the likelihood, this is equivalent to raising the Gaussian density to the power $A'(v_i,v_j)$, which is proportional to $\mcN(\X'_{v_j}; \bmu, \frac{1}{A'(v_i,v_j)}\mathbf{\Sigma})$. Formally, we estimate the probability density of the subgraph $\mcG_{v_i}^k$ from the $C_1$-component GMM,
\begin{align}
p_{v_i}(\X'_{v_j};\bTheta_1,A'_{ij}) &= \sum_{c=1}^{C_1}\pi_c\mcN(\X'_{v_j}; \bmu_c, \frac{1}{A'_{ij}}\mathbf{\Sigma}_c), \nonumber \\
&\st \pi_c>0, \sum_{c=1}^{C_1}\pi_c=1, \label{eqn:GMM_edge}
\end{align}
where $\bTheta_1=\{\pi_1,\cdots,\pi_{C_1}, \bmu_1,\cdots,\bmu_{C_1},\mathbf{\Sigma}_1,\cdots,\mathbf{\Sigma}_{C_1}\}$ are the mixture parameters, $\{\pi_c\}$ are the mixture coefficients, $\{\bmu_c, \mathbf{\Sigma}_c\}$ are the parameters of the $c$-th component, and $A'_{ij}>0$ \footnote{In practice, we normalize $\A'$ into a non-negative matrix.}. Intuitively, the larger the edge weight $A'_{ij}$ is, the stronger the impact of the node $v_j$ on the reference vertex $v_i$ is. We will refer to the model in Eqn.~(\ref{eqn:GMM_edge}) as the \textit{edge-induced Gaussian mixture model} (EI-GMM).
In what follows, we assume all node attributes are independent of each other, as is often assumed in signal processing. This means the covariance matrix $\mathbf{\Sigma}_c$ is diagonal, so we denote it as $\text{diag}(\mathbf{\sigma}_c^2)$. To avoid the explicit constraints on $\pi_c$ in Eqn.~(\ref{eqn:GMM_edge}), we adopt the soft-max normalization with the re-parameterization variable $\alpha_c$, i.e., $\pi_c = {\exp(\alpha_c)}/{\sum_{k=1}^{C_1}\exp(\alpha_k)}$. Thus, the entire subgraph log-likelihood can be written as
\begin{align}
\zeta(\mcG_{v_i}^k) &= \sum_{j=1}^m\ln p_{v_i}(\X'_{v_j};\bTheta_1,\A') \nonumber \\
&= \sum_{j=1}^m\ln\sum_{c=1}^{C_1}\pi_c\mcN(\X'_{v_j}; \bmu_c, \frac{1}{A'_{ij}}\mathbf{\Sigma}_c).\label{eqn:GMM_log}
\end{align}
For the forward inference, instead of running the expectation-maximization (EM) algorithm, we use the gradients of the subgraph log-likelihood with respect to the parameters $\bTheta_1$ of the EI-GMM, motivated by the recent Fisher vector work~\cite{sanchez2013image}, which has been proven to be effective for representation.
For convenience of calculation, we simplify the notation: $\mcN_{jc} = \mcN(\X'_{v_j}; \bmu_c, \frac{1}{A'_{ij}}\mathbf{\sigma}_c^2)$ and $ Q_{jc}=\frac{\pi_c\mcN_{jc} }{\sum_{k=1}^{C_1}\pi_k\mcN_{jk}}$; then we can derive the gradients with respect to the model parameters from Eqn.~(\ref{eqn:GMM_log}) as follows
\begin{align}
&\frac{\partial \zeta(\mcG_{v_i}^k)}{\partial\bmu_c} \!\!=\!\! \sum_{j=1}^m\frac{A'_{ij}Q_{jc}(\X'_{v_j}-\bmu_c)}{\mathbf{\sigma}_c^2}, \quad \nonumber \\
&\frac{\partial \zeta(\mcG_{v_i}^k)}{\partial\mathbf{\sigma}_c} \!\!=\!\! \sum_{j=1}^m \frac{Q_{jc}(A'_{ij}(\X'_{v_j}-\bmu_c)^2-\mathbf{\sigma}_c^2)}{\mathbf{\sigma}_c^3},
\end{align}
where the division of vectors is performed element-wise. Note that we do not use $\partial \zeta(\mcG_{v_i}^k) / \partial\alpha_c$, as it brought no improvement in our experience. The gradients describe the contribution of the corresponding parameters to the generative process, and the subgraph variations are adaptively allocated to the $C_1$ Gaussian models. Finally, we assemble all gradients w.r.t. the Gaussian models (i.e., the directions of the graph) to mimic the collection of local square receptive fields on images. Formally, for the $k$-th scale receptive field $\mcG_{v_i}^k$ around the vertex $v_i$, the attributes produced from the Gaussian models are filtered respectively and then aggregated,
\begin{align}
F(\mcG_{v_i}^k,\bTheta_1,f) &= \text{ReLU}\Big(\sum_{c=1}^{C_1} f_c\big(\text{Cat}[\frac{\partial \zeta(\mcG_{v_i}^k)}{\partial\bmu_c},\frac{\partial \zeta(\mcG_{v_i}^k)}{\partial\mathbf{\sigma}_c}]\big)\Big), \label{eqn:gau_fea}
\end{align}
where $\text{Cat}[\cdot, \cdot]$ is a concatenation operator, $f_c$ is a linear filtering function (i.e., a convolution function) and ReLU is the rectified linear unit. Therefore, for different subgraphs we can produce feature vectors of the same dimensionality, which depends only on the number of Gaussian models. If the soft assignment distribution $ Q_{jc} $ is sharply peaked on one particular Gaussian for the vertex $v_{j}$, the vertex will be projected onto only that Gaussian direction.
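As a rough, self-contained sketch of this convolution for a single receptive field (our paraphrase of Eqns.~(\ref{eqn:GMM_edge})--(\ref{eqn:gau_fea}) with diagonal covariances; the variable names and the reduction of $f$ to one weight matrix per Gaussian component are illustrative assumptions, not the released implementation):
\begin{verbatim}
import numpy as np

def ei_gmm_conv(Xs, a, pi, mu, sigma, W):
    # Xs: (n,d) attributes X'; a: (n,) positive weights A'_{i.};
    # pi: (C,), mu, sigma: (C,d); W: (C, 2d, out) linear filters f_c.
    n, d = Xs.shape
    diff = Xs[:, None, :] - mu[None, :, :]                     # (n,C,d)
    maha = a[:, None] * (diff**2 / sigma[None]**2).sum(-1)     # weighted sq. distance
    logN = -0.5 * (d*np.log(2*np.pi) - d*np.log(a)[:, None]
                   + np.log(sigma[None]**2).sum(-1) + maha)    # log N_jc
    logQ = np.log(pi)[None] + logN
    Q = np.exp(logQ - logQ.max(1, keepdims=True))
    Q /= Q.sum(1, keepdims=True)                               # posteriors Q_jc
    # gradients of the log-likelihood w.r.t. mu_c and sigma_c
    g_mu = (a[:, None, None]*Q[..., None]*diff / sigma[None]**2).sum(0)
    g_sg = (Q[..., None]*(a[:, None, None]*diff**2 - sigma[None]**2)
            / sigma[None]**3).sum(0)
    feats = np.concatenate([g_mu, g_sg], axis=1)               # Cat[., .], shape (C,2d)
    out = sum(feats[c] @ W[c] for c in range(W.shape[0]))      # sum_c f_c(.)
    return np.maximum(out, 0.0)                                # ReLU
\end{verbatim}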
\subsection{Coarsening: Vertex-Induced GMM}
Like the standard pooling in CNNs, we need to downsample graphs so as to abstract them as well as reduce the computational cost. However, pooling on images is tailored to latticed structures and cannot be used for irregular graphs. One solution is to use a clustering algorithm to partition the vertices into several clusters and then produce a new vertex from each cluster. However, we expect that two vertices should be less likely to fall into the same cluster if there is a high transfer difficulty between them. To this end, we derive the vertex-induced Gaussian mixture model (VI-GMM) to weight each vertex. To utilize the edge information, we construct a latent observation $\phi(v_i)$ w.r.t. each vertex $v_i$ from the graph Laplacian (or the adjacency matrix if it is positive semi-definite), i.e., through the kernel calculation $\langle\phi(v_i),\phi(v_j)\rangle=L_{ij}$. Moreover, for each vertex $v_i$, we define an influence factor $w_i$ for the Gaussian models. Formally, given $C_2$ Gaussian models, VI-GMM is written as
\begin{align}
p(\phi(v_i);\bTheta_2,w_i) &= \sum_{c=1}^{C_2}\pi_c\mcN(\phi(v_i); \bmu_c, \frac{1}{w_i}\mathbf{\Sigma}_c), \nonumber \\
&\st w_i=h(\X_{v_i})>0,
\end{align}
where $h$ is a mapping function to be learnt. To reduce the computational cost of the matrix inverse of $\mathbf{\Sigma}$, we specify it as an identity matrix. Then we have
\begin{align}
p(\phi(v_i);\bTheta_2,w_i) = \sum_{c=1}^{C_2}\frac{\pi_c}{(\frac{2\pi}{w_i})^{d/2}}\exp\left(-\frac{w_i}{2}\|\phi(v_i)-\bmu_c\|^2\right).
\end{align}
Given a graph with $m$ vertices, the objective is to maximize the following log-likelihood:
\begin{align}
\argmax_{\bTheta_2} \zeta(\bTheta_2) = \sum_{i=1}^m \ln\sum_{c=1}^{C_2}\pi_c\mcN(\phi(v_i); \bmu_c, \frac{1}{w_i}\I). \label{eqn:likelihood}
\end{align}
To solve the model in Eqn.~(\ref{eqn:likelihood}), we use the iterative expectation-maximization (EM) algorithm, which has a closed-form solution at each step. Meanwhile, the algorithm automatically enforces the required constraints. The graph clustering process is summarized as follows:
(1) {E-Step}: the posterior of the $i$-th vertex belonging to the $c$-th cluster is updated with
$p_{ic} = \frac{\pi_c p(\phi(v_i);\btheta_c, w_i)}{\sum_{k=1}^{C_2} \pi_k p(\phi(v_i);\btheta_k, w_i)}$,
where $\btheta_c$ is the $c$-th Gaussian parameters, and $\bTheta_2=\{\btheta_1,\cdots,\btheta_{C_2}\}$.
(2) {M-Step}: we optimize the Gaussian parameters $\pi, \bmu$. The parameter estimation is given by $
\pi_c = \frac{1}{m}\sum_{i=1}^m r_{ic},
\bmu_c =\frac{\sum_{v_i\in \mcG_c}w_i\phi(v_i)}{\sum_{v_i\in \mcG_c}w_i}$.
$\pi_c$ indicates the energy summation of all vertices assigned to the cluster $c$, and $\bmu_c$ may be understood as a doubly weighted ($w_i, r_{ic}$) average on the cluster $c$.
After several iterations of these two steps, we perform hard quantification: the $i$-th vertex is assigned to the cluster with the maximum posterior, formally, $r_{ic} = 1$ if $c=\argmax_{k} p_{ik}$, and 0 otherwise. Thus we obtain the cluster matrix $\P\in\{0,1\}^{m\times C_2}$, where $P_{ic}=1$ if the $i$-th vertex falls into the cluster $c$. During coarsening, we take the maximal responses within each cluster as the attributes of the new vertex, and derive a new adjacency matrix as $\P^\intercal\A\P$.
It is worth noting that we need not compute $\phi$ explicitly during the clustering process. The main calculation $\|\phi(v_i)-\bmu_c\|^2$ in EM can be reduced to the kernel version:
$K_{ii}-\frac{2\sum_{v_j\in\mcG_c}w_jK_{ij}}{\sum_{v_j\in\mcG_c}w_j}
+\frac{\sum_{v_j,v_k\in\mcG_c}w_jw_k K_{jk}}{(\sum_{v_j\in\mcG_c}w_j)^2}$,
where $K_{ij} = \langle\phi(v_i),\phi(v_j)\rangle$.
In practice, we can use the graph Laplacian $\L$ as the kernel. In this case, we can easily reach the following proposition, which relates VI-GMM to the weighted graph cut~\cite{dhillon2007weighted}.
\begin{prop}
In EM, if the kernel matrix takes the weight-regularized graph Laplacian, i.e., $\mcK= \text{diag}(\w)\L \text{diag}(\w)$, then VI-GMM is equal to an approximate optimization of graph cut, i.e., $
\min \sum_{c=1}^C\frac{\text{links}(\mcV_c, \mcV\backslash \mcV_c)}{w(\mcV_c)}$,
where $\text{links}(\mcA, \mcB)=\sum_{v_i\in\mcA,v_j\in\mcB} A_{ij}$, and $w(\mcV_c)=\sum_{j\in\mcV_c}w_j$.
\end{prop}
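For exposition, the whole coarsening step can be summarised by the following simplified sketch (a hard-assignment variant of VI-GMM, close in spirit to weighted kernel $k$-means; the function and variable names are ours and purely illustrative):
\begin{verbatim}
import numpy as np

def vi_gmm_coarsen(A, X, K, w, C2, iters=10, seed=0):
    # A: (m,m) adjacency, X: (m,d) attributes, w: (m,) influence factors > 0,
    # K: (m,m) positive semi-definite kernel, e.g. the graph Laplacian.
    m = A.shape[0]
    assign = np.random.default_rng(seed).integers(0, C2, size=m)
    for _ in range(iters):
        dist = np.full((m, C2), np.inf)
        for c in range(C2):
            idx = np.flatnonzero(assign == c)
            if idx.size == 0:
                continue
            wc = w[idx]; sw = wc.sum()
            # kernelised ||phi(v_i) - mu_c||^2 from the expression above
            dist[:, c] = (np.diag(K) - 2.0*(K[:, idx] @ wc)/sw
                          + wc @ K[np.ix_(idx, idx)] @ wc / sw**2)
        assign = dist.argmin(axis=1)                 # hard quantification
    P = np.eye(C2)[assign]                           # cluster matrix, (m,C2)
    X_new = np.stack([X[assign == c].max(0) if (assign == c).any()
                      else np.zeros(X.shape[1]) for c in range(C2)])
    return P.T @ A @ P, X_new, P                     # new adjacency and attributes
\end{verbatim}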
\section{Experiments}
\subsection{Graph Classification}
For graph classification, each graph is annotated with one label. We use two types of datasets: Bioinformatics and Network datasets. The former contains MUTAG~\cite{debnath1991structure}, PTC~\cite{toivonen2003statistical}, NCI1 and NCI109~\cite{wale2008comparison}, ENZYMES~\cite{borgwardt2005protein} and PROTEINS~\cite{borgwardt2005protein}. The latter has COLLAB~\cite{leskovec2005graphs}, REDDIT-BINARY, REDDIT-MULTI-5K, REDDIT-MULTI-12K, IMDB-BINARY and IMDB-MULTI.
\begin{table*}[!t]
\centering
\caption{Comparisons with state-of-the-art methods.}
\begin{sc}
\scalebox{0.85}{
\begin{tabular}{|l| c c c| c c | c c c |c |c |c |c|}
\toprule
Dataset
&PSCN &DCNN &NgramCNN &FB &DyF &WL &GK &DGK &RW &SAEN & GIC \\
\midrule
\multirow{2}{*}{MUTAG}
&92.63 &66.98 &\textbf{94.99} &84.66 &88.00 &78.3 &81.66 &82.66 &83.72 &84.99 &94.44 \\
& $\pm$4.21 &-- &\textbf{$\pm$5.63} &$\pm$2.01 & $\pm$2.37 & $\pm$1.9 & $\pm$2.11 & $\pm$1.45 & $\pm$1.50 & $\pm$1.82 &$\pm$4.30 \\
\midrule
\multirow{2}{*}{PTC}
&60.00 &56.60 &68.57 &55.58 &57.15 &-- &57.26 &57.32 &57.85 &57.04 &\textbf{77.64} \\
& $\pm$4.82 &-- &$\pm$1.72 &2.30 & $\pm$1.47 & -- & $\pm$1.41 & $\pm$1.13 & $\pm$1.30
& $\pm$ 1.30 & \textbf{$\pm$ 6.98} \\
\midrule
\multirow{2}{*}{NCI1}
&78.59 &62.61 &-- &62.90 &68.27 &83.1 &62.28 &62.48 &48.15 &77.80 &\textbf{84.08} \\
& $\pm$1.89 &-- &-- &$\pm$0.96 & $\pm$0.34 & $\pm$0.2 & $\pm$0.29 & $\pm$0.25 & $\pm$0.50 & $\pm$ 0.42 & \textbf{$\pm$1.77} \\
\midrule
\multirow{2}{*}{NCI109}
& -- &62.86 &-- &62.43 & 66.72 & \textbf{85.2} & 62.60 & 62.69 & 49.75 & -- & 82.86 \\
& -- &-- &-- &$\pm$1.13 & $\pm$ 0.20 & \textbf{$\pm$ 0.2} & $\pm$ 0.19 & $\pm$ 0.23 & $\pm$ 0.60 & -- & $\pm$ 2.37 \\
\midrule
\multirow{2}{*}{ENZYMES}
& -- &18.10 &-- &29.00 & 33.21 & 53.4 & 26.61 & 27.08 & 24.16 & -- & \textbf{62.50} \\
& -- &-- &-- &$\pm$1.16 & $\pm$ 1.20 & $\pm$ 1.4 & $\pm$ 0.99 & $\pm$ 0.79 & $\pm$ 1.64
&-- & \textbf{$\pm$ 5.12} \\
\midrule
\multirow{2}{*}{PROTEINS}
& 75.89 &-- &75.96 &69.97 & 75.04 & 73.7 & 71.67 & 71.68 & 74.22 & 75.31 & \textbf{77.65} \\
& $\pm$ 2.76 &-- &$\pm$2.98 &$\pm$1.34 & $\pm$ 0.65 & $\pm$ 0.5 & $\pm$ 0.55 & $\pm$ 0.50 & $\pm$ 0.42 & $\pm$ 0.70 & \textbf{$\pm$ 3.21} \\
\midrule
\multirow{2}{*}{COLLAB}
& 72.60 &-- &-- &76.35 & 80.61 & -- & 72.84 & 73.09 & 69.01 & 75.63 & \textbf{81.24} \\
& $\pm$ 2.15 &-- &-- &1.64 & $\pm$ 1.60 & -- & $\pm$ 0.28 & $\pm$ 0.25 & $\pm$ 0.09
& $\pm$ 0.31 & \textbf{$\pm$ 1.44} \\
\midrule
\multirow{2}{*}{REDDIT-B}
& 86.30 &-- &-- &88.98 &\textbf{89.51} &75.3 & 77.34 & 78.04 & 67.63 & 86.08 & 88.45 \\
& $\pm$ 1.58 &-- &-- &$\pm$2.26 & \textbf{$\pm$ 1.96} & $\pm$ 0.3 & $\pm$ 0.18 & $\pm$ 0.39 & $\pm$ 1.01
& $\pm$ 0.53 & $\pm$ 1.60 \\
\midrule
\multirow{2}{*}{REDDIT-5K}
& 49.10 &-- &-- &50.83 & 50.31 & -- & 41.01 & 41.27 & -- &\textbf{52.24} &51.58 \\
& $\pm$ 0.70 &-- &-- &1.83 &$\pm$ 1.92 & -- & $\pm$ 0.17 & $\pm$ 0.18 & --
& \textbf{$\pm$ 0.38} & $\pm$ 1.68 \\
\midrule
\multirow{2}{*}{REDDIT-12K}
& 41.32 &-- &-- &42.37 & 40.30 & -- & 31.82 & 32.22 & -- & \textbf{46.72} & 42.98 \\
& $\pm$ 0.42 &-- &-- &1.27 & $\pm$ 1.41 & -- & $\pm$ 0.08 & $\pm$ 0.10 & --
& \textbf{$\pm$ 0.23} & $\pm$ 0.87 \\
\midrule
\multirow{2}{*}{IMDB-B}
& 71.00 &-- &71.66 &72.02 & 72.87 & 72.4 & 65.87 & 66.96 & 64.54 & 71.26 & \textbf{76.70} \\
& $\pm$ 2.29 &-- &$\pm$2.71 &$\pm$4.71 & $\pm$ 4.05 & $\pm$ 0.5 & $\pm$ 0.98 & $\pm$ 0.56 & $\pm$ 1.22
& $\pm$ 0.74 & \textbf{$\pm$ 3.25} \\
\midrule
\multirow{2}{*}{IMDB-M}
& 45.23 &-- &50.66 &47.34 & 48.12 & -- & 43.89 & 44.55 & 34.54 & 49.11 & \textbf{51.66} \\
& $\pm$ 2.84 &-- &$\pm$4.10 &3.56 & $\pm$ 3.56 & -- & $\pm$ 0.38 & $\pm$ 0.52 & $\pm$ 0.76
& $\pm$ 0.64 & \textbf{$\pm$ 3.40} \\
\bottomrule
\end{tabular}
}
\end{sc}
\label{table:state-of-the-art}
\end{table*}
\begin{table}[!t]
\centering
\caption{Node label prediction on Reddit and PPI data (micro-averaged F1 score).}
\label{table:multi-label}
\begin{sc}
\scalebox{0.9}{
\begin{tabular}{l c c}
\toprule
Dataset & Reddit & PPI \\
\midrule
Random & 0.042 & 0.396 \\
Raw features & 0.585 & 0.422 \\
Deep walk & 0.324 & -- \\
Deep walk + features & 0.691 & -- \\
Node2Vec + regression & 0.934 & -- \\
GraphSAGE-GCN & 0.930 & 0.500 \\
GraphSAGE-mean & 0.950 & 0.598 \\
GraphSAGE-LSTM & \textbf{0.954} & 0.612 \\
GIC & 0.952 & \textbf{0.661} \\
\bottomrule
\end{tabular}
}
\end{sc}
\end{table}
\subsubsection{Experiment Settings}
We verify our GIC on the above bioinformatics and social network datasets. By default, GIC consists of three graph convolution layers, each of which is followed by a graph coarsening layer, and one fully connected layer with a final softmax layer, as shown in Fig.~\ref{fig:network:A}. Its configuration can simply be written as C(64)-P(0.25)-C(128)-P(0.25)-C(256)-P-FC(256), where C, P and FC denote the convolution, coarsening and fully connected layers respectively. The choices of hyperparameters are mainly inspired by the classic VGG net. For example, the coarsening factor is 0.25 (cf. 0.5$\times$0.5 in VGG), and the attribute dimensions at the three conv.\ layers are 64-128-256 (cf. the channel numbers of conv1-3 in VGG). The scale of the receptive field and the number of Gaussian components are both set to 7. We train the GIC network with stochastic gradient descent for roughly 300 epochs with a batch size of 100, a learning rate of 0.1 and a momentum of 0.95.
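For reference, this default setting can be written down compactly as follows (a purely illustrative configuration snippet, not taken from any released code):
\begin{verbatim}
# C = EI-GMM convolution (output dim), P = VI-GMM coarsening (factor), FC = fully connected
GIC_CONFIG = [("C", 64), ("P", 0.25), ("C", 128), ("P", 0.25),
              ("C", 256), ("P", None), ("FC", 256)]
K = C1 = 7                                  # receptive-field scale / Gaussian components
EPOCHS, BATCH_SIZE, LR, MOMENTUM = 300, 100, 0.1, 0.95
\end{verbatim}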
In the bioinformatics datasets, we exploit the labels and degrees of the vertices to generate the initial attributes of each vertex. In the social network datasets, we use the degrees of the vertices. We closely follow the experimental setup in PSCN~\cite{niepert2016learning}: we perform 10-fold cross-validation, with 9 folds for training and 1 fold for testing. The experiments are repeated 10 times and the average accuracies are reported.
\subsubsection{Comparisons with the State-of-the-arts}
We compare our GIC with several state-of-the-art methods, including graph convolution networks (PSCN~\cite{niepert2016learning}, DCNN~\cite{atwood2016diffusion}, NgramCNN~\cite{luo2017deep}), neural networks (SAEN~\cite{orsini2017shift}), feature-based algorithms (DyF~\cite{gomez2017dynamics}, FB~\cite{bruna2013spectral}), random-walk-based methods (RW~\cite{gartner2003graph}), and graph kernel approaches (GK~\cite{shervashidze2009efficient}, DGK~\cite{yanardag2015deep}, WL~\cite{morris2017glocalized}). The comparisons with the state-of-the-art methods are shown in Table~\ref{table:state-of-the-art}. All results are taken from the related literature. We have the following observations.
Deep learning based methods on graphs (including DCNN, PSCN, NgramCNN, SAEN and ours) are superior to the conventional methods in most cases. The conventional kernel methods usually require the calculation of graph kernels with high computational complexity. In contrast, these graph neural networks attempt to learn more abstract high-level features by performing forward inference, which requires a relatively low computational cost.
Compared with recent graph convolution methods, ours can achieve better performance on most datasets, such as PTC, NCI1, NCI109, ENZYMES and PROTEINS. The main reason should be that local variations of subgraphs are accurately described with Gaussian component analysis.
The proposed GIC achieves state-of-the-art results on most datasets. The best performance is obtained on several bioinformatics and social network datasets, including PTC, NCI1, ENZYMES, PROTEINS, COLLAB, IMDB-BINARY and IMDB-MULTI. Although the NgramCNN, DyF, WL and SAEN approaches obtain the best performance on MUTAG, REDDIT-BINARY, NCI109, REDDIT-MULTI-5K and REDDIT-MULTI-12K respectively, our method is fully comparable to them.
\subsection{Node Classification}
For node classification, each node is assigned one or multiple labels, which is challenging when the label set is large. During training, we only use a fraction of the nodes and their labels; the task is to predict the labels of the remaining nodes. Following the setting in~\cite{hamilton2017inductive}, we conduct the experiments on the Reddit and PPI data. For a fair comparison with GraphSAGE~\cite{hamilton2017inductive}, we use the same initial graph data, mini-batch iterators, supervised loss function and neighborhood sampling. The other network parameters are similar to those for graph classification, except that the coarsening layer is removed.
Table~\ref{table:multi-label} summarizes the comparison results. Our GIC obtains the best performance of 0.661 on the PPI data and a comparable result of 0.952 on the Reddit data. The raw features provide important initial information for node multi-label classification: based on the raw features, DeepWalk~\cite{perozzi2014deepwalk} improves the micro-F1 score by about 0.36 on the Reddit data. Meanwhile, we conduct an experiment with node2vec and use a regression model for classification; our method gains better performance than node2vec~\cite{grover2016node2vec}. Comparing different aggregation methods such as GCN~\cite{kipf2016semi}, mean and LSTM, our GIC achieves a significant improvement of about 0.16 on the PPI data and a competitive performance on the Reddit data. The results demonstrate that our approach is robust for inferring the unknown labels of partial graphs.
\subsection{Model Analysis}
\begin{table}[!t]
\centering
\caption{The verification of our convolution and coarsening.}
\label{table:convandpooling}
\begin{sc}
\scalebox{0.68}{
\begin{tabular}{l c c c c}
\toprule
\multirow{2}{*}{Dataset} & ChebNet & GCN & GIC &\multirow{2}{*}{GIC} \\
& w/ VI-GMM & w/ VI-GMM & w/o VI-GMM & \\
\midrule
MUTAG & 89.44 $\pm$ 6.30 & 92.22 $\pm$ 5.66 & 93.33 $\pm$ 4.84 & \textbf{94.44 $\pm$ 4.30} \\
PTC & 68.23 $\pm$ 6.28 & 71.47 $\pm$ 4.75 & 68.23 $\pm$ 4.11 & \textbf{77.64 $\pm$ 6.98} \\
NCI1 & 73.96 $\pm$ 1.87 & 76.39 $\pm$ 1.08 & 79.17 $\pm$ 1.63 & \textbf{84.08 $\pm$ 1.77} \\
NCI109 & 72.88 $\pm$ 1.85 & 74.92 $\pm$ 1.70 & 77.81 $\pm$ 1.88 & \textbf{82.86 $\pm$ 2.37} \\
ENZYMES & 52.83 $\pm$ 7.34 & 51.50 $\pm$ 5.50 & 52.00 $\pm$ 4.76 & \textbf{62.50 $\pm$ 5.12} \\
PROTEINS & 78.10 $\pm$ 3.37 & \textbf{80.09 $\pm$ 3.20} & 78.19 $\pm$ 2.04 & 77.65 $\pm$ 3.21 \\
\bottomrule
\end{tabular}
}
\end{sc}
\end{table}
\begin{table}[!t]
\centering
\caption{Comparisons on $K$ and $C_1$.}
\label{table:GMM}
\begin{sc}
\scalebox{0.68}{
\begin{tabular}{l c c c c}
\toprule
Dataset & $K,C_1=1$ & $K,C_1=3$ & $K,C_1=5$ & $K,C_1=7$ \\
\midrule
MUTAG & 67.77 $\pm$ 11.05 & 83.88 $\pm$ 5.80 & 90.55 $\pm$ 6.11 & \textbf{94.44 $\pm$ 4.30} \\
PTC & 72.05 $\pm$ 8.02 & 77.05 $\pm$ 4.11 & 76.47 $\pm$ 5.58 & \textbf{77.64 $\pm$ 6.98} \\
NCI1 & 71.21 $\pm$ 1.94 & 83.26 $\pm$ 1.17 & \textbf{84.47 $\pm$ 1.64} & 84.08 $\pm$1.77\\
NCI109 & 70.02 $\pm$ 1.57 & 81.74 $\pm$ 1.56 & \textbf{83.39 $\pm$ 1.65} & 82.86 $\pm$ 2.37 \\
ENZYMES & 33.83 $\pm$ 4.21 & \textbf{64.00 $\pm$ 4.42} & 63.66 $\pm$ 3.85 & 62.50 $\pm$ 5.12 \\
PROTEINS & 75.49 $\pm$ 4.00 & 77.47 $\pm$ 3.37 & \textbf{78.10 $\pm$ 2.96} & 77.65 $\pm$ 3.21 \\
\bottomrule
\end{tabular}
}
\end{sc}
\end{table}
\begin{table}[!t]
\centering
\caption{Comparisons on the layer number.}
\label{table:layer}
\begin{sc}
\scalebox{0.68}{
\begin{tabular}{l c c c c}
\toprule
Dataset & $N=2$ & $N=4$ & $N=6$ & $N=8$ \\
\midrule
MUTAG & 86.66 $\pm$ 8.31 & 91.11 $\pm$ 5.09 & 93.88 $\pm$ 5.80 & \textbf{94.44 $\pm$ 4.30} \\
PTC & 64.11 $\pm$ 6.55 & 74.41 $\pm$ 6.45 & 75.29 $\pm$ 6.05 & \textbf{77.64 $\pm$ 6.98} \\
NCI1 & 71.82 $\pm$ 1.85 & 81.36 $\pm$ 1.07 & 83.01 $\pm$ 1.54 & \textbf{84.08 $\pm$ 1.77} \\
NCI109 & 71.09 $\pm$ 2.41 & 80.02 $\pm$ 1.67 & 81.60 $\pm$ 1.83 & \textbf{82.86 $\pm$ 2.37} \\
ENZYMES & 42.33 $\pm$ 4.22 & 61.83 $\pm$ 5.55 & \textbf{64.83 $\pm$ 6.43} & 62.50 $\pm$ 5.12 \\
PROTEINS & 77.38 $\pm$ 2.97 & \textbf{79.81 $\pm$ 3.84} & 78.37 $\pm$ 4.00 & 77.65 $\pm$ 3.21 \\
\bottomrule
\end{tabular}
}
\end{sc}
\end{table}
\textbf{EI-GMM and VI-GMM}: To directly analyze the convolution filtering with EI-GMM, we compare our method with the ChebNet~\cite{defferrard2016convolutional} and GCN~\cite{kipf2016semi} approaches by equipping them with the same coarsening mechanism, VI-GMM. As shown in Table~\ref{table:convandpooling}, under the same coarsening operation, our GIC is superior to ChebNet+VI-GMM and GCN+VI-GMM, which indicates that EI-GMM can indeed encode the variations of subgraphs more effectively. On the other hand, we remove the coarsening layer from our GIC: for graphs of different sizes, we pad zero vertices up to a fixed size and then concatenate the attributes of all vertices for classification. As shown in the same table, GIC still outperforms GIC without VI-GMM coarsening in most cases, which verifies the effectiveness of the coarsening layer VI-GMM.
\textbf{$K$ and $C_1$}: The kernel size $K$ and the number of Gaussian components $C_1$ are the most crucial parameters. Generally, $C_1$ should be proportional to $K$, because a larger receptive field usually contains more vertices (i.e., a relatively large subgraph). Thus we simply take equal values for them, $K=C_1 \in \{1,3,5,7\}$. The experimental results are shown in Table~\ref{table:GMM}. With the increase of $K, C_1$, the performance improves in most cases. The reasons are twofold: i) with an increasing receptive field size, the convolution covers farther-hopping neighbors; ii) with the increase of $C_1$, the variations of subgraphs are encoded more accurately. However, larger values of $K$ and $C_1$ increase the computational burden, and overfitting might occur with the increase of model complexity. Taking NCI109 as an example, in the first convolution layer the encoded attributes (in Eqn.~(\ref{eqn:gau_fea})) have $2\times39\times7=546$ dimensions for each scale of receptive field, where $39$ is the attribute dimension (w.r.t. the number of node labels) and $7$ is the number of Gaussian components. Thus, for 7 scales of receptive field, the final encoded attributes have $546\times7=3822$ dimensions, which are mapped to 64 dimensions by the function $f=[f_1,\cdots,f_{C_1}]$ in Eqn.~(\ref{eqn:gau_fea}). The parameter count is therefore $3822\times64=244608$ in the first layer. Similarly, if the number of node labels is 2, the parameter count sharply decreases to $18816$. Besides, the parameter complexity is related to the number of classes and nodes. The comparison results in Table~\ref{table:GMM} demonstrate the trend of the parameters $K$ and $C_1$ in our GIC framework.
\textbf{Number of stacked layers}: Here we test the number of stacked network layers $N=2,4,6,8$. When $N=2$, only one fully connected layer and one softmax layer are used. When $N=4$, we add two layers: a convolution layer and a coarsening layer. Continuing to stack both, the depth of the network becomes 6 and then 8. The results are shown in Table~\ref{table:layer}. Deeper networks gain better performance in most cases, because a larger receptive field is observed and more abstract structures are extracted in the upper layers. Of course, there is an extra risk of overfitting due to the increased model complexity.
\textbf{An analysis of computational complexity}: In the convolution layer, the computational costs of the receptive fields and the Gaussian encoding are about $O(Km^2)$ and $O(C_1d^2)$ respectively, where $m$ and $d$ are the number of nodes and the feature dimensionality. Generally, $K=C_1\ll d<m$. In the coarsening layer, the time complexity is about $O(pm^2+md)$, where $p$ is the iteration number of the EM algorithm. In all, if the whole GIC alternately stacks $n$ convolution and coarsening layers, the entire time complexity is $O(n(K+p)m^2+nC_1d^2+nmd)$.
\section{Conclusion}
In this paper, we proposed a novel Gaussian-induced convolution network to deal with general irregular graph data. Considering that previous spectral and spatial methods do not well characterize local variations of graphs, we derived an edge-induced GMM to adaptively encode subgraph structures by projecting them onto several Gaussian components and then performing different filtering operations on each Gaussian direction, in analogy to standard CNN filters on images. Meanwhile, we formulated graph coarsening as a vertex-induced GMM to dynamically partition a graph, which was also shown to approximate the weighted graph cut. Extensive experiments on two graph tasks (i.e., graph and node classification) demonstrated the effectiveness and superiority of our GIC compared with baselines and state-of-the-art methods. In the future, we would like to extend our method to more applications on irregular data.
\section{Acknowledgments}
The authors would like to thank the Chairs and the anonymous reviewers for their critical and constructive comments and suggestions. This work was supported by the National Science Fund of China under Grant Nos. 61602244, 61772276, U1713208 and 61472187 and Program for Changjiang Scholars.
\small
\bibliographystyle{aaai}
\section{Introduction}
The traditional formulation of mechanics \cite{Principia} has rested on absolute space and absolute time.
However, Leibniz \cite{LCC} and Mach \cite{Mach} raised philosophically well-motivated relational
objections to these foundations.
(See e.g. \cite{Berkeley, B86, buckets, Comments} for further discussion of this `absolute
versus relative motion debate'.)
It is reasonable to consider whether relational principles apply to physics as a whole.
However, for many years no means were known by which physical theories could be built along these lines.
Then Barbour and Bertotti \cite{BB82} and Barbour \cite{B03} found some relational particle mechanics.
[Reissner's earlier theory -- subsequently rediscovered by Schr\"{o}dinger and by Barbour--Bertotti --
\cite{BB77, buckets} is incompatible with mass-anisotropy experiments.]
Note that the present paper uses the word `relational' in Barbour's sense.
This is worth some discussion because Rovelli \cite{Rovelli} uses the same word, but he and Barbour each take it to mean something different.
In outline, Rovelli's classical relationalism involves objects not being located in spacetime but being
located with respect to each other.
Rovelli also has a quantum relationalism, whereby quantum states for a particular subsystem only make sense with respect to another subsystem.
He then speculates that these two relationalisms of his might be related (p 157 of the online version of \cite{Rovelli}): ``Is there a connection... This is of course very vague, and might lead nowhere, but I find the idea intriguing.''
Barbour on the other hand, has specific spatial and temporal relationalism postulates that embody
particular ideas of Mach (that time is to be abstracted from change) and Leibniz (the identity of
indiscernibles), each of which is sharply implemented by particular mathematics at the classical level,
as follows.
\noindent
A physical theory is {\it temporally relational} if there is no meaningful primary notion of time for
the whole system thereby described (e.g. the universe) \cite{BB82, RWR, FORD}.
This is implemented by using actions that are manifestly reparametrization invariant while also being
free of extraneous time-related variables [such as Newtonian time or General Relativity (GR)'s lapse].
This reparametrization invariance then directly produces primary constraints quadratic in the momenta
(such as the energy constraint of mechanics or the Hamiltonian constraint of GR).
\noindent
A physical theory is {\it configurationally relational} if a certain group $G$ of transformations that
act on the theory's configuration space $\mbox{\sffamily Q}$ are physically meaningless \cite{BB82, RWR, Lan, FORD}.
This can be implemented by, e.g.,\footnote{This
is my own \cite{Lan, Phan, Lan2} passive, mathematician's implementation, whereas Barbour thinks about
this in active terms that physicists sometimes use.
This difference in thinking does not, however, lead to any tangible discrepancies in the material
of this paper.}
using arbitrary-$G$-frame-corrected quantities rather than `bare' $\mbox{\sffamily Q}$-configurations.
For, despite this augmenting $\mbox{\sffamily Q}$ to the principal bundle $P(\mbox{\sffamily Q}, G)$, variation with respect to each
adjoined independent auxiliary $G$-variable produces a secondary constraint linear in the momenta
(e.g. the GR momentum constraint arises as a vectorial collection of 3 such constraints) which removes
one $G$ degree of freedom and one redundant degree of freedom among the $\mbox{\sffamily Q}$ variables.
Thus one ends up dealing with the desired reduced configuration space -- the quotient space $\mbox{\sffamily Q}/G$.
Configurational relationalism includes as subcases both spatial relationalism and internal relationalism
(in the sense of gauge theory).
Configurational relationalism can also be implemented, at least in some cases \cite{TriCl, FORD}, by
working directly on reduced configuration space. (Reissner's theory was formulated along these direct
lines, while I also found that the theories \cite{BB82, B03} can be arrived at in 1- and 2-d from a
direct implementation \cite{FORD}.)
One difference between Barbour and Rovelli's approaches is as follows (another is mentioned in
Subsec 2.1).
In Sec. 2.4.4 of \cite{Rovelli}, Rovelli discusses ``meanings of time" and identifies time in Newtonian
physics as a metric line that exists alongside the configuration space of the system, nowhere
reflecting Barbour's starting point that time is derived from change alongside consideration of
the configuration spaces alone, on which Jacobi-type variational principles are defined.
This then takes Barbour straight to the situation in which no variable is distinguished as time at the
kinematic level; while Rovelli characterizes this as an essential difference between non-relativistic
and relativistic mechanics in his approach, in Barbour's approach this distinction has dissolved.
The recovery of Newtonian dynamics from relational particle models also has features by which such
models are closer in objective structure to GR than Rovelli generally holds nonrelativistic mechanics
models to be in \cite{Rovelli}.
Thereby, relational particle models provide tractable models outside the scheme in \cite{Rovelli}.
As far as I know, Barbour and collaborators have not as yet reached a specific notion of quantum
relationalism.
Rovelli's own idea of relationalism at the quantum level above-mentioned does share a number of features
with record-theoretic positions that Barbour and I have previously advocated \cite{B94II, EOT, Records}.
It is not as yet known whether such records-theoretic positions really have any substantial conceptual or technical ties to Barbour's notion of classical relationalism, any more than Rovelli knows whether his similar quantum position is tied to his own classical relationalism (as quoted above).
While Barbour's relationalism has the virtue of specifically being in line with Leibniz and Mach
(whereby alongside having attained a concrete mathematical implementation of these ideas it is
definitely of interest to the foundations of physics and theoretical physics and so deserves full
investigation), I would not dismiss the possibility that Rovelli's relationalism is {\it also} in line
with other (interpretations of) Leibniz, Mach or other such historical figures, and, in any case,
has original value and is useful in a major quantum gravity program (Loop Quantum Gravity).
{\sl The current paper and its sequel \cite{08II} concern technical advances with examples specifically
of Barbour's relational program} -- I subsequently use `relational' in this sense except where I
specifically say otherwise.
Sec 2 presents Euclidean relational particle mechanics \cite{BB82, B86, BS89, B94I, GGM, Paris, 06I,
TriCl, FORD} (also referred to as {\it scaled} models) and similarity relational particle mechanics
\cite{B03, Paris, 06II, TriCl, FORD} (also referred to as {\it scalefree} models) and further motivates
the relational scheme by arguing that some (conformo)geometrodynamical formulations of GR can be
regarded as arising therein too.
Sec 3 explains the configuration space structure of relational particle mechanics which has further parallels with GR.
All these parallels are eventually relevant as regards relational particle mechanics furbishing useful toy models
\cite{K92, B94II, EOT, 06I, 06II, SemiclI, Records, New, BF08} for the Problem
of Time in Quantum Gravity \cite{K92, I93} and other issues in the foundations of Quantum Cosmology
\cite{EOT, Halliwell03}.
Use of relational particle mechanics in both the absolute versus relative motion debate and the study of conceptual strategies
suggested toward resolving the Problem of Time in Quantum Gravity would benefit from having a good
working understanding of explicit examples of relational particle mechanics,
which is the subject of this paper at the classical level.
Relational particle models concern N particles in dimension d.
As the general configuration for d = 3 is an N-haedron, I term the 3-d N-particle relational particle
model {\it N-haedronland}.
Likewise, I term the 2-d N-particle relational particle model {\it N-a-gonland}, and the 1-d one
{\it N-stop metroland} (as in urban public transport maps).
The special 3-a-gonland considered as the principal example in this paper I term {\it triangleland}.
Noting that scalefree N-stop metroland and N-a-gonland have fairly standard geometry
($\mathbb{S}^{\sN - 2}$ and $\mathbb{CP}^{\sN - 2}$ respectively \cite{FORD}) permits explicit
reductions \cite{TriCl, FORD} and
subsequent availability of useful coordinate systems and methods of mathematical physics.
For scalefree triangleland, $\mathbb{CP}^{1} = \mathbb{S}^{2}$, so one has `twice as many
techniques', so in this paper I choose to study this case.
Scaled triangleland also permits explicit reduction and its configuration space in shape-scale variables
takes the form $C(\mathbb{S}^{2})$, which also makes for tractable and interesting explicit examples,
where the {\it cone} \cite{Cones} C(X) over a space X is
$\mathbb{R}_+ \times X \mbox{ } \bigcup \mbox{ } 0$, the special cone-point or `apex'.
In Sec 4 I give Euler--Lagrange equations for scalefree N-stop metroland and N-a-gonland and give
further specific forms for the exceptional triangleland case.
I then consider examples with potentials that are independent of the relative angle $\Phi$\footnote{The
notation for this and the next paragraph is as follows.
$\underline{\mbox{R}}_i$, $i$ = 1, 2, are {\it relative Jacobi coordinates} \cite{Marchal}.
$\Phi$ is the angle between these.
By $\underline{\iota}_i$ being mass-weighted, I mean that $\underline{\iota}_i =
\sqrt{\mu_i}\underline{R}_i$, where $\mu_i$ are the particle (cluster) masses associated with
$\underline{\mbox{R}}_i$ \cite{Marchal}.
$\iota_i = ||\underline{\iota}_i||$, the magnitude of $\underline{\iota}_i$, and
$I_i = \iota_i^2 = \mu_iR_i^2$, the ith Jacobi (barycentric) partial moment of inertia.
${\cal R}$ is the simple ratio variable $\iota_1/\iota_2$ and $\Theta = 2\mbox{arctan}{\cal R}$, which
turns out to geometrically be the azimuthal spherical angle; see also Fig 1 and Secs 2--3 for further
interpretation and depictions of these.}
as well as harder ones that depend on this.
(The former is a substantial simplification \cite{TriCl} -- in close analogy with centrality in ordinary
mechanics -- but it is the harder latter case that is relevant to various Problem of Time schemes --
semiclassical emergent time and possibly the semblance of dynamics in timeless records schemes discussed
in paper II \cite{08II}.)
In particular, I consider the general case of harmonic oscillator like potentials between all particles, looking at the
`special' ($\Phi$-independent) and `very special' (constant) subcases within as well as the general
small and large asymptotic behaviour, and recast this in terms of the problem's `original variables' --
{\it mass-weighted relative Jacobi variables}
$\underline{\iota}_1$, $\underline{\iota}_2$ that still contain an absolute orientation.
Identifying the `very special' problem's mathematics as corresponding to the linear rigid rotor,
the `special' problem's as that with additionally a background homogeneous electric field in the axial
direction and the general problem's likewise but now with a general direction, is valuable in this
paper and in subsequent work at the QM level in paper II.
In Sec 5 I use that a rotation (or, equivalently, a normal modes construction) maps the general case to
the special case for this particular problem.
Thus I can get as far in solving for the general case as I can with the `special' case.
In Sec 6, I consider scaled triangleland's Euler--Lagrange equations in terms of the
straightforwardly relational variables $(\iota_1, \iota_2, \Phi)$, the useful ($I_1, I_2, \Phi$)
coordinates (that turn out to be parabolic coordinates), the $C(\mathbb{CP}^1)$
presentation shape--scale coordinates $(I, {\cal R}, \Phi)$ and the $C(\mathbb{S}^2)$
presentation shape--scale coordinates $(I, \Theta, \Phi)$.
I then exactly solve the `special' case of multi-harmonic oscillator like potential for scaled triangleland,
by mapping it in $(I_1, I_2, \Phi)$ coordinates to a close analogue of the Kepler--Coulomb problem
(which move remains useful at the quantum level \cite{08III}).
I then give Euclidean relational particle mechanics' own rotation/normal modes construction, whereby my having obtained the special
solution enables me to also obtain the general solution, albeit this paper only has room for
presenting the special solution and its physical interpretation.
I conclude in Sec 7, including an outline of further promising relational particle mechanics examples.
Paper II considers this paper's similarity models at the quantum level and \cite{08III}
likewise for this paper's Euclidean models.
Interesting Problem of Time in Quantum Gravity applications of these models will be developed in yet
further papers \cite{New}.
\section{Examples of relational theories}
\subsection{Euclidean relational particle mechanics}
In {\it Euclidean}, or {\it scaled}, {\it relational particle mechanics} \cite{BB82, B86, B94I, EOT}, only
relative times, relative angles and relative separations are meaningful.
E.g. for 3 particles in dimension d $> 1$, Euclidean relational particle mechanics is a dynamics of the
triangle that the 3 particles form.
Euclidean relational particle mechanics was originally \cite{BB82} conceived for $\mbox{\sffamily Q} = \mbox{\sffamily Q}(\mN, \textrm{d}) =
\mathbb{R}^{\sN\mbox{\scriptsize d}}$ the positions $q_{I\alpha}$ of N particles in d-dimensional space with $G$ the
d-dimensional Euclidean group Eucl(d)
of translations, Tr(d), and rotations, Rot(d).\footnote{Lower-case Greek letters are spatial indices.
Capital indices are used for the N particle position coordinates,
lower-case indices for n = N -- 1 coordinates describing relative particle (cluster) separations,
barred and tilded lower-case indices to take values 1 to n -- 1 values and
hatted lower-case indices to take values 1 to n -- 2.
The dot denotes ${\textrm{d}}/{\textrm{d}\lambda}$ for $\lambda$ a label-time parameter that has no physical
meaning since (\ref{action}) is reparametrization-invariant.
$B_{\alpha}$ generates the rotations.
As well as the obvious 3-d case, the 1 and 2 d cases are incorporable into this form as follows.
Take $B_{\alpha}$ = (0, 0, $B$) in 2-d.
Then $\mbox{\tt L}^{\alpha}$ has just one component that is nontrivially zero.
Take furthermore $B$ = 0 in 1-d.
Then there is no $\mbox{\tt L}^{\alpha}$ constraint at all.
$M^{ij\alpha\beta}$ is the kinetic metric $\mu_i\delta^{ij}\delta^{\alpha\beta}$, where $\mu_i$ are the
Jacobi masses \cite{Marchal}.
[E.g. for this paper's triangleland example, these are
$\mu_1 = m_2m_3/\{m_2 + m_3\}$ and $\mu_2 = m_1\{m_2 + m_3\}/\{m_1 + m_2 + m_3\}$ for $m_I$ the
particle masses.].
$N_{ij\alpha\beta}$ is the inverse of this array.}
However, eliminating Tr(d) is trivial and produces a theory of essentially the same form as the original
if relative Jacobi coordinates $R_{i\alpha}$ are employed \cite{06I}, so I take that as my starting
position.
Relative Jacobi coordinates are inter-particle (cluster) separations chosen such that the kinetic term is
diagonal \cite{Marchal}; thus I take my $\mbox{\sffamily Q}$ to be $\mbox{\sffamily R}(\mN, \textrm{d})$, the space of relative separations
(this is $\mathbb{R}^{\sn\mbox{\scriptsize d}}$), and $G$ to be Rot(d).
Then configurational relationalism takes the form that one is to construct one's action using the
arbitrary Rot(d) frame expressions $\circ R_{i\alpha} \equiv \dot{R}_{i\alpha} -
{\epsilon_{\alpha}}^{\beta\gamma}\dot{B}_{\beta}R_{i\gamma}$ rather than `bare' $\dot{R}_{i\alpha}$.
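For concreteness, consider the standard 3-particle worked example (this is textbook material rather than anything specific to the present formulation): one may take
\begin{equation}
\underline{R}_1 = \underline{q}_3 - \underline{q}_2 \mbox{ } , \mbox{ } \mbox{ }
\underline{R}_2 = \underline{q}_1 - \frac{m_2\underline{q}_2 + m_3\underline{q}_3}{m_2 + m_3} \mbox{ } ,
\end{equation}
whereupon the total kinetic energy decomposes as
\begin{equation}
\frac{1}{2}\sum_{I = 1}^{3}m_I||\dot{\underline{q}}_I||^2 =
\frac{1}{2}\{m_1 + m_2 + m_3\}||\dot{\underline{q}}_{\mbox{\scriptsize com}}||^2 +
\frac{1}{2}\mu_1||\dot{\underline{R}}_1||^2 +
\frac{1}{2}\mu_2||\dot{\underline{R}}_2||^2
\end{equation}
for $\underline{q}_{\mbox{\scriptsize com}}$ the centre of mass position and $\mu_1$, $\mu_2$ the triangleland Jacobi masses quoted in the preceding footnote, so that dropping the (irrelevant) centre of mass motion indeed leaves a kinetic term that is diagonal in the $\underline{R}_i$.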
\mbox{ }
\noindent{\footnotesize [{\bf Figure 1}: Coordinate systems for scaled triangleland.
\noindent i) Absolute particle position coordinates
($\underline{q}_1$, $\underline{q}_2$, $\underline{q}_3$) with respect to fixed axes and a fixed origin O; the corresponding particle
masses are $m_I$, $I$ = 1 to 3.
\noindent ii) Relative particle position coordinates, any 2 of which form a basis.
\noindent iii) Relative Jacobi coordinates ($\underline{R}_1$, $\underline{R}_2$); the corresponding Jacobi masses are $\mu_i$, $i$ = 1, 2.
The mass-scaled relative Jacobi coordinates are related to these by $\underline{\iota}_i =
\sqrt{\mu_i}\underline{R}_i$.
\noindent iv) Bipolar relative Jacobi coordinates ($\rho_1$, $\theta_1$, $\rho_2$, $\theta_2$).
The mass-scaled radial Jacobi coordinates are $\iota_i = \sqrt{\mu}\rho_i$.
These coordinates still refer to fixed axes.
\noindent v) Fully relational coordinates ($\rho_1, \rho_2, \Phi$), for $\Phi$ =
arccos$\left(\frac{\underline{R}_1\cdot\underline{R}_2}{||\underline{R}_1||||\underline{R}_2||}\right)$
the relational `Swiss army knife angle' between the 2 relative Jacobi vectors.
The coordinate ranges are $0 \leq \rho_i < \infty$, $0 \leq \Phi < 2\pi$. ]}
\mbox{ }
The action that one builds is of Jacobi type \cite{Lanczos} so as to implement temporal relationalism,
\begin{equation}
\mbox{\sffamily S}^{}_{}[R_{i\alpha}, \dot{R}_{i\alpha}, \dot{B}_{\alpha}] =
2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}^{}\{\mbox{\sffamily U} + \mbox{\sffamily E}\}} \mbox{ } .
\label{action}
\end{equation}
Here the kinetic term $\mbox{\sffamily T}^{}(R_{i\alpha}, \dot{R}_{i\alpha}, \dot{B}_{\alpha}) =
M^{ij\alpha\beta}\circ R_{i\alpha}\circ R_{j\beta}/2$ is homogeneous quadratic in the velocities.
$\mbox{\sffamily U}$ is minus the potential energy $\mbox{\sffamily V}$ which is a function of
$\sqrt{\underline{R}_i\cdot\underline{R}_j}$ alone, and $\mbox{\sffamily E}$ is the total energy of the closed
system/universe; hitherto this has been taken to have a fixed value (but should it: no observer would
know it exactly?)
[Note that each such action $\mbox{\sffamily S}_{\mbox{\tiny J}} = 2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}\{\mbox{\sffamily E} + \mbox{\sffamily U}\}}$ is indeed equivalent to
the more well known Euler--Lagrange actions
\begin{equation}
\mbox{\sffamily S} = \int\textrm{d} t\{\mbox{\sffamily T} - \mbox{\sffamily V}\} \mbox{ } ,
\label{Lagaction}
\end{equation}
where $\mbox{\sffamily T}$ now differs from its previous form in containing $*$ in place of $\dot{\mbox{ }}$.
See e.g. \cite{Lanczos} for obtaining actions of type (\ref{action}) from actions of type
(\ref{Lagaction}) by parametrization and Routhian reduction to eliminate $\textrm{d} t/\textrm{d}\lambda$.]
From (\ref{action}), the conjugate momenta are then
\begin{equation}
P^{i\alpha} = M^{ij\alpha\beta}*R_{j\beta} \mbox{ } \mbox{ for } \mbox{ }
*R_{i\alpha} \equiv {R_{i\alpha}}^* - \epsilon_{\alpha}\mbox{}^{\beta\gamma}{B_{\beta}}^*R_{i\gamma}
\label{Rmom}
\end{equation}
and $\mbox{}^* \equiv \sqrt{\{\mbox{\sffamily U} + \mbox{\sffamily E}\}/{\mbox{\sffamily T}^{}}}\mbox{ }\dot{\mbox{}} = {\textrm{d}}/{\textrm{d} t}$ for $t$ the
emergent `Leibniz--Mach--Barbour' time that coincides here with Newtonian time.
With this object introduced, another difference between Barbour's scheme and Rovelli's can be pointed out.
In Rovelli's scheme, one clock/timestandard is as good as another (at the conceptual level, rather than as regards what is convenient or accurate), while in Barbour's scheme there emerges a Leibniz--Mach--Barbour timestandard to which everything in the universe contributes (an `ephemeris' timestandard \cite{B94I,SemiclI,fqxi}).
This choice of time is distinguished by its substantially simplifying both the above momentum-velocity
relations and the Euler--Lagrange equations, and it amounts to an emergent recovery of other notions of
time like Newtonian time here in mechanics, or proper time or cosmic time in Sec 2.3, and it has some
parallels with the actual ephemeris time in official use in the first half of the 20th century
\cite{Clemence}, which is based on the totality of the motions of the objects in the solar system.
Reparametrization invariance implies \cite{Dirac} that these must obey at least one primary constraint;
here there is one, which takes the form of an energy constraint
\begin{equation}
\mbox{\tt H} \equiv N_{ij\alpha\beta}P^{i\alpha}P^{j\beta}/2 + \mbox{\sffamily V} = \mbox{\sffamily E}
\label{En} \mbox{ } ,
\end{equation}
to which the momenta contribute quadratically but not linearly.
Variation with respect to $B_{\alpha}$ yields as a secondary constraint the zero total angular
momentum constraint
\begin{equation}
{\mbox{\tt L}}^{\alpha} \equiv {\epsilon^{\alpha\beta}}_{\gamma}R_{i\beta}P^{i\gamma} = 0 \mbox{ } ,
\label{ZAM}
\end{equation}
which is linear in the momenta and is interpretable as the physical content of the theory being in
relative separations and relative angles and not in absolute angles.
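As a quick sketch of where this comes from (with indices moved freely, since the configuration space metric here is flat): $B_{\alpha}$ enters (\ref{action}) only through its velocity, so the momentum conjugate to it is
\begin{equation}
\frac{\partial}{\partial \dot{B}_{\alpha}}\left(2\sqrt{\mbox{\sffamily T}\{\mbox{\sffamily U} + \mbox{\sffamily E}\}}\right) =
\sqrt{\frac{\mbox{\sffamily U} + \mbox{\sffamily E}}{\mbox{\sffamily T}}}\,\frac{\partial \mbox{\sffamily T}}{\partial \dot{B}_{\alpha}} =
- {\epsilon^{\alpha\beta}}_{\gamma}R_{i\beta}P^{i\gamma}
\end{equation}
by (\ref{Rmom}), and the (free end point) variation with respect to the auxiliary $B_{\alpha}$ then sets this to zero, which is precisely (\ref{ZAM}).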
Euclidean relational particle mechanics is Leibnizian/Machian in form, and yet is in agreement with a subset of Newtonian mechanics -- the
zero total angular momentum universes.
The recovery from relationalism of many of the results previously obtained by standard absolutist
physics and the finding of similar mathematical structures along both routes (at least in the simpler
cases) is an interesting reconciliation as regards the extent to which we have not been prejudiced by
studying absolutist physics.
This paper and \cite{08II} show that this trend extends further (albeit the mathematics arising from similarity relational particle mechanics is distinct from that in the ordinary central force problem in having restricted, unusual potentials inherited from the scale invariance).
Sec 2.3 and 2.4 recollect that a similar trend is also present in GR.
\subsection{Similarity relational particle mechanics}
In {\it similarity}, or {\it scalefree}, {\it relational particle mechanics} \cite{B03, 06II, TriCl}, only relative times, relative angles and ratios of relative separations
are meaningful.
I.e., it is a dynamics of shape excluding size: a {\sl dynamics of pure shape}.
In the case of 3 particles in dimension d $> 1$, similarity relational particle mechanics is the dynamics of the shape of the
triangle that the 3 particles form.
$G$ is now augmented by the dilations Dil(d) = Dil to form the Similarity group Sim(d), so that
$\circ R_{i\alpha}$ now takes the form $\circ R_{i\alpha} \equiv \dot{R}_{i\alpha} -
{\epsilon_{\alpha}}^{\beta\gamma}\dot{B}_{\beta}R_{i\gamma} + \dot{C}R_{i\alpha}$, where $C$ generates
the dilations.
Noting that the `banal conformal transformation',\footnote{Performing this
transformation clearly leaves invariant product-type actions and so of Jacobi-type actions
(\ref{action}) and its similarity relational particle mechanics counterpart and of the reduced actions that follow from these in Sec 3, and
of their GR counterparts such as the Baierlein--Sharp--Wheeler action \cite{BSW}, (\ref{GRaction}), and (\ref{GRconfaction}) and its
variants.
$\mbox{\sffamily T}$ and $\mbox{\sffamily E} + \mbox{\sffamily U}$ scale compensatingly, and one then deduces from $\mbox{}^* \equiv \sqrt{\{\mbox{\sffamily{\scriptsize E}} + \mbox{\sffamily{\scriptsize U}}\}/{\mbox{\sffamily{\scriptsize T}}}}
\mbox{ }\dot{\mbox{}}$ that $*$ scales as $* \longrightarrow *_{\Omega} = \Omega^{-2}*$.
(Euler--Lagrange or Arnowitt--Deser--Misner type actions have this invariance too but its presence therein
is less obvious to spot \cite{Banal}.)
I term each particular choice of $\Omega$ a `banal conformal representation'.
Note: classically Euler--Lagrange equations are invariant \cite{Banal} so applying a banal conformal transformation makes no
difference and it is just a means of computational convenience in some cases.
But retaining this lack of difference at the quantum level has implications \cite{Banal, 08II, 08III}.}
\begin{equation}
\mbox{\sffamily T} \rightarrow \mbox{\sffamily T}_{\Omega} = \Omega^2\mbox{\sffamily T} \mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\mbox{\sffamily E} + \mbox{\sffamily U} \rightarrow \mbox{\sffamily E}_{\Omega} + \mbox{\sffamily U}_{\Omega} = \{\mbox{\sffamily E} - \mbox{\sffamily V}\}/\Omega^2 \mbox{ } ,
\end{equation}
leaves the Jacobi action invariant, the most natural presentation \cite{TriCl} for the action is
\begin{equation}
\mbox{\sffamily S}^{}_{}[R_{i\alpha}, \dot{R}_{i\alpha}, \dot{B}_{\alpha}, \dot{C}] =
2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}^{}\{\mbox{\sffamily U} + \mbox{\sffamily E}\}}
\end{equation}
with $\mbox{\sffamily T}^{}(R_{i\alpha}, \dot{R}_{i\alpha}, \dot{B}_{\alpha}, \dot{C}) = M^{ij\alpha\beta}
\circ R_{i\alpha} \circ R_{j\beta}/2\mbox{I}$ for $\mbox{I}$ the total barycentric moment of inertia of the system,
whereupon $\mbox{\sffamily V}$ is a function only of manifestly scale-invariant {\sl ratios} of
$\sqrt{\underline{R}_i\cdot\underline{R}_j}$ (i.e. homogeneous of degree zero) and $\mbox{\sffamily E}$ comes unweighted.
[Barbour's original presentation \cite{B03}, in which the potential is homogeneous of degree $-2$ and
an energy E cannot be added on to this (but $\mbox{\sffamily E}/\mbox{I}$ {\sl can}), is related to this one by use of
$\Omega^2 = \mbox{I}$, i.e. by not dividing through by the moment of inertia in the first place.]
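Explicitly, the invariance noted above is immediate at the level of the Jacobi integrand:
\begin{equation}
\sqrt{\mbox{\sffamily T}_{\Omega}\{\mbox{\sffamily E}_{\Omega} + \mbox{\sffamily U}_{\Omega}\}} =
\sqrt{\Omega^2\mbox{\sffamily T}\,\{\mbox{\sffamily E} + \mbox{\sffamily U}\}/\Omega^2} =
\sqrt{\mbox{\sffamily T}\{\mbox{\sffamily E} + \mbox{\sffamily U}\}} \mbox{ } .
\end{equation}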
Then (in the conformally natural presentation I use above) the conjugate momenta are given by
(\ref{Rmom}) but divided by $\mbox{I}$ and containing the $*$ corresponding to the new $\circ$.
There are again primary and secondary constraints respectively of form (\ref{En}) with the first term
multiplied by $\mbox{I}$ and (\ref{ZAM}), as well as a secondary constraint from variation with respect to $C$,
\begin{equation}
\mbox{\tt D} \equiv R_{i\alpha}P^{i\alpha} = 0 \mbox{ } ,
\label{ZDM}
\end{equation}
which is also linear in the momenta.
This is the dilational (or Euler) constraint -- it says that the dilational momentum of the whole system
is zero so that the physical content of the theory is not in relative separations but in ratios
of relative separations (relative angles already being functions of ratios, they are unaffected).
\subsection{Relational formulation of geometrodynamics}
Important further motivation for the study of relationalism and relational particle mechanics is that
General Relativity (GR) can also be formulated as a relational theory.
Begin by recollecting that as well as being a spacetime theory, GR can be studied as a dynamics obtained
by splitting spacetime with respect to a family of spatial hypersurfaces \cite{ADM}.
However, answering a question of Wheeler \cite{Battelle}, this dynamics can be taken to follow from
first principles of its own \cite{HKT, RWR, Phan}.
And one of the two known such sets of first principles is relational, with $\mbox{\sffamily Q}$ =
Riem($\Sigma$, 3) -- the space of positive-definite 3-metrics on some fixed topology $\Sigma$, which I
take to be compact without boundary for simplicity -- and $G$ = Diff($\Sigma$, 3), the diffeomorphisms on $\Sigma$ \cite{RWR, Lan, Phan}.
The arbitrary Diff($\Sigma$, 3) frame expressions are
\noindent
$\circ h_{\mu\nu} \equiv \dot{h}_{\mu\nu} - {\pounds}_{\dot{F}}h_{\mu\nu}$
rather than `bare' $\dot{h}_{\mu\nu}$.\footnote{The
dot now denotes ${\partial}/{\partial\lambda}$.
$F_{\mu}$ generates Diff($\Sigma$, 3).
$\pounds_{\dot{F}}$ is the Lie derivative with respect to the vector field $\dot{F}_{\mu}$.
$h$, $D_{\mu}$ and $\mbox{Ric}(h)$ are the determinant, covariant derivative and Ricci scalar associated
with $h_{\mu\nu}$.
$\Lambda$ is the cosmological constant.
${\cal M}^{\mu\nu\rho\sigma} = h^{\mu\rho}h^{\nu\sigma} - h^{\mu\nu}h^{\rho\sigma}$ is the kinetic
supermetric of GR, with determinant ${\cal M}$ and inverse ${\cal N}_{\mu\nu\sigma\rho} =
h_{\mu\rho}h_{\nu\sigma} - h_{\mu\nu}h_{\rho\sigma}/2$ (which is the undensitized version of the
DeWitt supermetric \cite{DeWitt}).}
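A quick check that ${\cal N}_{\mu\nu\rho\sigma}$ indeed inverts ${\cal M}^{\mu\nu\rho\sigma}$ on symmetric objects in 3-$d$ (where $h^{\rho\sigma}h_{\rho\sigma} = 3$):
\begin{equation}
\{h^{\mu\rho}h^{\nu\sigma} - h^{\mu\nu}h^{\rho\sigma}\}\{h_{\rho\alpha}h_{\sigma\beta} - h_{\rho\sigma}h_{\alpha\beta}/2\} =
\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta} - h^{\mu\nu}h_{\alpha\beta}/2 - h^{\mu\nu}h_{\alpha\beta} + 3h^{\mu\nu}h_{\alpha\beta}/2 =
\delta^{\mu}_{\alpha}\delta^{\nu}_{\beta} \mbox{ } .
\end{equation}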
The action that one builds so as to explicitly implement temporal relationalism is \cite{RWR, Lan, ABFO, Phan}
\begin{equation}
\mbox{\sffamily S}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{}[h_{\alpha\beta}, \dot{h}_{\alpha\beta}, \dot{F}_{\beta}] =
2\int\textrm{d}\lambda\int\textrm{d}^{3}x\sqrt{h}\sqrt{\mbox{\sffamily T}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{}
\{\mbox{Ric}(h) - 2\Lambda\}} \mbox{ } \mbox{ for } \mbox{ }
\mbox{\sffamily T}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{}(x^{\omega}, h_{\mu\nu}, \dot{h}_{\mu\nu}; \dot{F}_{\mu}] =
\frac{1}{4}{\cal M}^{\mu\nu\rho\sigma}\circ h_{\mu\nu}\circ h_{\rho\sigma}
\label{GRaction} \mbox{ } ,
\end{equation}
which bears many similarities to the better-known Baierlein--Sharp--Wheeler \cite{BSW}
action.\footnote{The
difference is that the Baierlein--Sharp--Wheeler action contains the shift, $\beta_{\mu}$, while the
action (\ref{GRaction}) contains the velocity associated with the frame variable $F_{\mu}$, which is
such that $\dot{F}_{\mu} = \beta_{\mu}$.
Both of these actions are free of extraneous time-related variables (unlike the Arnowitt--Deser--Misner
action, which does contain one -- the lapse -- multiplier elimination of which produces the
Baierlein--Sharp--Wheeler action).
The above-mentioned difference does not affect the outcome of the variational procedure \cite{FEPI};
however, it does make (\ref{GRaction}) homogeneous quadratic in $\partial/\partial\lambda$, so that the $\lambda$'s
cancel out of $\textrm{d}\lambda\sqrt{\mbox{\sffamily T}}$; thus it, and not the Baierlein--Sharp--Wheeler action, is
manifestly reparametrization invariant as well as free of extraneous time-related variables,
i.e. temporally relational as defined on page 1.}
Also note that one does not need to assume the GR form of the kinetic metric or potential; relational
postulates plus a few simplicities give this since the Dirac procedure \cite{Dirac} prevents most other
likewise simple choices of kinetic term $\mbox{\sffamily T}$ from working \cite{RWR, SanOM, Than, Lan, Phan}.
Yet further motivation is that 1) configurational relationalism is closely related \cite{Lan, FEPI} to
certain formulations of gauge theory.
2) The above relational formulation of GR is furthermore robust to the inclusion of a sufficiently full
set of fundamental matter sources so as to describe nature \cite{AB, Van, Lan, Phan, Lan2}.
Then the conjugate momenta are
\begin{equation}
\pi^{\mu\nu} = \sqrt{h}{\cal M}^{\mu\nu\rho\sigma}*h_{\rho\sigma} \mbox{ } \mbox{ for } \mbox{ }
*h_{\rho\sigma} = {h_{\rho\sigma}}^* - \pounds_{F^*}h_{\rho\sigma}
\label{tumvel}
\end{equation}
for $\mbox{}^* \equiv \sqrt{\{\mbox{Ric}(h) - 2\Lambda\}/{\mbox{\sffamily T}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{}}}\mbox{ } \dot{\mbox{}} \mbox{ }$.
(\ref{GRaction}) being reparametrization invariant, there must likewise be at least one primary
constraint, which is in this case the GR Hamiltonian constraint
\begin{equation}
{\cal H} \equiv \frac{1}{\sqrt{h}}{\cal N}_{\mu\nu\rho\sigma}\pi^{\mu\nu}\pi^{\rho\sigma} -
\sqrt{h}\{ \mbox{Ric}(h) - 2\Lambda\} = 0 \mbox{ }
\label{Ham}
\end{equation}
to which the momenta contribute quadratically but not linearly.
Variation with respect to $F_{\mu}$ yields as a secondary constraint the GR momentum constraint
\begin{equation}
{\cal H}_{\mu} \equiv - 2D_{\nu}{\pi^{\nu}}_{\mu} = 0 \mbox{ } ,
\label{Mom}
\end{equation}
which is linear in the gravitational momenta and interpretable as GR being more than just a theory of
dynamical 3-metrics: the physical information is in the underlying geometry and not in the allocation
of points to that geometry.
While, the purely quadratic nature of ${\cal H}$ leads at the quantum level to a stationary equation $\widehat{\cal H}\Psi = \mbox{\sffamily E}\Psi$ rather than
to a time-dependent one, ${i\hbar}{\partial\Psi}/{\partial T} = \widehat{\cal H}\Psi$ for some notion of time $T$.
The $\mbox{\tt H}$ of relational particle mechanics also has this feature and so itself manifests the Problem
of Time (Sec II.1); it is a reasonable model for this in that a number of the conceptual strategies
subsequently suggested for GR have nontrivial counterparts for relational particle mechanics.
The presence of square roots in the above actions so as to implement manifest temporal relationalism
has caused some concern with Referees due to square roots causing substantial difficulties at the
quantum level.
In this respect, I note that the above square roots occur {\it at the classical level in the Lagrangian
formulation}.
However, {\sl there are no square roots in the Hamiltonians resulting from such actions},
and it is these that I promote to quantum equations in \cite{08II}, so that there are
{\sl no square roots in the quantum equations}, and so this work encounters none of the problems
associated with handling square roots at the quantum level.
\subsection{Relational formulation of conformogeometrodynamics}
This has a number of ties with the work of Lichnerowicz \cite{Lich44} and York \cite{York72} on maximal
and constant mean curvature spatial slices respectively, which is important in numerical
relativity \cite{CMCApps}.
A preliminary action is
\begin{equation}
\mbox{\sffamily S}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{} =
\int\textrm{d}\lambda\int\textrm{d}^3x\sqrt{h}\phi^6\sqrt{\mbox{\sffamily T}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{}\{\phi^{-4}\{\mbox{Ric}(h) -8D^2\phi/\phi\}\}}
\label{GRconfaction} \mbox{ } ,
\end{equation}
where\footnote{I, like \cite{BO},
present this for simplicity with no $\Lambda$ term; see \cite{ABFO} for inclusion of this.
Also, \cite{BO} first considered this with two separate multipliers instead of a single more general
auxiliary whose velocity also features in the action and has to be free end hypersurface varied; the way
this is presented here is that of \cite{ABFKO} and \cite{FEPII}.
See also \cite{FEPI, FEPII} for justification of the type of variation in use.}
$$
\mbox{\sffamily T}^{\mbox{\scriptsize G}\mbox{\scriptsize R}}_{}(x^{\omega}, h_{\mu\nu}, \phi, \dot{h}_{\mu\nu}, \dot{\phi}; \dot{F}_{\mu}] =
\{\phi^{-4}h^{\mu\rho}\phi^{-4}h^{\nu\sigma} - \phi^{-4}h^{\mu\nu}\phi^{-4}h^{\rho\sigma}\}
\circ\{\phi^4h_{\mu\nu}\}\circ\{\phi^4h_{\rho\sigma}\} =
$$
\begin{equation}
\{h^{\mu\rho}h^{\nu\sigma} - h^{\mu\nu}h^{\rho\sigma}\}
\{\circ h_{\mu\nu} + 4 h_{\mu\nu} \circ\phi/\phi\}
\{\circ h_{\rho\sigma} + 4 h_{\rho\sigma}\circ\phi/\phi\} \mbox{ } ;
\end{equation}
$\phi$ is a conformal factor.
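The equality of these two forms follows from the conformal weighting alone: by the Leibniz rule for both $\dot{\mbox{ }}$ and $\pounds_{\dot{F}}$,
\begin{equation}
\circ\{\phi^4h_{\mu\nu}\} = \phi^4\{\circ h_{\mu\nu} + 4h_{\mu\nu}\circ\phi/\phi\} \mbox{ } ,
\end{equation}
so the $\phi^{-4}$ factors in the inverse metrics cancel against the $\phi^{4}$ factors arising from the two velocities.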
This action then gives as a primary constraint
\begin{equation}
{\cal H}_{\phi} \equiv \phi^{-8}\frac{1}{\sqrt{h}}\pi_{\mu\nu}\pi^{\mu\nu} - \sqrt{h}
\left\{
\mbox{Ric}(h) - \frac{8D^2\phi}{\phi}
\right\} = 0
\end{equation}
(Lichnerowicz equation), ${\cal H}_{\mu}$ as a secondary constraint from $F_{\mu}$ variation, and
\begin{equation}
\pi = 0
\end{equation}
(condition for a maximal slice) as one part of the free end hypersurface \cite{FEPI, FEPII} variation of
$\phi$, but the other part of this variation entails frozenness (the well-known non-propagability of
maximal slicing for spatially compact without boundary GR).
In \cite{BO} and \cite{ABFO} this frozenness was circumvented by considering a new action with
division by $\mbox{Vol}^{2/3}$, which amounts to fully using $G$ = Diff($\Sigma$, 3) $\times$
Conf($\Sigma$, 3), where Conf($\Sigma$, 3) is the group of conformal transformations on $\Sigma$.
But the subsequent variational principle no longer gives GR and is furthermore questionable as an
alternative theory \cite{ABFO, Lan, Than}.
In \cite{ABFKO} it was circumvented rather by using $G$ = Diff($\Sigma$, 3) $\times$ VPConf($\Sigma$, 3),
where VPConf($\Sigma$, 3) are the (global) volume-preserving conformal transformations on $\Sigma$ as
implemented by using
\begin{equation}
\widehat{\phi} = \phi/\left\{\int\textrm{d}^3x\sqrt{h}\phi^6\right\}^{1/6} \mbox{ } .
\end{equation}
Subsequently, the primary constraint (now denoted by ${\cal H}_{\widehat{\phi}}$) picks up more terms (making it a Lichnerowicz-York equation rather than a
Lichnerowicz equation).
While, $\phi$ variation now gives
\begin{equation}
\pi/\sqrt{h} = C \mbox{ } ,
\end{equation}
(condition for a constant mean curvature slice) alongside an equation that successfully maintains this.
The addition of matter to the present paragraph's scheme has not been extensively
studied, but no significant hindrances are known to date \cite{MacSweeney}, and the addition of matter
to the preceding paragraph's scheme has been extensively studied \cite{ABFO, CMCAnderson}.
\section{Geometry of the configuration spaces}
The following configuration space structure issues play an important underlying role in papers I and II.
\subsection{Euclidean relational particle mechanics configuration spaces and reduction}
For Euclidean relational particle mechanics, such a study was carried out in \cite{GGM} in a `rigged' fashion (auxiliary variables
included), and in \cite{LB, GGM, 06I} in a reduced fashion (auxiliary variables eliminated).
\cite{RelatedWork} exemplifies related work.
In \cite{06II} I noted that this elimination has a simpler nature in 2-d than in 3-d, and used this to
pass to completely relational variables in \cite{TriCl, FORD} for both Euclidean and similarity relational
particle mechanics.
{\it Relative space} $\mbox{\sffamily R}(\mN, \textrm{d})$ is the quotient space $\mbox{\sffamily Q}(\mN,\textrm{d})/\mbox{Tr}(\textrm{d})$.
This is flat space, so the kinetic term is just $\mbox{\sffamily T}^{}(\dot{\underline{R}}_i) =
\sum_i\mu_i\dot{R}_i^2/2$.
At this level one has Jacobi coordinates and constraints as in Sec 2.1.
I use mass-weighted Jacobi variables as the most succinct `original variables of the problem'.
{\it Relational space} is ${\cal R}(\mN, \textrm{d}) = \mbox{\sffamily Q}(\mN,\textrm{d})/\mbox{Eucl}(\textrm{d})$, which is what one has
on eliminating the angular momentum constraint.
These coincide in 1-d.
In 2-d one can do elimination explicitly \cite{TriCl, FORD}, obtaining e.g. in the 3-particle case the
$\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}(\iota_i, \dot{\iota}_i, \dot{\Phi})$ corresponding to the line element
\begin{equation}
\textrm{d} s^2_{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}} = \textrm{d}{\iota}_1^2 + \textrm{d}{\iota}_2^2 +
\frac{\iota_1^2\iota_2^2\textrm{d} {\Phi}^2}{\iota_1^2 + \iota_2^2}
\mbox{ } .
\end{equation}
[To obtain a configuration space kinetic term $\mbox{\sffamily T}$ from the line element of a metric
$\textrm{d} s^2 = M_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}\textrm{d} Q^{\mbox{\sffamily{\scriptsize A}}}\textrm{d} Q^{\mbox{\sffamily{\scriptsize B}}}$, use (\ref{Te}).]
This elimination is done by: Step 1) passing to mass-weighted Jacobi coordinates.
Step 2) eliminating the rotational auxiliary from the Lagrangian form of the constraint (\ref{ZAM}) and
casting the subsequent expression in mass-weighted Jacobi bipolar coordinates, which I denote by
($\iota_1$, $\theta_1$, $\iota_2$, $\theta_2$) [fig 1iv); these are still with respect to fixed axes].
Step 3) passing to fully Euclideanly-relational coordinates [fig 1v)] ($\{\iota_1, \iota_2,
\Phi\}$), an additional `purely absolute' angle dropping out of the working \cite{06I, TriCl}.
In 3-d, the situation is considerably harder.
\subsection{Similarity relational particle mechanics configuration spaces and reduction }
For similarity relational particle mechanics, such a study was carried out in \cite{FORD} in a reduced fashion (both by reduction and
in an already-reduced form based on \cite{Kendall}).
{\it Preshape space} is $\mbox{\sffamily P}(\mN, \textrm{d}) = \mbox{\sffamily Q}(\mN,\textrm{d})/\mbox{Tr}(\textrm{d}) \times \mbox{Dil}$ and {\it Shape
space} is $\mbox{\sffamily S}(\mN, \textrm{d}) = \mbox{\sffamily Q}(\mN,\textrm{d})/\mbox{Sim}(\textrm{d})$.
Furthermore, $\mbox{\sffamily P}(\mN, \textrm{d}) {=} \mathbb{S}^{\sn\mbox{\scriptsize d} - 1}$ and $\mbox{\sffamily S}(\mN, 1)
{=} \mathbb{S}^{\sn - 1}$ where in both cases `=' here means equal both
topologically and metrically, with the standard (hyper)spherical metric.
Thus one has $\mbox{\sffamily T}^{\sN\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}}({\cal R}_{\bar{p}}, \dot{\cal R}_{\bar{p}})$ or
$\mbox{\sffamily T}^{\sN\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}}(\Theta_{\bar{p}}, \dot{\Theta}_{\bar{p}})$ corresponding to the line element
\begin{equation}
\textrm{d} s^2_{\sN\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}} = \frac{
\left\{ 1 + \sum_{\bar{p} = 1}^{\sn - 1}{\cal R}_{\bar{p}}^2 \right\}
\sum_{\bar{q} = 1}^{\sn - 1}\textrm{d}{{\cal R}}_{\bar{q}}^2 -
\left\{ \sum_{\bar{p} = 1}^{\sn - 1}{\cal R}_{\bar{p}}\textrm{d}{{\cal R}}_{\bar{p}}
\right\}^2 }
{ \{1 + \sum_{\bar{p} = 1}^{\sn - 1}{\cal R}_{\bar{p}}^2\}^2 }
= \sum_{\bar{r} = 1}^{\sn - 1}
\prod_{\widehat{p} = 1}^{\bar{r} - 1}\mbox{sin}^2\Theta_{\widehat{p} }\textrm{d}{\Theta}_{\bar{r}}^2 \mbox{ }
\end{equation}
in terms of simple ratio coordinates (\ref{SRC}) and (ultra)spherical coordinates (\ref{Ultra}),
respectively, where $\prod_{i = 1}^{0}$ terms are defined to be 1.
It is obtained from the similarity relational particle mechanics action in Jacobi coordinates by: Step 1) passing to mass-weighted
coordinates.
Step 2) eliminate dilational auxiliaries from the Lagrangian form of the constraint (\ref{ZDM}) and
re-express in mass-weighted Jacobi bipolar coordinates.
Step 3) pass to fully relational simple ratio variables
\begin{equation}
{\cal R}_{\bar{p}} \equiv \iota_{\bar{p}}/\iota_{\sn - 1}
\label{SRC}
\end{equation}
(in the present context these are, geometrically, Beltrami coordinates); these are related to
ultraspherical coordinates by
\begin{equation}
\Theta_{\bar{p}} = \mbox{arctan}
\left(
{ \sqrt{\sum_{j = 1}^{\bar{p}}{\cal R}_{j}^2} }/{ {\cal R}_{\bar{p} + 1} }
\right)
\label{Ultra}
\end{equation}
for ${\cal R}_{\sn - 1} \equiv \iota_{\sn - 1}/\iota_{\sn - 1} = 1$.
The coordinate ranges for these are $\Theta_{\bar{p}} \in (0, \pi)$ for $\bar{p} < \sn - 1$ and $\Theta_{\sn - 1} \in [0, 2\pi)$.
E.g. for 4-stop metroland $\Theta_1$ is the azimuthal angle $\Theta$ and $\Theta_2$ is the polar angle $\Phi$.
Then one has $\mbox{\sffamily T}^{4\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}}({\cal R}_i, \dot{\cal R}_i)$ or
$\mbox{\sffamily T}^{4\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}}(\Theta, \dot{\Theta}, \dot{\Phi})$ corresponding to the line element
\begin{equation}
\textrm{d} s^2_{4\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}} =
\frac{\{1 + {\cal R}_2^2\}\textrm{d}{{\cal R}}_1^2 + \{1 + {\cal R}_1^2\}\textrm{d}{{\cal R}}_2^2 -
2{\cal R}_1{\cal R}_2\textrm{d}{{\cal R}}_1\textrm{d}{{\cal R}}_2}{\{1 + {\cal R}_1^2 + {\cal R}_2^2\}^2} =
\textrm{d}{\Theta}^2 + \mbox{sin}^2\Theta\textrm{d}{\Phi}^2
\end{equation}
for `simple ratio' coordinates (${\cal R}_1$, ${\cal R}_2$) [4-stop metroland subcase of (\ref{SRC})]
and spherical coordinates ($\Theta, \Phi$) [ = ($\Theta_1, \Theta_2$), given by the 4-stop metroland
subcase of (\ref{Ultra})].
While, for N particles in 2-d $\mbox{\sffamily S}(\mN, 2) {=} \mathbb{CP}^{\sn - 1}$ both topologically and metrically,
with the standard Fubini--Study metric corresponding to the line element
\begin{equation}
\textrm{d} s^2_{\sN\s-\mbox{\scriptsize a}\s-\sg\so\sn} =
\frac{ \{1 + \sum_{\bar{p}}|{\cal Z}_{\bar{p}}|^2\}
\sum_{\bar{q}}|\textrm{d}{\cal Z}_{\bar{q}}|^2 -
|\sum_{\bar{p}}\overline{{\cal Z}}_{\bar{p}} \textrm{d}{\cal Z}_{\bar{p}}|^2 }
{ \{1 + \sum_{\bar{r}}|{\cal Z}_{\bar{r}}|^2\}^2 } \mbox{ } .
\label{10}
\end{equation}
for ${\cal Z}_{\bar{r}}$ inhomogeneous coordinates on $\mathbb{CP}^{\sn - 1}$, from which the kinetic term
$\mbox{\sffamily T}^{\sN\s-\mbox{\scriptsize a}\s-\sg\so\sn} ({\cal Z}_{\bar{p}}, \dot{\cal Z}_{\bar{p}})$ is constructed.
This is obtained from the similarity relational particle mechanics action in Jacobi coordinates by Step 1) passing to mass-weighted
coordinates.
Step 2) eliminate both a rotational auxiliary and a dilational auxiliary from the
Lagrangian forms of (\ref{ZAM}, \ref{ZDM}).
Step 3) write this action in terms of ratio variables.
Furthermore, one can use the polar form ${\cal Z}_{\bar{p}} = {\cal R}_{\bar{p}}\mbox{exp}(i\Theta_{\bar{p}})$;
indexing moduli (real ratio coordinates) as ${\cal R}_{\bar{p}}$
and arguments (relative angle coordinates) as ${\Theta}_{\widetilde{p}}$
(for both $\bar{p}$ and $\widetilde{p}$ running from 1 to $\mN - 2$).
Then the configuration space metric can be written in two blocks (${\cal M}_{\bar{p}\tilde{q}} = 0$):
\begin{equation}
{\cal M}_{\bar{p}\bar{q}} = \{1 + ||{\cal R}||^2\}^{-1}\delta_{\bar{p}\bar{q}} -
\{1 + ||{\cal R}||^2\}^{-2}{\cal R}_{\bar{p}}{\cal R}_{\bar{q}}
\mbox{ } , \mbox{ }
{\cal M}_{\tilde{p}\tilde{q}} = \{\{1 + ||{\cal R}||^2\}^{-1}\delta_{\tilde{p}\tilde{q}} -
\{1 + ||{\cal R}||^2\}^{-2}{\cal R}_{\tilde{p}}{\cal R}_{\tilde{q}}\} {\cal R}_{\tilde{p}}{\cal R}_{\tilde{q}}
\mbox{ } \mbox{ (no sum) ,}
\end{equation}
where, for a given $p$, ${\cal R}_{\widetilde{p}}$ is the ${\cal R}_{\bar{p}}$ that forms a complex coordinate
pair with ${\Theta}_{\widetilde{p}}$.
As a particular example, $\mbox{\sffamily S}$(3, 2) = $\mathbb{CP}^1 = \mathbb{S}^2$, making this triangleland case
particularly amenable to study due to the availability of both projective and spherical techniques
(while, as we shall see, this example meets many of the nontrivialities required by Problem of Time
strategies); this paper principally considers this case.
In this case the kinetic term collapses to $\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}({\cal Z}, \dot{\cal Z})$,
$\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}_{\mathbb{CP}^1}({\cal R}, \dot{\cal R}, \dot{\Phi})$ or
$\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}_{(\mathbb{S}^2, 1/2)}(\Theta, \dot{\Theta}, \dot{\Phi})$ as constructed from
the line element
\begin{equation}
\textrm{d} s_{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}^2 = {|\textrm{d}{\cal Z}|^2}/{\{1 + |{\cal Z}|^2\}^2} =
\{\textrm{d}{\cal R}^2 + {\cal R}^2\textrm{d}{\Phi^2}\}/{\{1 + {\cal R}^2\}^2} =
\{\textrm{d}{\Theta}^2 + \mbox{sin}^2\Theta \textrm{d}{\Phi}^2\}/4 \mbox{ } ,
\label{T32}
\end{equation}
where ${\cal R}$ is the simple ratio variable ${\cal R} \equiv {\iota_1}/{\iota_2}$.
This is physically the square root of the ratio of partial moments of inertia, $\sqrt{\mbox{I}_1/\mbox{I}_2}$,
and mathematically a choice of inhomogeneous coordinate's modulus on $\mathbb{CP}^1$.
This is related to the azimuthal angle $\Theta$ on $\mathbb{S}^2 = \mathbb{CP}^1$ by
\begin{equation}
\Theta = 2\mbox{arctan}{\cal R} \mbox{ } \Leftrightarrow \mbox{ } {\cal R} = \mbox{tan$\frac{\Theta}{2}$}
\end{equation}
for $\Theta$ the spherical azimuthal angle (see also Fig 2).
[Thus ${\cal R}$ is also geometrically interpretable as a radial stereographic coordinate on
$\mathbb{S}^2$].
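Explicitly, this substitution is what converts between the middle and final forms of (\ref{T32}): with ${\cal R} = \mbox{tan}\frac{\Theta}{2}$ one has $1 + {\cal R}^2 = \mbox{sec}^2\frac{\Theta}{2}$ and $\textrm{d}{\cal R} = \mbox{sec}^2\frac{\Theta}{2}\,\textrm{d}\Theta/2$, so
\begin{equation}
\frac{\textrm{d}{\cal R}^2 + {\cal R}^2\textrm{d}{\Phi}^2}{\{1 + {\cal R}^2\}^2} =
\frac{\textrm{d}{\Theta}^2}{4} + \mbox{sin}^2\mbox{$\frac{\Theta}{2}$}\mbox{cos}^2\mbox{$\frac{\Theta}{2}$}\,\textrm{d}{\Phi}^2 =
\{\textrm{d}{\Theta}^2 + \mbox{sin}^2\Theta\,\textrm{d}{\Phi}^2\}/4 \mbox{ } .
\end{equation}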
$\Phi$ is the relative angle between the two Jacobi coordinate vectors.
Useful relations between the two representations include that ${\cal R} = 1$ is the equator,
${\cal R}$ small (compared to 1) corresponds to near the North Pole ($\Theta$ a small angle)
and ${\cal R}$ large corresponds to near the South Pole (the supplement angle
$\Xi = \pi - \Theta$ is small).
Also, one can use the barred banal conformal representation $\mbox{\sffamily T} \longrightarrow \overline{\mbox{\sffamily T}} = 4\mbox{\sffamily T}$,
$\mbox{\sffamily U} + \mbox{\sffamily E} \longrightarrow \overline{\mbox{\sffamily U}} + \overline{\mbox{\sffamily E}} = \{\mbox{\sffamily U} + \mbox{\sffamily E}\}/4$, whereupon this sphere of radius 1/2 becomes the unit sphere,
and I write $\overline{\mbox{\sffamily T}}_{\mathbb{S}^2}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}$.
\mbox{ }
\noindent {\footnotesize[{\bf Figure 2}: Interrelation between ${\cal R}$, ${\cal U}$, $\Theta$ and
$\Xi$ coordinates.
N is the North Pole, S is the South Pole, O is the centre.
Then point P in the spherical representation corresponds to point $P^{\prime}$ in the stereographic
tangent plane at N with radial coordinate ${\cal R}$, and to point $P^{\prime\prime}$ in the
stereographic tangent plane at S with radial coordinate ${\cal U}$.]}
\mbox{ }
For use in paper II, this line element in terms of $\Phi$ and $\mbox{I}_i$, $i = 1$ or $2$ is (no sum)
\begin{equation}
\textrm{d} s^{2} = {\textrm{d}\mbox{I}_i^2}/{4\mbox{I}_i\{\mbox{I} - \mbox{I}_i\}} + {\{\mbox{I} - \mbox{I}_i\}\mbox{I}_i}\textrm{d}\Phi^2/\mbox{I}^2 \mbox{ } .
\end{equation}
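This follows from (\ref{T32}) upon using $\mbox{I}_1 = \mbox{I}\,\mbox{sin}^2\frac{\Theta}{2}$ and $\mbox{I}_2 = \mbox{I}\,\mbox{cos}^2\frac{\Theta}{2}$ [which are implied by ${\cal R} = \sqrt{\mbox{I}_1/\mbox{I}_2} = \mbox{tan}\frac{\Theta}{2}$ and $\mbox{I}_1 + \mbox{I}_2 = \mbox{I}$], with $\mbox{I}$ held fixed: e.g. for $i = 1$,
\begin{equation}
\textrm{d}\mbox{I}_1 = \frac{\mbox{I}}{2}\mbox{sin}\Theta\,\textrm{d}\Theta \mbox{ } \mbox{ and } \mbox{ }
\mbox{sin}^2\Theta = 4\mbox{I}_1\{\mbox{I} - \mbox{I}_1\}/\mbox{I}^2 \mbox{ } ,
\end{equation}
whereupon $\{\textrm{d}{\Theta}^2 + \mbox{sin}^2\Theta\,\textrm{d}{\Phi}^2\}/4$ takes the displayed form.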
One can also use ${\mbox{\sffamily T}}_{\mbox{\scriptsize f}\sll\mbox{\scriptsize a}\mbox{\scriptsize t}} \equiv
\widetilde{\mbox{\sffamily T}}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}({\cal R}, \dot{\cal R}, \dot{\Phi})$ constructed from the line
element
\begin{equation}
\textrm{d} s^2_{\mbox{\scriptsize f}\sll\mbox{\scriptsize a}\mbox{\scriptsize t}} = \textrm{d}{\cal R}^2 + {\cal R}^2\textrm{d}{\Phi}^2
\label{flatbrod}
\end{equation}
by performing a banal conformal transformation with conformal factor $\Omega^2 = \{1 + {\cal R}^2\}^2$.
This is geometrically trivial, while the other above forms are both geometrically natural and
mechanically natural (equivalent to $\mbox{\sffamily E}$ appearing as an eigenvalue free of weight function);
using $\{1 + {\cal R}^2\}$ alone would be the conformally-natural choice.
In 3-d, the situation is, again, harder \cite{Kendall}, though at least similarity relational particle mechanics is free of
collisions that are {\sl maximal} (i.e. between all the particles at once).
\subsection{Euclidean relational particle mechanics in scale--shape variables}
Finally, these `shapes' are also relevant within the corresponding Euclidean relational particle
mechanics, as these admit conceptually interesting formulations in terms of scale--shape split variables.
For scaled N-stop metroland, the configuration space is the generalized cone $C(\mathbb{S}^{\sn - 1})$
while for N-a-gonland it is $C(\mathbb{CP}^{\sn - 1})$, scaled triangleland also being
$C(\mathbb{S}^2)$.
The special cone-point or `apex' 0 here corresponds physically to the maximal collision.
Now use the new coordinate
\begin{equation}
\mbox{I} = \iota_1^2 + \iota_2^2 \mbox{ } , \mbox{ }
\end{equation}
which is physically the moment of inertia and mathematically a radius,
and the same ${\cal R}$ coordinate as in Sec 3.2.
This coordinate transformation then inverts to
\begin{equation}
\iota_1 = \sqrt{{\mbox{I}}/\{1 + {\cal R}^2\}}{\cal R} \mbox{ } , \mbox{ }
\iota_2 = \sqrt{{\mbox{I}}/\{1 + {\cal R}^2\}} \mbox{ } .
\end{equation}
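As a quick check, these indeed satisfy $\iota_1^2 + \iota_2^2 = \mbox{I}$ and $\iota_1/\iota_2 = {\cal R}$; a short computation also gives
\begin{equation}
\textrm{d}{\iota}_1^2 + \textrm{d}{\iota}_2^2 = \frac{\textrm{d}{\mbox{I}}^2}{4\mbox{I}} + \frac{\mbox{I}\,\textrm{d}{\cal R}^2}{\{1 + {\cal R}^2\}^2} \mbox{ } ,
\end{equation}
which is the route to the line element below.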
Also, ${\cal R}$ can be supplanted by $\Theta = 2\mbox{arctan}{\cal R}$.
The kinetic term is $\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}(\mbox{I}, {\cal Z}, \dot{\mbox{I}}, \dot{\cal Z})$,
$\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}(\mbox{I}, {\cal R}, \dot{\mbox{I}}, \dot{\cal R}, \dot{\Phi})$ or
$\mbox{\sffamily T}^{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}(\mbox{I}, \Theta, \dot{\mbox{I}}, \dot{\Theta}, \dot{\Phi})$ as constructed from the line
element
\begin{equation}
\textrm{d} s^2_{\mbox{\scriptsize t}\mbox{\scriptsize r}\mbox{\scriptsize i}\mbox{\scriptsize a}\sn\sg\sll\mbox{\scriptsize e}}
= \frac{1}{4\mbox{I}}
\left\{
\textrm{d}{\mbox{I}}^2 + 4\mbox{I}^2\frac{|\textrm{d}{\cal Z}|^2}{\{1 + |{\cal Z}|^2\}^2}
\right\}
= \frac{1}{4\mbox{I}}
\left\{
\textrm{d}{\mbox{I}}^2 + 4\mbox{I}^2\frac{\textrm{d}{\cal R}^2 + {\cal R}^2\textrm{d}{\Phi}^2}{\{1 + {\cal R}^2\}^2}
\right\}
= \frac{1}{4\mbox{I}}
\left\{
\textrm{d}{\mbox{I}}^2 + \mbox{I}^2\{\textrm{d}{\Theta^2} + \mbox{sin}^2\Theta\textrm{d}{\Phi^2}\}
\right\} \mbox{ } .
\end{equation}
Moreover, one can use instead the banal-conformally related kinetic term
$\check{\mbox{\sffamily T}}_{\mbox{\scriptsize F}\sll\mbox{\scriptsize a}\mbox{\scriptsize t}}(\mbox{I}, \Theta, \dot{\mbox{I}}, \dot{\Theta}, \dot{\Phi})$ constructed from
\begin{equation}
\textrm{d} s^2_{\mbox{\scriptsize F}\sll\mbox{\scriptsize a}\mbox{\scriptsize t}} =
\textrm{d}{\mbox{I}}^2 + \mbox{I}^2\{ \textrm{d}{\Theta}^2 + \mbox{sin}^2\Theta\textrm{d}{\Phi}^2 \}
\label{fltrp} \mbox{ }
\end{equation}
(the corresponding conformal factor being $4\mbox{I}$, so that also
\begin{equation}
\check{\mbox{\sffamily U}} + \check{\mbox{\sffamily E}} = \{\mbox{\sffamily U} + \mbox{\sffamily E}\}/4\mbox{I} \mbox{ } ,
\end{equation}
away from $\mbox{I} = 0$, at which this conformal transformation is invalid).
The form of (\ref{fltrp}) clearly makes this a flat (i.e. geometrically trivial) representation in
spherical polar coordinates, with $\mbox{I}$ as the radius.
While, the other forms above are mechanically natural.\footnote{\cite{Cones}
connects this section's workings with results from the celestial mechanics and molecular physics
literatures.}
\subsection{Geometrodynamical configuration spaces}
While, GR is a {\it geometrodynamics} \cite{Battelle, DeWitt} on the quotient configuration space
Superspace($\Sigma$, 3) $\equiv$
\noindent Riem($\Sigma$, 3)/Diff($\Sigma$, 3) \cite{Battelle, DeWitt},
which is studied topologically and geometrically in e.g. \cite{Battelle, DeWitt, Superspace}.
Superspace is an infinite-dimensional complicatedly stratified manifold; explicit reduction is not in
general possible here.
\subsection{Conformogeometrodynamical configuration spaces}
One can obtain relational theories or formulations using G = Diff($\Sigma$, 3) $\times$ Conf($\Sigma$, 3)
that reproduces the maximal condition (however this formulation freezes unless one alters one's theory
from GR \cite{ABFO}) and using G = Diff($\Sigma$, 3) $\times$ VPConf($\Sigma$, 3) (the volume-preserving
conformal transformations) that does reproduce GR in the York formulation from an action principle
\cite{ABFKO}.
Here the associated configuration spaces are conformal superspace CS($\Sigma$, 3) $\equiv$ Riem($\Sigma$, 3)
/Diff($\Sigma$, 3) $\times$ Conf($\Sigma$, 3), which is studied geometrically in \cite{CS, FM96}, and
\{CS + V\}($\Sigma$, 3) = Riem($\Sigma$, 3)/Diff($\Sigma$, 3) $\times$ VPConf($\Sigma$, 3), which
previously featured in e.g. \cite{York72}, but has not to my knowledge been studied from a geometrical
perspective.
[The CRiem($\Sigma$, 3) $\equiv$ Riem($\Sigma$, 3)/Conf($\Sigma$, 3) analogue of
preshape space has been studied geometrically in e.g. \cite{DeWitt, FM96}].
Parallels between this and Euclidean relational particle mechanics in shape-scale split variables
include homogeneous degrees of freedom playing a similar role to scale, York time \cite{YorkTime}
having an `Euler time' analogue \cite{06II}, similarity relational particle mechanics
corresponding to maximal slicing in having this time frozen, and in the analogy between
\{CS + V\}($\Sigma$, 3) and the Euclidean relational particle mechanics configuration spaces' cone
structure.
In the configuration space of GR, the analogous `special point' at zero scale is the Big Bang.
\section{Similarity relational particle mechanics at the classical level}
\subsection{1 and 2-d cases}
In App A, I outline how the general curved-space mechanics unfolds.
In the case of scalefree N-stop metroland, the Jacobi action is
\begin{equation}
\mbox{\sffamily S}_{}^{\sN\s-\mbox{\scriptsize s}\mbox{\scriptsize t}\so\mbox{\scriptsize p}}(\Theta_{\bar{p}}, \dot{\Theta}_{\bar{p}}) =
2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}_{\mathbb{S}^{\mbox{\tiny n} - 1}}\{\mbox{\sffamily E} - \mbox{\sffamily V}\}} \mbox{ } .
\end{equation}
The Euler--Lagrange equations following from this are given in \cite{FORD} in Beltrami coordinates (the notation there uses
$s_{\bar{p}}$ in place of the present paper's ${\cal R}_{\bar{p}}$); since I use, rather,
(ultra)spherical coordinates in the present paper, I provide the Euler--Lagrange equations afresh in these:
\begin{equation}
\left\{\prod_{\bar{p} = 1}^{\bar{q} - 1} \mbox{sin}^2\Theta_{\bar{p}} \Theta_{\bar{q}}^*\right\}^* -
\left\{
\sum_{\bar{r} = \bar{q} + 1}^{\sn - 1}
\prod_{\bar{p} = 1, \bar{p} \neq \bar{q}}^{\bar{r} - 1}\mbox{sin}^2\Theta_{\bar{p}}
\right\}
\mbox{sin}\Theta_{\bar{q}}\mbox{cos}\Theta_{\bar{q}}\Theta_{\bar{r}}^{*2} = - \frac{\partial\mbox{\sffamily V}}{\partial\Theta_{\bar{q}}}
\mbox{ } .
\end{equation}
There is also a first energy integral,
\begin{equation}
{\sum_{\bar{r} = 1}^{\sn - 1}
\prod_{\bar{p} = 1}^{\bar{r} - 1}\mbox{sin}^2\Theta_{\bar{p}}{\Theta}_{\bar{r}}^{*2}}/{2}
+ \mbox{\sffamily V}(\Theta_{\bar{r}}) = \mbox{\sffamily E} \mbox{ } .
\end{equation}
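For instance, for 4-stop metroland ($\sn = 3$, so that $\Theta_1 = \Theta$ and $\Theta_2 = \Phi$), these reduce to
\begin{equation}
\{\Theta^{*}\}^* - \mbox{sin}\Theta\mbox{cos}\Theta\,\Phi^{*2} = - \frac{\partial\mbox{\sffamily V}}{\partial\Theta}
\mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\{\mbox{sin}^2\Theta\,\Phi^{*}\}^* = - \frac{\partial\mbox{\sffamily V}}{\partial\Phi}
\mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\{\Theta^{*2} + \mbox{sin}^2\Theta\,\Phi^{*2}\}/2 + \mbox{\sffamily V} = \mbox{\sffamily E} \mbox{ } ,
\end{equation}
which is the same form as arises for triangleland below (there in the barred banal representation).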
For scalefree N-a-gonland, the Jacobi action
\begin{equation}
\mbox{\sffamily S} = 2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}_{\mbox{\scriptsize F}\mbox{\scriptsize S}}\{\mbox{\sffamily E} - \mbox{\sffamily V}\}}
\end{equation}
gives the Euler--Lagrange equations as presented implicitly in \cite{FORD}.
In the exceptional triangleland case, there are spherical and flat (conformal-to-stereographic) presentations,
\noindent
\begin{equation}
\mbox{\sffamily S}_{} = 2\int\textrm{d}\lambda\sqrt{\overline{\mbox{\sffamily T}}_{\mathbb{S}^2}\{\overline{\mbox{\sffamily E}} - \overline{\mbox{\sffamily V}}\}}
= 2\int\textrm{d}\lambda\sqrt{\widetilde{\mbox{\sffamily T}}_{\mbox{\scriptsize f}\sll\mbox{\scriptsize a}\mbox{\scriptsize t}}\{\widetilde{\mbox{\sffamily E}} - \widetilde{\mbox{\sffamily V}}\}}
\mbox{ } .
\end{equation}
Then, in the spherical presentation, the Euler--Lagrange equations simplify to
\begin{equation}
\Theta^{\overline{**}} - \mbox{sin}\Theta\mbox{cos}\Theta \Phi^{\overline{*}2} =
- \frac{\partial\overline{\mbox{\sffamily V}}}{\partial\Theta} \mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\{\mbox{sin}^2\Theta \Phi^{\overline{*}}\}^{\overline{*}} = - \frac{\partial\overline{\mbox{\sffamily V}}}{\partial\Phi}
\label{Su}
\end{equation}
with an accompanying energy integral
\begin{equation}
{\Theta^{\overline{*}2}}/{2} + {\mbox{sin}^2\Theta \Phi^{\overline{*}2}}/{2} + \overline{\mbox{\sffamily V}} =
\overline{\mbox{\sffamily E}}
\mbox{ } .
\label{EN_FULL}
\end{equation}
[Above,
\begin{equation}
\overline{*} \equiv \sqrt{\{\overline{\mbox{\sffamily E}} + \overline{\mbox{\sffamily U}}\}/{\overline{\mbox{\sffamily T}}}}\mbox{ }\dot{} =
\sqrt{\{\mbox{\sffamily E} + \mbox{\sffamily U}\}/{\mbox{\sffamily T}}}/4\mbox{ }\dot{} = */4 \mbox{ } .]
\label{overlinestardef}
\end{equation}
If $\overline{\mbox{\sffamily V}}$ is independent of $\Phi$, then (\ref{Su}ii) becomes another first integral:
\begin{equation}
\mbox{sin}^2\Theta \Phi^{\overline{*}} = {\cal J} \mbox{ } , \mbox{ constant }.
\label{CF}
\end{equation}
This is a relative angular momentum quantity; see App C for various interpretations of it.
Then (\ref{EN_FULL}) becomes
\begin{equation}
{\Theta^{\overline{*}2}}/{2} + {{\cal J}^2}/{2\mbox{sin}^2\Theta} +
\overline{\mbox{\sffamily V}}(\Theta) = \overline{\mbox{\sffamily E}}
\mbox{ } .
\label{MPW}
\end{equation}
Or, in terms of the ${\cal R}$ coordinate, the important results (\ref{CF}) and (\ref{MPW}) take the form
\begin{equation}
{\cal R}^2\Phi^{\widetilde{*}} = {\cal J} \mbox{ } , \mbox{ }
\frac{1}{2}
\left\{
\left\{
\frac{\textrm{d} {\cal R}}{\textrm{d} \widetilde{t}}
\right\}^2
+ \frac{{\cal J}^2}{{\cal R}^2}
\right\} = \widetilde{\mbox{\sffamily E}} + \widetilde{\mbox{\sffamily U}}
\mbox{ } ,
\end{equation}
where $\widetilde{*} \equiv \sqrt{\{\widetilde{\mbox{\sffamily E}} + \widetilde{\mbox{\sffamily U}}\}/\widetilde{\mbox{\sffamily T}}}\dot{\mbox{ }}$.
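Note that the same ${\cal J}$ features in both representations: the Jacobi action is banal-invariant, so the momentum conjugate to $\Phi$ is representation-independent,
\begin{equation}
p_{\Phi} = \mbox{sin}^2\Theta\,\Phi^{\overline{*}} = {\cal R}^2\Phi^{\widetilde{*}} = {\cal J} \mbox{ } .
\end{equation}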
These relations are illuminating through their direct parallel with the usual flat planar presentation
of the ordinary mechanics of a test particle moving in a central potential:
\begin{equation}
\mbox{\Huge(}
\stackrel{\mbox{ratio of square roots of the two subsystems'}}
{\mbox{Jacobi partial moments of inertia }}
\mbox{\Huge)} = {\cal R} \mbox{ } \longleftrightarrow \mbox{ } r =
\mbox{ (radial coordinate of test particle) } ,
\end{equation}
\begin{equation}
\mbox{ (relative angle between Jacobi coordinates) } = \Phi \mbox{ } \longleftrightarrow \mbox{ }
\theta = \mbox{ (polar coordinate of test particle) } ,
\end{equation}
\begin{equation}
\left(
\stackrel{\mbox{relative angular momentum}}{\mbox{of the two subsystems}}
\right) = {\cal J} \mbox{ } \longleftrightarrow \mbox{ }
\mbox{L} \mbox{ } ( \mbox{ } = \mbox{L}_z \mbox{ } ) \mbox{ } =
\left(
\stackrel{\mbox{angular momentum component}}{\mbox{perpendicular to the plane}}
\right)
\mbox{ } ,
\end{equation}
\begin{equation}
1 \mbox{ } \longleftrightarrow \mbox{ } m = \mbox{ (test particle mass) }
\mbox{ } .
\end{equation}
This analogy is furthermore a pointer to parallel the well-known $u = 1/r$ substitution by the
${\cal U} = 1/{\cal R}$ one, which turns out to be exceedingly useful in my study below.
[In the spherical presentation the counterpart of this {\it inversion map}
${\cal R} \longrightarrow {\cal U} = 1/{\cal R}$ is the {\it supplementary map}
$\Theta \longrightarrow \pi - \Theta =$ the supplementary angle, $\Xi$.
In $\iota_i$ or $I_i$ variables, it takes the form of interchanging the 1-indices and the 2-indices.
I term the underlying operation that takes these forms in these presentations as the {\it duality map}.]
Next, $\widetilde{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf} \equiv \widetilde{\mbox{\sffamily V}} + {{\cal J}^2}/{{\cal R}^2}
- \widetilde{\mbox{\sffamily E}}$ is the potential quantity that is significant for motion in time, and combining
(\ref{MPW}) and (\ref{CF}), $\widetilde{\mbox{\sffamily U}}_{\so\mbox{\scriptsize r}\mbox{\scriptsize b}} \equiv - {\cal R}^4\widetilde{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf}$
is the potential quantity that is significant as regards the shapes of the classical orbits.
Translating into the spherical language, the corresponding quantities are $\overline{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf} =
\overline{\mbox{\sffamily V}} + {{\cal J}^2}/{\mbox{sin}^2\Theta} - \overline{\mbox{\sffamily E}}$ and
$\overline{\mbox{\sffamily U}}_{\so\mbox{\scriptsize r}\mbox{\scriptsize b}} = - \mbox{sin}^4{\Theta}\,\overline{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf}$.
Finally, combining (\ref{MPW}) and (\ref{CF}) or their ${\cal R}$-analogues gives as quadratures for
the shapes of the orbits
\begin{equation}
\Phi - \Phi_0 = \int
{{\cal J}\textrm{d}\Theta}/
{\mbox{sin}\Theta\sqrt{2\{\overline{\mbox{\sffamily E}} - \overline{\mbox{\sffamily V}}(\Theta)\}\mbox{sin}^2\Theta - {\cal J}^2}}
= \int {{\cal J}\textrm{d}{\cal R}}/{{\cal R}\sqrt{2\{\widetilde{\mbox{\sffamily E}} +
\widetilde{\mbox{\sffamily U}}\}{\cal R}^2 - {\cal J}^2}}
\mbox{ } .
\label{quad}
\end{equation}
While, (\ref{MPW}) and its ${\cal R}$ analogue give as quadratures for the time-traversals
\begin{equation}
\overline{t} - \overline{t}_0 = \int {\mbox{sin}\Theta \textrm{d}\Theta}/{\sqrt{2\{\overline{\mbox{\sffamily E}} +
\overline{\mbox{\sffamily U}}\}\mbox{sin}^2\Theta - {\cal J}^2}} \mbox{ } , \mbox{ } \mbox{ or } \mbox{ }
\widetilde{t} - \widetilde{t}_0 = \int {{\cal R}\textrm{d}{\cal R}}/{\sqrt{2\{\widetilde{\mbox{\sffamily E}} +
\widetilde{\mbox{\sffamily U}}\}{\cal R}^2 - {\cal J}^2}} \mbox{ } .
\end{equation}
\subsection{A class of separable potentials for scalefree triangleland}
As $\theta$-independent (central) potentials considerably simplify classical and quantum mechanics,
by the above analogy it is clear that $\Phi$-independent (relative angle independent) potentials
will also considerably simplify classical and quantum scalefree triangleland.
In particular, these simplifications include separability.
Also, in each case, there is a {\sl conserved quantity}: angular momentum $\mbox{L}$ in the case of ordinary mechanics with
central potentials, and {\sl relative} angular momentum ${\cal J}$ of the two constituent subsystems in the present
context (see App C for details of the interpretation of ${\cal J}$).
A general class of separable potentials consists of linear combinations of
$$
\mbox{\sffamily V}_{(\alpha, \beta)}
\propto \left\{\frac{\iota_1}{\sqrt{\mbox{I}}}\right\}^{\alpha - \beta}
\left\{\frac{\iota_2}{\sqrt{\mbox{I}}}\right\}^{\beta}
\propto \left\{\frac{\cal R}{\sqrt{ 1 + {\cal R}^2}}\right\}^{\alpha - \beta}
\left\{\frac{1}{\sqrt{ 1 + {\cal R}^2}}\right\}^{\beta}
\propto \mbox{sin}^{\alpha - \beta}\mbox{$\frac{\Theta}{2}$}\mbox{cos}^{\beta}\mbox{$\frac{\Theta}{2}$}
$$
so that
\begin{equation}
\widetilde{\mbox{\sffamily V}}_{(\alpha, \beta)}
\propto {{\cal R}^{\alpha - \beta}}/{\{1 + {\cal R}^2\}^{\frac{\alpha}{2} + 2}}
\mbox{ } \mbox{ and } \mbox{ }
\overline{\mbox{\sffamily V}}_{(\alpha, \beta)}
\propto \mbox{sin}^{\alpha - \beta}\mbox{$\frac{\Theta}{2}$}\mbox{cos}^{\beta}\mbox{$\frac{\Theta}{2}$}
\mbox{ } .
\end{equation}
The most physically relevant subcase therein consists of the power-law-mimicking potentials $\beta = 0$
(remember that $\mbox{I}$ turns out to be constant in similarity relational particle mechanics after
variation is done).
These correspond to potential contributions solely between particles 2, 3.
The case $\beta = \alpha$ corresponds to potential contributions solely between particle 1 and the centre of mass
of particles 2, 3 (which is less widely physically meaningful).
It also turns out that the duality map sends $\mbox{\sffamily V}_{(\alpha,\beta)}$ to $\mbox{\sffamily V}_{(\alpha,\alpha - \beta)}$.
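This is immediate in the spherical presentation: under the supplementary map $\Theta \longrightarrow \pi - \Theta$, $\mbox{sin}\frac{\Theta}{2}$ and $\mbox{cos}\frac{\Theta}{2}$ interchange, so
\begin{equation}
\mbox{\sffamily V}_{(\alpha, \beta)} \propto \mbox{sin}^{\alpha - \beta}\mbox{$\frac{\Theta}{2}$}\mbox{cos}^{\beta}\mbox{$\frac{\Theta}{2}$}
\longrightarrow
\mbox{sin}^{\beta}\mbox{$\frac{\Theta}{2}$}\mbox{cos}^{\alpha - \beta}\mbox{$\frac{\Theta}{2}$}
\propto \mbox{\sffamily V}_{(\alpha, \alpha - \beta)} \mbox{ } .
\end{equation}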
Simple examples therein of quantum-mechanical interest are the constant potential ($\alpha = 0$) and
similarity relational particle mechanics' mimicker of harmonic oscillators ($\alpha = 2$).
I choose these for explicit study due to the ubiquity of constant and harmonic oscillator potentials in theoretical
physics; moreover harmonic oscillator potentials confer nice boundedness features at the quantum level.
I use the following notation for the constants of proportionality for the single harmonic oscillator between particles 2,
3:
$2\mbox{\sffamily V}_{(2, 0)}^{(1)} = h_{23}\{\underline{q}_2 - \underline{q}_3\}^2 \equiv H_1^{(1)}\rho_1^2$ for
$h_{23}$ the ordinary position space Hooke's coefficient and $H_1$ the relative configuration space's
`Jacobi--Hooke' coefficient.
I use the obvious cyclic permutations of this for harmonic oscillators between particles 3, 1, and particles 1, 2.
Moreover, the linear combination of these last two such that $m_2h_{13} = m_3h_{12}$ (resultant force of
the second and third `springs' pointing along the line joining the centre of mass of particles 2, 3 and
the position of particle 1) is
$2\{\mbox{\sffamily V}_{(2, 0)}^{(2)} + \mbox{\sffamily V}_{(2, 2)}^{(2)}\} = H_1^{(2)}\rho_1^2 + H_2^{(2)}\rho_2^2$, where
\begin{equation}
H_1^{(2)} = \{h_{13}m_2^2 + h_{12}m_3^2\}/{\{m_{2} + m_{3}\}^2} \mbox{ } \mbox{ and } \mbox{ }
H_2^{(2)} = h_{12} + h_{13} \mbox{ } .
\label{H1and2}
\end{equation}
Additionally, one can consider $\mbox{\sffamily V} = \mbox{\sffamily V}^{(1)} + \mbox{\sffamily V}^{(2)}$, whereupon one has
$H_1 = H_1^{(1)} + H_1^{(2)}$ and $H_2 = H_2^{(2)}$; then define $K_i \equiv H_i/\mu_i$ so that
\begin{equation}
2\mbox{\sffamily V} = H_1\rho_1^2 + H_2\rho_2^2 = K_1\iota_1^2 + K_2\iota_2^2 \mbox{ } .
\end{equation}
I refer to this case as the `special' case of the 3 harmonic oscillator problem, the general case having
angle-dependent potentials (see the next SSec) due to {\sl not} having the resultant force
of the second and third `springs' point along $\underline{\iota}_2$.
Three notes of importance below and paper II are as follows.
\noindent 1) In ($\cal R$, $\Phi$) coordinates this leads to the form
\begin{equation}
2\widetilde{\mbox{\sffamily V}} = \{K_1{\cal R}^2 + K_2\}/\{1 + {\cal R}^2\}^3 \mbox{ } ,
\end{equation}
or, in ($\Theta$, $\Phi$) coordinates, by the simplifications
$\mbox{cos}^{2}\mbox{$\frac{\Theta}{2}$} = \{1 + \mbox{cos}\Theta\}/2$ and
$\mbox{sin}^{2}\mbox{$\frac{\Theta}{2}$} = \{1 - \mbox{cos}\Theta\}/2$, to the form
\begin{equation}
\overline{\mbox{\sffamily V}} = A + B \mbox{cos}\Theta \mbox{ } \mbox{ for } \mbox{ }
A =
\left\{
{K_1} + {K_2}
\right\}/16
\mbox{ } , \mbox{ }
B =
\left\{
{K_2} - {K_1}
\right\}/16
\end{equation}
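To spell this out: dividing $2\mbox{\sffamily V} = K_1\iota_1^2 + K_2\iota_2^2$ by the (constant) $\mbox{I}$, using $\iota_1^2 = \mbox{I}\,\mbox{sin}^2\frac{\Theta}{2}$ and $\iota_2^2 = \mbox{I}\,\mbox{cos}^2\frac{\Theta}{2}$, and including the factor of 1/4 from passing to the barred banal representation,
\begin{equation}
\overline{\mbox{\sffamily V}} = \{K_1\mbox{sin}^2\mbox{$\frac{\Theta}{2}$} + K_2\mbox{cos}^2\mbox{$\frac{\Theta}{2}$}\}/8 =
\{K_1\{1 - \mbox{cos}\Theta\} + K_2\{1 + \mbox{cos}\Theta\}\}/16 \mbox{ } ,
\end{equation}
whence the above values of $A$ and $B$.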
2) This situation is the linear combination of a $\mbox{\sffamily V}_{(2, 0)}$ and a $\mbox{\sffamily V}_{(2, 2)}$ and as such {\sl is
self-dual provided that} the values of $K_1$ {\sl and} $K_2$ are also interchanged (or, equivalently,
the sign of $B$ is reversed).
\noindent 3) If $K_1 = K_2$ (corresponding to the highly symmetric balance
$m_1h_{23} = m_2h_{31} = m_3h_{12}$ in position space), then
\begin{equation}
\overline{\mbox{\sffamily V}} = A \mbox{ } ,
\end{equation}
to which I refer as the `very special case'; this is {\sl unconditionally self-dual}.
\subsection{Scalefree triangleland with harmonic oscillator type potential}
In all other multiple harmonic oscillator cases, there is a $\Phi$-dependent cross-term, rendering the potential
nonseparable in these coordinates.
The general triple harmonic oscillator like potential $2\mbox{\sffamily V} = h_{23}\{\underline{q}_2 - \underline{q}_3\}^2 +$ cycles maps to
\begin{equation}
2\widetilde{\mbox{\sffamily V}} = \{K_1{\cal R}^2 + L{\cal R}\mbox{cos}\Phi + K_2\}/\{1 + {\cal R}^2\}^3
\mbox{ } \mbox{ for } \mbox{ }
L \equiv \{\mu_1\mu_2\}^{-1/2}2\{h_{13}m_2 - h_{12}m_3\}/\{m_2 + m_3\} \mbox{ } .
\end{equation}
Or, alternatively, in the spherical presentation, the potential is
\begin{equation}
\overline{\mbox{\sffamily V}} = A + B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi
\mbox{ } \mbox{ for } \mbox{ }
C = {L}/{16}
\mbox{ } .
\end{equation}
This clearly includes the previous SSec's special case by setting $C = 0$ (and so $L = 0$,
corresponding to $m_2h_{13} = m_3h_{12}$).
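Indeed, the new term here follows from the above form of $2\widetilde{\mbox{\sffamily V}}$ in the same way as before, since $\overline{\mbox{\sffamily V}} = \{1 + {\cal R}^2\}^2\widetilde{\mbox{\sffamily V}}/4$ (the two banal representations having conformal factors $\{1 + {\cal R}^2\}^2$ and 4 relative to the shape sphere of radius 1/2) and ${\cal R}/\{1 + {\cal R}^2\} = \mbox{sin}\frac{\Theta}{2}\mbox{cos}\frac{\Theta}{2} = \mbox{sin}\Theta/2$:
\begin{equation}
\frac{\{1 + {\cal R}^2\}^2}{4}\,\frac{L{\cal R}\mbox{cos}\Phi}{2\{1 + {\cal R}^2\}^3} =
\frac{L}{16}\mbox{sin}\Theta\mbox{cos}\Phi = C\mbox{sin}\Theta\mbox{cos}\Phi \mbox{ } .
\end{equation}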
\subsection{Analogy with some linear rigid rotor set-ups}
Some useful mathematical analogies for scalefree triangleland with multiple harmonic oscillator like potentials are as
follows.
\begin{equation}
\mbox{ very special harmonic oscillator } \longleftrightarrow \mbox{ linear rigid rotor } ,
\end{equation}
\begin{equation}
\mbox{ special harmonic oscillator } \longleftrightarrow
\mbox{linear rigid rotor in a background homogeneous electric field in the axial (`z')-direction } ,
\end{equation}
\begin{equation}
\mbox{ general harmonic oscillator } \longleftrightarrow
\mbox{ linear rigid rotor in a background homogeneous electric field in an arbitrary direction }.
\end{equation}
In particular, this classical problem has
\begin{equation}
\mbox{\sffamily T}_{\mbox{\scriptsize r}\so\mbox{\scriptsize t}\so\mbox{\scriptsize r}} = \mbox{I}_{\mbox{\scriptsize r}\so\mbox{\scriptsize t}\so\mbox{\scriptsize r}}\{\dot{\theta}^2 + \mbox{sin}^2\theta\dot{\phi}^2\}/{2}
\mbox{ } , \mbox{ } \mbox{\sffamily V}_{\mbox{\scriptsize r}\so\mbox{\scriptsize t}\so\mbox{\scriptsize r}} = - {\cal M}{\cal E}\mbox{cos}\theta
\end{equation}
where $\mbox{I}_{\mbox{\scriptsize r}\so\mbox{\scriptsize t}\so\mbox{\scriptsize r}}$ is the single nontrivial value of the moment of inertia of the linear
rigid rotor, ${\cal E}$ is a constant external electric field in the axial `$z$' direction and
${\cal M}$ is the dipole moment component in that direction.
Thus the correspondence is $\Theta \longleftrightarrow \theta$, $\Phi \longleftrightarrow \phi$,
\begin{equation}
1 \mbox{ } \longleftrightarrow \mbox{ } \mbox{I}_{\mbox{\scriptsize r}\so\mbox{\scriptsize t}\so\mbox{\scriptsize r}} \mbox{ } , \mbox{ }
\end{equation}
\begin{equation}
\mbox{(energy)/4 $-$ (sum of mass-weighted Jacobi--Hooke coefficients)/16 }
\mbox{ } = \mbox{ }
{\mbox{\sffamily E}}/{4} - A
\mbox{ } = \mbox{ }
\overline{\mbox{\sffamily E}} - A \mbox{ } \longleftrightarrow \mbox{ } E \mbox{ = (energy) } ,
\end{equation}
\begin{equation}
\mbox{ (difference of mass-weighted Jacobi--Hooke coefficients)/16 = } \mbox{ }
B \mbox{ } \longleftrightarrow
\mbox{ } - {\cal ME} \mbox{ } .
\end{equation}
These all being well-studied at the quantum level \cite{TSMessiah, Hecht}, this identification is of
considerable value in solving the relational problem in hand, by the string of techniques in Fig 3.
\mbox{ }
\noindent{\footnotesize[{\bf Figure 3} In this paper I perform coordinate transformations to obtain
standardly solvable problems and then back-track to see what happens in terms of the original
mechanically-significant variables.]}
\subsection{A brief study of the potential}
Working in spherical coordinates, set $0 = {\partial\overline{\mbox{\sffamily V}}}/{\partial\Theta} = - B\mbox{sin}\Theta +
C\mbox{cos}\Theta\mbox{cos}\Phi$, $0 = {\partial\overline{\mbox{\sffamily V}}}/{\partial\Phi} = -C\mbox{sin}\Theta\mbox{sin}\Phi$
to find the critical points.
These are at $(\Theta, \Phi) = (\mbox{arctan}(C/B), 0)$ and $(\pi - \mbox{arctan}(C/B), \pi)$, which are
antipodal (see Fig 4); in fact the potential is axisymmetric about the axis these lie on.
The critical points are, respectively, a maximum and a minimum.
[The very special case $B = C = 0$ is also critical, for all angles -- this case ceases to have a
preferred axis.]
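Equivalently, writing the non-constant part as a dot product with the unit vector $\underline{n} = (\mbox{sin}\Theta\mbox{cos}\Phi, \mbox{sin}\Theta\mbox{sin}\Phi, \mbox{cos}\Theta)$,
\begin{equation}
\overline{\mbox{\sffamily V}} = A + (C, 0, B)\cdot\underline{n} \mbox{ } ,
\end{equation}
so (for $B$, $C$ not both zero) the critical values are $A \pm \sqrt{B^2 + C^2}$, attained at the antipodal pair $\underline{n} = \pm(C, 0, B)/\sqrt{B^2 + C^2}$; the axisymmetry about this axis is then manifest.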
\mbox{ }
\noindent{\footnotesize[{\bf Figure 4}]}
\mbox{ }
\noindent ${\cal R}$ small corresponds to $\Theta$ small, for which
\begin{equation}
\overline{\mbox{\sffamily U}} + \overline{\mbox{\sffamily E}} = \{\overline{\mbox{\sffamily E}} - A - B\} - C\Theta\mbox{cos}\Phi + B\Theta^2/2
+ O(\Theta^3) \mbox{ } , \mbox{or}
\end{equation}
\begin{equation}
2\{\widetilde{\mbox{\sffamily U}} + \widetilde{\mbox{\sffamily E}}\} =
2\mbox{\sffamily E} - K_2 - L{\cal R}\mbox{cos}\Phi - \{4\mbox{\sffamily E} + K_1 - 3K_2\}{\cal R}^2 + O({\cal R}^3) \equiv
Q_0 - L{\cal R}\mbox{cos}\Phi - \{Q_2 {\cal R}^2\} + O({\cal R}^3)
\mbox{ } .
\label{assmall}
\end{equation}
Thus the leading term is a constant,
unless $Q_0 = 2\mbox{\sffamily E} - K_2 \mbox{ }(\mbox{ } \propto \overline{\mbox{\sffamily E}} - A - B \mbox{ }) = 0$,
in which case it is linear in $\Theta$ or ${\cal R}$ and with a cos$\Phi$ factor,
unless also $L \mbox{ }(\mbox{ } \propto C \mbox{ }) = 0$ (which is also the condition for the `special' case),
in which case it is quadratic in $\Theta$ or ${\cal R}$, unless $B = 0$ (given previous conditions,
this is equivalent to $Q_2 = 4\mbox{\sffamily E} + K_1 - 3K_2 = 0$), which means that one is in the $K_2 = 2\mbox{\sffamily E}$ subcase of
the `very special' case, for which $\mbox{\sffamily U} + \mbox{\sffamily E}$ has no terms at all.
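For reference, both this and the following large-${\cal R}$ expansion are expansions of the single exact expression
\begin{equation}
2\{\widetilde{\mbox{\sffamily U}} + \widetilde{\mbox{\sffamily E}}\} = \frac{2\mbox{\sffamily E}}{\{1 + {\cal R}^2\}^2} -
\frac{K_1{\cal R}^2 + L{\cal R}\mbox{cos}\Phi + K_2}{\{1 + {\cal R}^2\}^3} \mbox{ } ,
\end{equation}
which follows from the above form of $2\widetilde{\mbox{\sffamily V}}$ together with the banal scaling $\widetilde{\mbox{\sffamily E}} = \mbox{\sffamily E}/\{1 + {\cal R}^2\}^2$.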
${\cal R}$ large corresponds to the supplementary angle $\Xi \equiv \pi - \Theta$ being small, so
\begin{equation}
\overline{\mbox{\sffamily U}} + \overline{\mbox{\sffamily E}} = \{\overline{\mbox{\sffamily E}} - A - B\} + C\Xi\mbox{cos}\Phi + B\Xi^2/2
+ O(\Xi^3)
\mbox{ } , \mbox{ or }
\end{equation}
\begin{equation}
2\{\widetilde{\mbox{\sffamily U}} + \widetilde{\mbox{\sffamily E}}\} = \{2\mbox{\sffamily E} - K_1\}{\cal R}^{-4} -
L{\cal R}^{-5}\mbox{cos}\Phi - \{4\mbox{\sffamily E} + K_2 - 3K_1\}{\cal R}^{-6} + O({\cal R}^{-7}) \equiv
{Q_4}{\cal R}^{-4} - L{\cal R}^{-5}\mbox{cos}\Phi - Q_6{\cal R}^{-6}
+ O({\cal R}^{-7}) \mbox{ } .
\label{aslar}
\end{equation}
Thus the leading term goes as a constant in $\Xi$ or as ${\cal R}^{-4}$,
unless $Q_4 = 2\mbox{\sffamily E} - K_1 \mbox{ }(\mbox{ }\propto \overline{\mbox{\sffamily E}} - A + B \mbox{ }) = 0$,
in which case it goes linearly in $\Xi$ or as ${\cal R}^{-5}$ in each case also with a cos$\Phi$ factor,
unless also $L \mbox{ }(\mbox{ } \propto C \mbox{ }) = 0$ (`special' case),
in which case it goes quadratically in $\Xi$ or as ${\cal R}^{-6}$, unless $B = 0$
(given previous conditions, this is equivalent to $Q_6 = 4\mbox{\sffamily E} + K_2 - 3K_1 = 0$),
which means that one is in the $K_1 = 2\mbox{\sffamily E}$ subcase of the `very special' case, for which $\mbox{\sffamily E} + \mbox{\sffamily U}$
has no terms at all.
Note that $B = 0$ implies $K_1 = K_2$, so this very special subcase indeed coincides with the previous
paragraph's.
Finally note that the large and small asymptotics are dual to each other (the difference of 4 powers is
accounted for by how the kinetic energy scales under the duality map), so that {\sl one need only
analyse the parameter space for one of the two regimes and then obtain everything about the other regime
by simple transcription}.
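Concretely, the duality map interchanges $K_1$ and $K_2$ while leaving $L$ unchanged, so that
\begin{equation}
Q_0 \longleftrightarrow Q_4 \mbox{ } , \mbox{ } \mbox{ } Q_2 \longleftrightarrow Q_6 \mbox{ } ,
\end{equation}
and the two regimes' expansions map into each other term by term, up to the overall ${\cal R}^{-4}$ from the kinetic scaling.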
\subsection{Classical equations of motion}
The Jacobi-type action for this problem is, in spherical coordinates and using the barred
banal conformal representation,
\noindent
\begin{equation}
\mbox{\sffamily S} = 2\int\textrm{d}\lambda\sqrt{\frac{\dot{\Theta}^2 + \mbox{sin}^2\Theta\dot{\Phi}^2}{2}
\left\{
\frac{\mbox{\sffamily E}}{4} - A - B\mbox{cos}\Theta - C\mbox{sin}\Theta\mbox{cos}\Phi
\right\}} \mbox{ } .
\end{equation}
Then the Euler--Lagrange equations are
\begin{equation}
\Theta^{\overline{**}} - \mbox{sin}\Theta\mbox{cos}\Theta \{\Phi^{\overline{*}}\}^2 =
B\mbox{sin}\Theta - C\mbox{cos}\Theta\mbox{cos}\Phi \mbox{ } ,
\label{Tera}
\end{equation}
\begin{equation}
\{\mbox{sin}^2\Theta\Phi^{\overline{*}}\}^{\overline{*}} = C\mbox{sin}\Theta\mbox{sin}\Phi \mbox{ } .
\label{Dactyl}
\end{equation}
On account of the action being independent of $\overline{t}$, one of these can be replaced by the energy
integral
\begin{equation}
\frac{\{\Theta^{\overline{*}}\}^2 + \mbox{sin}^2\Theta\{\Phi^{\overline{*}}\}^2}{2} +
A + B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi = \overline{\mbox{\sffamily E}} \mbox{ } .
\label{EnIn}
\end{equation}
A dynamical systems and phase space analysis of these equations and their interpretation in terms of
$\mbox{I}_1$, $\mbox{I}_2$ and $\Phi$ will be presented elsewhere \cite{Dyn}.
Also note that the notions of $\mbox{\sffamily V}_{\mbox{\scriptsize eff}}$ and $\mbox{\sffamily V}_{\mbox{\scriptsize orb}}$ are
inapplicable in the nonseparable case.
\subsection{`Special case'}
For $C = 0$,
(\ref{EnIn}) reads
\begin{equation}
\frac{1}{2}
\left\{
\{\Theta^{\overline{*}}\}^2 + \frac{{\cal J}^2}{\mbox{sin}^2\Theta}
\right\}
+ A + B\mbox{cos}\Theta = \overline{\mbox{\sffamily E}}
\end{equation}
\noindent
which can be integrated [using (\ref{CF}) to eliminate $t^{\mbox{\scriptsize e}\mbox{\scriptsize m}}$ in favour of $\Phi$]:
\begin{equation}
\Phi - \Phi_0 = \int {{\cal J}\textrm{d}\Theta}/
{\mbox{sin}\Theta\sqrt{2\{\overline{\mbox{\sffamily E}} - A - B\mbox{cos}\Theta\}\mbox{sin}^2\Theta - {\cal J}^2}}
\mbox{ } .
\end{equation}
\mbox{ }
For the very special case $B = C = 0$, this reduces to the well-known integral for the computation of
geodesics on the sphere -- c.f. \cite{TriCl} -- the exact solution of which I now cast below in terms of
${\cal R}$, $\Phi$ and then in terms of the `original' $\underline{\iota}_1$, $\underline{\iota}_2$
variables.
I have also solved the $B \neq 0$ case exactly by e.g. tan$\frac{\Theta}{2} = {\cal R} \equiv \sqrt{x - 1}$,
obtaining via Maple \cite{Maple} a composition of polynomials, roots and elliptic functions in $x$
that I consider to be too complicated to present here.
\cite{TriCl} sketches $\widetilde{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf}$ and $\widetilde{\mbox{\sffamily U}}_{\so\mbox{\scriptsize r}\mbox{\scriptsize b}}$ for
the single harmonic oscillator and their large and small asymptotic behaviours.
${\cal J} = 0$ is rather simpler: the motion is then 1-d, i.e. the orbits are straight lines.
Assume ${\cal J} \neq 0$ from now on in looking for orbits with less trivial shapes.
The universal large ${\cal R}$ asymptotic solution's $\widetilde{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf}$
exhibits a finite potential barrier and the orbits are bounded from above.
The small ${\cal R}$ asymptotic solution for a fixed $\mbox{\sffamily E} > 0$
harmonic oscillator problem is the usual radial/isotropic harmonic oscillator problem.
Here, $\widetilde{\mbox{\sffamily V}}_{\mbox{\scriptsize e}\mbox{\scriptsize f}\sf}$ is an infinite well formed by the harmonic oscillator's parabolic potential on the
outside and the infinite centrifugal barrier on the inside, and the orbits are bounded from below.
For a fixed $\mbox{\sffamily E} > 0$ harmonic oscillator scalefree triangleland problem, in contrast with the usual radial
harmonic oscillator problem, the potential
tends to 0 rather than $+\infty$ as ${\cal R} \longrightarrow \infty$, and the orbits are bounded from
both above and below.
\subsection{Exact solution for the `very special case'}
This effectively constant-$\mbox{\sffamily V}$ case is equivalent to the geodesic problem on the sphere.
Thus it is solved in $\Theta, \Phi$ variables by great circles (first equality)
\begin{equation}
F\mbox{cos}(\Phi - \Phi_0) = 2\mbox{cot}\Theta = \{1 - {\cal R}^2\}/{\cal R} =
\{\iota_2^2 - \iota_1^2\}/\iota_1\iota_2 \mbox{ }
\label{squik}
\end{equation}
(following through with variable transformations so as to cast it in terms of straightforward relational
variables, and using $\left.{\cal J}F = 2\sqrt{2\{\overline{\mbox{\sffamily E}} - A\} - {\cal J}^2} =
\sqrt{2\mbox{\sffamily E} - K_2 - 4{\cal J}^2} = \sqrt{Q_0 - 4{\cal J}^2}\right)$.
The ${\cal R}$ form above can be rearranged to the equation of a generally off-centre circle [Fig 5 a)].
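Explicitly, in terms of the `Cartesian' combinations $(x, y) = ({\cal R}\mbox{cos}\Phi, {\cal R}\mbox{sin}\Phi)$, the ${\cal R}$ form of (\ref{squik}) rearranges to
\begin{equation}
\left\{x + \frac{F}{2}\mbox{cos}\Phi_0\right\}^2 + \left\{y + \frac{F}{2}\mbox{sin}\Phi_0\right\}^2 = 1 + F^2/4 \mbox{ } ,
\end{equation}
i.e. a circle of radius $\sqrt{1 + F^2/4}$ centred on $-\frac{F}{2}(\mbox{cos}\Phi_0, \mbox{sin}\Phi_0)$.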
While, in terms of the original quantities of the problem,
\begin{equation}
F^2\{ \underline{\iota}_1 \cdot \underline{\iota}_2 \}^2 +
||\underline{\iota}_1||^4 + ||\underline{\iota}_2||^4 +
2F\{ ||\underline{\iota}_1||^2 - ||\underline{\iota}_2||^2 \}
\underline{\iota}_1 \cdot \underline{\iota}_2 \mbox{cos}\Phi_0 =
\{2 + F^2\mbox{sin}^2\Phi_0\}||\underline{\iota_1}||^2||\underline{\iota_2}||^2 \mbox{ } .
\end{equation}
Note 1) this solution is homogeneous in $||\underline{\iota}_1||$, $||\underline{\iota}_2||$ and
$\sqrt{\underline{\iota}_1\cdot\underline{\iota}_2}$ (in fact it is a fourth-order homogeneous
polynomial in these quantities).
These quantities are all invariant under rotation, so the zero total angular momentum constraint is manifestly
satisfied, while the zero dilational momentum constraint is manifestly satisfied due to the homogeneity.
[`Homogeneous equation' is another way of expressing `equation in terms of ratios alone'.]
\noindent 2) It has simpler subcases: $\mbox{sin}\Phi_0 = 0$ gives the merely second-order
\begin{equation}
F\underline{\iota}_1 \cdot \underline{\iota}_2 +
||\underline{\iota}_1||^2 = ||\underline{\iota}_2||^2 \mbox{ } ,
\end{equation}
which I sketch in Fig 5, while cos$\Phi_0 = 0$ gives
\begin{equation}
F^2\{\underline{\iota_1} \cdot \underline{\iota}_2\}^2 +
||\underline{\iota_1}||^4 + ||\underline{\iota}_2||^4 =
\{2 + F^2\}||\underline{\iota}_1||^2||\underline{\iota}_2||^2 \mbox{ } .
\end{equation}
\noindent 3) The solution is invariant under $\underline{\iota}_1 \longleftrightarrow \underline{\iota}_2$
provided that also $\Phi_0$ is shifted by an odd multiple of $\pi$.
\mbox{ }
\noindent{\footnotesize[{\bf Figure 5} a) On the sphere the solutions are great circles, while their
projections onto the $({\cal R}, \Phi)$ plane are circles.
I also provide a sketch of how the projection works;
the equator maps to the circle ${\cal R} = 1$ and the `Greenwich meridian' maps to the vertical line.
\noindent b) The barycentric partial moments of inertia as functions of $\Phi$ are then [by
(\ref{squik}), that their sum is a constant $\mbox{I}$, and that this sketch is readily rotatable from
$\Phi_0 = 0$ to $\Phi_0$ arbitrary and using \cite{Maple}] $\mbox{I}_1, \mbox{I}_2 = \frac{\mbox{\scriptsize I}}{2}\left\{ 1 \pm
{F\mbox{cos}\Phi}/{\sqrt{4 + F^2\mbox{cos}^2\Phi}}\right\}$.
This is additionally, as expected, readily rescaleable for any $\mbox{I} \neq 0$, so I take $\mbox{I} = 1$.
For diagrams of this kind in this paper, I use the convention of black for $\mbox{I}_1$ and dashed for $\mbox{I}_2$.
Then as $F \longrightarrow 0$, one gets twice the circle of radius 0.5 (corresponding to the
${\cal R} = 1$ solution); the first picture, for $F = 0.1$, is well on its way toward that behaviour.
As $F$ grows, the motion has two `almost-halves' in which one of $\mbox{I}_1$, $\mbox{I}_2$ completely
dominates over the other, separated by two thin wedges in which $\mbox{I}_1$, $\mbox{I}_2$ cross over.
This amounts to $\mbox{I}_1, \mbox{I}_2$ undergoing periodic oscillations.
Note that they always cross over at 1/2, corresponding to all the other circles in Fig 5a) cutting
the ${\cal R} = 1$ circle.
Also note the complete symmetry between the status of $\mbox{I}_1$ and $\mbox{I}_2$ in the opposite direction
($\Phi \longrightarrow \Phi + \pi$).
As $F \longrightarrow \infty$, the two curves touch the origin and, oppositely, the unit circle,
corresponding to ${\cal R}$ running from 0 to $\infty$, i.e. the vertical line of Fig 5a).
\noindent c) Re-interpretation in terms of particles.
I consider only $m_1 = m_2 = m_3$ for simplicity.
The equator (${\cal R} = 1$ circle) $F = 0$ case corresponds to going through the configurations
sketched, which can be summarized by the picture underneath: particles 2, 3 fixed and the circle
representing the range of positions that particle 1 takes.
I also provide the analogous summarizing sketch for the slightly inclined great circle of Fig 5 a)
(this is for $\Phi_0 = 0$, in general different values of $\Phi_0$ correspond to different
particle behaviours).
The Greenwich meridian corresponds to motions between the double collision of particles 2, 3 and the
2, 1, 3 collinearity with particle 1 at the centre of mass of particles 2, 3.
\noindent d) Some useful terminology for the particle configurations.
Some definitions to aid this are as follows:
near-isosceles and near-collinear (for $\alpha <$ some $\alpha_0$, constant),
`Jacobi-flat' ($\mbox{I}_1 \ll \mbox{I}_2$),
`Jacobi-regular' ($\mbox{I}_1 = \mbox{I}_2$, which, when isosceles, is equilateral) and
`Jacobi-tall' ($\mbox{I}_1 \gg \mbox{I}_2$).]}
\subsection{Small relative scale asymptotic behaviour}
The first approximation for ${\cal R}$
is the generic asymptotics ($Q_0 = 2\mbox{\sffamily E} - K_2 \propto \overline{\mbox{\sffamily E}} - A - B \neq 0$, so
$\widetilde{\mbox{\sffamily U}} + \widetilde{\mbox{\sffamily E}}$ goes as $Q_0/2$).
[Note however that not all dynamical orbits enter such a regime -- sometimes the quadrature's integral
goes complex before the small ${\cal R}$ regime is attained (small ${\cal R}$ then being `classically
forbidden'), namely when
$\widetilde{\mbox{\sffamily U}}({\cal R}) {\cal R}^2 - {\cal J}^2 < 0$.]
Then integrating the ${\cal R}$ quadrature (\ref{quad}) gives the orbits
\begin{equation}
\pm\,\mbox{sec}(\Phi - \bar{\Phi})/\sqrt{2q_0} = {\cal R} = \mbox{tan$\frac{\Theta}{2}$} = \iota_1/\iota_2
\mbox{ }
\label{exactsoln}
\end{equation}
(following through with variable transformations so as to cast it in terms of straightforward relational
variables, and using the ${\cal J}$-absorbing constant
$q_0 = Q_0/2{\cal J}^2 = {\mbox{\sffamily E} - K_2/2}/{\cal J}^2 = {4\{\overline{\mbox{\sffamily E}} - A - B\}}/{\cal J}^2$).
Note that in the ${\cal R}$ form the orbits are parallel straight lines (vertical for $\Phi_0 = 0, \pi$)
[Fig 6a)].
Moreover, these are known not to be very good approximands in that they totally neglect the
non-constant part of the potential and thus are precisely rectilinear motions.
Thus at least here (and one may suspect likewise in the next section), it is often necessary to
use the second approximation in studies.
The other equalities in (\ref{exactsoln}) convert that result into what form the asymptotic orbits take
in straightforward relational variables; I sketch these in Figs 6a) and 7a).
While, in terms of the problem's original quantities,
\begin{equation}
||\underline{\iota}_2||^4 + 2q_0\{\underline{\iota}_1 \cdot \underline{\iota}_2\}^2 =
2q_0||\underline{\iota}_1||^2 ||\underline{\iota}_2||^2\mbox{sin}^2\Phi_0 +
2\sqrt{2q_0}\,||\underline{\iota}_2||^2\{\underline{\iota}_1\cdot\underline{\iota}_2\}\mbox{cos}{\Phi}_0
\mbox{ } ,
\label{81}
\end{equation}
which is again a fourth-order homogeneous polynomial in $||\underline{\iota}_1||$,
$||\underline{\iota}_2||$ and $\sqrt{\underline{\iota}_1\cdot\underline{\iota}_2}$.
It admits the simpler cases 1) $\mbox{sin}\Phi_0 = 0$ (which is merely second-order):
\begin{equation}
||\underline{\iota}_2||^2 = \sqrt{2q_0}\underline{\iota}_1 \cdot \underline{\iota}_2 \mbox{ } ,
\label{simp}
\end{equation}
and 2) $\mbox{cos}\Phi_0 = 0$:
\begin{equation}
||\underline{\iota}_2||^4 =
2q_0\{||\underline{\iota}_1||^2||\underline{\iota}_2||^2 -
\{\underline{\iota}_1 \cdot \underline{\iota}_2\}^2 \} \mbox{ } .
\end{equation}
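\mbox{ }

The above form (\ref{81}) and its subcases can be checked numerically; here is a minimal sketch of my
own, with illustrative values of $q_0$, $\Phi_0$, $\Phi$ and of the overall scale, and with $\Phi$
chosen in $(0, \pi)$ with $\mbox{cos}(\Phi - \Phi_0) > 0$ so that the positive branch is the relevant
one.
\begin{verbatim}
# Numerical sketch: build Jacobi vectors iota_1, iota_2 lying on a first-approximation
# orbit R cos(Phi - Phi_0) = 1/sqrt(2 q0), with R = |iota_1|/|iota_2| and Phi the
# relative angle, and verify the polynomial form of the orbit in the original variables.
import numpy as np

q0, Phi0, Phi = 0.7, 0.4, 0.9            # illustrative values
R = 1.0/(np.sqrt(2*q0)*np.cos(Phi - Phi0))

b = 1.3                                  # |iota_2|, arbitrary scale
a = R*b                                  # |iota_1| then fixed by the orbit
i1 = a*np.array([np.cos(Phi), np.sin(Phi)])
i2 = b*np.array([1.0, 0.0])

dot = i1 @ i2
lhs = b**4 + 2*q0*dot**2
rhs = 2*q0*a**2*b**2*np.sin(Phi0)**2 + 2*np.sqrt(2*q0)*b**2*dot*np.cos(Phi0)
print(np.isclose(lhs, rhs))              # True
\end{verbatim}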
The second approximation for ${\cal R}$ small in which $L = 0$ but the $Q_2{\cal R}^2/2$ term is also
kept turns out to also often be necessary.
If $Q_0 = 0$ but $L \neq 0$ one has non-generic asymptotics not directly covered in this SSec, but
one can use the technique of Sec 5 to reduce this case to one of those covered in the present SSec
in a new set of variables.
If $Q_0, L = 0$, one resides within the special case, and the next leading term is $Q_2{\cal R}^2/2$,
which case is included in my working below; indeed now the ``second'' approximation is always necessary.
If $Q_2 = 4\mbox{\sffamily E} + K_1 - 3K_2 = 16\{\overline{\mbox{\sffamily E}} - A - 2B\} = 0$ also, one is within the very
special case, and so one does not need any asymptotic calculations as one has the exact solution of
SSec 4.8.
For the second approximation integrating the ${\cal R}$ quadrature gives the orbits
\begin{equation}
\pm 1/\sqrt{q_0 + \sqrt{q_0^2 - q_2} \mbox{cos}(2\{\Phi - \bar{\Phi}\})} = {\cal R} = \mbox{tan$\frac{\Theta}{2}$} = \iota_1/\iota_2
\mbox{ }
\label{Sexactsoln}
\end{equation}
(following through with variable transformations so as to cast it in terms of straightforward relational
variables, and using the ${\cal J}$-absorbing constant $q_2 = Q_2/{\cal J}^2$).
This is straightforwardly rearrangeable into quite a standard form (e.g. \cite{Moulton, Whittaker})
\begin{equation}
{\cal R}^2 = \frac{1}{q_0 + \sqrt{q_0^2 - q_2}\mbox{cos}(2\{\Phi - \Phi_0\})} \mbox{ } ,
\end{equation}
the case-by-case analysis of which is provided in Fig 6b).
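For orientation, that case-by-case analysis (as summarized in the caption of Fig 6 below) can be
encoded in a small classification helper; the following is a sketch of my own, with the borderline
cases tested exactly rather than numerically.
\begin{verbatim}
def classify(q0, q2):
    """Type of curve R^2 = 1/(q0 + sqrt(q0^2 - q2) cos(2 Phi)), per Fig 6b) and c)."""
    if (q0 < 0 and q2 >= 0) or (q0 == 0 and q2 > 0):
        return "classically forbidden"
    if q0 == 0 and q2 == 0:
        return "circle at infinity"
    if q2 > q0**2:
        return "no solution (beyond the q2 = q0^2 parabola)"
    if q2 == q0**2:
        return "circle centred on the origin"
    if q2 > 0:
        return "ellipse centred on the origin"
    if q2 == 0:
        return "pair of straight lines (first approximation)"
    return "hyperbola: obtuse, rectangular or acute as q0 >, =, < 0"
\end{verbatim}
Under the duality used for the large-${\cal R}$ regime below, the same helper applies with
$(q_4, q_6)$ in place of $(q_0, q_2)$.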
While, in terms of the problem's original quantities, and using $g$ for $\sqrt{q_0^2 - q_2}$,
$$
||\underline{\iota}_2||^8 + \{q_0^2 +
g^2\mbox{cos}^2(2\Phi_0)\}||\underline{\iota}_1||^4||\underline{\iota}_2||^4
-2q_0||\underline{\iota}_2||^6||\underline{\iota}_1||^2 +
4g^2\{\underline{\iota_1} \cdot \underline{\iota_2}\}^4 +
$$
\begin{equation}
2g||\underline{\iota}_2||^2\{q_0||\underline{\iota_1}||^2 - ||\underline{\iota_2}||^2\}
\{2\{\underline{\iota}_1\cdot\underline{\iota}_2\}^2 -
||\underline{\iota}_1||^2||\underline{\iota}_2||^2\}\mbox{cos}(2\Phi_0) =
4g^2\{\underline{\iota_1}\cdot\underline{\iota_2}\}^2||\underline{\iota}_1||^2||\underline{\iota}_2||^2
\mbox{ } ,
\end{equation}
which is an eighth-order homogeneous polynomial in $||\underline{\iota}_1||$,
$||\underline{\iota}_2||$ and $\sqrt{\underline{\iota}_1\cdot\underline{\iota}_2}$.
It admits the simpler cases 1) $\mbox{sin}(2\Phi_0) = 0$ (which is merely fourth-order):
\begin{equation}
||\underline{\iota}_2||^4 = \{q_0 - g\}||\underline{\iota}_1||^2||\underline{\iota}_2||^2 +
2g\{\underline{\iota}_1\cdot\underline{\iota}_2\}^2
\end{equation}
and 2) $\mbox{cos}(2\Phi_0) = 0$:
\begin{equation}
||\underline{\iota}_2||^8 + q_0^2||\underline{\iota}_1||^4||\underline{\iota}_2||^4
- 2q_0||\underline{\iota}_2||^6||\underline{\iota}_1||^2 =
4g^2
\{\underline{\iota}_1\cdot\underline{\iota}_2\}^2
\{||\underline{\iota}_1||^2||\underline{\iota}_2||^2
- \{\underline{\iota}_1\cdot\underline{\iota}_2\}^2 \} \mbox{ } .
\end{equation}
Taking $q_0 = 0$ further simplifies the simple subcases.
While, taking $g = 0$ collapses the general solution to
\begin{equation}
||\underline{\iota}_2|| = 0 \mbox{ (2, 1, 3 collinearity with 1 at the centre of mass of 2, 3) or}
||\underline{\iota}_2|| = \pm\sqrt{q_0}||\underline{\iota}_1|| \mbox{ (rectilinear motion: $\Phi$ fixed) } .
\end{equation}
\mbox{ }
\noindent{\footnotesize[{\bf Figure 6.} a) The vertical straight lines of the first small approximation.
\noindent b) $q_0 > 0$, $q_2 > 0$ (which is the usual mathematics of the isotropic harmonic oscillator), gives ellipses
centred about the origin \cite{PrincipiaI}, up to maximum values on the $q_2 = q_0^2$ parabola on which
they become circles centred on the origin; beyond this the solution ceases to exist.
While, $q_0 > 0, q_2 = 0$ recovers pairs of straight lines belonging to a).
$q_2 < 0$ is the upside-down isotropic harmonic oscillator and gives hyperbolae, which are obtuse for $q_0 > 0$,
rectangular on $q_0 = 0$ and acute for $q_0 < 0$.
Finally, $q_0 = q_2 = 0$ is the circle at infinity, while $q_0 < 0$, $q_2 \geq 0$ and $q_0 = 0, q_2 > 0$
are classically forbidden, as can be seen from the quadrature having to be real.
\noindent c) The partition of parameter space into these cases; it is obvious from (50)
that the shaded region is classically disallowed.]}
\mbox{ }
\noindent{\footnotesize[{\bf Figure 7.} Using a form valid for both first and second small
approximations, the partial moments of inertia are, for $\Phi_0 = 0$,
\noindent
$\mbox{I}_1 = {\mbox{I}}/\{1 + q_0 + g\mbox{cos}(2\Phi)\}$ and
$\mbox{I}_2 = {\mbox{I}\{q_0 + g\mbox{cos}(2\Phi)\}}/\{1 + q_0 + g\mbox{cos}(2\Phi)\}$.
\noindent a) Then the three types of behaviour of the first small approximation are: `$\mbox{I}_2$ is
surrounded by $\mbox{I}_1$', `$\mbox{I}_1$ and $\mbox{I}_2$ touching', and `$\mbox{I}_1$ and
$\mbox{I}_2$ cross-over' (of which I provide two representatives).
The touching case is the limiting case between the other two cases.
The only instance in which the small approximation is self-consistent is in the last picture, for
$\Phi$ within wedges roughly five angular tick-marks wide either side of 0 and of $\pi$.
\noindent b) Next, here are sketches of the nine further behaviours exhibited by the second approximation.
The first three are limiting behaviours on the $q_2 = q_0^2$ parabola, of which the second is
itself the limiting behaviour between the other two: a single point (corresponding again to
the ${\cal R} = 1$ solution).
The next three are the unfolded version of `cross-over', the reverse case of `touching' (again a limiting
behaviour) and the reverse case of `is surrounded'.
The last four are all cases in which there occurs
2, 1, 3 collinearity with particle 1 at the centre of mass of particles 2, 3:
`is surrounded', a `touching' limiting case, and two instances of `cross-over'.
The outwards-lying arrows indicate for which $\Phi$ in each case the small approximation
is self-consistent.
\noindent c) Here I provide a sketch (not to scale) of which regions of the parameter space these
various cases reside in.
The dashed curve touches the parabola, thus trisecting the classically-allowed region into:
a region above and to the left of the dashed curve where the small asymptotics is everywhere-valid,
a region to the right of the dashed curve where it is valid in some wedges,
and a region below and to the left of the dashed curve where it is nowhere-valid.
\noindent d) A few examples of reading off what is happening in the particle position picture.
The third and sixth cases in 7b) correspond to particle 1 describing a closed curve round particles
2, 3.
The fifth case describes particle 1 coming in from infinity (= particles 2, 3 colliding) and the
approximation breaking down for some $\Phi_c$.
The tenth case describes particle 1 coming in from infinity and reaching the centre of mass of
particles 2, 3 at some angle $\Phi_c$.]}
\subsection{Large relative scale asymptotic behaviour}
For analogous notions of first and second {\sl large} approximations, now the quadrature in
${\cal U} = 1/{\cal R}$ takes the same form as the ${\cal R}$-quadrature in the above workings with
\begin{equation}
Q_0 \longrightarrow Q_4 \mbox{ } , \mbox{ } Q_2 \longrightarrow Q_6
\end{equation}
($q_4$, $q_6$ and $f$ below are then the obvious analogues of $q_0$, $q_2$ and $g$).
Hence the solutions are dual to those of SSec 4.9's.
Thus all of SSec 4.9's results apply again under the duality substitutions (and the
subsequently-induced language changes `small' $\longrightarrow$ `large', and
``2, 1, 3 collinearity with particle 1 at the centre of mass of particles 2, 3" $\longrightarrow$
``collision between particles 2, 3, also interpretable as particle 1 escaping to infinity").
In particular, the $\mbox{I}_1, \mbox{I}_2$ plots of Fig 7b) and which parts of the parameter space the various
cases hold in and where the now small asymptotics is valid can just be read off the existing figures
under these substitutions, and then the region in which small asymptotics applied before [Fig 7c)] is
now that in which large asymptotics now applies.
New ${\cal R}, \Phi$ plots are, however, required (Fig 8).
Some particular comments are: 1) the first approximation is then
\begin{equation}
\pm\sqrt{2q_4}\mbox{cos}(\Phi - \Phi_0) = {\cal R} = \mbox{tan$\frac{\Theta}{2}$} = \iota_1/\iota_2 \mbox{ } .
\end{equation}
In the (${\cal R}, \Phi$) plane and for $\Phi_0 = 0$, this takes the form of a family of circles of radius
$\sqrt{q_4/2}$ and centre $(\sqrt{q_4/2}, 0)$, so that they are all tangent to the vertical axis through
the origin (\cite{TriCl} and Fig 6a)).
2) The second approximation gives
\begin{equation}
\pm\sqrt{q_4 + \sqrt{q_4^2 - q_6} \mbox{cos}(2\{\Phi - \bar{\Phi}\})} = {\cal R} =
\mbox{tan$\frac{\Theta}{2}$} = \iota_1/\iota_2 \mbox{ } .
\end{equation}
In the ${\cal R}, \Phi$ plane and for $\Phi_0 = 0$, this takes the forms in Fig 8b) in the parameter
regions delineated by Fig 6c).
\mbox{ }
\noindent{\footnotesize[{\bf Figure 8} a) The first large approximation's family of tangent circles in the
(${\cal R}, \Phi$) plane.
\noindent b) Additional behaviours of the second large approximation:
bulging, rectangular and thin tear drops, ellipse-like and peanut-like curves and circles
centred on the origin.
The partition of $q_4$, $q_6$ parameter space here is the same as that of $q_0$, $q_2$
parameter space in Fig 6c); indeed Fig 6c) is an important clarification of the behaviour
exhibited by the large approximation.]}
\mbox{ }
Note: for general $\widetilde{\mbox{\sffamily V}}_{(\alpha, 0)}$ with constant of proportionality $\Lambda_{\alpha}$, one
gets exactly the same large-asymptotics analysis as here, with $q_0 = 2\mbox{\sffamily E} - \Lambda_{\alpha}$,
$q_2 = 4\mbox{\sffamily E} - \{4 + \alpha\}\Lambda_{\alpha}$.
Thus this duality between the universal large-scale asymptotics of scalefree triangleland and the
isotropic harmonic oscillator is generally a useful and important result for this Machian mechanics,
since the classical and quantum mechanics of isotropic harmonic oscillators is rather well-studied and
is thus a ready source of classical and quantum methods, results, and insights.
\subsection{Investigation of the intermediate-${\cal R}$ region}
Note that only two of $Q_0$, $Q_2$, $Q_4$ and $Q_6$ are independent.
Thus prescribing a particular small asymptotics entails prescribing a particular large asymptotics too.
There are however no compatibility restrictions: each small asymptotic behaviour is capable of
connecting to each large asymptotic behaviour.
Also note from the twin circles, tear drops and peanuts that more than one region in which large
asymptotics applies is possible.
Then numerical integration using Maple's \cite{Maple} rkf45 solver reveals that precession can occur
in the intermediate-${\cal R}$ region, so that orbits can have arbitrarily many large (and small)
asymptotic regions.
However, the spiraling reported in \cite{TriCl} does not match this paper's asymptotic
solutions in all relevant places.
I account for this as follows.
1) I observe that when the bona fide energy constant E $\neq 0$, solutions do include the circle centred
on the origin and spirals to/from that \cite{Moulton} which resemble the previously-reported spiraling.
2) However the scalefree triangleland problem in hand has E = 0 and so possesses no such solutions.
3) The previously-reported spiraling is due to \cite{TriCl}'s code not implementing the E = 0
restriction.
Thus the observation of spiraling with that code is unsurprising, and wrong.
The code I used for the present paper specifically builds its ${\cal R}$-derivative `initial datum' by
solving the E = 0 energy equation for the input ${\cal R}$ initial datum (and checks that the evolution
does not take one away from E = 0), thus amending the error.
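For definiteness, here is a minimal Python sketch (not the Maple code used for this paper) of that
procedure; the effective potential, the value of ${\cal J}$ and the initial ${\cal R}$ are illustrative
placeholders, to be replaced by those of the model actually under study.
\begin{verbatim}
# Sketch of the E = 0 integration procedure: the R-derivative initial datum is built
# from the energy-type equation, and the constraint is monitored along the evolution.
import numpy as np
from scipy.integrate import solve_ivp

J = 0.2                                    # illustrative relative angular momentum

def U_eff(R):                              # placeholder for the model's Utilde + Etilde
    return 1.0/(1.0 + R**2)

def dU_eff(R, h=1.0e-6):                   # a numerical derivative suffices for a sketch
    return (U_eff(R + h) - U_eff(R - h))/(2.0*h)

def energy(R, dR):                         # schematic E = 0 equation
    return 0.5*dR**2 + 0.5*(J/R)**2 - U_eff(R)

def rhs(t, y):                             # equations of motion consistent with `energy'
    R, dR, Phi = y
    return [dR, J**2/R**3 + dU_eff(R), J/R**2]

R0 = 1.0
dR0 = np.sqrt(2.0*U_eff(R0) - (J/R0)**2)   # solve the E = 0 equation for the initial dR
sol = solve_ivp(rhs, (0.0, 50.0), [R0, dR0, 0.0], rtol=1e-10, atol=1e-12)
print("largest drift away from E = 0:", np.max(np.abs(energy(sol.y[0], sol.y[1]))))
\end{verbatim}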
\section{Normal coordinates for scalefree triangleland with multi-harmonic oscillator type potentials}
\subsection{A rotation sending the general case to the special case in new coordinates}
One can avoid having a $C$-term by performing a rotation or normal coordinates construction, which
preserves the form of the kinetic term while sending the potential to a $C$-free form in the new
coordinates.
One can get to these by inserting such a rotation between Step 1 and Step 2 in Sec 3.2, or by
rotating coordinates at the level of the equations of motion.
I denote the new coordinates with N-subscripts (N for normal).
Due to the general case in N-subscripted coordinates taking the same form as the special case in the
original coordinates, we can uplift Sec 4 to an exact solution of the general multiple harmonic oscillator case by inserting
N-subscripts and, at the end of the calculation, rotating back from normal Jacobi coordinates to a more
complicated form in terms of the original Jacobi coordinates.
What is the requisite rotation angle?
From the matrix equation
$\underline{\underline{R}} (\mbox{through angle } \alpha \mbox{ about } y \mbox{ axis} )
\underline{n} = \underline{n}_{\sN}$,
\begin{equation}
\left(
\begin{array}{ccc}
\mbox{cos}\alpha & 0 & -\mbox{sin}\alpha \\
0 & 1 & 0 \\
\mbox{sin}\alpha & 0 & \mbox{cos}\alpha
\end{array}
\right)
\left(
\begin{array}{c}
C \\
0 \\
B
\end{array}
\right)
=
\left(
\begin{array}{c}
0 \\
0 \\
B_{\sN}
\end{array}
\right) \mbox{ } ,
\end{equation}
where $\underline{n}$, $\underline{n}_{\sN}$ are unit vectors considered to be in unit-radius spherical
coordinate form about the original and normal-coordinate axes, the requisite rotation is by
\begin{equation}
\alpha = \mbox{arctan}(C/B) \mbox{ } :
\end{equation}
this sends $B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi$ to $B_{\sN}\mbox{cos}\Theta_{\sN}$.
Then
\begin{equation}
\mbox{sin}\alpha = {C}/{\sqrt{B^2 + C^2}} \mbox{ } , \mbox{ }
\mbox{cos}\alpha = {B}/{\sqrt{B^2 + C^2}} \mbox{ } ,
B_{\sN} = \sqrt{B^2 + C^2} \mbox{ } ,
\end{equation}
\begin{equation}
\mbox{cos}\Theta_{\sN} = \{B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi\}/{\sqrt{B^2 + C^2}}
\mbox{ }
\mbox{ and }
\mbox{ }
\Phi_{\sN} = \mbox{arctan}
\left(
{\sqrt{B^2 + C^2}\mbox{sin}\Theta\mbox{sin}\Phi}
/\{B\mbox{sin}\Theta\mbox{cos}\Phi - C\mbox{cos}\Theta\}
\right) \mbox{ } .
\label{Rex}
\end{equation}
The potential is now
\begin{equation}
\{K_1^{\sN}\rho_{1_{\mbox{\tiny N}}}^2 + K_2^{\sN}\rho_{2_{\mbox{\tiny N}}}^2 \}/{8\mbox{I}} = A_{\sN} + B_{\sN}\mbox{cos}\Theta_{\sN}
\mbox{ } .
\end{equation}
It is also useful to note for later use the following coefficient interconversions:
\begin{equation}
A_{\sN} = A
\mbox{ } , \mbox{ }
K_1^{\sN} = 8\{A - \sqrt{B^2 + C^2}\}
\mbox{ } , \mbox{ }
K_2^{\sN} = 8\{A + \sqrt{B^2 + C^2}\}
\mbox{ } ,
\label{Lex}
\end{equation}
\begin{equation}
K_1^{\sN} = \{K_1 + K_2 - \sqrt{\{K_1 - K_2\}^2 + L^2}\}/2
\mbox{ } , \mbox{ }
K_2^{\sN} = \{K_1 + K_2 + \sqrt{\{K_1 - K_2\}^2 + L^2}\}/2 \mbox{ } .
\end{equation}
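These formulae are straightforward to check numerically; here is a minimal sketch of my own, assuming
the coefficient identifications $A = \{K_1 + K_2\}/16$, $B = \{K_2 - K_1\}/16$, $C = L/16$ of the
multi-harmonic oscillator potential mapping, and using atan2 so as to fix quadrants.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K1, K2, L = rng.uniform(0.1, 3.0, 3)
A, B, C = (K1 + K2)/16, (K2 - K1)/16, L/16
BN = np.hypot(B, C)
alpha = np.arctan2(C, B)                     # alpha = arctan(C/B)

# the rotation through alpha about the y axis, as in the matrix equation above
Rot = np.array([[np.cos(alpha), 0, -np.sin(alpha)],
                [0,             1,  0            ],
                [np.sin(alpha), 0,  np.cos(alpha)]])
assert np.allclose(Rot @ np.array([C, 0, B]), [0, 0, BN])

# the (Theta, Phi) -> (Theta_N, Phi_N) formulae agree with the rotation itself
Theta, Phi = rng.uniform(0.1, np.pi - 0.1), rng.uniform(-np.pi, np.pi)
n = np.array([np.sin(Theta)*np.cos(Phi), np.sin(Theta)*np.sin(Phi), np.cos(Theta)])
ThetaN = np.arccos((B*np.cos(Theta) + C*np.sin(Theta)*np.cos(Phi))/BN)
PhiN = np.arctan2(BN*np.sin(Theta)*np.sin(Phi),
                  B*np.sin(Theta)*np.cos(Phi) - C*np.cos(Theta))
nN = np.array([np.sin(ThetaN)*np.cos(PhiN), np.sin(ThetaN)*np.sin(PhiN), np.cos(ThetaN)])
assert np.allclose(Rot @ n, nN)

# the coefficient interconversions
assert np.isclose(8*(A - BN), (K1 + K2 - np.hypot(K1 - K2, L))/2)
assert np.isclose(8*(A + BN), (K1 + K2 + np.hypot(K1 - K2, L))/2)
print("all checks passed")
\end{verbatim}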
From the spherical perspective, the normal coordinates solution has the same form as the special
solution in the original coordinates, but now one is to project onto the general tangent plane rather
than the tangent plane at the North Pole, interpreting now that general stereographic coordinate
as the ratio of the square roots of the barycentric partial moments of inertia.
This permits graphical sketches of the qualitative behaviour rather than fairly lengthy analytical
expressions constructed by passing from ($\Theta_{\sN}, \Phi_{\sN}$) coordinates to ($\Theta, \Phi$)
coordinates and then on to mechanically significant variables.
\subsection{Examples: preamble}
Note that the very special case's solution is invariant under this rotation, since it has {\sl no} preferred
axis.
So one needs to look slightly further to obtain nontrivial examples.
First I give the simplest nontrivial example of the first small asymptotics from both an analytical and
a graphical presentation.
For this paper to be of manageable length I then only provide a graphical perspective for the first
large asymptotics, and brief comments on further cases.
\mbox{ }
\noindent{\footnotesize[{\bf Figure 9} N-large and N-small domains of applicability map to the original
(${\cal R}$, $\Phi$) plane as indicated.]}
\subsection{First small asymptotics solution for general case}
From (\ref{exactsoln}) with N-subscripts, $\mbox{tan}\frac{\Theta_{\sN}}{2} = \mbox{sin}\Theta_{\sN}/\{1 + \mbox{cos}\Theta_{\sN}\}$,
(\ref{Rex}, \ref{Lex}) and elementary cancellations, for $\Phi_{\sN}^{0} = 0$ the analytic solution for this takes
the form
\begin{equation}
B_{\sN} + B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi =
\sqrt{2q_{\sN}^0}\{B\mbox{sin}\Theta\mbox{cos}\Phi - C\mbox{cos}\Theta\} \mbox{ } ,
\end{equation}
where $q_{0}^{\sN}$ bears the same relation to $K_2^{\sN}$ as $q_0$ bears to $K_2$.
Then in terms of (${\cal R}, \Phi$),
\begin{equation}
\left\{
B_{\sN} - B - \sqrt{2q_{\sN}^0}C
\right\}
{\cal R}^2 + 2
\left\{
{C} - \sqrt{2q_{\sN}^0}B
\right\}
{\cal R}\mbox{cos}\Phi + B_{\sN} + B + \sqrt{2q_{\sN}^0}C = 0 \mbox{ } .
\end{equation}
Or, in terms of straightforward relational variables $\mbox{I}_1, \mbox{I}_2, \Phi$,
\begin{equation}
\left\{
B_{\sN} - B - \sqrt{2q_{\sN}^0} C
\right\}
\mbox{I}_1 +
\left\{
B_{\sN} + B + \sqrt{2q_{\sN}^0}C
\right\}
\mbox{I}_2 +
2\left\{
C - \sqrt{2q_{\sN}^0}B
\right\}
\sqrt{\mbox{I}_1\mbox{I}_2}\mbox{cos}\Phi = 0 \mbox{ } .
\end{equation}
Finally in terms of the original variables for the problem,
\begin{equation}
\left\{
B_{\sN} - B - \sqrt{2q_{\sN}^0}C
\right\}
||\underline{\iota}_1||^2 +
\left\{
B_{\sN} + B + \sqrt{2q_{\sN}^0}C
\right\}
||\underline{\iota}_2||^2 + 2
\left\{
{C} - \sqrt{2q_{\sN}^0}B
\right\}
\underline{\iota}_1\cdot\underline{\iota}_2 = 0 \mbox{ } .
\end{equation}
Like for the $C = 0$ case (\ref{simp}), this is a second-order homogeneous polynomial in
$||\underline{\iota}_1||$, $||\underline{\iota}_2||$ and
$\sqrt{\underline{\iota}_1\cdot\underline{\iota}_2}$.
\mbox{ }
\noindent{\footnotesize[{\bf Figure 10} a) Sketch of how the first small approximation's parallel
straight lines in the (${\cal R}$, $\Theta$) plane project onto the ($\Theta$, $\Phi$) sphere.
\noindent b) (${\cal R}$, $\Phi$) sketches for $C \neq 0$, either from plotting the analytical function
or from projecting the sketch on the sphere onto the tangent plane of the appropriately-rotated
North Pole.
The family of parallel straight lines for $C = 0$ is now a family of circles
tangent to a single point.
Within the (unshaded) region where this solution is valid, one therefore obtains circular arcs.
\noindent c) Subsequent sketches of $\mbox{I}_1$ and $\mbox{I}_2$ as functions of $\Phi$ showing some of the
distortions which occur when a $C$-term is switched on in the `cross-over' case of Fig 7a).
All subfigures have $A = 1 = B$ and $\mbox{\sffamily E}$ = 9.
The first picture is for $C = 0.1$.
The second picture is for $C = 0.5$, by which stage $\mbox{I}_2$
encloses the origin.
The third picture is for $C \approx 2.9$, for which $\mbox{I}_1$ and $\mbox{I}_2$ touch,
after which $\mbox{I}_2$ `is surrounded' by $\mbox{I}_1$ (fourth picture).]}
\subsection{First large asymptotics solution for general case}
\noindent{\footnotesize[{\bf Figure 11} a) Sketch of how the first large approximation's family of
tangent circles in the (${\cal R}$, $\Theta$) plane project onto the ($\Theta$, $\Phi$) sphere.
\noindent b) (${\cal R}$, $\Phi$) sketches for C $\neq 0$ either from plotting the analytical function
or from projecting the sketch on the sphere onto the tangent plane of the appropriately-rotated
North Pole.
One still has arcs that touch at a point, but this point is no longer at the origin but is rather
shifted away from it along the $\Phi = 0$ axis.
This does not cause any notable changes to $\mbox{I}_1$ and $\mbox{I}_2$ as functions of $\Phi$,
so I do not provide sketches of these.]}
\subsection{Further examples}
One can go on from here to provide in rotated coordinates general solutions to the second small and
large asymptotics, and of the general numerical behaviour in the region in between.
Here are some brief comments.
\noindent 1) Similarly to how, for $C = 0$, the second approximation has turn-around ellipses rather than
having to follow the first approximation's straight lines, the second small approximation here allows for
turn-around behaviour rather than having to complete the first small approximation's circles: approximate
circular arc, turn-around, approximate circular arc in the opposite direction to the original.
\noindent 2) One can now have asymmetric bulges where one had symmetric ones before (e.g. in the
first large approximation or in the peanut case of the second large approximation); indeed one bulge
can non-generically become infinitely big (like the straight line in the first large approximation) and
even form another contribution `beyond infinity' which shows up on the other side of the opposite bulge.
\section{Scaled triangleland at the classical level}
\subsection{Scale--shape coordinates ($\mbox{I}, {\cal R}, \Phi$) or ($\mbox{I}, \Theta, \Phi$)}
The general scaled triangleland with multi-harmonic oscillator like potential's Jacobi-type action is
\begin{equation}
\mbox{\sffamily S}_{}[\mbox{I}, \Theta, \Phi, \dot{\mbox{I}}, \dot{\Theta}, \dot{\Phi}] =
2\int\textrm{d}\lambda\sqrt{\check{\mbox{\sffamily T}}\{\check{\mbox{\sffamily E}} + \check{\mbox{\sffamily U}}\}} \equiv
2\int\textrm{d}\lambda\sqrt{
\frac{ \dot{\mbox{I}}^2 + \mbox{I}^2\{\dot{\Theta}^2 + \mbox{sin}^2\Theta\dot{\Phi}^2\} }{ 2 }
\left\{
\frac{ \mbox{\sffamily E} + \mbox{\sffamily U}(\mbox{I}, \Theta, \Phi) }{ 4\mbox{I} }
\right\} } \mbox{ } .
\end{equation}
The Euler--Lagrange equations are
\begin{equation}
\mbox{I}^{\check{*} \check{*}} - \mbox{I}\{\Theta^{\check{*}2} + \mbox{sin}^2\Theta\Phi^{\check{*}2}\} + \frac{1}{4\mbox{I}}
\left\{
\frac{\partial\mbox{\sffamily V}}{\partial \mbox{I}} + \frac{\mbox{\sffamily E} + \mbox{\sffamily U}}{\mbox{I}}
\right\} = 0 \mbox{ } \mbox{ } , \mbox{ }
\{\mbox{I}^2\Theta^{\check{*}}\}^{\check{*}} - \mbox{I}^2\Phi^{\check{*}2}\mbox{sin}\Theta\mbox{cos}\Theta +
\frac{1}{4\mbox{I}}\frac{\partial\mbox{\sffamily V}}{\partial\Theta} = 0 \mbox{ } ,
\label{ERPM_THETA_ELE}
\end{equation}
\begin{equation}
\{\mbox{I}^2\mbox{sin}^2\Theta\Phi^{\check{*}}\}^{\check{*}} + \frac{1}{4\mbox{I}}\frac{\partial\mbox{\sffamily V}}{\partial\Phi} = 0 \mbox{ } ,
\label{ERPM_PHI_ELE}
\end{equation}
and there is an accompanying energy integral
\begin{equation}
\{\mbox{I}^{\check{*}2} + \mbox{I}^2\{\Theta^{\check{*}2} + \mbox{sin}^2\Theta\Phi^{\check{*}2}\}\}/{2} +
{\mbox{\sffamily V}}/{4\mbox{I}} = {\mbox{\sffamily E}}/{4\mbox{I}}
\mbox{ } .
\end{equation}
[Above,
\begin{equation}
\check{*} \equiv \sqrt{\frac{\check{\mbox{\sffamily E}} + \check{\mbox{\sffamily U}}}{\check{\mbox{\sffamily T}}}} \mbox{ } \dot{\mbox{}} =
\frac{1}{4\mbox{I}}\sqrt{\frac{\mbox{\sffamily E} + \mbox{\sffamily U}}{\mbox{\sffamily T}}} \mbox{ } \dot{\mbox{}} = \frac{1}{4\mbox{I}} * \mbox{ } .]
\label{checkstardef}
\end{equation}
In the case in which $\mbox{\sffamily V}$ is independent of $\Phi$, (\ref{ERPM_PHI_ELE}) simplifies to the first
integral
\begin{equation}
\mbox{I}^2\mbox{sin}^2\Theta\Phi^{\check{*}} = J \mbox{ } .
\end{equation}
See App C for physical interpretations of the relative angular momentum quantity $J$.
Then the other Euler--Lagrange equations and the energy integral take the forms
\begin{equation}
\mbox{I}^{\check{*}\check{*}} - \mbox{I}\Theta^{\check{*}2} - \frac{ J^{2} }{ \mbox{I}^3\mbox{sin}^2\Theta } + \frac{1}{4\mbox{I}}
\left\{
\frac{\partial\mbox{\sffamily V}}{\partial \mbox{I}} + \frac{\mbox{\sffamily E} - \mbox{\sffamily V}}{\mbox{I}}
\right\} = 0 \mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\{\mbox{I}^2\Theta^{\check{*}}\}^{\check{*}} - \frac{ J^2\mbox{cos}\Theta }{ \mbox{I}^2\mbox{sin}^3\Theta } +
\frac{1}{4\mbox{I}}\frac{\partial\mbox{\sffamily V}}{\partial\Theta} = 0 \mbox{ } ,
\label{ERPM_THETA_ELE2}
\end{equation}
\begin{equation}
\frac{\mbox{I}^{\check{*}2}}{2} + \frac{\mbox{I}^2\Theta^{\check{*}2}}{2} + \frac{ J^2 }{ 2\mbox{I}^2\mbox{sin}^2\Theta }
+ \frac{\mbox{\sffamily V}}{4\mbox{I}} = \frac{\mbox{\sffamily E}}{4\mbox{I}}
\mbox{ } .
\end{equation}
\subsection{Scaled triangleland with general multi-harmonic oscillator potential}
The usual general multi-harmonic oscillator potential maps to
\begin{equation}
\{{1}/{4\mbox{I}}\}\mbox{I}
\left\{
\{K_1 + K_2\}/{4} + \{K_2 - K_1\}/{4}\mbox{cos}\Theta +
\{{L}/{4}\}\mbox{sin}\Theta\mbox{cos}\Phi
\right\}
= A + B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi \mbox{ } .
\end{equation}
\subsection{Sketch of the potential}
The sketch of the potential is as for the corresponding similarity problem.
The sketch of $\check{\mbox{\sffamily V}} - \check{\mbox{\sffamily E}}$ is slightly different from that of $\overline{\mbox{\sffamily V}} -
\overline{\mbox{\sffamily E}}$ for the similarity problem in that one has a further equation in looking for critical points,
\begin{equation}
{\partial{\{\check{\mbox{\sffamily V}} - \check{\mbox{\sffamily E}}\}}}/{\partial \mbox{I}} = {\mbox{\sffamily E}}/{4\mbox{I}^2} \mbox{ } ,
\end{equation}
so one also needs $\mbox{\sffamily E} = 0$ in order to have these.
After that the analysis is as before (including the picking out of a preferred axis except in the
$B = C = 0$ case), except that the Hessian has an extra row and column of zeros.
\subsection{Classical equations of motion}
The Jacobi-type action for this problem is then
\begin{equation}
\mbox{\sffamily S}_{} = 2\int\textrm{d}\lambda\sqrt{
\frac{\dot{\mbox{I}}^2 + \mbox{I}^2\{\dot{\Theta}^2 + \mbox{sin}^2\Theta\dot{\Phi}^2\}}{2}
\left\{
\frac{\mbox{\sffamily E}}{4\mbox{I}} - A - B\mbox{cos}\Theta - C\mbox{sin}\Theta\mbox{cos}\Phi
\right\} }
\mbox{ } .
\end{equation}
The Euler--Lagrange equations are
\begin{equation}
\mbox{I}^{\check{*}\check{*}} - \mbox{I}\{\Theta^{\check{*}2} + \mbox{sin}^2\Theta\Phi^{\check{*}2}\} + {\overline{\mbox{\sffamily E}}}/{\mbox{I}} = 0 \mbox{ } ,
\label{A}
\end{equation}
\begin{equation}
\{\mbox{I}^2\Theta^{\check{*}}\}^{\check{*}} - \mbox{I}^2\Phi^{\check{*}2}\mbox{sin}\Theta\mbox{cos}\Theta +
C\mbox{cos}\Theta\mbox{cos}\Phi - B\mbox{sin}\Theta = 0 \mbox{ } ,
\label{B}
\end{equation}
\noindent
\begin{equation}
\{\mbox{I}^2\mbox{sin}^2\Theta\Phi^{\check{*}}\}^{\check{*}} - C\mbox{sin}\Theta\mbox{sin}\Phi = 0 \mbox{ } .
\label{C}
\end{equation}
The energy integral is
\begin{equation}
\frac{ \mbox{I}^{\check{*}2} + \mbox{I}^2\{\Theta^{\check{*}2} + \mbox{sin}^2\Theta\Phi^{\check{*}2}\} }{ 2 } +
A + B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi = \frac{\overline{\mbox{\sffamily E}}}{\mbox{I}}
\mbox{ } .
\label{D}
\end{equation}
For small ${\cal R}$ or ${\Theta}$ and for large ${\cal R}$ or small $\Xi = \pi - \Theta$, Sec 4's
results for approximations to the potential hold again.
\subsection{Special Case}
In the special case of $C = 0$, (\ref{ERPM_PHI_ELE}) applies and the remaining Euler--Lagrange equations and energy integral
become
\begin{equation}
\mbox{I}^{\check{*}\check{*}} - \mbox{I}\Theta^{\check{*}2} - { J^{2} }/{ \mbox{I}^3\mbox{sin}^2\Theta } +
{\overline{\mbox{\sffamily E}}}/{\mbox{I}} = 0 \mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\{\mbox{I}^2\Theta^{\check{*}}\}^{\check{*}} - { J^2\mbox{cos}\Theta }/{ \mbox{I}^2\mbox{sin}^3\Theta } +
- B\mbox{sin}\Theta = 0 \mbox{ } ,
\label{F}
\end{equation}
\begin{equation}
{\mbox{I}^{\check{*}2}}/{2} + {\mbox{I}^2\Theta^{\check{*}2}}/{2} +
{ J^2 }/{ 2\mbox{I}^2\mbox{sin}^2\Theta } +
A + B\mbox{cos}\Theta = {\overline{\mbox{\sffamily E}}}/{\mbox{I}}
\mbox{ } .
\label{G}
\end{equation}
\subsection{Analogy between very special case and Kepler--Coulomb problem}
The very special Euclidean relational particle mechanics harmonic oscillator banal-conformally maps to the Kepler problem with
\begin{equation}
\mbox{ (radius) } = r \mbox{ } \longleftrightarrow \mbox{ } \mbox{I} \mbox{ (total moment of inertia) } , \mbox{ }
\end{equation}
\begin{equation}
\mbox{ (test mass) } = m \mbox{ } \longleftrightarrow \mbox{ } 1 \mbox{ } , \mbox{ }
\end{equation}
\begin{equation}
\mbox{ (angular momentum) } = \mbox{L} \mbox{ } \longleftrightarrow \mbox{ } J \mbox{ (relative angular momentum -- see App C) } , \mbox{ }
\end{equation}
\begin{equation}
\mbox{ (total energy) } = E \mbox{ } \longleftrightarrow \mbox{ } - A =
-\mbox{ (sum of mass-weighted Jacobi--Hooke coefficients)/16 }
\end{equation}
and
\begin{equation}
\mbox{ (Newton's gravitational constant)(massive mass)(test mass) } = GMm \mbox{ } \longleftrightarrow \mbox{ } \overline{\mbox{\sffamily E}}
\mbox{ (total energy)/4 }
\end{equation}
[or to the 1-electron atom Coulomb problem with this last analogy replaced by
\begin{equation}
\mbox{ (nuclear charge)(test charge of electron)/4$\pi$(permittivity of free space) } =
(Ze)e/4\pi\epsilon_0 \mbox{ } \longleftrightarrow \mbox{ } \overline{\mbox{\sffamily E}} \mbox{ (total energy)/4 ] }
\mbox{ } .
\end{equation}
Also note that the positivity of the Hooke's coefficients translates to the requirement that the
gravitational or atomic energy be negative, i.e. to bound states.
While, the positivity of $\mbox{\sffamily E}$ required for classical consistency corresponds to attractive problems
like the Kepler problem or the atomic problem being picked out, as opposed to repulsive Coulomb problems.
Also, the special case corresponds to the same `background electric field' to which the rotor was subjected in
Sec 4; this field is, moreover, proportional to cos$\Theta$, the analogue of cos$\theta$, lying in the
axial (`$Z$') direction.  This is {\sl not}, however, the well-known mathematics of the axial (`$z$') direction
{\sl Stark effect} for the atom, which involves, rather, $r \mbox{cos}\theta$.
But, nevertheless, the situation in hand is both closely related to the rotor situation in Sec 4 and
to the mathematics of the atom in parabolic coordinates (see e.g. \cite{LLQM, Hecht}).
The general case is then the same situation but with the `electric field' pointing in an arbitrary
direction.
The idea then is to use the obvious analogue of the scheme in Fig 3 to solve Euclidean relational particle mechanics problems in straightforward
relational, relative and absolute terms.
\subsection{Exact solution for the very special case}
Now
\begin{equation}
{ \mbox{I}^{\check{*}2} }/{ 2 } + { J^2 }/{ 2\mbox{I}^2\mbox{sin}^2{\Theta}_0 } + A = {\overline{\mbox{\sffamily E}}}/{\mbox{I}}
\end{equation}
[usually one would set $\Theta_0 = \pi/2$ without loss of generality; however, the present physical
interpretation renders the value of $\Theta_0$ meaningful, as
$\Theta_0 = 2\mbox{arctan}(\iota_1/\iota_2)$].
Thus the solutions are conic sections
\begin{equation}
\mbox{I} = {l}/\{1 + \mbox{$e$ cos}(\Phi - \Phi_0)\}
\end{equation}
where the semi-latus rectum and the eccentricity are given by
\begin{equation}
l = {J^2}/{\overline{\mbox{\sffamily E}}\mbox{sin}^2\Theta_0}
\mbox{ } , \mbox{ }
e = \sqrt{1 - {2AJ^2}/{\overline{\mbox{\sffamily E}}^2\mbox{sin}^2\Theta_0}} \mbox{ } .
\end{equation}
So, in terms of straightforward relational variables,
\begin{equation}
\mbox{I}_1 + \mbox{I}_2 = {l}/\{1 + \mbox{$e$ cos}(\Phi - \Phi_0)\} \mbox{ } ,
\end{equation}
and in terms of the original variables of the problem,
\begin{equation}
\{||\underline{\iota}_1||^2 +
||\underline{\iota}_2||^2\}\{||\underline{\iota}_1||||\underline{\iota}_2|| + e\{
\underline{\iota}_1\cdot \underline{\iota}_2\mbox{cos}\Phi_0 +
\sqrt{||\underline{\iota}_1||^2 ||\underline{\iota}_2||^2 -
\{\underline{\iota}_1\cdot\underline{\iota}_2\}^2 }\mbox{sin}\Phi_0\}\} = l\,||\underline{\iota}_1||\,||\underline{\iota}_2|| \mbox{ } .
\end{equation}
Note that this is {\sl non}-homogeneous (it rearranges to a non-homogeneous eighth order polynomial);
this is OK as solutions of {\sl Euclidean} relational particle mechanics do not have to be scale-invariant.
In moment of inertia--relative angle space, for $2A = \{\overline{\mbox{\sffamily E}}\mbox{sin}\Theta_0/J\}^2$ one has
circles, for $0 < 2A < \{\overline{\mbox{\sffamily E}}\mbox{sin}\Theta_0/J\}^2$ one has ellipses, for $A = 0$ one has
parabolae (corresponding to the case with no springs).
The hyperbolic solutions ($A < 0$) are not physically relevant here because this could only
be attained with negative Hooke's coefficient springs.
The circle's radius is $\mbox{I} = l = \overline{\mbox{\sffamily E}}/2A$ while for the ellipses $\mbox{I}$ is bounded to lie
between $J/\sqrt{2A}\mbox{sin}\Theta_0$ and $\overline{\mbox{\sffamily E}}/2A$.
The smallest $\mbox{I}$ attained in the parabolic case is $J^2/2\overline{\mbox{\sffamily E}}\mbox{sin}^2\Theta_0$.
The period of motion for the circular and elliptic cases is $\pi\overline{\mbox{\sffamily E}}/\sqrt{2A^3}$.
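For concreteness, here is a small sketch (with illustrative parameter values of my own choosing)
collecting this conic-section bookkeeping.
\begin{verbatim}
import numpy as np

def conic_data(A, Ebar, J, Theta0=np.pi/2):
    """Semi-latus rectum, eccentricity, conic type and (for bound orbits) period."""
    s2 = np.sin(Theta0)**2
    l = J**2/(Ebar*s2)
    e = np.sqrt(1.0 - 2.0*A*J**2/(Ebar**2*s2))    # assumes the argument is non-negative
    if A > 0:
        kind = "circle" if np.isclose(e, 0.0) else "ellipse"
        period = np.pi*Ebar/np.sqrt(2.0*A**3)
    elif A == 0:
        kind, period = "parabola (no springs)", None
    else:
        kind, period = "hyperbola (needs negative Hooke's coefficients)", None
    return l, e, kind, period

print(conic_data(A=0.25, Ebar=1.0, J=0.3))
\end{verbatim}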
As regards the individual subsystems, combining the fixed plane equation and the $\mbox{I}(\Phi)$ relation,
\begin{equation}
\mbox{I}_1 = { l\mbox{sin}^2\mbox{$\frac{\Theta}{2}$} }/
\{ 1 + \mbox{$e$ cos}(\Phi - \Phi_0) \}
\mbox{ } \mbox{ } , \mbox{ } \mbox{ }
\mbox{I}_2 = { l\mbox{cos}^2\mbox{$\frac{\Theta}{2}$} }/
\{ 1 + \mbox{$e$ cos}(\Phi - \Phi_0) \}
\end{equation}
so each of these behave individually similarly to the total $\mbox{I}$.
In the $\Theta_0 = \pi/2$ plane, they are both equal (and so equal to \mbox{I}/2).
Circle and ellipse cases have $\mbox{I}_1$ and $\mbox{I}_2$ as closed bounded curves which sit inside the
curve that $\mbox{I}$ traces.
The parabolic case has $\mbox{I}_1$, $\mbox{I}_2$ curves to the `inside' of the parabola that $\mbox{I}$ traces.
\subsection{Special case solved}
Use $\mbox{I}_1, \mbox{I}_2, \Phi$ coordinates given by
\begin{equation}
\mbox{I}_1 \equiv \{\mbox{I} - Z\}/{2} \equiv {\mbox{I}}\{1 - \mbox{cos}\Theta\}/2
\mbox{ } \mbox{ } \mbox{ and } \mbox{ } \mbox{ }
\mbox{I}_2 \equiv \{\mbox{I} + Z\}/{2} \equiv {\mbox{I}}\{1 + \mbox{cos}\Theta\}/2 \mbox{ } ,
\label{Idef}
\end{equation}
which invert to
\begin{equation}
\mbox{I} = \mbox{I}_1 + \mbox{I}_2 \mbox{ } , \mbox{ } \Theta = \mbox{arccos}
\left(
\{\mbox{I}_2 - \mbox{I}_1\}/\{\mbox{I}_1 + \mbox{I}_2\}
\right)
\end{equation}
and are mathematically parabolic coordinates scaled by 1/2, which moreover in the present relational
context have the physical interpretation of partial moments of inertia of the two subsystems.
Then
\begin{equation}
\mbox{\sffamily S}_{} = 2\int \textrm{d} \lambda \sqrt{\overline{\mbox{\sffamily T}}\{\overline{\mbox{\sffamily E}} + \overline{\mbox{\sffamily U}}\}} =
2\int\textrm{d}\lambda\sqrt{\frac{1}{2}
\left\{
\frac{\dot{\mbox{I}}_1^2}{\mbox{I}_1} + \frac{\dot{\mbox{I}}_2^2}{\mbox{I}_2} + \frac{4\mbox{I}_1\mbox{I}_2\dot{\Phi}^2}{\mbox{I}_1 + \mbox{I}_2}
\right\}
\left\{
\frac{\mbox{\sffamily E}}{4} - \frac{K_1\mbox{I}_1 + K_2\mbox{I}_2}{8}
\right\} }
\end{equation}
for $\overline{\mbox{\sffamily T}}$, $\overline{\mbox{\sffamily U}}$, $\overline{\mbox{\sffamily E}}$ as before.
Then the $\Phi$-Euler--Lagrange equation is
\begin{equation}
\frac{4\mbox{I}_1\mbox{I}_2\Phi^{\overline{*}}}{\mbox{I}_1 + \mbox{I}_2} = J \mbox{ } ,
\end{equation}
and the energy integral is, subsequently,
\begin{equation}
\frac{\mbox{I}_1^{\overline{*2}}}{2\mbox{I}_1} + \frac{\mbox{I}_2^{\overline{*2}}}{2\mbox{I}_2} + \frac{J^2}{8}
\left\{
\frac{1}{\mbox{I}_1} + \frac{1}{\mbox{I}_2}
\right\}
+ \frac{K_1\mbox{I}_1 + K_2\mbox{I}_2}{8} = \frac{\mbox{\sffamily E}}{4} \mbox{ } ,
\end{equation}
which separates into
\begin{equation}
4\mbox{I}_i^{\overline{*}2} + J^2 + K_i\mbox{I}_i^2 = 2\mbox{\sffamily E}_i\mbox{I}_i
\end{equation}
for $\mbox{\sffamily E}_1 + \mbox{\sffamily E}_2 = \mbox{\sffamily E}$.
This is solved by
\begin{equation}
\overline{t} - \overline{t}_0 = \{2/\sqrt{K_i}\}\mbox{arccos}
\left(
\{\mbox{I}_iK_i - \mbox{\sffamily E}_i\}/{\sqrt{\mbox{\sffamily E}_i^2 - K_iJ^2}}
\right)
\end{equation}
(in agreement with \cite{TriCl}, once differences in convention are taken into account).
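As a quick consistency check (a sketch of my own; the per-subsystem subscript $i$ is suppressed), the
closed form implied by the above,
$\mbox{I} = \{\mbox{\sffamily E} + \mbox{\sffamily F}\,\mbox{cos}(\sqrt{K}\{\overline{t} - \overline{t}_0\}/2)\}/K$
with $\mbox{\sffamily F} = \sqrt{\mbox{\sffamily E}^2 - KJ^2}$, does satisfy the separated equation.
\begin{verbatim}
import sympy as sp

t, K, E, J = sp.symbols('t K E J', positive=True)
F = sp.sqrt(E**2 - K*J**2)
I = (E + F*sp.cos(sp.sqrt(K)*t/2))/K           # closed form implied by the arccos solution
resid = sp.expand(4*sp.diff(I, t)**2 + J**2 + K*I**2 - 2*E*I)
resid = resid.subs(sp.sin(sp.sqrt(K)*t/2)**2, 1 - sp.cos(sp.sqrt(K)*t/2)**2)
print(sp.expand(resid))                        # -> 0
\end{verbatim}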
Thus, synchronizing, one part of the equation for the orbits is
\begin{equation}
\sqrt{K_2}\mbox{arccos}
\left(
\{\mbox{I}_1K_1 - \mbox{\sffamily E}_1\}/{\sqrt{\mbox{\sffamily E}_1^2 - K_1J^2}}
\right) =
\sqrt{K_1}\mbox{arccos}
\left(
\{\mbox{I}_2K_2 - \mbox{\sffamily E}_2\}/{\sqrt{\mbox{\sffamily E}_2^2 - K_2J^2}}
\right) \mbox{ } .
\end{equation}
[One can see how the arccosines cancel in the very special case...
Then $\mbox{\sffamily E}_1 = \mbox{\sffamily E}_2 = \mbox{\sffamily E}/2$ gives ${\iota_1} = {\iota_2}$, i.e. $\Theta =
2\,\mbox{arctan} \left(\iota_1/\iota_2\right) = 2\,\mbox{arctan}(1) = {\pi}/{2}$,
so the motions are confined to the plane perpendicular to the chosen Z-axis.]
Then the $\Phi$-Euler--Lagrange equation implies
$$
\Phi - \Phi_0 = J\int\textrm{d}\overline{t}
\left\{
\frac{1}{\mbox{I}_1} + \frac{1}{\mbox{I}_2}
\right\} =
\frac{J}{2}\sum_{i = 1}^{2}\sqrt{K_i}\int\frac{\textrm{d}\tau_i}{\mbox{\sffamily F}_i\mbox{cos}\tau_i + \mbox{\sffamily E}_i} =
\sum_{i = 1}^2 \mbox{arctan}
\left(
\sqrt{\frac{\{\mbox{\sffamily E}_i - \mbox{\sffamily F}_i\}\{\mbox{\sffamily F}_i - \mbox{\sffamily A}_i\}}{\{\mbox{\sffamily E}_i + \mbox{\sffamily F}_i\}\{\mbox{\sffamily F}_i + \mbox{\sffamily A}_i\}}}
\right)
$$
(for $\tau_i = 2\sqrt{K_i}\{t - t_0\}$, $\mbox{\sffamily F}_i \equiv \sqrt{\mbox{\sffamily E}_i^2 - K_iJ^2}$ and
$\mbox{\sffamily A}_i = K_i\mbox{I}_i - \mbox{\sffamily E}_i$), which simplifies to
\begin{equation}
\Phi - \Phi_0 = \sum_{i = 1}^{2}\mbox{arctan}
\left(
\sqrt{\left.\left\{\left\{\sqrt{\mbox{\sffamily E}_i^2 - K_iJ^2} - \mbox{\sffamily E}_i\right\}\mbox{I}_i + J^2\right\}\right/
\left\{\left\{\sqrt{\mbox{\sffamily E}_i^2 - K_iJ^2} + \mbox{\sffamily E}_i\right\}\mbox{I}_i^2 - J^2\right\} }
\right)
\end{equation}
in the straightforward relational variables.
While, in the original variables of the problem,
\begin{equation}
\sqrt{K_2}\mbox{arccos}
\left(
\{||\underline{\iota}_1||^2K_1 - \mbox{\sffamily E}_1\}/{\sqrt{\mbox{\sffamily E}_1^2 - K_1J^2}}
\right) =
\sqrt{K_1}\mbox{arccos}
\left(
\{||\underline{\iota}_2||^2K_2 - \mbox{\sffamily E}_2\}/{\sqrt{\mbox{\sffamily E}_2^2 - K_2J^2}}
\right) \mbox{ } ,
\end{equation}
\begin{equation}
\mbox{arccos}
\left(
\frac{\underline{\iota}_1\cdot\underline{\iota}_2}{||\underline{\iota}_1||||\underline{\iota}_2||}
\right) = \Phi_0 + \sum_{i = 1}^{2}\mbox{arctan}
\left(
\sqrt{\left.\left\{\left\{\sqrt{\mbox{\sffamily E}_i^2 - K_iJ^2} - \mbox{\sffamily E}_i\right\}||\underline{\iota}_i||^2 + J^2\right\}\right/
\left\{\left\{\sqrt{\mbox{\sffamily E}_i^2 - K_iJ^2} + \mbox{\sffamily E}_i\right\}||\underline{\iota}_i||^2 - J^2\right\} }
\right) \mbox{ } .
\end{equation}
\subsection{The single harmonic oscillator case requires a separate working}
For $K_1 = K_2 = 0$, the trajectories are given by, after some manipulation,
\begin{equation}
\sqrt{ {\mbox{\sffamily E}_2}/{\mbox{\sffamily E}_1} }\mbox{I}_2 = \mbox{I}_1 = \mbox{sec}
(\mbox{\sffamily E}_1\{\Phi - \Phi_0\}/\mbox{\sffamily E})/{\sqrt{2\mbox{\sffamily E}_1}} \mbox{ } ,
\end{equation}
which is obviously the expected straight-line motion in the absence of forces.
For $K_2 = 0$, $K_1 \neq 0$, the trajectories are given by, in straightforward relational variables,
\begin{equation}
\{\mbox{I}_1K_1 - \mbox{\sffamily E}_1\}/\sqrt{\mbox{\sffamily E}_1^2 - K_1 J^2} =
\mbox{cos}
\left(
\sqrt{{2K_1}/{\mbox{\sffamily E}_2}}\sqrt{2\mbox{\sffamily E}_2\mbox{I}_2 - J^2}
\right)
\label{melenes}
\end{equation}
and
\begin{equation}
\Phi - \Phi_0 = \mbox{arctan}
\left(
\sqrt{ \left.\left\{\left\{\sqrt{\mbox{\sffamily E}_1^2 - K_1J^2} - \mbox{\sffamily E}_1 \right\}\mbox{I}_1 + J^2 \right\}\right/
\left\{\left\{\sqrt{\mbox{\sffamily E}_1^2 - K_1J^2} + \mbox{\sffamily E}_1 \right\}\mbox{I}_1 - J^2 \right\} }
\right)
+ \mbox{arctan}
\left(
\sqrt{ \frac{2\mbox{\sffamily E}_2\mbox{I}_2}{J^2} - 1 }
\right) \mbox{ } .
\label{hiperboliques}
\end{equation}
While, in terms of the original variables of the problem,
\begin{equation}
\{K_1||\underline{\iota}_1||^2 - \mbox{\sffamily E}_1\}/\sqrt{\mbox{\sffamily E}_1^2 - K_1 J^2} =
\mbox{cos}
\left(
\sqrt{{2K_1}/{\mbox{\sffamily E}_2}}\sqrt{2\mbox{\sffamily E}_2||\underline{\iota}_2||^2 - J^2}
\right) \mbox{ } ,
\label{Omelenes}
\end{equation}
$$
\mbox{arccos}
\left(
\frac{\underline{\iota}_1\cdot\underline{\iota}_2}{||\underline{\iota}_1||||\underline{\iota}_2||}
\right) = \Phi_0 + \mbox{arctan}
\left(
\sqrt{ \frac{2\mbox{\sffamily E}_2||\underline{\iota}_2||^2}{J^2} - 1 }
\right)
$$
\begin{equation}
+ \mbox{arctan}
\left(
\sqrt{ \left.\left\{\left\{\sqrt{\mbox{\sffamily E}_1^2 - K_1J^2} - \mbox{\sffamily E}_1 \right\}||\underline{\iota}_1||^2 + J^2 \right\}\right/
\left\{\left\{\sqrt{\mbox{\sffamily E}_1^2 - K_1J^2} + \mbox{\sffamily E}_1 \right\}||\underline{\iota}_1||^2 - J^2 \right\} }
\right)
\mbox{ } .
\label{Ohiperboliques}
\end{equation}
\subsection{A brief interpretation of the previous two subsections' examples}
In SSec 6.9's example, $\iota_2$ (or $\mbox{I}_2$) makes a good time-standard as the absolute space intuition of it
`moving in a straight line' survives well enough to confer monotonicity.
It is convenient then to rewrite (\ref{melenes}, \ref{hiperboliques}) as a curve in parametric form
with $\mbox{I}_2$ playing the role of parameter, leading to the plots in Fig 12.
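Here is a minimal sketch of my own of that parametric rewriting, using the parameter values quoted in
Fig 12's caption; only principal-branch values of the arctangents are taken, and small clippings guard
against rounding at the turning points.
\begin{verbatim}
import numpy as np

K1, E1, E2, J, Phi0 = 1.0, 1.0, 1.0, 0.1, 0.0      # values from Fig 12's caption
F1 = np.sqrt(E1**2 - K1*J**2)

I2 = np.linspace(J**2/(2*E2) + 1e-6, 30.0, 4000)   # monotonic `clock' parameter
I1 = (E1 + F1*np.cos(np.sqrt(2*K1/E2)*np.sqrt(2*E2*I2 - J**2)))/K1

num = np.clip((F1 - E1)*I1 + J**2, 0.0, None)
den = np.clip((F1 + E1)*I1 - J**2, 1e-15, None)
Phi = Phi0 + np.arctan(np.sqrt(num/den)) + np.arctan(np.sqrt(2*E2*I2/J**2 - 1.0))

print(I1.min(), I1.max())   # I_1 oscillates boundedly between (E1 - F1)/K1 and (E1 + F1)/K1
print(Phi.min(), Phi.max()) # one branch of the relative-angle variation sketched in Fig 12
\end{verbatim}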
\mbox{ }
\noindent {\footnotesize [{\bf Figure 12} a) 3-d plot showing oscillatory behaviour including some
changes in the size of the relative angle that occur at regular intervals but can involve `sporadic'
changes in how much the relative angle changes in each interval. (The particular plot given is for the 1
harmonic oscillator case, with $K_1 = \mbox{\sffamily E}_1 = \mbox{\sffamily E}_2 = 1$ and $J = 0.1$, with $\Phi$ plotted vertically, $\mbox{I}_2$ out of
the page and $\mbox{I}_1$ into the page.)
This is because although particle 1 is `moving in a straight line' in absolute space, the position from
which its separation is measured in relational space (the centre of mass of particles 2,
3) is itself also moving around due to the oscillations of the `spring' between these particles.
\noindent b) Polar plot of $\mbox{I}_1$ and $\mbox{I}_2$ as functions of $\Phi$ for the first oscillation.
Further oscillations correspond to similar angular variations at larger radius.
N.B. that $\mbox{I}_1$ and $\mbox{I}_2$ are independent in Euclidean relational particle mechanics, as opposed to
summing to a constant I in similarity relational particle mechanics.
In the solution exhibited, the particles expand away from triple collision while relative angle varying
oscillations occur, involving almost-isosceles to almost-collinear changes in shape.]}
\mbox{ }
In SSec 6.8's example, $\iota_1$ and $\iota_2$ oscillate boundedly, so neither of these
(nor $\mbox{I}_1$ or $\mbox{I}_2$) is a good clock parameter from the point of view of monotonicity.
There is again some scope for variation in relative angle $\Phi$, including `sporadic' amplitude
variations.
\subsection{Normal coordinates for scaled triangleland with multi-harmonic oscillator potential}
The working of Sec 5 holds again (using now $x$, $x_{\sN}$ in place of $n$ and $n_{\sN}$; radii are
unaffected by rotations and so cancel out), giving the same rotation and the same $\Theta$, $\Phi$ to
$\Theta_{\sN}$, $\Phi_{\sN}$ coordinate change as before.
For Euclidean relational particle mechanics, one can uplift from the preceding parts of Sec 6 by inserting
the above extra steps into the triangleland case of Sec 3.1.
The potential is now
\begin{equation}
\{K_1^{\sN}\rho_{1_{\mbox{\tiny N}}}^2 + K_2^{\sN}\rho_{2_{\mbox{\tiny N}}}^2 \}/{8\mbox{I}} =
A_{\sN} + B_{\sN}\mbox{cos}\Theta_{\sN}
\mbox{ } .
\end{equation}
Then the Jacobi-type action for scaled triangleland with general multi-harmonic oscillator potential in shape--scale
variables is
\begin{equation}
\mbox{\sffamily S}_{} = 2\int\textrm{d}\lambda \sqrt{
\frac{ \dot{\mbox{I}}^2 + \mbox{I}^2\{\dot{\Theta}_{\sN}^2 + \mbox{sin}^2\Theta_{\sN}\dot{\Phi}_{\sN}^2\} }{ 2 }
\left\{
\frac{\overline{\mbox{\sffamily E}}}{\mbox{I}} - A_{\sN} - B_{\sN}\mbox{cos}\Theta_{\sN}
\right\} }
\mbox{ } .
\end{equation}
The Euler--Lagrange equations and energy integral that follow from this are, after discovering the
conserved quantity $J$ and eliminating it, respectively (\ref{F}, \ref{G}) with N-subscripts appended.
The momenta are (\ref{b}) with N's appended to the last two, while the Hamiltonian and the energy
constraint are (\ref{c}) treated likewise.
I have then rotated the above two exact solutions for the special case to obtain solutions to the
general case, but these are too lengthy to include in this paper.
\section{Conclusion}
\subsection{Results Summary}
Relational particle models are useful as regards the long-standing absolute versus relative motion debate, and
also, due to structural similarities with the geometrodynamical formulation of General Relativity,
for the Problem of Time in Quantum Gravity.
1- and 2-d relational particle mechanics are tractable due to the simple nature of their configuration space geometries:
these are, respectively, $\mathbb{S}^k$ and $\mathbb{CP}^k$ for 1- and 2-d similarity relational particle mechanics.
Additionally, for the 3-particle case of the latter (`scalefree triangleland'),
$\mathbb{CP}^1 = \mathbb{S}^2$ holds, making this case even more tractable.
This and its Euclidean relational particle mechanics counterpart (`scaled triangleland') furnish this
paper's particular examples.
I consider models with general multiple harmonic oscillator type potentials which, as compared with
the earlier study \cite{TriCl}, include the new feature of relative angular momentum exchange
between the two constituent subsystems.
I get there by first considering the `special' subcase (which has no relative angle $\Phi$ dependence in
its potential and involves {\sl no} relative angular momentum exchange) and its `very special' sub-subcase
(which has constant potential in one presentation).
I then identify scalefree triangleland's very special case's mathematics with that of the linear rigid rotor; the
special case is then analogous to that with a background electric field aligned with its axis.
The Euclidean relational particle mechanics special and very special cases' mathematics reduces to some of the mathematics that arises
in the Kepler--Coulomb problem.
Finally, I use a rotation or normal coordinates construction to cast the general ($\Phi$-dependent,
relative angular momentum exchanging) case as the special case in the transformed coordinates.
This has the mathematics corresponding to the analogue background electric field being unaligned with
the axis.
In each case I use the standard spherical or planar mathematics that I have been able to cast the
problem into so as to obtain solutions, and then map back to provide physical interpretation in terms
of various mechanically-significant quantities that are more intuitively associated with the original
relational problems: (barycentric partial moment of inertia, $\Phi$) variables, mass-weighted Jacobi
inter-particle (cluster) coordinates, and by sketches, what the particles themselves are doing.
In returning to these various levels, the standard spherical, planar and flat space mathematics becomes
unusual and nontrivial.
Highlights of my results are as follows.
\noindent 1) The very special multiple harmonic oscillator like potential scalefree triangleland problem is
straightforwardly soluble on the sphere and retains a manageable form in terms of the underlying
mechanical variables.
\noindent 2) While the special multiple harmonic oscillator like potential scalefree triangleland problem is also
classically exactly soluble on the sphere, its form is very complicated, so I just provide more
manageable large and small asymptotic solutions.
\noindent 3) In the stereographic plane, the small asymptotics solution has the mathematics of the 2-d
isotropic harmonic oscillator [($k$, constant) $\times$ (radius)$^2$ potential, including the
upside-down ($k < 0$) and degenerate ($k = 0$) cases], whereby I obtain complete control of it and then
characterize it in terms of the underlying mechanical variables.
\noindent 4) The large asymptotics solution maps to the small asymptotics solution again, under
inversion of the radius, so I also obtain complete control of it and can characterize it in terms
of the underlying mechanical variables.
Moreover, this is important beyond the case with harmonic oscillator like potentials, as scalefree triangleland
exhibits {\sl universal} large-scale behaviour, various cases of which I can now understand through
their being related by the inversion to the various cases of conic sections that occur for
$k \mbox{ } \times $(radius)$^2$ potentials.
\noindent 5) The very special and special multiple harmonic oscillator Euclidean relational particle mechanics problems are also
classically exactly soluble in the conformally-related flat space in which they respectively take the
forms of the Kepler--Coulomb problem and a nonstandard composition of parabolic coordinate subproblems
thereof.
\noindent 6) Then by a rotation or normal coordinates construction, the general relative-angle dependent
similarity and Euclidean relational particle mechanics multiple HO (type) potential problems are exactly
soluble because they map to their special cases in the new coordinates.
However, these solutions are very complicated in terms of the underlying mechanical variables, and as
such I mostly only provide sketches of some aspects of their behaviour.
\subsection{Further tractable cases: 4-stop and N-stop metrolands}
Because for scalefree 4-stop metroland (relational particle mechanics of 4 particles in 1-d) the $R_j$
are different and related to $\Theta$, $\Phi$ in a different way, physically interesting potentials in
this case generally map to {\sl different} functions of these coordinates for the 4-stop metroland
interpretation and for the triangleland interpretation.
Thus one needs new calculations
for 4-stop metroland rather than straightforward deduction from this paper's triangleland results.
E.g. the harmonic oscillator type potential maps to
\begin{equation}
\mbox{\sffamily V} = \frac{{K_3}\mbox{cos}^2\Theta + {K_1}\mbox{sin}^2\Theta\mbox{cos}^2\Phi +
{K_2}\mbox{sin}^2\Theta\mbox{sin}^2\Phi}{2} =
\bar{\mbox{\tt A}} + \bar{\mbox{\tt B}}\mbox{cos}(2\Theta) + \bar{\mbox{\tt C}}\mbox{sin}^2\Theta\mbox{cos}(2\Phi)
\end{equation}
for $\bar{\mbox{\tt A}} = \{K_1 + K_2 + 2K_3\}/8$, $\bar{\mbox{\tt B}} = \{- K_1 - K_2 + 2K_3\}/8$,
$\bar{\mbox{\tt C}} = \{K_1 - K_2\}/4$.
Thus this model admits direct analogues of this paper's `special' ($\bar{\mbox{\tt C}} = 0$) and `very special'
($\bar{\mbox{\tt B}} = \bar{\mbox{\tt C}} = 0$) subcases.
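This mapping is easily verified symbolically; a minimal sketch:
\begin{verbatim}
import sympy as sp

K1, K2, K3, Th, Ph = sp.symbols('K1 K2 K3 Theta Phi')
V = (K3*sp.cos(Th)**2 + K1*sp.sin(Th)**2*sp.cos(Ph)**2 + K2*sp.sin(Th)**2*sp.sin(Ph)**2)/2
Abar, Bbar, Cbar = (K1 + K2 + 2*K3)/8, (-K1 - K2 + 2*K3)/8, (K1 - K2)/4
resid = sp.expand_trig(V - (Abar + Bbar*sp.cos(2*Th) + Cbar*sp.sin(Th)**2*sp.cos(2*Ph)))
resid = resid.subs({sp.sin(Th)**2: 1 - sp.cos(Th)**2, sp.sin(Ph)**2: 1 - sp.cos(Ph)**2})
print(sp.expand(resid))   # -> 0
\end{verbatim}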
While, generalizing to scalefree N-stop metroland, the multiple harmonic oscillator type potential maps to
\begin{equation}
\mbox{\sffamily V} = \frac{1}{2}\frac{ \sum_{\bar{p} = 1}^{\sn - 1}K_{\bar{p}}{\cal R}_{\bar{p}}^2 + K_{\sn} }
{ \sum_{\bar{p} = 1}^{\sn - 1}{\cal R}_{\bar{p}}^2 + 1 } =
\frac{1}{2}\sum_{p = 1}^{\sn}K_{p}n_{p}^2
\end{equation}
for $n_{p}$ the unit vector in the Euclidean configuration space $\mathbb{R}^{n}$.
Scaled N-stop metroland is also of interest \cite{Cones}.
\subsection{Further work and Applications}
This paper's models remain to be studied from the perspective of dynamical systems \cite{Dyn}.
Its principal application at the moment is that many aspects of this work carry over to QM in paper II
(similarity case) and \cite{08III} (Euclidean case), and on towards the study of many Problem of Time
strategies (in particular emergent semiclassical time and records theory \cite{EOT, Records, New} but
also conceivably internal time approaches and histories theory, as well as investigation of various
Quantum Gravity and Quantum Cosmology applications such as the problem of observables, operator
ordering, closed-universe effects, finite-universe effects and the study of small inhomogeneities/clumps).
Some of these applications would benefit from study of further models: with other potentials (for which
universal large asymptotics results in this paper will be useful), the above 4-stop and N-stop
metrolands of comparable tractability to this paper, as well as somewhat harder relational particle
mechanics in 2-d of N $>$ 3 particles (requiring $\mathbb{CP}^{\sN - 2}$ geometry based methods) and
3-d models that are much harder \cite{Kendall} even for modest values of N.
\mbox{ }
\noindent{\bf Acknowledgments}
\mbox{ }
\noindent I thank Dr Julian Barbour, Dr Brendan Foster and Miss Anne Franzen for discussions,
the two Anonymous Referees for comments, Professor Gary Gibbons for references,
the organizers of ``Space and Time 100 Years after Minkowski" Conference at Bad Honnef, Germany,
the Perimeter Institute and Queen Mary's Relativity Group for invitations to speak and hospitality
and Dr Julian Barbour also for hospitality.
I also thank Peterhouse for funding this work in 2006--08;
Professors Malcolm MacCallum, Gary Gibbons, Don Page, Reza Tavakol and Jonathan Halliwell,
and Dr's Julian Barbour and Fay Dowker, for support in the furthering of my career;
and my Wife for help with assembling the figures, as well as my Wife, Alicia, Amelia, Beth, Emma, Emilie,
Emily, Joshua, Luke, Simeon and Will for keeping my spirits up.
\mbox{ }
\noindent{\bf\large Appendix A: Mechanics on an in general curved configuration space}
\mbox{ }
\noindent For a general finite theory of the quantities ${\cal Q}_{\mbox{\sffamily{\scriptsize A}}}$ with a curved configuration
space, consider the Jacobi action
\begin{equation}
\mbox{\sffamily S} = 2\int\textrm{d}\lambda\sqrt{\mbox{\sffamily T}\{\mbox{\sffamily U} + \mbox{\sffamily E}\}}
\end{equation}
where $\mbox{\sffamily U}({\cal Q}_{\mbox{\sffamily{\scriptsize A}}})$ is minus the potential energy $\mbox{\sffamily V}({\cal Q}_{\mbox{\sffamily{\scriptsize A}}})$, $\mbox{\sffamily E}$ is the total
energy and $\mbox{\sffamily T}$ is the kinetic energy,
\begin{equation}
\mbox{\sffamily T} = {\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}\dot{{\cal Q}}^{\mbox{\sffamily{\scriptsize A}}}\dot{{\cal Q}}^{\mbox{\sffamily{\scriptsize B}}}/2 \mbox{ } ,
\label{Te}
\end{equation}
for ${\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}$ the curved configuration space metric.
This action works as follows.
For * $\equiv \sqrt{\{\mbox{\sffamily U} + \mbox{\sffamily E}\}/{\mbox{\sffamily T}}} \mbox{ } \dot{\mbox{}}$, the Euler--Lagrange equations are, in geometrical form,
\begin{equation}
{\cal Q}^{\mbox{\sffamily{\scriptsize A}}**} + \Gamma^{\mbox{\sffamily{\scriptsize A}}}\mbox{}_{\mbox{\sffamily{\scriptsize B}}\mbox{\sffamily{\scriptsize C}}}{\cal Q}^{\mbox{\sffamily{\scriptsize B}}*}{\cal Q}^{\mbox{\sffamily{\scriptsize C}}*} =
- \frac{\partial\mbox{\sffamily V}}{\partial {\cal Q}_{\mbox{\sffamily{\scriptsize A}}}}
\label{123}
\end{equation}
and there is an energy first integral
\begin{equation}
{\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}{\cal Q}^{\mbox{\sffamily{\scriptsize A}} *}{\cal Q}^{\mbox{\sffamily{\scriptsize B}} *}/2 + \mbox{\sffamily V}({\cal Q}^{\mbox{\sffamily{\scriptsize C}}}) = \mbox{\sffamily E} \mbox{ } .
\end{equation}
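This first integral follows directly from the definition of $*$: since ${\cal Q}^{\mbox{\sffamily{\scriptsize A}}*} = \sqrt{\{\mbox{\sffamily U} + \mbox{\sffamily E}\}/\mbox{\sffamily T}}\,\dot{{\cal Q}}^{\mbox{\sffamily{\scriptsize A}}}$, one has ${\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}{\cal Q}^{\mbox{\sffamily{\scriptsize A}} *}{\cal Q}^{\mbox{\sffamily{\scriptsize B}} *}/2 = \{\{\mbox{\sffamily U} + \mbox{\sffamily E}\}/\mbox{\sffamily T}\}\mbox{\sffamily T} = \mbox{\sffamily U} + \mbox{\sffamily E} = \mbox{\sffamily E} - \mbox{\sffamily V}$.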
\mbox{ }
\noindent{\bf\large Appendix B: Momenta, Hamiltonians and energy constraints}
\mbox{ }
\noindent These are important as regards the passage to Quantum Theory in paper II and \cite{08III}.
\mbox{ }
\noindent{\bf B.1 General curved configuration space mechanics}
\mbox{ }
\noindent The conjugate momenta are
\begin{equation}
{\cal P}_{\mbox{\sffamily{\scriptsize A}}} = {\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}{\cal Q}^{\mbox{\sffamily{\scriptsize B}} *} \mbox{ } ,
\end{equation}
and there is then as a primary constraint the quadratic energy constraint
\begin{equation}
\mbox{\sffamily H} \equiv {\cal N}^{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}{\cal P}_{\mbox{\sffamily{\scriptsize A}}}{\cal P}_{\mbox{\sffamily{\scriptsize B}}}/2 + \mbox{\sffamily V} = \mbox{\sffamily E} \mbox{ }
\end{equation}
for ${\cal N}^{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}$ the inverse of ${\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}$ and $\mbox{\sffamily H}$ the Hamiltonian for the system.
This is propagated by (\ref{123}).
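Explicitly, contracting (\ref{123}) with ${\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}{\cal Q}^{\mbox{\sffamily{\scriptsize B}}*}$ and using the compatibility of the Christoffel symbols with ${\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}$ gives $\{{\cal M}_{\mbox{\sffamily{\scriptsize A}}\mbox{\sffamily{\scriptsize B}}}{\cal Q}^{\mbox{\sffamily{\scriptsize A}}*}{\cal Q}^{\mbox{\sffamily{\scriptsize B}}*}/2 + \mbox{\sffamily V}\}^{*} = 0$, i.e. $\mbox{\sffamily H}^{*} = 0$.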
Such energy constraints, which have quadratic but not linear dependence on the momenta, are analogous to
the GR Hamiltonian constraint and carry, associated with their form, the frozen formalism aspect of the
Problem of Time \cite{K92, I93}.
\mbox{ }
\mbox{ }
\mbox{ }
\noindent{\bf B.2 Scaled triangleland in ($\mbox{\boldmath$\iota$}_{\mbox{\scriptsize\bf 1}}$,
$\mbox{\boldmath$\iota$}_{\mbox{\scriptsize\bf 2}}$,
$\mbox{\boldmath$\Phi$}$) coordinates}
\mbox{ }
\noindent The conjugate momenta are
\begin{equation}
P_{\rho}^{i} = \iota_i^* \mbox{ } , \mbox{ }
P_{\Phi} = {\iota_1^2\iota_2^2}\Phi^*/\{\iota_1^2 + \iota_2^2\} \mbox{ } .
\end{equation}
The classical Hamiltonian and energy constraint are then
\begin{equation}
\mbox{\sffamily H} = \frac{1}{2}\sum_{i = 1}^{2}
\left\{
\{P_{\rho}^i\}^2 + \frac{P_{\Phi}^2}{\iota_i^2}
\right\}
+ \mbox{\sffamily V} = \mbox{\sffamily E}
\mbox{ } .
\end{equation}
In the $\Phi$-independent case, $P_{\Phi} = J$, constant, so one has furthermore
\begin{equation}
\mbox{\sffamily H} = \frac{1}{2}\sum_{i = 1}^{2}
\left\{
\{P_{\rho}^i\}^2 + \frac{J^2}{\iota_i^2}
\right\}
+ \mbox{\sffamily V} = \mbox{\sffamily E}
\mbox{ } .
\end{equation}
\mbox{ }
\noindent{\bf B.3 Scaled triangleland in ($\mbox{I}_{\mbox{\scriptsize\bf 1}}$,
$\mbox{I}_{\mbox{\scriptsize\bf 2}}$,
$\mbox{\boldmath$\Phi$}$) coordinates}
\mbox{ }
\noindent These are useful in the context of a $\Phi$-independent potential energy in the special and
very special cases, for which the conjugate momenta are
\begin{equation}
P_i = {\mbox{I}_i^{\overline{*}}}/{\mbox{I}_i} \mbox{ } , \mbox{ }
P_{\Phi} = {4\mbox{I}_1\mbox{I}_2\Phi^{\overline{*}}}/{\mbox{I}} = J \mbox{ } , \mbox{ constant } .
\end{equation}
The Hamiltonian and the energy constraint are then
\begin{equation}
\overline{\mbox{\sffamily H}} \equiv \frac{\mbox{I}_1P_1^2}{2} + \frac{\mbox{I}_2P_2^2}{2} +
\frac{J^2}{8}\left\{\frac{1}{\mbox{I}_1} + \frac{1}{\mbox{I}_2}\right\} + \frac{K_1\mbox{I}_1 + K_2\mbox{I}_2}{8} =
\frac{\mbox{\sffamily E}}{4} \mbox{ } .
\end{equation}
\mbox{ }
\noindent{\bf B.4 N-particle d-dimensional preshape space theory and scalefree N-stop metroland}
\mbox{ }
\noindent The conjugate momenta are
\begin{equation}
P_{\bar{q}} =
\left\{
\prod_{\bar{p} = 1}^{\bar{q} - 1}\mbox{sin}^2\Theta_{\bar{p}}
\right\}
{\Theta}_{\bar{q}}^* \mbox{ } .
\end{equation}
The Hamiltonian and the energy constraint are then
\begin{equation}
\mbox{\sffamily H} \equiv \frac{1}{2}\sum_{\bar{q} = 1}^{\sn - 1}
\frac{P_{\bar{q}}^2}{\prod_{\bar{p} = 1}^{\bar{q} - 1}\mbox{sin}^2\Theta_{\bar{p}}} + \mbox{\sffamily V}(\Theta_{\bar{p}})
= \mbox{\sffamily E} \mbox{ } .
\end{equation}
\mbox{ }
\noindent{\bf B.5 Scalefree N-a-gonland and the exceptional case of triangleland}
\mbox{ }
\noindent The conjugate momenta are
\begin{equation}
{\cal P}_{{\cal R}_{\bar{p}}} =
\left\{
\frac{\delta_{\bar{p}\bar{q}}}{1 + ||{\cal R}||^2} -
\frac{{\cal R}_{\bar{p}}{\cal R}_{\bar{q}}}{\{1 + ||{\cal R}||^2\}^2}
\right\}
{\cal R}_{\bar{q}}^{\widetilde{*}}
\mbox{ } \mbox{ } , \mbox{ } \mbox{ }
{\cal P}_{{\Theta}_{\widetilde{\mbox{\scriptsize p}}}} =
\left\{
\frac{\delta_{\widetilde{\mbox{\scriptsize p}}\widetilde{\mbox{\scriptsize q}}}}{1 + ||{\cal R}||^2} -
\frac{{\cal R}_{\widetilde{\mbox{\scriptsize p}}}{\cal R}_{\widetilde{\mbox{\scriptsize q}}}}{\{1 + ||{\cal R}||^2\}^2}
\right\}
{\cal R}_{\widetilde{\mbox{\scriptsize p}}}{\cal R}_{\widetilde{\mbox{\scriptsize q}}}\Theta_{\widetilde{\mbox{\scriptsize q}}}^{\widetilde{*}} \mbox{ } .
\end{equation}
The Hamiltonian and the energy constraint are then
\begin{equation}
\mbox{\sffamily H} \equiv \frac{1}{2\{1 + ||{\cal R}||^2\}}
\left\{
\{\delta^{\bar{p}\bar{q}} + {\cal R}^{\bar{p}}{\cal R}^{\bar{q}}\}
{\cal P}_{{\cal R}_{\bar{p}}}{\cal P}_{{\cal R}_{\bar{q}}} +
\left\{
\frac{\delta^{\widetilde{\mbox{\scriptsize p}}\widetilde{\mbox{\scriptsize q}}}}{{\cal R}_{\bar{p}}^2} + 1^{\widetilde{\mbox{\scriptsize p}}\widetilde{\mbox{\scriptsize q}}}
\right\}
{\cal P}_{\Theta_{\widetilde{\mbox{\scriptsize p}}}}{\cal P}_{\Theta_{\widetilde{\mbox{\scriptsize q}}}}
\right\}
+ \mbox{\sffamily V}({\cal R}_{\bar{p}}, \Theta_{\widetilde{\mbox{\scriptsize p}}}) = \mbox{\sffamily E}
\mbox{ } .
\end{equation}
\noindent For the specific example of scalefree triangleland with harmonic oscillator like potentials in
stereographic coordinates, the conjugate momenta are
\begin{equation}
p_{\cal R} = {\cal R}^{\widetilde{*}}
\mbox{ } \mbox{ } , \mbox{ } \mbox{ }
p_{\Phi} = {\cal R}^2\Phi^{\widetilde{*}}
\end{equation}
and the Hamiltonian and the energy constraint are given by
\begin{equation}
\overline{\mbox{\sffamily H}} \equiv \frac{1}{2}
\left\{
p_{\cal R}^2 + \frac{p_{\Phi}^2}{{\cal R}^2}
\right\}
+ \frac{K_1{\cal R}^2 + L{\cal R}\mbox{cos}\Phi + K_2}{2\{1 + {\cal R}^2\}^3} =
\frac{\mbox{\sffamily E}}{\{1 + {\cal R}^2\}^2} \mbox{ } .
\end{equation}
While, in spherical coordinates, one has
\begin{equation}
p_{\Theta} = \Theta^{\overline{*}} \mbox{ } \mbox{ } , \mbox{ } \mbox{ }
p_{\Phi} = \mbox{sin}^2\Theta \Phi^{\overline{*}}
\label{b}
\end{equation}
and
\begin{equation}
\widetilde{\mbox{\sffamily H}} \equiv \frac{1}{2}
\left\{
p_{\Theta}^2 + \frac{p_{\Phi}^2}{\mbox{sin}^2\Theta}
\right\}
+ A + B\mbox{cos}\Theta + C\mbox{sin}\Theta\mbox{cos}\Phi \mbox{ } = \overline{\mbox{\sffamily E}} \mbox{ } .
\label{c}
\end{equation}
\mbox{ }
\noindent{\bf B.6 Scale-shape formulation of scaled triangleland}
\mbox{ }
\noindent The conjugate momenta are
\noindent
\begin{equation}
p_{\mbox{\scriptsize I}} = \mbox{I}^{\check{*}} \mbox{ } , \mbox{ }
p_{\Theta} = \mbox{I}^2\Theta^{\check{*}} \mbox{ } , \mbox{ }
p_{\Phi} = \mbox{I}^2\mbox{sin}^2\Theta\Phi^{\check{*}} \mbox{ } .
\end{equation}
The Hamiltonian and the quadratic energy constraint are then
\begin{equation}
\check{\mbox{\sffamily H}} = \frac{p_{\mbox{\scriptsize I}}\mbox{}^2}{2} + \frac{p_{\Theta}\mbox{}^2}{2\mbox{I}^2} +
\frac{p_{\Phi}\mbox{}^2}{2\mbox{I}^2\mbox{sin}^2\Theta} + \frac{\mbox{\sffamily V}(\mbox{I}, \Theta, \Phi)}{4\mbox{I}} =
\check{\mbox{\sffamily E}} \mbox{ } .
\end{equation}
In the special case, $p_{\Phi} = {\cal J}$.
The Hamiltonian and the energy constraint are then
\begin{equation}
\check{\mbox{\sffamily H}} = \frac{p_{\mbox{\scriptsize I}}\mbox{}^2}{2} + \frac{p_{\Theta}\mbox{}^2}{2\mbox{I}^2} +
\frac{{\cal J}\mbox{}^2}{2\mbox{I}^2\mbox{sin}^2\Theta} + \frac{\mbox{\sffamily V}(\mbox{I}, \Theta)}{4\mbox{I}} = \check{\mbox{\sffamily E}}
\mbox{ } .
\end{equation}
In particular, for scaled triangleland with multi-harmonic oscillator potential, this is
\begin{equation}
\check{\mbox{\sffamily H}} = \frac{p_{\mbox{\scriptsize I}}\mbox{}^2}{2} + \frac{p_{\Theta}\mbox{}^2}{2\mbox{I}^2} +
\frac{p_{\Phi}\mbox{}^2}{2\mbox{I}^2\mbox{sin}^2\Theta} + A + B\mbox{cos}\Theta +
C\mbox{sin}\Theta\mbox{cos}\Phi = \check{\mbox{\sffamily E}} = \frac{\mbox{\sffamily E}}{4\mbox{I}} \mbox{ } ,
\label{this}
\end{equation}
which is close to but not exactly the same as the classical Hamiltonian for an atom in a background
homogeneous electric field pointing in an arbitrary direction.
And in normal coordinates, the Hamiltonian and the energy constraint are
\begin{equation}
\check{\mbox{\sffamily H}} = \frac{p_{\mbox{\scriptsize I}}^2}{2} + \frac{p_{\Theta_{\mbox{\tiny N}}}^2}{2\mbox{I}^2} +
\frac{p_{\Phi_{\mbox{\tiny N}}}^2}{2\mbox{I}^2\mbox{sin}^2\Theta_{\mbox{\tiny N}}} + A_{\sN} + B_{\sN} \mbox{cos}\Theta_{\mbox{\tiny N}}
= \frac{\mbox{\sffamily E}}{4\mbox{I}} \mbox{ } ,
\end{equation}
which is close to but not exactly the same as the classical Hamiltonian for an atom in a
background homogeneous electric field pointing in the axial `$z$' direction.
\mbox{ }
\noindent{\bf \large Appendix C: Physical interpretation of this paper's relative angular momentum quantities}
\mbox{ }
\noindent The relative angular momentum quantity in scaled triangleland in simple relational variables is
\begin{equation}
J = {\mbox{I}_1\mbox{I}_2}\Phi^*/{\mbox{I}} \mbox{ } .
\end{equation}
This is equivalent to the scale--shape spherical polar form
\begin{equation}
J = \mbox{I}^2\mbox{sin}^2\Theta \Phi^{\check{*}}
\label{baralda}
\end{equation}
by (\ref{Idef}) and (\ref{checkstardef}).
It has the following interpretation.
\begin{equation}
J\mbox{I} = \mbox{I}_1\mbox{I}_2\Phi^* = \mbox{I}_1\mbox{I}_2\{\theta_2^* - \theta_1^*\} = \mbox{I}_1\mbox{L}_2 - \mbox{I}_2\mbox{L}_1 =
\{\mbox{I}_1 + \mbox{I}_2\}\mbox{L}_2 = - \{\mbox{I}_1 + \mbox{I}_2\}\mbox{L}_1
\end{equation}
where the fourth equality uses the zero angular momentum constraint, and so, as $\mbox{I}_1 + \mbox{I}_2 = \mbox{I}$,
\begin{equation}
J = \mbox{L}_2 = - \mbox{L}_1 = \{\mbox{L}_2 - \mbox{L}_1\}/2 \mbox{ } .
\label{walda}
\end{equation}
So it is interpretable as the angular momentum of one of the two constituent subsystems, as minus the angular momentum of the other, or as
half of the difference between the two subsystems' angular momenta; each of these is a relative angular momentum presentation.
Using spherical coordinates, scalefree triangleland's relative angular momentum quantity is
\begin{equation}
{\cal J} = \mbox{sin}^2\Theta \Phi^{\overline{*}} = \frac{ \mbox{I}^2\mbox{sin}^2\Theta }{ \mbox{I} }
\frac{1}{4\mbox{I}}\Phi^* =
\frac{ \mbox{I}^2\mbox{sin}^2\Theta\Phi^{\check{*}} }{\mbox{I}} = \frac{J}{\mbox{I}}
\end{equation}
by (\ref{overlinestardef}), (\ref{checkstardef}) and (\ref{baralda}).
Thus, by (\ref{walda}),
\begin{equation}
{\cal J} = J/\mbox{I} = \mbox{L}_2/\mbox{I} = - \mbox{L}_1/\mbox{I} = \{\mbox{L}_2 - \mbox{L}_1\}/2\mbox{I} \mbox{ } ,
\end{equation}
but $\mbox{I}$ is constant in similarity relational particle mechanics, so, up to a constant
of proportion, this is still the angular momentum of one of the two constituent subsystems, minus that of the other, or half of
the difference between the two subsystems' angular momenta.
Additionally, it has the dimensions of rate of change of angle, which makes sense since on the sphere
only angles are meaningful.
In either case, $\Phi$-independence in the potential corresponds to there being no means for angular momentum to be
exchanged between the subsystem composed of particles 2, 3 and that composed of particle 1.
For scaled triangleland, one can consider the configuration space vector
\begin{equation}
\underline{\mbox{I}} =
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont (}
\stackrel{ \mbox{$\mbox{I}$sin$\Theta$cos$\Phi$} }
{ \stackrel{ \mbox{$\mbox{I}$sin$\Theta$sin$\Phi$} }
{ \mbox{$\mbox{I}$cos$\Theta$} \hspace{0.1in} } }
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont )} =
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont (}
\stackrel{ \mbox{$2\iota_1\iota_2\mbox{cos}\Phi$} }
{ \stackrel{ \mbox{$2\iota_1\iota_2\mbox{sin}\Phi$} }
{ \mbox{$\iota_2^2 - \iota_1^2$} } }
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont )} \mbox{ } .
\end{equation}
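Note that the two expressions for $\underline{\mbox{I}}$ are consistent: $\{2\iota_1\iota_2\}^2\{\mbox{cos}^2\Phi + \mbox{sin}^2\Phi\} + \{\iota_2^2 - \iota_1^2\}^2 = \{\iota_1^2 + \iota_2^2\}^2 = \mbox{I}^2$, consistent with $\mbox{I} = \iota_1^2 + \iota_2^2$ (i.e. $\mbox{I}_i = \iota_i^2$).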
Then from this and its conjugate momentum $\underline{\mbox{P}}$, the vector
\begin{equation}
\underline{\mbox{J}} = \underline{\mbox{I}} \mbox{\scriptsize{\bf $\mbox{ } \times \mbox{ }$}} \underline{\mbox{P}}
\end{equation}
can be formed, which is conserved in the very special case; $J$ is the axial `$Z$' component of this,
$\mbox{J}_Z$, while the new components are
\begin{equation}
\mbox{J}_X
= \frac{2}{\iota_1^2 + \iota_2^2}
\{ \{{\iota_1^2 + \iota_2^2}\}\{ \iota_1\iota_2^* - \iota_2\iota_1^* \}\mbox{sin}\Phi +
\{\iota_1^2 - \iota_2^2\}\iota_1\iota_2\mbox{cos}\Phi \Phi^* \}
\end{equation}
and
\begin{equation}
\mbox{J}_Y
= \frac{2}{\iota_1^2 + \iota_2^2}
\{ \{{\iota_1^2 + \iota_2^2}\}\{ \iota_1\iota_2^* - \iota_2\iota_1^* \}\mbox{cos}\Phi +
\{\iota_1^2 - \iota_2^2\}\iota_1\iota_2\mbox{sin}\Phi \Phi^* \} \mbox{ } .
\end{equation}
Additionally, the configuration space Laplace--Runge--Lenz type vector
\begin{equation}
\underline{\mbox{Q}} = \underline{\mbox{P}} \mbox{\scriptsize{\bf $\mbox{ } \times \mbox{ }$}} \underline{\mbox{J}} - \overline{\mbox{\sffamily E}}\underline{\mbox{I}}/\mbox{I} =
\end{equation}
\begin{equation}
\left\{
4\{ \{\iota_1^2 + \iota_2^2\} \{\iota_1^{*2} + \iota_2^{*2}\} + \iota_1^2\iota_2^2\Phi^{*2} \}
- \frac{\overline{\mbox{\sffamily E}}}{\iota_1^2 + \iota_2^2}
\right\}
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont (}
\stackrel{ \mbox{$2\iota_1\iota_2\mbox{cos}\Phi$} }
{ \stackrel{ \mbox{$2\iota_1\iota_2\mbox{sin}\Phi$} }
{ \mbox{$\iota_2^2 - \iota_1^2$} } }
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont )}
- 4 \{ \iota_1^2 + \iota_2^2 \}\{ \iota_1\iota_1^* + \iota_2\iota_2^* \}
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont (}
\stackrel{ \mbox{$\{\iota_1^*\iota_2 + \iota_2^*\iota_1\}\mbox{cos}\Phi -
\iota_1\iota_2\mbox{sin}\Phi \Phi^*$} }
{ \stackrel{\mbox{$\{\iota_1^*\iota_2 + \iota_2^*\iota_1\}\mbox{sin}\Phi +
\iota_1\iota_2\mbox{cos}\Phi\Phi^*$} }
{ \mbox{$\iota_2\iota_2^* - \iota_1\iota_1^*$} } }
\mbox{\fontsize{1.3cm}{1.3cm}\selectfont )}
\end{equation}
which is also conserved.
However, this only furnishes one further independent conserved quantity, due to the interdependences
\begin{equation}
\underline{\mbox{J}}\cdot\underline{\mbox{Q}} = 0 \mbox{ } \mbox{ and } \mbox{ }
\mbox{Q}^2 = \overline{\mbox{\sffamily E}}^2 - 2A\mbox{J}^4/\overline{\mbox{\sffamily E}}^2 \mbox{ } .
\end{equation}
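The first of these is immediate since $\underline{\mbox{J}}\cdot\{\underline{\mbox{P}} \mbox{\scriptsize{\bf $\mbox{ } \times \mbox{ }$}} \underline{\mbox{J}}\} = 0$ and $\underline{\mbox{J}}\cdot\underline{\mbox{I}} = \{\underline{\mbox{I}} \mbox{\scriptsize{\bf $\mbox{ } \times \mbox{ }$}} \underline{\mbox{P}}\}\cdot\underline{\mbox{I}} = 0$.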
The very special multiple harmonic oscillator like potential case of scalefree triangleland also has not just a conserved quantity ${\cal J}$ but a conserved vector
$\underline{\cal J}$ of which ${\cal J}$ is the axial `$Z$' component (so that
$\underline{\cal J} = \underline{\mbox{J}}/\mbox{I}$).
$\Phi$--independent scalefree triangleland can still have yet more conserved quantities, but these are
of a rather more complicated nature along the lines described in e.g. \cite{+LRL, Goldstein}.
\section{Introduction}
Special type solutions for elliptic equations or systems have played
an essential role in inverse problems since the pioneering work of Calder\'on.
In \cite{Sylvester}, Sylvester and Uhlmann used complex geometric
optics (CGO) solutions to solve the inverse boundary value problems
for the conductivity equation. Based on CGO solutions, Ikehata proposed
the so-called enclosure method to reconstruct an inclusion or obstacle;
see \cite{Ike-enclosue}. There are many results along the lines of this reconstruction
algorithm: in \cite{UhlWang-disCGO}, CGO solutions
with polynomial-type phase functions are constructed for the Helmholtz equation $\Delta u+k^{2}u=0$
and for elliptic systems having the Laplacian as the principal part. In
\cite{Nakamura}, a very special solution of the conductivity
equation $\nabla\cdot(\gamma(x)\nabla u)=0$ (called the oscillating-decaying
solution) was constructed, again with isotropic leading part. However, when the
medium is anisotropic, we need to consider more general elliptic equations,
such as anisotropic scalar elliptic equations $\nabla\cdot(A^{0}(x)\nabla u)+k^{2}u=0$,
where $A^{0}(x)=(a_{ij}^{0}(x))$, $a_{ij}^{0}(x)=a_{ji}^{0}(x)$
and assume the uniform ellipticity condition, that is, $\forall\xi=(\xi_{1},\xi_{2},\cdots\xi_{n})\in\mathbb{R}^{n}$,
$\lambda^{0}|\xi|^{2}\leq\sum_{i,j}a_{ij}^{0}(x)\xi_{i}\xi_{j}\leq\Lambda^{0}|\xi|^{2}$.
In this paper, we want to use the oscillating-decaying solutions in
our reconstruction algorithm. We make the following assumptions. First, we
consider this problem in $\mathbb{R}^{3}$ and assume that $D$ is
an unknown obstacle with an inhomogeneous index of refraction, embedded in a larger domain
$\Omega$, so that $D\Subset\Omega\subset\mathbb{R}^{3}$, where $D$ and $\Omega$ are $C^{1}$ domains. Second, we assume
$a_{ij}(x)=a_{ij}^{0}(x)\chi_{\Omega\backslash D}+\widetilde{a_{ij}}(x)\chi_{D}$,
where $\widetilde{a_{ij}}(x)$ is regarded as a perturbation in the
unknown obstacle $D$ and $\widetilde{a_{ij}}(x)$ satisfies $\widetilde{\lambda}|\xi|^{2}\leq\sum_{i,j}\widetilde{a_{ij}}(x)\xi_{i}\xi_{j}\leq\widetilde{\Lambda}|\xi|^{2}$.
Moreover, we need to assume that there exist universal constants
$0<\widehat{\lambda}\leq\widehat{\Lambda}$ such that $\forall\xi\in\mathbb{R}^{3}$,
we have $\widehat{\lambda}|\xi|^{2}\leq\sum_{i,j}(\widetilde{a_{ij}}(x)\chi_{D}-a_{ij}^{0}(x))\xi_{i}\xi_{j}\leq\widehat{\Lambda}|\xi|^{2}$,
which means that the perturbed term $\widetilde{A}(x)$ is ``greater''
than the unperturbed term $A^{0}$ inside the unknown obstacle $D$.
Denote $A(x)=(a_{ij}(x))$ and $A^{0}(x)=(a_{ij}^{0}(x))$, let $k>0$,
and consider the steady state anisotropic acoustic wave equation in $\Omega$
with Dirichlet boundary condition
\begin{equation}
\begin{cases}
\nabla\cdot(A(x)\nabla u)+k^{2}u=0 & \mbox{ in }\Omega\\
u=f & \mbox{ on }\partial\Omega.
\end{cases}\label{eq:1.1}
\end{equation}
In the unperturbed case, we have
\begin{equation}
\begin{cases}
\nabla\cdot(A^{0}(x)\nabla u_{0})+k^{2}u_{0}=0 & \mbox{ in }\Omega\\
u_{0}=f & \mbox{ on }\partial\Omega.
\end{cases}\label{eq:1.2}
\end{equation}
In this paper, we assume that $k^{2}$ is not a Dirichlet eigenvalue
of the operator $-\nabla\cdot(A\nabla\bullet)$ and $-\nabla\cdot(A^{0}\nabla\bullet)$
in $\Omega$. It is known that for any $f\in H^{1/2}(\partial\Omega)$,
there exists a unique solution $u$ to (\ref{eq:1.1}). We define
the Dirichlet-to-Neumann map in the anisotropic case, say $\Lambda_{D}:H^{1/2}(\partial\Omega)\to H^{-1/2}(\partial\Omega)$
as follows.
\begin{defn}
$\Lambda_{D}f:=A\nabla u\cdot\nu=\sum_{i,j=1}^{3}a_{ij}\partial_{j}u\cdot\nu_{i}$
and $\Lambda_{\emptyset}f:=A^{0}\nabla u_{0}\cdot\nu=\sum_{i,j=1}^{3}a_{ij}^{0}\partial_{j}u_{0}\cdot\nu_{i}$,
where $\nu=(\nu_{1},\nu_{2},\nu_{3})$ is an outer normal on $\partial\Omega$.
\end{defn}
\textbf{Inverse problem}: Identify the location and the convex hull
of $D$ from the DN-map $\Lambda_{D}$. The domain $D$ can also be
treated as an inclusion embedded in $\Omega$. The aim of this work
is to give a reconstruction algorithm for this problem. Note that
the information on the medium parameter $(\widetilde{a_{ij}}(x))$
inside $D$ is not known a priori.
The main tool in our reconstruction method is the oscillating-decaying
solutions for the second order anisotropic elliptic differential equations.
We use the results coming from the paper \cite{JN-ods} to construct
the oscillating-decaying solution. In section 2, we will construct
the oscillating-decaying solutions for anisotropic elliptic equations,
note that even if $k=0$, which means the equation is $\nabla\cdot(A(x)\nabla u)=0$,
we do not have any CGO-type solutions. Roughly speaking, given a hyperplane,
an oscillating-decaying solution is oscillating very rapidly along
this plane and decaying exponentially in the direction transversely
to the same plane. They are also CGO-solutions but with the imaginary
part of the phase function non-negative. Note that the domain of the
oscillating-decaying solutions is not over the whole $\Omega$, so
we need to extend such solutions to the whole domain. Fortunately,
the Runge approximation property provides us a good approach to extend
this special solution in section 3.
In Ikehata's work, the CGO-solutions are used to define the indicator
function (see \cite{Ike-enclos1} for the definition). In order to
apply the oscillating-decaying solutions to the inverse problem of identifying
an inclusion, we have to modify the definition of the indicator function
using the Runge approximation property. It was first recognized by
Lax \cite{Lax} that the Runge approximation property is a consequence
of the weak unique continuation property. In our case, it is clear
that the anisotropic elliptic equation has the weak unique continuation
property if the leading part is Lipschitz continuous.
\section{Construction of oscillating-decaying solutions}
In this section, we follow the paper \cite{JN-ods} to construct the
oscillating-decaying solution in the anisotropic elliptic equations.
In our case, since we only consider a scalar elliptic equation, it's
construction is simpler than the construction in \cite{JN-ods}. Consider
the Dirichlet problem
\begin{equation}
\begin{cases}
\nabla\cdot(A(x)\nabla u)+k^{2}u=0 & \mbox{ in }\Omega\\
u=f & \mbox{ on }\partial\Omega.
\end{cases}\label{eq:2.1}
\end{equation}
Note that the oscillating-decaying solutions of
\[
\begin{cases}
\nabla\cdot(A(x)\nabla u)=0 & \mbox{ in }\Omega\\
u=f & \mbox{ on }\partial\Omega
\end{cases}
\]
will have the same representation as the equation (\ref{eq:2.1}),
that is, the lower order term $k^{2}u$ will not affect the form of
the oscillating-decaying solutions; we will see the details in the
following constructions. Now, we assume that the domain $\Omega$
is an open, bounded smooth domain in $\mathbb{R}^{3}$ and the coefficients
$A(x)=(a_{ij}(x))$ satisfy $\sum_{i,j=1}^{3}a_{ij}(x)\xi_{i}\xi_{j}\geq\lambda|\xi|^{2}$,
$\forall\xi=(\xi_{1},\xi_{2},\xi_{3})\in\mathbb{R}^{3}$ and $\lambda$
is a universal constant.
Assume that
\[
A(x)=(a_{ij}(x))\in B^{\infty}(\mathbb{R}^{3})=\{f\in C^{\infty}(\mathbb{R}^{3}):\partial^{\alpha}f\in L^{\infty}(\mathbb{R}^{3}),\mbox{ }\forall\alpha\in\mathbb{Z}_{+}^{3}\}
\]
is the anisotropic coefficients satisfying $a_{ij}(x)=a_{ji}(x)$
$\forall i,j$ and there exists a $\lambda>0$ such that $\sum_{i,j}a_{ij}(x)\xi_{i}\xi_{j}\geq\lambda|\xi|^{2}$
$\forall x\in\mathbb{R}^{3}$ (uniform ellipticity). It is clear that
$A(x)$ is Lipschitz continuous if each $a_{ij}(x)\in B^{\infty}(\mathbb{R}^{3})$,
it has weak continuation property.
We give several notations as follows. Assume that $\Omega\subset\mathbb{R}^{3}$
is an open set with smooth boundary and $\omega\in S^{2}$ is given.
Let $\eta\in S^{2}$ and $\zeta\in S^{2}$ be chosen so that $\{\eta,\zeta,\omega\}$
forms an orthonormal system of $\mathbb{R}^{3}$. We then denote $x'=(x\cdot\eta,x\cdot\zeta)$.
Let $t\in\mathbb{R}$, let $\Omega_{t}(\omega)=\Omega\cap\{x\cdot\omega>t\}$
and assume that $\Sigma_{t}(\omega)=\Omega\cap\{x\cdot\omega=t\}$ is a non-empty
open set. We consider a scalar function $u_{\chi_{t},b,t,N,\omega}(x,\tau):=u(x,\tau)\in C^{\infty}(\overline{\Omega_{t}(\omega)}\backslash\partial\Sigma_{t}(\omega))\cap C^{0}(\overline{\Omega_{t}(\omega)})$
with $\tau\gg1$ satisfying:
\begin{equation}
\begin{cases}
L_{A}u=\nabla\cdot(A(x)\nabla u)+k^{2}u=0 & \mbox{ in }\Omega_{t}(\omega)\\
u=e^{i\tau x\cdot\xi}\{\chi_{t}(x')Q_{t}(x')b+\beta_{\chi_{t},t,b,N,\omega}\} & \mbox{ on }\Sigma_{t}(\omega),
\end{cases}\label{eq:2.2}
\end{equation}
where $\xi\in S^{2}$ lying in the span of $\eta$ and $\zeta$ is
chosen and fixed, $\chi_{t}(x')\in C_{0}^{\infty}(\mathbb{R}^{2})$
with supp$(\chi_{t})\subset\Sigma_{t}(\omega)$, $Q_{t}(x')$ is a
nonzero smooth function and $0\neq b\in\mathbb{C}$. Moreover, $\beta_{\chi_{t},b,t,N,\omega}(x',\tau)$
is a smooth function supported in supp($\chi_{t}$) satisfying:
\[
\|\beta_{\chi_{t},b,t,N,\omega}(\cdot,\tau)\|_{L^{2}(\mathbb{R}^{2})}\leq c\tau^{-1}
\]
for some constant $c>0$. From now on, we use $c$ to denote a general
positive constant whose value may vary from line to line. As in the
paper \cite{JN-ods}, $u_{\chi_{t},b,t,N,\omega}$ can be written
as
\[
u_{\chi_{t},b,t,N,\omega}=w_{\chi_{t},b,t,N,\omega}+r_{\chi_{t},b,t,N,\omega}
\]
with
\begin{equation}
w_{\chi_{t},b,t,N,\omega}=\chi_{t}(x')Q_{t}e^{i\tau x\cdot\xi}e^{-\tau(x\cdot\omega-t)A_{t}(x')}b+\gamma_{\chi_{t},b,t,N,\omega}(x,\tau)\label{eq:2.3}
\end{equation}
and $r_{\chi_{t},b,t,N,\omega}$ satisfying
\begin{equation}
\|r_{\chi_{t},b,t,N,\omega}\|_{H^{1}(\Omega_{t}(\omega))}\leq c\tau^{-N-1/2},\label{eq:2.4}
\end{equation}
where $A_{t}(\cdot)\in B^{\infty}(\mathbb{R}^{2})$ is a complex function
with its real part Re$A_{t}(x')>0$, and $\gamma_{\chi_{t},b,t,N,\omega}$
is a smooth function supported in supp($\chi_{t}$) satisfying
\begin{equation}
\|\partial_{x}^{\alpha}\gamma_{\chi_{t},b,t,N,\omega}\|_{L^{2}(\Omega_{s}(\omega))}\leq c\tau^{|\alpha|-3/2}e^{-\tau(s-t)a}\label{eq:2.5}
\end{equation}
for $|\alpha|\leq1$ and $s\geq t$, where $a>0$ is some constant
depending on $A_{t}(x')$.
Without loss of generality, we consider the special case where $t=0$,
$\omega=e_{3}=(0,0,1)$ and choose $\eta=(1,0,0)$, $\zeta=(0,1,0)$.
The general case can be obtained from this special case by change
of coordinates. Define $L=L_{A}$ and $\widetilde{M}\cdot=e^{-i\tau x'\cdot\xi'}L(e^{i\tau x'\cdot\xi'}\cdot)$,
where $x'=(x_{1},x_{2})$ and $\xi'=(\xi_{1},\xi_{2})$ with $|\xi'|=1$,
then $\widetilde{M}$ is a differential operator. To be precise, by
using $a_{jl}=a_{lj}$, we calculate $\widetilde{M}$ to be given
by
\begin{eqnarray*}
\widetilde{M} & = & -\tau^{2}\sum_{jl}a_{jl}\xi_{j}\xi_{l}+2\tau\sum_{jl}a_{jl}(i\xi_{l})\partial_{j}+\sum_{jl}a_{jl}\partial_{j}\partial_{l}\\
& & +\sum_{jl}(\partial_{j}a_{jl})(i\tau\xi_{l})+\sum_{jl}(\partial_{j}a_{jl})\partial_{l}+k^{2}\\
& = & -\tau^{2}\sum_{jl}a_{jl}\xi_{j}\xi_{l}+2\tau\sum_{l}a_{3l}(i\xi_{l})\partial_{3}+a_{33}\partial_{3}\partial_{3}\\
& & +2\tau\sum_{j\neq3,l}a_{jl}(i\xi_{l})\partial_{j}+\sum_{(j,l)\backslash\{3,3\}}a_{jl}\partial_{j}\partial_{l}\\
& & +\sum_{jl}(\partial_{j}a_{jl})(i\tau\xi_{l})+\sum_{jl}(\partial_{j}a_{jl})\partial_{l}+k^{2}
\end{eqnarray*}
with $\xi_{3}=0$. Now, we want to solve
\[
\widetilde{M}v=0,
\]
which is equivalent to $Mv=0$, where $M=a_{33}^{-1}\widetilde{M}$.
Now, we use the same idea in \cite{JN-ods}, define $\left\langle e,f\right\rangle =\sum_{ij}a_{ij}e_{i}f_{j}$,
where $e=(e_{1},e_{2},e_{3})$, $f=(f_{1},f_{2},f_{3})$ and denote
$\left\langle e,f\right\rangle _{0}=\left\langle e,f\right\rangle |_{x_{3}=0}$.
Let $P$ be a differential operator, and we define the order of $P$,
denoted by $ord(P)$, in the following sense:
\[
\|P(e^{-\tau x_{3}A(x')}\varphi(x'))\|_{L^{2}(\mathbb{R}_{+}^{3})}\leq c\tau^{ord(P)-1/2},
\]
where $\mathbb{R}_{+}^{3}=\{x_{3}>0\}$, $A(x')$ is a smooth complex
function with its real part greater than 0 and $\varphi(x')\in C_{0}^{\infty}(\mathbb{R}^{2})$.
In this sense, similar to \cite{JN-ods}, we can see that $\tau$,
$\partial_{3}$ are of order 1, $\partial_{1},\partial_{2}$ are of
order 0 and $x_{3}$ is of order -1.
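For instance, if $\mbox{Re}A(x')\geq a>0$ on the (compact) support of $\varphi$, then $\int_{0}^{\infty}x_{3}^{2}e^{-2\tau x_{3}\mbox{\scriptsize Re}A(x')}dx_{3}=O(\tau^{-3})$, so that $\|x_{3}e^{-\tau x_{3}A(x')}\varphi(x')\|_{L^{2}(\mathbb{R}_{+}^{3})}\leq c\tau^{-3/2}$, which is the statement that ord$(x_{3})=-1$; similarly, applying $\partial_{3}$ to $e^{-\tau x_{3}A(x')}\varphi(x')$ produces a factor $-\tau A(x')$, whence ord$(\partial_{3})=1$.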
Now according to this order, the principal part $M_{2}$ (order 2)
of $M$ is:
\[
M_{2}=-\{D_{3}^{2}+2\tau\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle e_{3},\rho\right\rangle _{0}D_{3}+\tau^{2}\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle \rho,\rho\right\rangle _{0}\}
\]
with $D_{3}=-i\partial_{3}$ and $\rho=(\xi_{1},\xi_{2},0)$. Note
that the principal part $M_{2}$ does not involve the lower order
term $k^{2}\cdot$. Note that $M_{2}$ is obtained by the Taylor's
expansion of $M$ at $x_{3}=0$, that is,
\begin{eqnarray*}
M(x',x_{3}) & = & M(x',0)+x_{3}\partial_{3}M(x',0)+\cdots+\dfrac{x_{3}^{N-1}}{(N-1)!}\partial_{3}^{N-1}M(x',0)+R\\
& = & M_{2}+M_{1}+\cdots+M_{-N+1}+R,
\end{eqnarray*}
where ord$(M_{j})=j$ and ord$(R)=-N$. To solve $Mv=0$ is equivalent
to solve
\begin{equation}
M_{2}v=-(M_{1}+\cdots+M_{-N+1}+R)v:=f.\label{eq:2.6}
\end{equation}
If we set $w_{1}=v$ and $w_{2}=-\tau^{-1}\left\langle e_{3},e_{3}\right\rangle _{0}D_{3}v-\left\langle e_{3},\rho\right\rangle _{0}v$,
then we can compute
\begin{equation}
D_{3}w_{1}=-\tau\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle e_{3},\rho\right\rangle _{0}w_{1}-\tau\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}w_{2}\label{eq:2.7}
\end{equation}
and
\begin{eqnarray}
D_{3}w_{2} & = & -\tau\{\left\langle \rho,e_{3}\right\rangle _{0}^{2}\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}-\left\langle \rho,\rho\right\rangle _{0}\}w_{1}-\tau\left\langle \rho,e_{3}\right\rangle _{0}\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}w_{2}\label{eq:2.8}\\
& & +\tau^{-1}\left\langle e_{3},e_{3}\right\rangle _{0}f.\nonumber
\end{eqnarray}
For detail calculations, we refer readers to see \cite{JN-ods}. If
we set $W=[w_{1},w_{2}]^{T}$ and use (\ref{eq:2.7}) and (\ref{eq:2.8}),
we have
\[
D_{3}W=\tau KW+\left[\begin{array}{c}
0\\
\tau^{-1}\left\langle e_{3},e_{3}\right\rangle _{0}f
\end{array}\right],
\]
where
\begin{equation}
K=\left[\begin{array}{cc}
\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle e_{3},\rho\right\rangle _{0} & \left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\\
\left\langle \rho,e_{3}\right\rangle _{0}^{2}\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}-\left\langle \rho,\rho\right\rangle _{0} & \left\langle \rho,e_{3}\right\rangle _{0}\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}
\end{array}\right].\label{eq:2.9}
\end{equation}
By (\ref{eq:2.6}), we can express the above system as
\begin{equation}
D_{3}W=(\tau K+K_{0}+\cdots+K_{-N}+S)W,\label{eq:2.10}
\end{equation}
where ord$(K_{j})=j$ and ord$(S)=-N-1$ and all the differential
operators $K_{j}$ involves only $x'$ derivatives. Moreover, $K$
is a matrix function independent of $x_{3}$ and its eigenvalues are
determined from
\[
\det(\lambda I-K)=0,
\]
which is equivalent to
\begin{equation}
\lambda^{2}-2\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle e_{3},\rho\right\rangle _{0}\lambda+\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle \rho,\rho\right\rangle _{0}=0.\label{eq:2.11}
\end{equation}
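Indeed, from (\ref{eq:2.9}) one reads off $\mbox{tr}K=2\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle e_{3},\rho\right\rangle _{0}$ and $\det K=\left\langle e_{3},\rho\right\rangle _{0}^{2}\left\langle e_{3},e_{3}\right\rangle _{0}^{-2}-\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\{\left\langle \rho,e_{3}\right\rangle _{0}^{2}\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}-\left\langle \rho,\rho\right\rangle _{0}\}=\left\langle e_{3},e_{3}\right\rangle _{0}^{-1}\left\langle \rho,\rho\right\rangle _{0}$. Moreover, since $\left\langle \cdot,\cdot\right\rangle _{0}$ is positive definite (by the uniform ellipticity) and $e_{3}$, $\rho$ are linearly independent, the Cauchy--Schwarz inequality gives $\left\langle e_{3},\rho\right\rangle _{0}^{2}<\left\langle e_{3},e_{3}\right\rangle _{0}\left\langle \rho,\rho\right\rangle _{0}$, so the discriminant of (\ref{eq:2.11}) is negative and its roots form a complex conjugate pair.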
By the uniform ellipticity assumption on $(a_{ij})$, (\ref{eq:2.11})
thus has no real roots; we label them $\lambda^{\pm}$ with $\pm\mbox{Im}\lambda^{\pm}>0$. Similar to \cite{JN-ods},
we can take $\widetilde{Q}=[q^{+},q^{-}]$ to be a nonsingular matrix
with linearly independent vectors $q^{\pm}$ such that
\[
\widetilde{K}=\widetilde{Q}^{-1}K\widetilde{Q}=\left[\begin{array}{cc}
\lambda^{+} & 0\\
0 & \lambda^{-}
\end{array}\right],
\]
where $\lambda^{\pm}\in\mathbb{C}_{\pm}:=\{\pm\mbox{Im}\lambda>0\}$,
respectively. Moreover, we choose
\begin{equation}
\widetilde{Q}=\left[\begin{array}{cc}
q & \overline{q}\\
q' & \overline{q'}
\end{array}\right],\label{eq:2.12}
\end{equation}
where
\[
\left[\begin{array}{c}
q\\
q'
\end{array}\right]=[q^{+}]\mbox{ and }\left[\begin{array}{c}
\overline{q}\\
\overline{q'}
\end{array}\right]=[q^{-}]
\]
By virtue of the matrix $\widetilde{Q}$ in (\ref{eq:2.12}), we have
$\lambda^{-}=\overline{\lambda^{+}}$, and $\widetilde{Q}$ is nonsingular.
If we set $\widehat{W}=\widetilde{Q}^{-1}W$, we get from (\ref{eq:2.10})
that
\begin{equation}
D_{3}\widehat{W}=(\tau\widetilde{K}+\widehat{K}_{0}+\cdots+\widehat{K}_{-N}+\widehat{S})\widehat{W},\label{eq:2.13}
\end{equation}
where ord$(\widehat{K}_{j})=j$ and ord$(\widehat{S})=-N-1$. As
before, we know that each $\widehat{K}_{j}$ contains only $x'$ derivatives
since the original $K_{j}$ involves only $x'$ derivatives. In addition,
$\widehat{K}_{0}$ can be divided into terms involving $\tau x_{3}$
and terms formed by the differential operator in $\partial_{x'}$
with coefficients independent of $x_{3}$. Likewise, $\widehat{K}_{j}$
can be grouped into terms containing $\tau x_{3}^{-j+1},\tau^{-1}x_{3}^{j-1},x_{3}^{-j}$,
respectively, where $-N\leq j\leq-1$.
From now on, we have decoupled $K$ by choosing a suitable matrix
function $\widetilde{Q}$, next we want to decouple $\widehat{K}_{0},\cdots\widehat{K}_{-N}$.
First, we show how to decouple $\widehat{K}_{0}$. Let $\widehat{W}=(I+x_{3}A^{(0)}+\tau^{-1}B^{(0)})\widetilde{W}^{(0)}$
with $A^{(0)},B^{(0)}$ being differential operators in $\partial_{x'}$
with coefficients independent of $x_{3}$, then we have
\begin{eqnarray*}
D_{3}\widetilde{W}^{(0)} & = & \{\tau\widetilde{K}+(\widehat{K}_{0}-\tau x_{3}A^{(0)}\widetilde{K}+\tau x_{3}\widetilde{K}A^{(0)}-B^{(0)}\widetilde{K}+\widetilde{K}B^{(0)}+iA^{(0)})\\
& & +\widehat{K}'_{-1}+\cdots\}\widetilde{W}^{(0)},
\end{eqnarray*}
where ord($\widehat{K}'_{-1})=-1$ and the remainder contains terms
of order at most -2. Let $\widetilde{K}_{0}:=\widehat{K}_{0}-\tau x_{3}A^{(0)}\widetilde{K}+\tau x_{3}\widetilde{K}A^{(0)}-B^{(0)}\widetilde{K}+\widetilde{K}B^{(0)}+iA^{(0)}$,
we analyze $\widetilde{K}_{0}$ more carefully. Set $\widehat{K}_{0}=\tau x_{3}\widehat{K}_{0,1}+\widehat{K}_{0,2}$
and express $\widehat{K}_{0,1},\widehat{K}_{0,2},A^{(0)}$ and $B^{(0)}$
in block forms, that is,
\[
\widehat{K}_{0,l}=\left[\begin{array}{cc}
\widehat{K}_{0,l}(1,1) & \widehat{K}_{0,l}(1,2)\\
\widehat{K}_{0,l}(2,1) & \widehat{K}_{0,l}(2,2)
\end{array}\right],\mbox{ }l=1,2,
\]
\[
A^{(0)}=\left[\begin{array}{cc}
A^{(0)}(1,1) & A^{(0)}(1,2)\\
A^{(0)}(2,1) & A^{(0)}(2,2)
\end{array}\right]\mbox{ and }B^{(0)}=\left[\begin{array}{cc}
B^{(0)}(1,1) & B^{(0)}(1,2)\\
B^{(0)}(2,1) & B^{(0)}(2,2)
\end{array}\right].
\]
Then the off-diagonal blocks of $\widetilde{K}_{0}$ are given by:
\begin{eqnarray*}
\widetilde{K}_{0}(1,2) & = & \tau x_{3}\{\widehat{K}_{0,1}(1,2)-A^{(0)}(1,2)\lambda^{-}+\lambda^{+}A^{(0)}(1,2)\}\\
& & +\{\widehat{K}_{0,2}(1,2)+iA^{(0)}(1,2)-B^{(0)}(1,2)\lambda^{-}+\lambda^{+}B^{(0)}(1,2)\},
\end{eqnarray*}
\begin{eqnarray*}
\widetilde{K}_{0}(2,1) & = & \tau x_{3}\{\widehat{K}_{0,1}(2,1)-A^{(0)}(2,1)\lambda^{-}+\lambda^{+}A^{(0)}(2,1)\}\\
& & +\{\widehat{K}_{0,2}(2,1)+iA^{(0)}(2,1)-B^{(0)}(2,1)\lambda^{-}+\lambda^{+}B^{(0)}(2,1)\}.
\end{eqnarray*}
Since $\lambda^{\pm}\in\mathbb{C}_{\pm}$, we can find suitable $A^{(0)}(1,2)$
and $A^{(0)}(2,1)$ such that
\[
\begin{cases}
\widehat{K}_{0,1}(1,2)-A^{(0)}(1,2)\lambda^{-}+\lambda^{+}A^{(0)}(1,2)=0\\
\widehat{K}_{0,1}(2,1)-A^{(0)}(2,1)\lambda^{-}+\lambda^{+}A^{(0)}(2,1)=0
\end{cases}
\]
(see similar arguments in \cite{Taylor}). Similarly, we can use the
same method to find $B^{(0)}(1,2)$ and $B^{(0)}(2,1)$ so that
\begin{equation}
\begin{cases}
\widehat{K}_{0,2}(1,2)+iA^{(0)}(1,2)-B^{(0)}(1,2)\lambda^{-}+\lambda^{+}B^{(0)}(1,2)=0,\\
\widehat{K}_{0,2}(2,1)+iA^{(0)}(2,1)-B^{(0)}(2,1)\lambda^{-}+\lambda^{+}B^{(0)}(2,1)=0.
\end{cases}\label{eq:2.14}
\end{equation}
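We note that these equations (and the earlier ones for $A^{(0)}$) are solvable precisely because $\lambda^{+}\neq\lambda^{-}$: e.g. when $\widehat{K}_{0,1}(1,2)$ acts by multiplication, the corresponding equation for $A^{(0)}(1,2)$ simply gives $A^{(0)}(1,2)=\widehat{K}_{0,1}(1,2)/\{\lambda^{-}-\lambda^{+}\}$, which is well defined since $\lambda^{+}-\lambda^{-}=2i\mbox{Im}\lambda^{+}\neq0$; in general one solves order by order in $\partial_{x'}$ as in \cite{Taylor}.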
Since $\widehat{K}_{0,2}(1,2)$ and $\widehat{K}_{0,2}(2,1)$ are
differential operators in $\partial_{x'}$ with coefficients independent
of $x_{3}$, we will look for $B^{(0)}(1,2)$ and $B^{(0)}(2,1)$
as the same type of differential operators. By (\ref{eq:2.14}) and
using $\lambda^{\pm}\in\mathbb{C}_{\pm}$, we can solve for $B^{(0)}(1,2)$
and $B^{(0)}(2,1)$. To find $A^{(0)}$ and $B^{(0)}$, we simply
set diagonal blocks of them are zero, i.e.,
\[
A^{(0)}=\left[\begin{array}{cc}
0 & A^{(0)}(1,2)\\
A^{(0)}(2,1) & 0
\end{array}\right]\mbox{ and }B^{(0)}=\left[\begin{array}{cc}
0 & B^{(0)}(1,2)\\
B^{(0)}(2,1) & 0
\end{array}\right].
\]
With these matrices $A^{(0)}$ and $B^{(0)}$, we can see that
\begin{equation}
D_{3}\widetilde{W}^{(0)}=\{\tau\widetilde{K}+\widetilde{K}_{0}+\widehat{K}'_{-1}+\cdots\}\widetilde{W}^{(0)}\label{eq:2.15}
\end{equation}
where
\[
\widetilde{K}_{0}=\left[\begin{array}{cc}
\widetilde{K}_{0}(1,1) & 0\\
0 & \widetilde{K}_{0}(2,2)
\end{array}\right].
\]
Moreover, we want to decouple $\widehat{K}'_{-1}$, which can be written as $\widehat{K}'_{-1}=\tau x_{3}^{2}\widehat{K}'_{-1,1}+x_{3}\widehat{K}'_{-1,2}+\tau^{-1}\widehat{K}'_{-1,3}$.
We can see that $\widehat{K}'_{-1,1}$, $\widehat{K}'_{-1,2}$ and
$\widehat{K}'_{-1,3}$ are differential operators in $\partial_{x'}$
of order zero, one and two with coefficients independent of $x_{3}$,
respectively. Similarly, we can set $\widetilde{W}^{(0)}=(I+x_{3}^{2}A^{(1)}+\tau^{-1}x_{3}B^{(1)}+\tau^{-2}C^{(1)})\widetilde{W}^{(1)}$,
where $A^{(1)}$, $B^{(1)}$ and $C^{(1)}$ are differential operators
in $\partial_{x'}$. Now plugging $\widetilde{W}^{(0)}$ of above
form into (\ref{eq:2.15}), we have
\begin{eqnarray}
D_{3}\widetilde{W}^{(1)} & = & \{\tau\widetilde{K}+\widetilde{K}_{0}+\tau x_{3}^{2}(\widehat{K}'_{-1,1}-A^{(1)}\widetilde{K}+\widetilde{K}A^{(1)})+x_{3}(\widehat{K}'_{-1,2}-B^{(1)}\widetilde{K}\nonumber \\
& & +\widetilde{K}B^{(1)}+2A^{(1)})+\tau^{-1}(\widehat{K}'_{-1,3}-C^{(1)}\widetilde{K}+\widetilde{K}C^{(1)}+iB^{(1)})]\nonumber \\
& & +\cdots\}\widetilde{W}^{(1)}\label{eq:2.16}
\end{eqnarray}
where the remainder consists of terms with order at most -2. Then
we use the same argument, we can find suitable $A^{(1)}$, $B^{(1)}$
and $C^{(1)}$ such that the off-diagonal blocks of the order -1 term
on the right hand side of (\ref{eq:2.16}) are zero. Therefore, we
obtain
\[
D_{3}\widetilde{W}^{(1)}=\{\tau\widetilde{K}+\widetilde{K}_{0}+\widetilde{K}_{-1}+\cdots\}\widetilde{W}^{(1)}
\]
with
\[
\widetilde{K}_{-1}=\left[\begin{array}{cc}
\widetilde{K}_{-1}(1,1) & 0\\
0 & \widetilde{K}_{-1}(2,2)
\end{array}\right].
\]
Recursively, by defining
\begin{eqnarray*}
\widehat{W} & = & (I+x_{3}A^{(0)}+\tau^{-1}B^{(0)})(I+x_{3}^{2}A^{(1)}+\tau^{-1}x_{3}B^{(1)}+\tau^{-2}C^{(1)})\cdots\\
& & (I+x_{3}^{N+1}A^{(N)}+\tau^{-1}x_{3}^{N}B^{(N)}+\tau^{-2}x_{3}^{N-1}C^{(N)})\widetilde{W}^{(N)}
\end{eqnarray*}
with suitable $A^{(j)}$, $B^{(j)}$ and $C^{(j)}$ for $0\leq j\leq N$
($C^{(0)}=0$), we can transform the equation (\ref{eq:2.13}) into
\begin{equation}
D_{3}\widetilde{W}^{(N)}=\{\tau\widetilde{K}+\widetilde{K}_{0}+\cdots+\widetilde{K}_{-N}+\widetilde{S}\}\widetilde{W}^{(N)},\label{eq:2.17}
\end{equation}
where $\widetilde{K}_{-j}$ for all $0\leq j\leq N$ are decoupled
and ord$(\widetilde{S})=-N-1$. Note that all diagonal blocks of $A^{(j)}$
and $B^{(j)}$ are zero.
Now in view of (\ref{eq:2.17}), we consider the equation
\[
D_{3}\hat{v}^{(N)}=\{\tau\lambda^{+}+\widetilde{K}_{0}(1,1)+\cdots+\widetilde{K}_{-N}(1,1)\}\hat{v}^{(N)},
\]
with an approximated solution of the form
\[
\hat{v}^{(N)}=\sum_{j=0}^{N+1}\hat{v}_{-j}^{(N)},
\]
where $\hat{v}_{-j}^{(N)}$ for $0\leq j\leq N$ satisfy
\[
\begin{cases}
D_{3}\hat{v}_{0}^{(N)}=\tau\lambda^{+}\hat{v}_{0}^{(N)}, & \hat{v}_{0}^{(N)}|_{x_{3}=0}=\chi_{t}(x')b\\
D_{3}\hat{v}_{-1}^{(N)}=\tau\lambda^{+}\hat{v}_{-1}^{(N)}+\widetilde{K}_{0}(1,1)\hat{v}_{0}^{(N)}, & \hat{v}_{-1}^{(N)}|_{x_{3}=0}=0\\
\vdots & \vdots\\
D_{3}\hat{v}_{-N-1}^{(N)}=\tau\lambda^{+}\hat{v}_{-N-1}^{(N)}+\sum_{j=0}^{N}\widetilde{K}_{-j}(1,1)\hat{v}_{j-N}^{(N)}, & \hat{v}_{-N-1}^{(N)}|_{x_{3}=0}=0,
\end{cases}
\]
where $\chi_{t}(x')\in C_{0}^{\infty}(\mathbb{R}^{2})$ and $b\in\mathbb{C}$.
It is easy to solve these: $\hat{v}_{0}^{(N)}=\exp(i\tau x_{3}\lambda^{+})\chi_{t}(x')b$
and $\hat{v}_{-1}^{(N)}=\exp(i\tau x_{3}\lambda^{+})\int_{0}^{x_{3}}\exp(-i\tau s\lambda^{+})\widetilde{K}_{0}(1,1)\hat{v}_{0}^{(N)}ds$.
Moreover, we can use the ord($x_{3})=-1$ and ord$(\partial_{j})=0$
with $j=1,2$ to derive that
\[
\|x_{3}^{\beta}\partial_{x'}^{\alpha}\hat{v}_{0}^{(N)}\|_{L^{2}(\mathbb{R}_{+}^{3})}\leq c\tau^{-\beta-1/2}
\]
for $\beta\in\mathbb{Z}_{+}$ and multi-index $\alpha$. Similarly,
we can compute
\begin{equation}
\|\hat{v}_{-1}^{(N)}\|_{L^{2}(\mathbb{R}_{+}^{3})}^{2}\leq c\tau^{-3}.\label{eq:2.18}
\end{equation}
For the derivation of (\ref{eq:2.18}), it can be found in \cite{JN-ods}.
Moreover, by similar computations we can show that
\[
\|x_{3}^{\beta}\partial_{x'}^{\alpha}(\hat{v}_{-1}^{(N)})\|_{L^{2}(\mathbb{R}_{+}^{3})}\leq c\tau^{-\beta-3/2}
\]
and for $\hat{v}_{-j}^{(N)}$, $j=2,\ldots,N+1$, we have
\[
\|x_{3}^{\beta}\partial_{x'}^{\alpha}(\hat{v}_{-j}^{(N)})\|_{L^{2}(\mathbb{R}_{+}^{3})}\leq c\tau^{-\beta-j-1/2}
\]
for $2\leq j\leq N+1$.
Thus, if we set $V^{(N)}=\left[\begin{array}{c}
\hat{v}^{(N)}\\
0
\end{array}\right]$, then we have
\[
\begin{cases}
D_{3}V^{(N)}-\{\tau\widetilde{K}+\widetilde{K}_{0}+\cdots+\widetilde{K}_{-N}\}V^{(N)}=\widetilde{R},\\
V^{(N)}|_{x_{3}=0}=\left[\begin{array}{c}
\chi_{t}(x')b\\
0
\end{array}\right],
\end{cases}
\]
where
\[
\|\widetilde{R}\|_{L^{2}(\mathbb{R}_{+}^{3})}\leq c\tau^{-N-3/2}.
\]
Define $\tilde{v}$ to be the first component of $\widetilde{Q}(I+x_{3}A^{(0)}+\tau^{-1}B^{(0)})(I+x_{3}^{2}A^{(1)}+\tau^{-1}x_{3}B^{(1)}+\tau^{-2}C^{(1)})\cdots(I+x_{3}^{N+1}A^{(N)}+\tau^{-1}x_{3}^{N}B^{(N)}+\tau^{-2}x_{3}^{N-1}C^{(N)})V^{(N)}$
and set $w=\exp(i\tau x'\cdot\xi')\tilde{v}$; then we have
\begin{eqnarray*}
w & = & q\exp(i\tau x'\cdot\xi')\exp(i\tau x_{3}\lambda^{+}(x'))\chi_{t}(x')b+\exp(i\tau x'\cdot\xi')\tilde{\gamma}(x,\tau)\\
& = & q\exp(i\tau x'\cdot\xi')\exp(-\tau x_{3}(-i\lambda^{+}(x')))\chi_{t}(x')b+\gamma(x,\tau)
\end{eqnarray*}
and
\[
w|_{x_{3}=0}=\exp(i\tau x'\cdot\xi')\{\chi_{t}(x')qb+\beta_{0}(x',\tau)\},
\]
where $\gamma$ satisfies the estimate (\ref{eq:2.5}) on $\Omega_{s}:=\{x_{3}>s\}\cap\Omega$
for $s\ge0$ and $\beta_{0}(x',\tau)=\tilde{\gamma}(x',0,\tau)$ is
supported in supp($\chi_{t})$ with $\|\beta_{0}(\cdot,\tau)\|_{L^{\infty}}\leq c\tau^{-1}$.
Also, we have
\[
\|M\tilde{v}\|_{L^{2}(\Omega_{0})}\leq c\tau^{-N-1/2}.
\]
Let $u=w+r=e^{i\tau x'\cdot\xi'}\tilde{v}+r$, where $r$ is the solution to
the boundary value problem
\begin{equation}
\begin{cases}
Lr=-e^{i\tau x'\cdot\xi'}\widetilde{M}\tilde{v} & \mbox{ in }\Omega_{0},\\
r=0 & \mbox{ on }\partial\Omega_{0}.
\end{cases}\label{eq:2.19}
\end{equation}
The existence of $r$ solving (\ref{eq:2.19}) follows from the Lax--Milgram
theorem, and we have the following estimate
\[
\|r\|_{H^{1}(\Omega_{0})}\leq c\tau^{-N-1/2},
\]
which is the estimate (\ref{eq:2.4}) on $\Omega_{0}$. This completes
the construction of the oscillating-decaying solutions for anisotropic
elliptic equations in the case $t=0$ and $\omega=(0,0,1)$. The
oscillating-decaying solution in the general case can be
obtained by a change of coordinates.
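We remark that the bound on $r$ comes from the right-hand side of (\ref{eq:2.19}): since $\widetilde{M}=a_{33}M$ with $a_{33}$ bounded, $\|e^{i\tau x'\cdot\xi'}\widetilde{M}\tilde{v}\|_{L^{2}(\Omega_{0})}=\|\widetilde{M}\tilde{v}\|_{L^{2}(\Omega_{0})}\leq c\|M\tilde{v}\|_{L^{2}(\Omega_{0})}\leq c\tau^{-N-1/2}$.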
\section{Tools and estimates}
In this section, we introduce the Runge approximation property and
a very useful elliptic estimate: Meyers $L^{p}$-estimates.
\subsection{Runge approximation property}
\begin{defn}
\cite{Lax} Let $L$ be a second order elliptic operator, solutions
of an equation $Lu=0$ are said to have the Runge approximation property
if, whenever $K$ and $\Omega$ are two simply connected domains with
$K\subset\Omega$, any solution in $K$ can be approximated uniformly
in compact subsets of $K$ by a sequence of solutions which can be
extended as solutions to $\Omega$.
\end{defn}
There are many applications for Runge approximation property in inverse
problems. Similar results for some elliptic operators can be found
in \cite{Lax}, \cite{Malgrange}. The following theorem is a classical
result for Runge approximation property for a second order elliptic
equation.
\begin{thm}
(Runge approximation property) Let $L_{0}\cdot=\nabla\cdot(A^{0}(x)\nabla\cdot)+k^{2}\cdot$
be a second order elliptic differential operator with $A^{0}(x)$
Lipschitz continuous. Assume that $k^{2}$ is not a Dirichlet eigenvalue
of $-\nabla\cdot(A^{0}(x)\nabla\cdot)$. Let $O$ and $\Omega$ be two
open bounded domains with smooth boundary in $\mathbb{R}^{3}$ such
that $O$ is convex and $\bar{O}\subset\Omega$.
Let $u_{0}\in H^{1}(O)$ satisfy
\[
L_{0}u_{0}=0\mbox{ in }O.
\]
Then for any compact subset $K\subset O$ and any $\epsilon>0$, there
exists $U\in H^{1}(\Omega)$ satisfying
\[
L_{0}U=0\mbox{ in }\Omega,
\]
such that
\[
\|u_{0}-U\|_{H^{1}(K)}\leq\epsilon.
\]
\end{thm}
Note that since we have assumed $A^{0}\in B^{\infty}(\mathbb{R}^{3})$,
it is easy to see that $A^{0}(x)$ is Lipschitz continuous, so the equation
possesses the weak unique continuation property. The proof of the theorem can be found
in \cite{Lax} and \cite{JN-ods}; we omit the details here.
\subsection{Elliptic estimates and some identities}
We need some estimates for solutions to some Dirichlet problems which
will be used in next section. Recall that, for $f\in H^{1/2}(\partial\Omega)$,
let $u$ and $u_{0}$ be solutions to the Dirichlet problems (\ref{eq:1.1})
and (\ref{eq:1.2}), respectively. Note that $a_{ij}(x)=a_{ij}^{0}(x)\chi_{\Omega\backslash D}+\widetilde{a_{ij}}(x)\chi_{D}$
and we set $w=u-u_{0}$, then $w$ satisfies the Dirichlet problem
\begin{equation}
\begin{cases}
\nabla\cdot(A(x)\nabla w)+k^{2}w=-\nabla\cdot((\widetilde{A}\chi_{D}-A^{0}\chi_{D})\nabla u_{0}) & \mbox{ in }\Omega\\
w=0 & \mbox{ on }\partial\Omega
\end{cases}\label{eq:3.1}
\end{equation}
where $A(x)=(a_{ij}(x))$, $A^{0}(x)=(a_{ij}^{0}(x))$ and $\widetilde{A}(x)=(\widetilde{a_{ij}}(x))$.
Then we have some estimates for $w$.
\begin{lem}
There exists a positive constant $C$ independent of $w$ such that
we have
\[
\|w\|_{L^{2}(\Omega)}\leq C\|\nabla w\|_{L^{p}(\Omega)}
\]
for $\dfrac{6}{5}\leq p\leq2$ if $n=3$.
\end{lem}
The proof follows as in \cite{Sini}, by the Friedrichs inequality (see \cite{Majza},
p.~258) and standard elliptic regularity.
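More precisely, since $w|_{\partial\Omega}=0$, the Sobolev embedding $W^{1,p}(\Omega)\hookrightarrow L^{2}(\Omega)$ (valid for $p\geq\frac{6}{5}$ when $n=3$, since then $\frac{3p}{3-p}\geq2$) together with the Friedrichs inequality gives $\|w\|_{L^{2}(\Omega)}\leq C\|w\|_{W^{1,p}(\Omega)}\leq C\|\nabla w\|_{L^{p}(\Omega)}$.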
\begin{lem}
There exists $\epsilon\in(0,1)$, depending only on $\Omega$, $A^{0}(x)=(a_{ij}^{0}(x))$
and $\widetilde{A}(x)=(\widetilde{a_{ij}}(x))$ such that
\[
\|\nabla w\|_{L^{p}(\Omega)}\leq C\|u_{0}\|_{W^{1,p}(D)}
\]
for $\max\{2-\epsilon,\dfrac{6}{5}\}<p\leq2$ if $n=3$.\end{lem}
\begin{proof}
The proof also follows \cite{Sini}. Set $f:=-(\widetilde{A}\chi_{D}-A^{0}\chi_{D})\nabla u_{0}$,
$h:=0$. Let $w_{0}$ be a solution of
\begin{equation}
\begin{cases}
\nabla\cdot(A(x)\nabla w_{0})+k^{2}w_{0}=\nabla\cdot f & \mbox{ in }\Omega,\\
w_{0}=0 & \mbox{ on }\partial\Omega.
\end{cases}\label{eq:3.2}
\end{equation}
The following $L^{p}$-estimate of $w_{0}$ follows from \cite{Meyer}:
\begin{equation}
\|\nabla w_{0}\|_{L^{p}(\Omega)}\leq C\|f\|_{L^{p}(\Omega)}\label{eq:3.3}
\end{equation}
for $p\in(\max\{2-\epsilon,\dfrac{6}{5}\},2]$, where $\epsilon\in(0,1)$
depends on $\Omega$, $A^{0}(x)=(a_{ij}^{0}(x))$ and $\widetilde{A}(x)=(\widetilde{a_{ij}}(x))$.
We set $W:=w-w_{0}$, then since $w=w_{0}+W$, we have
\begin{equation}
\|\nabla w\|_{L^{p}(\Omega)}\leq C(\|\nabla w_{0}\|_{L^{p}(\Omega)}+\|\nabla W\|_{L^{p}(\Omega)}).\label{eq:3.4}
\end{equation}
Moreover, $W$ satisfies
\begin{equation}
\begin{cases}
\nabla\cdot(A(x)\nabla W)+k^{2}W=0 & \mbox{ in }\Omega,\\
W=0 & \mbox{ on }\partial\Omega.
\end{cases}\label{eq:3.5}
\end{equation}
By the standard elliptic regularity, we have
\[
\|W\|_{H^{1}(\Omega)}\leq C\|w_{0}\|_{L^{2}(\Omega)}.
\]
Thus, we get for $p\leq2$,
\begin{equation}
\|\nabla W\|_{L^{p}(\Omega)}\leq C\|\nabla W\|_{L^{2}(\Omega)}\leq C\|W\|_{H^{1}(\Omega)}\leq C\|w_{0}\|_{L^{2}(\Omega)}.\label{eq:3.6}
\end{equation}
By Sobolev embedding theorem, we get
\begin{equation}
\|w_{0}\|_{L^{2}(\Omega)}\leq C\|w_{0}\|_{W^{1,p}(\Omega)}\label{eq:3.7}
\end{equation}
for $p\geq\dfrac{6}{5}$ if $n=3$. Using Poincar\'e's inequality
in $L^{p}$ spaces (since $w_{0}|_{\partial\Omega}=0$), we have
\begin{equation}
\|w_{0}\|_{L^{2}(\Omega)}\leq C\|\nabla w_{0}\|_{L^{p}(\Omega)}\label{eq:3.8}
\end{equation}
for $p\geq\dfrac{6}{5}$ if $n=3$. Combining (\ref{eq:3.3}) with
(\ref{eq:3.4}), (\ref{eq:3.6}) and (\ref{eq:3.8}), we can obtain
\[
\|\nabla w\|_{L^{p}(\Omega)}\leq C\|f\|_{L^{p}(\Omega)}\leq C\|u_{0}\|_{W^{1,p}(D)}
\]
for $\max\{2-\epsilon,\dfrac{6}{5}\}<p\leq2$ if $n=3$.
\end{proof}
Recall the Dirichlet-to-Neumann maps defined in Section
1: $\Lambda_{D}f:=A\nabla u\cdot\nu$ and $\Lambda_{\emptyset}f:=A^{0}\nabla u_{0}\cdot\nu$,
where $\nu=(\nu_{1},\nu_{2},\nu_{3})$ is an outer normal on $\partial\Omega$.
We next prove some useful identities.
\begin{lem}
$\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma=\mbox{Re}\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla u}dx$.\end{lem}
\begin{proof}
It is clear that
\begin{eqnarray*}
\int_{\partial\Omega}A\nabla u\cdot\nu\bar{\varphi}d\sigma & = & \int_{\Omega}\nabla\cdot(A\nabla u\bar{\varphi})dx\\
& = & \int_{\Omega}\nabla\cdot(A\nabla u)\bar{\varphi}+A\nabla u\cdot\overline{\nabla\varphi}dx\\
& = & -k^{2}\int_{\Omega}u\bar{\varphi}dx+\int_{\Omega}A\nabla u\cdot\overline{\nabla\varphi}dx
\end{eqnarray*}
$\forall\varphi\in H^{1}(\Omega)$. Since $u=u_{0}=f$ on $\partial\Omega$,
the left hand side of the identity has the same value whether we take
$\varphi=u$ or $\varphi=u_{0}$, and it is equal to $\int_{\partial\Omega}\Lambda_{D}f\bar{f}d\sigma$.
\begin{eqnarray*}
\int_{\partial\Omega}\Lambda_{D}f\bar{f}d\sigma & = & -k^{2}\int_{\Omega}u\overline{u_{0}}dx+\int_{\Omega}A\nabla u\cdot\overline{\nabla u_{0}}dx\\
& = & -k^{2}\int_{\Omega}|u|^{2}dx+\int_{\Omega}A\nabla u\cdot\overline{\nabla u}dx.
\end{eqnarray*}
The right hand side of the identity above is real. Hence, by taking
the real part, we have
\[
\int_{\partial\Omega}\Lambda_{D}f\bar{f}d\sigma=-k^{2}\mbox{Re}\int_{\Omega}u\overline{u_{0}}dx+\mbox{Re}\int_{\Omega}A\nabla u\cdot\overline{\nabla u_{0}}dx
\]
and
\[
\int_{\partial\Omega}\Lambda_{\emptyset}f\bar{f}d\sigma=-k^{2}\mbox{Re}\int_{\Omega}u\overline{u_{0}}dx+\mbox{Re}\int_{\Omega}A^{0}\nabla u\cdot\overline{\nabla u_{0}}dx.
\]
Therefore, we have
\begin{eqnarray}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma & = & \mbox{Re}\int_{\Omega}(A-A^{0})\nabla u\cdot\overline{\nabla u_{0}}dx\label{eq:3.9}\\
& = & \mbox{Re}\int_{\Omega}(\widetilde{A}-A^{0})\chi_{D}\nabla u\cdot\overline{\nabla u_{0}}dx.\nonumber
\end{eqnarray}
\end{proof}
The estimates in the following lemma play an important role in our
reconstruction algorithm.
\begin{lem}
We have the following identities:
\begin{eqnarray}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma & = & -\int_{\Omega}A\nabla w\cdot\overline{\nabla w}dx+k^{2}\int_{\Omega}|w|^{2}dx\label{eq:3.10}\\
& & +\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla u_{0}}dx,\nonumber
\end{eqnarray}
\begin{eqnarray}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma & = & \int_{\Omega}A^{0}\nabla w\cdot\overline{\nabla w}dx-k^{2}\int_{\Omega}|w|^{2}dx\label{eq:3.11}\\
& & +\int_{D}(\widetilde{A}-A^{0})\nabla u\cdot\overline{\nabla u}dx.\nonumber
\end{eqnarray}
In particular, we have
\begin{equation}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma\leq k^{2}\int_{\Omega}|w|^{2}dx+\widehat{\Lambda}\int_{D}|\nabla u_{0}|^{2}dx,\label{eq:3.12}
\end{equation}
\begin{equation}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma\geq c\int_{D}|\nabla u_{0}|^{2}dx-k^{2}\int_{\Omega}|w|^{2}dx,\label{eq:3.13}
\end{equation}
where $c$ depending only on $\widetilde{\lambda}$ and $\lambda^{0}$.\end{lem}
\begin{proof}
Multiplying the identity
\[
\nabla\cdot(A(x)\nabla w)+k^{2}w+\nabla\cdot((\widetilde{A}\chi_{D}-A^{0}\chi_{D})\nabla u_{0})=0
\]
by $\bar{w}$ and integrating over $\Omega$, we get
\begin{eqnarray*}
0 & = & \int_{\Omega}\nabla\cdot(A\nabla w)\bar{w}dx+\int_{\Omega}\nabla\cdot((\widetilde{A}-A^{0})\chi_{D}\nabla u_{0})\bar{w}dx+k^{2}\int_{\Omega}|w|^{2}dx\\
 & = & -\int_{\Omega}A\nabla w\cdot\overline{\nabla w}dx+\int_{\partial\Omega}A\dfrac{\partial w}{\partial\nu}\bar{w}d\sigma-\int_{\Omega}(\widetilde{A}-A^{0})\chi_{D}\nabla u_{0}\cdot\overline{\nabla w}dx\\
 &  & +\int_{\partial\Omega}(\widetilde{A}-A^{0})\chi_{D}\dfrac{\partial u_{0}}{\partial\nu}\bar{w}d\sigma+k^{2}\int_{\Omega}|w|^{2}dx\\
 & = & -\int_{\Omega}A\nabla w\cdot\overline{\nabla w}dx-\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla w}dx+k^{2}\int_{\Omega}|w|^{2}dx\\
 & = & -\int_{\Omega}A\nabla w\cdot\overline{\nabla w}dx-\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla u}dx+k^{2}\int_{\Omega}|w|^{2}dx\\
 &  & +\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla u_{0}}dx,
\end{eqnarray*}
where the boundary terms vanish since $w=0$ on $\partial\Omega$. Taking the real part and using (\ref{eq:3.9}) we obtain
\[
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma=-\int_{\Omega}A\nabla w\cdot\overline{\nabla w}dx+\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla u_{0}}dx+k^{2}\int_{\Omega}|w|^{2}dx.
\]
Similarly, multiplying the identity
\[
\nabla\cdot((\widetilde{A}-A^{0})\chi_{D}\nabla u)+\nabla\cdot(A^{0}\nabla w)+k^{2}w=0
\]
by $\bar{w}$ and integrating over $\Omega$, we get
\begin{eqnarray*}
0 & = & \int_{\Omega}\nabla\cdot((\widetilde{A}-A^{0})\chi_{D}\nabla u)\bar{w}dx+\int_{\Omega}\nabla\cdot(A^{0}\nabla w)\bar{w}dx+k^{2}\int_{\Omega}|w|^{2}dx\\
& = & -\int_{D}(\widetilde{A}-A^{0})\nabla u\cdot\overline{\nabla w}dx-\int_{\Omega}A^{0}\nabla w\cdot\overline{\nabla w}dx+k^{2}\int_{\Omega}|w|^{2}dx\\
& = & -\int_{D}(\widetilde{A}-A^{0})\nabla u\cdot\overline{\nabla u}dx+\int_{D}(\widetilde{A}-A^{0})\nabla u\cdot\overline{\nabla u_{0}}dx+k^{2}\int_{\Omega}|w|^{2}dx\\
& & -\int_{\Omega}A^{0}\nabla w\cdot\overline{\nabla w}dx,
\end{eqnarray*}
and taking the real part and using (\ref{eq:3.9}) again, we obtain
\[
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma=\int_{\Omega}A^{0}\nabla w\cdot\overline{\nabla w}dx-k^{2}\int_{\Omega}|w|^{2}dx+\int_{D}(\widetilde{A}-A^{0})\nabla u\cdot\overline{\nabla u}dx.
\]
For the remaining part, (\ref{eq:3.12}) is an easy consequence of
(\ref{eq:3.10}), upon discarding the nonpositive term $-\int_{\Omega}A\nabla w\cdot\overline{\nabla w}dx$:
\begin{eqnarray*}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma & \leq & k^{2}\int_{\Omega}|w|^{2}dx+\int_{D}(\widetilde{A}-A^{0})\nabla u_{0}\cdot\overline{\nabla u_{0}}dx\\
& \leq & k^{2}\int_{\Omega}|w|^{2}dx+\widehat{\Lambda}\int_{D}|\nabla u_{0}|^{2}dx
\end{eqnarray*}
Finally, for the lower bound, we use
\begin{eqnarray*}
A^{0}\nabla w\cdot\overline{\nabla w}+(\widetilde{A}-A^{0})\nabla u\cdot\overline{\nabla u} & = & \widetilde{A}\nabla u\cdot\overline{\nabla u}-2\mbox{Re}A^{0}\nabla u\cdot\overline{\nabla u_{0}}+A^{0}\nabla u_{0}\cdot\overline{\nabla u_{0}}\\
& = & \widetilde{A}(\nabla u-(\widetilde{A})^{-1}A^{0}\nabla u_{0})\cdot(\overline{\nabla u-(\widetilde{A})^{-1}A^{0}\nabla u_{0}})\\
& & +(A^{0}-(\widetilde{A})^{-1}(A^{0})^{2})\nabla u_{0}\cdot\overline{\nabla u_{0}}\\
& \geq & (A^{0}-(\widetilde{A})^{-1}(A^{0})^{2})\nabla u_{0}\cdot\overline{\nabla u_{0}}\\
& \geq & c|\nabla u_{0}|^{2},
\end{eqnarray*}
since $\widetilde{A}(\nabla u-(\widetilde{A})^{-1}A^{0}\nabla u_{0})\cdot(\overline{\nabla u-(\widetilde{A})^{-1}A^{0}\nabla u_{0}})\geq0$,
and $A^{0}-(\widetilde{A})^{-1}(A^{0})^{2}=(\widetilde{A})^{-1}(\widetilde{A}-A^{0})A^{0}$
has a positive lower bound depending only on $\widetilde{\lambda}$
and $\lambda^{0}$.
\end{proof}
Before stating our main theorem, we need to estimate $\|w\|_{L^{2}(\Omega)}$.
Fortunately, we can use the Meyers $L^{p}$ estimate to overcome
this difficulty (see Lemma 3.2 and Lemma 3.3). For the upper bound
of $\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma$
(see (\ref{eq:3.11})), we use $\|w\|_{L^{2}(\Omega)}\leq C\|u_{0}\|_{W^{1,p}(D)}$
for $p\leq2$. Then we have
\begin{equation}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma\leq C\|u_{0}\|_{W^{1,p}(D)}^{2}.\label{eq:3.14}
\end{equation}
By (\ref{eq:3.13}) and the Meyers $L^{p}$ estimate $\|w\|_{L^{2}(\Omega)}\leq C\|u_{0}\|_{W^{1,p}(D)}$,
we have
\begin{equation}
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f\bar{f}d\sigma\ge c\int_{\Omega}|\nabla u_{0}|^{2}dx-c\|u_{0}\|_{W^{1,p}(D)}^{2}.\label{eq:3.15}
\end{equation}
\section{Detecting the convex hull of the unknown obstacle}
\subsection{Main theorem}
Recall that we have constructed the oscillating-decaying solutions
in Section 2, and note that these solutions cannot be defined on the
whole domain; that is, the oscillating-decaying solution $u_{\chi_{t},b,t,N,\omega}(x,\tau)$
is only defined on $\Omega_{t}(\omega)\subsetneq\Omega$. Nevertheless,
with the help of the Runge approximation property, we can prove that
one can determine the convex hull of the unknown obstacle $D$ by $\Lambda_{D}f$
for infinitely many $f$.
We define $B$ to be an open ball in $\mathbb{R}^{3}$ such that $\overline{\Omega}\subset B$.
Assume that $\widetilde{\Omega}\subset\mathbb{R}^{3}$ is an open
Lipschitz domain with $\overline{B}\subset\widetilde{\Omega}$. As
in Section 2, fix $\omega\in S^{2}$ and let $\{\eta,\zeta,\omega\}$
form an orthonormal basis of $\mathbb{R}^{3}$. Suppose $t_{0}=\inf_{x\in D}x\cdot\omega=x_{0}\cdot\omega$,
where $x_{0}=x_{0}(\omega)\in\partial D$. For any $t\leq t_{0}$
and $\epsilon>0$ small enough, we can construct
\begin{eqnarray*}
u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega} & = & \chi_{t-\epsilon}(x')Q_{t-\epsilon}(x')e^{i\tau x\cdot\xi}e^{-\tau(x\cdot\omega-(t-\epsilon))A_{t-\epsilon}(x')}b+\gamma_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\\
& & +r_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}
\end{eqnarray*}
to be the oscillating-decaying solution for $\nabla\cdot(A^{0}(x)\nabla\cdot)+k^{2}\cdot$
in $B_{t-\epsilon}(\omega)=B\cap\{x\cdot\omega>t-\epsilon\}$, where
$\chi_{t-\epsilon}(x')\in C_{0}^{\infty}(\mathbb{R}^{2})$ and $b\in\mathbb{C}$.
Note that in section 2, we have assumed the leading coefficient $A^{0}(x)\in B^{\infty}(\mathbb{R}^{3})$.
Similarly, we have the oscillating-decaying solution
\[
u_{\chi_{t},b,t,N,\omega}(x,\tau)=\chi_{t}(x')Q_{t}e^{i\tau x\cdot\xi}e^{-\tau(x\cdot\omega-t)A_{t}(x')}b+\gamma_{\chi_{t},b,t,N,\omega}(x,\tau)+r_{\chi_{t},b,t,N,\omega}
\]
for $L_{A^{0}}$ in $B_{t}(\omega)$. In fact, for any $\tau$, $u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}(x,\tau)\to u_{\chi_{t},b,t,N,\omega}(x,\tau)$
in an appropriate sense as $\epsilon\to0$. For the details and related
results, we refer the reader to \cite{JN-ods}; here we only list the
consequences we need.
\[
\chi_{t-\epsilon}(x')Q_{t-\epsilon}(x')e^{i\tau x\cdot\xi}e^{-\tau(x\cdot\omega-(t-\epsilon))A_{t-\epsilon}(x')}b\to\chi_{t}(x')Q_{t}e^{i\tau x\cdot\xi}e^{-\tau(x\cdot\omega-t)A_{t}(x')}b
\]
in $H^{2}(B_{t}(\omega))$ as $\epsilon$ tends to 0,
\[
\gamma_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\to\gamma_{\chi_{t},b,t,N,\omega}
\]
in $H^{2}(B_{t}(\omega))$ as $\epsilon$ tends to 0, and finally,
\[
r_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\to r_{\chi_{t},b,t,N,\omega}
\]
in $H^{1}(B_{t}(\omega))$ as $\epsilon$ tends to 0.
Obviously, $B_{t-\epsilon}(\omega)$ is a convex set and $\overline{\Omega_{t}(\omega)}\subset B_{t-\epsilon}(\omega)$
for all $t\leq t_{0}$. By using the Runge approximation property,
we can see that there exists a sequence of functions $\tilde{u}_{\epsilon,j}$,
$j=1,2,\cdots$, such that
\[
\tilde{u}_{\epsilon,j}\to u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\mbox{ in }H^{1}(B_{t}(\omega)),
\]
where $\tilde{u}_{\epsilon,j}\in H^{1}(\widetilde{\Omega})$ satisfy
$L_{A^{0}}\tilde{u}_{\epsilon,j}=0$ in $\widetilde{\Omega}$ for
all $\epsilon,j$. Define the indicator function $I(\tau,\chi_{t},b,t,\omega)$
by the formula:
\[
I(\tau,\chi_{t},b,t,\omega)=\lim_{\epsilon\to0}\lim_{j\to\infty}\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f_{\epsilon,j}\overline{f_{\epsilon,j}}d\sigma,
\]
where $f_{\epsilon,j}=\tilde{u}_{\epsilon,j}|_{\partial\Omega}$.
Note that in \cite{JN-ods}, the authors assume that $D$ satisfies the
following condition: for each $\omega\in S^{2}$, there exist $c_{\omega}>0$,
$\epsilon_{\omega}>0$ and $p_{\omega}\in[0,1]$ such that
\[
\dfrac{1}{c_{\omega}}s^{p_{\omega}}\leq\mu(\{x\in D|x\cdot\omega=t_{0}+s\})\leq c_{\omega}s^{p_{\omega}}\mbox{ for all }s\in(0,\epsilon_{\omega}),
\]
where $\mu$ is the surface measure, but we drop this condition in
the following theorem. Now the characterization of the convex hull
of $D$ is based on the following theorem:
\begin{thm}
(1) If $t<t_{0}$, then for any $\chi_{t}\in C_{0}^{\infty}(\mathbb{R}^{2})$
and $b\in\mathbb{C}$, we have
\[
\limsup_{\tau\to\infty}|I(\tau,\chi_{t},b,t,\omega)|=0.
\]
(2) If $t=t_{0}$, then for any $\chi_{t_{0}}\in C_{0}^{\infty}(\mathbb{R}^{2})$
with $x_{0}'=(x_{0}\cdot\eta,x_{0}\cdot\zeta)$ being an interior
point of $\mbox{supp}(\chi_{t_{0}})$ and $0\neq b\in\mathbb{C}$,
we have
\[
\liminf_{\tau\to\infty}|I(\tau,\chi_{t_{0}},b,t_{0},\omega)|>0.
\]
\end{thm}
\begin{proof}
(1) Note that we have a sequence of functions $\{\tilde{u}_{\epsilon,j}\}$
satisfying the equation $\nabla\cdot(A^{0}\nabla u)+k^{2}u=0\mbox{ in }\Omega$.
As at the beginning of Section 3, let $w_{\epsilon,j}=u-\tilde{u}_{\epsilon,j}$;
then $w_{\epsilon,j}$ satisfies the Dirichlet problem
\[
\begin{cases}
\nabla\cdot(A(x)\nabla w_{\epsilon,j})+k^{2}w_{\epsilon,j}=-\nabla\cdot((\widetilde{A}\chi_{D}-A^{0}\chi_{D})\nabla\tilde{u}_{\epsilon,j}) & \mbox{ in }\Omega,\\
w_{\epsilon,j}=0 & \mbox{ on }\partial\Omega.
\end{cases}
\]
So we can apply (\ref{eq:3.14}) directly, which means
\[
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f_{\epsilon,j}\overline{f_{\epsilon,j}}d\sigma\leq C\|\tilde{u}_{\epsilon,j}\|_{W^{1,p}(D)}^{2}\leq C\|\tilde{u}_{\epsilon,j}\|_{H^{1}(D)}^{2},
\]
where the last inequality is obtained by H\"{o}lder's inequality.
By the Runge approximation property we have
\[
\tilde{u}_{\epsilon,j}\to u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\mbox{ in }H^{1}(B_{t}(\omega))
\]
as $j\to\infty$ and we know that the obstacle $D\subset B_{t}(\omega)$,
so we have
\[
\|\tilde{u}_{\epsilon,j}-u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\|_{H^{1}(D)}\to0
\]
as $j\to\infty$ for all $\epsilon>0$. Moreover, we know that $u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}\to u_{\chi_{t},b,t,N,\omega}$
as $\epsilon\to0$ in $H^{1}(B_{t}(\omega))$, which implies
\[
\|\tilde{u}_{\epsilon,j}-u_{\chi_{t},b,t,N,\omega}\|_{H^{1}(D)}\to0
\]
as $\epsilon\to0$, $j\to\infty$. Now by the definition of $I(\tau,\chi_{t},b,t,\omega)$,
we have
\[
I(\tau,\chi_{t},b,t,\omega)\leq C\|u_{\chi_{t},b,t,N,\omega}\|_{H^{1}(D)}^{2}.
\]
Now if $t<t_{0}$, we substitute $u_{\chi_{t},b,t,N,\omega}=w_{\chi_{t},b,t,N,\omega}+r_{\chi_{t},b,t,N,\omega}$
with $w_{\chi_{t},b,t,N,\omega}$ being described by (\ref{eq:2.3})
into
\[
I(\tau,\chi_{t},b,t,\omega)\leq C(\int_{D}|u_{\chi_{t},b,t,N,\omega}|^{2}dx+\int_{D}|\nabla u_{\chi_{t},b,t,N,\omega}|^{2}dx)
\]
and use estimates (\ref{eq:2.4}), (\ref{eq:2.5}) to obtain that
\[
|I(\tau,\chi_{t},b,t,\omega)|\leq C\tau^{-2N-1}
\]
which gives
\[
\limsup_{\tau\to\infty}|I(\tau,\chi_{t},b,t,\omega)|=0.
\]
For the second part, we use (\ref{eq:3.15}), which means that we
have
\[
\int_{\partial\Omega}(\Lambda_{D}-\Lambda_{\emptyset})f_{\epsilon,j}\overline{f_{\epsilon,j}}d\sigma\geq c\int_{D}|\nabla\tilde{u}_{\epsilon,j}|^{2}dx-k^{2}\int_{\Omega}|\tilde{w}_{\epsilon,j}|^{2}dx-\int_{D}|\tilde{u}_{\epsilon,j}|^{2}dx.
\]
From this and a similar argument to that in the first part, it
is easy to get
\begin{equation}
I(\tau,\chi_{t},b,t,\omega)\geq c\int_{D}|\nabla u_{\chi_{t},b,t,N,\omega}|^{2}dx-c\|u_{\chi_{t},b,t,N,\omega}\|_{W^{1,p}(D)}^{2},\label{eq:4.1}
\end{equation}
where $w_{\chi_{t},b,t,N,\omega}=u-u_{\chi_{t},b,t,N,\omega}$.
\end{proof}
For the remaining part, we need some extra estimates in the following
section.
\subsection{End of the proof of Theorem 4.1}
In view of the lower bound, we need to introduce the sets $D_{j,\delta}\subset D$,
$D_{\delta}\subset D$ in the following. Recall that $h_{D}(\omega)=\inf_{x\in D}x\cdot\omega$
and $t_{0}=h_{D}(\omega)=x_{0}\cdot\omega$ for some $x_{0}\in\partial D$.
For every $\alpha\in K:=\partial D\cap\{x\cdot\omega=h_{D}(\omega)\}$,
define $B(\alpha,\delta)=\{x\in\mathbb{R}^{3};|x-\alpha|<\delta\}$
($\delta>0$). Note that $K\subset\cup_{\alpha\in K}B(\alpha,\delta)$
and $K$ is compact, so there exist $\alpha_{1},\cdots,\alpha_{m}\in K$
such that $K\subset\cup_{j=1}^{m}B(\alpha_{j},\delta)$. Thus, we
define
\[
D_{j,\delta}:=D\cap B(\alpha_{j},\delta)\mbox{ and }D_{\delta}:=\cup_{j=1}^{m}D_{j,\delta}.
\]
It is easy to see that
\[
\int_{D\backslash D_{\delta}}e^{-p\tau(x\cdot\omega-t_{0})A_{t_{0}}(x')}bdx=O(e^{-pa\tau}),
\]
where $A_{t_{0}}(x')\in B^{\infty}(\mathbb{R}^{2})$ is bounded and
its real part is strictly greater than 0, so there exists $a>0$ such that
$\mbox{Re}A_{t_{0}}(x')\geq a>0$. Let $\alpha_{j}\in K$. By rotation
and translation, we may assume $\alpha_{j}=0$ and that the vector $\alpha_{j}-x_{0}=-x_{0}$
is parallel to $e_{3}=(0,0,1)$. Therefore, we consider the change
of coordinates near each $\alpha_{j}$ as follows:
\[
\begin{cases}
y'=x'\\
y_{3}=x\cdot\omega-t_{0},
\end{cases}
\]
where $x=(x_{1},x_{2},x_{3})=(x',x_{3})$ and $y=(y_{1},y_{2},y_{3})=(y',y_{3})$.
Denote the parametrization of $\partial D$ near $\alpha_{j}$ by
$l_{j}(y')$, then we have the following estimates.
\begin{lem}
For $q\leq2$, we have
\begin{eqnarray}
\int_{D}|u_{\chi_{t_{0}},b,t_{0},N,\omega}|^{q}dx & \leq & c\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-aq\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-qa\delta\tau})\nonumber \\
& & +O(e^{-qa\tau})+O(\tau^{-3})+O(\tau^{-2N-1}),\label{eq:4.2}
\end{eqnarray}
\begin{eqnarray}
\int_{D}|u_{\chi_{t_{0}},b,t_{0},N,\omega}|^{2}dx & \geq & C\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})\nonumber \\
& & +O(\tau^{-3})+O(\tau^{-2N-1}),\label{eq:4.3}
\end{eqnarray}
\begin{eqnarray}
\int_{D}|\nabla u_{\chi_{t_{0}},b,t_{0},N,\omega}|^{q}dx & \leq & C\tau^{q-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-qa\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-aq\delta\tau})\nonumber \\
& & +O(e^{-qa\tau})+O(\tau^{-1})+O(\tau^{-2N-1}),\label{eq:4.4}
\end{eqnarray}
and
\begin{eqnarray}
\int_{D}|\nabla u_{\chi_{t_{0}},b,t_{0},N,\omega}|^{2}dx & \geq & C\tau\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2\delta a\tau})\nonumber \\
& & +O(\tau^{-1})+O(\tau^{-2N-1}).\label{eq:4.5}
\end{eqnarray}
\end{lem}
\begin{proof}
The proof follows from \cite{Sini}. We only prove (\ref{eq:4.2})
and (\ref{eq:4.3}) and the proof of (\ref{eq:4.4}) and (\ref{eq:4.5})
are similar arguments.
For (\ref{eq:4.2}):
\begin{eqnarray*}
\int_{D}|u_{\chi_{t_{0}},b,t_{0},N,\omega}|^{q}dx & \leq & C\int_{D}e^{-qa\tau(x\cdot\omega-t_{0})}dx+C_{q}\int_{D}|\gamma_{\chi_{t_{0}},b,t_{0},N,\omega}|^{q}dx\\
& & +C_{q}\int_{D}|r_{\chi_{t_{0}},b,t_{0},N,\omega}|^{q}dx\\
& \leq & C\int_{D_{\delta}}e^{-qa\tau(x\cdot\omega-t_{0})}dx+C\int_{D\backslash D_{\delta}}e^{-qa\tau(x\cdot\omega-t_{0})}dx\\
& & +C\int_{D}|\gamma_{\chi_{t_{0}},b,t_{0},N,\omega}|^{2}dx+C\int_{D}|r_{\chi_{t_{0}},b,t_{0},N,\omega}|^{2}dx\\
& \leq & C\sum_{j=1}^{m}\iint_{|y'|<\delta}dy'\int_{l_{j}(y')}^{\delta}e^{-qa\tau y_{3}}dy_{3}+Ce^{-qa\tau}\\
& & +C\|\gamma_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}^{2}+C\|r_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{H^{1}(D)}^{2}\\
& \leq & C\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-aq\tau l_{j}(y')}dy'-\dfrac{C}{q}\tau^{-1}e^{-qa\delta\tau}\\
& & +Ce^{-qa\tau}+C\tau^{-3}+C\tau^{-2N-1}
\end{eqnarray*}
note that $D\subset\Omega_{t_{0}}(\omega)$, which proves (\ref{eq:4.2}).
For (\ref{eq:4.3}):
\begin{eqnarray*}
\int_{D}|u_{\chi_{t_{0}},b,t_{0},N.\omega}|^{2}dx & \geq & C\int_{D}e^{-2a\tau(x\cdot\omega-t_{0})}dx-C\|\gamma_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(\Omega_{t_{0}}(\omega))}^{2}\\
& & -C\|r_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{H^{1}(\Omega_{t_{0}}(\omega))}^{2}\\
& \geq & C\int_{D_{\delta}}e^{-2a\tau(x\cdot\omega-t_{0})}dx-C\tau^{-3}-C\tau^{-2N-1}\\
& = & C\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'-\dfrac{C}{2}\tau^{-1}e^{-2a\tau}\\
& & -C\tau^{-3}-C\tau^{-2N-1}.
\end{eqnarray*}
\end{proof}
Recall that we have (\ref{eq:4.1}), the lower bound of $I(\tau,\chi_{t_{0}},b,t_{0},\omega)$,
so we want to compare the order (in $\tau$) of $\|u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}$,
$\|\nabla u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}$, $\|u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{p}(D)}$
and $\|\nabla u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{p}(D)}$.
\begin{lem}
For $\max\{2-\epsilon,\dfrac{6}{5}\}<p\leq2$, we have the estimates
as follows:
\[
\dfrac{\|\nabla u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}^{2}}{\|u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}^{2}}\geq C\tau^{2},\mbox{ }\dfrac{\|u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{p}(D)}^{2}}{\|u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}^{2}}\leq C\tau^{1-\frac{2}{p}}
\]
and
\[
\dfrac{\|\nabla u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{p}(D)}^{2}}{\|u_{\chi_{t_{0}},b,t_{0},N,\omega}\|_{L^{2}(D)}^{2}}\leq C\tau^{3-\frac{2}{p}}
\]
for $\tau\gg1$.\end{lem}
\begin{proof}
The idea of the proof comes from \cite{Sini}, but here we still need
to deal with the $\gamma_{\chi_{t_{0}},b,t_{0},N,\omega}$ and $r_{\chi_{t_{0}},b,t_{0},N,\omega}$
in $D\subset\Omega_{t_{0}}(\omega)$. Note that if $\partial D$ is
Lipschitz, in our parametrization $l_{j}(y')$, we have $l_{j}(y')\leq C|y'|$.
Hence,
\begin{eqnarray*}
\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy' & \geq & C\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2\tau|y'|}dy'\\
& \geq & C\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\tau\delta}e^{-2|y'|}dy'\\
& = & O(\tau^{-1}).
\end{eqnarray*}
For simplicity, we define $u_{0}:=u_{\chi_{t_{0}},b,t_{0},N,\omega}$
in the following calculations. Using lemma 4.2, we obtain
\[
\dfrac{\int_{D}|\nabla u_{0}|^{2}dx}{\int_{D}|u_{0}|^{2}dx}
\]
\begin{eqnarray*}
& \geq & C\dfrac{\tau\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-1})+O(\tau^{-2N-1})}{\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-3})+O(\tau^{-2N-1})}\\
& \geq & C\tau^{2}\dfrac{1+\frac{O(\tau^{-2}e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N-2})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}}{1+\frac{O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}}\\
& = & O(\tau^{2})
\end{eqnarray*}
as $\tau\gg1$, where
\[
\lim_{\tau\to\infty}\dfrac{O(\tau^{-2}e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N-2})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}=0
\]
and
\[
\lim_{\tau\to\infty}\dfrac{O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}=0.
\]
Now, by using H\"{o}lder's inequality with the exponent
$q=\dfrac{2}{p}\geq1$, we have
\[
\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-pa\tau l_{j}(y')}dy'\leq C(\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy')^{\frac{p}{2}}.
\]
Hence, using Lemma 4.2 again, we have
\[
\dfrac{(\int_{D}|u_{0}|^{p}dx)^{\frac{2}{p}}}{\int_{D}|u_{0}|^{2}dx}
\]
\begin{eqnarray*}
& \leq & C\dfrac{\tau^{-\frac{2}{p}}(\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-pa\tau l_{j}(y')}dy')^{\frac{2}{p}}+O(\tau^{-\frac{2}{p}}e^{-2a\delta\tau})+O(e^{-2a\tau})}{\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-3})+O(\tau^{-2N-1})}\\
& & +\dfrac{O(\tau^{-\frac{6}{p}})+O(\tau^{\frac{-4N-2}{p}})}{\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-3})+O(\tau^{-2N-1})}
\end{eqnarray*}
\begin{eqnarray*}
& \leq & C\tau^{-\frac{2}{p}+1}\dfrac{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(e^{-2c\delta\tau})+O(e^{-2a\tau}\tau^{\frac{2}{p}})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}\\
& & +\dfrac{O(\tau^{-\frac{4}{p}})+O(\tau^{\frac{-4N}{p}})}{\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-3})+O(\tau^{-2N-1})}
\end{eqnarray*}
\begin{eqnarray*}
& = & \tau^{-\frac{2}{p}+1}\dfrac{1+\frac{O(e^{-2c\delta\tau})+O(e^{-2c\tau}\tau^{\frac{2}{p}})+O(\tau^{-\frac{4}{p}})+O(\tau^{\frac{-4N}{p}})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}}{1+\frac{O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}}\\
& = & O(\tau^{-\frac{2}{p}+1})
\end{eqnarray*}
as $\tau\gg1$ and
\[
\dfrac{(\int_{D}|\nabla u_{0}|^{p}dx)^{\frac{2}{p}}}{\int_{D}|u_{0}|^{2}dx}
\]
\begin{eqnarray*}
& \leq & C\dfrac{\tau^{(p-1)\frac{2}{p}}(\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-pa\tau l_{j}(y')}dy')^{\frac{2}{p}}+O(\tau^{-\frac{2}{p}}e^{-2a\delta\tau})+O(e^{-2a\tau})}{\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-3})+O(\tau^{-2N-1})}\\
& & +C\dfrac{O(\tau^{-\frac{2}{p}})+O(\tau^{\frac{-4N-2}{p}})}{\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(\tau^{-3})+O(\tau^{-2N-1})}
\end{eqnarray*}
\begin{eqnarray*}
& \leq & C\tau^{3-\frac{2}{p}}\dfrac{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-2a\delta\tau})+O(e^{-2a\tau}\tau^{\frac{2}{p}-1})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}\\
& & +C\dfrac{O(\tau^{-1})+O(\tau^{\frac{-4N}{p}-1})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}
\end{eqnarray*}
\begin{eqnarray*}
& = & C\tau^{3-\frac{2}{p}}\dfrac{1+\frac{O(\tau^{-1}e^{-2a\delta\tau})+O(e^{-2a\tau}\tau^{\frac{2}{p}-1})+O(\tau^{-1})+O(\tau^{\frac{-4N}{p}-1})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}}{1+\frac{O(e^{-2a\delta\tau})+O(\tau^{-2})+O(\tau^{-2N})}{\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'}}\\
& = & O(\tau^{3-\frac{2}{p}})
\end{eqnarray*}
as $\tau\gg1$. By (\ref{eq:4.1}) and above estimates, we have
\begin{eqnarray*}
\dfrac{I(\tau,\chi_{t},b,t,\omega)}{\|u_{\chi_{t},b,t,N,\omega}\|_{L^{2}(D)}^{2}} & \geq & C\tau^{2}-C\tau^{1-\frac{2}{p}}-C\tau^{3-\frac{2}{p}}\\
& \geq & C\tau^{2}
\end{eqnarray*}
for $\tau\gg1$. On the other hand, for $\|u_{\chi_{t},b,t,N,\omega}\|_{L^{2}(D)}$,
we have
\begin{eqnarray*}
\int_{D}|u_{\chi_{t},b,t,N,\omega}|^{2}dx & \geq & C\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau l_{j}(y')}dy'+O(\tau^{-1}e^{-qa\delta\tau})\\
& & +O(\tau^{-3})+O(\tau^{-2N-1})\\
& \geq & C\tau^{-1}\sum_{j=1}^{m}\iint_{|y'|<\delta}e^{-2a\tau|y'|}dy'+O(\tau^{-1}e^{-qa\delta\tau})\\
& & +O(\tau^{-3})+O(\tau^{-2N-1})\\
& \geq & C\tau^{-2}\sum_{j=1}^{m}\iint_{|y'|<\tau\delta}e^{-2a|y'|}dy'+O(\tau^{-1}e^{-qa\delta\tau})\\
& & +O(\tau^{-3})+O(\tau^{-2N-1})\\
& = & O(\tau^{-2}).
\end{eqnarray*}
Therefore, we have
\[
I(\tau,\chi_{t},b,t,\omega)\geq C\tau^{2}\|u_{\chi_{t},b,t,N,\omega}\|_{L^{2}(D)}^{2}\geq C>0
\]
for $\tau\gg1$.
\end{proof}
In view of Theorem 4.1 and Lemma 4.2, we can give an algorithm for
reconstructing the convex hull of an inclusion $D$ by the Dirichlet-to-Neumann
map $\Lambda_{D}$, as long as $A(x)$ and $D$ satisfy the described
conditions; a schematic numerical sketch is given after the list.\\
\textbf{Reconstruction algorithm}.
\begin{enumerate}
\item Give $\omega\in S^{2}$ and choose $\eta,\zeta,\xi\in S^{2}$ so that
$\{\eta,\zeta,\omega\}$ forms an orthonormal basis of $\mathbb{R}^{3}$ and $\xi$
lies in the span of $\eta$ and $\zeta$;
\item Choose a starting $t$ such that $\Omega\subset\{x\cdot\omega\geq t\}$;
\item Choose a ball $B$ such that the center of $B$ lies on $\{x\cdot\omega=s\}$
for some $s<t$ and $\Omega\subset\overline{B_{t}(\omega)}$ and take
$0\neq b\in\mathbb{C}$;
\item Choose $\chi_{t}\in C_{0}^{\infty}(\mathbb{R}^{2})$ such that $\chi_{t}>0$
in $\Sigma_{t}(\omega)$ and $\chi_{t}=0$ on $\partial\Sigma_{t}(\omega)$;
\item Construct the oscillating-decaying solution $u_{\chi_{t-\epsilon},b,t-\epsilon,N,\omega}$
in $B_{t-\epsilon}(\omega)$ with $\chi_{t-\epsilon}=\chi_{t}$ and
the approximation sequence $\tilde{u}_{\epsilon,j}$ in $\widetilde{\Omega}$;
\item Compute the indicator function $I(\tau,\chi_{t},b,t,\omega)$ which
is determined by boundary measurements;
\item If $I(\tau,\chi_{t},b,t,\omega)\to0$ as $\tau\to\infty$, then choose
$t'>t$ and repeat (iv), (v), (vi);
\item If $I(\tau,\chi_{t},b,t,\omega)\nrightarrow0$ for some $\chi_{t'}$,
then $t'=t_{0}=h_{D}(\omega)$;
\item Varying $\omega\in S^{2}$ and repeating (i) to (viii), we can determine
the convex hull of $D$.\end{enumerate}
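The following is a minimal numerical sketch of the above procedure, written in Python only to fix the logical flow. The routine \texttt{indicator} is a hypothetical placeholder for the boundary measurement defining $I(\tau,\chi_{t},b,t,\omega)$ (steps (iv)--(vi)), and the concrete choices of $\chi_{t}$, $b$, the values of $\tau$ and the scanning step are ours, not prescribed by the theory.
\begin{verbatim}
def indicator(tau, chi_t, b, t, omega):
    # Hypothetical placeholder: computes I(tau, chi_t, b, t, omega) from the
    # boundary data (Lambda_D - Lambda_0) f_{eps,j} and the Runge
    # approximations of the oscillating-decaying solution (steps (iv)-(vi)).
    raise NotImplementedError

def support_value(omega, t_start, t_max, dt=0.05,
                  tau_list=(20.0, 40.0, 80.0), tol=1e-6):
    # Scan t upward until |I| no longer tends to zero for large tau;
    # the first such t approximates h_D(omega) = inf_{x in D} x . omega.
    t = t_start
    while t <= t_max:
        chi_t, b = 1.0, 1.0   # schematic choices of chi_t and b (step (iv))
        vals = [abs(indicator(tau, chi_t, b, t, omega)) for tau in tau_list]
        if vals[-1] > tol:    # I does not vanish as tau grows: t is close to t_0
            return t
        t += dt               # otherwise increase t and repeat (step (vii))
    return None

# The convex hull of D is recovered as the intersection, over sampled
# directions omega on S^2, of the half-spaces {x : x.omega >= h_D(omega)}.
\end{verbatim}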
\section{Introduction}
The collective migration of eukaryotic cells attracts the attention of physicists as a non-equilibrium collective phenomenon \cite{Hakim:2017}. In particular, the driving mechanism of collective migration has been investigated with a focus on various microscopic cell-scale factors, including chemotaxis \cite{Weijer:2009}, cytoskeleton contraction \cite{Rauzi:2008}, contact inhibition of locomotion \cite{Carmona-Fontaine:2008}, cell--substrate adhesion \cite{Pascalis:2017}, and cell--cell adhesion \cite{Takeichi:2014}. The roles of these factors are well explained through theoretical reproduction of the collective migration on a macroscopic scale \cite{Rappel:1999, Szabo:2006, Lober:2015, Sato:2015a, Camley:2016, Najem:2016, Sato:2017, Campo:2019, Oelz:2019, Hiraiwa:2020, Alert:2020, Okuda:2021}.
Cell--cell adhesion has conventionally been regarded as a stabilizer of cell--cell contacts rather than a direct driving force of collective cell migration \cite{Lee:2011a, Kabla:2012}.
Here, we theoretically propose, from the physics point of view, the possibility that cell--cell adhesion itself acts as the driving force of collective migration.
Cell--cell adhesion regulates various cellular processes and promotes tissue organization by stabilizing mechanical contacts between cells \cite{Takeichi:2014}.
This adhesion results from the binding of adhesion molecules between the surface membranes of two cells.
One typical binding appears between two different adhesion molecules, as shown in Fig.~\ref{fig:adhesion}(a), and is called heterophilic adhesion.
A particular characteristic of this adhesion is that, when only one of the two heterophilic adhesion molecules exists in a cell population (tissue), as shown in Fig.~\ref{fig:adhesion}(b), the adhesion does not affect the tissue.
In contrast, when each of the two heterophilic adhesion molecules exists only in one of the two tissues, respectively, as shown in Fig.~\ref{fig:adhesion}(c), the heterophilic adhesion can regulate the tension at the interface between the tissues.
Therefore, this adhesion is expected to be effective in the regulation of tissue interfaces while avoiding side effects within each tissue.
In particular, when the molecules induce a gradient of the interface tension through polarization of their concentrations on each cell surface \cite{Sesaki:1996,Coates:2001}, as shown in Fig.~\ref{fig:adhesion}(d), we expect the interface to be regulated through the Marangoni effect of each cell on the interface.
In this case, the cell-scale Marangoni effect becomes the physical mechanism by which heterophilic cell--cell adhesion drives collective cell motion.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{Fig1.png}
\caption{The schematic view of heterophilic cell--cell adhesion. White and shaded circles represent cells with different heterophilic adhesion molecules on their membranes. Circular and triangle symbols with a line represent two different heterophilic adhesion molecules, which stabilize a cell--cell contact. (a) The stabilization of a cell--cell contact due to heterophilic adhesion. (b) The case of a tissue with only one of the heterophilic molecules. (c) The case of an interface between two tissues, each of which has only one of the heterophilic adhesion molecules.
(d) The case of two tissues with a surface-tension gradient on each cell surface in one of the tissues.}
\label{fig:adhesion}
\end{center}
\end{figure}
This mechanism of collective cell motion is expected in the slug stage of {\it Dictyostelium discoideum} (dicty) \cite{Bonner:2009}.
At this stage, dicty cells are differentiated into two types—prestalk and prespore. In particular, the prestalk cells form the tissue in the leading region of the slug. They sometimes convectively flow during the movement of the slug in response to light, as illustrated in Fig.~\ref{fig:phototaxis}(a) \cite{Kimura:2000,Hashimura:2019b}.
This flow is speculated to regulate the slug’s phototaxis by inducing the exertion of a torque on the leading region of the slug. For a long time, the flow was hypothesized to be an effect of chemotaxis. However, recent observations have revealed that chemotaxis is inert at this stage \cite{Hashimura:2019a}.
Further, a chemotaxis-deficient mutant of dicty, KI5 \cite{Kuwayama:1993, kuwayama:1995,kuwayama:2013}, exhibits normal slug movement \cite{Kida:2019}.
Instead of chemotaxis, we contend that the interface tension gradient between prestalk and prespore tissues acts as the driving force of this flow. In particular, the tissue interface tension, $\gamma(\bm x)$, induces a flow, $\bm v(\bm x)$, in the interface \cite{Levich:1969,Brochard:1989,Getling:1998,Squires:2005}:
\begin{align}
\bm v(\bm x) \propto \nabla \gamma(\bm x). \label{eq:Marangoni_flow}
\end{align}
Here, $\bm x$ denotes a position on the interface.
In this paper, we hypothesize that this gradient is induced by the spontaneous polarization in heterophilic adhesion in response to light in prestalk cells, as depicted in Fig.~\ref{fig:phototaxis}(b). In this case, the polarization vector, $\bm p(\bm x)$, induces the tension gradient. Thus, we have:
\begin{align}
- \nabla \gamma(\bm x) \propto \bm p(\bm x). \label{eq:polarization-tension}
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{Fig2.png}
\caption{The hypothetical mechanism of collective cell flow during phototaxis of the slug. (a) The schematic structure of the dicty slug. The dotted and shaded regions represent the prestalk (leading) and prespore (following) tissues, respectively. The cross-hatched region represents the scaffold on which the dicty slug migrates. In the leading tissue, cells convectively flow with the slug’s phototaxis. (b) A magnified view of the region surrounding the interface between the two tissues. The dotted and dashed circles represent the prestalk and prespore cells, respectively. The prestalk cells are assumed to be polarized in the concentration of a heterophilic adhesion molecule in response to light. Here, the solid arrows in the prestalk cells represent the polarization in concentration, $\bm p(\bm x)$, and the dotted arrows on the tissue interface represent the tension gradient, $-\nabla \gamma(\bm x)$.}
\label{fig:phototaxis}
\end{center}
\end{figure}
A hypothetical scenario of this gradient is schematically illustrated in Fig.~\ref{fig:phototaxis}(b).
The concentration of heterophilic adhesion molecules in the prestalk tissue is assumed to be polarized in the direction of a light source.
This polarization, $\bm p(\bm x)$, induces a tension gradient, $-\nabla \gamma(\bm x)$, along the interface.
This gradient drives the collective cell flow near the interface as a relative motion between two tissues and, thereby, a cell flow in the prestalk tissue. With respect to the common origin of the tension gradient, this effect is similar to the Marangoni flow on liquid interfaces with a tension gradient \cite{Levich:1969,Getling:1998}.
Therefore, this cell flow may be called “cell Marangoni flow”. Unlike the Marangoni flow, the tension gradient, $-\nabla \gamma(\bm x)$, is manifested on the cell scale \cite{Coates:2001} because the polarization is expected only in individual cells. Further, cells flow via their shape deformations, which are absent in a liquid. Therefore, the polarization in heterophilic adhesion does not simply result in a Marangoni flow. To address these concerns in our scenario, the theoretical confirmation of the flow based on the scenario is at least necessary.
In this paper, we undertake a theoretical examination of the “cell Marangoni flow” based on the aforementioned scenario. To this end, we consider the two-dimensional cellular Potts model \cite{Graner:1992,Graner:1993,Glazier:1993} consisting of two tissues, which correspond to the two different types of molecules participating in heterophilic cell-cell adhesion.
We demonstrate that the polarization in heterophilic cell-cell adhesion in one tissue induces the relative motion between two tissues. In contrast to the Marangoni flow
\cite{Getling:1998}, the direction of this flow is aligned to that of the tension gradient and against the direction of the Marangoni flow. This is expected as low and high tensions promote cell-shape extension and shrinkage \cite{Matsushita:2017}, respectively, during cell movement. Further, we investigate this flow as a function of adhesion strength and schematically clarify the steady states. Based on this clarification, we determine the emergence condition of the flow in heterophilic adhesion.
\section{Model}
The cellular Potts model is a variant of the Potts model and is widely used to express cellular dynamics \cite{Scianna:2013, Hirashima:2017}.
As the effects of cell-cell adhesion is easily expressible using this model compared to others \cite{Anderson:2007},
it is particularly useful for our examination of heterophilic cell-cell adhesion.
The state space of this model expresses the space of cell configurations, each of which is represented by a Potts state in the lattice.
A Potts state, $m(\bm r)$ is defined corresponding to each lattice site, $\bm r$ and it takes a value in $\{0, 1, \dots, N\}$. The value of $m(\bm r)$ represents the index of the cell that occupies $\bm r$, when $m(\bm r)$ $\not = 0$. In contrast, $m(\bm r)$ = 0 denotes that the site, $\bm r$, is empty. The largest index, $N$, is equal to the number of cells. Based on this interpretation, the Potts state, $\{m(\bm r)\}$, expresses cells as the domains of corresponding Potts states and, thereby, cell configurations.
In a Potts state, cells are classified into types with different features \cite{Graner:1992}.
In the model proposed in this paper, two categories of cells are introduced to represent the two tissues corresponding to the two different heterophilic adhesion molecules binding with each other. Each model cell is of either of these two types and can participate in heterophilic adhesion only with cells of the other type. The type of the $m$th cell is denoted by $T(m)$.
$T(m) = 1$ corresponds to a polarized concentration of heterophilic adhesion molecules and $T(m) = 2$ corresponds to an isotropic concentration. In the dicty slug depicted in Fig.~\ref{fig:phototaxis}(b), the cells with $T(m) = 1$ correspond to the prestalk tissue and those with $T(m) = 2$ correspond to the prespore tissue.
For convenience during the construction of this model, the type function, $T(m)$, is further extended to Potts states that do not correspond to real cells. For an empty space with $m$ = 0, we set $T(0)$ = 0.
In addition, we introduce a fixed scaffold on which the cells live. The motivation behind this is to inhibit artificial translational motions of the whole system, which give rise to systematic noise during the analysis of collective cell motions \cite{Matsushita:2020}.
To this end, we consider fixed cells with $T(m)$ = 3 and define some cells of this type to constitute the scaffold. In this case, we assume cells with $T(m)$ = 2 to be highly amenable to adhesive contact with the scaffold and cells with $T(m)$ = 1 to be incapable of such contact.
In the dicty slug, these configurations correspond to the situation in which only the tissue consisting of prespore cells preferably makes contact with the scaffold due to its low surface tension. Henceforth, the extended type is denoted by the italicized capital letter, $T$. In summary, $T$ takes the value 0 corresponding to empty spaces, 1 corresponding to tissues with $T(m)$ = 1, 2 corresponding to tissues with $T(m)$ = 2, and 3 corresponding to the scaffold.
The Potts state $\{m(\bm r)\}$ is repeatedly updated using a Markov Chain Monte Carlo simulation and is regarded as a snapshot in a time series of moving cells. In this case, the occurrence probability of $\{m(\bm r)\}$ is given by the Boltzmann factor, $P(\{m(\bm r)\})$ = $\exp\{-\beta {\cal H}(\{m(\bm r)\})\}$ / $\sum_{\{m(\bm r)\}}\exp\{-\beta {\cal H}(m(\bm r))\}$.
Here, $\beta$ denotes an inverse temperature representing fluctuations in cell shapes and $\cal H$ denotes the free energy, which is defined to be the summation of the following four terms:
\begin{align}
{\cal H}(m(\bm r)) = {\cal H}_{\rm Ten} + {\cal H}_{\rm Hom} + {\cal H}_{\rm Het} + {\cal H}_{\rm Are}. \label{eq:Free_enegy}
\end{align}
The first term in the right hand side (RHS) of Eq.~\eqref{eq:Free_enegy} can be further decomposed into two terms:
\begin{align}
{\cal H}_{\rm Ten} = {\cal H}_{\rm E} + \sum_{T=1}^2 {\cal H}_{{\rm S}}^T. \label{eq:interaction_medium}
\end{align}
The first term in RHS of Eq.~\eqref{eq:interaction_medium} represents the surface tension between cells and
empty spaces. ${\cal H}_{\rm E}$ is defined as follows:
\begin{align}
{\cal H}_{\rm E} = \Gamma_{\rm E} \sum_{T=1}^2\sum_{\bm r \bm r'} \left(\delta_{T(m(\bm r))T}\delta_{T(m(\bm r'))0} \right. \nonumber \\
+ \left.\delta_{T(m(\bm r'))T}\delta_{T(m(\bm r))0} \right).
\end{align}
Here, $\delta_{ab}$ denotes the Kronecker $\delta$. The summation with respect to the pair, $\bm r$ and $\bm r'$, is taken over the nearest and next nearest sites. This summation rule is also applied to all equations that appear hereafter. The surface tension, $\Gamma_{\rm E}$, is assumed to be identical for $T(m)$ = 1 and $T(m)$ = 2, for simplicity.
The second term in the RHS of Eq.~\eqref{eq:interaction_medium} represents the surface tension between cells and scaffolds.
Here, ${\cal H}_{{\rm S}}^T$ is given by
\begin{align}
{\cal H}_{{\rm S}}^T = \Gamma_{{\rm S}}^T \sum_{\bm r \bm r'} \left(\delta_{T(m(\bm r))T}\delta_{T(m(\bm r'))3} \right. \nonumber \\
+ \left.\delta_{T(m(\bm r'))T}\delta_{T(m(\bm r))3} \right).
\end{align}
The surface tension with the scaffold, $\Gamma_{{\rm S}}^T$, depends on the type of the cell, $T$.
$\Gamma_S^2$ is assumed to be significantly lower than $\Gamma_S^1$ to capture the relative ease with which cells with $T$ = 2 establish contact with the scaffold compared to those with $T$ = 1.
The second term in the RHS of Eq.~\eqref{eq:Free_enegy} is given by
\begin{align}
{\cal H}_{\rm Hom} = \sum_{T=1}^{2} \Gamma^{T} \sum_{\bm r \bm r'}\delta_{T(m(\bm r))T} \delta_{T(m(\bm r'))T} \eta_{m(\bm r) m(\bm r')} \nonumber\\
+ \frac{\Gamma_{\rm I}}{2} \sum_{T\not=T'}\sum_{\bm r \bm r'}\delta_{T(m(\bm r))T} \delta_{T(m(\bm r'))T'}. \label{eq:H_hom}
\end{align}
The first and second terms in the RHS of Eq.~\eqref{eq:H_hom} represent the interface tension between cells of identical types and those of different types, respectively. Further,
$\eta_{ab}$ = 1 $-$ $\delta_{ab}$. The interface tensions, $\Gamma^T$ ($T$ = 1 or 2) and $\Gamma_{\rm I}$ , correspond to those of homophilic adhesion between cells of identical and different types, respectively. Homophilic adhesion stabilizes the tissues.
The third term in the RHS of Eq.~\eqref{eq:Free_enegy} is given by
\begin{align}
{\cal H}_{\rm Het} &= - \sum_{\bm r \bm r'} \Gamma_{\rm Het}(\bm r, \bm r') \nonumber \\
&\times (\delta_{T(m(\bm r))1}\delta_{T(m(\bm r'))2}+\delta_{T(m(\bm r))2}\delta_{T(m(\bm r'))1}),
\end{align}
and it establishes heterophilic adhesion on the tissue interface to be the driving force of the flow. Here, $\Gamma_{\rm Het}(\bm r, \bm r')$ denotes the reduction in tissue interface tension induced by heterophilic adhesion between two cells occupying the sites, $\bm r$ and $\bm r'$.
$\Gamma_{\rm Het}(\bm r, \bm r')$ also incorporates the polarization in adhesion \cite{Zajac:2002, Vroomans:2015, Matsushita:2017, Matsushita:2018}.
In this expression, $\Gamma_{\rm Het}(\bm r, \bm r')$ is assumed to depend on the concentrations of the adhesion molecule, $\rho_{m({\bm r})}({\bm r})$ and $\rho_{m({\bm r'})}({\bm r'})$, on a microscopic level \cite{Matsushita:2017}. Here, $\rho_{m}({\bm r})$ denotes the concentration of the adhesion molecule at $\bm r$ in the $m$th cell. In the leading order terms of these concentrations, we assume the surface tension to be given by:
\begin{eqnarray}
\Gamma_{\rm Het}(\bm r, \bm r') = \varepsilon \rho_{m(\bm r)}(\bm r)\rho_{m(\bm r')}(\bm r'). \label{eq:g-het}
\end{eqnarray}
This equation qualitatively realizes that the surface tension is reduced by
the heterophilic adhesion molecule binding between $\bm r$ and $\bm r'$.
To further introduce the concept of polarization in heterophilic adhesion, we consider the multipole expansion of $\rho(\bm r)$ \cite{Arfken:2012, Marchetti:2013, Matsushita:2017}:
\begin{eqnarray}
\rho_m(\bm r) = \rho_m^{\rm M} + \rho_m^{\rm D}(\bm e_m(\bm r)\cdot \bm p_m)+ \dots.
\end{eqnarray}
Then, we utilize the terms up to the order of the dipole part of $\rho_m^{\rm D}$ corresponding to $T(m)=1$ and that of only the monopole part of $\rho_m^{\rm M}$ for $T(m)=2$ to represent the polarization of heterophilic adhesion in the two types of cells, respectively. Here, $\bm p_m$ denotes the unit vector indicating the direction of polarization in heterophilic cell-cell adhesion for the $m$th cell. In this paper, this is simply referred to as “adhesion polarity”. This polarity is an adhesion variant of that of the chemical compass during chemotaxis\cite{Bourne:2002}.
The dynamics of this concentration is assumed to be quasistatically slow and, hence, $\bm p_m$ is a slow variable. Additionally, the unit vector, $\bm e_m (\bm r)$, represents the direction from the position of the $m$th cell, $\bm R_m$ to the peripheral position of the cell, $\bm r$, which is defined by $\bm e_m (\bm r)$ = $(\bm r - \bm R_m)$/$|\bm r - \bm R_m|$.
By definition, $\bm R_m$ is a slow coordinate variable of the $m$th cell in this expansion. $\bm R_m$ is referred to as the center of the $m$th cell because its dynamics is assumed to be quasistatically equal to that of the center of mass of the $m$th cell. For simplicity, we assume that $\rho^{\rm M}_m$ and $\rho^{\rm D}_m$ depend only on the type function, $T(m)$. In this case, we represent $\rho^{\rm M}_m$ and $\rho^{\rm D}_m$ for $T(m) = 1$ by $\rho^{\rm M}_{T=1}$ and $\rho^{\rm D}_{T=1}$, respectively, and represent $\rho^{\rm M}_m$ for $T(m)=2$ by $\rho^{\rm M}_{T=2}$.
The other higher order terms in the expansion are ignored because their effect is not of interest to our analysis. In this context, we have:
\begin{align}
\rho_m(\bm r) \simeq \delta_{T(m)1}\left[\rho_{T=1}^{\rm M} + \rho_{T=1}^{\rm D}(\bm e_m(\bm r)\cdot \bm p_m)\right] + \delta_{T(m)2}\rho_{T=2}^{\rm M}.
\end{align}
Substitution of $\rho_m(\bm r)$ with the aforementioned expansion yields the following expression for the surface tension \cite{Matsushita:2020}:
\begin{eqnarray}
\Gamma_{\rm Het}(\bm r, \bm r') =
\left\{\delta_{T(m(\bm r))1}\delta_{T(m(\bm r'))2}\left[\Gamma_{np} + \Gamma_p \rho_{p}(\bm p_{m(\bm r)}, \bm r)\right] \right.\nonumber\\
\left. +\delta_{T(m(\bm r))2}\delta_{T(m(\bm r'))1}\left[\Gamma_{np} + \Gamma_p\rho_{p}(\bm p_{m(\bm r')}, \bm r')\right]\right\}. \label{eq:interaction_hetero}
\end{eqnarray}
Here, the strength of the isotropic adhesion is $\Gamma_{np}$ = $\varepsilon(\rho^{\rm M}_{T=1}-\rho^{\rm D}_{T=1})\rho^{\rm M}_{T=2}$ and that of polarized
adhesion is $\Gamma_p$ = $\varepsilon \rho^{\rm D}_{T=1}\rho^{\rm M}_{T=2}$.
These strengths are restricted to positive values so that $\Gamma_{\rm Het}(\bm r, \bm r')$ describes adhesion.
To ensure this positivity, the terms are regrouped in this equation. For the same purpose, the positive function, $\rho_{p}(\bm p_m, \bm r)$ = $\left[1+\bm e_{m(\bm r)}(\bm r)\cdot \bm p_{m(\bm r)}\right]$, is introduced to express the polarized component of the adhesion molecule concentration and to realize the tension gradient at the cell level on the tissue interface.
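As a minimal numerical sketch (not part of the model definition), $\Gamma_{\rm Het}(\bm r, \bm r')$ in Eq.~\eqref{eq:interaction_hetero} can be evaluated for a contacting pair of sites as follows; the variable names are our own choices, and the center and adhesion polarity of the $T(m)=1$ cell in the pair are assumed to be given.
\begin{verbatim}
import numpy as np

def gamma_het(r, r_prime, T, T_prime, R1, p1, gamma_np, gamma_p):
    # Reduction of the tissue interface tension by heterophilic adhesion
    # between the cells occupying sites r and r_prime.  Only the T(m)=1
    # cell carries the polarized concentration; R1 and p1 are its center
    # and adhesion polarity.
    if {T, T_prime} != {1, 2}:
        return 0.0                              # no heterophilic binding
    r1 = np.asarray(r if T == 1 else r_prime, dtype=float)
    e = (r1 - R1) / np.linalg.norm(r1 - R1)     # direction from center to site
    rho_p = 1.0 + np.dot(e, p1)                 # polarized concentration factor
    return gamma_np + gamma_p * rho_p
\end{verbatim}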
In this model, the adhesion polarity, $\bm p_m$, is a degree of freedom whose dynamics corresponds to that of the adhesion molecule concentration. To model its dynamics, we consider the binding of the adhesion molecules in the cell membrane to the edge of the cytoskeleton related to the movement of cells. In this case, the polarization in the adhesion molecule concentration follows the direction of the movement and localizes at the leading edge of cells, as observed in experiments \cite{Coates:2001, Fujimori:2019}.
In this case, the polarity obeys the following equation \cite{Matsushita:2017}:
\begin{eqnarray}
\frac{d \bm p_m}{d t} = \frac{1}{a \tau} \hat P(\bm p_m) \cdot \frac{d{\bm R}_m}{dt}. \label{eq:p}
\end{eqnarray}
Here, $a$ denotes the lattice constant, $t$ denotes time, and $\tau$ denotes the ratio of the relaxation time of $\bm p_m$ to that of ${\bm R}_m$.
For the time scale, the Monte Carlo step was assumed to be the unit time. Let
$\hat P(\bm x)$ denote the projection operator onto the direction perpendicular to a vector, $\bm x$, given by:
\begin{eqnarray}
\hat P(\bm x) = \hat 1 - \bm x^{\dagger}\otimes \bm x.
\end{eqnarray}
Here, $\hat 1$ denotes the unit tensor and $\otimes$ denotes the tensor product. This formulation is a variant of the definition of polarity for persistent walks \cite{Li:2008, Takagi:2008}.
In contrast to the latter case \cite{Szabo:2007,Fily:2012,Berthier:2014,Nagai:2015,Matsushita:2019,Matsushita:2019b}, the model cell with polarized adhesion exhibits a simple random walk in isolation, owing to the absence of a driving term in the free energy of this model. The motility of the cells can only be induced by heterophilic adhesive contact on the interface between the tissues following Eq.~\eqref{eq:interaction_hetero} \cite{Matsushita:2020}.
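A minimal sketch of the Euler update of Eq.~\eqref{eq:p} is given below, assuming that the displacement of the cell center over one Monte Carlo step is already known; renormalizing $\bm p_m$ to unit length after the step is our own choice to keep it a unit vector and is not part of Eq.~\eqref{eq:p}.
\begin{verbatim}
import numpy as np

def update_polarity(p, dR, a=1.0, tau=5.0, dt=1.0):
    # One Euler step of dp/dt = P(p) (dR/dt) / (a tau), where
    # P(p) = 1 - p p^T projects onto the direction perpendicular to p.
    P_hat = np.eye(2) - np.outer(p, p)
    p_new = p + dt * P_hat @ dR / (a * tau)
    return p_new / np.linalg.norm(p_new)   # keep |p| = 1 (our choice)
\end{verbatim}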
The fourth term in the RHS of Eq.~\eqref{eq:Free_enegy} is:
\begin{align}
{\cal H}_{\rm Are} = \kappa A \sum_m \left(1 - \frac{A_m}{A}\right)^2. \label{eq:vol}
\end{align}
Here, $\kappa$ denotes the area stiffness and $A$ denotes the reference area of the area elasticity. Further, $A_m$ = $\sum_{\bm r} \delta_{m(\bm r)m}$ denotes occupation area of the $m$th cell.
Based on the occurrence probability, $P(\{m(\bm r)\})$, determined by the free energy, the procedure of the proposed Monte Carlo simulation is as follows. First, the Monte Carlo simulation is used to generate a series of Potts states, $\{m(\bm r)\}$, which captures the dynamics of cell configurations. The Potts state is conventionally updated by attempting
16$L^2$ copies of the state, $m$, from a position, $\bm r$, to its neighboring position, $\bm r'$ \cite{Graner:1992}.
In this case, $\bm r$ is chosen randomly and $\bm r'$ is randomly chosen from the nearest or next nearest neighbor of $\bm r$.
If a site of the scaffold is chosen, the copy is automatically rejected to maintain a fixed scaffold. Otherwise, the copy of the state is accepted by the Metropolis probability, $\min \{1, P(\{m(\bm r')\})/P(\{m(\bm r)\}) \}$. Here,
$P(\{m(\bm r)\})$ and $P(\{m(\bm r')\})$ denote the Boltzmann factor of the state preceding the copy and that of a candidate of the update state following the copy, respectively.
This set of copies is called a single Monte Carlo step (MCs). Following this single Monte Carlo step, the adhesion polarity is updated once by integrating Eq.~\eqref{eq:p} via the Euler method. Simultaneously, the center of the cell, $\bm R_m$ is also updated using $\bm R_m$ = $\sum_{\bm r}\delta_{m(\bm r)m} \bm r/A_m$.
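For illustration, a single copy attempt of the above update rule can be sketched as follows; this is a schematic fragment rather than the code used for the simulations, and \texttt{delta\_H} is assumed to evaluate the change of Eq.~\eqref{eq:Free_enegy} caused by the proposed copy.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def copy_attempt(state, types, beta, delta_H, L=128):
    # Propose to copy the Potts state from a random site r to a random
    # (next-)nearest neighbor r', and accept it with the Metropolis
    # probability min{1, exp(-beta * dH)}.
    r = tuple(rng.integers(0, L, size=2))
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
    dx, dy = offsets[rng.integers(len(offsets))]
    rp = ((r[0] + dx) % L, (r[1] + dy) % L)      # periodic boundaries
    if types[state[r]] == 3 or types[state[rp]] == 3:
        return                                   # keep the scaffold fixed
    dH = delta_H(state, rp, state[r])            # local free-energy change
    if dH <= 0.0 or rng.random() < np.exp(-beta * dH):
        state[rp] = state[r]                     # accept the copy

# One Monte Carlo step (MCs) consists of 16 L^2 such attempts, followed by
# one Euler update of each adhesion polarity and of each cell center R_m.
\end{verbatim}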
\section{Simulation and Results}
In this section, we examine the emergence of the Marangoni flow on the one-dimensional tissue interface under predefined conditions. In the first half, the simulation configuration for this analysis is explained in detail. The simulation size is determined as follows. With regard to the space of the system, we assume the lattice to be a two-dimensional square lattice for simplicity, and the $x$- and $y$-axes are taken along the lattice axes.
The linear system size is taken to be $L$=128$a$ equipped with a periodic boundary condition to realize tractable simulation.
The cell flow on the interface is expected in the case of a flat interface, because a rough interface may adhere to the tissues and thereby obstruct the flow. To realize an almost flat interface between the two tissues, the area of each cell is taken to be $A$ = 64$a^2$, combined with the following numbers of cells:
The number of cells with $T(m)$ = 1 is taken to be $N_1$ = 64 and the number of those with $T(m)$ = 2 is taken to be $N_2$ = 112.
In particular, this configuration realizes a flat interface along the axes even corresponding to heterophilic adhesion of low strength (See Fig.~\ref{fig:states}(b)).
To realize stable model cells, the following parameters are chosen. To achieve polarity dynamics of $T(m)$ = 1, a large value of $\tau$ = 5.0 is selected, which reproduces cell movements \cite{Matsushita:2020}.
The area stiffness of $\kappa$ in Eq.~\eqref{eq:vol} is taken to be a large value, 10, to ensure the stable maintenance of the cellular area. To easily obtain adhesion propulsion of cells alongside stability of cellular area, $\beta$ should be selected within an optimal range. In this simulation, we empirically set $\beta$ = 0.5 based on a previous work \cite{Matsushita:2020}.
This value is also suitable for maintaining the flat interface corresponding to the following tension parameters.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Fig3.png}
\caption{(Color online) The initial Potts state with adhesion polarities used in our simulation. The yellow, red, and violet domains represent the initial configuration of the scaffold, cells with $T(m)$ = 2, and cells with $T(m)$ = 1, respectively. The black region denotes empty space. Arrows on the cells represent the adhesion polarities associated with individual cells with $T(m)$ = 1.}
\label{fig:initial}
\end{center}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Fig4.png}
\caption{(Color online) (a) The order parameter $P$ as a function of $\Gamma_p$. (b) The average collective velocity for $T(m)=1$ ($+$), $v_y^1$, and that for $T(m)=2$ ($\times$), $v_y^2$.}
\label{fig:order_parameter}
\end{center}
\end{figure*}
Now, let us consider the tension parameters in free energy. These parameters are estimated to stabilize the layered tissue structure with a flat interface, as depicted in Fig.~\ref{fig:initial}.
To consider the contact states of cells with empty spaces and scaffolds, we set the values of the tensions as follows. $\Gamma_{\rm E}$ = 6.0 is taken to be the base value of tension. Therefore, the cells with
$T(m)$ = 1 and 2 form tissues when the tensions between pairs are smaller than 2$\Gamma_{\rm E}$ = 12.0.
Both $\Gamma^{1}$ = 4.0 and $\Gamma^{2}$ = 4.0 are taken to stabilize the respective tissues. Further, $\Gamma_{\rm I}$ is required to be larger than half of $\Gamma^1$ and $\Gamma^2$ because intermixing of the tissues destabilizes them.
Further, $\Gamma_{\rm I}$ is also required to be smaller than $\Gamma_E$ to stabilize the interface between the tissues against invasions of empty spaces between them. To satisfy the aforementioned stability conditions, $\Gamma_{\rm I}$ = 4.0 is taken.
To eliminate the effect of the nonpolarized part of heterophilic adhesion, $\Gamma_{np}$ = 0.0
is taken. Various values of $\Gamma_p$ are used to investigate the permissible range of adhesion that promotes cell flow between tissues. As mentioned previously, cells with $T(m)$ = 1 are assumed to be incapable of forming adhesive contact with scaffolds and those with $T(m)$ = 2 are assumed to form adhesive contact with scaffolds. To reproduce this situation, we take $\Gamma_{\rm S}^1$ = 13.0 and $\Gamma_{\rm S}^2$ = 4.0.
The initial state of the simulation is schematically depicted in Fig.~\ref{fig:initial}.
In this state, 16 cells are aligned along the $y$-direction at
the left edge of the system (around $x$ = 1), constituting a scaffold. The array spans the range from $y$ = 1 to $y$ = $L$ in the $y$-direction.
The scaffold cells do not move and thereby inhibit any cell movement passing through themselves. On the right side of the array, cells with $T(m)$ = 2 form a 7 $\times$ 16 array and constitute a tissue with a band structure. The tissue connects with itself between $y$ = 1 and $y$ = $L$ under the periodic boundary condition. The left side of this array adheres to the array of scaffold cells. On the right side of this tissue, cells with $T(m)$ = 1 form a 4 $\times$ 16 array and constitute a tissue with a band structure similar to that of cells with $T(m)$ = 2.
In these cells, the initial directions of adhesion polarities are random. From this initial state, a steady state is attained via simulation over $t_0$ = $10^5$ MCs and the dependence of the steady states on $\Gamma_p$ is thereby analyzed.
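For concreteness, the initial configuration of Fig.~\ref{fig:initial} can be built as in the following sketch; only the geometry stated above (a scaffold column of 16 cells, a $7\times16$ band of $T(m)=2$ cells, and a $4\times16$ band of $T(m)=1$ cells, each cell occupying $8\times8$ sites) is taken from the text, while the index ordering and array layout are our own choices.
\begin{verbatim}
import numpy as np

L, s = 128, 8                        # system size and cell edge (A = s*s = 64)
state = np.zeros((L, L), dtype=int)  # 0 = empty space
types = [0]                          # extended type of the empty state
idx = 0
# columns of cells: 1 scaffold column, 7 columns of type 2, 4 columns of type 1
for col, T in enumerate([3] + [2] * 7 + [1] * 4):
    for row in range(L // s):        # 16 cells along the y-direction
        idx += 1
        types.append(T)
        state[col * s:(col + 1) * s, row * s:(row + 1) * s] = idx
types = np.array(types)

# random initial adhesion polarities; only the entries of T(m) = 1 cells are used
rng = np.random.default_rng()
theta = rng.uniform(0.0, 2.0 * np.pi, size=types.size)
polarity = np.stack([np.cos(theta), np.sin(theta)], axis=1)
\end{verbatim}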
To examine the emergence of the Marangoni flow, we calculate an observable metric under the aforementioned configuration. The observable metric relevant to probing the emergence of the Marangoni flow is an order parameter of the adhesion polarity, $\bm p_m$, defined as follows:
\begin{eqnarray}
P = \left| \frac{1}{N_1 t_1} \int_{t_0}^{t_1+t_0}dt \sum_{m \in \Omega_1} \bm p_m(t) \right|,
\end{eqnarray}
where $\Omega_1$ denotes the set of indices for cells with $T(m)$ = 1 and $t_1$ denotes the number of Monte Carlo steps used to average the order parameter. We take 10$^6$ MCs as $t_1$.
When the value of the order parameter becomes almost unity, the emergence of polarity order is confirmed. The order is a necessary condition for the existence of adhesion-induced collective cell flow \cite{Matsushita:2018}. In turn, the Marangoni flow can be investigated based on this collective flow.
We calculate the values of the aforementioned parameter with respect to varying values of $\Gamma_p$ to identify the emergence of the Marangoni flow.
If heterophilic adhesion drives the flow around the tissue interface, large values of $P$ are expected to correspond to large values of $\Gamma_p$ \cite{Kabla:2012,Matsushita:2020}.
To verify this, $P$ is plotted in Fig.~\ref{fig:order_parameter}(a) as a function of $\Gamma_p$.
It is evident that the order parameter takes significantly small values when $\Gamma_p$ is $\Gamma_p^{\rm D}$ $\sim$ 0.1 or lower. When $\Gamma_p$ exceeds $\Gamma_p^{\rm D}$, $P$ becomes almost unity. Therefore, heterophilic adhesion of high strength at least stabilizes the order of adhesion polarity and may drive the collective motion of the cells with $T(m)$ = 1.
When $\Gamma_p$ exceeds $\Gamma_p^{\rm U}$ $\sim$ 1.2, the value of $P$ decreases again, indicating the vanishing of the order.
When $\Gamma_p$ lies between $\Gamma_p^{\rm D}$ and $\Gamma_p^{\rm U}$, the collective motion of cells with $T(m)$ = 1 is expected. In this range, the collective motion can be of two types--uniform motion over the two tissues and relative motion between the two tissues. Therefore, large values of $P$ are not directly correlated with relative motion between the two tissues as in the case of the Marangoni flow. To directly verify relative motion, the velocities of the two tissues need to be monitored.
To this end, we calculate the average velocities of both tissues and plot them in Fig.~\ref{fig:order_parameter}(b). Here, the average velocity of each tissue is aligned along the $y$-direction because of the geometry of the tissue, and so only the $y$-components of the velocities are plotted. The average velocity of a tissue with $T(m)$ = $T$ is defined as follows:
\begin{align}
v^T_y = \frac{1}{N_T t_1}\sum_{m \in \Omega_{T}} \int_{t_0}^{t_1+t_0} dt\, d_y^m.
\end{align}
Here $d_y^m$ denotes the displacement of the $m$th cell per MCs in the $y$-direction and $\Omega_T$ denotes the set of indices for cells with $T(m)$ = $T$.
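A minimal sketch of how these two observables can be accumulated from a stored trajectory is given below; the array shapes and names are our own conventions, with the polarities and per-step displacements assumed to be recorded during the run after the relaxation time $t_0$.
\begin{verbatim}
import numpy as np

def order_parameter(p_traj, mask_T1):
    # P: magnitude of the time- and cell-averaged adhesion polarity of the
    # T(m) = 1 cells; p_traj has shape (steps, cells, 2).
    return np.linalg.norm(p_traj[:, mask_T1, :].mean(axis=(0, 1)))

def mean_velocity_y(d_traj, mask_T):
    # v_y^T: time- and cell-averaged y-displacement per MCs of the cells
    # of type T; d_traj has shape (steps, cells, 2).
    return d_traj[:, mask_T, 1].mean()
\end{verbatim}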
The two average velocities, $v_y^1$ and $v_y^2$, in the range from $\Gamma_p^{\rm D}$ to $\Gamma_p^{\rm R}$ $\sim$ 0.3 are almost identical.
This corresponds to uniform motion over the two tissues in this range of $\Gamma_p$.
In contrast, in the range from $\Gamma_p^{\rm R}$ to $\Gamma_p^{\rm U}$, the average velocities of the two tissues are not only finite but also distinct. This indicates relative motion between the two tissues and, consequently, the Marangoni flow. In particular, $v_y^1$ increases with increasing $\Gamma_p$ unlike $v_y^2$, which remains more or less constant in this range of $\Gamma_p$.
This indicates that the enhancement of heterophilic adhesion accelerates the flow.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Fig5.png}
\caption{The product $P_yv_y^1$ as a function of $\Gamma_p$.}
\label{fig:direction}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Fig6.png}
\caption{(Color online) (a) Schematic phase diagram. (b-e) Configuration snapshots of cells and polarities for (b) the pinned state corresponding to $\Gamma_p$ = 0.05, (c) the uniformly moving state corresponding to $\Gamma_p$ = 0.20, (d) the relatively moving state corresponding to $\Gamma_p$ = 0.90, and (e) the unsteady state corresponding to $\Gamma_p$ = 1.30. The yellow, red, and violet regions represent the tissues comprising the scaffold, cells with $T(m)$ = 2, and cells with $T(m)$ = 1, respectively. The black region denotes empty space. Arrows on the cells represent the adhesion polarity. }
\label{fig:states}
\end{center}
\end{figure}
Next, we focus on the direction of the observed flow. The Marangoni flow is oriented in the direction opposite to that of the tension gradient, i.~e., from low to high tension \cite{Getling:1998}.
If the observed flow shares its origin with the Marangoni flow, the direction of this flow would also be opposite to that of the tension gradient. To estimate its direction, the adhesion polarity, $\bm p_m$, which has the same direction as the tension gradient, can be used. In particular, by using $\bm p(\bm x)$ = $\sum_m \bm p_m \delta(\bm R_m - \bm x)$ with the delta function $\delta(\bm x)$,
\begin{align}
- \nabla \gamma(\bm x) \propto \sum_m \bm p_m \delta(\bm R_m - \bm x)
\end{align}
can be naively expected from Eq.~\eqref{eq:polarization-tension}.
In this case, the $y$-component of the average $\bm p_m$,
\begin{eqnarray}
P_y = \frac{1}{N_1} \sum_{m \in \Omega_1}\int_{t_0}^{t_1+t_0} dt p_m^y
\end{eqnarray}
and $v^1_y$ are expected to have opposite signs by Eq.~\eqref{eq:Marangoni_flow}. Hence, their product is expected to be negative by naive speculation. Here, $p_m^y$ denotes the $y$-component of $\bm p_m$.
To verify the negativity of the product, $P_yv^1_y$ is plotted in Fig.~\ref{fig:direction}.
Unexpectedly, the product is observed to be positive, which indicates that the tension gradient drives the cell movement along its own direction. Thus, the tissue interface tension, $\gamma(\bm x)$, seemingly induces the cell flow velocity, $\bm v_{\rm Cell}(\bm x)$, given by
\begin{eqnarray}
\bm v_{\rm Cell}(\bm x) \propto - \nabla \gamma(\bm x),
\end{eqnarray}
which differs from that in Eq.~\eqref{eq:Marangoni_flow}.
This prominent difference between the observed flow and the Marangoni flow indicates the existence of a different microscopic mechanism in the effect of the tension gradient.
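The quantity $P_y$ can be obtained in the same way as $v_y^T$ from the recorded polarity components; the following sketch (again with an assumed array layout) makes the sign test of $P_yv_y^1$ explicit.
\begin{verbatim}
import numpy as np

def polarity_average(py, labels, tissue=1):
    """Average y-component of the adhesion polarity over one tissue (P_y),
    with the time integral approximated by a sum over the sampled MCS.

    py     : array (n_steps, n_cells), hypothetical samples of p_m^y.
    labels : array (n_cells,), tissue label T(m) of each cell.
    """
    members = labels == tissue              # Omega_1 for tissue = 1
    return py[:, members].sum(axis=0).sum() / members.sum()

# The plotted quantity is then simply P_y * v_y^1: a negative value would
# match the naive Marangoni expectation, whereas the simulations give a
# positive one.
\end{verbatim}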
Now, we summarize the states identified thus far using the phase diagram depicted in Fig.~\ref{fig:states}(a). The first state corresponds to $\Gamma_p$ lower than $\Gamma_p^{\rm D}$, in which no collective motion is induced and the cells are pinned. Here, $\Gamma_p^{\rm D}$ denotes the value of $\Gamma_p$ at the depinning of collective motion. The second state corresponds to $\Gamma_p$ between $\Gamma_p^{\rm D}$ and $\Gamma_p^{\rm R}$, which induces uniform motion over the two tissues. Here, $\Gamma_p^{\rm R}$ denotes the threshold of $\Gamma_p$ for the inducement of relative motion. The third state corresponds to $\Gamma_p$ between $\Gamma_p^{\rm R}$ and $\Gamma_p^{\rm U}$, which induces relative motion between the two tissues. Here, $\Gamma_p^{\rm U}$ acts as an unstable point for collective motion. When $\Gamma_p$ exceeds $\Gamma_p^{\rm U}$, the state transitions to one without motion ordering. In order, the aforementioned states are referred to as the pinned state, the uniformly moving state, the relatively moving state, and the unsteady state. Of these, the characteristics of the unsteady state are not yet completely known.
To clarify the characteristics of the unsteady state, we examine the cell configurations and polarities in the four typical states, as illustrated in Figs.~\ref{fig:states}(b)-\ref{fig:states}(e).
The pinned state corresponds to $\Gamma_p$ = 0.05, as depicted in Fig.~\ref{fig:states}(b); the uniformly moving state corresponds to $\Gamma_p$ = 0.20, as shown in Fig.~\ref{fig:states}(c); and the relatively moving state corresponds to $\Gamma_p$ = 0.90, as shown in Fig.~\ref{fig:states}(d). All three exhibit the same layered tissue structure as the initial state depicted in Fig.~\ref{fig:initial}. These observations indicate that the layered structure of the tissues is stable against the driving force of collective cell movement. In contrast, the unsteady state corresponding to $\Gamma_p$ = 1.30, as illustrated in Fig.~\ref{fig:states}(e), exhibits a mixing of tissues 1 and 2. This indicates that strong heterophilic adhesion destabilizes the two tissues by overcoming the free energy barrier required to roughen the interface between them. Further, the domain shapes of the tissues are complex in the unsteady state, which introduces a high degree of randomness into the collective motion. This is reflected in the significant fluctuations of $v_y^1$ and $v_y^2$ in Fig.~\ref{fig:order_parameter}(b) when $\Gamma_p$ exceeds $\Gamma_p^{\rm U}$.
\section{Summary and Discussion}
In this paper, we examined the possibility of “cell Marangoni flows” driven by a cell-scale surface tension gradient. As expected, we confirmed relative motion between two tissues induced by the tension gradient, as in the case of the Marangoni flow. However, the direction of this flow was observed to be opposite to that of the Marangoni flow. Further, the flow state was observed only within a range of heterophilic adhesion strengths, which was ascertained to be bounded by the thresholds $\Gamma_p^{\rm R}$ and $\Gamma_p^{\rm U}$.
Now, we discuss two characteristics of this flow. The first concerns the emergence conditions of the relatively moving state with respect to heterophilic adhesion. As its emergence is determined by the threshold values, $\Gamma_p^{\rm R}$ and $\Gamma_p^{\rm U}$, they are now discussed further. First, let us consider $\Gamma_p^{\rm R}$.
Figure~\ref{fig:order_parameter}(c) depicts the acceleration of $v_y^2$ with respect to $v_y^1$ in the range between $\Gamma_p^{\rm D}$ and $\Gamma_p^{\rm R}$ in contrast with that in the range between $\Gamma_p^{\rm R}$ and $\Gamma_p^{\rm U}$.
This indicates that, by forming stable solid-like arrangements at the interface, cells in the first tissue drag the cells in the second tissue, which induces the uniform motion in the uniformly moving state. Further, as illustrated in Fig.~\ref{fig:states}(d), the interface becomes rougher than that shown in Fig.~\ref{fig:states}(c). These observations imply a phase change to a liquid-like state around the interface. Based on this, below $\Gamma_p^{\rm R}$ the interface is expected to remain pinned, which may result from the stability of the solid-like alignment of cells surrounding the interface in the uniformly moving state. Therefore, $\Gamma_p^{\rm R}$ corresponds to the induced melting of the cell alignment at the interface. However, a theoretical evaluation of $\Gamma_p^{\rm R}$ remains elusive.
This melting can also be indirectly confirmed by the difference between the average motion of the cells and that of a single cell in the same tissue in the uniformly and relatively moving states. In Figs.~\ref{fig:displacements}(a)-\ref{fig:displacements}(d), the displacements of a single cell, $\Delta x(t)$ and $\Delta y(t)$, are plotted together with those averaged over all cells, $\overline{ \Delta x(t)}$ and $\overline{\Delta y(t)}$. Here, their origins are set to zero at $t$ = 0, and they are plotted as functions of $t$ over a short period, where $t$ = 0 corresponds to $t = t_0$ in the simulation.
For the uniformly moving state, corresponding to values below $\Gamma_p^{\rm R}$, $\Gamma_p =0.20$ is chosen. The positions of the average and single cells exhibit essentially identical behavior for $T$=1, as illustrated in Fig.~\ref{fig:displacements}(a), and $T=2$, as illustrated in Fig.~\ref{fig:displacements}(b), except for $\Delta x(t)$ in $T$ = 1. Even the single-cell $\Delta x(t)$ for $T$ = 1 is at most confined within a single-cell-size displacement of the average, namely $|\Delta x(t) - \overline{\Delta x(t)}| \lesssim 2\sqrt{A/\pi}$. These observations indicate that the cells behave as a uniform solid in this case. Unlike the uniformly moving state, in the relatively moving state, the $T=1$ tissue is fluidized. To verify this, the motion of $T=1$ is plotted in Fig.~\ref{fig:displacements}(c) and that of $T=2$ is plotted in Fig.~\ref{fig:displacements}(d) for the relatively moving state corresponding to $\Gamma_p$ = 0.9.
As depicted in Fig.~\ref{fig:displacements}(c), a single cell actively moves in the $x$-direction beyond the single-cell size and accordingly varies its velocity in the $y$-direction, as evidenced by the slope of $\Delta y(t)$. This behavior is prominently different from that of the average cells. This difference implies that the tissue $T=1$ is in a liquid phase and its velocity depends on the position in the $x$-direction. In particular, the position $\Delta x(t)$ is observed to be negatively correlated with the slope of $\Delta y(t)$. Therefore, owing to the melting of the tissue $T=1$, a laminar flow is expected, which exhibits high velocity near the interface. Further, a single cell remains more stable at positions with high $\Delta x(t)$ than at those with low $\Delta x(t)$. Because $\Delta x(t)$ decreases as the cell approaches the interface, this indicates melting at positions near the interface.
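The single-cell-size criterion used above can be stated as a simple test; the sketch below assumes the single-cell and tissue-averaged trajectories are available as arrays and that $A$ is the target cell area of the model.
\begin{verbatim}
import numpy as np

def wanders_beyond_cell_size(dx_single, dx_mean, A):
    """True if the tracked cell deviates from the tissue-averaged trajectory
    by more than one cell diameter 2*sqrt(A/pi), i.e. the fluidized case."""
    cell_diameter = 2.0 * np.sqrt(A / np.pi)
    return np.max(np.abs(dx_single - dx_mean)) > cell_diameter
\end{verbatim}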
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\linewidth]{Fig7.png}
\caption{(Color online) $x(t)$ and $y(t)$ for the uniformly moving state with (a) $T=1$ and $\Gamma_p$=0.2; (b) $T=2$ and $\Gamma_p$=0.2, and for the relatively moving state with (c) $T=1$ and $\Gamma_p$=0.9; (d) $T=2$ and $\Gamma_p$=0.9. The origin ($t=t_0$) is set to zero for all $x(t)$ and $y(t)$. The symbols $+$, $\times$, $+$ \hspace{-3.0mm} {$\times$}, $\boxdot$ denote the average $x$ coordinate over all cells in the tissue, the average $y$ coordinate over all cells in the tissue, the $x$ coordinate of a single cell, and the $y$ coordinate of a single cell, respectively.}
\label{fig:displacements}
\end{center}
\end{figure}
Now, let us consider $\Gamma_p^{\rm U}$. It is expected to be the threshold for the invasion of cells with $T(m)$ = 1 into the tissue with $T(m)$ = 2, because of the mixing of tissues illustrated in Fig.~\ref{fig:states}(e).
The threshold for the invasion is estimated as $\Gamma_p^{\rm U} \gtrsim [\Gamma_{\rm I} - \Gamma_1/2] / \max_{\bm r}\rho_p(\bm p_m, \bm r) \simeq 1.0$, an underestimate obtained using the maximum strength of the heterophilic cell-cell adhesion.
This estimated value is consistent with the observed value of 1.2 depicted in Fig.~\ref{fig:order_parameter}(b) and directly confirms that $\Gamma_p^{\rm U}$ is determined by the invasion.
The other characteristic to be discussed is the direction of the flow, which is opposite to that of the Marangoni flow. This direction is the one naturally associated with a self-propelled droplet \cite{Levan:1981,Wasan:2001}, and here it is induced by cell movement under a cell-scale surface tension gradient. The most important feature of the cell-movement mechanism that contributes to this difference is that cells move by shape deformation. The peripheral part of the cell, i.e., that corresponding to low tension and high adhesion, is easily extensible \cite{DiMilla:1991,Huttenlocher:1995, Matsushita:2018, Okuda:2021}.
Therefore, the cell moves from regions with high tension to those with low tension. This is in contrast with the direction of the Marangoni flow, which is from low to high tension. This explains the difference between the flow directions.
Lastly, we consider the possibility of confirming the discussed scenario through experimental observations. The conclusions of this paper are based on the existence of polarization in the concentration of heterophilic adhesion molecules. Thus, the hypothesis may be verified by directly examining this polarization. In addition, this paper predicts the existence of a threshold for the strength of adhesion that determines the existence of relative motion between tissues. This prediction may be confirmed by observing the slug velocity under controlled light intensity \cite{Poff:1973,Poff:1984}. If phototaxis is based on the polarization of heterophilic adhesion, the prediction stated in this paper implies the existence of a light-intensity threshold for phototaxis in dicty.
For such experimental confirmation, clarifying the molecular basis of the heterophilic adhesion is important. Possible candidates for the heterophilic adhesion molecules are TgrB1 and TgrC1, which are expressed in the dicty slug \cite{Siu2004}. Further, the distribution of these adhesion molecules on the cell membrane exhibits a polarization \cite{Fujimori:2019}. Such polarization of the adhesion molecules is precisely the feature required to induce the cell Marangoni effect at tissue interfaces. However, the discussed scenario assumes that different heterophilic adhesion molecules are expressed in the two tissues. In the case of dicty, this corresponds to the situation in which TgrB1 (or C1) acts only in the prestalk region of the slug and TgrC1 (or B1) acts only in the remaining region (see Fig.~\ref{fig:phototaxis}(a)).
To the best of our knowledge, such separated region-specific action of heterophilic adhesion molecules has not been observed. At the very least, the genes encoding TgrB1 and TgrC1 have a common promoter region and are usually transcribed simultaneously \cite{Hirose:2017}.
Therefore, direct confirmation of the cell Marangoni flow would require direct confirmation of the in situ functional adhesion activity of TgrB1 and TgrC1 on the cell membrane.
\begin{comment}
{\color{blue} Besides dicty, possibilities of phenomena using cell Marangoni flow are expected. Here, we mention a possibility. Cell Marangoni flow can drive a relative motion between tissues.
Therefore, the flow can induce a tissue-layered rearrangement in organogenesis as a driving force.
There exists that of looping morphogenesis in {\it Drosophila} male terminalia \cite{Kuranaga:2011}
as such relative motion between layers. The relative motion of the A8 layer occurs with respect to the other layers. In this case, edge contraction force with polarity \cite{Sato:2015a, Sato:2015b, Sato:2017,Okuda:2019} on the boundary of the layers be partially directed by the polarization in adhesion. Now, the heterophilic adhesion in this looping morphogenesis is not shown.}
\end{comment}
\begin{acknowledgments}
We thank M. Kikuchi and H. Yoshino for support with research resources. We also thank I. Shibano for helpful comments on the coexpression of TgrB1/TgrC1 and S. Yabunaka for information on self-propelled droplets with an interfacial tension. This work is supported by JSPS KAKENHI (Grant Numbers 19K03770 and 18K13516) and by AMED (Grant Number JP19gm1210007).
We acknowledge the use of the supercomputers of ISSP.
\end{acknowledgments}
\section{Introduction}\label{section_intro}
Federated learning (FL) has drawn a lot of attention recently due to its wide applications in modern machine learning. Canonical FL aims at solving the following finite-sum problem~\cite{konevcny2016federated,mcmahan2017communication,kairouz2021advances}:
\begin{equation}\label{finite_sum}
\min_{x\in\mathbb{R}^d} f(x):=\frac{1}{n}\sum_{i=1}^{n}f_i(x),
\end{equation}
where each of the $f_i$ (or the data associated with $f_i$) is stored on a different client/agent that could have a different physical location and different hardware. This makes mutual connection among the clients impossible~\cite{konevcny2016federated}. Therefore, there is a central server that can collect the information from different agents and output a consensus that minimizes the summation of the loss functions from all the clients. The aim of such a framework is to utilize the computation resources of different agents while still maintaining data privacy by not sharing data among the local agents. Thus the communication is always between the central server and the local agents. This setting is commonly observed in modern smartphone-app based machine learning applications~\cite{konevcny2016federated}. We emphasize that we always consider the heterogeneous data scenario where the functions $f_i$'s might be different and have different optimal solutions. This problem is inherently hard to solve because each local minimum empirically drives the updates away from the global optimum~\cite{li2020federated,mitra2021linear}.
In this paper, we consider the following FL problem over a Riemannian manifold $\M$:
\begin{equation}\label{problem_finite_sum}
\min_{x\in\M} f(x):=\frac{1}{n}\sum_{i=1}^{n}f_i(x)
\end{equation}
where $f_i:\M\rightarrow\RR$ are smooth but not necessarily (geodesically) convex. It is noted that most FL algorithms are designed for the unconstrained or convex-constrained setting \cite{konevcny2016federated,mcmahan2017communication, karimireddy2020scaffold, li2019convergence, malinovskiy2020local, charles2021convergence, pathak2020fedsplit, mitra2021linear}, and FL problems with nonconvex constraints such as \eqref{problem_finite_sum} have not been considered. The main difficulty in solving \eqref{problem_finite_sum} lies in aggregating points over a nonconvex set, where the averaged point may fall outside of the constraint set.
One motivating application of \eqref{problem_finite_sum} is the federated kPCA problem
\begin{equation}\label{problem_kPCA}
\min_{X\in\St(d, r)} f(X):=\frac{1}{n}\sum_{i=1}^{n}f_i(X),\ \mbox{ where } f_i(X)=-\frac{1}{2}\tr(X^\top A_i X),
\end{equation}
where $\St(d, r)=\{X\in\RR^{d\times r}| X^\top X=I_r\}$ denotes the Stiefel manifold, and $A_i$ is the covariance matrix of the data stored in the $i$-th local agent. When $r=1$, \eqref{problem_kPCA} reduces to classical PCA
\begin{equation}\label{problem_PCA}
\min_{\|x\|_2=1} f(x):=\frac{1}{n}\sum_{i=1}^{n}f_i(x),\ \mbox{ where } f_i(x)=-\frac{1}{2}x^\top A_i x.
\end{equation}
Existing FL algorithms are not applicable to \eqref{problem_kPCA} and \eqref{problem_PCA} due to the difficulty on aggregating points on nonconvex set.
\subsection{Main Contributions}
We focus on designing efficient federated algorithms for solving \eqref{problem_finite_sum}. Our main contributions are:
\begin{enumerate}[leftmargin=*]
\item We propose a Riemannian federated SVRG algorithm (\texttt{RFedSVRG}) for solving \eqref{problem_finite_sum}. We prove that the convergence rate of our RFedSVRG algorithm is $\mathcal{O}(1/\epsilon^2)$ for obtaining an $\epsilon$-stationary point. This result matches that of its Euclidean counterparts~\cite{mitra2021linear}. To the best of our knowledge, this is the first algorithm for solving FL problems over Riemannian manifolds with convergence guarantees.
\item The main novelty of our \texttt{RFedSVRG} algorithm is a consensus step on the tangent space of the manifold. We compare this new approach with the widely used Karcher mean approach. We show that our method achieves a certain "regularization" property and performs very well in practice.
\item We conduct extensive numerical experiments on our method for solving the PCA \eqref{problem_PCA} and kPCA \eqref{problem_kPCA} problems with both synthetic and real data. The numerical results demonstrate that our \texttt{RFedSVRG} algorithm significantly outperforms the Riemannian counterparts of two widely used FL algorithms: \texttt{FedAvg} \cite{mcmahan2017communication} and \texttt{FedProx} \cite{li2020federated}.
\end{enumerate}
\subsection{Related Work}
\textbf{Federated optimization.}
The most natural idea for FL is the \texttt{FedAvg} algorithm \cite{mcmahan2017communication}, which averages local gradient descent updates and yields good empirical convergence. However, in the data-heterogeneous situation, \texttt{FedAvg} suffers from the client-drift effect, in which each local client drifts the solution towards the minimum of its own local loss function \cite{karimireddy2020scaffold, li2019convergence, malinovskiy2020local, charles2021convergence, pathak2020fedsplit, mitra2021linear}. Many ideas were studied to resolve this issue. For example, \cite{li2020federated} proposed the \texttt{FedProx} algorithm, which regularizes each of the local gradient descent updates to ensure that the local iterates are not far from the previous consensus point. \texttt{FedSplit}~\cite{pathak2020fedsplit} was proposed later to further mitigate the client-drift effect, and convergence results were obtained for convex problems.
\texttt{FedNova}~\cite{wang2020tackling} was also proposed to improve the performance of \texttt{FedAvg}; however, it still suffers from a fundamental speed-accuracy conflict under objective heterogeneity~\cite{mitra2021linear}. Variance reduction techniques were also incorporated into FL, leading to two new algorithms: federated SVRG (\texttt{FSVRG}) \cite{konevcny2016federated} and \texttt{FedLin} \cite{mitra2021linear}. These two algorithms require transmitting the full gradient from the central server to each local client for the local gradient updates, and therefore require more communication between the clients and the central server. Nevertheless, \texttt{FedLin} achieves the theoretical lower bound for strongly convex objective functions~\cite{mitra2021linear} with an acceptable increase in the communication cost.
\textbf{Decentralized optimization on manifolds.} Decentralized distributed optimization on manifolds has also drawn attention in recent years~\cite{chen2021decentralized, shah2017distributed,alimisis2021distributed}. Under this setting, each local agent solves a local problem and then the central server takes the consensus step. The consensus step is usually done by calculating the Karcher mean on the manifold~\cite{tron2012riemannian,shah2017distributed}, or by calculating the minimizer of the sum of the squared Euclidean distances in the embedded submanifold case~\cite{chen2021decentralized}. Such consensus steps usually require solving an additional problem inexactly, with no exact convergence rate guarantee~\cite{tron2012riemannian, chen2021local}.
It is worth mentioning that the PCA problem under federated learning setting has been considered in the literature \cite{grammenos2020federated}. The proposed method in~\cite{grammenos2020federated} relies on the SVD of data matrices and a subspace merging technique, which is very different from our method. The aim of the algorithm in~\cite{grammenos2020federated} is to achieve $(\epsilon,\delta)$-differential privacy. In contrast, we mainly consider the convergence rate of our method. Therefore our work is totally different from \cite{grammenos2020federated}.
\section{Preliminaries on Riemannian Optimization}
In this part, we briefly review the basic tools we use for optimization on Riemannian manifolds~\cite{absil2009optimization,lee2006riemannian,Tu2011manifolds,boumal2022intromanifolds}. Due to the limit of space, more detailed discussions are given in supplementary material \ref{appendix_manifold}. Suppose $\M$ is an $m$-dimensional Riemannian manifold with Riemannian metric $g:T\M\times T\M\rightarrow\RR$. We first review the notion of the Riemannian gradients.
\begin{definition}[Riemannian gradients]
For a Riemannian manifold with Riemannian metric $g$, the Riemannian gradient for $f\in C^\infty(\M)$ is the unique tangent vector $\grad f(x)\in T_x\M$ such that $df(\xi) = g(\grad f, \xi),\ \forall \xi\in T_x\M$, where $d f$ is the differential of function $f$ defined as $d f(\xi):=\xi(f)$.
\end{definition}
For the convergence analysis, we also need the notions of the exponential mapping and parallel transport. We first review the definition of the exponential mapping.
\begin{definition}[Exponential mapping]
Given $x\in\M$ and $\xi\in T_x\M$, the exponential mapping $\Exp_x$ is defined as a mapping from $T_x\M$ to $\M$ s.t. $\Exp_x(\xi):= \gamma(1)$ with $\gamma$ being the geodesic with $\gamma(0)=x$, $\Dot{\gamma}(0)=\xi$. A natural corollary is $\Exp_x(t\xi)= \gamma(t)$ for $t\in[0, 1]$. Another useful fact is $d(x,\Exp_x(\xi))=\|\xi\|_x$, since the geodesic $\gamma$ has constant speed $\|\gamma'(0)\|=\|\xi\|_x$.
\end{definition}
Throughout this paper, we always assume that $\M$ is complete, so that $\Exp_x$ is always defined for every $\xi\in T_x\M$. For $\forall x,y\in\M$, the inverse of the exponential mapping $\Exp_{x}^{-1}(y)\in T_x\M$ is called the logarithm mapping, and we have $d(x,y)=\|\Exp_{x}^{-1}(y)\|_x$, which will be a useful fact in the convergence analysis. We now present the definition of parallel transport.
\begin{definition}[Parallel transport]
Given a Riemannian manifold $(\M, g)$ and two points $x,y\in\M$, the parallel transport $P_{x\rightarrow y}:T_x\M\rightarrow T_y\M$\footnote{Notice that the existence of parallel transport depends on the curve connecting $x$ and $y$, which is not a problem for complete Riemannian manifold because we always take the unique geodesic that connects $x$ and $y$.} is a linear operator which keeps the inner product: $\forall \xi,\zeta\in T_x\M$, we have $\langle P_{x\rightarrow y}\xi, P_{x\rightarrow y}\zeta\rangle_y = \langle\xi, \zeta\rangle_x$.
\end{definition}
Parallel transport is useful since the Lipschitz condition for the Riemannian gradient requires moving the gradients in different tangent spaces "parallel" to the same tangent space.
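For the unit sphere, which we use repeatedly below, the exponential mapping, the logarithm mapping and the parallel transport along the connecting geodesic all have closed forms. The following Python sketch records them; it is a minimal illustration and not the implementation used in our experiments (which relies on \texttt{Manopt}/\texttt{PyManopt}).
\begin{verbatim}
import numpy as np

def sphere_exp(x, xi):
    """Exponential mapping Exp_x(xi) on the unit sphere."""
    nrm = np.linalg.norm(xi)
    if nrm < 1e-12:
        return x
    return np.cos(nrm) * x + np.sin(nrm) * xi / nrm

def sphere_log(x, y):
    """Logarithm mapping Exp_x^{-1}(y) on the unit sphere."""
    proj = y - np.dot(x, y) * x                 # component orthogonal to x
    nrm = np.linalg.norm(proj)
    if nrm < 1e-12:
        return np.zeros_like(x)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))  # geodesic distance
    return theta * proj / nrm

def sphere_transport(x, y, v):
    """Parallel transport of v in T_x M to T_y M along the geodesic."""
    u = sphere_log(x, y)
    theta = np.linalg.norm(u)
    if theta < 1e-12:
        return v
    e = u / theta
    a = np.dot(e, v)
    return v + (np.cos(theta) - 1.0) * a * e - np.sin(theta) * a * x
\end{verbatim}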
We now present the definition of Lipschitz smoothness and convexity on Riemannian manifolds, which will be utilized in our convergence analysis.
\begin{definition}[$L$-smoothness on manifolds]\label{assumption_manifold_smooth}
$f$ is called Lipschitz smooth on manifold $\M$ if there exists $L\geq0$ such that the following inequality holds for function $f$:
\begin{equation}\label{eq:lgsmoothness1}
\|\grad f(y) - P_{y\rightarrow x}\grad f(x)\|\leq L d(x,y).
\end{equation}
For a complete Riemannian manifold, we have~\cite{zhang2016first}:
\begin{equation}\label{eq:lgsmoothness2}
f(y) \leq f(x)+\left\langle \grad f(x), \Exp_{x}^{-1}(y)\right\rangle_{x}+\frac{L}{2} d^{2}(x, y),\ \forall x,y\in\M.
\end{equation}
\end{definition}
The definition of geodesic convexity is given below (see, e.g., \cite{zhang2016first}).
\begin{definition}[Geodesic convex]\label{assumption_geodesic_convex}
A function $f\in C^1(\M)$ is geodesically convex if for all $x,y\in\M$, there exists a geodesic $\gamma$ such that $\gamma(0)=x$, $\gamma(1)=y$ and
$$
f(\gamma(t))\leq (1-t)f(x)+t f(y),\ \forall t\in[0,1].
$$
Or equivalently,
$$
f(y)\geq f(x) + \langle \grad f(x), \Exp_{x}^{-1}(y) \rangle_x.
$$
\end{definition}
\section{The RFedSVRG Algorithm}
The most challenging task for FL on Riemannian manifolds is the consensus step. Suppose the central server receives $x^{(i)}$, $i\in S_t\subset[n]$ from each of the local clients at round $t$, the question is how the central server aggregates the points to output a unique consensus. In Euclidean space, the most straightforward way is to take the average $\frac{1}{k}\sum_{i\in S_t}x^{(i)}$ with $k=|S_t|$. However, this approach does not apply to the Riemannian setting due to the loss of linearity: the arithmetic average of points can be outside of the manifold. A natural choice for the consensus step on the manifold is to take the Karcher mean of the points \cite{tron2012riemannian}:
\begin{equation}\label{karcher_mean}
x_{t+1}\leftarrow\argmin_x \frac{1}{k}\sum_{i\in S_t}d^2(x, x^{(i)}),
\end{equation}
where $x_{t+1}$ is the next iterate point on the central server. This is a natural generalization of the arithmetic average because $d^2(x,y)=\|x-y\|^2$ in Euclidean space. However, solving \eqref{karcher_mean} can be time consuming in practice.
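For instance, one simple way to approximate \eqref{karcher_mean} on the unit sphere is Riemannian gradient descent, using the fact that the Riemannian gradient of $\frac{1}{k}\sum_{i} d^2(x, x^{(i)})$ is $-\frac{2}{k}\sum_{i}\Exp_x^{-1}(x^{(i)})$. The sketch below, which reuses the sphere helpers from the preliminaries, is illustrative only and not the routine used in our experiments.
\begin{verbatim}
import numpy as np
# assumes sphere_exp and sphere_log from the earlier sketch

def karcher_mean(points, x0, step=0.5, max_iter=100, tol=1e-6):
    """Approximate Karcher mean on the unit sphere by Riemannian
    gradient descent started at x0."""
    x = x0
    for _ in range(max_iter):
        g = -2.0 * np.mean([sphere_log(x, p) for p in points], axis=0)
        if np.linalg.norm(g) < tol:
            break
        x = sphere_exp(x, -step * g)
    return x
\end{verbatim}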
We propose the following tangent space consensus step:
\begin{equation}\label{tangent_space_mean}
x_{t+1}\leftarrow \Exp_{x_{t}}\left(\frac{1}{k}\sum_{i\in S_t}\Exp_{x_{t}}^{-1}(x^{(i)})\right),
\end{equation}
where we map each of the points $x^{(i)}$ to the tangent space $T_{x_t}\M$ via the logarithm mapping and then take their average on the tangent space. The consensus step \eqref{tangent_space_mean} has several advantages over the Karcher mean method \eqref{karcher_mean}. First, \eqref{tangent_space_mean} is in closed form and easy to compute. Second, \eqref{tangent_space_mean} still coincides with the arithmetic mean when the manifold reduces to the Euclidean space. Third, the tangent space mean \eqref{tangent_space_mean} can easily be extended to the following moving average mean:
\[
\Exp_{x_{t}}\left(\frac{\beta}{k}\sum_{i\in S_t}\Exp_{x_{t}}^{-1}(x^{(i)})\right),
\]
which corresponds to $(1-\beta)x_t+\frac{\beta}{k}\sum_{i\in S_t}x^{(i)}$ in the Euclidean space, while the Karcher mean cannot be easily extended in this scenario. Last, \eqref{tangent_space_mean} has the following "regularization" property: the distance between two consecutive consensus points can be controlled, whereas the Karcher mean method \eqref{karcher_mean} does not have this kind of property.
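A minimal sketch of the consensus step \eqref{tangent_space_mean} on the unit sphere (again reusing the helpers above; $\beta<1$ gives the moving average variant) reads as follows and is illustrative only.
\begin{verbatim}
import numpy as np
# assumes sphere_exp and sphere_log from the earlier sketch

def tangent_space_mean(x_t, points, beta=1.0):
    """Map the received points to T_{x_t}M, average there, and map back."""
    xi_bar = np.mean([sphere_log(x_t, p) for p in points], axis=0)
    return sphere_exp(x_t, beta * xi_bar)
\end{verbatim}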
\begin{lemma}\label{lemma_regularization_tangent_mean}
For the update defined in \eqref{tangent_space_mean}, it holds that
$$
d(x_{t+1}, x_t)\leq \frac{1}{k}\sum_{i\in S_t} d(x^{(i)}, x_t).
$$
\end{lemma}
To further illustrate this "regularization" property of the tangent space mean \eqref{tangent_space_mean}, we consider an (extreme) example on the unit sphere $\mathcal{S}^2$ (see Figure \ref{fig:consensus}). Here we take $x_t$ at the north pole and two points from the local agents as $x^{(1)}$ and $x^{(2)}$, with $\xi^{(i)}=\Exp_{x_t}^{-1}(x^{(i)})\in T_{x_t}\M$. Then the tangent space mean \eqref{tangent_space_mean} would yield the original point $x_t$, whereas the Karcher mean could yield any point on the vertical great circle, depending on the starting point used in solving the optimization problem \eqref{karcher_mean}.
\begin{figure}
\centering
\begin{tikzpicture}[
point/.style = {draw, circle, fill=black, inner sep=0.7pt}, scale = 0.8
]
\def2cm{2cm}
\coordinate (O) at (0,0);
\coordinate (N) at (0,2cm);
\filldraw[ball color=white] (O) circle [radius=2cm];
\draw[dashed]
(0, 2cm) arc [start angle=90,end angle=-90,x radius=5mm,y radius=2cm];
\draw
(0, 2cm) arc [start angle=90,end angle=270,x radius=5mm,y radius=2cm];
\begin{scope}[xslant=0.5,yshift=2cm,xshift=-2]
\filldraw[fill=gray!10,opacity=0.3]
(-4.5,1) -- (2.5,1) -- (3,-1) -- (-4,-1) -- cycle;
\node at (2,0.6) {$T_{x_t}\mathcal{S}^2$};
\end{scope}
\draw[dashed]
(N) node[above] {$x_t$} -- (O) node[below] {$O$};
\node[point] at (N) {};
\draw[line width=1pt,blue,-stealth](0,2cm)--(pi,2cm) node[anchor=north east]{$\xi^{(1)}$};
\draw[line width=1pt,red,-stealth](0,2cm)--(-pi,2cm) node[anchor=south west]{$\xi^{(2)}$};
\node[point] at (2, 0) {};
\node[right] at (2, 0) {$x^{(1)}$};
\node[point] at (-2, 0) {};
\node[left] at (-2, 0) {$x^{(2)}$};
\end{tikzpicture}
\caption{Comparison of two consensus methods on $\mathcal{S}^2$}
\label{fig:consensus}
\end{figure}
Our \texttt{RFedSVRG} algorithm is presented in Algorithm \ref{manifold_fedsvrg}, which is a non-trivial manifold extension of the FSVRG algorithm \cite{konevcny2016federated}.
For \texttt{RFedSVRG}, the local gradient update becomes
\begin{equation}\label{local_update_fedsvrg}
x_{\ell+1}^{(i)}\leftarrow \Exp_{x_{\ell}^{(i)}}\left[-\eta^{(i)} \left(\grad f_i(x_{\ell}^{(i)}) - P_{x_t\rightarrow x_{\ell}^{(i)}}(\grad f_i(x_t) - \grad f(x_t))\right)\right],
\end{equation}
which matches the existing manifold SVRG work \cite{zhang2016fast}. The introduction of the parallel transport $P_{x_t\rightarrow x_{\ell}^{(i)}}$ is necessary because we need to "transport" all the vectors to the same tangent space in order to conduct addition and subtraction. The algorithm utilizes the gradient information at the previous iterate, $\grad f(x_t)$, thus avoiding the "client-drift" effect and converging correctly to global stationary points. This is confirmed by both the theory and the numerical experiments.
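For concreteness, the local step \eqref{local_update_fedsvrg} takes the following form on the unit sphere for the PCA loss \eqref{problem_PCA}; here the full gradient $\grad f(x_t)$ is assumed to be transmitted by the central server, and the sphere helpers from the preliminaries are reused. This is a sketch rather than the implementation used in our experiments.
\begin{verbatim}
import numpy as np
# assumes sphere_exp and sphere_transport from the earlier sketch

def pca_rgrad(A, x):
    """Riemannian gradient of f_i(x) = -x^T A x / 2 on the unit sphere:
    the Euclidean gradient -A x projected onto the tangent space at x."""
    g = -A @ x
    return g - np.dot(x, g) * x

def local_rfedsvrg_step(A_i, x_local, x_t, grad_f_xt, eta):
    """One variance-reduced local update: the correction computed at the
    consensus point x_t is parallel-transported to T_{x_local}M."""
    correction = pca_rgrad(A_i, x_t) - grad_f_xt       # lives in T_{x_t}M
    v = pca_rgrad(A_i, x_local) - sphere_transport(x_t, x_local, correction)
    return sphere_exp(x_local, -eta * v)
\end{verbatim}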
\begin{algorithm}[ht]
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetAlgoLined
\Input{$n$, $k$, $T$, $\{\eta^{(i)}\}$, $\{\tau_i\}$}
\Output{\textbf{Option 1:} $\Tilde{x}=x_T$; or \textbf{Option 2:} $\Tilde{x}$ is uniformly sampled from $\{x_1,...,x_T\}$}
\For{$t=0,...,T-1$}{
Uniformly sample $S_t\subset [n]$ with $|S_t|=k$\;
\For{each agent $i$ in $S_t$}{
Receive $x_0^{(i)}=x_t$ from the central server\;
\For{$\ell=0,...,\tau_i-1$}{
Take the local gradient step \eqref{local_update_fedsvrg}.
}
Send $\hat{x}^{(i)}$ (obtained by one of the following options) to the central server
\begin{itemize}
\item {\textbf{Option 1:} $\hat{x}^{(i)}=x_{\tau_i}^{(i)}$;}
\item {\textbf{Option 2:}} $\hat{x}^{(i)}$ is uniformly sampled from $\{x_{1}^{(i)},...,x_{\tau_i}^{(i)}\}$\;
\end{itemize}
}
The central server aggregates the points by the tangent space mean \eqref{tangent_space_mean}\;
}
\caption{Riemannian FedSVRG Algorithm (RFedSVRG)}\label{manifold_fedsvrg}
\end{algorithm}
\section{Convergence analysis}
In this section we analyze the convergence behaviour of the \texttt{RFedSVRG} algorithm (Algorithm \ref{manifold_fedsvrg}). Before we proceed to the convergence results, we briefly review the necessary assumptions, which are standard assumptions for optimization on manifolds~\cite{zhang2016first,boumal2018global}.
\begin{assumption}[Smoothness]\label{assumption_smoothness}
Suppose $f_i$ is $L_i$-smooth as defined in Definition \ref{assumption_manifold_smooth}. It implies that $f$ is $L$-smooth with $L=\sum_{i=1}^{n}L_i$.
\end{assumption}
Now we give the convergence rate results for Algorithm \ref{manifold_fedsvrg}. Specifically, Theorem \ref{thm_nonconvex1} gives the convergence rate of Algorithm \ref{manifold_fedsvrg} with $\tau_i=1$, Theorem \ref{thm_nonconvex1.1} gives the convergence rate of Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$, and Theorem \ref{thm_geodesic_convex} gives the convergence rate of Algorithm \ref{manifold_fedsvrg} when the objective function is geodesically convex.
\begin{theorem}[Nonconvex, Algorithm \ref{manifold_fedsvrg} with $\tau_i=1$]\label{thm_nonconvex1}
Suppose the problem \eqref{problem_finite_sum} satisfies Assumption \ref{assumption_smoothness}. If we run Algorithm \ref{manifold_fedsvrg} with \textbf{Option 1} in Line 8, $\eta^{(i)}\leq \frac{1}{L}$ and $\tau_i=1$ (i.e., only one step of gradient update for each agent), then the output of Algorithm \ref{manifold_fedsvrg} under \textbf{Option 1} satisfies:
\begin{equation}\label{thm-ineq}
\min_{t=0,...,T}\|\grad f(x_t)\|^2\leq \mathcal{O}\left(\frac{L (f(x_0)-f(x^*))}{T}\right).
\end{equation}
\end{theorem}
\begin{remark}\label{remark_multiple_innersteps}
Our proof of Theorem \ref{thm_nonconvex1} relies heavily on the choice of $\tau_i=1$ and the consensus step \eqref{tangent_space_mean}.
When $\tau_i>1$, we need to introduce multiple exponential mappings at multiple points for each iteration, which makes the convergence analysis much more challenging due to the loss of linearity. Moreover, the aggregation step makes the situation even worse. However, we are able to show the convergence of Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$ when $k=1$. Our numerical experiments show the effectiveness of the \texttt{RFedSVRG} algorithm with both $\tau_i=1$ and $\tau_i>1$.
\end{remark}
To prove the convergence of Algorithm \ref{manifold_fedsvrg} with $\tau_i> 1$, we also need the following regularization assumption over the manifold $\M$~\cite{zhang2016fast}.
\begin{assumption}[Regularization over manifold]\label{assumption_regu_manifold}
The manifold is complete and there exists a compact set $\mathcal{D}\subset \M$ (with diameter bounded by $D$) such that all the iterates of Algorithm \ref{manifold_fedsvrg} and the optimal points are contained in $\mathcal{D}$. The sectional curvature is bounded in $[\kappa_{\min}, \kappa_{\max}]$. Moreover, we denote the following key geometrical constant that captures the impact of the manifold curvature:
\begin{equation}\label{zeta_eq}
\zeta= \begin{cases}\frac{\sqrt{\left|\kappa_{\min }\right|} D}{\tanh \left(\sqrt{\left|\kappa_{\min }\right|} D\right)}, & \text { if } \kappa_{\min }<0 \\ 1, & \text { if } \kappa_{\min } \geq 0.\end{cases}
\end{equation}
\end{assumption}
Notice that this assumption holds when the manifold is a sphere or a Stiefel manifold (since they are compact). Now we are ready to give the convergence rate result of Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$ and $k=1$, the proof of which is inspired by~\cite{zhang2016fast}.
\begin{theorem}[Nonconvex, Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$ and $k=1$]\label{thm_nonconvex1.1}
Suppose the problem \eqref{problem_finite_sum} satisfies Assumptions \ref{assumption_smoothness} and \ref{assumption_regu_manifold}. If we run Algorithm \ref{manifold_fedsvrg} with \textbf{Option 2} in Line 8, $k=1$, $\tau_i=\tau>1$, $\eta^{(i)}=\eta\leq \mathcal{O}(\frac{1}{n L \zeta^2})$, then the output of Algorithm \ref{manifold_fedsvrg} under \textbf{Option 2} satisfies:
\[
\E\|\grad f(\Tilde{x})\|^2\leq \mathcal{O}\left(\frac{\rho (f(x_0)-f(x^*))}{\tau T}\right),
\]
where $\rho$ is an absolute constant specified in the proof and the expectation is taken with respect to the random index $i$, as well as the randomness introduced by the \textbf{Option 2}.
\end{theorem}
Finally, we have the convergence result when the objective function of \eqref{problem_finite_sum} is geodesically convex.
\begin{theorem}[Geodesic convex]\label{thm_geodesic_convex}
Suppose the problem \eqref{problem_finite_sum} satisfies Assumptions \ref{assumption_smoothness} and \ref{assumption_regu_manifold}. Also suppose the functions $f_i$'s are geodesically convex (see Definition \ref{assumption_geodesic_convex}) in $\mathcal{D}$ (as in Assumption \ref{assumption_regu_manifold}). If we run Algorithm \ref{manifold_fedsvrg} with \textbf{Option 1} in Line 8, $\tau_i=1$, $S_t=[n]$ (full parallel gradient), and $\eta=\eta^{(1)}=\cdots=\eta^{(n)}\leq \frac{1}{2 L}$, then the output of Algorithm \ref{manifold_fedsvrg} under \textbf{Option 1} satisfies:
\begin{equation}
f(x_T) - f^*\leq \mathcal{O}\left(\frac{L d^2(x_0,x^*)}{T}\right).
\end{equation}
\end{theorem}
\section{Numerical experiments}
We now show the performance of \texttt{RFedSVRG} and compare it with two natural ideas for solving \eqref{problem_finite_sum}: Riemannian FedAvg (\texttt{RFedAvg}) and Riemannian FedProx (\texttt{RFedProx}), which are natural extensions of FedAvg \cite{mcmahan2017communication} and FedProx \cite{li2020federated} to the Riemannian setting. Algorithms \texttt{RFedAvg} and \texttt{RFedProx} are described in Algorithm \ref{manifold_fedavg} and Algorithm \ref{manifold_fedprox} in the supplementary material. We conducted our experiments on a desktop with an Intel Core 9600K CPU, 32GB RAM and an NVIDIA GeForce RTX 2070 GPU. For the operations on Riemannian manifolds we used the implementations from the \texttt{Manopt} and \texttt{PyManopt} packages~\cite{manopt,pymanopt}. Since the logarithm mapping (the inverse of the exponential mapping) on the Stiefel manifold is not easy to compute \cite{zimmermann2021computing}, we adopted the projection-like retraction~\cite{absil2012projection} and its inverse~\cite{kaneko2012empirical} to approximate the exponential and the logarithm mappings, respectively.
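For reference, one common form of the projection-like retraction on the Stiefel manifold maps $X+\xi$ back to $\St(d,r)$ through its polar decomposition; a sketch, together with the tangent-space projection used to form Riemannian gradients, is given below (the inverse retraction of \cite{kaneko2012empirical} is more involved and omitted here).
\begin{verbatim}
import numpy as np

def stiefel_tangent_proj(X, G):
    """Project an ambient matrix G onto the tangent space of St(d, r) at X."""
    sym = (X.T @ G + G.T @ X) / 2.0
    return G - X @ sym

def stiefel_retract(X, xi):
    """Projection-like retraction: the closest point of X + xi on St(d, r),
    computed from the polar factor of its thin SVD."""
    U, _, Vt = np.linalg.svd(X + xi, full_matrices=False)
    return U @ Vt
\end{verbatim}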
We tested the three algorithms on the PCA \eqref{problem_PCA} and kPCA \eqref{problem_kPCA} problems. For both problems, we measure the norm of the global Riemannian gradient. Additionally, we also measure the sum of principal angles \cite{knyazev2012principal} for kPCA. \footnote{For the loss $f$ in \eqref{problem_kPCA}, note that $f(X)=f(XQ)$ for any orthogonal matrix $Q\in\RR^{r\times r}$. As a result, the optimal solution of $f(X)$ only represents the eigen-space corresponding to the $r$ largest eigenvalues. Therefore we need the principal angles to measure the angles between the subspaces.}
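The sum of principal angles can be computed from the singular values of $X^\top X^*$, which are the cosines of the principal angles between the two column spaces; a short sketch is given below.
\begin{verbatim}
import numpy as np

def principal_angle_sum(X, X_star):
    """Sum of principal angles between span(X) and span(X_star),
    for X, X_star with orthonormal columns."""
    s = np.linalg.svd(X.T @ X_star, compute_uv=False)
    return float(np.sum(np.arccos(np.clip(s, -1.0, 1.0))))
\end{verbatim}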
\subsection{Comparison of the two consensus methods \eqref{karcher_mean} and \eqref{tangent_space_mean}}
We first compare the two consensus methods \eqref{karcher_mean} and \eqref{tangent_space_mean}. To this end, we randomly generate $x_t$ and $k=100$ points $x^{(i)}$ on the unit sphere $\mathcal{S}^{d-1}$ with different dimensions $d$. We then compare the distances $\frac{1}{k}\sum_i d^2(x_{t}, x^{(i)})$, $\frac{1}{k}\sum_i d^2(x_{t+1}, x^{(i)})$ and $d^2(x_t, x_{t+1})$, as well as the CPU time for computing them. Note that the smaller these distances are, the better. To calculate the Karcher mean, we run the Riemannian gradient descent method starting at $x_t$ until the norm of the Riemannian gradient is smaller than $\epsilon=10^{-6}$. The results are shown in Table \ref{table:consensus}.
From Table \ref{table:consensus} we see that the tangent space mean \eqref{tangent_space_mean} is indeed better than Karcher mean \eqref{karcher_mean} in terms of both quality and CPU time.
\begin{table}[t]
\begin{center}
\caption{Comparison of the two consensus methods \eqref{karcher_mean} and \eqref{tangent_space_mean}. Here $h(x):=\frac{1}{k}\sum_i d^2(x^{(i)},x)$, CPU time is in seconds and the experiments are repeated and averaged over 10 times.}\label{table:consensus}
\begin{small}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Dim $d$} & \multirow{2}{*}{$h(x_t)$} & \multicolumn{3}{c}{Karcher mean \eqref{karcher_mean}} & \multicolumn{3}{|c}{Tangent space mean \eqref{tangent_space_mean}} \\
\cline{3-8}
& & $d^2(x_{t+1}, x_t)$ & $h(x_{t+1})$ & Time & $d^2(x_{t+1}, x_t)$ & $h(x_{t+1})$ & Time \\
\hline
100 & 2.478 & 2.469 & 2.813 & 0.706 & 0.025 & 2.427 & 0.004 \\
\hline
200 & 2.472 & 2.484 & 2.804 & 0.641 & 0.025 & 2.422 & 0.004 \\
\hline
500 & 2.469 & 2.469 & 2.795 & 0.725 & 0.024 & 2.421 & 0.005 \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
\subsection{Experiments for PCA and kPCA on synthetic data}
In this section, we report the results of the three algorithms for solving PCA \eqref{problem_PCA} and kPCA \eqref{problem_kPCA} on synthetic data. We first generate the data $X_i\in\RR^{d\times p}$ whose entries are drawn from the standard normal distribution. We then set $A_i:=X_i X_i^\top$. Notice that under this experimental setting the data in different agents are homogeneous in distribution, which provides a mild environment for comparing the behavior of the proposed algorithms. We test highly heterogeneous real data later.
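A minimal sketch of this data generation step is given below; the function name and the random seed handling are illustrative assumptions.
\begin{verbatim}
import numpy as np

def synthetic_agents(n, d, total_points, seed=None):
    """Split standard Gaussian data evenly across n agents and return the
    local matrices A_i = X_i X_i^T used in the PCA/kPCA objectives."""
    rng = np.random.default_rng(seed)
    p = total_points // n                   # points per agent
    agents = []
    for _ in range(n):
        Xi = rng.standard_normal((d, p))    # local data X_i in R^{d x p}
        agents.append(Xi @ Xi.T)            # A_i = X_i X_i^T
    return agents
\end{verbatim}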
\paragraph{Experiments on PCA.} We first test the three algorithms on the standard PCA problem \eqref{problem_PCA}. We test our codes with different numbers of agents $n$ and set $k=n/10$ as the number of clients we pick in each round. We terminate the algorithms if the number of rounds of communication exceeds 600. We sample $10000$ data points in $\RR^{100}$ and partition them into $n$ agents, each of which contains an equal number of data points. We test \texttt{RFedSVRG} with one iteration for each local agent, i.e., $\tau_i=1$, and test \texttt{RFedAvg} and \texttt{RFedProx} with $\tau_i=5$ iterations in \eqref{temp6}. We use constant step sizes for all three algorithms, and take $\mu=n/10$ for each choice of $n$. The results are presented in Figure \ref{fig:pca_changing_nk_norm}, from which we see that only \texttt{RFedSVRG} can efficiently decrease $\|\grad f(x_t)\|$ to an acceptable level.
\begin{figure}[t!]
\begin{center}
\setcounter{subfigure}{0}
\subfigure[$n=500$]{\includegraphics[width=0.32\textwidth]{changing_nk/pca_grad_norm_500_50.pdf}}
\subfigure[$n=1000$]{\includegraphics[width=0.32\textwidth]{changing_nk/pca_grad_norm_1000_100.pdf}}
\subfigure[$n=2500$]{\includegraphics[width=0.32\textwidth]{changing_nk/pca_grad_norm_2500_250.pdf}}
\caption{Results for PCA \eqref{problem_PCA}. The y-axis denotes $\|\grad f(x_t)\|$. For each figure, the experiments are repeated and averaged over 10 times.}
\label{fig:pca_changing_nk_norm}
\end{center}
\end{figure}
\paragraph{Experiments on kPCA.} We now test the three algorithms on the kPCA problem \eqref{problem_kPCA}. In the first experiment we sample $10000$ data points in $\RR^{200}$ and partition them into $n$ agents, each of which contains an equal number of data points. We test our codes with different numbers of agents $n$, and again set $k=n/10$. Here we take $(d, r)=(200, 5)$. The results are given in Figure \ref{fig:kpca_changing_nk}, where we see that \texttt{RFedSVRG} can efficiently decrease $\|\grad f(x_t)\|$ and the principal angle in all tested cases.
In the second experiment we test the effect of the number of inner loops $\tau_i$.
We generate $10000$ standard Gaussian vectors. We set $(d,r) = (200,5)$, $k=10$ and $n=100$ so that $p=100$. We choose $\tau\in\{1, 10, 50, 100\}$ inner steps for all three algorithms. The results are presented in Figure \ref{fig:kpca_changing_tau}. From this figure we again observe the strong performance of \texttt{RFedSVRG}.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_50_5.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_100_10.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_500_50.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_1000_100.pdf}}
\setcounter{subfigure}{0}
\subfigure[$(n, k)=(50, 5)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_50_5.pdf}}
\subfigure[$(n, k)=(100, 10)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_100_10.pdf}}
\subfigure[$(n, k)=(500, 50)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_500_50.pdf}}
\subfigure[$(n, k)=(1000, 100)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_1000_100.pdf}}
\caption{Results for kPCA. The y-axis of the figures in the first row denotes $\|\grad f(x_t)\|$, and the y-axis of the figures in the second row denotes the principal angle between $x_t$ and $x^*$. The experiments are repeated and averaged over 10 times.}
\label{fig:kpca_changing_nk}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_1.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_10.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_50.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_100.pdf}}
\setcounter{subfigure}{0}
\subfigure[$\tau=1$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_1.pdf}}
\subfigure[$\tau=10$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_10.pdf}}
\subfigure[$\tau=50$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_50.pdf}}
\subfigure[$\tau=100$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_100.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} with different number of inner loops $\tau=[1, 10, 50, 100]$. The y-axis of the figures in the first row denotes $\|\grad f(x_t)\|$, and the one in the second row denotes the principal angle between $x_t$ and $x^*$. The experiments are repeated and averaged over 10 times.}
\label{fig:kpca_changing_tau}
\end{center}
\end{figure}
\subsection{Experiments for kPCA on real data}
We now show the numerical results of the three algorithms on real data. We focus here on the kPCA problem \eqref{problem_kPCA}. We test the three algorithms on three real datasets: the Iris dataset~\cite{forinaextendible}, the wine dataset~\cite{forinaextendible} and the MNIST hand-written digits dataset~\cite{lecun1998gradient}. For all three datasets, we compute the first $r$ principal directions and the true optimal loss value directly. We can thus compute the principal angles between the iterates and the ground truth. The experiments are repeated and averaged over 10 random initializations.
For the first two datasets, we randomly partition the data into $10$ agents and at each iteration we take $k=5$ agents. Figures \ref{fig:kpca_iris} and \ref{fig:kpca_wine} show that \texttt{RFedSVRG} is able to effectively decrease the norm of the Riemannian gradient and the principal angles, while the other two algorithms are not as efficient.
\begin{figure}[t!]
\begin{center}
\subfigure[Gradient norm]{\includegraphics[width=0.36\textwidth]{kpca_iris_grad_norm.pdf}}
\subfigure[Principal angle]{\includegraphics[width=0.36\textwidth]{kpca_iris_angle.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} on Iris dataset. The data is in $\RR^4$ ($d=4$) and we take $r=2$. The first figure is the norm of Riemannian gradient $\|\grad f(x_t)\|$ and the second is the principal angle between $x_t$ and the true solution $x^*$. }
\label{fig:kpca_iris}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\subfigure[Gradient norm]{\includegraphics[width=0.36\textwidth]{kpca_wine_grad_norm.pdf}}
\subfigure[Principal angle]{\includegraphics[width=0.36\textwidth]{kpca_wine_angle.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} with wine dataset. The data is in $\RR^{13}$ ($d=13$) and we take $r=5$. The first figure is the norm of Riemannian gradient $\|\grad f(x_t)\|$ and the second is the principal angle between $x_t$ and the true solution $x^*$.}
\label{fig:kpca_wine}
\end{center}
\end{figure}
For the MNIST hand-written digits dataset, the (training) dataset contains $60000$ hand-written images of size $28\times 28$, i.e., $d=784$. This is a relatively large dataset and we test the proposed algorithms with different numbers of clients. The results are shown in Figure \ref{fig:kpca_mnist}, where the efficiency of \texttt{RFedSVRG} is demonstrated again. The comparison of the two rows of Figure \ref{fig:kpca_mnist} shows that \texttt{RFedSVRG} retains its efficiency even with a larger number of clients $n$.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_grad_norm_100.pdf}}
\subfigure{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_angle_100.pdf}}
\setcounter{subfigure}{0}
\subfigure[Gradient norm]{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_grad_norm_200.pdf}}
\subfigure[Principal angle]{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_angle_200.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} with MNIST dataset. The data is in $\RR^{784}$ ($d=784$) and we take $r=5$. The first column is the norm of Riemannian gradient $\grad f(x_t)$ and the second is the principal angle between $x_t$ and the true solution $x^*$. The two rows corresponds to $n=100$ and $n=200$. We take $k=n/10$ and $\tau=5$ for all algorithms.}
\label{fig:kpca_mnist}
\end{center}
\end{figure}
\section{Conclusions}
In this paper, we studied federated optimization over Riemannian manifolds. We proposed a Riemannian federated SVRG algorithm and analyzed its convergence rate to an $\epsilon$-stationary point. To the best of our knowledge, this is the first federated algorithm over Riemannian manifolds with convergence guarantees. Numerical experiments on federated PCA and federated kPCA were conducted to demonstrate the efficiency of the proposed method. Developing algorithms with lower communication cost, better scalability and sparse solutions are important topics for future research.
\section{Introduction}\label{section_intro}
Federated learning (FL) has drawn lots of attentions recently due to its wide applications in modern machine learning. Canonical FL aims at solving the following finite-sum problem~\cite{konevcny2016federated,mcmahan2017communication,kairouz2021advances}:
\begin{equation}\label{finite_sum}
\min_{x\in\mathbb{R}^d} f(x):=\frac{1}{n}\sum_{i=1}^{n}f_i(x),
\end{equation}
where each of the $f_i$ (or the data associated with $f_i$) is stored in different client/agent that could have different physical locations and different hardware. This makes the mutual connection impossible~\cite{konevcny2016federated}. Therefore, there is a central server that can collect the information from different agents and output a consensus that minimizes the summation of the loss functions from all the clients. The aim of such a framework is to utilize the computation resources of different agents while still maintain the data privacy by not sharing data among all the local agents. Thus the communication is always between the central server and local servers. This setting is commonly observed in modern smart-phone-APP based machine learning applications~\cite{konevcny2016federated}. We emphasize that we always consider the heterogeneous data scenario where the functions $f_i$'s might be different and have different optimal solutions. This problem is inherently hard to solve because each local minima will empirically diverge the update from the global optimum~\cite{li2020federated,mitra2021linear}.
In this paper, we consider the following FL problem over a Riemannian manifold $\M$:
\begin{equation}\label{problem_finite_sum}
\min_{x\in\M} f(x):=\frac{1}{n}\sum_{i=1}^{n}f_i(x)
\end{equation}
where $f_i:\M\rightarrow\RR$ are smooth but not necessarily (geodesically) convex. It is noted that most FL algorithms are designed for the unconstrained setting and convex constraint setting \cite{konevcny2016federated,mcmahan2017communication, karimireddy2020scaffold, li2019convergence, malinovskiy2020local, charles2021convergence, pathak2020fedsplit, mitra2021linear}, and FL problems with nonconvex constraints such as \eqref{problem_finite_sum} have not been considered. The main difficulty for solving \eqref{problem_finite_sum} lies in aggregating points over a nonconvex set, which may lead to the situation where the averaging point is outside of the constraint set.
One motivating application of \eqref{problem_finite_sum} is the federated kPCA problem
\begin{equation}\label{problem_kPCA}
\min_{X\in\St(d, r)} f(X):=\frac{1}{n}\sum_{i=1}^{n}f_i(X),\ \mbox{ where } f_i(X)=-\frac{1}{2}\tr(X^\top A_i X),
\end{equation}
where $\St(d, r)=\{X\in\RR^{d\times r}| X^\top X=I_r\}$ denotes the Stiefel manifold, and $A_i$ is the covariance matrix of the data stored in the $i$-th local agent. When $r=1$, \eqref{problem_kPCA} reduces to classical PCA
\begin{equation}\label{problem_PCA}
\min_{\|x\|_2=1} f(x):=\frac{1}{n}\sum_{i=1}^{n}f_i(x),\ \mbox{ where } f_i(x)=-\frac{1}{2}x^\top A_i x.
\end{equation}
Existing FL algorithms are not applicable to \eqref{problem_kPCA} and \eqref{problem_PCA} due to the difficulty on aggregating points on nonconvex set.
\subsection{Main Contributions}
We focus on designing efficient federated algorithms for solving \eqref{problem_finite_sum}. Our main contributions are:
\begin{enumerate}[leftmargin=*]
\item We propose a Riemannian federated SVRG algorithm (\texttt{RFedSVRG}) for solving \eqref{problem_finite_sum}. We prove that the convergence rate of our RFedSVRG algorithm is $\mathcal{O}(1/\epsilon^2)$ for obtaining an $\epsilon$-stationary point. This result matches that of its Euclidean counterparts~\cite{mitra2021linear}. To the best of our knowledge, this is the first algorithm for solving FL problems over Riemannian manifolds with convergence guarantees.
\item The main novelty of our \texttt{RFedSVRG} algorithm is a consensus step on the tangent space of the manifold. We compare this new approach with the widely used Karcher mean approach. We show that our method achieves certain "regularization" property and performs very well in practice.
\item We conduct extensive numerical experiments on our method for solving the PCA \eqref{problem_PCA} and kPCA \eqref{problem_kPCA} problems with both synthetic and real data. The numerical results demonstrate that our \texttt{RFedSVRG} algorithm significantly outperforms the Riemannian counterparts of two widely used FL algorithms: \texttt{FedAvg} \cite{mcmahan2017communication} and \texttt{FedProx} \cite{li2020federated}.
\end{enumerate}
\subsection{Related Work}
\textbf{Federated optimization.}
The most natural idea for FL is the \texttt{FedAvg} algorithm \cite{mcmahan2017communication}, which averages local gradient descent updates and yields a good empirical convergence. However in the data heterogeneous situation, \texttt{FedAvg} suffers from the client-drift effect that each local client will drift the solution towards the minimum of their own local loss function \cite{karimireddy2020scaffold, li2019convergence, malinovskiy2020local, charles2021convergence, pathak2020fedsplit, mitra2021linear}. Many ideas were studied to resolve this issue. For example, \cite{li2020federated} proposed the \texttt{FedProx} algorithm, which regularizes each of the local gradient descent update to ensure that the local iterates are not far from the previous consensus point. The \texttt{FedSplit}~\cite{pathak2020fedsplit} was proposed later to further mitigate the client-drift effect and convergence results were obtained for convex problems.
\texttt{FedNova}~\cite{wang2020tackling} was also proposed to improve the performance of \texttt{FedAvg}, however it still suffers from a fundamental speed-accuracy conflict under objective heterogeneity~\cite{mitra2021linear}. Variance reduction techniques were also incorporated to FL leading to two new algorithms: federated SVRG (\texttt{FSVRG}) \cite{konevcny2016federated} and \texttt{FedLin} \cite{mitra2021linear}. These two algorithms require transmitting the full gradient from the central server to each local client for local gradient updates, therefore require more communication between clients and the central server. Nevertheless, \texttt{FedLin} achieves the theoretical lower bound for strongly convex objective functions~\cite{mitra2021linear} with an acceptable amount of increase in the communication cost.
\textbf{Decentralized optimization on manifolds.} Decentralized distributed optimization on manifold has also drawn attentions in recent years~\cite{chen2021decentralized, shah2017distributed,alimisis2021distributed}. Under this setting, each local agent solves a local problem and then the central server takes the consensus step. The consensus step is usually done by calculating the Karcher mean on the manifold~\cite{tron2012riemannian,shah2017distributed}, or calculating the minimizer of the sum of the square of the Euclidean distances in the embedded submanifold case~\cite{chen2021decentralized}. Such consensus steps usually require solving an additional problem inexactly with no exact convergence rate guarantee~\cite{tron2012riemannian, chen2021local}.
It is worth mentioning that the PCA problem under federated learning setting has been considered in the literature \cite{grammenos2020federated}. The proposed method in~\cite{grammenos2020federated} relies on the SVD of data matrices and a subspace merging technique, which is very different from our method. The aim of the algorithm in~\cite{grammenos2020federated} is to achieve $(\epsilon,\delta)$-differential privacy. In contrast, we mainly consider the convergence rate of our method. Therefore our work is totally different from \cite{grammenos2020federated}.
\section{Preliminaries on Riemannian Optimization}
In this part, we briefly review the basic tools we use for optimization on Riemannian manifolds~\cite{absil2009optimization,lee2006riemannian,Tu2011manifolds,boumal2022intromanifolds}. Due to the limit of space, more detailed discussions are given in supplementary material \ref{appendix_manifold}. Suppose $\M$ is an $m$-dimensional Riemannian manifold with Riemannian metric $g:T\M\times T\M\rightarrow\RR$. We first review the notion of the Riemannian gradients.
\begin{definition}[Riemannian gradients]
For a Riemannian manifold with Riemannian metric $g$, the Riemannian gradient of $f\in C^\infty(\M)$ at $x$ is the unique tangent vector $\grad f(x)\in T_x\M$ such that $df(\xi) = g(\grad f(x), \xi),\ \forall \xi\in T_x\M$, where $d f$ is the differential of $f$ defined by $d f(\xi):=\xi(f)$.
\end{definition}
For the convergence analysis, we also need the notions of the exponential mapping and of parallel transport. We first review the definition of the exponential mapping.
\begin{definition}[Exponential mapping]
Given $x\in\M$ and $\xi\in T_x\M$, the exponential mapping $\Exp_x$ is defined as a mapping from $T_x\M$ to $\M$ such that $\Exp_x(\xi):= \gamma(1)$, where $\gamma$ is the geodesic with $\gamma(0)=x$ and $\Dot{\gamma}(0)=\xi$. A natural corollary is $\Exp_x(t\xi) = \gamma(t)$ for $t\in[0, 1]$. Another useful fact is $d(x,\Exp_x(\xi))=\|\xi\|_x$, since the geodesic $\gamma$ has constant speed $\|\gamma'(0)\|_x=\|\xi\|_x$.
\end{definition}
Throughout this paper, we always assume that $\M$ is complete, so that $\Exp_x$ is defined for every $\xi\in T_x\M$. For $x,y\in\M$, the inverse of the exponential mapping, $\Exp_{x}^{-1}(y)\in T_x\M$, is called the logarithm mapping, and we have $d(x,y)=\|\Exp_{x}^{-1}(y)\|_x$, which will be a useful fact in the convergence analysis. We now present the definition of parallel transport.
\begin{definition}[Parallel transport]
Given a Riemannian manifold $(\M, g)$ and two points $x,y\in\M$, the parallel transport $P_{x\rightarrow y}:T_x\M\rightarrow T_y\M$\footnote{Notice that the parallel transport depends on the curve connecting $x$ and $y$; this is not an issue for a complete Riemannian manifold because we always take the minimizing geodesic that connects $x$ and $y$.} is a linear operator that preserves the inner product: $\forall \xi,\zeta\in T_x\M$, we have $\langle P_{x\rightarrow y}\xi, P_{x\rightarrow y}\zeta\rangle_y = \langle\xi, \zeta\rangle_x$.
\end{definition}
Parallel transport is useful since the Lipschitz condition for the Riemannian gradient requires moving gradients that live in different tangent spaces ``in parallel'' to a common tangent space.
We now present the definition of Lipschitz smoothness and convexity on Riemannian manifolds, which will be utilized in our convergence analysis.
\begin{definition}[$L$-smoothness on manifolds]\label{assumption_manifold_smooth}
$f$ is called $L$-smooth on the manifold $\M$ if there exists $L\geq0$ such that
\begin{equation}\label{eq:lgsmoothness1}
\|\grad f(y) - P_{x\rightarrow y}\grad f(x)\|\leq L\, d(x,y),\ \forall x,y\in\M.
\end{equation}
For a complete Riemannian manifold, $L$-smoothness implies~\cite{zhang2016first}:
\begin{equation}\label{eq:lgsmoothness2}
f(y) \leq f(x)+\left\langle \grad f(x), \Exp_{x}^{-1}(y)\right\rangle_{x}+\frac{L}{2} d^{2}(x, y),\ \forall x,y\in\M.
\end{equation}
\end{definition}
The definition of geodesic convexity is given below (see, e.g., \cite{zhang2016first}).
\begin{definition}[Geodesically convex]\label{assumption_geodesic_convex}
A function $f\in C^1(\M)$ is geodesically convex if for all $x,y\in\M$ there exists a geodesic $\gamma$ such that $\gamma(0)=x$, $\gamma(1)=y$, and
$$
f(\gamma(t))\leq (1-t)f(x)+t f(y),\ \forall t\in[0,1].
$$
Or equivalently,
$$
f(y)\geq f(x) + \langle \grad f(x), \Exp_{x}^{-1}(y) \rangle_x.
$$
\end{definition}
\section{The RFedSVRG Algorithm}
The most challenging task for FL on Riemannian manifolds is the consensus step. Suppose that at round $t$ the central server receives $x^{(i)}$, $i\in S_t\subset[n]$, from the participating local clients; the question is how the central server aggregates these points into a single consensus point. In Euclidean space, the most straightforward way is to take the average $\frac{1}{k}\sum_{i\in S_t}x^{(i)}$ with $k=|S_t|$. However, this approach does not carry over to the Riemannian setting due to the loss of linearity: the arithmetic average of points on the manifold can lie outside of the manifold. A natural choice for the consensus step on the manifold is to take the Karcher mean of the points \cite{tron2012riemannian}:
\begin{equation}\label{karcher_mean}
x_{t+1}\leftarrow\argmin_x \frac{1}{k}\sum_{i\in S_t}d^2(x, x^{(i)}),
\end{equation}
where $x_{t+1}$ is the next iterate on the central server. This is a natural generalization of the arithmetic average because $d^2(x,y)=\|x-y\|^2$ in Euclidean space. However, solving \eqref{karcher_mean} can be time-consuming in practice.
We propose the following tangent space consensus step:
\begin{equation}\label{tangent_space_mean}
x_{t+1}\leftarrow \Exp_{x_{t}}\left(\frac{1}{k}\sum_{i\in S_t}\Exp_{x_{t}}^{-1}(x^{(i)})\right),
\end{equation}
where we map each of the points $x^{(i)}$ back to the tangent space $T_{x_t}\M$ and then take their average in the tangent space. The consensus step \eqref{tangent_space_mean} has several advantages over the Karcher mean \eqref{karcher_mean}. First, \eqref{tangent_space_mean} is in closed form and easy to compute. Second, \eqref{tangent_space_mean} still coincides with the arithmetic mean when the manifold reduces to Euclidean space. Third, the tangent space mean \eqref{tangent_space_mean} can easily be extended to the following moving average mean:
\[
\Exp_{x_{t}}\left(\frac{\beta}{k}\sum_{i\in S_t}\Exp_{x_{t}}^{-1}(x^{(i)})\right),
\]
which corresponds to $(1-\beta)x_t+\frac{\beta}{k}\sum_{i\in S_t}x^{(i)}$ in Euclidean space, whereas the Karcher mean cannot be easily extended in this way. Last, \eqref{tangent_space_mean} has the following ``regularization'' property: the distance between two consecutive consensus points can be controlled, which the Karcher mean method \eqref{karcher_mean} does not guarantee.
\begin{lemma}\label{lemma_regularization_tangent_mean}
For the update defined in \eqref{tangent_space_mean}, it holds that
$$
d(x_{t+1}, x_t)\leq \frac{1}{k}\sum_{i\in S_t} d(x^{(i)}, x_t).
$$
\end{lemma}
To further illustrate this ``regularization'' property of the tangent space mean \eqref{tangent_space_mean}, we consider an (extreme) example on the unit sphere $\mathcal{S}^2$ (see Figure \ref{fig:consensus}). Here we take $x_t$ at the north pole and two points from local clients as $x^{(1)}$ and $x^{(2)}$, with $\xi^{(i)}=\Exp_{x_t}^{-1}(x^{(i)})\in T_{x_t}\M$. The tangent space mean \eqref{tangent_space_mean} then returns the point $x_t$ itself, whereas the Karcher mean could yield any point on the vertical great circle, depending on the starting point used to solve the optimization problem \eqref{karcher_mean}.
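To make the consensus step concrete, the following is a minimal numerical sketch (ours, not part of a released implementation; all variable names are illustrative) of the tangent space mean \eqref{tangent_space_mean} on the unit sphere, using the closed-form exponential and logarithm maps of $\mathcal{S}^{d-1}$; setting $\beta<1$ gives the moving average variant.
\begin{verbatim}
import numpy as np

def sphere_exp(x, v):
    # Exponential map on the unit sphere: follow the geodesic from x with velocity v.
    t = np.linalg.norm(v)
    return x if t < 1e-12 else np.cos(t) * x + np.sin(t) * v / t

def sphere_log(x, y):
    # Logarithm map: the tangent vector at x pointing towards y, with norm d(x, y).
    p = y - np.dot(x, y) * x                 # project y onto the tangent space at x
    n = np.linalg.norm(p)
    t = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return np.zeros_like(x) if n < 1e-12 else t * p / n

def tangent_space_mean(x_t, points, beta=1.0):
    # Consensus step: average the log-mapped points and map the result back with exp.
    xi = np.mean([sphere_log(x_t, y) for y in points], axis=0)
    return sphere_exp(x_t, beta * xi)

# Extreme example from the text: x_t at the north pole and two equatorial points
# with opposite tangent directions; the tangent space mean returns x_t itself.
x_t = np.array([0.0, 0.0, 1.0])
print(tangent_space_mean(x_t, [np.array([1.0, 0.0, 0.0]),
                               np.array([-1.0, 0.0, 0.0])]))   # -> [0. 0. 1.]
\end{verbatim}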
\begin{figure}
\centering
\begin{tikzpicture}[
point/.style = {draw, circle, fill=black, inner sep=0.7pt}, scale = 0.8
]
\def2cm{2cm}
\coordinate (O) at (0,0);
\coordinate (N) at (0,2cm);
\filldraw[ball color=white] (O) circle [radius=2cm];
\draw[dashed]
(0, 2cm) arc [start angle=90,end angle=-90,x radius=5mm,y radius=2cm];
\draw
(0, 2cm) arc [start angle=90,end angle=270,x radius=5mm,y radius=2cm];
\begin{scope}[xslant=0.5,yshift=2cm,xshift=-2]
\filldraw[fill=gray!10,opacity=0.3]
(-4.5,1) -- (2.5,1) -- (3,-1) -- (-4,-1) -- cycle;
\node at (2,0.6) {$T_{x_t}\mathcal{S}^2$};
\end{scope}
\draw[dashed]
(N) node[above] {$x_t$} -- (O) node[below] {$O$};
\node[point] at (N) {};
\draw[line width=1pt,blue,-stealth](0,2cm)--(pi,2cm) node[anchor=north east]{$\xi^{(1)}$};
\draw[line width=1pt,red,-stealth](0,2cm)--(-pi,2cm) node[anchor=south west]{$\xi^{(2)}$};
\node[point] at (2, 0) {};
\node[right] at (2, 0) {$x^{(1)}$};
\node[point] at (-2, 0) {};
\node[left] at (-2, 0) {$x^{(2)}$};
\end{tikzpicture}
\caption{Comparison of two consensus methods on $\mathcal{S}^2$}
\label{fig:consensus}
\end{figure}
Our \texttt{RFedSVRG} algorithm is presented in Algorithm \ref{manifold_fedsvrg}, which is a non-trivial manifold extension of the FSVRG algorithm \cite{konevcny2016federated}.
For \texttt{RFedSVRG}, the local gradient update becomes
\begin{equation}\label{local_update_fedsvrg}
x_{\ell+1}^{(i)}\leftarrow \Exp_{x_{\ell}^{(i)}}\left[-\eta^{(i)} \left(\grad f_i(x_{\ell}^{(i)}) - P_{x_t\rightarrow x_{\ell}^{(i)}}(\grad f_i(x_t) - \grad f(x_t))\right)\right],
\end{equation}
which matches the existing manifold SVRG work \cite{zhang2016fast}. The parallel transport $P_{x_t\rightarrow x_{\ell}^{(i)}}$ is necessary because we need to ``transport'' all the vectors to the same tangent space before adding or subtracting them. The algorithm utilizes the gradient information at the previous iterate, $\grad f(x_t)$, and thus avoids the ``client-drift'' effect and converges to stationary points of the global objective. This is confirmed by both our theory and our numerical experiments.
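For concreteness, the following sketch (ours; the gradient oracles \texttt{grad\_fi} and \texttt{grad\_f\_xt} are placeholders for the local and global Riemannian gradients) spells out one local update \eqref{local_update_fedsvrg} on the unit sphere, where the parallel transport along the minimizing geodesic also has a closed form; \texttt{sphere\_exp} and \texttt{sphere\_log} are as in the consensus sketch above.
\begin{verbatim}
import numpy as np

# sphere_exp and sphere_log as defined in the consensus sketch above.

def transport(x, y, v):
    # Parallel transport of v from T_x to T_y along the minimizing geodesic on
    # the sphere: P_{x->y}(v) = v - <Log_x(y), v>/d(x,y)^2 * (Log_x(y) + Log_y(x)).
    u, w = sphere_log(x, y), sphere_log(y, x)
    d2 = np.dot(u, u)
    return v if d2 < 1e-24 else v - (np.dot(u, v) / d2) * (u + w)

def local_svrg_step(x_local, x_t, grad_fi, grad_f_xt, eta):
    # One variance-reduced local step: the local gradient at x_local is corrected
    # by the transported difference between the local and global gradients at x_t.
    correction = transport(x_t, x_local, grad_fi(x_t) - grad_f_xt)
    return sphere_exp(x_local, -eta * (grad_fi(x_local) - correction))
\end{verbatim}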
\begin{algorithm}[ht]
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetAlgoLined
\Input{$n$, $k$, $T$, $\{\eta^{(i)}\}$, $\{\tau_i\}$}
\Output{\textbf{Option 1:} $\Tilde{x}=x_T$; or \textbf{Option 2:} $\Tilde{x}$ is uniformly sampled from $\{x_1,...,x_T\}$}
\For{$t=0,...,T-1$}{
Uniformly sample $S_t\subset [n]$ with $|S_t|=k$\;
\For{each agent $i$ in $S_t$}{
Receive $x_0^{(i)}=x_t$ from the central server\;
\For{$\ell=0,...,\tau_i-1$}{
Take the local gradient step \eqref{local_update_fedsvrg}.
}
Send $\hat{x}^{(i)}$ (obtained by one of the following options) to the central server
\begin{itemize}
\item {\textbf{Option 1:} $\hat{x}^{(i)}=x_{\tau_i}^{(i)}$;}
\item {\textbf{Option 2:}} $\hat{x}^{(i)}$ is uniformly sampled from $\{x_{1}^{(i)},...,x_{\tau_i}^{(i)}\}$\;
\end{itemize}
}
The central server aggregates the points by the tangent space mean \eqref{tangent_space_mean}\;
}
\caption{Riemannian FedSVRG Algorithm (RFedSVRG)}\label{manifold_fedsvrg}
\end{algorithm}
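As a usage illustration, the outer loop of Algorithm \ref{manifold_fedsvrg} with \textbf{Option 1} in both the local and the output steps can be sketched as follows (ours; it reuses \texttt{tangent\_space\_mean} and \texttt{local\_svrg\_step} from the sketches above, and \texttt{grad\_f} / \texttt{grad\_fs[i]} denote the global and local Riemannian gradient oracles, where in practice $\grad f(x_t)$ is aggregated from the clients).
\begin{verbatim}
import numpy as np

def rfedsvrg(x0, grad_f, grad_fs, T, k, tau, eta, rng=np.random.default_rng(0)):
    # grad_f(x): Riemannian gradient of the global objective at x;
    # grad_fs[i](x): Riemannian gradient of the i-th local objective at x.
    x_t, n = x0, len(grad_fs)
    for _ in range(T):
        g_t = grad_f(x_t)                           # anchor gradient for this round
        S_t = rng.choice(n, size=k, replace=False)  # sample k participating clients
        finals = []
        for i in S_t:
            x_loc = x_t
            for _ in range(tau):                    # tau local variance-reduced steps
                x_loc = local_svrg_step(x_loc, x_t, grad_fs[i], g_t, eta)
            finals.append(x_loc)
        x_t = tangent_space_mean(x_t, finals)       # consensus on the tangent space
    return x_t
\end{verbatim}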
\section{Convergence analysis}
In this section we analyze the convergence behaviour of the \texttt{RFedSVRG} algorithm (Algorithm \ref{manifold_fedsvrg}). Before we proceed to the convergence results, we briefly review the necessary assumptions, which are standard assumptions for optimization on manifolds~\cite{zhang2016first,boumal2018global}.
\begin{assumption}[Smoothness]\label{assumption_smoothness}
Suppose each $f_i$ is $L_i$-smooth in the sense of Definition \ref{assumption_manifold_smooth}. This implies that $f$ is $L$-smooth with $L=\sum_{i=1}^{n}L_i$.
\end{assumption}
Now we give the convergence rate results for Algorithm \ref{manifold_fedsvrg}. Specifically, Theorem \ref{thm_nonconvex1} gives the convergence rate of Algorithm \ref{manifold_fedsvrg} with $\tau_i=1$, Theorem \ref{thm_nonconvex1.1} covers the case $\tau_i>1$, and Theorem \ref{thm_geodesic_convex} treats the case where the objective function is geodesically convex.
\begin{theorem}[Nonconvex, Algorithm \ref{manifold_fedsvrg} with $\tau_i=1$]\label{thm_nonconvex1}
Suppose the problem \eqref{problem_finite_sum} satisfies Assumption \ref{assumption_smoothness}. If we run Algorithm \ref{manifold_fedsvrg} with \textbf{Option 1} in Line 8, $\eta^{(i)}\leq \frac{1}{L}$, and $\tau_i=1$ (i.e., only one local gradient step per agent), then the output of Algorithm \ref{manifold_fedsvrg} under \textbf{Option 1} satisfies:
\begin{equation}\label{thm-ineq}
\min_{t=0,...,T}\|\grad f(x_t)\|^2\leq \mathcal{O}\left(\frac{L (f(x_0)-f(x^*))}{T}\right).
\end{equation}
\end{theorem}
\begin{remark}\label{remark_multiple_innersteps}
Our proof of Theorem \ref{thm_nonconvex1} relies heavily on the choice of $\tau_i=1$ and the consensus step \eqref{tangent_space_mean}.
When $\tau_i>1$, we need to introduce multiple exponential mappings at multiple points for each iteration, which makes the convergence analysis much more challenging due to the loss of linearity. Moreover, the aggregation step makes the situation even worse. However, we are able to show the convergence of Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$ when $k=1$. Our numerical experiments show the effectiveness of the \texttt{RFedSVRG} algorithm with both $\tau_i=1$ and $\tau_i>1$.
\end{remark}
To prove the convergence of Algorithm \ref{manifold_fedsvrg} with $\tau_i> 1$, we also need the following regularization assumption over the manifold $\M$~\cite{zhang2016fast}.
\begin{assumption}[Regularization over manifold]\label{assumption_regu_manifold}
The manifold is complete and there exists a compact set $\mathcal{D}\subset \M$ (with diameter bounded by $D$) such that all the iterates of Algorithm \ref{manifold_fedsvrg} and the optimal points are contained in $\mathcal{D}$. The sectional curvature is bounded in $[\kappa_{\min}, \kappa_{\max}]$. Moreover, we define the following key geometrical constant that captures the impact of the manifold curvature:
\begin{equation}\label{zeta_eq}
\zeta= \begin{cases}\frac{\sqrt{\left|\kappa_{\min }\right|} D}{\tanh \left(\sqrt{\left|\kappa_{\min }\right|} D\right)}, & \text { if } \kappa_{\min }<0 \\ 1, & \text { if } \kappa_{\min } \geq 0.\end{cases}
\end{equation}
\end{assumption}
Notice that this assumption holds when the manifold is a sphere or a Stiefel manifold (since they are compact). Now we are ready to give the convergence rate result for Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$ and $k=1$; the proof is inspired by~\cite{zhang2016fast}.
\begin{theorem}[Nonconvex, Algorithm \ref{manifold_fedsvrg} with $\tau_i>1$ and $k=1$]\label{thm_nonconvex1.1}
Suppose the problem \eqref{problem_finite_sum} satisfies Assumptions \ref{assumption_smoothness} and \ref{assumption_regu_manifold}. If we run Algorithm \ref{manifold_fedsvrg} with \textbf{Option 2} in Line 8, $k=1$, $\tau_i=\tau>1$, and $\eta^{(i)}=\eta\leq \mathcal{O}(\frac{1}{n L \zeta^2})$, then the output of Algorithm \ref{manifold_fedsvrg} under \textbf{Option 2} satisfies:
\[
\E\|\grad f(\Tilde{x})\|^2\leq \mathcal{O}\left(\frac{\rho (f(x_0)-f(x^*))}{\tau T}\right),
\]
where $\rho$ is an absolute constant specified in the proof and the expectation is taken with respect to the random index $i$, as well as the randomness introduced by the \textbf{Option 2}.
\end{theorem}
Finally, we have the convergence result when the objective function of \eqref{problem_finite_sum} is geodesically convex.
\begin{theorem}[Geodesic convex]\label{thm_geodesic_convex}
Suppose the problem \eqref{problem_finite_sum} satisfies Assumptions \ref{assumption_smoothness} and \ref{assumption_regu_manifold}, and that the functions $f_i$ are geodesically convex (see Definition \ref{assumption_geodesic_convex}) in $\mathcal{D}$ (as in Assumption \ref{assumption_regu_manifold}). If we run Algorithm \ref{manifold_fedsvrg} with \textbf{Option 1} in Line 8, $\tau_i=1$, $S_t=[n]$ (full parallel gradient), and $\eta=\eta^{(1)}=\cdots=\eta^{(n)}\leq \frac{1}{2 L}$, then the output of Algorithm \ref{manifold_fedsvrg} under \textbf{Option 1} satisfies:
\begin{equation}
f(x_T) - f^*\leq \mathcal{O}\left(\frac{L d^2(x_0,x^*)}{T}\right).
\end{equation}
\end{theorem}
\section{Numerical experiments}
We now show the performance of \texttt{RFedSVRG} and compare it with two natural ideas for solving \eqref{finite_sum}: Riemannian FedAvg (\texttt{RFedAvg}) and Riemannian FedProx (\texttt{RFedProx}), which are natural extensions of FedAvg \cite{mcmahan2017communication} and FedProx \cite{li2020federated} to the Riemannian setting. The \texttt{RFedAvg} and \texttt{RFedProx} algorithms are described in Algorithms \ref{manifold_fedavg} and \ref{manifold_fedprox} in the supplementary material. We conducted our experiments on a desktop with an Intel Core 9600K CPU, 32GB RAM, and an NVIDIA GeForce RTX 2070 GPU. For the operations on Riemannian manifolds we used the implementations in the \texttt{Manopt} and \texttt{PyManopt} packages~\cite{manopt,pymanopt}. Since the logarithm mapping (the inverse of the exponential mapping) on the Stiefel manifold is not easy to compute \cite{zimmermann2021computing}, we adopted the projection-like retraction~\cite{absil2012projection} and its inverse~\cite{kaneko2012empirical} to approximate the exponential and logarithm mappings, respectively.
We tested the three algorithms on the PCA \eqref{problem_PCA} and kPCA \eqref{problem_kPCA} problems. For both problems, we measure the norm of the global Riemannian gradient. For kPCA we additionally measure the sum of principal angles \cite{knyazev2012principal}.\footnote{For the loss $f$ in \eqref{problem_kPCA}, note that $f(X)=f(XQ)$ for any orthogonal matrix $Q\in\RR^{r\times r}$. As a result, a minimizer of $f$ only determines the eigenspace corresponding to the $r$ largest eigenvalues, so we use principal angles to measure the angle between subspaces.}
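The principal-angle metric can be computed from the singular values of $X_1^\top X_2$; a minimal sketch (ours, with illustrative names) is given below.
\begin{verbatim}
import numpy as np

def principal_angle_sum(X1, X2):
    # X1, X2: (d, r) matrices with orthonormal columns (points on the Stiefel
    # manifold). The cosines of the principal angles are the singular values
    # of X1^T X2.
    s = np.linalg.svd(X1.T @ X2, compute_uv=False)
    return float(np.sum(np.arccos(np.clip(s, -1.0, 1.0))))
\end{verbatim}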
\subsection{Comparison of the two consensus methods \eqref{karcher_mean} and \eqref{tangent_space_mean}}
We first compare the two consensus methods \eqref{karcher_mean} and \eqref{tangent_space_mean}. To this end, we randomly generate $x_t$ and $k=100$ points $x^{(i)}$ on the unit sphere $\mathcal{S}^{d-1}$ for different dimensions $d$. We then compare the distances $\frac{1}{k}\sum_i d^2(x_{t}, x^{(i)})$, $\frac{1}{k}\sum_i d^2(x_{t+1}, x^{(i)})$ and $d^2(x_t, x_{t+1})$ (the smaller these distances are, the better), as well as the CPU time needed to compute the consensus point. To calculate the Karcher mean, we run the Riemannian gradient descent method starting at $x_t$ until the norm of the Riemannian gradient is smaller than $\epsilon=10^{-6}$. The results are shown in Table \ref{table:consensus}.
From Table \ref{table:consensus} we see that the tangent space mean \eqref{tangent_space_mean} is indeed better than the Karcher mean \eqref{karcher_mean} in terms of both quality and CPU time.
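For reference, the Karcher mean baseline can be computed by Riemannian gradient descent on $h(x)=\frac{1}{k}\sum_i d^2(x, x^{(i)})$; a minimal sketch on the sphere (ours; the step size is an illustrative choice, and \texttt{sphere\_exp}/\texttt{sphere\_log} are as in the earlier consensus sketch) is:
\begin{verbatim}
import numpy as np

# sphere_exp and sphere_log as defined in the earlier consensus sketch.

def karcher_mean(points, x0, step=0.5, tol=1e-6, max_iter=1000):
    # Riemannian gradient descent on h(x) = (1/k) * sum_i d^2(x, x_i), whose
    # Riemannian gradient is grad h(x) = -(2/k) * sum_i Log_x(x_i).  We stop
    # when the gradient norm drops below tol, as in the comparison above.
    x = x0
    for _ in range(max_iter):
        grad = -2.0 * np.mean([sphere_log(x, y) for y in points], axis=0)
        if np.linalg.norm(grad) < tol:
            break
        x = sphere_exp(x, -step * grad)
    return x
\end{verbatim}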
\begin{table}[t]
\begin{center}
\caption{Comparison of the two consensus methods \eqref{karcher_mean} and \eqref{tangent_space_mean}. Here $h(x):=\frac{1}{k}\sum_i d^2(x^{(i)},x)$, CPU time is in seconds and the experiments are repeated and averaged over 10 times.}\label{table:consensus}
\begin{small}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
\multirow{2}{*}{Dim $d$} & \multirow{2}{*}{$h(x_t)$} & \multicolumn{3}{c}{Karcher mean \eqref{karcher_mean}} & \multicolumn{3}{|c}{Tangent space mean \eqref{tangent_space_mean}} \\
\cline{3-8}
& & $d^2(x_{t+1}, x_t)$ & $h(x_{t+1})$ & Time & $d^2(x_{t+1}, x_t)$ & $h(x_{t+1})$ & Time \\
\hline
100 & 2.478 & 2.469 & 2.813 & 0.706 & 0.025 & 2.427 & 0.004 \\
\hline
200 & 2.472 & 2.484 & 2.804 & 0.641 & 0.025 & 2.422 & 0.004 \\
\hline
500 & 2.469 & 2.469 & 2.795 & 0.725 & 0.024 & 2.421 & 0.005 \\
\hline
\end{tabular}
\end{small}
\end{center}
\end{table}
\subsection{Experiments for PCA and kPCA on synthetic data}
In this section, we report the results of the three algorithms for solving PCA \eqref{problem_PCA} and kPCA \eqref{problem_kPCA} on synthetic data. We first generate data matrices $X_i\in\RR^{d\times p}$ whose entries are drawn from the standard normal distribution, and set $A_i:=X_i X_i^\top$. Notice that under this setting the data held by different agents are homogeneous in distribution, which provides a mild environment for comparing the behavior of the algorithms; highly heterogeneous real data are tested later.
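To illustrate how the local gradient oracle in these experiments is formed, the following sketch (ours) generates the synthetic data and evaluates the Riemannian gradient of a local loss, assuming the local objective takes the usual Rayleigh-quotient form $f_i(x)=-x^\top A_i x$ on the unit sphere (the exact normalization used in \eqref{problem_PCA} may differ), with illustrative problem sizes:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 100, 100, 100                               # illustrative sizes
X = [rng.standard_normal((d, p)) for _ in range(n)]   # local data matrices X_i
A = [Xi @ Xi.T for Xi in X]                           # local matrices A_i = X_i X_i^T

def local_grad(x, Ai):
    # Riemannian gradient on the sphere of f_i(x) = -x^T A_i x: the Euclidean
    # gradient -2 A_i x is projected onto the tangent space at x.
    egrad = -2.0 * Ai @ x
    return egrad - np.dot(x, egrad) * x
\end{verbatim}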
\paragraph{Experiments on PCA.} We first test the three algorithms on the standard PCA problem \eqref{problem_PCA}. We run our codes with different numbers of agents $n$ and set $k=n/10$ as the number of clients selected at each round. We terminate the algorithms when the number of communication rounds exceeds 600. We sample $10000$ data points in $\RR^{100}$ and partition them into $n$ agents, each containing an equal number of data points. We run \texttt{RFedSVRG} with one iteration per local agent, i.e., $\tau_i=1$, and run \texttt{RFedAvg} and \texttt{RFedProx} with $\tau_i=5$ iterations in \eqref{temp6}. We use constant step sizes for all three algorithms and take $\mu=n/10$ for each choice of $n$. The results are presented in Figure \ref{fig:pca_changing_nk_norm}, from which we see that only \texttt{RFedSVRG} can efficiently decrease $\|\grad f(x_t)\|$ to an acceptable level.
\begin{figure}[t!]
\begin{center}
\setcounter{subfigure}{0}
\subfigure[$n=500$]{\includegraphics[width=0.32\textwidth]{changing_nk/pca_grad_norm_500_50.pdf}}
\subfigure[$n=1000$]{\includegraphics[width=0.32\textwidth]{changing_nk/pca_grad_norm_1000_100.pdf}}
\subfigure[$n=2500$]{\includegraphics[width=0.32\textwidth]{changing_nk/pca_grad_norm_2500_250.pdf}}
\caption{Results for PCA \eqref{problem_PCA}. The y-axis denotes $\|\grad f(x_t)\|$. For each figure, the experiments are repeated and averaged over 10 times.}
\label{fig:pca_changing_nk_norm}
\end{center}
\end{figure}
\paragraph{Experiments on kPCA.} We now test the three algorithms on the kPCA problem \eqref{problem_kPCA}. In the first experiment we sample $10000$ data points in $\RR^{200}$ and partition them into $n$ agents, each containing an equal number of data points. We run our codes with different numbers of agents $n$, again setting $k=n/10$, and take $(d, r)=(200, 5)$. The results are given in Figure \ref{fig:kpca_changing_nk}, where we see that \texttt{RFedSVRG} efficiently decreases both $\|\grad f(x_t)\|$ and the principal angle in all tested cases.
In the second experiment we test the effect of the number of inner iterations $\tau_i$.
We generate $10000$ standard Gaussian vectors and set $(d,r) = (200,5)$, $k=10$, and $n=100$, so that $p=100$. We choose $\tau\in\{1, 10, 50, 100\}$ inner steps for all three algorithms. The results are presented in Figure \ref{fig:kpca_changing_tau}, where we again observe the strong performance of \texttt{RFedSVRG}.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_50_5.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_100_10.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_500_50.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_grad_norm_1000_100.pdf}}
\setcounter{subfigure}{0}
\subfigure[$(n, k)=(50, 5)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_50_5.pdf}}
\subfigure[$(n, k)=(100, 10)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_100_10.pdf}}
\subfigure[$(n, k)=(500, 50)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_500_50.pdf}}
\subfigure[$(n, k)=(1000, 100)$]{\includegraphics[width=0.23\textwidth]{changing_nk/kpca_angle_1000_100.pdf}}
\caption{Results for kPCA. The y-axis of the figures in the first row denotes $\|\grad f(x_t)\|$, and the y-axis of the figures in the second row denotes the principal angle between $x_t$ and $x^*$. The experiments are repeated and averaged over 10 times.}
\label{fig:kpca_changing_nk}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_1.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_10.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_50.pdf}}
\subfigure{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_grad_norm_tau_100.pdf}}
\setcounter{subfigure}{0}
\subfigure[$\tau=1$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_1.pdf}}
\subfigure[$\tau=10$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_10.pdf}}
\subfigure[$\tau=50$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_50.pdf}}
\subfigure[$\tau=100$]{\includegraphics[width=0.23\textwidth]{changing_tau/kpca_angle_tau_100.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} with different numbers of inner iterations $\tau\in\{1, 10, 50, 100\}$. The y-axis of the figures in the first row denotes $\|\grad f(x_t)\|$, and the one in the second row denotes the principal angle between $x_t$ and $x^*$. The experiments are repeated and averaged over 10 times.}
\label{fig:kpca_changing_tau}
\end{center}
\end{figure}
\subsection{Experiments for kPCA on real data}
We now show the numerical results of the three algorithms on real data, focusing on the kPCA problem \eqref{problem_kPCA}. We test the three algorithms on three real datasets: the Iris dataset~\cite{forinaextendible}, the wine dataset~\cite{forinaextendible}, and the MNIST handwritten-digit dataset~\cite{lecun1998gradient}. For all three datasets, we compute the first $r$ principal directions and the true optimal loss value directly, so we can evaluate the principal angles between the iterates and the ground truth. The experiments are repeated and averaged over 10 random initializations.
For the first two datasets, we randomly partition the data into $10$ agents and at each iteration we select $k=5$ agents. Figures \ref{fig:kpca_iris} and \ref{fig:kpca_wine} show that \texttt{RFedSVRG} effectively decreases the norm of the Riemannian gradient and the principal angles, while the other two algorithms are not as efficient.
\begin{figure}[t!]
\begin{center}
\subfigure[Gradient norm]{\includegraphics[width=0.36\textwidth]{kpca_iris_grad_norm.pdf}}
\subfigure[Principal angle]{\includegraphics[width=0.36\textwidth]{kpca_iris_angle.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} on Iris dataset. The data is in $\RR^4$ ($d=4$) and we take $r=2$. The first figure is the norm of Riemannian gradient $\|\grad f(x_t)\|$ and the second is the principal angle between $x_t$ and the true solution $x^*$. }
\label{fig:kpca_iris}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\subfigure[Gradient norm]{\includegraphics[width=0.36\textwidth]{kpca_wine_grad_norm.pdf}}
\subfigure[Principal angle]{\includegraphics[width=0.36\textwidth]{kpca_wine_angle.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} with wine dataset. The data is in $\RR^{13}$ ($d=13$) and we take $r=5$. The first figure is the norm of Riemannian gradient $\|\grad f(x_t)\|$ and the second is the principal angle between $x_t$ and the true solution $x^*$.}
\label{fig:kpca_wine}
\end{center}
\end{figure}
For the MNIST handwritten-digit dataset, the (training) set contains $60000$ images of size $28\times 28$, i.e., $d=784$. This is a relatively large dataset and we test the proposed algorithms with different numbers of clients. The results are shown in Figure \ref{fig:kpca_mnist}, where the efficiency of \texttt{RFedSVRG} is demonstrated again. Comparing the two rows of Figure \ref{fig:kpca_mnist} shows that \texttt{RFedSVRG} remains more efficient even with a larger number of clients $n$.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_grad_norm_100.pdf}}
\subfigure{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_angle_100.pdf}}
\setcounter{subfigure}{0}
\subfigure[Gradient norm]{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_grad_norm_200.pdf}}
\subfigure[Principal angle]{\includegraphics[width=0.36\textwidth]{mnist/kpca_mnist_angle_200.pdf}}
\caption{Results for kPCA \eqref{problem_kPCA} with the MNIST dataset. The data is in $\RR^{784}$ ($d=784$) and we take $r=5$. The first column is the norm of the Riemannian gradient $\|\grad f(x_t)\|$ and the second is the principal angle between $x_t$ and the true solution $x^*$. The two rows correspond to $n=100$ and $n=200$. We take $k=n/10$ and $\tau=5$ for all algorithms.}
\label{fig:kpca_mnist}
\end{center}
\end{figure}
\section{Conclusions}
In this paper, we studied federated optimization over Riemannian manifolds. We proposed a Riemannian federated SVRG algorithm and analyzed its convergence rate to an $\epsilon$-stationary point. To the best of our knowledge, this is the first federated algorithm over Riemannian manifolds with convergence guarantees. Numerical experiments on federated PCA and federated kPCA were conducted to demonstrate the efficiency of the proposed method. Developing algorithms with lower communication cost, better scalability, and sparse solutions is an important direction for future research.
\newpage
\section{Introduction}
Wolf-Rayet\,(WR) bubbles are the final result of the evolution of the
circumstellar medium (CSM) of massive stars with initial masses $M
\gtrsim$35 $M_\odot$. These stars exhibit high mass-loss rates
throughout their lives, peaking during their post-main-sequence
evolution that involves a Red or Yellow Supergiant (RSG or YSG) or
Luminous Blue Variable (LBV) stage (e.g.,
\citealp{2003A&A...404..975M}) during which the mass-loss rate can be
as high as $10^{-4}$--$10^{-3}\,M_\odot$ yr$^{-1}$
\citep{2000A&A...360..227N}, although the stellar wind velocity is low
($10$-$10^{2}$~km~s$^{-1}$). The final WR stage is characterized by a
fast stellar wind ($v_{\infty}\gtrsim 10^{3}$~km~s$^{-1}$), which
sweeps up, shocks, and compresses the RSG/LBV material. Thin-shell
and Rayleigh-Taylor instabilities lead to the corrugation and eventual
fragmentation of the swept-up shell
\citep{{1996A&A...305..229G},{1996A&A...316..133G},{2003ApJ...594..888F},{2006ApJ...638..262F},{2011ApJ...737..100T}}.
Clumpy WR wind-blown bubbles have been detected at optical wavelengths
around $\sim$10 WR stars in our Galaxy
\citep{{1983ApJS...53..937C},{2000AJ....120.2670G},{2010MNRAS.409.1429S}}.
Their optical emission is satisfactorily modeled as photoionized dense
clumps and shell material \citep{1993A&A...272..299E}.
X-ray emission has been detected so far in only two WR bubbles,
NGC\,6888 and S\,308
\citep{{1988Natur.332..518B},{1994A&A...286..219W},
{1998LNP...506..425W},{1999A&A...343..599W},{2003ApJ...599.1189C},
{2002A&A...391..287W},{2005ApJ...633..248W},{2011ApJ...728..135Z}}.
The most sensitive X-ray observations of a WR bubble are those of the
northwest (NW) quadrant of S\,308 presented by
\citet{2003ApJ...599.1189C}. Their \emph{XMM-Newton} EPIC-pn X-ray
spectrum of S\,308 revealed very soft X-ray emission dominated by the
\ion{N}{6} He-like triplet at $\sim$0.43~keV and declining sharply
toward higher energies. This spectrum was fit with a two-temperature
optically thin MEKAL plasma emission model, with a cold main component
at $kT_1 = 0.094$~keV (i.e., $T_\mathrm{X}\sim 1.1 \times 10^6$~K),
and a hot secondary component at $kT_2 \sim 0.7$~keV contributing
$\leq$6\% of the observed X-ray flux. The comparison of the X-ray and
optical H$\alpha$ and [\ion{O}{3}] images of S\,308 showed that the
X-ray emission is confined by the ionized shell.
In this paper, we present the analysis of three additional
\textit{XMM-Newton} observations of S\,308, which, in conjunction with
those of the NW quadrant presented by \citet{2003ApJ...599.1189C}, map
90\% of this WR bubble (see \S2). In \S3 and \S4 we discuss the
spatial distribution and spectral properties of the X-ray-emitting
plasma in S\,308, respectively. In \S5 we present our results of the
X-ray emission from the central star in the WR bubble. A discussion is
presented in \S6 and summary and conclusions in \S7.
\section{\textit{XMM-Newton} Observations}
\begin{figure*}[!htbp]
\includegraphics[width=1.0\linewidth]{fig1.eps}
\caption{\textit{XMM-Newton} EPIC images of the four observations of
S\,308 in the 0.3--1.15~keV band. The images have been extracted
using a pixel size of 2\farcs0 and adaptively smoothed using a
Gaussian kernel between 5\arcsec\ and 30\arcsec. The source regions
used for spectral analysis are indicated by solid lines and the
background regions by dotted lines. Note that the point sources that
are present in the images were excised for the spectral analysis.}
\label{fig:4images}
\end{figure*}
The unrivaled sensitivity of the \textit{XMM-Newton} EPIC cameras to
large-scale diffuse emission makes them the preferred choice for the
observation of S\,308. \citet{2003ApJ...599.1189C} presented
\textit{XMM-Newton} observations of the brightest NW quadrant of
S\,308 \citep{1999A&A...343..599W}, but the large angular size of
S\,308 ($\sim$40\arcmin\ in diameter) exceeds the field of view of the
EPIC camera and a significant fraction of the nebula remained
unobserved. To complement these observations, additional
\textit{XMM-Newton} observations of three overlapping fields covering
the northeast (NE), southwest (SW), and southeast (SE) quadrants of
the nebula have been obtained. The previous and new observations
result in the coverage of $\sim$90\% of the area of S\,308.
The pointings, dates and revolutions of the observations, and their
exposure times are listed in Table~\ref{tab:table1}. In the
following, we will refer to the individual observations by the
quadrant of S\,308 that is covered, namely the NW, NE, SW, and SE
quadrants. All observations were obtained using the Medium Filter and
the Extended Full-Frame Mode for EPIC-pn and Full-Frame Mode for
EPIC-MOS.
\begin{table*}[ht!]
\caption{\textit{XMM-Newton} Observations of S\,308}
\centering
\begin{tabular}{lcccccrrrcrrr}
\hline\hline\noalign{\smallskip}
\multicolumn{1}{l}{Pointing} &
\multicolumn{1}{c}{R.A.} &
\multicolumn{1}{c}{Dec.} &
\multicolumn{1}{c}{Obs.\ ID} &
\multicolumn{1}{c}{Rev.} &
\multicolumn{1}{c}{Observation start} &
\multicolumn{3}{c}{Total exposure time} &
\multicolumn{1}{c}{} &
\multicolumn{3}{c}{Net exposure time} \\
\cline{7-9}
\cline{11-13}
\multicolumn{1}{c}{} &
\multicolumn{2}{c}{(J2000)} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{UTC} &
\multicolumn{1}{c}{pn} &
\multicolumn{1}{c}{MOS1} &
\multicolumn{1}{c}{MOS2} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{pn} &
\multicolumn{1}{c}{MOS1} &
\multicolumn{1}{c}{MOS2} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{[ks]} &
\multicolumn{1}{c}{[ks]} &
\multicolumn{1}{c}{[ks]} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{[ks]} &
\multicolumn{1}{c}{[ks]} &
\multicolumn{1}{c}{[ks]} \\
\hline
\noalign{\smallskip}
NW & 06:53:30 & $-$23:43:00 & 0079570201 & 343 & 2001-10-23T22:00:09 & 43.5 & 47.6 & 47.5 & & 11.9 & 19.6 & 19.9 \\
SW & 06:53:24 & $-$23:56:18 & 0204850401 & 781 & 2004-03-15T14:30:41 & 20.0 & 23.3 & 23.4 & & 6.4 & 9.0 & 9.2 \\
SE & 06:55:16 & $-$24:00:00 & 0204850501 & 781 & 2004-03-14T23:00:41 & 22.0 & 25.4 & 25.4 & & 8.2 & 12.4 & 12.7 \\
NE & 06:54:47 & $-$23:46:18 & 0204850601 & 781 & 2004-03-15T06:45:41 & 22.0 & 25.4 & 25.4 & & 5.4 & 8.9 & 8.4 \\
\hline
\end{tabular}
\label{tab:table1}
\end{table*}
The \textit{XMM-Newton} pipeline products were processed using the
\textit{XMM-Newton} Science Analysis Software (SAS) Version 11.0, and
the Calibration Access Layer available on 2011-09-13. In order to
analyze the diffuse and soft X-ray emission from S\,308, the
\textit{XMM-Newton} Extended Source Analysis Software (XMM-ESAS)
package \citep{2008A&A...478..615S,KS08} has been used. This
procedure applies very restrictive selection criteria for the
screening of bad events registered during periods of high background
to ensure a reliable removal of the background and instrumental
contributions, particularly in the softest energy bands. As a result,
the final net exposure times resulting from the use of the XMM-ESAS
tasks, as listed in Table~\ref{tab:table1}, are noticeably shorter
than the original exposure times. Since we are interested in the best
time coverage of the central WR star to assess its possible X-ray
variability and given that its X-ray emission level is much brighter
than that of a mildly enhanced background, we applied less restrictive
criteria in selecting good time intervals for this star. For this
particular analysis, the 10--12~keV energy band is used to assess the
charged particle background, and we excised periods of high background
with EPIC-pn count rates $\geq$1.5~counts~s$^{-1}$ and EPIC-MOS count
rates $\geq$0.3~counts~s$^{-1}$.
\section{Spatial Distribution of the Diffuse X-ray Emission}
\subsection{Image processing}
Following Snowden \& Kuntz' cookbook for analysis procedures for
\emph{XMM-Newton} EPIC observations of extended objects and diffuse
background, Version 4.3 \citep{2011AAS...21734417S}, the XMM-ESAS
tasks and the associated Current Calibration Files (CCF), as obtained
from {\small
\url{ftp://xmm.esac.esa.int/pub/ccf/constituents/extras/esas\_caldb}},
have been used to remove the contributions from the astrophysical
background, soft proton background, and solar wind charge-exchange
reactions, which have contributions at low energies ($<$1.5 keV). The
resulting exposure-map-corrected, background-subtracted EPIC images of
each observed quadrant of S\,308 in the 0.3--1.15~keV band are
presented in Figure~\ref{fig:4images}. The new observations of the
NE, SW, and SE quadrants of S\,308 detect diffuse emission, as well as
a significant number of point sources superimposed on this diffuse
emission. With the single exception of HD\,50896 (a.k.a.\ WR\,6), the WR star progenitor of this bubble, which is registered in the SW and NE observations, all point sources are either background or foreground sources and have been removed prior to our analysis.
\begin{figure*}[!t]
\includegraphics[width=1.0\linewidth]{fig2.eps}
\caption{
{\it (left)} Adaptively smoothed \emph{XMM-Newton} EPIC Image of S\,308 in
the 0.3--1.15~keV band.
All point sources, except for the central star HD\,50896 (WR\,6), have
been excised.
{\it (right)}
Ground-based [\protect\ion{O}{3}] image of S\,308 obtained with the Michigan
Curtis Schmidt telescope at Cerro Tololo Inter-American Observatory\,(CTIO)
with superimposed X-ray emission contours.
The position of the central star HD\,50896 is indicated in both panels.
}
\label{fig:mosaic}
\end{figure*}
\begin{figure}[!t]
\includegraphics[bb=120 200 520 708,width=1.0\linewidth]{fig3.eps}
\caption{ S\,308 X-ray surface brightness profiles along the SE--NW
(PA=135$^\circ$) and SW--NE (PA=45$^\circ$) directions extracted
from the smoothed \emph{XMM-Newton} EPIC image presented in
Figure~\protect\ref{fig:mosaic}-\textit{left}. For comparison, a
surface brightness profile of a representative background region
towards the West of S\,308 is shown at the same intensity and
spatial scales.}
\label{fig:prof}
\end{figure}
\subsection{Analysis of the diffuse X-ray emission}
In order to analyze the spatial distribution of the diffuse X-ray
emission in S\,308, the four individual observations have been
mosaicked using the XMM-ESAS tasks and all point sources removed using
the \emph{Chandra} Interactive Analysis of Observations (CIAO) Version
4.3 \emph{dmfilth} routine, except the one corresponding to WR\,6.
The final image (Figure~\ref{fig:mosaic}-\textit{left}), extracted in
the 0.3--1.15~keV energy band with a pixel size of 3\farcs0, has been
adaptively smoothed using the ESAS task \emph{adapt-2000} requesting
100 counts of the original image for each smoothed pixel, with typical
smoothing kernel scales $\leqslant1'$ in the brightest regions and
$1'-2'$ in the faintest ones. This image reveals that the diffuse
X-ray emission from S\,308 has a limb-brightened morphology, with an
irregular inner cavity $\sim22'$ in size. The surface brightness
distribution displayed by this image confirms and adds further details
to the results of previous X-ray observations
\citep{1999A&A...343..599W,2003ApJ...599.1189C}. The X-ray emission
from the bubble is brighter towards the northwest blowout and the
western rim, and fainter towards the east. The bubble seems to lack
detectable X-ray emission towards the central regions around the WR
star.
\begin{figure*}[!t]
\includegraphics[width=1.0\linewidth]{fig4.eps}
\caption{Composite color picture of the \emph{XMM-Newton} EPIC image
(blue) and CTIO [\ion{O}{3}] (green) and H$\alpha$ (red) images of
S\,308. The apparent X-ray emission outside the optical shell is
caused by large fluctuations in the background in regions near the
EPIC camera edges, where the net exposure is much shorter than at the
aimpoint.}
\label{fig:color_image}
\end{figure*}
The limb-brightened spatial distribution of the X-ray emission from
S\,308 is further illustrated by the surface brightness profiles along
the SE--NW and NE--SW directions shown in Figure~\ref{fig:prof}. The
emission in the innermost regions, close to the central WR star, falls
to levels comparable to those of a background region to the west of
S\,308 shown in the bottom panel of Fig.~\ref{fig:prof}. Besides the
SE region, whose surface brightness distribution is best described by
a plateau, the X-ray emission along the other directions increases
steadily with radial distance, peaking near the shell rim and
declining sharply outwards. The thickness of the X-ray-emitting shell
is difficult to quantify; along the SW direction, it has a FWHM
$\sim$5\arcmin, whereas it has a FWHM $\sim$8\arcmin\ along the NE
direction. Figure~\ref{fig:prof} also illustrates that the
X-ray-emitting shell is larger along the SE--NW direction
($\sim$44\arcmin\ in size) than along the NE--SW direction
($\sim$40\arcmin\ in size).
Finally, the spatial distribution of the diffuse X-ray emission from
S\,308 is compared to the [\ion{O}{3}] emission from the ionized
optical shell in Figure~\ref{fig:mosaic}-\textit{right}. The X-ray
emission is interior to the optical emission not only for the NW
quadrant but for the entire bubble. This is also illustrated in the
color composite picture shown in Figure~\ref{fig:color_image}, in which
the distribution of the X-ray emission is compared to the optical
H$\alpha$ and [\ion{O}{3}] images. This image shows that the diffuse
X-ray emission is closely confined by the filamentary emission in the
H$\alpha$ line, whereas the smooth emission in the [O~{\sc iii}] line
extends beyond both the H$\alpha$ and X-ray rims.
\section{Physical Properties of the Hot Gas in S\,308}
The spectral properties of the diffuse X-ray emission from S\,308 can
be used to investigate the physical conditions and chemical abundances
of the hot gas inside this nebula. In order to proceed with this analysis,
we have defined several polygonal aperture regions, as shown in
Figure~\ref{fig:4images}, which correspond to distinct morphological
features of S\,308: regions labeled $\#$1 correspond to the rim revealed by the limb-brightened morphology, $\#$2 to the NW blowout, $\#$3, $\#$5, and $\#$6 to the shell interior, and $\#$4 to the central star HD\,50896. We note that any particular morphological feature may
have been registered in more than one quadrant, in which case several
spectra can have the same numerical designation (for instance, there
are four spectra for the rim of the shell, namely 1NE,
1NW, 1SE, and 1SW).
\\
\\
\subsection{Spectra Extraction and Background Subtraction}
Perhaps the most challenging problem associated with the analysis of
the X-ray spectra of S\,308 is a reliable subtraction of the
background contribution. The diffuse X-ray emission from S\,308 fills
a significant fraction of the field of view of the EPIC-pn and
EPIC-MOS cameras, making the selection of suitable background regions
difficult because the instrumental spectral response of the cameras
close to their edges may not be the same as those for the source
apertures.
The background contribution to the diffuse emission from clusters of
galaxies that fills the field of view is typically assessed from high
signal-to-noise ratio observations of blank fields. In the case of
S\,308, however, the comparison of spectra extracted from background
regions with those extracted from the same detector regions of the
most suitable EPIC Blank Sky observations \citep{2007A&A...464.1155C}
clearly indicates that they have different spectral shapes. The
reason for this discrepancy lies in the typical high Galactic latitude
of the EPIC Blank Sky observations, implying low hydrogen absorption
column densities and Galactic background emission, while S\,308 is
located in regions close to the Galactic Plane where extinction and
background emission are significant. We conclude that EPIC Blank Sky
observations, while suitable for the analysis of the diffuse emission
of a large variety of extragalactic objects, cannot be used in our
analysis of S\,308.
Alternatively, the different contributions to the complex background
emission in \emph{XMM-Newton} EPIC observations can be modeled, taking
into account the contributions from the astrophysical background,
solar wind charge-exchange reactions, high-energy (soft protons)
particle contributions, and electronic noise. This is the procedure
recommended by the XMM-ESAS in the release of SAS v11.0, following the
background modeling methodology devised by \citet{Snowden_etal04} and
\citet{KS08}. Even though the modeling of the different contributions
is a complex task, it can be routinely carried out. Unfortunately,
S\,308 is projected close to the Galactic Plane and the \emph{ROSAT}
All Sky Survey (RASS) reveals that it is located in a region of strong
soft background emission with small-scale spatial variations. As shown
in Figure~\ref{fig:back}, the X-ray emission from this background is soft and shows lines in the 0.3--1.0 keV energy band arising from thermal components, just as the emission from S\,308 does. Therefore, it is not possible to model independently the emission from the S\,308 bubble and that from the soft background.
\begin{figure}[!t]
\includegraphics[bb=50 200 560 708,width=1.0\linewidth]{fig5.eps}
\caption{Comparison of the background-unsubtracted raw EPIC-pn
spectrum of S\,308 (black) and scaled EPIC-pn background spectrum
(red). The Al-K line at $\sim$1.5 keV is an instrumental line.}
\label{fig:back}
\end{figure}
\begin{figure}[!t]
\includegraphics[bb=48 175 548 710,width=1.0\linewidth]{fig6.eps}
\vspace{0.1cm}
\caption{
Background-subtracted blank sky spectra extracted from source and
background regions equal in detector coordinates to the regions 1NW
(\textit{top}) and 1SW (\textit{bottom}) of S\,308.
}
\label{fig:blank}
\end{figure}
The only viable procedure for the analysis thus seems to be the use of background spectra extracted from areas near the camera edges of the same observations. It can be expected that the spectral properties of
these background spectra differ from those of the background
registered by the central regions of the cameras, given the varying
spectral responses of the peripheral and central regions of the
cameras to the various background components. To assess these
differences, we have used EPIC Blank Sky observations to extract
spectra from source and background regions identical in detector units
to those used for S\,308. Two typical examples of blank sky
background-subtracted spectra are presented in Figure~\ref{fig:blank}.
While these spectra are expected to be flat, several deviations can be
noticed: (1) clear residuals at $\sim 7.5$ and $\sim 8.1$~keV, which
can be attributed to the defective removal of the strong instrumental
Cu lines that affect the EPIC-pn spectra \citep{KS08}, (2) a
noticeable deviation at $\sim0.65$~keV of the O-K line, and (3) most
notably, deviations at energies below 0.3~keV, which is indicative of
the faulty removal of the electronic noise component of the background
\citep{2007A&A...464.1155C}. Note that the Al-K line at $\sim1.5$~keV
and the Si-K line at $\sim1.8$~keV, which may be expected to be strong
in the background EPIC-pn spectra, are correctly removed.
Consequently, we have chosen to use the background spectra extracted
from the observations of S\,308, but restrict our spectral fits to the
0.3--1.3~keV band.
\begin{figure*}[!htbp]
\includegraphics[bb=18 165 592 718, width=\linewidth]{fig7.eps}
\vspace{0.1cm}
\caption{Background-subtracted \emph{XMM-Newton} EPIC-pn spectra of
S\,308 corresponding to the 11 individual source regions shown in
Figure~\protect\ref{fig:4images}, as well as the combined spectra
of the entire nebula, its shell or rim (regions $\#1$ and $\#2$) and the
interior region of the shell ($\#3$, $\#5$, and $\#6$). Each spectrum is
overplotted with its best-fit two-temperature APEC model in the
energy range 0.3--1.3 keV, assuming fixed abundances, while the lower
panel displays the residuals of the fit.
}
\label{fig:spec_diffuse}
\end{figure*}
\subsection{Spectral properties}
\label{sec:specprops}
The individual background-subtracted EPIC-pn spectra of the diffuse
emission of S\,308 are presented in Figure~\ref{fig:spec_diffuse}.
This figure also includes spectra of the whole nebula, its rim, and
central cavity obtained by combining all spectra, those of regions
$\#$1 and $\#$2 (the rim or limb-brightened shell), and those of
regions $\#$3, $\#$5, and $\#$6 (the central cavity), respectively.
Spectra and response and ancillary calibrations matrices from
different observations of the same spatial regions were merged using
the \textit{mathpha}, \textit{addarf}, and \textit{addrmf} HEASOFT
tasks according to the prescriptions of the SAS threads. The EPIC-MOS
spectra have also been examined and found to be consistent with the
EPIC-pn ones although, due to the lower sensitivity of the EPIC-MOS
cameras, they have fewer counts. Therefore, our spectral analysis of
the diffuse X-ray emission will concentrate on the EPIC-pn spectra.
The spectra shown in Figure~\ref{fig:spec_diffuse} are all soft, with
a prominent peak near $\gtrsim 0.4$~keV, and a rapid decline in
emission towards higher energies. Some spectra (e.g., 1NE, 1NW, 1SW,
5NE, and 5SE) show a secondary peak near $\lesssim 0.6$~keV that is
only hinted at in other spectra. There is little emission above $\sim
0.7$~keV, although some spectra (e.g., 1SE, 1SW, 2NW, 3NE, 5SE, and
6SW) appear to present a hard component between 0.8 and 1.0~keV.
\begin{table*}[ht!]
\caption{Spectral Fits of the Diffuse X-ray Emission of S\,308}
\scriptsize
\centering
\begin{tabular}{lrllcclccll}
\hline\hline\noalign{\smallskip}
\multicolumn{1}{l}{Region} &
\multicolumn{1}{l}{Counts} &
\multicolumn{1}{c}{$N_\mathrm{H}$} &
\multicolumn{1}{c}{$kT_1$} &
\multicolumn{1}{c}{EM$_1^{\mathrm{a}}$} &
\multicolumn{1}{c}{$f_1^{\mathrm{b}}$} &
\multicolumn{1}{c}{$kT_2$} &
\multicolumn{1}{c}{EM$_2^{\mathrm{a}}$} &
\multicolumn{1}{c}{$f_2^{\mathrm{b}}$} &
\multicolumn{1}{c}{$f_2/f_1$} &
\multicolumn{1}{c}{$\chi^2$/DoF} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{[$10^{20}$cm$^{-2}$]} &
\multicolumn{1}{c}{[keV]} &
\multicolumn{1}{c}{[cm$^{-3}$]} &
\multicolumn{1}{c}{[erg\,cm$^{-2}$s$^{-1}$]} &
\multicolumn{1}{c}{[keV]} &
\multicolumn{1}{c}{[cm$^{-3}$]} &
\multicolumn{1}{c}{[erg\,cm$^{-2}$s$^{-1}$]} &
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} \\
\hline
\noalign{\smallskip}
S\,308 & 10290$\pm$19 &~~~6.2 & 0.096$\pm$0.002 & 7.6$\times$10$^{56}$ & 2.7$\times$10$^{-12}$ & 1.12$\pm$0.22 & 1.2$\times$10$^{55}$ & 2.4$\times$10$^{-13}$ & 0.090 & 2.01 (=319.6/159) \\
Shell & 7920$\pm$23 &~~~6.2 & 0.092$\pm$0.003 & 4.9$\times$10$^{56}$ & 1.5$\times$10$^{-12}$ & $\dots$ & $\dots$ & $\dots$ & $\dots$ & 1.54 (=233.9/151) \\
Interior & 2390$\pm$120 &~~~6.2 & 0.116$\pm$0.011 & 8.3$\times$10$^{55}$ & 5.6$\times$10$^{-13}$ & 0.95 & 4.8$\times$10$^{54}$ & 7.5$\times$10$^{-14}$ & 0.134 & 1.35 (=188.2/139) \\
\hline
1NE & 965$\pm$10 &~~~6.2 & 0.094$\pm$0.010 & 1.7$\times$10$^{56}$ & 5.1$\times$10$^{-13}$ & 0.95 & 1.1$\times$10$^{53}$ & 2.5$\times$10$^{-15}$ & 0.005 & 0.82 (=46.9/57) \\
1NW & 1000$\pm$60 &~~~6.2 & 0.102$\pm$0.009 & 4.7$\times$10$^{55}$ & 2.0$\times$10$^{-13}$ & 0.95 & 4.4$\times$10$^{50}$ & 1.0$\times$10$^{-17}$ & $<10^{-3}$ & 1.13 (=57.9/51) \\
1SE & 540$\pm$50 &~~~6.2 & 0.094$\pm$0.010 & 1.8$\times$10$^{56}$ & 5.1$\times$10$^{-13}$ & 0.95 & 1.1$\times$10$^{53}$ & 2.5$\times$10$^{-15}$ & 0.005 & 0.82 (=46.9/57) \\
1SW & 1100$\pm$50 &~~~6.2 & 0.097$\pm$0.012 & 1.0$\times$10$^{56}$ & 3.9$\times$10$^{-13}$ & 0.95 & 2.3$\times$10$^{54}$ & 5.2$\times$10$^{-14}$ & 0.135 & 1.31 (=69.3/53) \\
\hline
2NW & 4620$\pm$100 &~~~6.2 & 0.095$\pm$0.003 & 2.2$\times$10$^{56}$ & 7.4$\times$10$^{-13}$ &0.96$\pm$0.21 & 4.3$\times$10$^{54}$& 9.8$\times$10$^{-14}$ & 0.130 & 1.30 (=137.9/106) \\
\hline
3NE & 300$\pm$33 &~~~6.2 & 0.095$\pm$0.023 & 3.5$\times$10$^{55}$ & 1.2$\times$10$^{-13}$ & 0.95 & 1.4$\times$10$^{54}$ & 3.1$\times$10$^{-14}$ & 0.262 & 0.55 (=9.4/17) \\
3NW & 160$\pm$25 &~~~6.2 & 0.11$\pm$0.04 & 5.0$\times$10$^{54}$ & 2.5$\times$10$^{-14}$ & 0.95 & 5.6$\times$10$^{51}$ & 1.2$\times$10$^{-16}$ & 0.005 & 0.91 (=27.4/30) \\
\hline
5NE & 530$\pm$50 &~~~6.2 & 0.090$\pm$0.015 & 7.6$\times$10$^{55}$ & 2.1$\times$10$^{-13}$ & 0.95 & 9.0$\times$10$^{53}$ & 1.8$\times$10$^{-14}$ & 0.083 & 1.06 (=44.5/42) \\
5SE & 820$\pm$60 &~~~6.2 & 0.103$\pm$0.016 & 4.4$\times$10$^{55}$ & 1.9$\times$10$^{-13}$ & 0.95 & 2.2$\times$10$^{54}$ & 5.1$\times$10$^{-14}$ & 0.268 & 1.05 (=54.5/52) \\
6NW & 400$\pm$50 &~~~6.2 & 0.112$\pm$0.015 & 1.8$\times$10$^{55}$ & 1.0$\times$10$^{-13}$ & $\dots$ & $\dots$ & $\dots$ & $\dots$ & 1.42 (=50.0/35) \\
6SW & 210$\pm$32 &~~~6.2 & 0.12$\pm$0.05 & 8.3$\times$10$^{54}$ & 5.7$\times$10$^{-14}$ & 0.95 & 9.7$\times$10$^{53}$ & 1.5$\times$10$^{-14}$ & 0.270 & 1.69 (=27.0/16) \\
\hline
W99$^{\mathrm{c}}$ & 4560& ~~~35 & 0.129 & 1.0$\times$10$^{56}$ & 6.5$\times$10$^{-12}$ & 2.4 & 4.3$\times$10$^{55}$ & 1.2$\times$10$^{-12}$ & 0.185 & 20 (=40/2) \\
C03$^{\mathrm{d}}$ & $\dots$& ~~~11 & 0.094$\pm$0.009 & 8.2$\times$10$^{56}$ & 7.2$\times$10$^{-12}$ & 0.7$^{+1.5}_{-0.5}$ & 5.1$\times$10$^{54}$ & 1.1$\times$10$^{-13}$ & 0.015 & 1.02 \\
\hline
\end{tabular}
\begin{list}{}{}
\item{$^{\mathrm{a}}$EM = $\int n_{\rm e}^2 dV$.}
\item{$^{\mathrm{b}}$Observed (absorbed) fluxes for the
two-temperature models components in the energy range
0.3-1.3~keV.}
\item{$^{\mathrm{c}}$\citet{1999A&A...343..599W}.}
\item{$^{\mathrm{d}}$\citet{2003ApJ...599.1189C}.}
\end{list}
\label{tab:spec_fits}
\end{table*}
The feature at $\sim 0.4$~keV can be identified with the 0.43~keV
\ion{N}{6} triplet, while the fainter feature at $\sim 0.6$~keV can be
associated with the 0.57~keV \ion{O}{7} triplet. The occurrence of
spectral lines is suggestive of optically thin plasma emission,
confirming previous X-ray spectral analyses of S\,308
\citep{1999A&A...343..599W,2003ApJ...599.1189C}. The predominance of
emission from the He-like species of nitrogen and oxygen over their
corresponding H-like species implies a moderate ionization stage of
the plasma. Furthermore, the relative intensity of the \ion{N}{6} and
\ion{O}{7} lines suggests nitrogen enrichment, since the intensity of
the \ion{O}{7} lines from a plasma with solar abundances would be
brighter than that of the \ion{N}{6} lines.
In accordance with their spectral properties and previous spectral
fits of the NW regions of S\,308 \citep{2003ApJ...599.1189C}, all the
X-ray spectra of S\,308 have been fit with XSPEC v12.7.0
\citep{Arnaud1996} for an absorbed two-temperature APEC optically thin
plasma emission model. The absorption model uses
\citet{1992ApJ...400..699B} cross-sections. A low temperature
component is used to model the bulk of the X-ray emission, while a
high temperature component is used to model the faint emission above
0.7~keV. We have adopted the same chemical abundances as
\citet{2003ApJ...599.1189C}, i.e., C, N, O, Ne, Mg, and Fe to be 0.1,
1.6, 0.13, 0.22, 0.13, and 0.13 times their solar values
\citep{1989GeCoA..53..197A}, respectively, which correspond to the
nebular abundances. The simulated two-temperature APEC model spectra
were then absorbed by an interstellar absorption column and convolved
with the EPIC-pn response matrices. The resulting spectra were then
compared to the observed spectrum in the 0.3--1.3~keV energy range and
$\chi^{2}$ statistics are used to determine the best-fit models. A
minimum of 50 counts per bin was required for the spectral fit. The
foreground absorption ($N_\mathrm{H}$), plasma temperatures ($kT_1$,
$kT_2$) with 1-$\sigma$ uncertainties, and emission measures (EM$_1$,
EM$_2$) of the best-fit models are listed in
Table~\ref{tab:spec_fits}. The best-fit models are overplotted on the
background-subtracted spectra in Figure~\ref{fig:spec_diffuse},
together with the residuals of the fits. Multi-temperature models do
not provide a substantial reduction of the value of the reduced
$\chi^{2}$ of the fit. We note that the values of the reduced
$\chi^{2}$ differ the most from unity for large regions, implying
inconsistencies of the relative calibrations across the FoV, but the
spectral fits still provide a fair description of the observed
spectrum. In the following sections we discuss the spectral fits of
the emission from the different morphological components of S\,308 in
more detail.
Spectral fits using models with varying chemical abundances of C, N,
and O were also attempted, but they did not provide any statistical
improvement of the fit. In particular, models with N/O abundance
ratios different from those of the nebula resulted in spectral fits of notably worse quality. As noted by other authors
\citep[see][]{2003ApJ...599.1189C,2011ApJ...728..135Z}, an
X-ray-emitting plasma with chemical abundances similar to those of the
optical nebulae seems at this moment to be the most adequate model for
the soft X-ray emission from WR bubbles.
\subsubsection{Properties of the Global X-ray Emission from S\,308}
\label{sec:globalprops}
The best-fit to the combined spectrum of the whole nebula results in
unphysically high values of the hydrogen absorption column density,
$N_\mathrm{H}$, well above the range $[0.2 -
1.05]\times10^{21}$\,cm$^{-2}$ implied by the optical extinction
values derived from the Balmer decrement of the nebula
\citep{1992A&A...259..629E}. The effects of $N_\mathrm{H}$ and
nitrogen abundance on the $\chi^{2}$ of the spectral fits appear to be
correlated, i.e., models with high $N_\mathrm{H}$ and low nitrogen
abundance fit the spectra equally well as models with low
$N_\mathrm{H}$ and high nitrogen abundance. If we adopt the high
absorption column density from the best-fit model
($N_{\mathrm{H}}\gtrsim3\times10^{21}$~cm$^{-2}$), the elevated
nitrogen abundance reported by \citet{2003ApJ...599.1189C} will not be
reproduced. As the high absorption column density is not supported by
the optical extinction, in the subsequent spectral fits we will adopt
a fixed $N_\mathrm{H}$ of $6.2\times10^{20}$\,cm$^{-2}$ that is
consistent with the optical extinction measurements. We note that
this choice results in an imperfect modeling of the spectral features
in the 0.3--0.5~keV range, as indicated by the S-shaped distribution
of residuals in this spectral region in Fig.~\ref{fig:spec_diffuse}.
If we allow the value of $N_{\mathrm{H}}$ to float during the spectral
fit, the improvement in the value of the reduced $\chi^{2}$ is
negligible.
The parameters of the best-fit model, listed in the first line of
Table~\ref{tab:spec_fits}, show two plasma components at temperatures
$\sim$1.1$\times$10$^6$~K and $\sim$1.3$\times$10$^7$~K with an
observed flux ratio, $f_2/f_1\sim$0.09, corresponding to an intrinsic
flux ratio $F_2/F_1\sim$0.06. The total observed flux is
$\sim3\times10^{-12}$~erg~cm$^{-2}$~s$^{-1}$. The intrinsic
luminosity at a distance of 1.5~kpc\footnote{See discussion about the
distance to WR\,6 of \citet{2003ApJ...599.1189C}.}, after accounting
for a fraction of $\sim1/3$ of S\,308 which is not included in the
source apertures considered here, is
$\sim2\times$10$^{33}$~erg~s$^{-1}$. The emission measure of the
best-fit to the combined spectrum, along with the spatial distribution
of the X-ray-emitting gas in a spherical thick shell with a thickness
$\sim$8\arcmin\ and inner radius of $\sim$11\arcmin, implies an
average electron density $n_e\sim 0.1$~cm$^{-3}$. We note that the
quality of the spectral fit is not exceptionally good, but more
sophisticated fits using multi-temperature models failed to improve
the quality of the fit. The proposed 2-T model, providing a fair
description of the spectral shape, should be considered a first
approximation to the hot gas content and its physical conditions.
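As an order-of-magnitude illustration of the numbers quoted above, the
following sketch converts the observed flux into a luminosity at 1.5~kpc
(absorption is neglected, so the result falls somewhat below the intrinsic
value) and estimates the shell volume and the emission measure implied by
$n_e\sim0.1$~cm$^{-3}$; all inputs are the rounded values quoted in the text.
\begin{verbatim}
import math

PC_CM     = 3.086e18                      # parsec in cm
D_CM      = 1.5e3 * PC_CM                 # adopted distance of 1.5 kpc
ARCMIN_CM = D_CM * math.pi / (180 * 60)   # 1 arcmin at that distance

# Observed flux -> luminosity, corrected for the ~1/3 of the nebula
# that falls outside the source apertures.
f_obs = 3e-12                             # erg cm^-2 s^-1
L_x = 4 * math.pi * D_CM**2 * f_obs * (3.0 / 2.0)
print("L_X ~ %.1e erg/s" % L_x)           # ~1.2e33; the absorption correction
                                          # brings this toward ~2e33

# Thick-shell volume (inner radius 11', thickness 8') and the emission
# measure implied by n_e ~ 0.1 cm^-3 for a filled shell, EM ~ n_e^2 V.
r_in, r_out = 11 * ARCMIN_CM, 19 * ARCMIN_CM
V  = 4.0 / 3.0 * math.pi * (r_out**3 - r_in**3)
EM = 0.1**2 * V
print("V ~ %.1e cm^3, EM ~ %.1e cm^-3" % (V, EM))
\end{verbatim}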
\\
\\
\subsubsection{Northwest Blowout (Region $\#2$)}
The northwest blowout of S\,308 has the brightest X-ray emission, with
a surface brightness
$\sim2.0\times10^{-18}$\,erg\,cm$^{-2}$\,s$^{-1}$\,arcsec$^{-2}$ and
its individual spectrum has a high signal-to-noise ratio. The
spectral shape is consistent with those of the shell spectra, with a
prominent 0.43~keV \ion{N}{6} line, a weaker \ion{O}{7} line, and a
clear detection of X-ray emission to energies of 0.8--1.0~keV. The
best-fit parameters are rather similar to those of the spectrum of the
entire nebula, with a marginally lower temperature for the hard
component. We will adopt this value of the hard component temperature
for those regions whose spectra do not have an adequate number of
counts to fit this parameter.
\subsubsection{The Limb-Brightened Shell}
\label{sec:shell}
The diffuse X-ray emission from S\,308 has a clear limb-brightened
morphology surrounding a cavity of diminished X-ray surface
brightness. The emission from regions at the rim of this shell
(1NE, 1NW, 1SE, and 1SW) is relatively bright, with an average
surface brightness of
$\sim1.2\times10^{-18}$\,erg\,cm$^{-2}$\,s$^{-1}$\,arcsec$^{-2}$. All
individual spectra of the rim regions show the bright 0.43~keV \ion{N}{6}
emission line and indications of the weaker 0.57~keV \ion{O}{7}
emission line. The hard component is faint, except for the
spectrum of region 1SW. The fit to the combined spectrum confirms the
temperature of the soft component, but it is not possible to provide
statistical proof of the detection of the hard component. The fits to
the individual spectra only provide upper limits for this component,
except for region 1SW where it seems relatively bright.
\subsubsection{The Central Cavity}
The level of X-ray emission from the innermost regions of the optical
shell of S\,308 is lower than that of its edge, with an average
surface brightness of
$5\times10^{-19}$\,erg\,cm$^{-2}$\,s$^{-1}$\,arcsec$^{-2}$, i.e.,
$\sim 2.5$--4.0 times fainter than the shell and blowout regions. The
combined X-ray spectrum of the interior regions shown in
Figure~\ref{fig:spec_diffuse} indicates a stronger relative
contribution from the hard component. This is indeed confirmed by the
spectral fit: on average, the hard X-ray component has a flux
$\sim13\%$ that of the soft component. There is a noticeable lack of
emission from this component in the region 6NW, but otherwise the
average contributions derived from the individual fits are higher than
for the spectra of apertures on the shell rim.
\subsubsection{Comparison with Previous X-ray Studies}
Table~\ref{tab:spec_fits} also lists the best-fit parameters of the
spectral fits to the diffuse X-ray emission from S\,308 obtained by
\citet{1999A&A...343..599W} and \citet{2003ApJ...599.1189C}. It is
worthwhile discussing some of the differences with these previous
X-ray analyses. The \citet{2003ApJ...599.1189C} joint fit of our
regions 1NW, 2NW, 3NW and 6NW yields very similar results to the ones
shown in Table~\ref{tab:spec_fits}.
For the second thermal component, the derived temperatures from our
spectral fits and those of \citet{2003ApJ...599.1189C} are consistent
with each other, but \citet{1999A&A...343..599W} provides a much
higher temperature for this component. This discrepancy highlights
the difficulty of fitting the hard component using \emph{ROSAT} PSPC
data given its low spectral resolution, as well as the very likely
contamination of the \emph{ROSAT} PSPC spectrum of S\,308 by
unresolved hard point sources superposed on the diffuse emission.
\section{The Central Star WR6 (HD\,50896)}
\begin{figure}[t]
\includegraphics[bb=18 200 592 700,width=1.0\linewidth]{fig8.eps}
\vspace{0.1cm}
\caption{
EPIC-pn (top), MOS1 (center), and MOS2 (bottom) exposure-map-corrected
light curves of WR\,6 in the 0.2--9.0~keV energy band.
The time is referred to the starting time of the NE observation,
2004-03-15T06:45:41 UTC.
}
\label{fig:lc}
\end{figure}
\begin{figure*}[!htbp]
\includegraphics[width=1.0\linewidth]{fig9.eps}
\caption{
Background-subtracted \textit{XMM-Newton} EPIC-pn (black), MOS1 (red),
and MOS2 (blue) spectra of WR\,6 obtained during the observation of
the NE (\textit{left}) and SW (\textit{right}) pointings of S\,308.
}
\label{fig:wr_spectrum}
\end{figure*}
\begin{table*}[ht!]
\caption{Spectral Fits of HD\,50896}
\centering
\begin{tabular}{lrrr}
\hline\hline\noalign{\smallskip}
\multicolumn{1}{c}{Parameter} &
\multicolumn{1}{c}{NE Spectrum} &
\multicolumn{1}{c}{SW Spectrum} &
\multicolumn{1}{c}{\citet{2002ApJ...579..764S}} \\
\hline\noalign{\smallskip}
$N_{H}$ [$\times$10$^{21}$\,cm$^{-2}$] & 6.4$^{+1.0}_{-0.9}$ & 5.9$^{+1.2}_{-0.9}$ & 4.0$^{+0.4}_{-0.6}$ \\
$kT_1$ [keV] & 0.28$^{+0.03}_{-0.04}$ & 0.28 & 0.6$^{+0.4}_{-0.4}$ \\
$A_1$ [cm$^{-5}$] & 7.1$\times$10$^{-3}$ & 5.9$\times$10$^{-3}$ & 3.4$\times$10$^{-5}$ \\
$kT_2$ [keV] & 2.5$^{+1.5}_{-0.5}$ & 2.48 & 3.5$^{+0.7}_{-0.5}$\\
$A_2$ [cm$^{-5}$] & 1.8$\times$10$^{-4}$ & 2.2$\times$10$^{-4}$ & 1.0$\times$10$^{-5}$\\
$\chi^2$/DoF & 1.10=137.0/124 & 1.10=113.2/103 & 1.08=234.5/217 \\
$f_{1}$ (0.2--10 keV) [$\times$10$^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$] & 1.14 & 1.23 & 0.97 \\
$f_{1}$ (2.5--10 keV) [$\times$10$^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$] & 0.33 & 0.39 & 0.31 \\
$f_{2}$ (0.2--10 keV) [$\times$10$^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$] & 0.58 & 0.70 & 0.49 \\
$F_{1}$ (0.2--10 keV) [$\times$10$^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$] & 19.8 & 17.6 & 2.90 \\
$F_{1}$ (2.5--10 keV) [$\times$10$^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$] & 0.35 & 0.41 & 0.33 \\
$F_{2}$ (0.2--10 keV) [$\times$10$^{-12}$ erg\,cm$^{-2}$\,s$^{-1}$] & 1.10 & 1.35 & 0.78 \\
\hline
\end{tabular}
\label{tab:WR}
\end{table*}
The central star of S\,308 is WR\,6 (a.k.a.\ HD\,50896), which has a
spectral subtype WN4 \citep{1988A&A...199..217V}. The star is
detected by the \textit{XMM-Newton} EPIC cameras in the NE and SW
pointings of the nebula. The EPIC-pn, EPIC-MOS1, and EPIC-MOS2 count
rates are 160$\pm$4 counts~ks$^{-1}$, 65$\pm$2 counts~ks$^{-1}$, and
65$\pm$2 counts~ks$^{-1}$, respectively, from the NE observation, and
141$\pm$4 counts~ks$^{-1}$, 53$\pm$2 counts~ks$^{-1}$, and 52$\pm$2
counts~ks$^{-1}$, respectively, from the SW observation using a source
aperture of $50''$ in radius. These count rates appear to imply that
the X-ray luminosity of WR\,6 diminished by 10--20\% from the NE to
the SW observations, which were only $\sim8$~hours apart\footnote{ The
on-axis \textit{XMM-Newton} images presented by
\citet{2002ApJ...579..764S} revealed a faint X-ray source at a
radial distance of $\sim$57\arcsec\ from WR\,6. The flux from this
source is $(2\pm1)$\% that of WR\,6 and thus it is not likely that the
fraction of the emission from this source entering the aperture
used for WR\,6 would contribute significantly to the observed X-ray
variability.}. We note, however, that these count rates are largely
affected by vignetting due to the offset position of WR\,6 on the EPIC
cameras. Indeed, the light curves shown in Figure~\ref{fig:lc}, after
accounting for the effects of vignetting, may imply the opposite,
i.e., that the X-ray flux of WR\,6 was slightly higher in the second
(SW) observation than in the first (NE) observation.
In Figure~\ref{fig:wr_spectrum} we present the EPIC
background-subtracted spectra for the two different epochs. Following
\citet{2002ApJ...579..764S}, we have modeled these spectra with a
two-temperature VAPEC model with initial abundances set to those shown
in Table~1 of \citet{1986A&A...168..111V}. The fit allowed the
foreground absorption column density, temperatures, and abundances of
N, Ne, Mg, Si, and Fe to vary \citep{2002ApJ...579..764S}. Table~\ref{tab:WR}
displays the parameters resulting from our best-fit models: absorption
column densities $N_{\rm H}$, plasma temperatures $T$, normalization
parameters $A$\footnote{$A=1\times10^{-14} \int n_{e}^{2}dV/4 \pi
d^{2}$, where $d$ is the distance, $n_{e}$ is the electron density,
and $V$ the volume in cgs units.}, observed (absorbed) fluxes $f$,
and intrinsic (unabsorbed) fluxes $F$. Model fits for the spectra from
the NE and SW observations are listed separately alongside those from
\citet{2002ApJ...579..764S} for comparison. The column density and
temperatures of the two components are within 1-$\sigma$ of one
another among the three different models. The observed fluxes are
also consistent, although the \citet{2002ApJ...579..764S} flux seems
to be a bit lower, while our fluxes are closer to the ones derived
from the October 1995 \textit{ASCA} observations of WR\,6
\citep{1996AAS...189.7717S}. The total observed fluxes and the
observed fluxes of the hot thermal component, $f_{2}$, listed in
Table~\ref{tab:WR} may indicate a hardening of the X-ray emission from
WR\,6 during the last observation. To assess this issue, we performed
a statistical evaluation of the light curves shown in
Figure~\ref{fig:lc} using the HEASOFT task \textit{lcstats} and found
no significant variations. Thus, WR\,6 does not show evidence of
variability on time-scales of hours.
We would like to point out that the absorption column density obtained
from our best fits is in good agreement with the values obtained by
\citet{2002ApJ...579..764S}, which are higher than that used to fit
the soft X-ray emission from the nebula. Such higher column density
values are commonly observed towards massive stars such as WR stars
and are recognized to be caused by absorption at the base of the wind
\citep{1981ApJ...250..677C,1994ApJ...436L..95C,2009A&A...506.1055N,2010AJ....139..825S,2011A&A...527A..66G}.
\\
\\
\section{Discussion}
\label{sec:discussion}
The \textit{XMM-Newton} images and spectra analyzed in the previous
sections reveal that the hot plasma in S\,308 is spatially distributed
in a thick shell plus the northwest blowout, with most emission being
attributable to a hot plasma at $\sim$1.1$\times$10$^6$ K. The
temperature of an adiabatically shocked stellar wind is determined by
the stellar wind velocity, $kT = 3\mu m_{\mathrm{H}} V_{w}^{2}/16$,
where $\mu$ is the mean particle mass for fully ionized gas
\citep{Dyson1997}. Therefore, the temperature expected for the shocked
stellar wind of WR\,6, with a terminal wind velocity of
1700~km~s$^{-1}$ \citep{1998A&A...333..251H} and $\mu\gtrsim1.3$
\citep[][]{1986A&A...168..111V}, would be $T > 8\times10^{7}$~K, in
sharp contrast with the observed temperature. The same issue has been
pointed out for the WR bubble NGC\,6888 by several authors
\citep[see][and references therein]{2011ApJ...728..135Z}, and it is
also a common issue in planetary nebulae \citep[e.g.,
NGC\,6543;][]{2001ApJ...553L..69C}.
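A quick numerical check of this estimate, using the quoted wind velocity and
mean particle mass, is sketched below.
\begin{verbatim}
m_H = 1.6726e-24      # hydrogen mass in g
k_B = 1.3807e-16      # Boltzmann constant in erg/K
mu  = 1.3             # mean particle mass for the WN composition
V_w = 1.7e8           # terminal wind velocity, 1700 km/s in cm/s

kT = 3.0 * mu * m_H * V_w**2 / 16.0   # post-shock thermal energy per particle
print("T ~ %.1e K" % (kT / k_B))      # ~8.5e7 K, i.e., T > 8e7 K as quoted
\end{verbatim}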
Electron thermal conduction has been proposed as a mechanism capable
of reducing the temperature of the hot plasma in shocked stellar wind
bubbles. Thermal conduction was applied by \citet{1977ApJ...218..377W}
to stellar wind bubbles to produce a self-similar solution for the
density and temperature structure in bubbles. The soft X-ray
luminosities predicted by Weaver et al.'s model for the Omega Nebula
and the Rosette Nebula, according to the stellar wind parameters of
their associated young clusters (M\,17 and NGC\,2244, respectively),
are several orders of magnitude higher than those observed
\citep{2003ApJ...593..874T,2003ApJ...590..306D}. Thus, the standard
Weaver et al.\ model for a stellar wind bubble with thermal conduction
cannot be taken at face value. Recent work by
\citet{2008A&A...489..173S} in the context of planetary nebulae, which
are produced in a very similar manner to WR wind bubbles, has
calculated the time-dependent radiation-hydrodynamic evolution of
planetary nebula wind bubbles including thermal conduction in 1D
models with spherical symmetry. In these models, the cold shell
material from the previous AGB superwind phase is evaporated into the
hot bubble. Saturated conduction was taken into account in these
calculations by limiting the electron mean free path and it was found
that thermal conduction was able to lower the temperature and raise
the density at the edge of the hot bubble enough to explain the soft
X-ray emission and low X-ray luminosities observed in some planetary
nebulae \citep[e.g.,][]{2001ApJ...553L..69C}.
In the case of WR bubbles, \citet{2011ApJ...737..100T} presented
time-dependent 2D radiation-hydrodynamic models of the evolution of
the CSM around single massive stars, including classical and saturated
thermal conduction. They found that in the absence of a magnetic
field, thermal conduction does not seem to play an important role in
shaping WR bubbles, but that models with thermal conduction have
slightly greater soft-band luminosities than those without thermal
conduction. They suggested that the morphology of S\,308 could result
from a star with initial mass of 40~$M_{\odot}$ whose stellar
evolution model includes stellar rotation
\citep{2003A&A...404..975M}. They found that $\sim$20,000~yr after
the onset of the WR phase, the X-ray-emitting gas presents a
clump-like morphology with an electron density of
$n_{e}\sim0.1$~cm$^{-3}$ inside an optical ($T\sim10^{4}$~K) shell
with radius of $\sim9$~pc.
Whereas the \citet{2011ApJ...737..100T} models reproduce the
morphology and X-ray luminosity of wind-blown bubbles, their
simulations predict higher temperatures of the hot plasma that result
in X-ray spectra that do not match the observed spectral shape. This
might imply that additional physical processes must be taken into
account. An interesting alternative to thermal conduction for the
apparently low ionization state of the plasma is non-equilibrium
ionization (NEI). \citet{2010ApJ...718..583S} calculate the
timescales to reach collisional ionization equilibrium (CIE) for
ionized plasmas and their results suggest that, for values of the
parameters relevant to S\,308 (derived electron density and time in
the WR stage; \citealp{2011ApJ...737..100T}), the CIE assumptions may
not hold. These ideas will be pursued in subsequent works.
\section{Summary and conclusions}
We present \textit{XMM-Newton} observations of three fields of the WR
bubble S\,308 which, in conjunction with the observation of its NW
quadrant presented by \citet{2003ApJ...599.1189C}, map most of the
nebula except for its southernmost section. We have used these
observations to study the spatial distribution of the X-ray-emitting
material within this bubble, to derive global values for its physical
conditions ($T_{e}$, $n_{e}$), and to search for
their spatial variations among different morphological components of
the nebula.
The X-ray emission from S\,308 is found to have a limb-brightened
morphology, with a shell thickness of 5\arcmin--8\arcmin, and to extend
into the northwest blowout region. The X-ray-emitting shell is notably
larger along the SE-NW direction than along the SW-NE direction, and
it is always confined by the optical shell of ionized material. The
X-ray surface brightness decreases notably from the blowout region and
the western rim shell to the shell interior, where the X-ray emission
falls to background levels. The western quadrants are also brighter
than the eastern quadrants.
The X-ray emission from S\,308 shows prominent emission from the
He-like triplet of \ion{N}{6} at 0.43~keV and fainter emission of the
\ion{O}{7} 0.57~keV triplet, and declines towards high energies, with
a faint tail up to 1~keV. This spectrum can be described by a
two-temperature optically thin plasma emission model with temperatures
$\sim 1.1 \times 10^6$~K and $\sim 1.3 \times 10^7$~K. The latter
component is notably fainter than the former by at least a factor of
$\sim 6$. There is an appreciable difference in the relative
contributions of the hot component to the X-ray-emitting gas between
the rim and the nebula interior, with the latter having a higher
contribution from the hard component. The total X-ray luminosity is
estimated to be $\sim2\times 10^{33}$~erg~s$^{-1}$ for a distance of
1.5~kpc.
\acknowledgements This research was supported by the NASA
\textit{XMM-Newton} Guest Observer Program Grant NNG\,04GH99G. SJA and
JAT acknowledge financial support from DGAPA-UNAM through grant PAPIIT
IN100309. JAT also thanks CONACyT, CONACyT-SNI (Mexico) and CSIC
JAE-PREDOC (Spain) for a student grant. JAT and MAG are partially
funded by grant AYA2001-29754-C03-02 of the Spanish Ministerio de
Econom\'{i}a y Competitividad.
The power of the constraints that Lorentz invariance imposes on the
S-matrix of four dimensional theories has been well known at least
since the work of Weinberg \cite{Weinberg:1964ev, Weinberg:1964ew}.
Impressive results like the impossibility of long-range forces
mediated by massless particles with spin $>2$, charge conservation
in interactions mediated by a massless spin $1$ particle, or the
universality of the coupling to a massless spin $2$ particle are
examples beautifully obtained by simply using the pole structure of
the S-matrix governing soft limits in combination with Lorentz
invariance \cite{Weinberg:1964ew, Weinberg:1965rz}.
Weinberg's argument does not rule out the possibility of non-trivial
Lagrangians describing self-interacting massless particles of higher
spins. It rules out the possibility of those fields producing
macroscopic effects. Actually, the theory of massless particles of
higher spins has been an active research area for many years (see
reviews \cite{Sorokin:2004ie, Bouatta:2004kk} and references therein,
also see \cite{Saitou:2006ca, Bastianelli:2007pv} for alternative
approaches). Lagrangians for free theories have been well understood
while interactions have been a stumbling block. Recent progress
shows that in spaces with negative cosmological constant it is
possible to construct consistent Lagrangian theories but no similar
result exists for flat space-time
\cite{Fotopoulos:2006ci,Buchbinder:2006eq}. Despite the difficulties
of constructing an interacting Lagrangian, several attempts have
been made in studying the consistency of specific couplings among
higher spin particles. For example, cubic interactions have been
studied in \cite{Berends:1984wp, Fradkin:1987ks, Fradkin:1986qy,
Deser:1990bk}. Also, very powerful techniques for constructing
interaction vertices systematically have been developed using
BRST-BV cohomological methods
(see \cite{Henneaux:1997bm,Bekaert:2006us,Fotopoulos:2007nm} and references
therein).
In this paper we introduce a technique for finding theories of
massless particles that can have non-trivial S-matrices within a
special set of theories we call constructible. The starting point is
always assuming a Poincar{\'e} covariant theory where the S-matrix
transformation is derived from that of one-particle states which are
irreducible representations of the Poincar{\'e} group. There will
also be implicit assumptions of locality and parity invariance.
The next step is to show that for complex momenta, on-shell
three-particle S-matrices of massless particles of any spin can be
uniquely determined. As is well known, on-shell three-particle
amplitudes vanish in Minkowski space. That this need not be the case
for amplitudes in signatures different from Minkowski or for complex
momenta was explained by Witten in \cite{Witten:2003nn}.
We consider theories for which four-particle tree-level S-matrix
elements can be completely determined by three-particle ones. These
theories are called {\it constructible}. This is done by introducing
a one parameter family of complex deformations of the amplitudes and
using its pole structure to reconstruct it. The physical
singularities are on-shell intermediate particles connecting
physical on-shell three-particle amplitudes. This procedure is known
as the BCFW construction \cite{Britto:2004ap, Britto:2005fq}. One
can also introduce the terminology {\it fully constructible} if this
procedure can be extended to all $n$-particle amplitudes. Examples of
fully constructible theories are Yang-Mills \cite{Britto:2005fq} and
General Relativity \cite{Bedford:2005yy, Cachazo:2005ca,
Benincasa:2007qj} (the fact that cubic couplings could play
a key role in Yang-Mills theory and General Relativity was already understood
in \cite{Boulware:1974sr, Deser:1969wk}).
The main observation is that by using the BCFW deformation, the
four-particle amplitude is obtained by summing over only a certain
set of channels, say the ${\sf s}$- and the ${\sf u}$- channels.
However, if the theory under consideration exists, then the answer
should also contain the information about the ${\sf t}$- channel. In
particular, one could construct the four-particle amplitude using a
different BCFW deformation that sums only over the ${\sf t}$- and
the ${\sf u}$- channel.
Choosing different deformations for constructing {\it the same}
four-particle amplitude and requiring the two answers to agree is
what we call the four-particle test. This simple consistency
condition turns out to be a powerful constraint that is very
difficult to satisfy.
It is important to mention that the constraints are only valid for
constructible theories. Luckily, the set of constructible theories
is large and we find many interesting results. We also discuss some
strategies for circumventing this limitation.
As illustrations of the simplicity and power of the four-particle
test we present several examples. The first is a general analysis of
theories of a single spin $s$ particle. We find that if $s>0$ all
theories must have a trivial S-matrix except for $s=2$ which passes
the test. As a second example we allow for several particles of the
same spin. We find that, again in the range $s>0$, the only theories
that can have a nontrivial S-matrix are those of spin $1$ with
completely antisymmetric three-particle coupling constants which
satisfy the Jacobi identity and spin 2 particles with completely
symmetric three-particle coupling constants which define a
commutative and associative algebra. We also study the possible
theories of particles of spin $s$, without self-couplings and with
$s>1$, that can couple non-trivially to a spin $2$ particle. In this
case, we find that only $s=3/2$ passes the test. Moreover, all
couplings in the theory must be related to that of the three-spin-2
particle amplitude. Such a theory is linearized ${\cal N} =1$
supergravity.
The paper is organized as follows. In section II, we review the
construction of the S-matrix and of scattering amplitudes for
massless particles. In section III, we discuss how three-particle
amplitudes are non-zero and uniquely determined up the choice of the
values of the coupling constant. In section IV, we apply the BCFW
construction to show how, for certain theories, four-particle
amplitudes can be computed from three-particle ones. A theory for
which this is possible is called {\it constructible}. We then
introduce the four-particle test. In section V, we discuss
sufficient conditions for a theory to be constructible. In section
VI, we give examples of the use of the four-particle test. In
section VII, we conclude with a discussion of possible future
directions including how to relax the constructibility constraint.
Finally, in the appendix we illustrate one of the methods to relax
the constructibility condition.
\section{Preliminaries}
\subsection{S-Matrix}
In this section we define the S-matrix and scattering amplitudes. We
do this in order to set up the notation. Properties of the S-matrix,
which we exploit in this paper like factorization, have been well
understood since at least the time of the S-matrix program
\cite{Olive:1964, Chew:1966, Olive:1967}.
Recall that physically, one is interested in the probability for, say,
two asymptotic states to scatter and to produce $n-2$ asymptotic
states. Any such probability can be computed from the matrix
elements of momentum eigenstates
\begin{equation}
\left._{\rm out}\langle p_1\ldots p_{n-2} | p_a p_b\rangle_{\rm in}
\right. = \langle p_1\ldots p_{n-2} |S| p_a p_b\rangle
\end{equation}
where $S$ is a unitary operator. As usual, it is convenient to write
$S= {\mathbb I}+i T$ with
\begin{equation}
\langle p_1\ldots p_{n-2} |iT| p_a p_b\rangle =
\delta^{(4)}\left(p_a+p_b-\sum_{i=1}^{n-2}p_i\right)
M(p_a,p_b\rightarrow \{p_1,p_2,\ldots, p_{n-2}\}).
\end{equation}
$M(p_a,p_b\rightarrow \{p_1,p_2,\ldots, p_{n-2}\})$ is called the
scattering amplitude (see for example chapter 4 in \cite{Peskin:1995ev}).
Assuming crossing symmetry one can write $p_a=-p_{n-1}$ and
$p_b=-p_{n}$ and introduce a scattering amplitude where all
particles are outgoing. Different processes are then obtained by
analytic continuation of
\begin{equation}
M_n = M_n(p_1,p_2,\ldots, p_{n-1},p_{n}).
\end{equation}
$M_n$ is our main object of study. Our goal is to determine when
$M_n$ can be non-zero. Up to now we have exhibited only the
dependence on momenta of external particles. However, if they have
spin $s>0$ one also has to specify their free wave functions or
polarization tensors. We postpone the discussion of the explicit
form of polarization tensors until section V.
\subsection{Massless Particles Of Spin $s$}
It turns out that all the information needed to describe the
physical information of an on-shell massless spin $s$ particle is
contained in a pair of spinors $\{ \lambda_a,\tilde\lambda_{\dot
a}\}$, left- and right-handed respectively, and the helicity of the
particle~\cite{Berends:1981rb,DeCausmaecker:1981bg,Kleiss:1985yh,Witten:2003nn}.
Recall that in a Poincar{\'e} invariant theory, irreducible massless
representations are classified by their helicity which can be $h =
\pm s$ with $s$ any integer or half-integer known as the spin of the
particle.
The spinors $\{ \lambda_a,\tilde\lambda_{\dot a}\}$ transform in the
representations $(1/2,0)$ and $(0,1/2)$ of the universal cover of
the Lorentz group, $SL(2,{\mathbb C})$, respectively. Invariant
tensors are $\epsilon^{ab}$, $\epsilon^{\dot a\dot b}$ and
$(\sigma^\mu)_{a\dot a}$ where $\sigma^\mu = ({\mathbb
I},\vec{\sigma})$. The most basic Lorentz invariants, from which any
other is made of, can be constructed as follows:
\begin{equation}
\lambda_a\lambda'_b\epsilon^{ab} \equiv \langle \lambda,
\lambda'\rangle , \qquad \tilde\lambda_{\dot a}\tilde\lambda'_{\dot
b}\epsilon^{\dot a\dot b} \equiv [ \lambda, \lambda' ].
\end{equation}
Finally, using the third invariant tensor we can define the momentum
of the particle by $p^\mu = \lambda^a (\sigma^\mu)_{a\dot
a}\tilde\lambda^{\dot a}$, where indices are raised using the first
two tensors. A simple consequence of this is that the scalar product
of two vectors, $p^\mu$ and $q^\mu$ is given by $2p\cdot q = \langle
\lambda^p ,\lambda^q\rangle [\tilde\lambda^p,\tilde\lambda^q]$.
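These relations are easy to check numerically. The sketch below builds the
momentum from $p_{a\dot a}=\lambda_a\tilde\lambda_{\dot a}$ (a common
normalization convention; overall factors may differ in other conventions)
and verifies that $p^2=0$ and that $2p\cdot q = \langle
\lambda^p,\lambda^q\rangle [\tilde\lambda^p,\tilde\lambda^q]$ for random
complex spinors.
\begin{verbatim}
import numpy as np

def angle(l1, l2):      # <1,2>
    return l1[0]*l2[1] - l1[1]*l2[0]

def square(t1, t2):     # [1,2], same epsilon convention on dotted indices
    return t1[0]*t2[1] - t1[1]*t2[0]

def momentum(lam, lamt):
    """p^mu from p_{a adot} = lam_a lamt_adot, signature (+,-,-,-)."""
    P = np.outer(lam, lamt)
    p0 = (P[0, 0] + P[1, 1]) / 2
    p3 = (P[1, 1] - P[0, 0]) / 2
    p1 = -(P[0, 1] + P[1, 0]) / 2
    p2 = (P[0, 1] - P[1, 0]) / 2j
    return np.array([p0, p1, p2, p3])

def dot(p, q):
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

rng = np.random.default_rng(0)
cplx = lambda: rng.normal(size=2) + 1j*rng.normal(size=2)
lp, ltp, lq, ltq = cplx(), cplx(), cplx(), cplx()
p, q = momentum(lp, ltp), momentum(lq, ltq)

print(np.isclose(dot(p, p), 0))    # True: p is automatically null
print(np.isclose(2*dot(p, q), angle(lp, lq)*square(ltp, ltq)))   # True
\end{verbatim}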
\section{Three Particle Amplitudes: A Uniqueness Result}
In this section we prove that three-particle amplitudes of massless
particles of any spin can be uniquely determined.
The statement that on-shell scattering amplitudes of three massless
particles can be non-zero might be somewhat surprising. However, as
shown by Witten \cite{Witten:2003nn}, three-particle amplitudes are
naturally non-zero if we choose to work with the complexified
Lorentz group $SL(2,{\mathbb C})\times SL(2,{\mathbb C})$, where
$(1/2,0)$ and $(0,1/2)$ are completely independent representations
and hence momenta are no longer real. In other words, if
$\tilde\lambda_{\dot a} \neq \pm \bar\lambda_a$ then $p^\mu$ is
complex.
Let us then consider a three-particle amplitude
$M_3(\{\lambda^{(i)},\tilde\lambda^{(i)},h_i\})$ where the spinors
of each particle, $\lambda^{(i)}$ and $\tilde\lambda^{(i)}$, are
independent vectors in ${\mathbb C}^2$.
Momentum conservation $(p_1+p_2+p_3)_{a\dot a}=0$ and the on-shell
conditions, $p_i^2=0$, imply that $p_i\cdot p_j = 0$ for any $i$ and
$j$. Therefore we have the following set of equations
\begin{equation}
\label{nosi} \langle 1,2\rangle [1,2] = 0, \qquad \langle 2,3\rangle
[2,3] = 0, \qquad \langle 3,1\rangle [3,1] = 0.
\end{equation}
Clearly, if $[1,2]=0$ and $[2,3]=0$ then $[3,1]$ must be zero. The
reason is that the spinors live in a two-dimensional vector space,
and if $\tilde\lambda^{(1)}$ and $\tilde\lambda^{(3)}$ are both
proportional to $\tilde\lambda^{(2)}$ then they must also be
proportional to each other.
This means that the non-trivial solutions to (\ref{nosi}) are either
$\langle 1,2\rangle = \langle 2,3\rangle= \langle 3,1\rangle = 0$ or
$[1,2]=[2,3]=[3,1]=0$.
Take for example $[1,2]=[2,3]=[3,1]=0$ and set
$\tilde\lambda^{(2)}_{\dot a} = \alpha_2 \tilde\lambda^{(1)}_{\dot
a}$ and $\tilde\lambda^{(3)}_{\dot a} = \alpha_3
\tilde\lambda^{(1)}_{\dot a}$. Then momentum conservation implies
that $\lambda^{(1)}_a + \alpha_2 \lambda^{(2)}_a+\alpha_3
\lambda^{(3)}_a = 0$ which is easily seen to be satisfied if
$\alpha_2=-\langle 1,3\rangle/\langle 2,3\rangle$ and
$\alpha_3=-\langle 1,2\rangle/\langle 3,2\rangle$.
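This complex three-particle kinematics is simple to realize numerically, as
in the following sketch (spinor conventions as in the previous sketch).
\begin{verbatim}
import numpy as np

def angle(a, b):
    return a[0]*b[1] - a[1]*b[0]

rng = np.random.default_rng(1)
cplx = lambda: rng.normal(size=2) + 1j*rng.normal(size=2)
l1, l2, l3, lt1 = cplx(), cplx(), cplx(), cplx()

# Anti-holomorphic configuration: all square brackets vanish.
alpha2 = -angle(l1, l3) / angle(l2, l3)
alpha3 = -angle(l1, l2) / angle(l3, l2)
lt2, lt3 = alpha2 * lt1, alpha3 * lt1

# Momentum conservation as a 2x2 matrix, sum_i lam^(i)_a lamt^(i)_adot = 0:
P_total = np.outer(l1, lt1) + np.outer(l2, lt2) + np.outer(l3, lt3)
print(np.allclose(P_total, 0))                 # True
print(angle(lt1, lt2), angle(lt1, lt3))        # [1,2] = [1,3] = 0
print(angle(l1, l2) != 0, angle(l2, l3) != 0)  # angle brackets stay generic
\end{verbatim}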
The conclusion of this discussion is that three-particle amplitudes,
$M_3(\{\lambda^{(i)},\tilde\lambda^{(i)},h_i\})$, which by Lorentz
invariance are only restricted to be a generic function of $\langle
i,j\rangle$ and $[i,j]$ turn out to split into a ``holomorphic" and
an ``anti-holomorphic" part\footnote{Using ``holomorphic" and
``anti-holomorphic" is an abuse of terminology since
$\tilde\lambda_{\dot a} \neq \pm \bar\lambda_a$. We hope this will
not cause any confusion.}. More explicitly
\begin{equation}
M_3 = M^H_3(\langle 1,2\rangle,\langle 2,3\rangle,\langle
3,1\rangle) + M^A_3([1,2],[2,3],[3,1]).
\end{equation}
It is important to mention that we are considering the full
three-particle amplitude and not just the tree-level one. Therefore
$M^H_3$ and $M^A_3$ are not restricted to be rational
functions\footnote{We thank L. Freidel for discussions on this
point.}. In other words, we have purposefully avoided to talk about
perturbation theory. We will be forced to do so later in section V
but we believe that this discussion can be part of a more general
analysis.
\subsection{Helicity Constraint and Uniqueness}
One of our basic assumptions about the S-matrix is that the
Poincar{\'e} group acts on the scattering amplitudes as it acts on
individual one-particle states. This in particular means that the
helicity operator must act as
\begin{equation}
\label{helioper} \left( \lambda_i^a\frac{\partial}{\partial
\lambda_i^a} - \tilde\lambda_i^{\dot a}\frac{\partial}{\partial
\tilde\lambda_i^{\dot a}}\right) M_3(1^{h_1},2^{h_2},3^{h_3}) = -2h_i
M_3(1^{h_1},2^{h_2},3^{h_3}).
\end{equation}
Equivalently,
\begin{equation}
\label{helima} \left( \lambda_i^a\frac{\partial}{\partial
\lambda_i^a} + 2h_i\right) M^H_3(\langle 1,2\rangle,\langle
2,3\rangle,\langle 3,1\rangle) = 0
\end{equation}
on the holomorphic one and as
\begin{equation}
\label{antihelima} \left( \tilde\lambda_i^{\dot a}\frac{\partial}{\partial
\tilde\lambda_i^{\dot a}} -2h_i\right) M^A_3([1,2],[2,3],[3,1]) = 0
\end{equation}
on the anti-holomorphic one.
It is not difficult to show that if $d_1=h_1-h_2-h_3$,
$d_2=h_2-h_3-h_1$ and $d_3=h_3-h_1-h_2$, then
\begin{equation}
F = \langle 1,2\rangle^{d_{3}}\langle 2,3\rangle^{d_{1}}\langle
3,1\rangle^{d_{2}}, \qquad G = [1,2]^{-d_{3}}[ 2,3]^{-d_{1}}[
3,1]^{-d_{2}}
\end{equation}
are particular solutions of the equations (\ref{helima}) and
(\ref{antihelima}) respectively.
Therefore, $M^H_3/F$ and $M^A_3/G$ must be ``scalar" functions,
i.e., they have zero helicity.
Let $x_1$ be either $\langle 2,3\rangle$ or $[2,3]$ depending on
whether we are working with the holomorphic or the antiholomorphic
pieces. Also let $x_2$ be either $\langle 3,1\rangle$ or $[3,1]$ and
$x_3$ be either $\langle 1,2\rangle$ or $[1,2]$. Finally, let ${\cal
M}$ be either $M^H_3/F$ or $M^A_3/G$. Then we find that
\begin{equation}
x_i\frac{\partial {\cal M}(x_1,x_2,x_3)}{\partial x_i} = 0
\end{equation}
for $i=1,2,3$. Therefore, up to solutions with delta function
support which we discard based on analyticity, the only solution for
${\cal M}$ is a constant. Let such a constant be denoted by
$\kappa_H$ or $\kappa_A$ respectively.
We then find that the exact three-particle amplitude must be
\begin{equation}
M_3(\{\lambda^{(i)},\tilde\lambda^{(i)},h_i\}) = \kappa_H \langle
1,2\rangle^{d_{3}}\langle 2,3\rangle^{d_{1}}\langle
3,1\rangle^{d_{2}}+ \kappa_A [1,2]^{-d_{3}}[ 2,3]^{-d_{1}}[
3,1]^{-d_{2}}.
\end{equation}
Finally, we have to impose that $M_3$ has the correct physical
behavior in the limit of real momenta. In other words, we must
require that $M_3$ goes to zero when both $\langle i,j\rangle$ and
$[i,j]$ are taken to zero\footnote{Taking to zero $\langle
i,j\rangle$ means that $\lambda^{(i)}$ and $\lambda^{(j)}$ are
proportional vectors. Therefore, all factors $\langle i,j\rangle$
can be taken to be proportional to the same small number $\epsilon$
which is then taken to zero.}. Simple inspection shows that if
$d_1+d_2+d_3$, which is equal to $-h_1-h_2-h_3$, is positive then we
must set $\kappa_A =0$ in order to avoid an infinity while if
$-h_1-h_2-h_3$ is negative then $\kappa_H$ must be zero. The case
when $h_1+h_2+h_3 = 0$ is more subtle since both pieces are allowed.
In this paper we restrict our study to $h_1+h_2+h_3 \neq 0$ and
leave the case $h_1+h_2+h_3 = 0$ for future work.
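A direct transcription of this result, with the couplings left as placeholder
constants, is sketched below.
\begin{verbatim}
def M3(h, ang, sq, kH=1.0, kA=1.0):
    """Three-point amplitude for helicities h = (h1, h2, h3), given
    ang = (<2,3>, <3,1>, <1,2>) and sq = ([2,3], [3,1], [1,2]).
    kH and kA are placeholder couplings."""
    d = (h[0]-h[1]-h[2], h[1]-h[2]-h[0], h[2]-h[0]-h[1])
    if sum(h) < 0:   # only the holomorphic piece survives the real limit
        return kH * ang[2]**d[2] * ang[0]**d[0] * ang[1]**d[1]
    if sum(h) > 0:   # only the anti-holomorphic piece survives
        return kA * sq[2]**(-d[2]) * sq[0]**(-d[0]) * sq[1]**(-d[1])
    raise ValueError("h1 + h2 + h3 = 0 is not treated here")
\end{verbatim}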
\subsection{Examples}
Let us consider few examples, which will appear in the next
sections, as illustrations of the uniqueness of three-particle
amplitudes.
Consider a theory of several particles of a given integer spin s.
Since all particles have the same spin we can replace $h =\pm s$ by
the corresponding sign. Let us use the middle letters of the
alphabet to denote the particle type.
There are only four helicity configurations:
\begin{equation}
\label{exione} M_3(1^-_m,2^-_r,3^+_s) = \kappa_{mrs} \left(
\frac{\langle 1,2\rangle^3}{\langle 2,3\rangle\langle
3,1\rangle}\right)^s, \qquad M_3(1^+_m,2^+_r,3^-_s) = \kappa_{mrs}
\left( \frac{[ 1,2 ]^3}{ [2,3] [3,1]}\right)^s
\end{equation}
and
\begin{equation}
\label{exitwo} M_3(1^-_m,2^-_r,3^-_s) = \kappa'_{mrs} \left( \langle
1,2\rangle\langle 2,3\rangle\langle 3,1\rangle\right)^s, \qquad
M_3(1^+_m,2^+_r,3^+_s) = \kappa'_{mrs} \left( [ 1,2 ][2,3]
[3,1]\right)^s.
\end{equation}
The subscripts on the coupling constants $\kappa$ and $\kappa'$ mean
that they can depend on the particle type\footnote{Note that here we
have implicitly assumed parity invariance by equating the couplings
of conjugate amplitudes.}. We will use the amplitudes in (\ref{exione})
in section VI.
A simple but important observation is that if the spin is odd then
the coupling constant must be completely antisymmetric in
its indices. This is because, by crossing symmetry, the
amplitude must be invariant under the exchange of labels.
This leads to our first result: a theory of fewer than three massless
particles of odd spin must have a trivial three-particle S-matrix.
Under the conditions of constructibility, this can be extended to
higher-particle sectors of the S-matrix and even to the full
S-matrix.
\section{The Four-Particle Test And Constructible Theories}
In this section we introduce what we call the four-particle test.
Consider a four-particle amplitude $M_4$. Under the assumption that
one-particle states are stable in the theory, $M_4$ must have poles
and multiple branch cuts emanating from them at locations where
either ${\sf s}=(p_1+p_2)^2$, ${\sf t}=(p_2+p_3)^2$ or ${\sf
u}=(p_3+p_1)^2$ vanish\footnote{We have introduced the notation
${\sf s}$ for the center of mass energy in order to avoid confusion
with the spin $s$ of the particles.}.
We choose to consider only the pole structure. Branch cuts will
certainly lead to very interesting constraints but we leave this for
future work. Restricting to the pole structure corresponds
to working at tree-level in field theory.
As we will see, under certain conditions, one can construct physical
on-shell tree-level four-particle amplitudes as the product of two
on-shell three-particle amplitudes (evaluated at complex momenta
constructed out of the real momenta of the four external particles)
times a Feynman propagator. In general this can be done in at least
two ways. Roughly speaking, these correspond to summing over the
${\sf s}$-channel and ${\sf u}$-channel or summing over the ${\sf
t}$-channel and ${\sf u}$-channel. A necessary condition for the
theory to exists is that the two four-particle amplitudes
constructed this way give the same answer. This is what we call the
four-particle test. It might be surprising at first that a sum over
the ${\sf s}$- and ${\sf u}$-channels contains information about the
${\sf t}$-channel but as we will see this is a natural consequence
of the BCFW construction which we now review.
\subsection{Review Of The BCFW Construction And Constructible Theories}
The key ingredient for the four-particle test is the BCFW
construction \cite{Britto:2005fq}. The construction can be applied to
$n$-particle amplitudes, but for the purpose of this paper we only need
four-particle amplitudes.
We want to study $M_4(\{\lambda^{(i)}_a, \tilde\lambda^{(i)}_{\dot
a}, h_i\})$. Recall that momenta constructed from the spinors of
each particle are required to satisfy momentum conservation, i.e.,
$(p_1+p_2+p_3+p_4)^\mu = 0$.
Choose two particles, one of positive and one of negative
helicity\footnote{Here we do not consider amplitudes with all equal
helicities.}, say $i^{+s_i}$ and $j^{-s_j}$, where $s_i$ and $s_j$
are the corresponding spins, and perform the following deformation
\begin{equation}
\label{bcfw} \lambda^{(i)}(z) = \lambda^{(i)} + z\lambda^{(j)},
\qquad \tilde\lambda^{(j)}(z) = \tilde\lambda^{(j)} -
z\tilde\lambda^{(i)}.
\end{equation}
All other spinors remain the same.
The deformation parameter $z$ is a complex variable. It is easy to
check that this deformation preserves the on-shell conditions, i.e.,
$p_k(z)^2 = 0$ for any $k$ and momentum conservation since
$p_i(z)+p_j(z) = p_i+p_j$.
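Both statements, as well as the location of the poles discussed below, can be
verified numerically. The sketch below generates random momentum-conserving
four-particle kinematics (labels are 0-based) and applies the deformation.
\begin{verbatim}
import numpy as np

def ab(a, b):                    # <a,b>; on tildes it gives [a,b]
    return a[0]*b[1] - a[1]*b[0]

rng = np.random.default_rng(2)
cplx = lambda: rng.normal(size=2) + 1j*rng.normal(size=2)
lam  = [cplx() for _ in range(4)]
lamt = [cplx(), cplx()]
# Solve sum_i lam_i lamt_i = 0 for the remaining anti-holomorphic spinors:
lamt.append(-(ab(lam[3], lam[0])*lamt[0]
              + ab(lam[3], lam[1])*lamt[1]) / ab(lam[3], lam[2]))
lamt.append(-(ab(lam[2], lam[0])*lamt[0]
              + ab(lam[2], lam[1])*lamt[1]) / ab(lam[2], lam[3]))

total = lambda ls, lts: sum(np.outer(l, lt) for l, lt in zip(ls, lts))
print(np.allclose(total(lam, lamt), 0))        # True: momenta sum to zero

# BCFW shift of particles 1 and 2 (indices 0 and 1); each p_i(z) stays
# null automatically because it is built from spinors.
z = 0.7 + 0.3j
lam_z  = [lam[0] + z*lam[1]] + lam[1:]
lamt_z = [lamt[0], lamt[1] - z*lamt[0], lamt[2], lamt[3]]
print(np.allclose(total(lam_z, lamt_z), 0))    # True: conservation preserved

# (p_1(z) + p_4)^2 = <1(z),4>[1,4] vanishes at z_t = -<1,4>/<2,4>:
zt = -ab(lam[0], lam[3]) / ab(lam[1], lam[3])
print(np.isclose(ab(lam[0] + zt*lam[1], lam[3]) * ab(lamt[0], lamt[3]), 0))
\end{verbatim}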
The main observation is that the scattering amplitude is a rational
function of $z$ which we denote by $M_4(z)$. This fact follows from
$M_4(1^{h_1},\ldots , 4^{h_4})$ being, at tree-level, a rational
function of spinor products. Being a rational function of $z$,
$M_4(z)$ can be determined if complete knowledge of its poles,
residues and behavior at infinity is found.
\bigskip
{\bf Definition:} We call a theory {\it constructible} if $M_4(z)$
vanishes at $z=\infty$. As we will see this means that $M_4(z)$ can
only be computed from $M_3$ and hence the name.
\bigskip
In the next section we study sufficient conditions for a theory to
be constructible. The proof of constructibility relies very strongly
on the fact that on-shell amplitudes should only produce the two
physical helicity states of a massless particle\footnote{This in
turn is simply a consequence of imposing Lorentz invariance
\cite{Weinberg:1964ew}.}. In this section we assume that the theory
under consideration is constructible.
Any rational function that vanishes at infinity can be written as a
sum over its poles with the appropriate residues. In the case at
hand, $M_4(z)$ can only have poles of the form
\begin{equation}
\frac{1}{(p_i(z)+p_k)^2} = \frac{1}{\langle
\lambda^{(i)}(z),\lambda^{(k)}\rangle [i,k]} = \frac{1}{(\langle i,
k\rangle + z\langle j, k\rangle )[i,k]}
\end{equation}
where $k$ has to be different from $i$ and $j$.
As mentioned at the beginning of this section, $M_4(z)$ can be
constructed as a sum over only two of the three channels. The reason
is the following. For definiteness let us set $i=1$ and $j=2$, then
the only propagators that can be $z$-dependent are
$1/(p_1(z)+p_4)^2$ and $1/(p_1(z)+p_3)^2$. By construction
$1/(p_1+p_2)^2$ is $z$-independent.
The rational function $M_4(z)$ can thus be written as
\begin{equation}
M_4^{(1,2)}(z) = \frac{c_{\sf t}}{z-z_{\sf t}} + \frac{c_{\sf
u}}{z-z_{\sf u}}
\end{equation}
where $z_{\sf t}$ is such that ${\sf t}=(p_1(z)+p_4)^2$ vanishes,
i.e., $z_{\sf t} = -\langle 1, 4\rangle/\langle 2, 4\rangle$ while
$z_{\sf u}$ is where ${\sf u}=(p_1(z)+p_3)^2$ vanishes, i.e.,
$z_{\sf u} = -\langle 1, 3\rangle/\langle 2, 3\rangle$. Note that we
have added the superscript $(1,2)$ to $M_4(z)$ to indicate that it
was obtained by deforming particles $1$ and $2$.
Finally, we need to compute the residues. Close to the location of
one of the poles, $M_4(z)$ factorizes as the product of two on-shell
three-particle amplitudes. Note that each of the three-particle
amplitudes is on-shell since the intermediate particle is also
on-shell. See figure~\ref{fig:Factorization} for a schematic representation. Therefore, we
find that
\begin{equation}
\begin{split}
M_4^{(1,2)}(z) = & \sum_h M_3(p_1^{h_1}(z_{\sf t}), p_4^{h_4},
-P_{1,4}^h(z_{\sf t}))\frac{1}{P_{1,4}^2(z)}M_3(p_2^{h_2}(z_{\sf t}),
p_3^{h_3}, P_{1,4}^{-h}(z_{\sf t})) + \\ & \sum_h
M_3(p_1^{h_1}(z_{\sf u}), p_3^{h_3}, -P_{1,3}^h(z_{\sf
u}))\frac{1}{P_{1,3}^2(z)}M_3(p_2^{h_2}(z_{\sf u}), p_4^{h_4},
P_{1,3}^{-h}(z_{\sf u})).
\end{split}
\end{equation}
where the sum over $h$ runs over all possible helicities in the
theory under consideration and also over particle types if there is
more than one.
The scattering amplitude we are after is simply obtained by setting
$z=0$, i.e., $M_4(\{\lambda^{(i)},\tilde\lambda^{(i)}, h_i\}) =
M_4^{(1,2)}(0)$.
Recall that we assumed $h_1 = s_1$ and $h_2 = -s_2$. Let us further
assume that $h_4 = -s_4$. Therefore we could repeat the whole
procedure but this time deforming particles $1$ and $4$. In this way
we should find that $M_4(\{\lambda^{(i)},\tilde\lambda^{(i)}, h_i\})
= M_4^{(1,4)}(0)$.
We have finally arrived at the consistency condition we call the
four-particle test. One has to require that
\begin{equation}
\label{testy} M_4^{(1,2)}(0) = M_4^{(1,4)}(0).
\end{equation}
\begin{figure}
\[M_{4}^{(1,2)}\:=\:
\sum_{h}\!\!\!\raisebox{2.23cm}{\scalebox{.85}{{\includegraphics*[116pt,735pt][281pt,592pt]{figure_1a.eps}}}}\!\!\frac{1}{P_{14}^2}\;+\;
\sum_{h}\!\!\!\raisebox{2.23cm}{\scalebox{.85}{{\includegraphics*[116pt,735pt][281pt,592pt]{figure_1b.eps}}}}\!\!\frac{1}{P_{13}^2}
\]
\vspace{1cm}
\caption{Factorization of a four-particle amplitude into two on-shell three-particle amplitudes.
In constructible theories, four-particle amplitudes are given by a sum over simple poles
of the 1-parameter family of amplitudes $M_{4}(z)$ times the corresponding residues. At
the location of the poles the internal propagators go on-shell and the residues are the
product of two on-shell three-particle amplitudes.\label{fig:Factorization}}
\end{figure}
As we will see in examples, this is a very strong condition that
very few constructible theories satisfy non-trivially. In other
words, most constructible theories satisfy (\ref{testy}) only if all
three-particle couplings are set to zero and hence four-particle
amplitudes vanish. If the theory is fully constructible, this
implies that the whole S-matrix is trivial.
\subsection{Simple Examples}
We illustrate the use of the four-particle test by first working out
the general form of $M_4^{(1,2)}(0)$ and $M_4^{(1,4)}(0)$ for a
theory containing only integer spin particles\footnote{Including
half-integer spins is straightforward and we give an example in
section VI.}. We then specialize to the case of a theory containing
a single particle of integer spin $s$. It turns out that the theory
is constructible only when $s>0$. For $s>0$, we explicitly find the
condition on $s$ for the theory to pass the four-particle test.
\subsubsection{General Formulas For Integer Spins}
Consider first $M_4^{(1,2)}(0)$. In order to keep the notation
simple we will denote $\langle\lambda^{(1)}(z), \bullet\rangle$ by
$\langle \hat 1, \bullet\rangle$ and so on. The precise value of $z$
depends on the deformation and channel being considered.
\begin{equation}
\label{geko}
\begin{split}
M_4^{(1,2)}(0) = & \sum_h \left( \kappa^H_{(1+h_1+h_4+h)}
\langle \hat 1,4\rangle^{h-h_1-h_4}\langle 4,{\hat P}_{1,4}\rangle^{h_1-h_4-h}
\langle {\hat P}_{1,4} , \hat 1 \rangle^{h_4-h-h_1} + \right. \\
& \left. \kappa^A_{(1-h_1-h_4-h)}[1,4]^{h_1+h_4-h}[4,{\hat
P}_{1,4}]^{h_4+h-h_1}[{\hat P}_{1,4},1]^{h+h_1-h_4}\right)
\times\frac{1}{P_{1,4}^2}\times
\\ & \left( \kappa^H_{(1+h_2+h_3-h)} \langle 3,2\rangle^{-h-h_3-h_2}\langle
2,{\hat P}_{1,4}\rangle^{h_3-h_2+h}\langle {\hat P}_{1,4},3\rangle^{h_2-h_3+h} \right. + \\
& \left. \kappa^A_{(1-h_2-h_3+h)}[3,\hat 2]^{h+h_3+h_2}[\hat 2,{\hat
P}_{1,4}]^{-h_3+h_2-h}[{\hat P}_{1,4},3]^{-h_2+h_3-h}\right) + \\
& \sum_h (4\leftrightarrow 3).
\end{split}
\end{equation}
Here the subscripts on the three-particle couplings denote the
dimension of the coupling. The range of values of the helicity of
the internal particle depends on the details of the specific theory
under consideration. Even though (\ref{geko}) is completely general
we choose to exclude theories where $h$ can take values such that
$h+h_1+h_2 =0$ or $-h+h_2+h_3 =0$. The main reason is that formulas
will simplify under this assumption.
Note also that we have kept the two pieces of all three-particle
amplitudes entering in (\ref{geko}). However, recall that we should
set either the holomorphic or the anti-holomorphic coupling to zero.
As we will now see this condition is very important for the
consistency of (\ref{geko}).
Let us solve the condition $P_{1,4}(z)^2 =0$. As mentioned above
this leads to $z_t=-\langle 1,4\rangle/\langle 2,4\rangle$. Since
$P_{1,4}(z_t)$, which we denoted by ${\hat P}_{1,4}$, is a null
vector, it must be possible to find spinors $\lambda^{(\hat P)}$ and
$\tilde\lambda^{(\hat P)}$ such that ${\hat P}_{1,4}^\mu =
\lambda^{(\hat P)a}(\sigma^\mu)_{a\dot a}\tilde\lambda^{(\hat P)\dot
a}$. Clearly, given ${\hat P}_{1,4}$ it is not possible to uniquely
determine the spinors since any pair of spinors $\{ t\lambda^{(\hat
P)}, t^{-1}\tilde\lambda^{(\hat P)}\}$ gives rise to the same ${\hat
P}_{1,4}$. This ambiguity drops out of (\ref{geko}) as we will see.
After some algebra we find that
\begin{equation}
P_{1,4}(z_t) = {\hat P}_{1,4} =
\frac{[1,4]}{[1,3]}\lambda^{(4)}\tilde\lambda^{(3)}.
\end{equation}
Therefore we can choose
\begin{equation}
\lambda^{(\hat P)} = \alpha \lambda_4, \quad \tilde\lambda^{(\hat
P)} = \beta \tilde\lambda_3, \quad {\rm with} \quad \alpha\beta
=\frac{[1,4]}{[1,3]}.
\end{equation}
Moreover, it is also easy to get
\begin{equation}
\hat\lambda_1 = \frac{\langle 2,1\rangle}{\langle
2,4\rangle}\lambda_4, \quad \hat{\tilde\lambda}_2 =
\frac{[1,2]}{[1,3]}\tilde\lambda_3.
\end{equation}
Using the explicit form of all the spinors one can check that the
three-particle amplitude with coupling constant
$\kappa^H_{(1+h_1+h_4+h)}$ in (\ref{geko}) possesses a factor of the
form $\langle 4,4\rangle = 0$ to the power $-h_1-h_4-h$. From our
discussion in section III, if $-h_1-h_4-h$ is less than zero then
the coupling $\kappa^H_{(1+h_1+h_4+h)}=0$. In this way a possible
infinity is avoided. Therefore we get a contribution from the term
with coupling $\kappa^A_{(1-h_1-h_4-h)}$ whenever $h>-(h_1+h_4)$.
Now, if $-h_1-h_4-h$ is positive then $\kappa^H_{(1+h_1+h_4+h)}$
need not vanish but the factor multiplying it vanishes. In this case
$\kappa^A_{(1-h_1-h_4-h)}$ must be zero and we find no
contributions. This means that the non-zero contributions to the
sum over $h$ can only come from the region where $h>-(h_1+h_4)$.
Turning to the other three-particle amplitude, we find that the
piece with coupling $\kappa^A_{(1-h_2-h_3+h)}$ has a factor
$[3,3]=0$ to the power $-h+h_2+h_3$. A similar analysis shows that
the only nonzero contributions come from regions where
$h>(h_2+h_3)$.
Putting the two conditions together we find that the first term
gives a non-zero contribution only when $h> {\rm
max}(-(h_1+h_4),(h_2+h_3))$.
Simplifying we find
\begin{equation}
\label{hasi}
\begin{split}
M_4^{(1,2)}(0) = & \sum_{h > {\rm max}(-(h_1+h_4),(h_2+h_3))} \left(
\kappa^A_{1-h_1-h_4-h}\kappa^H_{1+h_2+h_3-h}
\frac{(-P_{3,4}^2)^h}{P_{1,4}^2}\left(\frac{[1,4][3,4]}{[1,3]}\right)^{h_4}
\right. \\
& \left.
\left(\frac{[1,3][1,4]}{[3,4]}\right)^{h_1}\left(\frac{\langle
3,4\rangle}{\langle 2,3\rangle\langle
2,4\rangle}\right)^{h_2}\left(\frac{\langle 2,4\rangle}{\langle
2,3\rangle\langle 3,4\rangle}\right)^{h_3}\right) + \sum_{h > {\rm
max}(-(h_1+h_3),(h_2+h_4))}\!\!\!\!\!\!\!\!(4\leftrightarrow 3).
\end{split}
\end{equation}
Finally, it is easy to obtain $M_4^{(1,4)}(0)$ from (\ref{hasi}) by
simply exchanging the labels $2$ and $4$.
Next we will write down all formulas explicitly for the case when
$|h_i|=s$ for all $i$.
\subsubsection{Theories Of A Single Spin s Particle}
Consider now the case $h_1 = s$, $h_2 = -s$, $h_3 = s$ and $h_4 =
-s$. We also assume that the theory under consideration has a single
particle of spin s. This restriction is again for simplicity. If one
decided to allow for more internal particles then the different
terms would have to satisfy the four-particle test independently
since the dimensions of the coupling constants would be
different\footnote{There might be cases where the dimensions might
agree by accident. Such cases might actually lead to new interesting
theories. We briefly elaborate in section VII but we leave the
general analysis for future work.}.
Using (\ref{hasi}) we find that the first sum contributing to
$M^{(1,2)}_4(0)$ allows only for $h=s$ while the second one allows
for $h=-s$ and $h=s$. Using momentum conservation\footnote{One can
easily show that momentum conservation for four particles implies
that $\langle a,b\rangle/\langle a,c\rangle = - [d,c]/[d,b]$ for any
choice of $\{a,b,c,d\}$.} to simplify the expressions we find
\begin{equation}
\label{pola}
\begin{split}
M_4^{(1,2)}(0) = & \kappa^A_{1-s}\kappa^H_{1-s}\left(\frac{\langle
2,4\rangle^3[1,3]}{\langle 1,2\rangle\langle 3,4\rangle}\right)^s
\frac{1}{\langle 1,4\rangle [1,4]} +
\kappa^A_{1-s}\kappa^H_{1-s}\left(\frac{[1,3]^3\langle
4,2\rangle}{[4,3][1,2]}\right)^s \frac{1}{\langle 1,3\rangle [1,3]}
+ \\
& \kappa^A_{1-3s}\kappa^H_{1-3s}([1,3]\langle
4,2\rangle)^{2s}\frac{\left(-P_{3,4}^2\right)^s}{P_{1,3}^2}.
\end{split}
\end{equation}
We would like to set all couplings with the same dimension to the
same value. In other words, we define $\kappa =\kappa^A_{1-s} =
\kappa^H_{1-s}$. We also choose to study the case $\kappa'
=\kappa^A_{1-3s} = \kappa^H_{1-3s} = 0$. It turns out that if we had
chosen $\kappa=0$ and $\kappa'$ non-zero the resulting theories
would not have been constructible. In section VII we explore
strategies for relaxing this condition.
As mentioned above we can write $M_4^{(1,4)}(0)$ by simply exchanging
the labels 2 and 4. We then find
\begin{equation}
\label{seca}
\begin{split}
M_4^{(1,2)}(0) = & \kappa^2 \left(\frac{\langle
2,4\rangle^3[1,3]}{\langle 1,2\rangle\langle 3,4\rangle}\right)^s
\frac{1}{\langle 1,4\rangle [1,4]} +
\kappa^2\left(\frac{[1,3]^3\langle 4,2\rangle}{[4,3][1,2]}\right)^s
\frac{1}{\langle 1,3\rangle [1,3]},
\\
M_4^{(1,4)}(0) = & \kappa^2 \left(\frac{\langle
4,2\rangle^3[1,3]}{\langle 1,4\rangle\langle 3,2\rangle}\right)^s
\frac{1}{\langle 1,2\rangle [1,2]} +
\kappa^2\left(\frac{[1,3]^3\langle 2,4\rangle}{[2,3][1,4]}\right)^s
\frac{1}{\langle 1,3\rangle [1,3]}.
\end{split}
\end{equation}
Both amplitudes can be further simplified to
\begin{equation}
M_4^{(1,2)}(0) = -(-1)^s \kappa^2 \frac{\left( [1,3]\langle
2,4\rangle\right)^{2s}}{{\sf s t u}} \times {\sf s}^{2-s}, \quad
M_4^{(1,4)}(0) = -(-1)^s \kappa^2 \frac{\left( [1,3]\langle
2,4\rangle\right)^{2s}}{{\sf s t u}} \times {\sf t}^{2-s}.
\end{equation}
Finally, the four-particle test requires $M_4^{(1,2)}(0)
=M_4^{(1,4)}(0)$ or equivalently $M_4^{(1,2)}(0)/M_4^{(1,4)}(0) =
1$. The latter gives the condition $({\sf s}/{\sf t})^{2-s} =1$
which can only be satisfied for generic choices of kinematical
invariants if $s=2$. If $s\neq 2$ the four-particle test
$M_4^{(1,2)}(0) =M_4^{(1,4)}(0)$ then requires $\kappa =0$ and hence
a trivial S-matrix.
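The test can also be run numerically by evaluating (\ref{seca}) directly on
random momentum-conserving kinematics. In the sketch below (with $\kappa=1$
and the same spinor conventions as before) only $s=2$ yields
$M_4^{(1,2)}(0)/M_4^{(1,4)}(0)=1$.
\begin{verbatim}
import numpy as np

def ab(a, b):                     # <a,b>; on tildes it gives [a,b]
    return a[0]*b[1] - a[1]*b[0]

rng = np.random.default_rng(3)
cplx = lambda: rng.normal(size=2) + 1j*rng.normal(size=2)
lam  = [cplx() for _ in range(4)]
lamt = [cplx(), cplx()]
lamt.append(-(ab(lam[3], lam[0])*lamt[0]
              + ab(lam[3], lam[1])*lamt[1]) / ab(lam[3], lam[2]))
lamt.append(-(ab(lam[2], lam[0])*lamt[0]
              + ab(lam[2], lam[1])*lamt[1]) / ab(lam[2], lam[3]))

A = lambda i, j: ab(lam[i-1], lam[j-1])     # <i,j>, 1-based labels
B = lambda i, j: ab(lamt[i-1], lamt[j-1])   # [i,j]
mand_s, mand_t = A(1, 2)*B(1, 2), A(1, 4)*B(1, 4)

def M12(s):   # M_4^{(1,2)}(0) with kappa = 1
    return (A(2, 4)**3*B(1, 3)/(A(1, 2)*A(3, 4)))**s / (A(1, 4)*B(1, 4)) \
         + (B(1, 3)**3*A(4, 2)/(B(4, 3)*B(1, 2)))**s / (A(1, 3)*B(1, 3))

def M14(s):   # the same expression with the labels 2 and 4 exchanged
    return (A(4, 2)**3*B(1, 3)/(A(1, 4)*A(3, 2)))**s / (A(1, 2)*B(1, 2)) \
         + (B(1, 3)**3*A(2, 4)/(B(2, 3)*B(1, 4)))**s / (A(1, 3)*B(1, 3))

for spin in (1, 2, 3):
    ratio = M12(spin) / M14(spin)
    print(spin, np.isclose(ratio, (mand_s/mand_t)**(2 - spin)),
          np.isclose(ratio, 1.0))
# Only spin = 2 gives ratio = 1 for generic kinematics.
\end{verbatim}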
\section{Conditions For Constructibility}
The example in the previous section showed that the only theory of a
single massless spin s particle that passes the four-particle test
is that with $s=2$. This theory turns out to be linearized General
Relativity. For $s=1$, the result is also familiar: a single photon
should be free. However, if $s=0$ one knows that a single scalar can
have a non-trivial S-matrix. The reason we did not find $s=0$ as a
possible solution in the previous example is that precisely for
$s=0$ the four-particle amplitude is not constructible. Therefore
our calculation was valid only for $s>0$.
In this section we study the criteria for constructibility in more
detail. Unfortunately, we do not know a way of carrying out this
discussion without first assuming the existence of a Lagrangian. The
conditions for constructibility will therefore be given in terms of
conditions on the interaction vertices of a Lagrangian. We will also
assume that it is possible to perform a perturbative expansion using
Feynman diagrams. The starting point of all theories we consider is
a canonical kinetic term (free Lagrangian) which for $s=0,1,2$ is
very well known and for $s>2$ can be found for example in
\cite{Fronsdal:1978rb, Buchbinder:1998qv, Sorokin:2004ie}.
The first ingredient is the polarization tensors of massless
particles of spin $s$. Polarization tensors of particles of integer
spin $s$ can be expressed in terms of polarization vectors of spin
$1$ particles as follows:
\begin{equation}
\epsilon^+_{a_1\dot a_1, \ldots ,a_s\dot a_s} =
\prod_{i=1}^s\epsilon_{a_i\dot a_i}^+ \, , \qquad
\epsilon^-_{a_1\dot a_1, \ldots ,a_s\dot a_s} =
\prod_{i=1}^s\epsilon_{a_i\dot a_i}^- .
\end{equation}
For half-integer spin $s+1/2$ they are
\begin{equation}
\epsilon^+_{a_1\dot a_1, \ldots ,a_s\dot a_s,\dot b} =
\tilde\lambda_{\dot b}\prod_{i=1}^s\epsilon_{a_i\dot a_i}^+ \, ,
\qquad \epsilon^-_{a_1\dot a_1, \ldots ,a_s\dot a_s, b} =
\lambda_b\prod_{i=1}^s\epsilon_{a_i\dot a_i}^- ,
\end{equation}
and where polarization vectors of spin $1$ particles are given by
\begin{equation}
\epsilon^+_{a\dot a} = \frac{\mu_a\tilde\lambda_{\dot a}}{\langle
\mu ,\lambda \rangle}, \qquad \epsilon^-_{a\dot a}
=\frac{\lambda_a\tilde\mu_{\dot a}}{[\tilde\lambda , \tilde\mu]}
\end{equation}
with $\mu_a$ and $\tilde\mu_{\dot a}$ arbitrary reference spinors.
This explains how all the physical data of a massless particle can
be recovered from $\lambda, \tilde\lambda$ and $h$. A comment is in
order here. The presence of arbitrary reference spinors means that
polarization tensors cannot be uniquely fixed once $\{ \lambda,
\tilde\lambda ,h\}$ is given. If a different reference spinor is
chosen, say, $\mu'$ for $\epsilon^+_{a\dot a}$ then
\begin{equation}
\label{asom} \epsilon^+_{a\dot a}(\mu') = \epsilon^+_{a\dot a}(\mu)
+ \omega \lambda_a\tilde\lambda_{\dot a}
\end{equation}
where
$$\omega =\frac{ \langle\mu',\mu\rangle}{\langle\mu',\lambda\rangle\langle\lambda,
\mu\rangle}.$$
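For completeness, the shift (\ref{asom}) can be verified directly from the explicit form of the polarization vectors given above:
$$\epsilon^+_{a\dot a}(\mu') - \epsilon^+_{a\dot a}(\mu) =
\frac{\mu'_a\langle \mu ,\lambda \rangle - \mu_a\langle \mu' ,\lambda \rangle}{\langle \mu',\lambda\rangle\langle \mu,\lambda\rangle}\,\tilde\lambda_{\dot a}
= \frac{\langle\mu',\mu\rangle}{\langle\mu',\lambda\rangle\langle\lambda,\mu\rangle}\,\lambda_a\tilde\lambda_{\dot a}
= \omega\,\lambda_a\tilde\lambda_{\dot a},$$
where the last step uses the Schouten identity $\mu'_a\langle \mu,\lambda\rangle - \mu_a\langle \mu',\lambda\rangle = -\lambda_a\langle \mu',\mu\rangle$ together with the antisymmetry $\langle \lambda,\mu\rangle = -\langle \mu,\lambda\rangle$.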
If the particle has helicity $h=1$ then it is easy to recognize
(\ref{asom}) as a gauge transformation and the amplitude must be
invariant.
However, one does not have to invoke gauge invariance or assume any
new principle. As shown by Weinberg in \cite{Weinberg:1964ew} for any
spin $s$, the only way to guarantee the correct Poincar{\'e} transformations
of the S-matrix of massless particles is by imposing invariance under
(\ref{asom}). In that sense, there is no assumption in this
section that has not already been made in section II. In other words,
Poincar{\'e} symmetry requires that $M_n$ gives the
same answer independently of the choice of reference spinor $\mu$.
\subsection{Behavior at Infinity}
If a theory comes from a Lagrangian then the three-particle
amplitudes derived in section III can be computed as the product of
three polarization tensors times a three-particle vertex that
contains some power of momenta which we denote by $L_3$. Simple
dimensional arguments indicate that if all particles have integer
spin then $L_3 = |h_1+h_2+h_3|$. Let us denote the power of momenta
in the four-particle vertex by $L_4$.
We are interested in the behavior of $M_4$, constructed using
Feynman diagrams, under the deformation of $\lambda^{(1)}$ and
$\tilde\lambda^{(2)}$ defined in (\ref{bcfw}) as $z$ is taken to
infinity.
Feynman diagrams fall into three different categories corresponding to
different behaviors at infinity. Representatives of each type are
shown in figure~\ref{fig:Feynman_diagrams}. The first kind corresponds to the $(1,2)$-channel
(${\sf s}$-channel). The second corresponds to either the
$(1,3)$-channel (${\sf u}$-channel) or the $(1,4)$-channel (${\sf
t}$-channel). Finally, the third kind is the four-particle coupling.
Under the deformation $\lambda^{(1)}(z) =
\lambda^{(1)}+z\lambda^{(2)}$ and $\tilde\lambda^{(2)}(z) =
\tilde\lambda^{(2)}-z\tilde\lambda^{(1)}$, polarization tensors give
contributions that go as $z^{-s_1}$ and $z^{-s_2}$ respectively in
the case of integer spin and like $z^{-s_1+1/2}$ and $z^{-s_2+1/2}$
in the case of half-integer spin. Recall that we chose particle 1 to
have positive helicity while particle 2 to have negative helicity. Had
we chosen the opposite helicities, polarization tensors would have
given positive powers of $z$ at infinity. For simplicity, let us
restrict the rest of the discussion in this section to integer spin
particles.
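The falloff of the polarization tensors quoted above can be seen explicitly. For a generic reference spinor, the deformed polarization vectors of particle 1 (positive helicity) and particle 2 (negative helicity) behave as
\begin{equation}
\epsilon^{+}_{a\dot a}(z)\Big|_{1} = \frac{\mu_a\tilde\lambda^{(1)}_{\dot a}}{\langle \mu,\lambda^{(1)}\rangle + z\langle \mu,\lambda^{(2)}\rangle}\sim \frac{1}{z}\, , \qquad
\epsilon^{-}_{a\dot a}(z)\Big|_{2} = \frac{\lambda^{(2)}_a\tilde\mu_{\dot a}}{[\tilde\lambda^{(2)},\tilde\mu] - z[\tilde\lambda^{(1)},\tilde\mu]}\sim \frac{1}{z}\, ,
\end{equation}
so a spin $s_1$ (spin $s_2$) polarization tensor, being a product of $s_1$ ($s_2$) such factors, contributes $z^{-s_1}$ ($z^{-s_2}$).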
For the first kind of diagrams, only a single three-particle vertex
is $z$ dependent and gives $z^{L_3}$. Combining the contributions we
find $z^{L_3-s_1-s_2}$. Therefore, we need $s_1+s_2>L_3$.
For the second kind of diagrams, two three-particle vertices
contribute giving $z^{L_3+L_3'}$. This time a propagator also
contributes with $z^{-1}$. Combining the contributions we get
$z^{L_3+L_3'-s_1-s_2-1}$. Therefore we need $s_1+s_2>L_3+L_3'-1$.
Finally, for the third kind of diagrams, only the four-particle
vertex contributes giving $z^{L_4}$. Combining the contributions we
find $z^{L_4-s_1-s_2}$. Therefore we need $s_1+s_2>L_4$.
Summarizing, a four-particle amplitude is constructible, i.e.,
$M_4^{(1,2)}(z)$ vanishes as $z\to \infty$ if $s_1+s_2>L_3$,
$s_1+s_2>L_3+L_3'-1$ and $s_1+s_2>L_4$. It is important to mention
that these are sufficient conditions but not necessary. Recall that
we are interested in the behavior of the whole amplitude and not on
that of individual diagrams. Sometimes it is possible that
the sum of Feynman diagrams vanishes at infinity even though individual
diagrams do not.
Also possible is that since our analysis does not take into account
the precise structure of interaction vertices, there might be
cancellations within the same diagram. In other words, our Feynman
diagram analysis only provides an upper bound on the behavior at
infinity.
Let us go back to the example in the previous section. There
$s_1=s_2=s$, $L_3=L_3'=s$. Note that $s_1+s_2>L_3$ implies $s>0$, as
mentioned at the beginning of this section. The second condition is
empty and the third implies that $L_4<2s$. Thus, our conclusions in
the example are valid only if $s>0$ and four-particle interactions
have at most $2s-1$ derivatives. Note that for $s=1$ this excludes
$(F^2)^2$ terms and for $s=2$ this excludes $R^2$ terms. We will
comment on possible ways to make these theories constructible in
section VII.
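The exclusions just mentioned follow from a simple counting of powers of momenta: a quartic vertex generated by $(F_{\mu\nu}F^{\mu\nu})^2$ carries $L_4=4$, which violates the bound $L_4\leq 2s-1=1$ valid for $s=1$, while quartic vertices coming from curvature-squared terms also carry four powers of momenta and violate the bound $L_4\leq 3$ valid for $s=2$.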
\begin{figure}
\[
\raisebox{2.15cm}{\scalebox{.85}{{\includegraphics*[121pt,733pt][277pt,598pt]{figure_2a.eps}}}}
\:+\:
\raisebox{1.9cm}{\scalebox{.85}{{\includegraphics*[104pt,725pt][320pt,607pt]{figure_2b.eps}}}}
\:+\:
\raisebox{2.15cm}{\scalebox{.85}{{\includegraphics*[116pt,733pt][241pt,598pt]{figure_2c.eps}}}}
\]
\vspace{1cm}
\caption{The three different kinds of Feynman diagrams which exhibit different behavior as
$z\rightarrow\infty$. They correspond to the ${\sf s}$-channel,
${\sf t}\,({\sf u})$-channel and the four-particle coupling, respectively.
\label{fig:Feynman_diagrams}}
\end{figure}
\subsection{Physical vs. Spurious Poles}
There is an apparent contradiction between the statement in section IV that
the only poles of $M_4(z)$ come from propagators and the fact, used earlier in
this section, that polarization tensors behave as $z^{-s}$.
The resolution to this puzzle is very simple yet amusing. Recall
that polarization tensors are defined only up to the choice of a
reference spinor $\mu$ or $\tilde\mu$ of positive or negative
chirality depending on the helicity of the particle.
The $z$-dependence in polarization tensors comes from the factors
in the denominator of the form $\langle \lambda(z), \mu\rangle^s$ or
$[\tilde\lambda(z),\tilde\mu]^s$. The deformed spinors are given by
$\lambda(z) = \lambda + z\lambda'$ (or
$\tilde\lambda(z)=\tilde\lambda + z\tilde\lambda'$) where $\lambda'$
(or $\tilde\lambda'$) are the spinors of a different particle. Now
we see that if $\mu$ is not proportional to $\lambda'$ then
individual Feynman diagrams go to zero as $z$ becomes large due to
the $z$ dependence in the polarization tensors. In the same way,
individual Feynman diagrams possess more poles than just those
coming from propagators. Now let us choose $\mu$ proportional to
$\lambda'$. Then the $z$ dependence in polarization tensors
disappears. We then find that individual Feynman diagrams do not
vanish as $z$ becomes large but they show only poles at the
propagators. Recall that we are not interested in individual Feynman
diagrams, but rather in the full amplitude, which is independent of the
choice of reference spinor. Therefore, since $M_4(z)$ vanishes for
large $z$ for some choice of reference spinors
it must also do so for any other choice. This means that the pole at
infinity is spurious. Similarly, poles coming from polarization
tensors are spurious as well.
\section{More Examples}
In this section we give more examples of how the four-particle test
can be used to constrain many theories. In previous sections we
studied theories of a single particle of integer spin $s$ and found
that only $s=2$ admits self-interactions. Here we allow for several
particles of the same spin. We also consider the coupling of a
particle of spin $s$ to a particle of spin $2$; the spin $s$ can be
integer or half-integer.
\subsection{Several Particles Of Same Integer Spin}
Consider theories of several particles of the same integer spin $s$.
The idea is to see whether allowing for several particles relaxes
the constraint found in section IV.B.2 that sets $s=2$.
We are interested in four-particle amplitudes where each particle
carries an extra quantum number. We can call it a color label. The
data for each particle is thus
$\{\lambda^{(i)},\tilde\lambda^{(i)},h_i,a_i\}$. As discussed in
section III.B, the most general three-particle amplitudes possess
coupling constants that can depend on the color of the particles.
Here we drop the superscripts $H$ and $A$ in order to avoid
cluttering the equations and define $\kappa_{a_1a_2a_3} =
\kappa_{1-s}f_{a_1a_2a_3}$ where $f_{a_1a_2a_3}$ are dimensionless
factors. The subscript $(1-s)$ is the dimension of the coupling
constant.
Repeating the calculation that led to (\ref{seca}) but this time
keeping in mind that we have to sum not only over the helicity of
the internal particle but also over all possible colors, we find
\begin{equation}
\label{kiko} M^{(1,2)}_4(0) = \kappa_{1-s}^2\sum_{a_I} f_{a_1a_4
a_I}f_{a_I a_3a_2} {\cal A} + \kappa_{1-s}^2\sum_{a_I}f_{a_1a_3
a_I}f_{a_I a_4a_2}{\cal B},
\end{equation}
while
\begin{equation}
\label{kika} M^{(1,4)}_4(0) = \kappa_{1-s}^2\sum_{a_I} f_{a_1a_2
a_I}f_{a_I a_3a_4} {\cal C} + \kappa_{1-s}^2\sum_{a_I} f_{a_1a_3
a_I}f_{a_I a_2a_4}{\cal D}
\end{equation}
with
\begin{equation}
\begin{split}
& {\cal A} = \frac{\langle 2,4 \rangle^4}{\langle 1,2 \rangle\langle
2,3 \rangle\langle 3,4 \rangle\langle 4,1
\rangle}\left(\frac{\langle 2,4 \rangle^3 [1,3]}{\langle
1,2\rangle\langle 3,4\rangle}\right)^{s-1},\quad {\cal B} =
\frac{\langle 2,4 \rangle^3}{\langle 1,2 \rangle\langle 4,3
\rangle\langle 3,1 \rangle}\left(\frac{\langle 2,4 \rangle^3
[1,3]}{\langle
1,2\rangle\langle 3,4\rangle}\right)^{s-1}, \\
& {\cal C} =\frac{\langle 2,4 \rangle^4}{\langle 1,2 \rangle\langle
2,3 \rangle\langle 3,4 \rangle\langle 4,1
\rangle}\left(\frac{\langle 2,4 \rangle^3 [1,3]}{\langle
1,4\rangle\langle 2,3\rangle}\right)^{s-1}, \quad {\cal D} =
\frac{\langle 2,4 \rangle^3}{\langle 1,3 \rangle\langle 3,2
\rangle\langle 4,1 \rangle}\left(\frac{\langle 2,4 \rangle^3
[1,3]}{\langle 1,4\rangle\langle 2,3\rangle}\right)^{s-1}.
\end{split}
\end{equation}
In order to understand why we have chosen to factor out the pieces
that survive when $s=1$ let us study this case in detail.
\subsubsection{Spin 1}
Before setting $s=1$ it is important to recall that three-particle
amplitudes for any odd integer spin did not have the correct
symmetry structure under the exchange of particle labels. At the end
of section III, we concluded that if no other labels were introduced
then the three-particle couplings had to vanish. Now we have
theories with a color label. In this case, it is easy to check that
in order to ensure the correct symmetry properties we must require
$f_{a_1a_2a_3}$ to be completely antisymmetric in its indices.
Let us now set $s=1$. The four-particle test requires
$M^{(1,2)}_4(0) - M^{(1,4)}_4(0) = 0$. First note that the factors in
front of ${\cal B}$ and ${\cal D}$ are equal up to a sign (due to
the antisymmetric property of $f$). Therefore they can be combined
and simplified to give
\begin{equation}
\label{jimi} \sum_{a_I}f_{a_1a_3 a_I}f_{a_I a_4a_2}\left({\cal
B}+{\cal D}\right) =-\sum_{a_I}f_{a_1a_3 a_I}f_{a_I a_4a_2}\left(
\frac{\langle 2,4 \rangle^4}{\langle 1,2 \rangle\langle 2,3
\rangle\langle 3,4 \rangle\langle 4,1 \rangle} \right)
\end{equation}
where the right hand side was obtained by a simple application of
the identity $\langle 1,2 \rangle\langle 3,4 \rangle+\langle 1,4
\rangle\langle 2,3 \rangle=\langle 1,3 \rangle\langle 2,4 \rangle$
which follows from the fact that spinors are elements of a
two-dimensional vector space\footnote{Readers familiar with
color-ordered amplitudes possibly have recognized (\ref{jimi}) as
the $U(1)$ decoupling identity, i.e.,
$A(1,2,3,4)+A(2,1,3,4)+A(2,3,1,4) = 0$.}.
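Explicitly, using only the antisymmetry of the angle brackets and the identity just quoted, one finds
\begin{equation}
{\cal B}+{\cal D} = \frac{\langle 2,4\rangle^3}{\langle 1,3\rangle}\left[\frac{1}{\langle 1,2\rangle\langle 3,4\rangle} + \frac{1}{\langle 1,4\rangle\langle 2,3\rangle}\right] = \frac{\langle 2,4 \rangle^4}{\langle 1,2 \rangle\langle 2,3\rangle\langle 3,4 \rangle\langle 1,4\rangle},
\end{equation}
which, since $\langle 4,1\rangle = -\langle 1,4\rangle$, is minus the combination in parentheses on the right hand side of (\ref{jimi}).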
Note that the right hand side of (\ref{jimi}) can nicely be combined
with the other terms to give rise to the following condition
\begin{equation}
\sum_{a_I} f_{a_1a_4 a_I}f_{a_I a_3a_2} + \sum_{a_I}f_{a_1a_3
a_I}f_{a_I a_4a_2} + \sum_{a_I} f_{a_1a_2 a_I}f_{a_I a_3a_4} = 0.
\end{equation}
This condition is nothing but the Jacobi identity! Therefore, we
have found that the four-particle test implies that a theory of
several spin $1$ particles can be non-trivial only if the
dimensionless coupling constants $f_{a_1a_2a_3}$ are the structure
constants of a Lie algebra.
\subsubsection{Spin 2}
After the success with spin $1$ particles, the natural question is
to ask whether a similar structure is possible for spin $2$. Once
again, before setting $s=2$ let us mention that, as in the case of
odd integer spin particles, the requirement of having the correct
symmetry properties under the exchanges of labels implies that the
dimensionless structure constants, $f_{a_1a_2a_3}$, must be
completely symmetric for even integer spin particles.
Imposing the four-particle test using (\ref{kiko}) and (\ref{kika})
we find that the most general solution requires
\begin{equation}
\label{lasi} \sum_{a_I} f_{a_1a_4 a_I}f_{a_I a_3a_2} =
\sum_{a_I}f_{a_1a_3 a_I}f_{a_I a_4a_2}
\end{equation}
which due to the symmetry properties of $f_{abc}$ implies that all
the other products of structure constants are equal and they factor
out of (\ref{kiko}) and (\ref{kika}) leaving behind the amplitudes
for a single spin 2 particle which we know satisfy the four-particle
test.
Note that (\ref{lasi}) implies that the algebra defined by
\begin{equation}
{\cal E}_a\star {\cal E}_b = f_{abc}~{\cal E}_c
\end{equation}
must be commutative and associative. It turns out that those
algebras are reducible and the theory reduces to that of several
non-interacting massless spin 2 particles. This proves that it is
not possible to define a non-abelian generalization of a theory of
spin 2 particles that is constructible\footnote{We thank L. Freidel
for useful discussions about this point.}. The same conclusion was
proven by using BRST methods in \cite{Boulanger:2000rq}.
Finally, let us mention that for $s>2$ there is no non-trivial
way of satisfying the four-particle test.
\subsection{Coupling Of A Spin s Particle To A Spin 2 Particle}
Our final example of the use of the four-particle test applies to
theories of a single spin $s$ particle $(\Psi)$ and a spin 2
particle $(G)$. Here we assume that the spin 2 particle only has
cubic couplings of the form $(++-)$ and $(--+)$. This means that we
are dealing with a graviton. Let the coupling constant of three
gravitons be $\kappa$ while that of a graviton to two $\Psi$'s be
$\kappa'$. Assume that the graviton coupling preserves the helicity
of the $\Psi$ particle. This implies that $\kappa$ and $\kappa'$
have the same dimensions. Also assume that there is no cubic
coupling of $\Psi$'s\footnote{This last condition is not essential
since such a coupling would have dimension different from that of
$\kappa$ and $\kappa'$ and hence it would have to satisfy the
four-particle test independently.}.
We need to analyze two different four-particle amplitudes:
$M_4(G_1,G_2,\Psi_3,\Psi_4)$ and $M_4(\Psi_1,\Psi_2,\Psi_3,\Psi_4)$.
Consider first $M_4(\Psi_1^{-},\Psi_2^{+},\Psi_3^{-},\Psi_4^{+})$
under a BCFW deformation. A Feynman diagram analysis shows that the
theory is constructible, i.e., the deformed amplitude vanishes at
infinity, for $s>1$. This implies that the following discussion
applies only to $\Psi$ particles with spin higher than 1.
Let us consider the four-particle test. We choose to deform
$(1^{-},2^{+})$ and $(1^{-},4^{+})$:
\begin{equation}\label{test1}
\begin{split}
M_4^{(1,2)}\:=&\:(\kappa')^{2}
\frac{\langle1,4\rangle}{[1,4]}
\frac{[2,4]^{4s}}{[1,2]^{2s-2}[2,3]^{2}[3,4]^{2s-2}}\\
M_4^{(1,4)}\:=&\:(\kappa')^{2}
\frac{\langle1,2\rangle}{[1,2]}
\frac{[2,4]^{4s}}{[1,4]^{2s-2}[3,4]^{2}[2,3]^{2s-2}}.
\end{split}
\end{equation}
Notice that $M_4^{(1,4)}$ is obtained from $M_4^{(1,2)}$ by
exchanging $2$ and $4$. Taking the ratio of the quantities in
(\ref{test1}) leads to:
\begin{equation}\label{ratio1}
\frac{M_4^{(1,2)}}{M_4^{(1,4)}}\:=\:
\left(\frac{\sf t}{\sf s}\right)^{2s-3},
\end{equation}
where ${\sf s} = P_{12}^{2}$ and ${\sf t}=P_{14}^{2}$. This ratio is
equal to one only if $s=3/2$. Thus, the only particle with spin
higher than $1$ which can couple to a graviton, giving a
constructible theory, has the same spin as a gravitino in ${\cal
N}=1$ supergravity.
At this point the couplings $\kappa$ and $\kappa'$ are independent
and it is not possible to conclude that the theory is linearized
supergravity. Quite nicely, the next amplitude constrains the
couplings.
Consider the four-particle test on the amplitude
$M_4(G_1,G_2,\Psi_3,\Psi_4)$. Again we choose to deform $(1,2)$ and
$(1,4)$:
\begin{equation}\label{test2}
\begin{split}
M_{4}^{(1,2)}\:=&\:-(\kappa')^{2}
\frac{\langle 1,3\rangle^{2}[2,4]^{2s+2}}{[1,2]^{2}[3,4]^{2}[2,3]^{2s-4}}
\frac{\sf s}{\sf t u}\\
M_{4}^{(1,4)}\:=&\:\kappa' \frac{\langle
1,3\rangle^{2}[2,4]^{2s+2}}{[1,4]^{2}[2,3]^{2s-2}}
\left(\frac{\kappa}{\sf s}+\frac{\kappa'}{\sf
u}\right)
\end{split}
\end{equation}
where ${\sf u}=P_{13}^2$.
Taking their ratio and setting $s=3/2$, we get
\begin{equation}\label{ratio2}
\frac{M_{4}^{(1,4)}}{M_{4}^{(1,2)}}\:=\: 1 -\frac{\sf u}{\sf t}
\left(\frac{\kappa}{\kappa'} - 1\right).
\end{equation}
Requiring the right hand side to be equal to one implies that
$\kappa'=\kappa$. This means that this theory is unique and turns
out to agree with linearized ${\cal N}=1$ supergravity.
An interesting observation is that the local supersymmetry of this
theory arises as an accidental symmetry. The only symmetry we used
in our derivation was invariance under the Poincar{\' e} group; not even global
supersymmetry was assumed. It has been known for a long time
\cite{Grisaru:1976vm} that if one imposes global supersymmetry, then
${\cal N}=1$ supergravity is the unique theory of spin $2$ and spin
$3/2$ massless particles. The uniqueness of ${\cal N}=1$ supergravity
was successively \cite{Deser:1979zb} derived from the non-interactive form
by using gauge invariances. More recently and by using cohomological
BRST methods, the assumption of global supersymmetry was dropped
\cite{Boulanger:2001wq}.
Finally, let us stress that this analysis does not apply to the
coupling of particles with spin $s\le 1$ since the deformed
amplitude under the BCFW deformation does not vanish at infinity.
This simply means that we need to implement our procedure in a
different way. We discuss this briefly in the next section as well
as in the appendix.
\section{Conclusions And Future Directions}
Starting from the very basic assumptions of Poincar{\'e} invariance
and factorization of the S-matrix, we have derived powerful
consistency requirements that constructible theories must satisfy.
We also found that many constructible theories satisfy the
conditions only if the S-matrix is trivial. Non-trivial S-matrices
seem to be rare.
The consistency conditions we found came from studying theories
where four-particle scattering amplitudes can be constructed out of
three-particle ones via the BCFW construction. While failing to
satisfy the four-particle constraint non-trivially means that the
theory should have a trivial S-matrix, passing the test does not
necessarily imply that the interacting theory exists. Once the
four-particle test is satisfied one should check the five- and
higher-particle amplitudes. A theory where all $n$-particle amplitudes
can be determined from the three-particle ones is called fully
constructible.
It is interesting to note that Yang-Mills \cite{Britto:2005fq} and
General Relativity \cite{Benincasa:2007qj} are fully constructible.
This means that the theories are unique in that once the
three-particle amplitudes are chosen (where the only ambiguity is in
the value of the coupling constants) then the whole tree-level
S-matrix is determined. In the case of General Relativity it turns
out that general covariance emerges from Poincar{\'e} symmetry. In
the case of Yang-Mills, the structure of Lie algebras, i.e.,
antisymmetric structure constants that satisfy the Jacobi identity,
also emerges from Poincar{\'e} symmetry. In both cases, the only
non-zero coupling constants of three-particle amplitudes were chosen
to be those of $M_3(++-)$ and $M_3(--+)$. It is important to mention
that our analysis does not discard the possibility of theories with
three-particle amplitudes of the form $M_3(---)$ and $M_3(+++)$.
Dimensional analysis shows that these theories are non-constructible
due to the high power of momenta in the cubic vertex. For example,
if $s=2$ one finds six derivatives. Indeed, for spin $2$, Wald
\cite{Wald:1986bj} found consistent classical field theories that
propagate only massless spin $2$ fields and which are not linearized
General Relativity. Those theories do not possess general covariance
and the simplest of them possesses cubic couplings with six
derivative interactions. This class of theories might include the spin 3 self-interaction,
which seems to be possible according to \cite{Damour:1987fp}, as well as the
recent proposal for spin 2 and spin 3 interaction of
\cite{Boulanger:2006gr}.
There are some natural questions for the future. One of them is to
ask what the corresponding statements are if one replaces
Poincar{\'e} symmetry by some other group. In particular, it is
known that interactions of higher spins are possible in anti-de
Sitter space (see \cite{Bekaert:2005vh} and references therein). It
would be interesting to reproduce such results from an S-matrix
viewpoint.
The constraints we obtained in this paper only concern the pole
structure of the S-matrix. It is natural to expect that branch cuts
might lead to more constraints. In field theory one is very
familiar with this phenomenon; some theories that are classically
well defined become anomalous at loop level. It would be very
interesting to find out whether the approach presented in this paper
can lead to constraints analogous to anomalies. Speculating even
more, one could imagine that since three-particle amplitudes are
determined exactly, even non-perturbatively, then it might be
possible to find constraints that are only visible outside
perturbation theory.
A well known way to handle quantum corrections is supersymmetry. A
natural generalization of the results of this paper is to replace
Poincar{\'e} symmetry by super Poincar{\'e} and then explore
consistency conditions for theories involving different
supermultiplets.
All of these generalizations, if possible, will only be valid for the
set of constructible theories. In order to increase the power of
these constraints one has to find ways of relaxing the condition of
constructibility. Two possibilities are worth mentioning.
The first approach is to compose several BCFW deformations
\cite{Bern:2005hs} so that more polarization tensors
vanish at infinity and make the amplitude constructible. This
procedure works in many cases but it is not very useful for four
particles since deforming three particles means that one has to sum
over all channels at once and the four-particle constraint is
guaranteed to be satisfied. One can however go to five and more
particles and then there will be non-trivial constraints.
Some peculiar cases can arise because, as it was stressed in section
V, the behavior at infinity obtained by a Feynman diagram analysis
is only an upper bound. It turns out in many examples that a Feynman
diagram analysis shows a non-zero behavior at infinity under a
single BCFW deformation and a vanishing behavior under a composition
of BCFW deformations. Using the composition, one computes the
amplitude which naturally comes out in a very compact form. When one
takes this new compact, but equivalent, form of the amplitude and
looks again at the behavior under a single BCFW deformation, one
finds that it does go to zero at infinity! This shows that there are
cancellations that are not manifest from Feynman diagrams. It would
be very interesting if there was a simple and systematic way of
improving the Feynman diagram analysis so that it will produce
tighter upper bounds. It would be even more interesting to find a way
of carrying out the analysis only in terms of the S-matrix.
The second possibility is to introduce auxiliary massive fields such
that quartic vertices with too many derivatives arise as effective
couplings once the auxiliary field is integrated out. Propagators of
the auxiliary field create poles in $z$ whose location is
proportional to the mass of the auxiliary field. The theory is then
constructible, in the sense that no poles are located at infinity.
Once the amplitudes are obtained one can take the mass of the
auxiliary field to infinity and then recover the original theory.
This gives a nice interpretation to the physics at infinity of some
non-constructible theories: {\it the presence of poles at infinity
implies that the theory is an effective theory where some massive
particles have been integrated out}. The simplest example is
a theory of a massless scalar $s=0$. Recall that one condition for a
theory to be constructible is that the quartic interaction has to
have $L_{4}< 2s$ derivatives. In the case at hand, with $s=0$, this
means that the quartic interaction must be absent. Therefore, a
scalar theory with a $\lambda\phi^4$ interaction is not
constructible. In the appendix, we show that this theory can be made
constructible by introducing an auxiliary field (and deforming three
particles).
A necessary ingredient to carry out the program of auxiliary fields
is to find three-particle amplitudes where one or more particles are
massive. More generally, it will be interesting to extend our
methods for general massive representations of the Poincar{\'e}
group. A good reason to believe that this might be possible is the
analysis of \cite{Badger:2005zh} where amplitudes of massive scalars
and gluons were constructed using a suitable modification of BCFW
deformations. In the case of massive particles of higher spins one
might try to generate a mass term using the Higgs mechanism.
Finally, there are two more directions that, in our view, deserve
further study. The first is the extension to theories in higher or
lower numbers of dimensions, including theories in ten dimensions. The
second is to carry out a systematic search for theories where
several three-particle amplitudes might have coupling constants with
different dimensions but that when multiplied to produce
four-particle amplitudes produce accidental degeneracies. Such
degeneracies might lead to new consistent non-trivial theories which
we might call {\it exceptional theories}.
\begin{acknowledgments}
The authors would like to thank E. Buchbinder, B. Dittrich, L.
Freidel, X. Liu and S. Speziale for useful discussion. PB would like
to thank Perimeter Institute for hospitality during a visit where
part of this research was done. The authors are also grateful
to Natasha Kirby for reading the manuscript. The research of FC at
Perimeter Institute for Theoretical Physics is supported in part by the
Government of Canada through NSERC and by the Province of Ontario
through MRI.
\end{acknowledgments}
\begin{appendix}
\section{Relaxing Constructibility: Auxiliary Fields}
Our proposal for studying arbitrary spin theories is very general,
but it suffers from the fact that some interesting theories are not
constructible. In section VII, we mentioned several ways of trying
to extend the range of applicability of our technique. One of them
was the introduction of auxiliary fields. In this appendix we
illustrate the idea by showing how the $\lambda \phi^4$ theory,
which is not constructible (even under compositions of BCFW
deformations), can be thought of as the effective theory of a
constructible theory which contains a massive field. The
constructibility here is under a composition of two BCFW
deformations.
The failure of the four-particle amplitude in the $\lambda \phi^4$
theory to be constructible is understood as a consequence of
sending the mass of the heavy auxiliary field to infinity.
Let us start with a massless scalar with a $\lambda\phi^{4}$
interaction:
\begin{equation}\label{phi4}
\mathcal{L}(\phi)\:=
\frac{1}{2}\left(\partial_{\mu}\phi\right)\left(\partial^{\mu}\phi\right)
-\frac{\lambda}{4!}\phi^{4}.
\end{equation}
We can remove the quartic coupling by introducing a massive
auxiliary field $\chi$:
\begin{equation}\label{chiphi2}
\mathcal{L}(\phi,\chi)\:=
\frac{1}{2}\left(\partial_{\mu}\phi\right)\left(\partial^{\mu}\phi\right)+
\frac{1}{2}\left(\partial_{\mu}\chi\right)\left(\partial^{\mu}\chi\right)
-\frac{1}{2}m_{\chi}^{2}\chi^{2}
-g\chi\phi^{2}.
\end{equation}
It is straightforward to check that (\ref{phi4}) can be obtained
from (\ref{chiphi2}) by integrating out the field $\chi$ taking the
limit of large $g$ and large $m_{\chi}$, and by keeping
$g^{2}/2m_{\chi}^{2}\equiv \lambda/4!$ finite.
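A minimal way to see this, at tree level and to leading order in $1/m_{\chi}^{2}$, is to use the equation of motion of $\chi$,
\begin{equation}
\chi = -\frac{g\,\phi^{2}}{\Box + m_{\chi}^{2}} \simeq -\frac{g\,\phi^{2}}{m_{\chi}^{2}},
\end{equation}
and substitute it back into (\ref{chiphi2}); this produces an effective quartic interaction proportional to $g^{2}\phi^{4}/(2m_{\chi}^{2})$, which is identified (up to sign conventions) with the quartic coupling of (\ref{phi4}) through $g^{2}/2m_{\chi}^{2}\equiv \lambda/4!$, with corrections suppressed by powers of $\Box/m_{\chi}^{2}$.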
The theory (\ref{chiphi2}) now has only cubic interactions. Since
massless scalar fields do not possess polarization tensors that can
be made to vanish at infinity, the theory with only cubic
interactions is still not constructible under a BCFW deformation of
two particles. This problem is resolved by applying a composition
and deforming three particles.
Another problem one has to deal with is that the new vertex in
(\ref{chiphi2}) involves a massive scalar. This implies that the
analysis of section III is not readily applicable. However, in this
specific case, the three particle amplitude is simply given by the
coupling constant $g$.
Since we are interested in the scattering of the massless scalars
represented by the field $\phi$, we consider only amplitudes where
$\chi$ appears as an internal particle. This means that an internal
propagator takes the form
\begin{equation}\label{intprop}
\frac{1}{P^{2}-m_{\chi}^{2}}.
\end{equation}
Let $M_{4}(\phi_{1},\phi_{2},\phi_{3},\phi_{4})$ be the four
particle amplitude of interest. From Feynman diagrams, it is easy to
see that it is given by
\begin{equation}\label{4phiampl}
M_{4}(\phi_{1},\phi_{2},\phi_{3},\phi_{4})\:=\:
\sum_{j=2}^{4}\frac{g^{2}}{P_{1j}^{2}-m_{\chi}^{2}},
\end{equation}
where $P_{1j}=p^{(1)}+p^{(j)}$. Already from (\ref{4phiampl}), one
can see that the correct limit leads to the four point vertex of the
original theory:
\begin{equation}\label{4philim}
\frac{g^{2}}{P_{1j}^{2}-m_{\chi}^{2}}\:\rightarrow\:
-\frac{g^{2}}{m_{\chi}^{2}}\:\sim\:\lambda.
\end{equation}
Let us apply a three-particle deformation:
\begin{equation}\label{4phi3def}
\begin{split}
\tilde{\lambda}^{(1)}(z)&=\tilde{\lambda}^{(1)}-
z\left(\frac{[1,3]}{[2,3]}\tilde{\lambda}^{(2)}
+\frac{[1,3]}{[3,4]}\tilde{\lambda}^{(4)}\right)\\
\lambda^{(2)}(z)&=\lambda^{(2)}+z\frac{[1,3]}{[2,3]}\lambda^{(1)}\\
\lambda^{(4)}(z)&=\lambda^{(4)}+z\frac{[1,3]}{[3,4]}\lambda^{(1)}.
\end{split}
\end{equation}
A Feynman diagram analysis shows that the deformed amplitude
vanishes at infinity as $z^{-1}$. Taking the ${\sf t}$-channel as an
example, the deformed propagator in this channel is:
\begin{equation}\label{4phidefp}
\frac{1}{P_{14}^{2}(z)-m_{\chi}^{2}}, \quad
P_{14}(z)\:=\:P_{14}-z\frac{[1,3]}{[2,3]}\lambda^{(1)}\tilde{\lambda}^{(2)},
\end{equation}
and its pole is given by
\begin{equation}\label{4phipole}
z_{{\sf u}}\:=\:\frac{[2,3]}{[1,3]}
\frac{(P_{14}^{2}-m_{\chi}^{2})}{\langle1,4\rangle[2,4]}.
\end{equation}
The momentum $P_{14}$ on-shell becomes:
\begin{equation}\label{4phishell}
P_{14}(z_{{\sf u}})\:=\:P_{14}-
\frac{(P_{14}^{2}-m_{\chi}^{2})}{\langle1,4\rangle[2,4]}
\lambda^{(1)}\tilde{\lambda}^{(2)}.
\end{equation}
As stated at the beginning of the appendix, the three-particle
amplitude is just the coupling constant $g$, so it is easy to
reconstruct the result (\ref{4phiampl}) and, as a consequence,
(\ref{4philim}).
\end{appendix}
|
1,116,691,499,913 | arxiv | \section{Introduction}
\label{sec1}
The Bose-Einstein condensation (BEC) and superfluidity of dipolar
(indirect) excitons, formed by electrons and holes, spatially
separated in two parallel two-dimensional (2D) layers of
semiconductor, were proposed~\cite{Lozovik} and recent progress on
BEC of semiconductor dipolar excitons was
reviewed~\cite{Moskalenko_Snoke,Combescot}. Due to relatively large
exciton binding energies in novel 2D semiconductors, the BEC and
superfluidity of dipolar excitons in double layers of transition
metal dichalcogenides (TMDCs) were
studied~\cite{Fogler,MacDonald,BK}.
\medskip
\par
Phosphorene, an atom-thick layer of the black phosphorus
\cite{Warren_ACS}, which does have a natural band gap, has recently
aroused considerable interest. It has been shown that monolayer
black phosphorene is a relatively unexplored two-dimensional
semiconductor with a high hole mobility and exhibits unique
many-electron effects~\cite{1a}. In particular, first principles
calculations have predicted unusual strong anisotropy for the
in-plane thermal conductivity in these materials~\cite{1}. Among
the intriguing band structure features found are a large excitonic
binding energy \cite{LiuACS2014,Tran2014}, prominent anisotropic
electron and hole effective masses
\cite{Natute2014,Appelbaum,Rodin,Chaves2015} and carrier mobility
\cite{Xia2014,Natute2014}. Recently the exciton binding energy for
direct excitons in monolayer black phosphorus, placed on a
SiO$_\mathrm{2}$ substrate was obtained experimentally by
polarization-resolved photoluminescence measurements at room
temperature \cite{5b}. External perpendicular electric
fields~\cite{3} and mechanical strain~\cite{4,4a} have been applied
to demonstrate that the electronic properties of phosphorene may
be significantly modified. According to
Refs.~[\onlinecite{Tran2014,5b}], few-layer black phosphorus may exhibit
excitons and highly anisotropic optical responses.
Specifically,
black phosphorous absorbs light polarized along its armchair direction and is
transparent to light polarized along the zigzag direction. Consequently,
black phosphorene may be employed as a viable linear polarizer. Also, the interest in these recently fabricated
2D phosphorene crystals has been growing because
they have displayed potential for applications in electronics
including field effect transistors~\cite{2}.
\medskip
\par
This paper explores the way in which the anisotropy of black
phosphorene is capable of affecting superfluidity in a double layer
structure. It is important to mention that whereas the exciton
binding energy was calculated using density functional theory (DFT)
and quasiparticle self-consistent GW methods for direct excitons in
suspended few-layer black phosphorus~\cite{Tran2014}, here we apply
an analytical approach for indirect excitons in a phosphorene double
layer. In our model, electrons and holes are confined to two
separated parallel phosphorene layers which are embedded in a
dielectric medium. We have taken into account the screening of the interaction
potential between an electron and a hole through the Keldysh
potential~\cite{Keldysh}. The dilute system of dipolar excitons forms a weakly
interacting Bose gas, which can be treated in the Bogoliubov
approximation~\cite{Abrikosov}. The anisotropic dispersion relation
for the single dipolar exciton in a phosphorene double layer results
in the angle dependent spectrum of collective excitations with the
angle dependent sound velocity, which causes the dependence of the
critical velocity for the superfluidity on the direction of motion
of dipolar excitons.
While the concentrations of the normal and superfluid components
for the BCS-type fermionic superfluid with the anisotropic order
parameter do not depend on the direction of motion of the Cooper
pairs~\cite{Saslow}, we obtain the concentrations of the normal and
superfluid components for dipolar excitons in a double layer
phosphorene to be dependent on the directions of motion of excitons.
Therefore, the mean field temperature of the superfluidity for
dipolar excitons in a phosphorene double layer also depends on the
direction of motion of the dipolar excitons. At some fixed
temperatures, the motion of dipolar excitons in some directions is
superfluid, while in other directions it is dissipative. This effect
makes the superfluidity of dipolar excitons in a phosphorene double
layer different from that in other 2D semiconductors, due to the high
anisotropy of the dispersion relations for the charge carriers in
phosphorene. The calculations have been performed for both the
Keldysh and Coulomb potentials, describing the interactions between
the charge carriers. Such an approach allows us to analyze the influence
of the screening effects on the properties of a weakly interacting
Bose gas of dipolar excitons in a phosphorene double layer. We also
study the dependence of the binding energy, the sound velocity, and
the mean field temperature of the superfluidity for dipolar excitons
on the electron and hole effective masses.
\medskip
\par
The paper is organized in the following way. In Sec.~\ref{tm}, the
energy spectrum and wave functions for a single dipolar exciton in
a phosphorene double layer are obtained, and the dipolar exciton
effective masses and binding energies are calculated. The angle
dependent spectrum of collective excitations and the sound velocity
for the dilute weakly interacting Bose gas of dipolar excitons in
the Bogoliubov approximation are derived in Sec.~\ref{collect}. In
Sec.~\ref{super}, the concentrations of the normal and superfluid
components and the mean field critical temperature of superfluidity
are obtained. The proposed experiment to
study the superfluidity of dipolar excitons for different directions
of their motion is discussed in
Sec.~\ref{experiment}. The conclusions follow in Sec.~\ref{conc}.
\section{Theoretical Model}
\label{tm}
\medskip
\par
In the system under consideration in this paper, electrons are
confined in a 2D phosphorene monolayer, while an
equal number of positive holes are located in a parallel phosphorene
monolayer at a distance $D$ away. The system of the
charge carriers in two parallel phosphorene layers is treated as a
two-dimensional system without interlayer hopping. In this system,
the electron-hole recombination due to the tunneling of electrons
and holes between different phosphorene monolayers is suppressed by
the dielectric barrier with the dielectric constant $\varepsilon
_{d}$ that separates the phosphorene monolayers. Therefore, the
dipolar excitons, formed by electrons and holes, located in two
different phosphorene monolayers, have a longer lifetime than the
direct excitons. The electron and hole, interacting via the electromagnetic
potential $V(r_{eh})$, where $r_{eh}$ is the distance between the electron and hole, could form a bound state, i.e.,
an exciton, in three-dimensional (3D) space. Therefore, to
determine the binding energy of the exciton one must solve a two
body problem in restricted 3D space. However, if one projects the
electron position vector onto the black phosphorene plane with holes
and replaces the relative coordinate vector ${\bf r}_{eh}$ by its
projection $\mathbf{r}$ on this plane, the potential $V(r_{eh})$ may
be expressed as $V(r)= V(\sqrt{r^{2}+D^{2}}),$ where $r$ is the
relative distance between the hole and the projection of the
electron position vector onto the phosphorene plane with holes. A
schematic illustration of the exciton is presented in Fig.
\ref{Fig1}. By introducing in-plane coordinates
$\mathbf{r}_{1}=(x_{1},y_{1})$ and $\mathbf{r}_{2}=(x_{2},y_{2})$\
for the electron and the projection vector of the hole,
respectively, so that $\mathbf{r}=$ $\mathbf{r}_{1}-\mathbf{r}_{2}$,
one can describe the exciton by employing a two-body 2D
Schr\"{o}dinger equation with potential $V(\sqrt{r^{2}+D^{2}}).$ In
this way, we have reduced the restricted 3D two-body problem to a 2D
two-body problem on a phosphorene layer with the holes.
\begin{figure}[h]
\includegraphics[width=14.0cm]{Fig1.eps} \vspace{-12cm}
\caption{(Color online) Schematic illustration of a dipolar exciton
consisting of a spatially separated electron and hole in a black
phosphorene double layer.} \label{Fig1}
\end{figure}
\subsection{Hamiltonian for an electron-hole pair in a black phosphorene double layer}
\label{single}
\medskip
\par
Within the framework of our model the
coordinate vectors of the electron and hole may be replaced by their
2D projections onto the plane of one phosphorene layer. These
in-plane coordinates $\mathbf{r}_{1}=(x_{1},y_{1})$ and
$\mathbf{r}_{2}=(x_{2},y_{2})$ for an electron and a hole,
respectively, will be used in our description. We assume that at
low momentum $\mathbf{p}=(p_{x},p_{y})$, i.e., near the $\Gamma$
point, the single electron and hole energy spectrum
$\varepsilon _{l}^{(0)}(\mathbf{p})$ is given by
\begin{eqnarray}
\varepsilon_{l}^{(0)}(\mathbf{p}) = \frac{p_{x}^{2}}{2m_{x}^{l}} +
\frac{ p_{y}^{2}}{2m_{y}^{l}}, \ \ l =e,\ h, \label{esingl}
\end{eqnarray}
where $m_{x}^{l}$ and $m_{y}^{l}$ are the electron/hole effective
masses along the $x$ and $y$ directions, respectively. We assume
that $OX$ and $OY$ axes correspond to the armchair and zigzag
directions in a phosphorene monolayer, respectively.
\medskip
\par
The model Hamiltonian within the effective mass approximation for
a single electron-hole pair in a black phosphorene
double layer is given by
\begin{eqnarray}
\hat{H}_{0} = -
\frac{\hbar^{2}}{2m_{x}^{e}}\frac{\partial^{2}}{\partial x_{1}^{2}}
+ \frac{\hbar^{2}}{2m_{y}^{e}}\frac{\partial^{2}}{\partial
y_{1}^{2}} -
\frac{\hbar^{2}}{2m_{x}^{h}}\frac{\partial^{2}}{\partial x_{2}^{2}}
- \frac{\hbar^{2}}{2m_{y}^{h}}\frac{\partial^{2}}{\partial
y_{2}^{2}} + V\left(\sqrt{r^{2}+D^{2}}\right)\ , \label{H0}
\end{eqnarray}
where $V\left(\sqrt{r^{2}+D^{2}}\right)$ is the potential energy for
electron-hole pair attraction, when the electron and hole are
located in two different 2D planes. To separate the relative motion
of the electron-hole pair from their center-of-mass motion one can
introduce variables for the center-of-mass of an
electron-hole pair $\mathbf{R} =(X,Y)$ and the relative motion of an electron and a hole
$\mathbf{r} = (x,y)$, as $X = (m_{x}^{e} x_{1} + m_{x}^{h} x_{2})/(
m_{x}^{e}+ m_{x}^{h})$, \
$Y = (m_{y}^{e} y_{1} + m_{y}^{h} y_{2})/(
m_{y}^{e}+ m_{y}^{h})$, \ $x = x_{1} - x_{2} \ ,
y = y_{1} - y_{2}$ \ , $r^{2}=x^2+y^2$.
The latter allows one to rewrite the Hamiltonian as $\hat{H}_{0} =
\hat{H}_{c} + \hat{H}_{rel} $,
where
\begin{eqnarray}
\hat{H}_{c} = - \frac{\hbar^{2}}{2M_{x}}\frac{\partial^{2}}{\partial X^{2}}
- \frac{\hbar^{2}}{2M_{y}}\frac{\partial^{2}}{\partial Y^{2}} \ ,
\label{Hc}
\end{eqnarray}
\begin{eqnarray}
\hat{H}_{rel} = - \frac{\hbar^{2}}{2\mu_{x}}\frac{\partial^{2}}{\partial
x^{2}} - \frac{\hbar^{2}}{2\mu_{y}}\frac{\partial^{2}}{\partial y^{2}} + V\left(\sqrt{r^{2}+D^{2}}\right)
\label{Hrel}
\end{eqnarray}
are the Hamiltonians of the center-of-mass and relative motion of an
electron-hole pair, respectively. In Eqs.\ (\ref{Hc}) and
(\ref{Hrel}), $M_{x} = m_{x}^{e}+ m_{x}^{h}$ and $M_{y} = m_{y}^{e}+
m_{y}^{h}$ are the effective exciton masses, describing the motion
of an electron-hole center-of-mass in the $x$ and $y$
directions, respectively, while $\mu_{x} = \frac{m_{x}^{e} m_{x}^{h}}{m_{x}^{e}+ m_{x}^{h}}$ and $\mu_{y} = \frac{m_{y}^{e} m_{y}^{h}}{m_{y}^{e}+ m_{y}^{h}}$ are the reduced masses, describing the
relative motion of an electron-hole pair in the $x$ and $y$
directions, respectively.
\medskip
\par
In general the Schr\"{o}dinger equation for this electron-hole pair has the form:
$\hat{H}_{0} \Psi(\mathbf{r}_{1},\mathbf{r}_{2}) = E\Psi(\mathbf{r}_{1},\mathbf{r}_{2})$, where $\Psi(\mathbf{r}_{1},\mathbf{r}_{2})$ and
$E$ are its eigenfunction and eigenenergy. Substituting Eqs.\
(\ref{Hc}) and (\ref{Hrel}) into $\hat{H}_{0} $, due to the
separation of variables
for the center-of-mass and relative motion, one can write $\Psi(\mathbf{r}_{1},\mathbf{r}_{2})$ in the form $\Psi(\mathbf{r}_{1},\mathbf{r}_{2}) = \Psi(\mathbf{R},\mathbf{r}) = e^{i\mathbf{P}\cdot\mathbf{R}/\hbar}\varphi(\mathbf{r})$, where $\mathbf{P} = (P_{x},P_{y})$ is the momentum for the
center-of-mass of the electron-hole pair and $\varphi(\mathbf{r})$
is the wave function for the electron-hole pair, given by the 2D
Schr\"{o}dinger equation:
\begin{eqnarray}
\left[ - \frac{\hbar^{2}}{2\mu_{x}}\frac{\partial^{2}}{\partial x^{2}} -
\frac{\hbar^{2}}{2\mu_{y}}\frac{\partial^{2}}{\partial y^{2}} + V\left(\sqrt{
r^{2}+D^{2}}\right)\right]\varphi(x,y) = \mathcal{E}\varphi(x,y) ,
\label{Schrel}
\end{eqnarray}
where $\mathcal{E}$ is the eigenenergy
of the electron-hole pair in a black phosphorene double layer.
\subsection{Electron-hole interaction in a black phosphorene double layer}
\medskip
\par
The electromagnetic interaction in a thin layer of material has a
nontrivial form due to screening \cite{Keldysh,Rubio}. Whereas the
electron and hole are interacting via the Coulomb potential, in
black phosphorene the electron-hole interaction is affected by
screening which causes the electron-hole attraction to be described
by the Keldysh potential \cite{Keldysh}. This potential has been
widely used to describe the electron-hole interaction in TMDC
\cite{Reichman,Prada,Saxena,VargaPRB2016,Kezerashvili2016} and black phosphorene \cite{Rodin,Chaves2015,Katsnelson} monolayers.
The Keldysh potential has the form~\cite{Rodin}
\begin{eqnarray}
V(r_{eh}) = -\frac{\pi k e^{2}}{\left(\varepsilon_{1} +
\varepsilon_{2}\right)\rho_{0}} \left[H_{0}\left(\frac{r_{eh}}{\rho_{0}}\right) - Y_{0}\left(\frac{r_{eh}}{\rho_{0}}\right) \right] ,
\label{Keldysh}
\end{eqnarray}
where $r_{eh}$ is the distance between the
electron and hole located in the different parallel planes, $k=9\times 10^{9}\ N\times m^{2}/C^{2}$,
$H_{0}(x)$ and $Y_{0}(x)$ are Struve and Bessel functions of the
second kind of order $\nu=0$, respectively, $\varepsilon_{1}$ and $\varepsilon_{2}$
denote the background dielectric constants of the dielectrics,
surrounding the black phosphorene layer, and the screening length
$\rho_{0}$ is defined by $\rho _{0}=2\pi
\zeta/\left[\left(\varepsilon_{1} +
\varepsilon_{2}\right)/2\right]$, where $\zeta = 4.1 \
{\AA}$~\cite{Rodin}. Assuming that the dielectric between two
phosphorene monolayers is the same as substrate material with
dielectric constant $\varepsilon_{d}$, we set $\varepsilon_{1} =
\varepsilon_{2} = \varepsilon_{d}$. The screening length $\rho_{0}$
determines the boundary between two different behaviors for the
potential due to a nonlocal macroscopic screening. For large
separation between the electron and hole, i.e., $r_{eh} \gg \rho_{0}$,
the potential has the three-dimensional Coulomb tail.
On the other hand, for small distances $r_{eh} \ll \rho_{0}$
it becomes a logarithmic Coulomb potential of interaction between
two point charges in 2D. A crossover between these two regimes
takes place around distance $\rho _{0}$.
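These two limiting behaviors can be checked using the standard asymptotics of the Struve and Bessel functions, namely $H_{0}(x)-Y_{0}(x)\simeq 2/(\pi x)$ for $x\gg 1$, and $Y_{0}(x)\simeq (2/\pi)\left[\ln (x/2)+\gamma_{E}\right]$, $H_{0}(x)\simeq 2x/\pi$ for $x\ll 1$, where $\gamma_{E}$ is the Euler constant. Substituting these into Eq.~(\ref{Keldysh}) gives
\begin{eqnarray}
V(r_{eh}) &\simeq& -\frac{2 k e^{2}}{\left(\varepsilon_{1}+\varepsilon_{2}\right) r_{eh}}, \qquad r_{eh}\gg\rho_{0}, \nonumber \\
V(r_{eh}) &\simeq& \frac{2 k e^{2}}{\left(\varepsilon_{1}+\varepsilon_{2}\right)\rho_{0}}\left[\ln\left(\frac{r_{eh}}{2\rho_{0}}\right)+\gamma_{E}\right], \qquad r_{eh}\ll\rho_{0},
\end{eqnarray}
i.e., the three-dimensional Coulomb tail and the logarithmic 2D interaction, respectively.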
\medskip
\par
Making use of $r_{eh} = \sqrt{r^{2}+ D^{2}}$ in Eq.~(\ref{Keldysh})
and assuming that $r\ll D$, one can expand Eq.\ (\ref{Keldysh}) as a
Taylor series in terms of $\left(r/D\right)^{2}$. By limiting
ourselves to the first order with respect to $\left(r/D\right)^{2}$,
we obtain
\begin{eqnarray}
V(r) = -V_{0} + \gamma r^{2}, \label{expand}
\end{eqnarray}
with
\begin{eqnarray}
V_{0} &=& \frac{\pi k e^{2}}{\left(\varepsilon_{1} +
\varepsilon_{2}\right)\rho_{0}} \left[H_{0}\left(\frac{D}{\rho_{0}}\right) -
Y_{0}\left(\frac{D}{\rho_{0}}\right) \right], \label{V0}
\nonumber\\
\gamma &=& - \frac{\pi k e^{2}}{2\left(\varepsilon_{1} +
\varepsilon_{2}\right)\rho_{0}^{2}D} \left[H_{-1}\left(\frac{D}{\rho_{0}}\right) - Y_{-1}\left(\frac{D}{\rho_{0}}\right) \right] ,
\label{gamma}
\end{eqnarray}
where $H_{-1}\left(\frac{D}{\rho_{0}}\right)$ and $Y_{-1}\left(\frac{D}{\rho_{0}}\right)$ are Struve and Bessel functions of the
second kind of order $\nu=-1$, respectively.
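The coefficients $V_{0}$ and $\gamma$ given above follow from the generic expansion
\begin{equation}
V\left(\sqrt{r^{2}+D^{2}}\right) \simeq V(D) + \frac{V'(D)}{2D}\, r^{2}, \qquad r\ll D,
\end{equation}
applied to Eq.~(\ref{Keldysh}), together with the derivative identities $dH_{0}(x)/dx = H_{-1}(x)$ and $dY_{0}(x)/dx = Y_{-1}(x)$.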
\medskip
\par
To illustrate the screening effect of the Keldysh interaction let us
use for the electron-hole interaction the Coulomb potential. The
potential energy of the electron-hole attraction in this case is
$V(r)=-ke^{2}/(\epsilon _{d}\sqrt{r^{2}+D^{2}})$. Assuming $r\ll D$
and retaining only the first two terms of the Taylor series, one
obtains the same form for a potential as Eq.~(\ref{expand}) but
with the following expressions for $V_{0}$ and $\gamma$:
\begin{eqnarray}
V_{0}=\frac{ke^{2}}{\epsilon _{d}D},\hspace{1cm}\gamma =\frac{ke^{2}}{2\epsilon _{d}D^{3}}.
\label{V0g}
\end{eqnarray}
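As a brief check, the same expansion for the Coulomb case reads
\begin{equation}
V(r) = -\frac{ke^{2}}{\epsilon_{d}D}\left(1+\frac{r^{2}}{D^{2}}\right)^{-1/2} \simeq -\frac{ke^{2}}{\epsilon_{d}D} + \frac{ke^{2}}{2\epsilon_{d}D^{3}}\, r^{2},
\end{equation}
which reproduces Eq.~(\ref{V0g}).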
Replacement of $V\left(\sqrt{r^{2}+D^{2}}\right)$ in Eq.\ (\ref{Hrel}) by the potential (\ref{expand}) allows one to reduce the problem of an
indirect exciton formed between two layers to an exactly solvable two-body problem, as demonstrated in the next subsection.
\medskip
\par
\subsection{Wave function and binding energy of an exciton}
Substituting (\ref{expand}) with parameters (\ref{gamma}) for the Keldysh potential or
(\ref{V0g}) for the Coulomb potential, into Eq.\ (\ref{Hrel}) and
using $r^2=x^2+y^2$, one obtains an equation which has the form of
the Schr\"{o}dinger equation for a 2D anisotropic harmonic
oscillator. This equation allows to separate the $x$ and $y$
variables and can be reduced to two independent Schr\"{o}dinger
equations for 1D harmonic oscillators, i.e.,
\begin{eqnarray}
&-& \frac{\hbar^{2}}{2\mu_{x}}\frac{d^{2}}{d x^{2}}\psi(x) + \gamma
x^{2}\psi (x) = \left( \mathcal{E}_{x}+\frac{V_{0}}{2}\right)\psi
(x), \nonumber \\
&-& \frac{\hbar^{2}}{2\mu_{y}}\frac{d^{2}}{d y^{2}}\psi(y) + \gamma
y^{2}\psi (y) = \left( \mathcal{E}_{y}+\frac{V_{0}}{2}\right)
\psi(y), \label{1DSchrel}
\end{eqnarray}
which have eigenfunctions given by \cite{Landau}:
\begin{eqnarray}
\psi_{n} (x) &=& \frac{1}{\pi^{1/4}a_{x}^{1/2}}\frac{1}{\sqrt{2^{n}n!}}
e^{-x^{2}/\left(2a_{x}^{2}\right)}\mathcal{H}_{n}\left(\frac{x}{a_{x}}\right), \nonumber \\
\psi_{m} (y) &=& \frac{1}{\pi^{1/4}a_{y}^{1/2}}\frac{1}{\sqrt{2^{m}m!}}
e^{-y^{2}/\left(2a_{y}^{2}\right)}\mathcal{H}_{m}\left(\frac{y}{a_{y}}\right),
\label{1dapsu}
\end{eqnarray}
where $n=0,1,2,3,\ldots$ and $m=0,1,2,3,\ldots$ are the quantum numbers,
$\mathcal{H}_{n}(\xi)$ are Hermite polynomials, and $a_{x}=\left( \hbar /\sqrt{2\mu_{x}\gamma }\right)^{1/2}$ and $a_{y}=\left( \hbar /\sqrt{2\mu_{y}\gamma }\right)^{1/2}$, respectively. The corresponding
eigenenergies for the 1D harmonic oscillators
are given by~\cite{Landau}:
\begin{eqnarray}
\mathcal{E}_{xn} &=& - \frac{V_{0}}{2} + \hbar\sqrt{\frac{2\gamma}{\mu_{x}}}
\left(n + \frac{1}{2}\right) , \ n=0, 1, 2, \ldots \ , \nonumber \\
\mathcal{E}_{ym} &=& - \frac{V_{0}}{2} + \hbar\sqrt{\frac{2\gamma}{\mu_{y}}}
\left(m + \frac{1}{2}\right) , \ m=0, 1, 2, \ldots \ . \label{1daen}
\end{eqnarray}
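These are the standard one-dimensional harmonic oscillator results: identifying $\gamma\equiv\mu_{x}\omega_{x}^{2}/2$ and $\gamma\equiv\mu_{y}\omega_{y}^{2}/2$ in Eq.~(\ref{1DSchrel}) gives the oscillator frequencies
\begin{equation}
\omega_{x}=\sqrt{\frac{2\gamma}{\mu_{x}}}, \qquad \omega_{y}=\sqrt{\frac{2\gamma}{\mu_{y}}},
\end{equation}
so that $\mathcal{E}_{xn}+V_{0}/2=\hbar\omega_{x}\left(n+1/2\right)$, $\mathcal{E}_{ym}+V_{0}/2=\hbar\omega_{y}\left(m+1/2\right)$, and the oscillator lengths are $a_{x}=\sqrt{\hbar/(\mu_{x}\omega_{x})}$ and $a_{y}=\sqrt{\hbar/(\mu_{y}\omega_{y})}$, consistent with Eq.~(\ref{1dapsu}).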
\medskip
\par
Thus, the energy spectrum $\mathcal{E}_{nm}$ of an electron and hole
comprising a dipolar exciton in a black phosphorene double layer,
described by Eq.~(\ref{Schrel}), is
\begin{eqnarray}
\mathcal{E}_{nm} =\mathcal{E}_{xn}+\mathcal{E}_{ym}= - V_{0} + \hbar\sqrt{\frac{2\gamma}{\mu_{x}}}\left(n + \frac{1}{2}\right) + \hbar\sqrt{\frac{2\gamma}{\mu_{y}}}\left(m + \frac{1}{2}\right) , \ n=0, 1, 2, \ldots ;
\ m=0, 1, 2, \cdots \ , \label{relsp}
\end{eqnarray}
while the wave function $\varphi_{nm}(x,y)$ for the relative motion
of an electron and a hole in a dipolar exciton in a black
phosphorene double layer, described by Eq.~(\ref{Schrel}), is given
by
\begin{eqnarray}
\varphi_{nm}(x,y) = \psi_{n} (x) \psi_{m} (y) ,
\label{relwf}
\end{eqnarray}
where $\psi_{n} (x)$ and $\psi_{m} (y)$ are defined by
Eq.~(\ref{1dapsu}).
The corresponding binding energy is
\begin{eqnarray}
B = - \mathcal{E}_{00} = V_{0} - \hbar\sqrt{\frac{\gamma}{2\mu_{x}}} - \hbar\sqrt{\frac{\gamma}{2\mu_{y}}} = V_{0} -
\hbar\sqrt{\frac{\gamma}{2\mu_{0}}} \ . \label{bind}
\end{eqnarray}
In Eqs.~(\ref{relsp}) and~(\ref{bind}) $\mu_{0} =
\frac{\mu_{x}\mu_{y}}{\left(\sqrt{\mu_{x}}+\sqrt{\mu_{y}}
\right)^{2}}$ is ``the reduced mass of the exciton reduced masses''.
Setting $\mu_{x} = \mu_{y} = \tilde{\mu}$ corresponding to an
isotropic system, we have $\mu_{0} = \tilde{\mu}/4$.
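The last equality in Eq.~(\ref{bind}) uses the elementary identity
\begin{equation}
\frac{1}{\sqrt{\mu_{x}}}+\frac{1}{\sqrt{\mu_{y}}} = \frac{\sqrt{\mu_{x}}+\sqrt{\mu_{y}}}{\sqrt{\mu_{x}\mu_{y}}} = \frac{1}{\sqrt{\mu_{0}}},
\end{equation}
which is the combination of the reduced masses that controls the zero-point energy of the relative motion.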
We consider the phosphorene monolayers to be separated
by $h$-BN insulating layers. Besides we assume $h$-BN insulating
layers to be placed on the top and on the bottom of the phosphorene
double layer. For this insulator $\varepsilon_{d} = 4.89$ is the
effective dielectric constant, defined as $\varepsilon_{d} =
\sqrt{\varepsilon^{\bot}}\sqrt{\varepsilon^{\parallel}}$~\cite{Fogler},
where $\varepsilon^{\bot}= 6.71$ and $\varepsilon^{\parallel} =
3.56$ are the components of the dielectric tensor for
$h$-BN~\cite{CaihBN}. Since the thickness of a $h$-BN monolayer is
given by $c_{1} = 3.33 \ \mathrm{{\AA}}$~\cite{Fogler}, the
interlayer separation $D$ is presented as $D = N_{L}c_{1}$, where
$N_{L}$ is the number of $h$-BN monolayers, placed between two
phosphorene monolayers. Let us mention that $h$-BN monolayers are
characterized by a relatively small density of defects in their
crystal structure, which has allowed the quantum Hall effect to be measured
in few-layer black phosphorus sandwiched between two $h$-BN
flakes~\cite{LiHBN}.
One can obtain the square of the in-plane gyration radius $r_{X}$ of
a dipolar exciton, which is the average squared projection of the
electron-hole separation onto the plane of a phosphorene
monolayer~\cite{Fogler}, as
\begin{equation}
r_{X}^{2} = \int \varphi_{00}^{*}(x,y)\, r^{2}\,
\varphi_{00}(x,y)\, d^{2} r = \frac{1}{a_{x}\sqrt{\pi}}
\int_{-\infty}^{\infty} x^{2} e^{- \frac{x^{2}}{a_{x}^{2}}} d x +
\frac{1}{a_{y}\sqrt{\pi}} \int_{-\infty}^{\infty} y^{2} e^{-
\frac{y^{2}}{a_{y}^{2}}} d y = \frac{a_{x}^{2}+ a _{y}^{2}}{2} \ .
\label{rx2}
\end{equation}
\medskip
\par
We emphasize that the Taylor series expansion of the electron-hole
attraction potential to first order in $(r/D)^{2}$, presented in
Eq.\ (\ref{expand}) is valid if the inequality $\left\langle r^{2}
\right\rangle = r_{X}^{2} = \left(a_{x}^{2} + a_{y}^{2}\right)/2 \ll
D^{2}$ is satisfied, where $a_{x}$ and $a_{y}$ are defined above.
Consequently, one finds that $\hbar/\left(2\sqrt{2
\mu_{0}\gamma}\right) \ll D^{2}$. The latter inequality holds for $D
\gg D_{0}$. For the Coulomb potential $D_{0} =
\hbar^{2}\varepsilon_{d}/ \left(4 ke^{2} \mu_{0}\right)$. If
$\mu_{x} = \mu_{y} = \tilde{\mu}$ for the isotropic system, we have
$D_{0} = \hbar^{2}\varepsilon_{d}/ \left(ke^{2}\tilde{\mu}\right)$.
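For the Coulomb potential this estimate follows directly: substituting $\gamma = ke^{2}/(2\epsilon_{d}D^{3})$ from Eq.~(\ref{V0g}) into the inequality $\hbar/\left(2\sqrt{2\mu_{0}\gamma}\right)\ll D^{2}$ gives
\begin{equation}
\frac{\hbar}{2}\sqrt{\frac{\epsilon_{d}D^{3}}{\mu_{0}ke^{2}}}\ll D^{2}
\quad\Longleftrightarrow\quad
D\gg\frac{\hbar^{2}\epsilon_{d}}{4ke^{2}\mu_{0}}=D_{0}.
\end{equation}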
For the Keldysh potential, one has to use Eq.\ (\ref{gamma}) for
$\gamma$ and solve the following transcendental equation
\begin{equation}
D_{0}^{3} = - \frac{\hbar^{2}\left(\varepsilon_{1}+
\varepsilon_{2}\right)\rho_{0}^{2}}{4 \pi k e^{2}\mu_{0}\left[H_{-1}\left(\frac{D_{0}}{\rho_{0}}\right) - Y_{-1}\left(\frac{D_{0}}{\rho_{0}}\right)
\right] } .
\label{KeldD0}
\end{equation}
The values of $D_{0}$ for the Keldysh and Coulomb
potentials depend on $\mu_{0}$, and therefore on the effective masses
of the electron and hole. Here and below in our calculations we use
effective masses for electron and hole from
Refs.~\onlinecite{Peng2014,Tran2014-2,Paez2014,Qiao2014}. The results reported in these four papers
were obtained by using first principles calculations. The different
functionals for the correlation energy and the settings of the hopping
parameters lead to some differences in their results, such as the geometry
structures; e.g., the lattice constants in the four papers do not
coincide with each other, and this can cause differences in the band
curvatures and effective masses. The latter motivates us to use in our
calculations the different sets of masses from
Refs.~\onlinecite{Peng2014,Tran2014-2,Paez2014,Qiao2014}, which allows us
to understand the dependence of the binding energy, the sound
velocity, and the mean field temperature of the superfluidity on the effective masses of electrons and holes.
The values of $D_{0}$ for the Keldysh potential,
obtained by solving Eq.\ (\ref{KeldD0}), and the Coulomb potential
for the sets of the masses from
Refs.~[\onlinecite{Peng2014,Tran2014-2,Paez2014,Qiao2014}],
respectively, are given in Table~\ref{tab1}. As it can be seen in
Table~\ref{tab1}, the characteristic value of
$D_{0}$, entering the condition $D \gg D_{0}$ of validity
of the first order Taylor expansion of electron-hole attraction
potential, given by Eq.\ (\ref{expand}), is about one order of
magnitude smaller for the Keldysh potential than for the Coulomb
potential. Therefore, the first order Taylor expansion can be
valid for the smaller interlayer separations $D$ for the Keldysh
potential than for the Coulomb potential. Thus, validity of the
harmonic oscillator approximation of the Keldysh potential is more
reasonable. This is due to the fact that the Keldysh potential
describes the screening, which makes the Keldysh potential to be
more short-range than the Coulomb potential. Therefore, the harmonic
oscillator approximation of electron-hole attraction potential,
given by Eq.\ (\ref{expand}), can be valid for smaller number
$N_{L}$ of $h$-BN insulating layers between two phosphorene
monolayers for the Keldysh potential than for the Coulomb potential.
According to Table~\ref{tab1}, for both potentials $D_{0}$ is not
sensitive to the choice of the set of effective electron and hole
masses. Comparisons of the Keldysh and Coulomb interaction
potentials for an electron-hole pair and their approximations using
harmonic oscillator potentials obtained from a Taylor series
expansion are presented in Fig. \ref{Fig2}. According to Fig.
\ref{Fig2}a, the Keldysh potential is weaker than the Coulomb
potential at small projections $r$ of the electron-hole distance on
the phosphorene monolayer plane, while the two potentials approach
each other as $r$ increases, demonstrating almost no
difference at $r \gtrsim 25 \ {\AA}$.
\medskip
\par
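As a purely illustrative numerical sketch (not part of the derivation above), the transcendental equation (\ref{KeldD0}) can be solved by standard root finding. In the Python code below, the reduced mass, the screening length $\rho_{0}$, and the dielectric constants are placeholder values assumed only for the example, so the printed number is not meant to reproduce Table~\ref{tab1}; $H_{-1}$ and $Y_{-1}$ are obtained from the standard recurrences $H_{-1}(x)=2/\pi-H_{1}(x)$ and $Y_{-1}(x)=-Y_{1}(x)$.
\begin{verbatim}
# Root finding for Eq. (KeldD0); all material parameters below are
# assumed placeholders chosen only to make the example runnable.
import numpy as np
from scipy.special import struve, yv
from scipy.optimize import brentq

hbar = 1.054571817e-34      # J s
e    = 1.602176634e-19      # C
k    = 8.9875517923e9       # Coulomb constant, N m^2 / C^2
m0   = 9.1093837015e-31     # free electron mass, kg

mu0  = 0.04 * m0            # exciton reduced mass (order of Table II)
eps1 = eps2 = 1.0           # dielectric constants of the surroundings (assumed)
rho0 = 1.0e-9               # Keldysh screening length, m (assumed)

H_m1 = lambda x: 2.0 / np.pi - struve(1, x)   # H_{-1} = 2/pi - H_1
Y_m1 = lambda x: -yv(1, x)                    # Y_{-1} = -Y_1

def f(D0):
    # Eq. (KeldD0) rearranged as f(D0) = 0
    bracket = H_m1(D0 / rho0) - Y_m1(D0 / rho0)
    return D0**3 + hbar**2 * (eps1 + eps2) * rho0**2 / (
        4.0 * np.pi * k * e**2 * mu0 * bracket)

D0 = brentq(f, 1.0e-12, 1.0e-8)               # bracket: 0.01 to 100 Angstrom
print(f"D_0 = {D0 * 1e10:.2f} Angstrom")
\end{verbatim}
With the actual screening length and dielectric environment of the phosphorene double layer, the same procedure yields the values quoted in Table~\ref{tab1}.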
\begin{table}[t]
\caption{Values of $D_{0}$ for the Keldysh and Coulomb potentials for different sets of electron and hole masses from Refs. \cite{Peng2014}, \cite{Tran2014-2}, \cite{Paez2014}, and \cite{Qiao2014}. }
\begin{center}
\begin{tabular}{cccccc}
\hline\hline Mass from Ref: & & \cite{Peng2014} & \cite{Tran2014-2}
& \cite{Paez2014} & \cite{Qiao2014} \\ \hline
Keldysh potential & $D_{0},\ \mathrm{{\AA}}$ & 1.0 & 0.98 & 0.9 & 0.9 \\
Coulomb potential & $D_{0},\ \mathrm{{\AA}}$ & 14.7 & 14.4 & 12.2 & 12.3 \\
\hline\hline
\end{tabular}
\end{center}
\label{tab1}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=18.0cm]{Fig2.eps} \vspace{-2.7cm}
\caption{(Color online) (a) The Keldysh and Coulomb potentials for
electron-hole attraction in a black phosphorene double layer. (b)
Comparison of the Keldysh and Coulomb electron-hole attractions in a
black phosphorene double layer approximated by the harmonic
oscillator potential. The calculations were performed for the number
$N_{L} = 7$ of $h$-BN monolayers, placed between two phosphorene
monolayers, the set of masses from Ref. \protect\cite{Peng2014} and
polarizability from Ref. \protect\cite{Rodin}.} \label{Fig2}
\end{figure}
For the number $N_{L} = 7$ of $h$-BN monolayers, placed between two
phosphorene monolayers, the binding energies of dipolar excitons,
calculated for the sets of the masses from
Refs.~[\onlinecite{Peng2014,Tran2014-2,Paez2014,Qiao2014}] by using
Eq.~(\ref{bind}), are given by $28.2 \ \mathrm{meV}$, $ 29.6 \
\mathrm{meV}$, $ 37.6 \ \mathrm{meV}$, and $ 37.2 \ \mathrm{meV}$.
Let us mention that the maximal dipolar exciton binding energy was
obtained for the set of the masses taken from
Ref.~[\onlinecite{Paez2014}]. The dipolar exciton binding energy
increases when the exciton reduced mass $\mu_{0}$ increases.
The reduced mass $\mu_{0}$
for the sets of the masses from
Refs.~[\onlinecite{Peng2014,Tran2014-2,Paez2014,Qiao2014}] is
presented in Table~\ref{tab2}. One can conclude that while $D_{0}$
is not sensitive to the choice of the set of effective electron and
hole masses, the binding energy of the indirect exciton
depends on the exciton reduced mass $\mu_{0}$, which is
defined by the effective electron and hole masses.
\medskip
\par
It is worthy of note that the energy spectrum of the center-of-mass
of an electron-hole pair $\varepsilon_{0}(\mathbf{P})$ may be
expressed as
\begin{eqnarray}
\varepsilon_{0}(\mathbf{P}) = \frac{P_{x}^{2}}{2M_{x}} + \frac{P_{y}^{2}}{2M_{y}} .
\label{eps0}
\end{eqnarray}
Substituting the polar coordinate for the momentum $P_{x} = P \cos
\Theta$ and $P_{y} = P \sin \Theta$ into Eq.~(\ref{eps0}), we obtain
\begin{eqnarray}
\varepsilon_{0}(\mathbf{P}) = \varepsilon_{0}(P,\Theta) = \frac{P^{2}}{2M_{0}(\Theta)},
\label{eps0pol}
\end{eqnarray}
where $M_{0}(\Theta)$ is the effective angle-dependent exciton mass in a
black phosphorene double layer, given by
\begin{eqnarray}
M_{0}(\Theta) = \left[\frac{\cos^{2}\Theta}{M_{x}} + \frac{\sin^{2}\Theta}{M_{y}} \right]^{-1}.
\label{M0}
\end{eqnarray}
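A minimal numerical illustration of the angle-dependent exciton mass of Eq.\ (\ref{M0}) is given below; the values of $M_{x}$ and $M_{y}$ are assumed for the example only and are merely of the order suggested by Table~\ref{tab2}.
\begin{verbatim}
# Angle-dependent exciton mass, Eq. (M0); M_x and M_y are assumed values.
import numpy as np

m0 = 9.1093837015e-31                 # free electron mass, kg
Mx, My = 0.3 * m0, 5.0 * m0           # assumed exciton masses along x and y

def M0(theta):
    return 1.0 / (np.cos(theta)**2 / Mx + np.sin(theta)**2 / My)

for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"Theta = {theta:.2f} rad:  M_0 = {M0(theta) / m0:.2f} m_0")
\end{verbatim}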
\section{Collective excitations for dipolar excitons in a black phosphorene
double layer}
\label{collect}
We now turn our attention to a dilute distribution of electrons and
holes in a pair of parallel black phosphorene layers spatially
separated by a dielectric, when $n r_{X}^{2}\ll 1$, where $n$ is
the concentration of dipolar excitons. In this limit, the dipolar
excitons are formed by electron-hole pairs with the electrons and
holes spatially separated in two different phosphorene layers.
\medskip
\par
The distinction between excitons, which are not elementary but
composite bosons~\cite{Comberscot}, and elementary bosons is caused by
exchange effects~\cite{Moskalenko_Snoke}. At large interlayer separations
$D$, the exchange effects in the exciton-exciton interactions in a
phosphorene double layer can be neglected, since the exchange
interactions in a spatially separated electron-hole system in a
double layer are suppressed due to the low tunneling probability,
caused by the shielding of the dipole-dipole interaction by the
insulating barrier~\cite{BK,BLG}. Therefore, we treat the dilute
system of dipolar excitons in a phosphorene double layer as a weakly
interacting Bose gas.
\medskip
\par
The model Hamiltonian $\hat{H}$ of the 2D interacting dipolar excitons is given by
\begin{eqnarray}
\hat{H}= \sum_{\mathbf{P}} \varepsilon_{0}(P,\Theta)a_{\mathbf{P}}^{\dagger}a_{\mathbf{P}} + \frac{g}{S}\sum_{\mathbf{P}_{1}\mathbf{P}_{2}\mathbf{P}_{3}}a_{\mathbf{P}_{1}}^{\dagger}a_{\mathbf{P}_{2}}^{\dagger}a_{\mathbf{P}_{3}}a_{\mathbf{P}_{1}+\mathbf{P}_{2}-\mathbf{P}_{3}} ,
\label{Ham}
\end{eqnarray}
where $a_{\mathbf{P}}^{\dagger }$ and $a_{\mathbf{P}}$ are Bose creation and
annihilation operators for dipolar excitons with momentum $\mathbf{P}$,
$S$ is a normalization area for the system, $\varepsilon_{0}(P,\Theta)$ is the
angular-dependent energy spectrum of non-interacting dipolar excitons, given
by Eq.\ (\ref{eps0pol}), and $g$ is a coupling constant for the interaction
between two dipolar excitons.
\medskip
\par
We expect that at $T=0$ K almost all dipolar excitons condense into
a BEC. One can treat this weakly interacting gas of
dipolar excitons within the Bogoliubov
approximation~\cite{Abrikosov,Lifshitz}. The Bogoliubov
approximation for a weakly interacting Bose gas allows us to
diagonalize the many-particle Hamiltonian, replacing the product of
four operators in the interaction term by the product of two
operators. This is justified under the assumption that most of the
particles belong to the BEC, and only the interactions between the
condensate and non-condensate particles are taken into account,
while the interactions between non-condensate particles are
neglected. The condensate operators are replaced by
numbers \cite{Abrikosov}, and the resulting Hamiltonian is quadratic
with respect to the creation and annihilation operators. Employing
the Bogoliubov approximation~\cite{Lifshitz}, we obtain the chemical
potential $\mu$ of the entire exciton system by minimizing
$\hat{H}_{0}-\mu \hat{N}$ with respect to the 2D concentration $n$,
where $\hat{N}$ denotes the number operator. The latter is
\begin{eqnarray}
\hat{N}=\sum_{\mathbf{P}}a_{\mathbf{P}}^{\dagger }a_{\mathbf{P}},
\label{Nop}
\end{eqnarray}
while $\hat{H}_{0}$ is the Hamiltonian describing the particles in the
condensate with zero momentum $\mathbf{P}=0$. The minimization of
$\hat{H}_{0}-\mu \hat{N}$ with respect to the number of excitons
$N=Sn$ results in the standard expression~\cite{Abrikosov,Lifshitz}
\begin{eqnarray}
\mu = g n . \label{mu1}
\end{eqnarray}
Following the procedure presented in Ref.\ [\onlinecite{BKKL}], the
interaction parameters for the exciton-exciton interaction in very
dilute systems can be obtained by assuming that the exciton-exciton
dipole-dipole repulsion exists only at distances between excitons
greater than the distance from the exciton to the classical turning
point. The distance between two excitons cannot be less than this
distance, which is determined by the condition reflecting the fact
that the energy of two excitons cannot exceed twice the chemical
potential $\mu $ of the system, i.e.,
\begin{eqnarray}
U(R_{0})= 2\mu . \label{cond}
\end{eqnarray}
In Eq.~(\ref{cond}) $U(R_{0})$ is the potential of interaction
between two dipolar excitons at the distance $R_{0}$, where $R_{0}$
corresponds to the distance between two dipolar excitons at their
classical turning point.
For our model we investigate the formation of dipolar excitons in a
phosphorene double layer with the use of the Keldysh and Coulomb interactions.
Therefore, it is reasonable to adopt the general approach for
treating collective excitations of dipolar excitons. If the distance
between two dipolar excitons is $R$ and the electron and hole of one
dipolar exciton interact with the electron and hole of the other
dipolar exciton, it is straightforward to show that the
exciton-exciton interaction $U(R)$ has the form:
\begin{equation}
U(R)=2V(R)-2V\left(R\sqrt{1+\frac{D^{2}}{R^{2}}}\right),
\label{Keldysh Dipole}
\end{equation}
where $V(R)$ represents the interaction potential between two
electrons or two holes in the same phosphorene monolayer. We can
assume the potential $V(R)$ to be given by either Keldysh potential
(\ref{Keldysh}) or by Coulomb potential.
In a very dilute system of dipolar excitons, where $D \ll
R$, one may expand the second term in Eq. (\ref{Keldysh Dipole}) in
terms of $(D/R)^{2}$ and, retaining only the first order terms
with respect to $(D/R)^{2}$, finally obtain
\begin{eqnarray} \label{Dipolar Keldysh Approx}
U(R)=\left\{
\begin{array}{c}
\frac{\pi ke^{2}D^{2}}{2\varepsilon _{d}\rho _{0}^{2}R}\left[
Y_{-1}\left( \frac{R}{\rho _{0}}\right) -H_{-1}\left( \frac{R}{\rho _{0}}\right)
\right],\text{ for the Keldysh potential, } \\
\frac{ke^{2}D^{2}}{\epsilon _{d}R^{3}},\text{ for the Coulomb potential.}
\end{array}
\right.
\end{eqnarray}
\medskip
\par
Following the procedure presented in Ref.\ [\onlinecite{BKKL}], one
can obtain the coupling constant for the exciton-exciton
interaction:
\begin{eqnarray}
g = 2\pi \int_{R_{0}}^{\infty} R dR\ U(R)
. \label{gK1}
\end{eqnarray}
Substituting Eq.~(\ref{Dipolar Keldysh Approx}) into
Eq.~(\ref{gK1}), one obtains the exciton-exciton coupling
constant $g$ as follows:
\begin{eqnarray} \label{gK}
g=\left\{
\begin{array}{c}
\frac{2\pi ^{2}ke^{2}D^{2}}{2\epsilon _{d}\rho _{0}}\left[ H_{0}\left( \frac{R_{0}}{\rho _{0}}\right) -Y_{0}\left( \frac{R_{0}}{\rho _{0}}\right)
\right]
,\text{ for the Keldysh potential,} \\
\frac{2\pi ke^{2}D^{2}}{\epsilon _{d}R_{0}},\text{ for the Coulomb potential.}
\end{array}
\right.
\end{eqnarray}
Combining Eqs.\ (\ref{cond}),~(\ref{Dipolar Keldysh Approx})
and~(\ref{gK}), for the Keldysh potential we obtain the following
equation for $R_{0}$:
\begin{eqnarray}
4 \pi n \rho_{0}^{2} y \left[ H_{0}(y)-Y_{0}(y)\right] = - \left[
H_{-1}(y)-Y_{-1}(y)\right]
, \label{R0K}
\end{eqnarray}
where $y = R_{0}/\rho_{0}$.
Combining Eqs.\ (\ref{cond}),~(\ref{Dipolar Keldysh Approx})
and~(\ref{gK}), we obtain the following expression for $R_{0}$ in the case of Coulomb
potential
\begin{eqnarray}
R_{0} = \frac{1}{2\sqrt{\pi n}} . \label{r0}
\end{eqnarray}
From Eqs.\ (\ref{r0}),~(\ref{gK}) and~(\ref{mu1}), one obtains the
exciton-exciton coupling constant $g$ for the Coulomb potential
\begin{eqnarray}
g=\frac{4\pi ke^{2}D^{2}\sqrt{\pi n}}{\epsilon _{d}} .
\label{geqeq1}
\end{eqnarray}
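The following Python sketch shows how Eqs.\ (\ref{R0K}), (\ref{r0}), (\ref{gK}) and (\ref{geqeq1}) can be evaluated numerically; the interlayer separation $D$, the screening length $\rho_{0}$ and the dielectric constant are assumed illustrative values rather than the parameters used for Fig.~\ref{Figg}, and $H_{-1}$, $Y_{-1}$ are again obtained from the standard recurrences.
\begin{verbatim}
# Turning-point distance R_0 and coupling constant g for both potentials,
# Eqs. (R0K), (r0), (gK), (geqeq1).  D, rho_0 and eps_d are assumed values.
import numpy as np
from scipy.special import struve, yv
from scipy.optimize import brentq

e, k  = 1.602176634e-19, 8.9875517923e9
n     = 2.0e16                 # exciton concentration, m^-2
D     = 2.3e-9                 # interlayer separation, m (assumed, ~7 h-BN layers)
eps_d = 4.5                    # dielectric constant of the barrier (assumed)
rho0  = 1.0e-9                 # Keldysh screening length, m (assumed)

H_m1 = lambda x: 2.0 / np.pi - struve(1, x)   # H_{-1} = 2/pi - H_1
Y_m1 = lambda x: -yv(1, x)                    # Y_{-1} = -Y_1

# Coulomb potential: Eqs. (r0) and (geqeq1)
R0_C = 1.0 / (2.0 * np.sqrt(np.pi * n))
g_C  = 4.0 * np.pi * k * e**2 * D**2 * np.sqrt(np.pi * n) / eps_d

# Keldysh potential: solve Eq. (R0K) for y = R_0/rho_0, then use Eq. (gK)
f = lambda y: (4.0 * np.pi * n * rho0**2 * y * (struve(0, y) - yv(0, y))
               + (H_m1(y) - Y_m1(y)))
y0   = brentq(f, 1.0e-3, 50.0)
R0_K = y0 * rho0
g_K  = (2.0 * np.pi**2 * k * e**2 * D**2 / (2.0 * eps_d * rho0)
        * (struve(0, y0) - yv(0, y0)))

print(f"Coulomb:  R_0 = {R0_C * 1e9:.2f} nm,  g = {g_C:.2e} J m^2")
print(f"Keldysh:  R_0 = {R0_K * 1e9:.2f} nm,  g = {g_K:.2e} J m^2")
\end{verbatim}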
\medskip
\par
The coupling constant $g$ and the distance $R_{0}$ between two
dipolar excitons at the classical turning point for the Keldysh and
Coulomb potentials for a phosphorene double layer as functions of
the exciton concentration are represented in Fig.~\ref{Figg}.
According to Fig.~\ref{Figg}, $R_{0}$ decreases with the increase
of the exciton concentration $n$. While for the Coulomb potential
$R_{0}$ is slightly larger than for the Keldysh potential, the
difference is very small. As shown in Fig.~\ref{Figg}, the
coupling constant $g$ is larger for the Coulomb potential than
for the Keldysh potential, because the interaction between the
charge carriers, interacting via the Keldysh potential, is
suppressed by the screening effects. The difference between $g$ for
the Keldysh and Coulomb potentials increases as the exciton
concentration $n$ increases.
\begin{figure}[h]
\centering
\includegraphics[width=15.0cm]{Fig3.eps}
\caption{(Color online) The coupling constant $g$ and the
distance $R_{0}$ between two dipolar excitons at the classical
turning point for the Keldysh and Coulomb potentials for a
phosphorene double layer as functions of the exciton concentration.
The number of $h$-BN monolayers between the phosphorene monolayers
is $N_{L} = 7$.} \label{Figg}
\end{figure}
\medskip
\par
The many-particle Hamiltonian of dipolar excitons in a black
phosphorene double layer given by Eq.\ (\ref{Ham}) is standard
for a weakly interacting Bose gas with the only difference being that the
single-particle energy spectrum of non-interacting excitons is
angular-dependent due to the orientation variation of the exciton effective
mass. Whereas the first term in Eq.\ (\ref{Ham}) which is responsible for the
single-particle kinetic energy is angular dependent, the second interaction
term in Eq.\ (\ref{Ham}) does not depend on an angle because the
dipole-dipole repulsion between excitons does not depend on an angle.
Therefore, for a weakly interacting gas of dipolar excitons in a black
phosphorene double layer, in the framework of the Bogoliubov approximation, we can apply exactly the same procedure as adopted for a standard weakly
interacting Bose gas \cite{Abrikosov,Lifshitz}, but taking into account the
angular dependence of a single-particle energy spectrum of dipolar excitons.
Therefore, the Hamiltonian $\hat{H}_{col}$ of the collective excitations in
the Bogoliubov approximation for the weakly interacting gas of dipolar
excitons in black phosphorene is given by
\begin{eqnarray}
\hat{H}_{col} = \sum_{P \neq 0,\Theta}\varepsilon(P,\Theta)\alpha_{\mathbf{P}}^{\dagger}\alpha_{\mathbf{P}} ,
\label{Hamq}
\end{eqnarray}
where $\alpha_{\mathbf{P}}^{\dagger}$ and $\alpha_{\mathbf{P}}$ are the
creation and annihilation Bose operators for the quasiparticles with the
energy dispersion corresponding to the angular dependent spectrum of the
collective excitations $\varepsilon(P,\Theta)$, described by
\begin{eqnarray}
\varepsilon(P,\Theta) = \left[ \left(\varepsilon_{0}(P,\Theta) +
gn\right)^{2} - \left(gn\right)^{2}\right]^{1/2} .
\label{colsp}
\end{eqnarray}
\medskip
\par
In the limit of small momenta $P$, when $\varepsilon_{0}(P,\Theta)
\ll gn$, we expand the spectrum of collective excitations
$\varepsilon(P,\Theta)$ up to first order with respect to the
momentum $P$ and obtain the sound mode of the collective excitations
$\varepsilon(P,\Theta) = c_{S}(\Theta) P $, where $c_{S}(\Theta)$ is
the angular dependent sound velocity, given by
\begin{eqnarray}
c_{S}(\Theta)=\sqrt{\frac{gn}{M_{0}(\Theta)} } . \label{cs}
\end{eqnarray}
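As an illustration of Eq.\ (\ref{cs}), the short sketch below evaluates the anisotropic sound velocity for an assumed coupling constant of the order given by Eq.\ (\ref{geqeq1}) and assumed exciton masses; it is only meant to show the $\Theta$ dependence, not to reproduce Fig.~\ref{Fig4}.
\begin{verbatim}
# Anisotropic sound velocity, Eq. (cs); g is an assumed value of the order
# given by the Coulomb expression (geqeq1), M_x and M_y are assumed masses.
import numpy as np

m0 = 9.1093837015e-31
Mx, My = 0.3 * m0, 5.0 * m0        # assumed exciton masses, kg
n  = 2.0e16                        # exciton concentration, m^-2
g  = 8.5e-37                       # coupling constant, J m^2 (assumed)

def M0(theta):                     # Eq. (M0)
    return 1.0 / (np.cos(theta)**2 / Mx + np.sin(theta)**2 / My)

for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"Theta = {theta:.2f} rad:  c_S = {np.sqrt(g * n / M0(theta)):.2e} m/s")
\end{verbatim}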
\medskip
\par
The asymmetry of the electron and hole dispersion in black
phosphorene is reflected in the angular dependence of the sound
velocity through the angular dependence of the effective exciton
mass. The angular dependence of the sound velocity for the Keldysh
and Coulomb potentials is presented in Fig.~\ref{Fig4}, where it is
demonstrated that the exciton sound velocity is maximal at $\Theta
= 0$ and $\Theta = \pi$ and minimal at $\Theta = \pi/2$. As
follows from a comparison of Fig.~\ref{Fig4}a with Fig.~\ref{Fig4}b,
at the same parameters, the sound velocity $c_{S}(\Theta)$ is
greater in the case of the Coulomb potential for the
interaction between the charge carriers than for the Keldysh
potential, because the Keldysh potential includes the screening
effects, which make the interaction between the carriers weaker.
According to Fig.~\ref{Fig4}, the sound velocity depends on the
effective electron and hole masses. However, the sound velocities
coincide at all angles $\Theta$ for the two sets of masses from
Refs.~[\onlinecite{Paez2014}] and~[\onlinecite{Qiao2014}],
respectively. Since at low momenta the sound-like energy spectrum
of collective excitations in the dipolar exciton system in a
phosphorene double layer satisfies the Landau
criterion for superfluidity, the dipolar exciton superfluidity in a
black phosphorene double layer is possible. Let us mention that the
exciton concentration used for the calculations presented in
Fig.~\ref{Fig4} and below corresponds in order of
magnitude to the experimental values~\cite{Warren,Surrente}.
\begin{figure}[h]
\centering
\includegraphics[width=18.0cm]{Fig4.eps} \vspace{-2.7cm}
\caption{(Color online) The angular dependence of the sound
velocity. (a) The interaction between the carriers is described by
the Keldysh potential. (b) The interaction between the carriers is
described by the Coulomb potential. The calculations were performed
for the exciton concentration $n= 2 \times 10^{16} \
\mathrm{m^{-2}}$ and the number $N_{L} = 7$ of $h$-BN monolayers,
placed between two phosphorene monolayers.} \label{Fig4}
\end{figure}
\medskip
\par
\section{Superfluidity of dipolar excitons in a black phosphorene double layer}
\label{super}
\medskip
\par
Since, at small momenta, the energy spectrum of the quasiparticles
for a weakly interacting gas of dipolar excitons is sound-like,
the system satisfies the Landau
criterion for superfluidity \cite{Abrikosov,Lifshitz}. The critical
exciton velocity for superfluidity is angular-dependent, and it is
given by $v_{c}(\Theta)= c_{S}(\Theta)$, because the quasiparticles
are created at velocities above the angle-dependent velocity of
sound.
According to Fig.~\ref{Fig4}, the critical exciton
velocity for superfluidity is maximal at $\Theta = 0$ and $\Theta =
\pi$ and minimal at $\Theta = \pi/2$. Therefore, as shown in
Fig.~\ref{Fig4}a, if the excitons move with velocities in the
range approximately between $8 \times 10^{3} \ \mathrm{m/s}$ and
$3.4 \times 10^{4} \ \mathrm{m/s}$, superfluidity is present for
the angles at the edges of the angle range between $\Theta = 0$ and
$\Theta = \pi$, while the superfluidity is absent at the center of
this angle range.
\medskip
\par
The density of the superfluid component $\rho_{s}(T)$ is defined as
$\rho_{s}(T) = \rho - \rho_{n}(T)$, where $\rho$ is the total 2D density of
the system and $\rho_{n}(T)$ is the density of the normal component. We
define the normal component density $\rho_{n}(T)$ in the usual
way.\cite{Pitaevskii}. Suppose that the excitonic system moves with a velocity
$\mathbf{u}$, which means that the superfluid component moves with the
velocity $\mathbf{u}$. At nonzero temperatures $T$ dissipating
quasiparticles will appear in this system. Since their density is small at
low temperatures, one may assume that the gas of quasiparticles is an ideal
Bose gas. To calculate the superfluid component density, we define the total
mass current $\mathbf{J}$ for a Bose-gas of quasiparticles in the frame of reference
where the superfluid component is at rest, by
\begin{eqnarray}
\mathbf{J}=\int \frac{s d^{2}P}{(2\pi \hbar )^{2}}\mathbf{P} f\left[
\varepsilon(P,\Theta)-\mathbf{P}\mathbf{u}\right] . \label{nnor}
\end{eqnarray}
In Eq.\ (\ref{nnor}) $f\left[ \varepsilon(P,\Theta)\right] =\left( \exp \left[
\varepsilon(P,\Theta)/(k_{B}T)\right] -1\right)^{-1}$ is the
Bose-Einstein distribution function for quasiparticles with the
angle-dependent dispersion $\varepsilon(P,\Theta)$, $s = 4$
is the spin degeneracy factor, and $k_{B}$ is the Boltzmann
constant. Expanding the integrand of Eq.\ (\ref{nnor}) in terms of
$\mathbf{P}\mathbf{u}/(k_{B}T)$ and restricting ourselves to
the first order term, we obtain
\begin{eqnarray}
\mathbf{J}= - \frac{s}{k_{B} T}\int \frac{d^{2}P}{(2\pi \hbar )^{2}}
\mathbf{P}\left(\mathbf{Pu}\right) \frac{\partial f\left[
\varepsilon(P,\Theta)\right] }{\partial \varepsilon(P,\Theta)} .
\label{J_Tot}
\end{eqnarray}
The normal density $\rho_{n}$ in the anisotropic system has a tensor form \cite{Saslow}.
We define the tensor elements for the normal component density
$\rho_{n}^{(ij)}(T)$ by
\begin{eqnarray}
J_{i} = \rho_{n}^{(ij)}(T) u _{j} ,
\label{rhodef}
\end{eqnarray}
where $i$ and $j$ denote either the $x$ or $y$ component of the
vectors. Assuming that the vector ${\bf u}\uparrow\uparrow OX$
($\uparrow\uparrow$ denotes that ${\bf u}$ is parallel to the $OX$
axis and has the same direction as the $OX$ axis),
we have ${\bf u} = u_{x}{\bf i}$
and ${\bf P} = P_{x}{\bf i} + P_{y}{\bf j}$. Therefore, we obtain
\begin{eqnarray}
{\bf P}\cdot {\bf u} &=& P_{x}u_{x},
\nonumber \\
{\bf P}\left({\bf P}\cdot{\bf u}\right) &=& P_{x}^{2}u_{x}{\bf i} +
P_{x}P_{y}u_{x}{\bf j} ,
\label{i2}
\end{eqnarray}
where ${\bf i}$ and ${\bf j}$ are unit vectors in the $x$ and $y$
directions, respectively. Upon substituting Eq.\ (\ref{i2}) into
Eq.\ (\ref{J_Tot}), one obtains
\begin{eqnarray}
J_{x}= - \frac{s}{k_{B} T}\int_{0}^{\infty}dP
\frac{P^{3}}{(2\pi \hbar )^{2}} \int_{0}^{2\pi} d \Theta \frac{\partial f
\left[ \varepsilon(P,\Theta)\right] }{\partial \varepsilon(P,\Theta)}
\cos^{2}\Theta u_{x} .
\label{Jx}
\end{eqnarray}
Using the definition of the density for the normal component from
Eq.\ (\ref{rhodef}), we obtain
\begin{eqnarray}
\rho_{n}^{(xx)}(T) = \frac{s}{k_{B} T}\int_{0}^{\infty} dP\frac{
P^{3} }{(2\pi \hbar )^{2}}
\int_{0}^{2\pi} d \Theta \frac{\exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right]}{\left( \exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right]
-1\right)^{2}}\cos^{2}\Theta .
\label{rhonx}
\end{eqnarray}
Substitution of Eq.\ (\ref{i2}) into Eq.\ (\ref{J_Tot}) gives
\begin{eqnarray}
J_{y}&=& - \frac{s}{k_{B} T}\int \frac{d^{2}P}{(2\pi \hbar )^{2}}
P_{x}P_{y} \frac{\partial f\left[ \varepsilon(P,\Theta)\right]
}{\partial \varepsilon(P,\Theta)} u_{x}
\nonumber \\
&=& \frac{s}{k_{B} T}\int_{0}^{\infty}dP \frac{P^{3}}{(2\pi \hbar
)^{2}}
\int_{0}^{2\pi} d \Theta \frac{\exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right]}{\left( \exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right]
-1\right)^{2}} \cos\Theta \sin\Theta u_{x}= 0 .
\label{Jy}
\end{eqnarray}
The integral in Eq.~(\ref{Jy}) equals zero, since the integration
of the function over the angle $\Theta$
over its period gives zero. Therefore, one obtains
$\rho_{n}^{(xy)} = 0$.
\medskip
\par
Now assuming the vector ${\bf u}\uparrow\uparrow OY$,
we obtain analogously the following relations:
\begin{eqnarray}
\rho_{n}^{(yy)}(T) &=& \frac{s}{k_{B} T}\int_{0}^{\infty} dP \frac{ P^{3}}{(2\pi \hbar )^{2}} \int_{0}^{2\pi} d \Theta \frac{\exp \left[
\varepsilon(P,\Theta)/(k_{B}T)\right]}{\left( \exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right] -1\right)^{2}} \sin^{2} \Theta , \nonumber \\
\rho_{n}^{(yx)}(T) &=& 0 \ .
\label{rhony}
\end{eqnarray}
\medskip
\par
By defining the tensor of the concentration of the normal component as the
linear response of the flow of quasiparticles to the external velocity as
$n_{n}^{(ij)} = \rho_{n}^{(ij)}/M_{i}$, one obtains:
\begin{eqnarray}
n_{n}^{(xx)}(T) &=& \frac{s}{k_{B} M_{x}T}\int_{0}^{\infty}dP \frac{
P^{3} }{(2\pi \hbar )^{2}} \int_{0}^{2\pi} d \Theta \frac{\exp
\left[
\varepsilon(P,\Theta)/(k_{B}T)\right]}{\left( \exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right] -1\right)^{2}} \cos^{2} \Theta , \nonumber \\
n_{n}^{(xy)}(T) &=& 0 \ \nonumber \\
n_{n}^{(yy)}(T) &=& \frac{s}{k_{B} M_{y}T}\int_{0}^{\infty}dP \frac{
P^{3} }{(2\pi \hbar )^{2}} \int_{0}^{2\pi} d \Theta \frac{\exp
\left[
\varepsilon(P,\Theta)/(k_{B}T)\right]}{\left( \exp \left[ \varepsilon(P,\Theta)/(k_{B}T)\right] -1\right)^{2}} \sin^{2} \Theta , \nonumber \\
n_{n}^{(yx)}(T) &=& 0 . \label{nnxy}
\end{eqnarray}
The linear response of the flow of quasiparticles $\mathbf{J}_{qp}$
with respect to the external velocity at any angle measured from the
$OX$ direction is given in terms of the angle dependent
concentration for the normal component $\tilde{n}_{n}(\Theta, T)$
as
\begin{eqnarray}
\left|\mathbf{J}_{qp}\right| &=& \left|n_{n}^{(xx)}(T)u_{x}{\bf i} +
n_{n}^{(yy)}(T)u_{y}{\bf j}\right| \nonumber \\
&=& \sqrt{\left[n_{n}^{(xx)}(T)\right]^{2}u^{2}\cos^{2} \Theta + \left[
n_{n}^{(yy)}(T)\right]^{2} u^{2} \sin^{2} \Theta } =
\tilde{n}_{n}(\Theta, T)u , \label{nabs}
\end{eqnarray}
where the concentration of the normal component
$\tilde{n}_{n}(\Theta, T)$ is
\begin{eqnarray}
\tilde{n}_{n}(\Theta, T) =
\sqrt{\left[n_{n}^{(xx)}(T)\right]^{2}\cos^{2} \Theta +
\left[n_{n}^{(yy)}(T)\right]^{2} \sin^{2} \Theta } . \label{nnang}
\end{eqnarray}
From Eq.~(\ref{nnang}) it follows that $n_{n}^{(xx)} = \tilde{n}_{n}(\Theta =
0)$ and $n_{n}^{(yy)} = \tilde{n}_{n}(\Theta = \frac{\pi}{2})$.
Eq.~(\ref{nnang}) can be rewritten in the following form:
\begin{eqnarray}
\tilde{n}_{n}(\Theta, T) =
\sqrt{\frac{\left[n_{n}^{(xx)}(T)\right]^{2} +
\left[n_{n}^{(yy)}(T)\right]^{2}}{2} +
\frac{\left(\left[n_{n}^{(xx)}(T)\right]^{2} -
\left[n_{n}^{(yy)}(T)\right]^{2}\right)\cos\left( 2 \Theta
\right)}{2}} . \label{nnang1}
\end{eqnarray}
\medskip
\par
We define the angle dependent concentration of the superfluid
component $\tilde{n}_{s}(\Theta, T)$ by
\begin{eqnarray}
\tilde{n}_{s}(\Theta, T) = n - \tilde{n}_{n}(\Theta, T) ,
\label{superfli}
\end{eqnarray}
where $n$ is the total concentration of the dipolar excitons. The
mean field critical temperature $T_{c}(\Theta)$ of the phase
transition related to the occurrence of superfluidity in the
direction with the angle $\Theta$ relative to the $x$ direction is
determined by the condition
\begin{eqnarray}
\tilde{n}_{n}(\Theta, T_{c}(\Theta)) = n . \label{Tc}
\end{eqnarray}
\subsection{Superfluidity for the sound-like spectrum
of collective excitations}
\medskip
\par
For small momenta, substituting the sound spectrum of collective
excitations $\varepsilon(P,\Theta) = c_{S}(\Theta) P $ with the
angular-dependent sound velocity $c_{S}(\Theta)$, given by
Eq.~(\ref{cs}), into Eq.~(\ref{nnxy}), we obtain
\begin{eqnarray}
n_{n}^{(xx)}(T) &=& \frac{2s(k_{B}T)^{3}\zeta (3)}{(\pi
\hbar)^{2}M_{x}} \int_{0}^{2\pi} \frac{\cos^{2}
\Theta}{c_{S}^{4}(\Theta)} d \Theta =
\frac{2s(k_{B}T)^{3}\zeta (3)}{(\pi \hbar
g n)^{2}M_{x}} \int_{0}^{2\pi} \frac{\cos^{2} \Theta}{\left(\frac{\cos^{2}\Theta}{M_{x}} + \frac{\sin^{2}\Theta}{M_{y}} \right)^{2} } d \Theta
, \nonumber \\
n_{n}^{(xy)}(T) &=& 0 , \nonumber \\
n_{n}^{(yy)}(T) &=& \frac{2s(k_{B}T)^{3}\zeta (3)}{(\pi \hbar
)^{2}M_{y}} \int_{0}^{2\pi} \frac{\sin^{2}
\Theta}{c_{S}^{4}(\Theta)} d \Theta = \frac{2s(k_{B}T)^{3}\zeta
(3)}{(\pi \hbar g n)^{2}M_{y}} \int_{0}^{2\pi} \frac{\sin^{2}
\Theta}{\left(\frac{\cos^{2}\Theta}{M_{x}} + \frac{\sin^{2}\Theta}{M_{y}} \right)^{2} } d \Theta , \nonumber \\
n_{n}^{(yx)}(T) &=& 0 , \label{nnxy00}
\end{eqnarray}
where $\zeta (z)$ is the Riemann zeta function ($\zeta (3)\simeq
1.202$).
The integrals in Eq.~(\ref{nnxy00}) can be evaluated analytically.
Substituting the following expressions
\begin{eqnarray}
\int_{0}^{2\pi} \frac{\cos^{2} \Theta}{\left(\frac{\cos^{2}\Theta}{M_{x}} + \frac{\sin^{2}\Theta}{M_{y}} \right)^{2} } d \Theta = \pi M_{x}\sqrt{M_{x}M_{y}} ,
\hspace{2cm} \int_{0}^{2\pi} \frac{\sin^{2}
\Theta}{\left(\frac{\cos^{2}\Theta}{M_{x}} + \frac{\sin^{2}\Theta}{M_{y}} \right)^{2} } d \Theta = \pi M_{y}\sqrt{M_{x}M_{y}} ,
\label{angint}
\end{eqnarray}
into Eq.~(\ref{nnxy00}), one obtains
\begin{eqnarray}
n_{n}^{(xx)}(T) = n_{n}^{(yy)}(T) = \frac{2 \zeta
(3)s(k_{B}T)^{3}\sqrt{M_{x}M_{y}}}{\pi (\hbar g n)^{2}} ,
\hspace{2cm} n_{n}^{(xy)}(T) = n_{n}^{(yx)}(T) = 0 .
\label{nnxyfinal}
\end{eqnarray}
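The angular integrals in Eq.~(\ref{angint}) can easily be checked numerically for arbitrary positive masses, e.g.\ as follows.
\begin{verbatim}
# Numerical check of the angular integrals of Eq. (angint).
import numpy as np
from scipy.integrate import quad

Mx, My = 0.3, 5.0                                  # any positive values
den = lambda t: (np.cos(t)**2 / Mx + np.sin(t)**2 / My)**2

Ix, _ = quad(lambda t: np.cos(t)**2 / den(t), 0.0, 2.0 * np.pi)
Iy, _ = quad(lambda t: np.sin(t)**2 / den(t), 0.0, 2.0 * np.pi)
print(Ix, np.pi * Mx * np.sqrt(Mx * My))           # the two numbers agree
print(Iy, np.pi * My * np.sqrt(Mx * My))
\end{verbatim}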
Let us mention that Eq.~(\ref{angint}) is valid if
$\frac{M_{x}}{2\left(M_{y} - M_{x}\right)}>0$, which is true for a
phosphorene double layer. Note that for the anisotropic superfluid,
formed by paired fermions,
the relation $n_{n}^{(xx)}(T) = n_{n}^{(yy)}(T)$ is also valid~\cite{Saslow}.
Under the assumption of the sound-like spectrum of collective excitations,
using Eq.~(\ref{nnxyfinal}), which implies $n_{n}^{(xx)}(T) =
n_{n}^{(yy)}(T)$, one obtains from Eq.~(\ref{nnang}) the concentration of the normal
component $\tilde{n}_{n}(T)$ as
\begin{eqnarray}
\tilde{n}_{n}(T) = n_{n}^{(xx)}(T) = n_{n}^{(yy)}(T) = \frac{2\zeta
(3)s(k_{B}T)^{3}\sqrt{M_{x}M_{y}}}{\pi (\hbar g n)^{2}} .
\label{nnsound}
\end{eqnarray}
Therefore, in the case of the sound-like spectrum of
collective excitations, the concentration of the superfluid
component $\tilde{n}_{s}(T)$ is given by
\begin{eqnarray}
\tilde{n}_{s}(T) = n - \frac{2 \zeta
(3)s(k_{B}T)^{3}\sqrt{M_{x}M_{y}}}{\pi (\hbar g n)^{2}} .
\label{nssound}
\end{eqnarray}
It follows from Eqs.~(\ref{nnsound}) and~(\ref{nssound}) that for the sound-like spectrum of collective excitations, the concentrations of the
normal and superfluid components do not depend on an angle.
For the sound-like spectrum of collective excitations, the mean
field critical temperature $T_{c}$ can be obtained by substituting
Eq.~(\ref{nnsound}) into the condition $\tilde{n}_{n}(T_{c}) = n$,
which yields
\begin{eqnarray}
T_{c} = \left(\frac{\pi(\hbar g)^{2}}{2\zeta
(3)s\sqrt{M_{x}M_{y}}}\right)^{1/3}\frac{n}{k_{B}} . \label{Tcso}
\end{eqnarray}
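As a numerical sketch of Eq.~(\ref{Tcso}), the code below evaluates $T_{c}$ with the same assumed inputs as in the previous sketches ($g$ of the order given by the Coulomb expression (\ref{geqeq1}), $M_{x}M_{y}\approx 1.5\,m_{0}^{2}$, $n = 2\times 10^{12}\ \mathrm{cm^{-2}}$); with these assumed values it happens to give a temperature of the order reported in Table~\ref{tab2} for the Coulomb potential.
\begin{verbatim}
# Mean field critical temperature for the sound-like spectrum, Eq. (Tcso).
# g, M_x, M_y and n are the same assumed values as in the sketches above.
import numpy as np
from scipy.special import zeta

hbar, kB, m0 = 1.054571817e-34, 1.380649e-23, 9.1093837015e-31
s      = 4                       # spin degeneracy factor
Mx, My = 0.3 * m0, 5.0 * m0      # assumed exciton masses, kg
n      = 2.0e16                  # exciton concentration, m^-2
g      = 8.5e-37                 # coupling constant, J m^2 (assumed)

Tc = (np.pi * (hbar * g)**2
      / (2.0 * zeta(3) * s * np.sqrt(Mx * My)))**(1.0 / 3.0) * n / kB
print(f"T_c = {Tc:.0f} K")
\end{verbatim}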
It follows from Eq.~(\ref{Tcso}) that under the assumption about the
sound-like spectrum of collective excitations, the mean field
critical temperature $T_{c}$ does not depend on an angle. The mean
field critical temperature of the superfluidity $T_{c}$ for the
Keldysh and Coulomb potentials for the
sound-like spectrum of collective excitations obtained by using Eq.~(\ref{Tcso}) as a
function of the interlayer separation $D$, is presented in
Fig.~\ref{Fig44}. The calculations are performed for the sets of effective electron and hole masses from Refs.~\onlinecite{Peng2014,Tran2014-2,Paez2014,Qiao2014}.
Comparing Fig.~\ref{Fig44}a with
Fig.~\ref{Fig44}b, one concludes that at the same parameters, the
critical temperature for the superfluidity $T_{c}$
is much larger for the Coulomb
potential than for the Keldysh potential, because the sound velocity
for the Coulomb potential is larger than for the Keldysh potential
due to the screening effects, implied by the Keldysh potential. However, for both potentials
the mean field critical temperature for superfluidity shows a similar dependence on the electron and hole effective masses.
\medskip
\par
\begin{table}[b]
\caption{The critical temperatures under the assumption about
the sound-like spectrum of collective excitations
for different sets of masses from Refs. \cite{Peng2014}, \cite{Tran2014-2}, \cite{Paez2014}, and \cite{Qiao2014}. The phosphorene
layers are separated by 7 layers of h-BN. $\mu _{0}$ and
$M_{x}M_{y}$ are expressed in units of free electron mass $m_{0}$
and $m_{0}^{2},$ respectively.}
\begin{center}
\begin{tabular}{ccccc}
\hline\hline Mass from Ref: & \cite{Peng2014} & \cite{Tran2014-2} &
\cite{Paez2014} & \cite{Qiao2014} \\ \hline
$\mu _{0},$ $\times $10$^{-2}m_{0}$ & 3.99 & 4.11 & 4.84 & 4.79 \\
Coulomb potential $T_{c},$ K & 182 & 192 & 174 & 172 \\
Keldysh potential $T_{c},$ K & 115 & 121 & 109 & 107 \\
$M_{x}M_{y},$ $\times m_{0}^{2}$ & 1.67 & 1.23 & 2.24 & 2.39 \\
\hline\hline
\end{tabular}
\end{center}
\label{tab2}
\end{table}
As demonstrated in Table~\ref{tab2}, the
critical temperature for the superfluidity $T_{c}$ decreases when
$M_{x}M_{y}$ increases. Therefore, $T_{c}$ is sensitive to the
electron and hole effective masses.
\medskip
\par
Assuming the sound-like spectrum of collective
excitations, the mean field critical temperature of the
superfluidity $T_{c}$ obtained by using Eq.~(\ref{Tcso}) as a
function of the exciton concentration $n$ and the interlayer
separation $D$, is presented in Fig.~\ref{Fig5}. While the
calculations, presented in Fig.~\ref{Fig5}, were performed for the
Coulomb potential, one can obtain a similar behavior for the mean
field critical temperature of the superfluidity by employing the
Keldysh potential. According to Figs.~\ref{Fig44} and~\ref{Fig5},
the mean field critical temperature of the superfluidity $T_{c}$ is
an increasing function of the exciton concentration $n$ and the
interlayer separation $D$.
\begin{figure}[h]
\centering
\includegraphics[width=18.0cm]{Fig5.eps} \vspace{-2.7cm}
\caption{(Color online) The mean field critical temperature for
superfluidity $T_{c}$ for a phosphorene double layer as a function
of the interlayer separation $D$, assuming the sound-like spectrum
of collective excitations. (a) The interaction between the carriers
is described by the Keldysh potential. (b) The interaction between
the carriers is described by the Coulomb potential. The exciton
concentration is $n = 2 \times 10^{12} \ \mathrm{cm^{-2}}$.}
\label{Fig44}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=20.0cm]{Fig6.eps}
\vspace{-3cm} \caption{(Color online) The critical temperature for
superfluidity $T_{c}$ for a phosphorene double layer as a function
of the exciton concentration $n$ and the interlayer separation $D$,
assuming the sound-like spectrum of collective excitations. The
calculations are performed for the Coulomb potential. The set of
masses is taken from Ref.~[\onlinecite{Paez2014}].} \label{Fig5}
\end{figure}
\subsection{Superfluidity when the spectrum
of collective excitations is given by Eq.~(\ref{colsp})}
Beyond the assumption of the sound-like spectrum,
substituting Eq.~(\ref{colsp}) for the spectrum of collective
excitations into Eq.~(\ref{nnxy}), and using Eq.~(\ref{nnang1}), we
obtain the mean field critical temperature of the superfluidity
$T_{c}(\Theta)$, by solving numerically Eq.~(\ref{Tc}). Since in
this case $n_{n}^{(xx)}(T) \neq n_{n}^{(yy)}(T)$, the mean field
critical temperature of the superfluidity $T_{c}(\Theta)$ is angular
dependent. The angular dependence of critical temperature
$T_{c}(\Theta)$ for the Keldysh and Coulomb potentials for different
exciton concentrations, calculated by
solving the transcendental equation~(\ref{Tc}), is presented in
Fig.~\ref{Fig6}. According to Fig.~\ref{Fig6}, the mean field
critical temperature of the superfluidity $T_{c}(\Theta)$ is an
increasing function of the exciton concentration $n$. According to
Fig.~\ref{Fig6}, the critical temperature of the
superfluidity is maximal at $\Theta = 0$ and $\Theta = \pi$ and
minimal at $\Theta = \pi/2$.
As follows from a comparison of
Fig.~\ref{Fig6}a with Fig.~\ref{Fig6}b, at the same parameters, the
mean field critical temperature for the superfluidity
$T_{c}(\Theta)$ is greater when one considers the Coulomb
potential for the interaction between the charge carriers than for
the Keldysh potential, because the sound velocity for the Coulomb
potential is greater than for the Keldysh potential due to the
screening effects, taken into account by the Keldysh potential.
It is interesting to mention that the ratio of the maximal critical
temperature $T_{c}^{(\mathrm{max})} = T_{c}(0)$ to the minimal
critical temperature $T_{c}^{\mathrm{(min)}} = T_{c}(\pi/2)$,
$T_{c}^{(\mathrm{max})}/T_{c}^{\mathrm{(min)}}$,
decreases from $3.55$ to $2.69$ for the Keldysh potential, and from
$3.29$ to $2.64$ for the Coulomb potential, when the exciton
density increases from $n = 2 \times 10^{11} \ \mathrm{cm^{-2}}$ to
$n = 3 \times 10^{12} \ \mathrm{cm^{-2}}$. One concludes that the
angular dependence of the mean field critical temperature $T_{c}$
becomes weaker when the exciton concentration increases.
At a fixed exciton concentration $n$, at temperatures below
$T_{c}^{\mathrm{(min)}}$, exciton superfluidity exists for any
direction of exciton motion with any angle $\Theta$ relative to the
armchair direction, while at temperatures above
$T_{c}^{\mathrm{(max)}}$, exciton superfluidity is absent for any
direction of exciton motion with any angle $\Theta$. At a fixed
exciton concentration $n$, at the temperatures in the range
$T_{c}^{\mathrm{(min)}} < T < T_{c}^{\mathrm{(max)}}$, exciton
superfluidity exists only for the directions of exciton motion with
the angles in the ranges $0< \Theta < \Theta_{c1}(T)$ and
$\Theta_{c2}(T)< \Theta < \pi$, while the superfluidity is absent
for the directions of exciton motion with the angles in the range
$\Theta_{c1}(T) < \Theta < \Theta_{c2}(T)$. The critical angles of
superfluidity $\Theta_{c1}(T)$ and $\Theta_{c2}(T)$ correspond in
Fig.~\ref{Fig6} to the left and right crossing points of the
horizontal line at the temperature $T$ with the curve at the fixed
exciton concentration $n$, respectively.
Let us mention that the critical temperature for the superfluidity
for a BCS-like fermionic superfluid with the anisotropic order
parameter does not depend on the direction of motion of Cooper pairs
because in this case $n_{n}^{(xx)}(T) =
n_{n}^{(yy)}(T)$~\cite{Saslow}.
Let us mention that we chose to use
the set of masses from Ref.~[\onlinecite{Paez2014}], because
this set results in the highest exciton binding
energy. We used the number of $h$-BN monolayers between the
phosphorene monolayers $N_{L} = 7$ for Figs.~\ref{Fig4}
and~\ref{Fig6}, because higher $N_{L}$ corresponds to higher
interlayer separation $D$, which results in higher critical exciton
velocity of superfluidity equal to the sound velocity
$c_{S}(\Theta)$ and higher mean field critical temperature of the
superfluidity $T_{c}(\Theta)$.
\begin{figure}[h]
\centering
\includegraphics[width=18.0cm]{Fig7.eps} \vspace{-2.4cm}
\caption{(Color online) The angular dependence of the critical
temperature for superfluidity $T_{c}(\Theta)$ for a phosphorene
double layer for different exciton concentrations. (a) The
interaction between the carriers is described by the Keldysh
potential. (b) The interaction between the carriers is described by
the Coulomb potential. The number of $h$-BN monolayers between the
phosphorene monolayers is $N_{L} = 7$. The set of masses is taken from
Ref.~[\onlinecite{Paez2014}].} \label{Fig6}
\end{figure}
\medskip
\par
According to Eq.~(\ref{nnang1}), the angular dependent concentration
of the normal component $\tilde{n}_{n}(\Theta, T)$ for $0 \leq
\Theta \leq \pi/2$ increases with $\Theta$ if $n_{n}^{(yy)}(T) >
n_{n}^{(xx)}(T)$ and decreases with $\Theta$ if $n_{n}^{(yy)}(T) <
n_{n}^{(xx)}(T)$. Therefore, at $n_{n}^{(yy)}(T) > n_{n}^{(xx)}(T)$
the superfluidity can exist only if $\Theta < \Theta_{c} (T)$, while
at $n_{n}^{(yy)}(T) < n_{n}^{(xx)}(T)$ the
superfluidity can exist only if $\Theta > \Theta_{c} (T)$, where
$\Theta_{c}(T)$ is the critical angle of the occurrence of superfluidity.
\medskip
\par
For a chosen temperature, the critical angle $\Theta_{c}(T)$, which
corresponds to the occurrence of superfluidity, is given by the
condition
\begin{equation}
\tilde{n}_{n}(\Theta_{c} (T), T) = n \ .
\label{Thetac}
\end{equation}
Substituting Eq.~(\ref{nnang1}) into Eq.~(\ref{Thetac}), one
obtains a closed form analytic expression for $\Theta _{c}(T)$ as
\begin{equation}
\Theta _{c}(T)=\frac{1}{2} \arccos \left[\frac{
2n^{2}-\left(\left[n_{n}^{(xx)}(T)\right]^{2}+\left[n_{n}^{(yy)}(T)\right]^{2}\right)}{
\left[n_{n}^{(xx)}(T)\right]^{2}-\left[n_{n}^{(yy)}(T)\right]^{2}}
\right] \ . \label{critang}
\end{equation}
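A possible numerical route to $\Theta_{c}(T)$, combining Eqs.~(\ref{colsp}), (\ref{nnxy}) and (\ref{critang}), is sketched below; all material parameters are assumed illustrative values, and depending on the chosen temperature the code reports either the critical angle or that superfluidity exists (or is absent) at all angles.
\begin{verbatim}
# Normal-component concentrations with the full spectrum of Eq. (colsp)
# and the critical angle of Eq. (critang).  All parameters are assumed
# illustrative values; the tiny lower momentum cutoff only avoids the
# removable 0/0 of the Bose factor at P = 0.
import numpy as np
from scipy.integrate import dblquad

hbar, kB, m0 = 1.054571817e-34, 1.380649e-23, 9.1093837015e-31
s      = 4
Mx, My = 0.3 * m0, 5.0 * m0      # assumed exciton masses, kg
n      = 2.0e16                  # exciton concentration, m^-2
g      = 8.5e-37                 # coupling constant, J m^2 (assumed)
T      = 170.0                   # temperature, K (assumed)

def M0(theta):                   # Eq. (M0)
    return 1.0 / (np.cos(theta)**2 / Mx + np.sin(theta)**2 / My)

def eps(P, theta):               # Bogoliubov spectrum, Eq. (colsp)
    e0 = P**2 / (2.0 * M0(theta))
    return np.sqrt((e0 + g * n)**2 - (g * n)**2)

def bose(x):                     # e^x/(e^x-1)^2 in an overflow-safe form
    return np.exp(-x) / (1.0 - np.exp(-x))**2

def n_normal(weight, M):         # Eq. (nnxy)
    Pmax = np.sqrt(2.0 * My * 40.0 * kB * T)
    integrand = lambda th, P: (P**3 / (2.0 * np.pi * hbar)**2
                               * bose(eps(P, th) / (kB * T)) * weight(th))
    val, _ = dblquad(integrand, 1.0e-30, Pmax, 0.0, 2.0 * np.pi)
    return s / (kB * M * T) * val

nxx = n_normal(lambda th: np.cos(th)**2, Mx)
nyy = n_normal(lambda th: np.sin(th)**2, My)
num, den = 2.0 * n**2 - (nxx**2 + nyy**2), nxx**2 - nyy**2
if den != 0.0 and abs(num / den) <= 1.0:
    print("Theta_c =", 0.5 * np.arccos(num / den), "rad")   # Eq. (critang)
elif nxx < n and nyy < n:
    print("superfluid at all angles at this temperature")
else:
    print("normal at all angles at this temperature")
\end{verbatim}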
\medskip
\par
\section{Proposed experiment to observe the angular dependent superfluidity
of dipolar excitons in a phosphorene double layer}
\label{experiment}
\medskip
\par
The angular dependent superfluidity in a phosphorene double layer
may be observed in electron-hole Coulomb drag experiments. The
Coulomb attraction between electrons and holes can give rise to
Coulomb drag, a process in spatially separated conductors in
which a current flowing in one of the layers induces a
voltage drop in the other one. In the case when the adjacent layer
is part of a closed electrical circuit, an induced current flows.
The experimental observation of exciton condensation and perfect
Coulomb drag was claimed recently for spatially separated electrons
and holes in GaAs/AlGaAs coupled quantum wells in the presence of
high magnetic field perpendicular to the quantum wells \cite{Nandi}.
A steady transport current of electrons driven through one quantum
well was accompanied by an equal current of holes in another. In
Ref.\ [\onlinecite{Pogrebinskii}], the authors discussed the drag
of holes by electrons in a semiconductor-insulator-semiconductor
structure. The prediction was that for two conducting layers
separated by an insulator there will be a drag of carriers in one
layer due to the direct Coulomb attraction with the carriers in the
other layer. The Coulomb drag effect in the electron-hole double
layer BCS system was also analyzed in Refs.\ [\onlinecite{VM,JBL}].
If the external potential difference is applied to one of the
layers, it will produce an electric current. The current in an
adjacent layer will be initiated as a result of the correlations
between electrons and holes at temperatures below the critical one.
Consequently, the Coulomb drag effect was explored for semiconductor
coupled quantum wells in a number of theoretical and experimental
studies~\cite{Gramila,Sivan,Gramila2,Jauho,Zheng,Sirenko,Tso,Flensberg,Tanatar,EMN}.
The Coulomb drag effect in two coaxial nanotubes was studied in
Ref.\ [\onlinecite{BGK}]. The experimental and theoretical
achievements in Coulomb drag effect have been reviewed in Ref.\
[\onlinecite{Narrmp}].
\medskip
\par
We propose to study experimentally the angular dependent
superfluidity of dipolar excitons in a phosphorene double layer by
applying a voltage difference for current flowing in one layer in a
chosen direction at a chosen angle $\Theta$ relative to the armchair
direction and measuring the drag current in the same direction in
another layer. This drag current in another layer in the same
direction as the current in the first layer will be initiated by the
electron-hole Coulomb drag effect due to electron-hole attraction.
The measurement of the drag current in an adjacent layer for a
certain direction with the corresponding $\Theta$ will indicate the
existence of superfluidity in this direction. Due to the angular
dependence of the sound velocity, the critical exciton velocity for
superfluidity depends on an angle. Therefore, for certain exciton
velocities, there are the angle ranges, which correspond to the
superfluid exciton flow, and other angle ranges, which correspond to
the normal exciton flow. This can be applied as a working principle
for switches controlling the exciton flows in different directions
of exciton motion, caused by the Coulomb drag effect.
\medskip
\par
\section{Conclusions}
\label{conc}
In summary, the influence of the anisotropy of the dispersion
relation of dipolar excitons in a double layer of black phosphorene
on the excitonic BEC and directional superfluidity has been
investigated. The analytical expressions for the single dipolar
exciton energy spectrum and wave functions have been derived. The
angle dependent spectrum of collective excitations and sound
velocity have been derived. It is predicted that a weakly
interacting gas of dipolar excitons in a double layer of black
phosphorus exhibits superfluidity at low temperatures due to the
dipole-dipole repulsion between the dipolar excitons. It is
concluded that the anisotropy of the energy band structure in a
black phosphorene causes the critical velocity of the superfluidity
to depend on the direction of motion of dipolar excitons. It is
demonstrated that the dependence of the concentrations of the
normal and superfluid components and the mean field critical
temperatures for superfluidity on the direction of motion of dipolar
excitons occurs beyond the sound-like approximation for the spectrum
of collective excitations. Therefore, the directional superfluidity
of dipolar excitons in a phosphorene double layer is possible.
Moreover, the presented results, obtained for both the Keldysh and
Coulomb potentials describing the interactions between the charge
carriers, allow one to study the influence of the screening effects on
the dipolar exciton binding energy, the exciton-exciton interaction, the
spectrum of collective excitations, and the critical temperature of
superfluidity for a weakly interacting Bose gas of dipolar excitons
in a phosphorene double layer. It is important to mention that the
binding energy of dipolar excitons and the mean field critical
temperature for superfluidity are sensitive to the electron and hole
effective masses. Besides, the possibilities of the experimental
observation of the superfluidity for various directions of motion
of excitons were briefly discussed.
\medskip
\par
Our analytical and numerical results provide motivation for future experimental and theoretical
investigations of excitonic BEC and superfluidity in a phosphorene double layer.
\section{Introduction}
\label{sec:int}
From observations of light element abundances and of the Cosmic
microwave background radiation \cite{Hinshaw:2008kr} the Cosmic
baryon asymmetry,
\begin{equation}
\label{eq:baryon-asymm}
{\cal Y}_B=(8.75\pm 0.23)\times 10^{-10}\,,
\end{equation}
can be inferred. The conditions for a dynamical generation of this
asymmetry (baryogenesis) are well known \cite{Sakharov:1967dj} and,
depending on how they are realized, different scenarios for
baryogenesis can be defined (see ref. \cite{Dolgov:1991fr} for a
thorough discussion).
Leptogenesis \cite{Fukugita:1986hr} is a scenario in which an initial
lepton asymmetry, generated in the out-of-equilibrium decays of heavy
standard model singlet Majorana neutrinos ($N_\alpha$), is partially
converted in a baryon asymmetry by anomalous sphaleron
interactions~\cite{Kuzmin:1985mm} that are standard model processes.
Singlet Majorana neutrinos are an essential ingredient for the
generation of light neutrino masses through the seesaw mechanism
\cite{Minkowski:1977sc}. This means
that if the seesaw is the source of neutrino masses then qualitatively
leptogenesis is unavoidable. Consequently, whether the baryon
asymmetry puzzle can be solved within this framework turns out to be a
quantitative question. This has triggered a great deal of interest in
quantitative analyses of the standard leptogenesis model and indeed a
lot of progress has been achieved during the last years (see
ref. \cite{Davidson:2008bu} for details).
Here we focus on variations of the standard leptogenesis picture which
can arise if, apart from the lepton number breaking scale ($M_N$), an
additional energy scale, related to the breaking of a new symmetry,
exist. We consider a simple realization of this idea in which at an
energy scale of the order of the lepton number violating scale the
tree level coupling linking light and heavy neutrinos is forbiden by
an exact $U(1)_X$ flavor symmetry which below (or above) $M_N$ becomes
spontaneously broken by the vacuum expectation value, $\sigma$, of a
standard model singlet scalar field $S$ and involves heavy vectorlike
fields $F_a$. As will be discussed, according to the relative size of
the relevant scales of the model ($M_{N_\alpha}$, $\sigma$,
$M_{F_a}$), different scenarios for leptogenesis can be defined
\cite{AristizabalSierra:2007ur}. Of particular interest is the case
in which the total CP violating (CPV) asymmetries in the decays and
scatterings of the singlet seesaw neutrinos vanish. As we
will discuss further on, two remarkable features distinguish this
scenario \cite{amn}: ($a$) Flavor effects are entirely responsible for successful
leptogenesis; ($b$) leptogenesis can be lowered down to the TeV scale.
The rest of this paper is organized as follows: In section
\ref{sec:model} we briefly discuss the model while in
section~\ref{sec:different-possibilities} we discuss two particular
realizations of our scheme paying special attention to the {\it purely
flavored leptogenesis} (PFL) case for which we analyse the evolution of
the generated lepton asymmetry by solving the corresponding Boltzmann
equations (BE).
\section{The model}
\label{sec:model}
The model we consider here \cite{AristizabalSierra:2007ur} is a simple
extension of the standard model containing a set of $SU(2)_L\times U(1)_Y$
fermionic singlets, namely three right-handed neutrinos ($N_\alpha = N_{\alpha
R} + N_{\alpha R}^c$) and three heavy vectorlike fields ($F_a=F_{aL} +
F_{aR}$). In addition, we assume that at some high energy scale, taken to be
of the order of the leptogenesis scale $M_{N_1}$, an exact $U(1)_X$ horizontal
symmetry forbids direct couplings of the lepton $\ell_i$ and Higgs $\Phi$
doublets to the heavy Majorana neutrinos $N_\alpha$. At lower energies,
$U(1)_X$ gets spontaneously broken by the vacuum expectation value (vev)
$\sigma$ of a $SU(2)$ singlet scalar field $S$. Accordingly, the Yukawa
interactions of the high energy Lagrangian read
\begin{equation}
\label{eq:lag}
-{\cal L}_Y =
\frac{1}{2}\bar{N}_{\alpha}M_{N_\alpha}N_{\alpha} +
\bar{F}_{a}M_{F_a}F_{a} +
h_{ia}\bar{\ell}_{i}P_{R}F_{a}\Phi +
\bar{N}_{\alpha}
\left( \lambda_{\alpha a} + \lambda^{(5)}_{\alpha a}\gamma_5\right)
F_{a}S
+ \mbox{h.c.}
\end{equation}
We use Greek indices $\alpha,\beta\dots =1,2,3$ to label the heavy Majorana
neutrinos, Latin indices $a,b\dots =1,2,3$ for the vectorlike messengers, and
$i, j, k, \dots$ for the lepton flavors $e,\mu,\tau$. Following reference
\cite{AristizabalSierra:2007ur} we chose the simple $U(1)_X$ charge
assignments $X(\ell_{L_i},F_{L_a},F_{R_a})=+1$, $X(S)=-1$ and
$X(N_{\alpha},\Phi)=0$. This assignment is sufficient to enforce the absence
of $\bar N \ell \Phi$ terms, but clearly it does not constitute an attempt to
reproduce the fermion mass pattern, and accordingly we will also avoid
assigning specific charges to the right-handed leptons and quark fields that
have no relevance for our analysis. The important point is that it is likely
that any flavor symmetry (of the Froggatt-Nielsen type) will forbid the the
same tree-level couplings, and will reproduce an overall model structure
similar to the one we are assuming here. Therefore we believe that our
results, that are focused on a new realization of the leptogenesis mechanism,
can hint to a general possibility that could well occur also in a complete
model of flavor.
As discussed in~\cite{AristizabalSierra:2007ur}, depending on the
hierarchy between the relevant scales of the model
($M_{N_1},\,M_{F_a},\,\sigma$), quite different {\it scenarios} for
leptogenesis can arise. Here we will concentrate on two cases: ($i$)
the standard leptogenesis case ($M_F,\sigma\gg M_N$); ($ii$) the PFL
case ($\sigma < M_{N_1}< M_{F_a}$), that is, when the flavor symmetry
$U(1)_X$ is still unbroken during the leptogenesis era and at the same
time the messengers $F_a$ are too heavy to be produced in $N_1$ decays
and scatterings, and can be integrated away \cite{amn}.
As is explicitly shown by the last term in eq.~(\ref{eq:lag}), in
general the vectorlike fields can couple to the heavy singlet
neutrinos via scalar and pseudoscalar couplings. In
ref.~\cite{AristizabalSierra:2007ur} a strong hierarchy
$\lambda\gg \lambda^{(5)}$ was assumed for simplicity, so $\lambda^{(5)}$ was
neglected. However, in all the relevant quantities (scatterings, CP
asymmetries, light neutrino masses) at leading order the scalar and
pseudoscalar couplings always appear in the combination $\lambda +
\lambda^{(5)}$, and thus such an assumption is not necessary. The
replacement $\lambda \to \lambda + \lambda^{(5)}$ would suffice to
include in the analysis the effects of both types of interactions.
\subsection{Effective seesaw and light neutrino masses}
\label{sec:nmgen}
\begin{figure}[t]
\centering
\includegraphics[width=8cm,height=2.5cm]{neutrino-mm.eps}
\caption{Effective mass operator responsible for neutrino mass generation}
\label{fig:neutrino-massmatrix}
\end{figure}
After $U(1)_X$ and electroweak symmetry breaking,
the set of Yukawa interactions in (\ref{eq:lag}) generates light
neutrino masses through the effective mass operator shown in figure
\ref{fig:neutrino-massmatrix}. The resulting mass matrix can be written as
\cite{AristizabalSierra:2007ur}
\begin{equation}
\label{eq:nmm}
-{\cal M}_{ij}=
\left[
h^{*}\frac{\sigma}{M_{F}}\lambda^{T}\frac{v^{2}}
{M_{N}}\lambda\frac{\sigma}{M_{F}}h^{\dagger}
\right]_{ij}
= \left[
\tilde{\lambda}^{T}\frac{v^{2}}{M_{N}}\tilde{\lambda}
\right]_{ij} \,.
\end{equation}
Here we have introduced the seesaw-like couplings
\begin{equation}
\label{eq:seesaw-couplings}
\tilde{\lambda}_{\alpha i} =
\left(
\lambda \frac{\sigma}{M_F}h^\dagger
\right)_{\alpha i}\,.
\end{equation}
Note that, in contrast to the standard seesaw, the
neutrino mass matrix is of fourth order in the {\it fundamental}
Yukawa couplings ($h$ and $\lambda$) and due to the factor
$\sigma^2/M_F^2$ is even more suppressed.
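A toy numerical illustration of eqs.~(\ref{eq:nmm}) and (\ref{eq:seesaw-couplings}) is given below; the Yukawa textures and mass scales are random or assumed inputs chosen only for the example, not a fit to neutrino oscillation data.
\begin{verbatim}
# Toy evaluation of Eqs. (eq:seesaw-couplings) and (eq:nmm) with random
# O(0.1) complex Yukawas and assumed mass scales (purely illustrative,
# not a fit to neutrino oscillation data).
import numpy as np

rng = np.random.default_rng(1)
cmat = lambda: rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

v     = 174.0                               # GeV, electroweak vev
sigma = 1.0e8                               # GeV, U(1)_X breaking vev (assumed)
MF    = np.diag([1.0e9, 3.0e9, 1.0e10])     # GeV, vectorlike masses (assumed)
MN    = np.diag([1.0e7, 1.0e8, 1.0e9])      # GeV, singlet neutrino masses (assumed)

lam = 0.1 * cmat()                          # lambda_{alpha a}
h   = 0.1 * cmat()                          # h_{i a}

# Effective seesaw couplings:  lambda_tilde = lambda (sigma/M_F) h^dagger
lam_tilde = lam @ (sigma * np.linalg.inv(MF)) @ h.conj().T

# Light neutrino mass matrix:  -M = lambda_tilde^T (v^2/M_N) lambda_tilde
Mnu = - lam_tilde.T @ (v**2 * np.linalg.inv(MN)) @ lam_tilde
masses_eV = np.sort(np.linalg.svd(Mnu, compute_uv=False)) * 1.0e9
print("light neutrino masses [eV]:", masses_eV)
\end{verbatim}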
\section{Different scenarios for leptogenesis}
\label{sec:different-possibilities}
In this section we discuss the features of each one of the
cases we previously mentioned and derive expressions for the CP asymmetries.
Henceforth we will use the following notation for the different mass ratios:
\begin{equation}
\label{eq:mass-ratios}
z_{\alpha}=\frac{M_{N_\alpha}^2}{M_{N_1}^2},\qquad \omega_{a}=
\frac{M_{F_a}^2}{M_{F_1}^2}, \qquad r_a=\frac{M_{N_1}}{M_{F_a}}.
\end{equation}
\subsection{The standard leptogenesis case}
\label{sec:standard}
When the masses of the heavy fields $F_a$ and the $U(1)_X$ symmetry breaking
scale are both larger than the Majorana neutrino masses ($M_F\,, \sigma >
M_N$) there are no major differences from the standard Fukugita-Yanagida
leptogenesis model \cite{Fukugita:1986hr}. After integrating out the $F$ fields one
obtains the standard seesaw Lagrangian containing the effective operators
$\tilde\lambda_{\alpha i} \bar N_\alpha l_i \Phi$ with the seesaw couplings
$\tilde\lambda_{\alpha i}$ given in eq.~(\ref{eq:seesaw-couplings}). The right
handed neutrino $N_1$ decays predominantly via 2-body channels as shown in
fig.~\ref{fig:case0}. This yields the standard results that for convenience
we recall here. The total decay width is $\Gamma_{N_1}= \left({M_{N_1}}/{16
\pi}\right) (\tilde\lambda\tilde\lambda^\dagger)_{11}$ and the sum of the
vertex and self-energy contributions to the $CP$-asymmetry for $N_1$ decays
into the flavor $l_j$ reads \cite{Covi:1996wh}
\begin{equation}
\label{eq:6}
\epsilon_{N_1\to l_j}=\frac{1}{8\pi (\tilde\lambda \tilde\lambda^\dagger)_{11}}
\sum_{\beta\neq 1}{\mathbb I}\mbox{m}
\left\{\tilde\lambda_{\beta j}\tilde\lambda^*_{1j} \left[
(\tilde\lambda\tilde\lambda^\dagger)_{\beta1} \tilde F_1(z_\beta) +
(\tilde\lambda\tilde\lambda^\dagger
)_{1\beta}
\tilde F_2(z_\beta)
\right]\right\}\,,
\end{equation}
where
\begin{equation}
\label{eq:tildeF}
\tilde F_1(z )= \frac{\sqrt{z }}{1-z } +
\sqrt{z }\left(1-(1+z )\ln
\frac{1+z }{z }\right),
\qquad \tilde F_2(z)=\frac{1}{1-z}.
\end{equation}
At leading order in $1/z_\beta$ and after summing over all leptons $l_j$,
eq.~(\ref{eq:6}) yields for the total asymmetry:
\begin{equation}
\label{eq:8}
\epsilon_{N_1}=\frac{3}{16\pi (\tilde\lambda \tilde\lambda^\dagger)_{11}}
\sum_{\beta}
{\mathbb I}\mbox{m}
\left\{\frac{1}{\sqrt{z_\beta}}
(\tilde\lambda\tilde\lambda^\dagger)_{\beta1}^2 \right\}.
\end{equation}
where the sum over the heavy neutrinos has been extended to include also $N_1$
since for $\beta=1$ the corresponding combination of couplings is real.
In the hierarchical case $M_{N_1}\ll M_{N_{2,3}}$
the size of the total asymmetry in (\ref{eq:8}) is bounded by
the Davidson-Ibarra limit \cite{Davidson:2002qv}
\begin{equation}
\label{eq:DI}
|\epsilon_{N_1}|\leq \frac{3}{16\pi}\frac{M_{N_1}}{v^2} \,(m_{\nu_3}-m_{\nu_1})
\lesssim \frac{3}{16\pi}\frac{M_{N_1}}{v^2} \frac{\Delta m^2_{\text{atm}}}{2m_{\nu_3}}\,,
\end{equation}
where $m_{\nu_i}$ (with $m_{\nu_1} < m_{\nu_2} < m_{\nu_3}$) are the
light neutrino mass eigenstates and $\Delta m^2_{\text{atm}}\sim
2.5\times 10^{-3}\,$eV$^2$ is the atmospheric neutrino mass difference
\cite{Schwetz:2008er}. It is now easy to see that (\ref{eq:DI})
implies a lower limit on $M_{N_1}$. The amount of $B$ asymmetry that
can be generated from $N_1$ dynamics can be written as:
\begin{equation}
\label{eq:etaB}
\frac{n_B}{s}=-\kappa_s\,\epsilon_{N_1}\,\eta ,
\end{equation}
where $\kappa_s\approx 1.3\times 10^{-3}$ accounts for the dilution of
the asymmetry due to the increase of the Universe entropy from the
time the asymmetry is generated with respect to the present time,
$\eta $ (that can range between 0 and 1, with typical values
$10^{-1}-10^{-2}$) is the {\it efficiency factor} that accounts for
the amount of $L$ asymmetry that can survive the washout process.
Assuming that $\epsilon_{N_1}$ is the main source of the $B-L$
asymmetry~\cite{Engelhard:2006yg}, eqs.~(\ref{eq:DI}) and (\ref{eq:etaB}), together
with the observed baryon asymmetry, eq. (\ref{eq:baryon-asymm}),
yield:
\begin{equation}
\label{eq:M1limit0}
M_{N_1} \gtrsim 10^{9}\,
\frac{m_{\nu_3}}{\eta\, \sqrt{\Delta m^2_{\text{atm}}}} \, {\rm GeV}.
\end{equation}
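As a rough numerical illustration of this limit, the sketch below evaluates the maximal asymmetry of eq.~(\ref{eq:DI}) and the resulting baryon asymmetry of eq.~(\ref{eq:etaB}) for an assumed efficiency $\eta$ and a trial value of $M_{N_1}$, to be compared with eq.~(\ref{eq:baryon-asymm}).
\begin{verbatim}
# Order-of-magnitude estimate: maximal asymmetry from Eq. (eq:DI) and the
# resulting baryon asymmetry from Eq. (eq:etaB), for assumed eta and M_N1.
import numpy as np

kappa_s = 1.3e-3                  # entropy dilution factor
eta     = 0.1                     # efficiency factor (assumed typical value)
v       = 174.0                   # GeV
dm2_atm = 2.5e-3 * 1.0e-18        # atmospheric splitting in GeV^2
m_nu3   = np.sqrt(dm2_atm)        # hierarchical light spectrum assumed
M_N1    = 1.0e11                  # GeV, trial value

eps_max = 3.0 / (16.0 * np.pi) * M_N1 / v**2 * dm2_atm / (2.0 * m_nu3)
nB_s    = kappa_s * eps_max * eta
print(f"eps_max = {eps_max:.2e},  n_B/s = {nB_s:.2e}  (observed ~ 8.8e-10)")
\end{verbatim}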
This limit can be somewhat relaxed depending on the specific initial
conditions~\cite{Giudice:2003jh} or when flavor effects are
included~\cite{Nardi:2006fx,Abada:2006fw,Abada:2006ea,JosseMichaux:2007zj}, but the main
point remains: the value of $M_{N_1}$ must lie well above the electroweak
scale.
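As a rough numerical cross-check of eq.~(\ref{eq:M1limit0}), the short script
below (a sketch, not part of our analysis) combines eqs.~(\ref{eq:DI}) and
(\ref{eq:etaB}); the inputs $v\simeq 174\,$GeV, $n_B/s\simeq 8.7\times
10^{-11}$ and a hierarchical spectrum with
$m_{\nu_3}\simeq\sqrt{\Delta m^2_{\text{atm}}}$ are assumptions made here only
for illustration.
\begin{verbatim}
import math

# Assumed illustrative inputs
v = 174.0                   # Higgs vev in GeV (assumption)
dm2_atm = 2.5e-3 * 1e-18    # atmospheric splitting in GeV^2 (2.5e-3 eV^2)
m_nu3 = math.sqrt(dm2_atm)  # hierarchical spectrum: m_nu3 ~ sqrt(dm2_atm)
nB_over_s = 8.7e-11         # observed baryon-to-entropy ratio (assumption)
kappa_s = 1.3e-3            # dilution factor quoted in the text
eta = 0.1                   # efficiency factor (typical value)

# Davidson-Ibarra bound: |eps| <= 3/(16 pi) * M_N1/v^2 * dm2_atm/(2 m_nu3)
# Required asymmetry:    nB/s  = kappa_s * |eps| * eta
# Solving for the minimal M_N1:
eps_needed = nB_over_s / (kappa_s * eta)
M1_min = eps_needed * (16.0 * math.pi / 3.0) * v**2 * 2.0 * m_nu3 / dm2_atm

print(f"required |eps_N1| ~ {eps_needed:.2e}")
print(f"M_N1 lower bound  ~ {M1_min:.2e} GeV")  # of order 10^9/eta GeV
\end{verbatim}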
\begin{figure}[t]
\centering
\includegraphics[width=10cm,height=2.5cm]{standard-lep.eps}
\caption{Diagrams responsible for the CP violating asymmetry in the standard case.}
\label{fig:case0}
\end{figure}
\subsection{Purely flavored leptogenesis case}
\label{sec:PFL}
In contrast with standard leptogenesis, in the present case two-body $N_1$
decays are kinematically forbidden, since $M_F > M_{N_1}$. However, via
off-shell exchange of the heavy $F_a$ fields, $N_1$ can decay to the three
body final states $S\Phi l$ and $\bar S\bar\Phi \bar l$. The corresponding
Feynman diagram is depicted in figure \ref{fig:fig0}$(a)$. At leading order
in $r_a=M_{N_1}/M_{F_a}$, the total decay width reads
\cite{AristizabalSierra:2007ur}
\begin{equation}
\label{eq:total-decay-width}
\Gamma_{N_1}\equiv
\sum_j \Gamma(N_1\to S\Phi l_j + \bar S\bar\Phi \bar l_j) =\frac{M_{N_1}}{192\pi^3}
\left(
\frac{M_{N_1}}{\sigma}
\right)^2
(\tilde{\lambda}\tilde{\lambda}^\dagger)_{11}\,.
\end{equation}
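It is instructive to compare eq.~(\ref{eq:total-decay-width}) with the standard
two-body width recalled above,
$\Gamma^{(2)}_{N_1}=({M_{N_1}}/{16\pi})(\tilde\lambda\tilde\lambda^\dagger)_{11}$:
the three-body width carries the additional suppression factor
\[
\frac{\Gamma_{N_1}}{\Gamma^{(2)}_{N_1}}=\frac{1}{12\pi^2}
\left(\frac{M_{N_1}}{\sigma}\right)^2\,,
\]
so that for $\sigma\gg M_{N_1}$ the $N_1$ decays are much slower than in the
standard case.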
As usual, CPV asymmetries in $N_1$ decays arise from the interference between
tree-level and one-loop amplitudes. As was noted
in~\cite{AristizabalSierra:2007ur}, in this model at one-loop there are no
contributions from vertex corrections, and the only contribution to the CPV
asymmetries comes from the self-energy diagram in fig.~\ref{fig:fig0}$(b)$. Summing
over the leptons and vectorlike fields running in the loop, at leading order in
$r_a$ the CPV asymmetry for $N_1$ decays into leptons of flavor $j$ can be
written as
\begin{equation}
\label{eq:cp-violating-asymm}
\epsilon_{1j} \equiv \epsilon_{N_1\to\ell_j} =
\frac{3}{128\pi}
\frac{\sum_{m} \mathbb{I}\mbox{m}
\left[
\left(
h r^{2} h^{\dagger}
\right)_{mj}\tilde{\lambda}_{1m}\tilde{\lambda}^{*}_{1j}
\right]}{\left(\tilde{\lambda}\tilde{\lambda}^{\dagger}\right)_{11}}\,.
\end{equation}
Note that since the loop correction does not violate lepton number, the total
CPV asymmetry that is obtained by summing over the flavor of the final state
leptons vanishes~\cite{Kolb:1979qa}, that is $\epsilon_{1}\equiv \sum_j
\epsilon_{1j}=0$. This is the condition that defines PFL: namely, there is no
CPV {\it and} lepton number violating asymmetry, and the CPV lepton flavor
asymmetries are the only seed of the Cosmological lepton and baryon
asymmetries.
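The vanishing of the total asymmetry can also be checked explicitly from
eq.~(\ref{eq:cp-violating-asymm}): since $r$ is real and diagonal, the matrix
$h r^{2} h^{\dagger}$ is hermitian, and therefore
\[
\sum_j\sum_{m}
\left(h r^{2} h^{\dagger}\right)_{mj}\tilde{\lambda}_{1m}\tilde{\lambda}^{*}_{1j}
=\left(\tilde{\lambda}\, h\, r^{2} h^{\dagger}\, \tilde{\lambda}^{\dagger}\right)_{11}
\]
is real, so that its imaginary part, and with it $\sum_j\epsilon_{1j}$, vanishes.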
It is important to note that the effective couplings $\tilde\lambda$ defined
in eq.~(\ref{eq:seesaw-couplings}) are invariant under the reparameterization
\begin{equation}
\label{eq:couplingtrans}
\lambda\to \lambda\cdot (rU)^{-1},\quad
h^\dagger \to (U r)\cdot h^\dagger\,,
\end{equation}
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm,height=3cm]{cp-asymm-plot.eps}
\end{center}
\caption{Feynman diagrams responsible for the CPV asymmetry.}
\label{fig:fig0}
\end{figure}
where $U$ is an arbitrary $3\times 3$ non-singular matrix. Clearly
the light neutrino mass matrix is invariant under this transformation.
Moreover, the flavor dependent washout processes, which correspond
to tree level amplitudes determined, to a good approximation,
by the effective $\tilde \lambda$ couplings, are also left essentially
unchanged.\footnote{The approximation is exact in the limit of
pointlike $F$-propagators $(s-M^2_F+iM_F\Gamma_F)\to M^2_F$.} On the
contrary, the flavor CPV asymmetries
eq.~(\ref{eq:cp-violating-asymm}), which are determined by loop
amplitudes containing an additional factor of $h r^2 h^\dagger$, get
rescaled as $h r^2 h^\dagger\to h (rUr)^\dagger (rUr) h^\dagger $.
Clearly, this rescaling affects all the lepton flavors in the same way
(as it should, to guarantee that the PFL conditions
$\epsilon_\alpha\equiv \sum_j\epsilon_{\alpha j}=0$ are not spoiled),
and thus for simplicity we will consider only rescaling by a global
scalar factor $r\cdot U=U\cdot r=\kappa\,I$ (with $I$ the $3\times 3$ identity
matrix) that, for our purposes, is completely equivalent to the more
general transformation~(\ref{eq:couplingtrans}). Thus, while rescaling
the Yukawa couplings through
\begin{equation}
\label{eq:coupling-rescaling-gen}
\lambda\to \lambda \,\kappa^{-1},\quad
h^\dagger\to\kappa \,h^\dagger\,,
\end{equation}
affects neither low energy neutrino physics nor the washout processes,
the CPV asymmetries get rescaled as:
\begin{equation}
\label{eq:rescaled-CPV-asymm}
\epsilon_{1j}\to\kappa^2\epsilon_{1j}\,.
\end{equation}
By choosing $\kappa>1$, all the CPV asymmetries get enhanced by
$\kappa^2$ and, since the Cosmological asymmetries generated through
leptogenesis are linear in the CPV asymmetries, the final result gets
enhanced by the same factor. Therefore, for any given set of
couplings, one can always find an appropriate rescaling such that the
correct amount of Cosmological lepton asymmetry is generated. In
practice, the rescaling factors $\kappa$ cannot be arbitrarily large:
first, they should respect the condition that all the fundamental
Yukawa couplings remain in the perturbative regime; second, the size of
the $h$ couplings (and thus also of the rescaling parameter $\kappa$)
is also constrained by experimental limits on lepton flavor violating
decays.
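To illustrate the reparameterization freedom explicitly, the following
numerical sketch checks the invariance of the effective couplings and the
$\kappa^2$ rescaling of eq.~(\ref{eq:rescaled-CPV-asymm}) on random couplings.
The exact normalization of $\tilde\lambda$ in eq.~(\ref{eq:seesaw-couplings})
is not reproduced here: the sketch assumes the schematic form
$\tilde\lambda\propto \lambda\, r\, h^\dagger$ (the overall constant drops out
of the ratios checked), and the index conventions, the values of $r_a$ and
$\kappa$ are purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_complex(shape):
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

# Fundamental couplings (random illustrative values)
lam = random_complex((3, 3))       # lambda: rows = N_alpha, columns = F_a (assumed)
h   = random_complex((3, 3))       # h: rows = lepton flavors, columns = F_a (assumed)
r   = np.diag([0.1, 0.01, 0.001])  # r_a = M_N1 / M_Fa (illustrative)

def lam_tilde(lam, h):
    # Effective seesaw couplings, ASSUMED proportional to lambda.r.h^dagger;
    # the overall constant drops out of the quantities checked below.
    return lam @ r @ h.conj().T

def eps_1(lam, h):
    # Flavor CPV asymmetries eps_{1j}, up to the overall 3/(128 pi) factor.
    lt = lam_tilde(lam, h)
    A = h @ (r @ r) @ h.conj().T              # (h r^2 h^dagger)_{mj}
    num = np.einsum('mj,m,j->j', A, lt[0], lt[0].conj()).imag
    return num / (lt[0] @ lt[0].conj()).real  # divide by (lt lt^dagger)_{11}

kappa = 3.0
eps      = eps_1(lam, h)
eps_resc = eps_1(lam / kappa, kappa * h)      # rescaling of eq.(coupling-rescaling-gen)

print("sum_j eps_1j (PFL condition)       :", eps.sum())       # ~ 0
print("rescaled eps_1j / eps_1j           :", eps_resc / eps)  # ~ kappa^2
print("lambda-tilde invariant under resc. :",
      np.allclose(lam_tilde(lam, h), lam_tilde(lam / kappa, kappa * h)))
\end{verbatim}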
\begin{figure}[t]
\begin{center}
\includegraphics[width=13.0cm,height=2.3cm,angle=0]{decaySc.eps}
\end{center}
\caption{Feynman diagrams for $1\leftrightarrow 3$ and
$2\leftrightarrow 2$ $s$, $t$ and $u$ channel processes
after integrating out the heavy vectorlike fields $F_a$.}
\label{fig:fig1}
\end{figure}
\subsubsection{Boltzmann Equations}
\label{sec:BE}
In this section we compute the lepton asymmetry by solving the
appropriate BE. In general, to consistently derive the evolution
equation of the lepton asymmetry all the possible processes at a given
order in the couplings have to be included. In the present case
$1\leftrightarrow 3$ decays and inverse decays, and $2\leftrightarrow
2$ $s$, $t$ and $u$ channel scatterings all occur at the same order in
the couplings and must be included altogether in the BE. The Feynman
diagrams for these processes are shown in Figure~\ref{fig:fig1}. In
addition, the CPV asymmetries of some higher order multiparticle
reactions involving the exchange of one off-shell $N_1$ also
contribute to the source term of the asymmetries at the same order in
the couplings as the CPV asymmetries of decays and $2\leftrightarrow
2$ scatterings. More precisely, for a proper derivation of the BE it
is essential that the CPV asymmetries of the off-shell
$3\leftrightarrow 3$ and $2\leftrightarrow 4$ scattering processes are
also taken into account \cite{amn}.
As regards the equation for the evolution of the heavy neutrino density
$Y_{N_1}$, only the diagrams in fig.~\ref{fig:fig1}, which are of leading order
in the couplings, are important \cite{amn}:
\begin{align}
\label{eq:BEforN-section}
\dot{Y}_{N_1} &=
-\left(y_{N_1} - 1 \right) \gamma_{\text{tot}}\,, \\
\label{eq:BEforLA-section}
\dot{Y}_{\Delta \ell_{i}} &=
\left(y_{N_1} - 1 \right)\epsilon_{i}\gamma_{\text{tot}} - \Delta y_{i}
\left[
\gamma_{i} + \left(y_{N_1} - 1 \right)\gamma^{N_{1}\bar\ell_{i}}_{S\Phi}
\right]\,.
\end{align}
Here we have normalized particle densities to their equilibrium
densities $y_a\equiv Y_a/Y_a^{\text{eq}}$ where $Y_a=n_a/s$ with $n_a$
the particle number density and $s$ the entropy density. The time
derivative is defined as $\dot Y = sHz\,dY/dz$ with $z=M_{N_1}/T$ and
$H$ is the Hubble parameter. In the last term of the second equation
we have used the compact notation for the reaction densities
$\gamma^{N_{1}\bar\ell_{i}}_{S\Phi} =\gamma(N_{1}\bar\ell_{i}\to
{S\Phi})$ and in addition we have defined
\begin{eqnarray}
\label{eq:rates}
\gamma_{i} &=&
\gamma^{N_{1}}_{S\ell_{i}\Phi}+
\gamma^{N_{1}\bar{S}}_{\Phi\ell_{i}} +
\gamma^{N_{1}\bar{\Phi}}_{S\ell_{i}} + \gamma_{S\Phi}^{N_{1}\bar\ell_{i}}\,, \\
\gamma_{\text{tot}} &=& \sum_{i=e,\mu,\tau}
\gamma_{i}+\bar\gamma_i\,,
\end{eqnarray}
where in the second equation $\bar\gamma_i$ represents the
sum of the CP conjugates of the processes summed in $\gamma_{i}$.
Since in this model $N_1$ decays are of the same order in the couplings
as scatterings (that is, ${\cal O}(\tilde\lambda^2)$), the appropriate condition
that defines the {\it strong washout} regime in the case at hand reads:
\begin{equation}
\label{eq:strong-washout}
\left .\frac{\gamma_{\text{tot}}}{z\,H\,s}\right|_{z\sim 1}>1 \,\qquad {\rm
(strong\ washout)},
\end{equation}
and conversely $\gamma_{\text{tot}}/(z\,H\,s)|_{z\sim 1}<1$ defines the {\it
weak washout} regime. Note that this is different from standard
leptogenesis, where at $z\sim 1$ two body decays generally dominate over
scatterings, and e.g. the condition for the strong washout regime can be
approximated as $\gamma_{\text{tot}}/(z\,H\,s)|_{z\sim 1}\sim
\Gamma_{N_1}/H|_{z\sim 1}>1$.
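To make the structure of
eqs.~(\ref{eq:BEforN-section})--(\ref{eq:BEforLA-section}) more concrete, the
following minimal sketch integrates them numerically after dividing both sides
by $sHz$. It is not the calculation used for the results presented below: the
reaction densities are purely illustrative profiles (the actual ones follow
from the diagrams of fig.~\ref{fig:fig1}), the flavor CPV asymmetries are
generic values satisfying the PFL condition, $\Delta y_i$ is approximated by
$Y_{\Delta\ell_i}/(2Y_\ell^{\rm eq})$, and the subleading
$(y_{N_1}-1)\gamma^{N_1\bar\ell_i}_{S\Phi}$ term is dropped.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

# --- Schematic inputs (NOT the actual rates of the model) ------------------
eps = np.array([-2.0e-4, -1.0e-4, 3.0e-4])  # flavor CPV asymmetries, sum = 0 (PFL)
k_w = np.array([30.0, 80.0, 300.0])         # illustrative washout strengths (e, mu, tau)

Yn_eq0 = 3.9e-3   # relativistic equilibrium N1 abundance (illustrative)
Yl_eq  = 3.9e-3   # equilibrium lepton doublet abundance (assumed)

def Yn_eq(z):
    # Maxwell-Boltzmann equilibrium abundance, -> Yn_eq0 for z -> 0
    return Yn_eq0 * 0.5 * z**2 * kn(2, z)

def gamma_over_sHz(z, k):
    # Illustrative profile for gamma_i/(s H z), peaking near z ~ 1
    return k * z**2 * kn(1, z)

def rhs(z, Y):
    Yn, Yd = Y[0], Y[1:]
    y_minus_1 = Yn / Yn_eq(z) - 1.0
    dy = Yd / (2.0 * Yl_eq)            # Delta y_i (assumed normalization)
    g_i = gamma_over_sHz(z, k_w)       # gamma_i/(sHz), per flavor
    g_tot = 2.0 * g_i.sum()            # adds the CP-conjugate channels
    dYn = -y_minus_1 * g_tot
    dYd = y_minus_1 * eps * g_tot - dy * g_i   # gamma^{N1 lbar}_{S Phi} term dropped
    return np.concatenate(([dYn], dYd))

z = np.geomspace(1e-2, 30.0, 400)
sol = solve_ivp(rhs, (z[0], z[-1]),
                y0=np.concatenate(([Yn_eq(z[0])], np.zeros(3))),
                t_eval=z, method='LSODA', rtol=1e-8, atol=1e-16)

Y_dL = sol.y[1:].sum(axis=0)           # total lepton asymmetry density
print("final Y_DeltaL ~", Y_dL[-1])
\end{verbatim}
As in the full calculation, a net $Y_{\Delta L}$ survives only because the
washout strengths of the three flavors differ.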
\subsubsection{Results}
\label{sec:results}
In this section we discuss a typical example of successful
leptogenesis at the scale of a few TeV. The example presented is a
general one. No particular choice of the parameters has been
made, except for requiring that the low energy neutrino data are
reproduced within errors, and that the choice yields an interesting
washout dynamics, well suited to illustrate how PFL works. The
numerical value of the final lepton asymmetry ($Y_{\Delta L} \sim
-7.2\times 10^{-10}$) is about a factor of 3 {\it larger} than what is
indicated by measurements of the Cosmic baryon asymmetry. This is
however irrelevant since, as was already discussed, a minor rescaling
of the couplings (or a slight change in the CPV phases) would be
sufficient to obtain the precise experimental result. In the
numerical analysis we have neglected the dynamics of the heavier
singlet neutrinos since the $N_\alpha$ masses are sufficiently hierarchical
to ensure that $N_{2,3}$ related washouts do not interfere with $N_1$
dynamics. Moreover, in the (strong washout) fully flavored regime
(that is effective as long as $T < 10^{9}\,$GeV) the $N_{2,3}$ CPV
asymmetries do not contribute to the final lepton number
asymmetry~\cite{Engelhard:2006yg}.
In figure~\ref{fig:fig4} we show the behavior of the various reaction
densities for decays and scatterings, normalized to $sHz$, as a function of
$z$. The results correspond to a mass of the lightest singlet
neutrino fixed to $M_{N_1}=2.5\,\text{TeV}$, the heavier neutrino masses are $
M_{N_2}=10\,$TeV and $M_{N_3}=15\,$TeV, and the relevant mass ratios
$r_a=M_{N_1}/M_{F_a}$ for the messenger fields are $r_{1,2,3}=0.1,0.01,0.001$
(the effects of the lightest $F$ resonances can be seen in the $s$-channel
rates in both panels in fig.~\ref{fig:fig4}). The fundamental Yukawa
couplings $h$ and $\lambda$ are chosen to satisfy the requirement that the
seesaw formula eq.~(\ref{eq:nmm}) reproduces within $2\,\sigma$ the low energy
data on the neutrino mass squared differences and mixing
angles~\cite{Schwetz:2008er}. Typically, when this requirement is fulfilled,
one also ends up with the dynamics of all the lepton flavors in the strong
washout regime. This is shown in the left panel in fig.~\ref{fig:fig5}
where we present the total rates for the three flavors.
\begin{figure}[t]
\begin{center}
{\includegraphics[width=6.5cm,height=5.5cm,angle=0]{tau-rates.eps}}
\hspace{5mm}
{\includegraphics[width=6.5cm,height=5.5cm,angle=0]{el-rates.eps}}
\end{center}
\caption{
Reaction densities normalized to $zHs$ for $N_1\to S\ell\Phi$ decays
(red solid lines), $s$-channel $\bar S N_1
\leftrightarrow \ell \Phi$ scatterings (green dashed lines), and
$t,u$-channel scatterings in the point-like approximation (blue dotted lines).
Left panel: $\tau$ flavor. Right panel: electron flavor.
}
\label{fig:fig4}
\end{figure}
The left panel in fig.~\ref{fig:fig4} refers to the decay and scattering rates
involving the $\tau$-flavor which, in our example, is the flavor most strongly
coupled to $N_1$, and which thus suffers the strongest washout. It is worth
noticing that, since in this model scatterings are not
suppressed by additional coupling constants with respect to the decays, the
decay rate starts dominating the washouts only at $z\gtrsim 1$. The right panel
in fig.~\ref{fig:fig4} depicts the reaction rates for the electron flavor,
which is the most weakly coupled one, and for which the strong washout condition
eq.~(\ref{eq:strong-washout}) is essentially ensured by sizeable $s$-channel
scatterings. Scatterings and decay rates for the $\mu$-flavor are not shown,
but they are in between the ones of the previous two flavors.
\begin{figure}[t]
\begin{center}
{\includegraphics[width=6.5cm,height=5.5cm,angle=0]{total-rates.eps}}
\hspace{5mm}
{\includegraphics[width=6.5cm,height=5.5cm,angle=0]{Yel-mu-tau-total.eps}}
\end{center}
\caption{Left panel: the total washout rates for each lepton flavor
normalized to $zHs$ as a function of $z$. Right panel: the
evolution of the absolute value of the flavored density asymmetries and of
the lepton number asymmetry (black solid line). The flavor CPV asymmetries
are $\epsilon_{1e} = -4.7\times 10^{-4}$, $\epsilon_{1\mu} = -1.9\times
10^{-4}$ and $\epsilon_{1\tau} = 6.6\times 10^{-4}$. The final values of
the asymmetry densities (at $z\gg 1$) are $Y_{\Delta_{\ell_e}} =-7.1\times
10^{-10}$, $Y_{\Delta_{\ell_\mu}} =-0.3\times 10^{-10}$,
$Y_{\Delta_{\ell_\tau}} =0.2\times 10^{-10}$.
}
\label{fig:fig5}
\end{figure}
The total reaction densities that determine the washout rates for the
different flavors are shown in the first panel in figure~\ref{fig:fig5}. The
evolution of these rates with $z$ should be confronted with the evolution of
(the absolute value of) the asymmetry densities for each flavor, depicted in
the right panel. Since, as already stressed several times, PFL
is defined by the condition that the sum of the flavor CPV asymmetries vanishes
($\sum_j \epsilon_{1j}=0$), it is the hierarchy between these washout rates
that is ultimately responsible for generating a net lepton number
asymmetry. In the case at hand, the absolute values of the flavor CPV
asymmetries satisfy the condition $|\epsilon_\mu| < |\epsilon_e| <
|\epsilon_\tau|$, as can be inferred directly from the fact that at $z< 0.1$,
when the effects of the washouts are still negligible, the asymmetry densities
satisfy this hierarchy. Moreover, since $\epsilon_{\mu,e} < 0$ while
$\epsilon_\tau>0$, initially the total lepton number asymmetry, which is
dominated by $Y_{\Delta_{\ell_\tau}}$, is positive. As washout effects become
important, the $\tau$-related reactions (blue dotted line in the left panel)
start erasing $Y_{\Delta_{\ell_\tau}}$ more efficiently than what happens for
the other two flavors, and thus the initial positive asymmetry is driven
towards zero, and eventually changes sign around $z=0.2$. This change of sign
corresponds to the steep valley in the absolute value $|Y_{\Delta L}|$ that is
drawn in the figure with a black solid line.
Note that when all flavors are in the strong washout regime,
as in the present case, the condition for the occurrence of
this `sign inversion' is simply given by
$ {\rm max_{j\in e,\mu}}
\left(|\epsilon_j|/|\tilde\lambda_{1j}|^2\right)
\gtrsim \epsilon_\tau/|\tilde\lambda_{1\tau}|^2$.
From this point onwards, the
asymmetry remains negative, and since the electron flavor is the one that
suffers the weakest washout, $Y_{\Delta_{\ell_e}}$ ends up dominating all the
other density asymmetries. In fact, as can be seen from the
right panel in fig.~\ref{fig:fig5}, it is $Y_{\Delta_{\ell_e}}$
that determines to a large extent the final value of the lepton asymmetry
$Y_{\Delta L}=-7.2\times 10^{-10}$.
A few comments are in order regarding the role played by the $F_a$ fields.
Even if $M_{N_1}\ll M_{F_a}$, at large temperatures ($z\ll 1$) the tail of the
thermal distributions of the $N_1,\,S$ and $\Phi$ particles allows the
on-shell production of the lightest $F$ states. A possible asymmetry generated
in the decays of the $F$ fields can be ignored for two reasons: first because
due to the rather large $h$ and $\lambda$ couplings $F$ decays occur to a good
approximation in thermal equilibrium, ensuring that no sizeable asymmetry can
be generated, and second because the strong washout dynamics that
characterizes $N_1$ leptogenesis at lower temperatures is in any case
insensitive to changes in the initial conditions.
In conclusion, it is clear from the results of this section that the model
encounters no difficulty in generating the
Cosmic baryon asymmetry at a scale of a few TeV. Moreover, our analysis
provides a concrete example of PFL, and shows that the condition
$\epsilon_1\neq 0$ is by no means required for successful leptogenesis.
\section{Conclusions}
Variations of the standard leptogenesis picture can arise from the
presence of an additional energy scale different from that of lepton
number violation. Quite generically the resulting scenarios are
expected to yield qualitative and quantitative changes in
leptogenesis. Here we have considered what we regard as the simplest
possibility, namely the presence of an abelian flavor symmetry
$U(1)_X$. We have described two possible scenarios within this framework
and have explored their implications for leptogenesis.
We have found that, as long as the abelian flavor symmetry energy
scales remain above the lepton number violating scale, neither
qualitative nor quantitative differences from the standard
leptogenesis model arise. Conversely, if the $U(1)_X$ is unbroken
during the leptogenesis era and the messenger fields $F_a$ are too
heavy to be produced on-shell in $N_1$ decays, {\it purely flavored
leptogenesis} at the TeV scale results. By solving the corresponding
BE we have shown that within this scenario the non-vanishing
CPV lepton flavor asymmetries, together with the lepton and flavor
violating washout processes occurring in the plasma, provide the
necessary ingredients to generate the Cosmic baryon asymmetry.
Accordingly, if new energy scales are present below the leptogenesis
scale, as might be expected, the interplay between these scales
could have a quite interesting impact on leptogenesis.
\section*{References}